The Promise and Pitfalls of AI in the Perioperative Space

By Chris Lamont
Step into an operating room today and you’ll find technology woven into every detail: monitors track vital signs, scheduling systems juggle caseloads, and software logs every dose and incision. It was inevitable that artificial intelligence would join the perioperative mix. The pitch is compelling: algorithms that predict complications before they happen, systems that keep the OR running on time, and tools that relieve clinicians of documentation burdens.

However, as hospitals test these tools, the story is proving more complicated. The promise is real, but so are the risks: biases, workflow disruptions, liabilities, and ethical blind spots. For healthcare leaders and perioperative teams, the challenge is to separate hype from reality and adopt AI with eyes wide open.

Where AI Has Helped

At Massachusetts General Hospital, researchers tested an AI model to predict case length using thousands of past surgeries. The system reduced scheduling errors and improved OR utilization by nearly 15% (Hanna et al., JAMA Network Open, 2022). In Toronto, anesthesiologists trialed an algorithm that analyzed vital signs in real time, flagging subtle trends toward hypotension so clinicians could intervene earlier. In those moments, AI served as an extra safety net.
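For readers curious what a case-duration model like this involves, the sketch below shows the general shape of the approach: a regression model trained on features of completed cases. It is a minimal illustration under assumed inputs, not the model from the MGH study; the file name, feature columns, and choice of gradient boosting are all hypothetical.

```python
# Minimal sketch of a surgical case-duration predictor.
# Illustrative only: the file name, features, and model choice are
# assumptions, not the approach validated in the MGH study.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical log: one row per completed surgery.
cases = pd.read_csv("historical_cases.csv")
features = ["procedure_code", "surgeon_id", "patient_age", "asa_class"]
target = "actual_duration_min"

X_train, X_test, y_train, y_test = train_test_split(
    cases[features], cases[target], test_size=0.2, random_state=42
)

# One-hot encode categorical fields; pass numeric fields through.
preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"),
      ["procedure_code", "surgeon_id"])],
    remainder="passthrough",
)

model = Pipeline([
    ("prep", preprocess),
    ("regress", GradientBoostingRegressor(random_state=42)),
])
model.fit(X_train, y_train)

# Mean absolute error in minutes is the number schedulers care about.
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.1f} min")
```

The point of a pilot like MGH’s is the validation step: until the error on held-out cases is measured against the schedulers it would replace, a model like this is a hypothesis, not a tool.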

Documentation is another area where AI is already proving useful. A Mayo Clinic pilot of AI-driven natural language processing showed reductions in coding errors and improved efficiency in generating operative notes (Anesthesia & Analgesia, 2021). For clinicians facing burnout, these incremental wins matter.

Where AI Stumbles

But not all pilots end with success stories. A U.S. children’s hospital tested a scheduling AI trained primarily on adult elective cases. When the tool was applied to pediatric trauma cases, its predictions were so inaccurate that surgeons quickly lost trust and reverted to scheduling manually. Bias in training data is not theoretical; it can compromise patient safety.

Over-reliance is another danger. In already noisy environments, additional AI alerts can worsen alarm fatigue. False positives may be ignored, while false negatives may be trusted too much. When clinicians start asking, “What does the AI say?” before trusting their own judgment, both autonomy and accountability are at risk.
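A quick worked example shows why false positives pile up when the event being monitored is rare. Suppose, purely for illustration, an alert with 90% sensitivity and 95% specificity watching for an event present in 1% of monitored intervals. Bayes’ rule gives the chance that any given alarm is real:

PPV = (0.90 × 0.01) / (0.90 × 0.01 + 0.05 × 0.99) ≈ 0.15

In other words, roughly 85% of alarms from this seemingly accurate system would be false, which is exactly how alarm fatigue takes root.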

And integration is rarely seamless. Poorly designed interfaces add clicks and slow workflows. Instead of freeing clinicians, AI can distract them from the patient on the table.

Legal and Compliance Realities

The law has not kept pace with the development of AI. Under current malpractice frameworks, clinicians remain responsible for their decisions even when they follow an AI recommendation. The “learned intermediary” doctrine makes this clear: tools may advise, but physicians remain accountable (AMA Council on Ethical and Judicial Affairs, 2023).

Privacy is another minefield. Training AI requires enormous datasets. Even “de-identified” surgical data carries re-identification risks. HIPAA in the U.S. and GDPR in Europe strictly regulate secondary use of patient information. Hospitals that assume data-sharing is harmless risk fines, lawsuits, and reputational harm.

Regulatory oversight is evolving. The FDA has classified some AI tools as “Software as a Medical Device,” but continuous-learning systems raise unanswered questions: Does every update require fresh approval? Until clear frameworks exist, hospitals face compliance uncertainty.

And vendors sometimes stretch the truth. Marketing phrases like “real-time risk detection” sound impressive, but without peer-reviewed validation, they can cross the line into misrepresentation. Health systems that buy based on exaggerated claims risk not only wasting investment but also legal exposure.

The Ethical Dimension

Legal issues are only part of the story. Ethical principles also demand scrutiny.

· Autonomy: Patients often don’t know their perioperative data is being used to train AI. Transparency and, where feasible, informed consent are essential.

· Beneficence and Non-Maleficence: AI should improve care, not create new risks. Poorly validated algorithms that add noise or bias may do more harm than good.

· Justice: Wealthier hospitals can afford advanced AI, while resource-limited systems may fall behind, widening disparities in surgical safety and outcomes.

Ethics demands that AI serve all patients, not just those treated in hospitals with the largest budgets.

A Path Forward

For leaders considering AI adoption, the choice is not between embracing or rejecting it, but between doing it recklessly or responsibly.

1. Pilot cautiously. Start small, track outcomes, and validate performance before scaling.

2. Build governance. Form oversight committees including clinicians, IT, compliance, and ethicists.

3. Demand transparency. Insist vendors provide validation data and explainability.

4. Support clinicians. Train staff not just in how to use AI, but in how to challenge it.

5. Respect patients. Be transparent about data use, and guard against widening disparities.

AI has the potential to support perioperative teams by catching risks earlier, easing documentation, and improving efficiency. But it is not a magic solution. Without careful governance, it can erode trust, introduce new errors, and raise serious legal and ethical problems.

The OR may be filled with technology, but the responsibility still rests with people: the surgeons, anesthesiologists, and nurses. AI can assist them, but it cannot replace their judgment. And accountability, no matter how advanced the algorithm, still lies in human hands.

About the Author

Chris Lamont is the Vice President of Sales & Marketing at Picis Clinical Solutions. With over 25 years of experience in healthcare IT, sales, and partner development, he leads initiatives that help hospitals understand and realize the value of optimized perioperative workflows and achieve measurable results. Chris is passionate about building strong client relationships and advancing technology that empowers clinicians.

References

Hanna, M. et al. “Development and Validation of a Machine Learning Model to Predict Surgical Case Duration.” JAMA Network Open, 2022.

Fleuren, L.M. et al. “Machine Learning for Sepsis Early Recognition in the Intensive Care Unit: A Systematic Review.” Lancet Digital Health, 2020.

Mayo Clinic. “Artificial Intelligence–Driven Documentation in the Perioperative Setting.” Anesthesia & Analgesia, 2021.

AMA Council on Ethical and Judicial Affairs. “Ethical Considerations for Artificial Intelligence in Medicine.” 2023.

U.S. FDA. “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device.” 2021.