Use Case Illustration

AI-Assisted Surgical Decision-Making Without Contemporaneous Decision Records

Why this document exists

This use case examines a publicly reported incident involving AI-assisted surgical systems. It does not assess clinical correctness, assign liability, or offer regulatory judgment. Its purpose is narrower and more important: to illustrate how harm involving AI was compounded by the absence of contemporaneous, human-readable decision records, and how that absence was neither necessary nor inevitable.

The failure described here is not a failure of artificial intelligence. It is a failure of institutional memory.

The situation as it now exists

AI systems are increasingly embedded in surgical and clinical environments. They assist with visualization, navigation, anatomical identification, monitoring, and risk assessment. These systems are rarely introduced through a single, explicit governance moment. Instead, they arrive through procurement, vendor training, and gradual normalization within practice.

Over time, reliance increases. Scrutiny decreases. Authority becomes implicit. When AI influences care, it often does so quietly, shaping attention and judgment without leaving a durable trace of how much it mattered or who decided to trust it.

When something goes wrong, institutions are left trying to answer questions that should never have had to be answered retrospectively.

What the public reporting revealed

According to public reporting, AI-assisted systems were used in surgical contexts where misidentification or incorrect guidance occurred and patient harm resulted. Afterward, it was not possible to clearly determine how influential the AI system had been, whether its use exceeded its intended role, who explicitly authorized reliance on it, or what limitations were understood at the time.

Statements following the incident emphasized complexity, training issues, or diffuse responsibility. What was missing was a contemporaneous record showing how the decision to rely on AI was made when it mattered.

That absence is the point.

What actually failed

The failure was not model accuracy, system design, or clinician competence. The failure was the absence of a simple, durable record capturing who decided to rely on an AI system, under what conditions, with what understanding of risk, and until when that reliance was valid.
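
What such a record needs to capture fits in a handful of fields, none of them technical. The sketch below, in Python, is illustration only; every field name and value is hypothetical, not a prescribed AI OSI schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RelianceDecisionRecord:
        """Hypothetical sketch: the minimum a durable reliance record might hold."""
        record_id: str     # stable identifier so later records can point back here
        system: str        # which AI system is being relied on
        decided_by: str    # the named human who decided to rely on it
        decided_on: date   # when that decision was made
        conditions: str    # the conditions under which reliance applies
        known_risks: str   # the understanding of risk at the time of the decision
        valid_until: date  # the date after which reliance must be re-decided

    # A hypothetical entry, written at the moment the decision is made:
    record = RelianceDecisionRecord(
        record_id="rdr-0001",
        system="surgical guidance assistant, vendor model v2",
        decided_by="Dr. A. Example, attending surgeon",
        decided_on=date(2026, 1, 15),
        conditions="advisory only; identification confirmed by the surgeon",
        known_risks="misidentification under atypical anatomy",
        valid_until=date(2026, 7, 15),
    )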

Without such records, accountability collapses into reconstruction. Reconstruction invites hindsight bias. Hindsight bias invites adversarial narratives. And institutional memory erodes precisely when it is needed most.

This is not a documentation problem. It is a governance problem.

How AI OSI would have changed what could be known

AI OSI does not prevent error. It prevents silence.

If a standing record had existed defining how the AI system was approved for use, what it was not approved to do, and who owned that approval, silent scope drift would have been visible before harm occurred.

If a plain-language decision record had been created at the moment of care, it would have captured how influential the AI system was, whether human override was real or symbolic, and what risks were known if the system was wrong. That record would have frozen reality before outcomes rewrote it.

If a first-capture incident record had been created immediately after the unexpected outcome, it would have preserved what was known before legal review, public messaging, or vendor alignment reshaped the narrative.
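
Taken together, the three records form a short chain, each pointing back to the one before it. A sketch of that chain, under the same caveat that every name and field here is hypothetical:

    from dataclasses import dataclass
    from datetime import date, datetime

    @dataclass
    class StandingApprovalRecord:
        """How the system is approved to be used, what it is not, and who owns that."""
        approval_id: str
        system: str
        approved_uses: list[str]   # what the system is approved to do
        excluded_uses: list[str]   # what it is explicitly not approved to do
        owner: str                 # the named human who owns the approval
        review_by: date            # when the approval must be revisited

    @dataclass
    class FirstCaptureIncidentRecord:
        """What was known immediately after the outcome, before anything reshaped it."""
        incident_id: str
        approval_id: str           # points back to the standing approval
        decision_record_id: str    # points back to the point-of-care decision record
        captured_at: datetime
        captured_by: str           # the named human writing the first account
        what_was_known: str        # plain-language account, frozen at first capture

The format is incidental; the same fields could just as well live on a signed paper form.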

None of this requires deep technical insight. It requires naming responsibility while it still exists.

Why this was not inevitable

Nothing about this incident required new regulation, better models, or banning AI from surgery. Avoiding the silence that followed required no future law and no novel technology.

It required a small number of boring records, created at the right time, by named humans, acknowledging AI’s role in decisions that could not be undone.

The absence of those records did not cause the harm. It caused the inability to understand it cleanly, defend it honestly, or learn from it responsibly.

What this illustrates more broadly

This failure mode is not unique to healthcare. It appears wherever AI influences decisions that affect people’s bodies, livelihoods, finances, rights, or futures. Employment systems, financial scoring models, and public-sector automation all fail in the same way when decisions are not recorded as decisions.

The question is always the same: who decided to trust the system, and where is that written down?

Conclusion

AI did not make this situation unavoidable. Silence did.

AI OSI exists to replace silence with memory, not after the fact, but at the moment decisions matter. That difference is small, unglamorous, and decisive.

It is not a promise of safety. It is a refusal to forget.

References

Reuters, "AI enters operating room; reports arise of botched surgeries, misidentified body parts," February 9, 2026.
https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/