The new first opinion: AI and the clinical judgment gap
April 29, 2026
Tags
AI IN HEALTHCARE
There’s a shift happening in healthcare we’re not fully acknowledging yet.
AI isn’t just supporting care behind the scenes anymore. For many people, it’s quietly become the first place they go when something feels off.
More often than not, the journey looks like this: a symptom appears, concern starts to build, and the care path begins with a question typed into a screen.
A new Gallup poll shows nearly six out of ten Americans use AI tools or chatbots to research health concerns before visiting a clinician, and the behavior extends beyond any single platform. An April 2026 analysis of 500,000 health-related conversations with a widely used AI tool found that nearly one in five involved discussion of personal symptoms or conditions.
That means decisions about whether to wait, worry, or seek care are being shaped before a provider is ever involved. And that reality is worth paying closer attention to.
Why patients turn to AI first to answer their healthcare questions
I’m a big believer in AI and its potential to be a positive force in specific areas of healthcare. At the same time, it’s hard not to feel pulled in different directions. Some days, the optimism comes easily. Other days, the concerns feel just as real.
Healthcare is difficult to navigate, even when everything is going well. Many people aren’t engaging from a calm, preventive mindset. They show up anxious, uncomfortable, or worried that something might be wrong.
It’s in those moments that access to clear, understandable information matters more than we realize.
When AI confidence influences clinical decisions
At the same time, the very thing that makes AI helpful is also where risk begins to surface.
In healthcare, AI doesn’t have to be wildly wrong to cause harm.
It just has to sound convincing.
Large language models are designed to be clear, calm, and coherent. When someone is trying to interpret symptoms or decide what to do next, that tone carries a lot of weight. Most people aren’t scrutinizing the answers or questioning the output; they’re trying to make sense of how they feel and find direction.
We’re already seeing situations where guidance sounds reassuring but isn’t clinically appropriate, especially in more complex or ambiguous cases. Sometimes, symptoms that warrant attention are framed in a way that delays care. Other times, people are pushed toward higher levels of care than necessary.
Recent independent research from Mount Sinai underscores this risk. In a physician-designed evaluation of AI medical triage, the system under-triaged more than half of the cases that clinicians determined required emergency care, particularly in scenarios where the danger wasn’t immediately obvious and clinical judgment mattered most.
The issue isn’t just whether an answer is technically right or wrong. It’s what that answer leads to. Does it create false confidence? Does it delay escalation? Does it send someone down a path that’s harder to correct later?
The gap between health information and patient outcomes
Even when the initial guidance is sound, healthcare outcomes are shaped by what happens next.
This part of the care journey is easy to overlook. Patients leave visits with information, but not always with clarity. They remember pieces of the conversation, but not all of it. The questions they didn’t think to ask often show up later, between visits, when no clinician is in the room.
That’s increasingly when people turn back to AI. Not to start over, but to make sense of what they’ve already been told and decide what to do next.
AI can be helpful in that moment. It makes information easier to access and easier to revisit. But an answer alone doesn’t guarantee understanding, and understanding doesn’t always translate into clear next steps.
That’s the gap. And it’s where things can quietly go wrong, even when no single step feels incorrect.
Rethinking “human in the loop” in AI-driven healthcare
This is where the conversation around “human in the loop” needs to evolve.
Too often, it’s treated as something added at the end of an experience. A safeguard for edge cases. A final check before action is taken.
In reality, it needs to be part of how care begins.
Healthcare depends on connection, context, and continuity.
Clinical judgment isn’t just about knowing the right answer. It’s about recognizing uncertainty, understanding what’s missing, and knowing what needs to be ruled out before the next step is safe.
AI models are trained to respond and keep people engaged. Clinicians are trained to look beyond the surface and ask what else might be going on. Those are very different capabilities, and we’re not close to a point where one replaces the other.
The goal isn’t to slow things down by adding friction; it’s to accelerate understanding while ensuring there’s a clear, clinically grounded path forward. Sometimes that means real-time clinician involvement. Other times, it’s follow-up, review, or escalation after the interaction.
What matters most is that responsibility for what happens next is clear.
Designing the next phase of AI-enabled care
AI is already part of how care begins. The more important question now is how we design what follows. Good AI in healthcare isn’t defined by how much it can handle on its own. It’s defined by how well it supports the right decisions at the right time, and how thoughtfully it steps aside when clinical judgment matters most.