I Half-Jokingly Promoted ChatGPT to My Primary Care Provider
A second brain on the desk, stitching together what the system leaves fragmented.
I’ve been joking that I promoted ChatGPT to be my primary care provider. It’s only half a joke.
My real doctor still signs the refills, orders labs, and sends referrals. Specialists still run the scans and read the imaging. I’m not confusing a language model with a physician. The healthcare system still controls the gates. But more and more, I’m the one walking into those appointments with the context, the continuity, and the strategy.
What changed was that I stopped being a passive patient.
“A tentative impression gets copied forward. A vague concern hardens into chart language. One clinician inherits a summary from another, and by the time a third repeats it, a maybe starts to sound like a fact.”
I no longer want to show up to medical appointments as a container for whatever interpretation happens to be circulating that day. I also do not want to nod along, leave, and then reconstruct later what was said, what was implied, and what was actually grounded in evidence.
I got tired of how much modern medicine depends on fragmentation, haste, and memory gaps. If your case is simple and your clinicians are careful, you may never notice. Once it gets complicated, you start to see the seams.
A tentative impression gets copied forward. A vague concern hardens into chart language. One clinician inherits a summary from another, and by the time a third repeats it, a maybe starts to sound like a fact. Once a label gets into the record, it starts steering everything that follows. The burden is quietly shifted to the patient to notice the slippage, question it, and try to correct it.
Patients are expected to absorb this process without staring at it too closely. Visits are rushed. The context is scattered across portals, notes, imaging reports, and whatever the last clinician chose to emphasize. The conclusion in front of you may be carrying more confidence than the evidence deserves. What travels best is usually the version that is easiest to summarize, not the version that is most careful.
“Trying to do first-pass thinking in a rushed appointment is a good way to leave with someone else’s summary instead of your own understanding.”
ChatGPT gave me a place to slow down, lay the record out, and think before I walked into the room.
I could line up lab results, imaging reports, symptom timelines, medication history, and the contradictory claims around any long-running issue. I could ask the annoying follow-up questions: what does this result support, what does it not support, which finding deserves the most weight, where is someone inferring beyond the data, and what do I need to ask at the next visit if I want the conversation back on firmer ground?
Trying to do first-pass thinking in a rushed appointment is a good way to leave with someone else’s summary instead of your own understanding.
One of the clearest examples for me was a liver workup. At one point, my chart was carrying language suggestive of advanced fibrosis. Serious chart language reorganizes everything around it. You read each new result through that lens. You hear clinicians differently. The whole situation starts to feel settled long before it is.
Laid out side by side, the record contradicted the certainty of the chart. The tests did not align. The imaging findings were overstated. The labs pointed elsewhere. And some providers followed an inherited narrative rather than the full record.
I used ChatGPT to help sort through that because I wanted help doing the work the system was not going to do for me. It helped me line up the evidence, compare findings across time, separate strong claims from weak ones, and see where the story stopped matching the data.
“The problem was practical and immediate. How do I get through this day without being stupid?”
The questions I brought into the room got better. Instead of defaulting to “So should I be worried?”, I could ask what a given test could establish, what remained ambiguous, and why one finding was being weighted more heavily than another. That changed the structure of the visit. I was no longer there just to receive a conclusion. I was there to see whether the reasoning held up.
The difference showed up in one visit. A provider started describing my imaging in a way that was simply wrong. It was not a reasonable difference in interpretation. I knew the report. I knew what it said, and I knew what it did not say. My honest impression was that she either had another case mixed into mine or had skimmed the chart and let habit fill in the blanks.
Without that preparation, an error like that arrives wearing institutional authority. It comes with a white coat, a confident tone, and the expectation that the patient will fall in line. Most people do not have the time or footing to challenge it in real time. Most people lose that battle before it even starts.
I pushed back.
Later, I noticed that across three visits, total staff interaction time with me added up to thirty-seven minutes. Twenty-five of those were during the FibroScan itself. Thirty-seven minutes across three visits tells you what the system is built to do. It is built to move people through. Continuity is secondary, and careful reasoning survives only when an individual clinician makes room for it.
The useful part of ChatGPT is the running thread it gives me. I can hold together portals, chart notes, lab interfaces, imaging reports, and hurried conversations in one place. I can pressure-test interpretations before they harden into chart history and turn a pile of disconnected facts into something I can bring into an appointment.
“It makes me a harder patient to rush or manage on autopilot.”
It is just as useful between visits, and maybe more so. A surprising amount of health management is just dealing with badly timed, unglamorous problems while you are tired and trying to keep the day from getting worse. One of the clearest examples for me happened on a trip to Mexico.
I had a long van tour to Teotihuacan booked in the heat while dealing with foodborne illness and a sciatic flare radiating into my foot and big toe. The problem was practical and immediate. How do I get through this day without being stupid? Stay hydrated without worsening the GI side, survive the van ride without feeding the sciatica, keep walking without turning the foot into a bigger problem, and take enough over-the-counter medication to function without making a bad decision out of frustration.
That is the kind of problem medicine leaves you alone with all the time. It is mundane, time-sensitive, and too small to earn much formal attention, which is why people end up managing it half-blind.
ChatGPT helped in the least glamorous way, which was also the most useful. It helped me keep making decent decisions while hot, tired, uncomfortable, and far from home. I used it to think through electrolyte options for a long day in the sun without pouring gasoline on the GI problem. It reminded me to break up the van ride and move around so I was not marinating in sciatic irritation for hours. Once the toe became a real issue, it helped me think through shoe adjustments and ways to offload pressure. And when my driver handed me something I heard as “Tegra,” it helped me reason through what that medication probably was before I decided to take it.
The same pattern shows up across the rest of my care. Specialist visits go better when I have already pressure-tested the plausible explanations. Lab trends make more sense when I am not staring at isolated red numbers in a portal and spinning out. Symptoms become patterns to test instead of reasons to panic. When I’m trying to understand whether a flare is tied to a specific food, a pattern of intake, a medication change, or something broader, it helps me keep the variables straight. I get a better sense of which changes are worth trying, which are noise, and what kind of follow-up would tell me whether something is improving.
Some of that has translated into measurable changes. My liver enzymes moved into a healthy range for the first time in years. Platelets improved too. I’m not claiming a single clean causal story here. Bodies are more complicated than that. But being more organized, less reactive, and more consistent clearly helped.
For me, the near-term value of AI in healthcare is simple. It lowers the cost of being an informed patient.
You can prepare before the appointment, inspect the reasoning during it, and recover the thread afterward. You have somewhere to take the chart note that sounds too certain, the lab result nobody explained, the symptom cluster that does not quite fit, or the question that shows up after the clinic is closed.
What it gives patients, at minimum, is more room to push back.
I no longer treat the first interpretation I hear in a medical setting as the final word, and I no longer assume that official language automatically means careful language. The system is busy, uneven, and sometimes sloppy. Pretending otherwise does not protect patients. It just makes them easier to rush.
My doctors continue to be central to my care. I still rely on their access, training, and ability to intervene. But patients like me no longer have to accept clinical interpretation as a one-way transaction.
So yes, I joke that ChatGPT is my primary care provider.
What I really mean is that it has become the place where I do the thinking that rushed appointments do not leave room for. It is where I go to lay the evidence out, notice what does not add up, and figure out what I need to ask next.
It makes me a harder patient to rush or manage on autopilot.