On medical AI in 2023

“To study persons is to study beings who only exist in, or are partly constituted by, a certain language.” -- Charles Taylor, Sources of the Self

Large language model-based AIs (LLMs) are the epitome of what can be constituted by language alone. They easily take isolated linguistic philosophy to its absurd extreme.

Unfortunately, this creates a disconnect between statements that are coherent within the context of the LLM itself and the statements we actually want: ones that appropriately address a scientific or clinical reality in the world outside the LLM. The result is dialog replies that currently make medical advisory AI impossible to trust.

Current LLM-based AIs are masters of what Harry G. Frankfurt called “bullshit.” Until an AI can distinguish fictional or obsolete diagnoses and treatments within its model from those that actually help the patient, and eliminate the former, it cannot be trusted with any unsupervised role in patient care.

Risks for impaired post-stroke cognitive function

In a preprint posted to the medRxiv preprint archive this month, I found a chart review of patients with stroke to determine factors (other t...