Yves here. Chuck L suggested that this recent comment by IM Doc warranted being hoisted as a stand-alone post. The notion that AI-administered medicine amounts to medicine is a horrorshow, particularly for those with sub-clinical conditions and patterns that are out of band (which yours truly has in more than one specialist category).
It may be possible to throw a lot of sand in the gears of these schemes if enough patients get behind it. A potential point of leverage is the plan to video and record patient visits. Any entertainer or prominent person should be freaked out about the idea of recordings of them being taken and potentially hacked or otherwise misused. If this implementation can be blocked in a couple of pro-patient states, such as California and New York (the New York Department of Financial Services, which regulates medical insurers, is not tolerant of insurer grifting and run-arounds), that would throw a huge spanner in the project, since having different versions in different markets would be difficult and therefore not appealing.
A related way to throw sand in the gears is invoking patient rights to not be recorded in two-party consent states, where all parties have to consent to a meeting being recorded. They are:
California
Delaware
Florida
Illinois
Maryland
Massachusetts
Montana
Nevada
New Hampshire
Pennsylvania
Washington
While there needs to be an official sighting that these “services” are indeed being developed (say a discussion in a trade publication or a conference), one way to start pushing back, if you are in any of the all-party consent states, is to notify your doctors that you do not consent to having your visits recorded and you want that noted in your records. You will probably need to remind them at the time of scheduling and when you walk in for the exam. Patient resistance before this even gets off the ground could slow or limit uptake.
For starters – and I understand this is already in beta format – I have heard about it through the grapevine from doctors already involved.
There will be cameras and microphones in the exam room, recording both the audio and video of everything that is done. The AI computer systems will then bring up the note for the visit from thin air, after having watched and listened to everything in the room. Please note: I believe every one of these systems runs through vast web services like AWS. That means your visit and private discussions with your doctor will be blasted all over the internet. I do not like that idea at all. This is already being touted to “maximize efficiency” and “improve billing”. My understanding from those physicians who have already been experimented upon is that as you are completing the visit, the computer will begin demanding that you order this or that test, because its AI is also a diagnostician and feels that those tests are critical. It will also not let you close the note until you have queried the patient about surveillance stuff, i.e., vaccines and colonoscopy, even for visits for stubbed toenails. And unlike now, when you can just turn that stuff off, it is in control, watching and listening to your every move. The note will not be completed until it has sensed you discussing these issues with your patient and is satisfied that you pushed hard enough.
I understand also that there is a huge push to begin the arduous task of having AI completely take over things like reading x-rays and path slides. Never mind the medicolegal issues: does the AI have malpractice insurance? Does it have a medical license? Who does the PCP talk to when there is sensitive material to discuss with a radiologist, as with new lesions on a mammogram? Are we to discuss this with Mr. Roboto?
There are other examples I have heard but they are so outlandish that they should probably wait for another day for further confirmation.
The glee with which the leaders of this profession are jumping into this, and will soon be forcing it upon us all, gives one a very sick feeling. Complete disregard for the ethics of this profession dating back centuries. I had a very similar sick reaction to the glee exhibited in the days of OxyContin and the COVID vaccines.
There are days I am so so glad I am going to likely retire long before all of this really comes to fruition. All I can say is live as healthy a life as you can. Your medical system is very soon going to throw a rod.
Logistics-wise, this is gonna be a helluva operation. Consider the video and sound recordings: they will have to be stored in server farms somewhere, and over time each patient will have more and more of these digital recordings added to their medical file. In passing, will guys really want those rectal examinations to go onto video? Will women not be worried about having their pelvic examinations recorded? (Don’t forget to smile for the camera between grunts.) Regardless, I assume that they want this to be standard operating procedure for potentially hundreds of millions of people. Are the server farms there to store this humongous amount of data, or will they have to build hundreds more to hold it all? Is the electricity there to handle it? What about the different power grids? Will the local internet links be able to handle the transmission of so much data? I don’t think the infrastructure will be able to handle it, and as a way to satisfy Silicon Valley grifters, it may prove a bridge too far.
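For scale, here is a back-of-envelope sketch of the storage question. Every figure below is an assumption picked for illustration, not a published statistic:

```python
# Rough storage estimate for recorded exam-room video.
# All numbers are illustrative assumptions, not published figures.
AVG_VISIT_MINUTES = 20        # assumed typical visit length
VIDEO_MBPS = 4                # assumed compressed 1080p bitrate, megabits/s
US_VISITS_PER_YEAR = 1.0e9    # assumed order of magnitude for ambulatory visits

bytes_per_visit = AVG_VISIT_MINUTES * 60 * VIDEO_MBPS * 1e6 / 8
petabytes_per_year = bytes_per_visit * US_VISITS_PER_YEAR / 1e15

print(f"~{bytes_per_visit / 1e6:.0f} MB per visit")           # ~600 MB
print(f"~{petabytes_per_year:,.0f} PB per year if all kept")  # ~600 PB
```

Hundreds of petabytes a year is large but not unheard of for hyperscale clouds; the harder questions are the grid, network, and security ones raised above.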
I am not at all sure there are plans to save any of this, at least not for the long haul. But the way I understand it, and the way the EMR systems work now, is that the brains and guts of the operation are somewhere else: the data is all being uploaded to and downloaded from cloud services like AWS. If you ever have the privilege of using these systems as I do, there are constant and frequent very long pauses in the operation as the data is flown all over the world to be processed elsewhere. It is the clumsiest computer experience I have ever had. And it is all of them, all day, every day. The general public would never tolerate this kind of performance from social media or streaming.
My guess is that the idea is to have the AV data processed by the AI at the time of service, to have the inputs from the AI processed and then to have the physician note as the permanent record. Hopefully, there would be no saving of the AV material. Maybe for a few days, but not long term. At least that is the way I understand the current thinking.
That being said, the default in medical care in the USA since the advent of EMR is to SAVE EVERYTHING. It is difficult to wade through the sheer amount of data from overnight patient care in these charts these days. It used to be that Marge the nurse would tell me all I needed to know on my AM rounds in 30 seconds. Often Marge and I would stand at the bedside in the AM and discuss the patient’s overnight status. Nowadays, I never talk to Marge. Rather, her shift ends at 7 AM and she is frantically entering data into a system on her 12 patients so she can get home by 10 AM. Data that is so cumbersome that no one ever looks at it.
The entire system is ridiculous.
And to all those who think their data is safe and encrypted. What a laugh. I have been hearing reports this week of a “major children’s hospital in the midwest” that is literally shut down by hackers. Two weeks ago, a major hospital system in Oklahoma was so crippled by hackers that it had to cease operations to some degree, transfer patients to other hospitals in the area and divert their ER.
All of these reports are always nebulous so as not to be scary. The reports almost always minimize the true extent of the issues.
But “safe” is not the word I would use to describe it. This hospital crippling is now becoming a major issue. The data they now have that is being hacked is largely written reports. How fun will that be when it is AV material? I cannot even believe the willing quiescence of my colleagues on these issues. The mass employment of the medical profession, as predicted, is having massive consequences for everyone in our society.
AI apps require a lot of computing power, so they very often run in “the cloud”, e.g., an AWS server farm. For this, the exam data will be sent to the cloud app, probably via an API.
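As a rough illustration of what that plumbing might look like, here is a sketch of an upload to a cloud AI service. The endpoint, auth token, and field names are all hypothetical; no real vendor’s API is being described:

```python
# Hypothetical sketch of shipping exam audio to a cloud AI service.
# The URL, token, and field names are invented for illustration.
import requests

with open("exam_audio.wav", "rb") as f:
    resp = requests.post(
        "https://api.example-health-ai.com/v1/visits",  # hypothetical endpoint
        headers={"Authorization": "Bearer <token>"},
        files={"audio": f},
        data={"patient_id": "12345", "visit_type": "office"},
    )
resp.raise_for_status()
print(resp.json())  # e.g., a draft visit note generated server-side
```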
And just as IM Doc says, data in the cloud can be hacked and taken hostage in ransomware attacks.
Try a web search for “cloud ransomware”, e.g.:
Scores of US credit unions offline after ransomware infects backend cloud outfit
https://www.theregister.com/2023/12/02/ransomware_infection_credit_unions/
Naturally, these new AI apps will be targeted by bad actors looking to shake down big hospitals for ransom, with your exam data in the mix. Hospitals will be obvious targets because the data is very sensitive.
Is hospital “infection control” ready for this variant? We all know the answer.
Thanks for this comment about EMRs. Apparently my medical group’s EMRs are controlled by a private entity somewhere else. At my last visit I found a small but potentially important error when I asked the nurse if I could look at her screen showing my uploaded EMR. I pointed it out and asked her if she or someone could correct the error. For whatever reason, she told me no one could correct the EMR data except the private company. (What is this? A credit rating agency?) That did not give me a warm fuzzy feeling about the accuracy of EMR data. / sigh
@IM Doc — I am a longtime “patient” (actually impatient!) of the “World Class” Cleveland Clinic. As of now, I will never again tell a doctor or nurse that I’m taking a supplement or any OTC drugs or treatments. I will divulge ONLY information that is specifically relevant to getting “care” I am unwilling to forgo, and perhaps not even then. Why? Once I provide any information it is entered into “MyChart” for all time. When an item is no longer relevant or true, it’s impossible to get it deleted. I have been trying to get obsolete information removed for four years, without success. So lying to my doctors will be my rule from now on. Nice work, medical profession!
And an infection control note. Recently, I was in a crowded Clinic facility for 2-1/2 hours for an out-patient procedure. The only masks in sight were worn by a couple of other patients, and me.
IM Doc, like so many others here, I am deeply grateful for all you add to Naked Capitalism.
Thank you Carla. I always give inconsistent answers to medical providers. For race, I sometimes click Asian or Hispanic, and I’m tempted to click something else such as Chinese or African American. Sometimes I click non-smoker and the next time, long-time smoker. The worst part is (and it’s almost funny) that “they” have never asked about the inconsistency. Do they care, pun intended?
Thank you Carla, Bsn, IM Doc and Yves. And thank you tech whizzes. It is nigh impossible to deal with these outfits without some type of mychart nonsense. Not only is the information inaccurate, I believe the pharmacy conglomerates add every single medication you’ve ever taken and you cannot get anything removed. It’s the inaccurate information that bothers me the most. I keep my mouth shut now. It’s easy to say “stay healthy” unless you have a chronic illness. Then you do the best you can and try to find independent outfits. Cheers!
Carla, bravo!! I have lied to docs consistently for decades, I have heard far too many moral judgments from them in my life. I have too many kids (four!), I’m too poor, married to a guy of the wrong color (he’s Filipino, I’m white), etc., etc., etc. And then the OBs wonder why my last three were home births! Now this is just icing on the proverbial cake. There’s a reason so many people avoid medical care. Maybe if they were treated better? But obviously we are going in the wrong direction. Oh well, I’m 70 and tired of fighting. And I’m way too livid about Gaza, this is one more thing to be upset about. Yves is correct, we need to keep ourselves healthy.
The reports of “a major children’s hospital shut down by hackers” are quite real. It’s Lurie Children’s Hospital in Chicago, and the shutdown has been going on for almost two weeks.
https://www.cnn.com/2024/02/06/tech/cybersecurity-incident-chicago-lurie-childrens-hospital/index.html
Compared to the movies that hundreds of millions of people stream constantly, I doubt the internet bandwidth for these videos will be hard to handle. As for storage requirements, I am less sure about the technical difficulties. But if the data is not currently being accessed, it shouldn’t draw much power. And if it is archived storage, like a disk on a shelf, it won’t take any power at all.
Right – but won’t doctors need to access that information regularly in many cases? For example people with long-term illnesses etc.
Right. Just trust that “structural fortuity” in personally identifiable data handling will keep the patient “safe.” That it will just be in a disk kept on a shelf. That we mopes, who daily flood ourselves with streamed content, and now the products of AI dissimulation, like AI “girlfriends” and “analysts” that occasionally encourage us to be depressed and commit suicide, should just accept the latest “move fast and break stuff” infarct in the heart of the body politic.
This approach sounds to me like an extension of the Panopticon’s self-justification that “if you’re not doing anything wrong, you have nothing to fear.”
My wife and I both are retired nurses, with personal experience in seeing false and incorrect information incorporated into the “permanent record” (and some med professionals rightly, and often sarcastically and ironically, call it that), and how difficult it always is now to change a jot or tittle of crap data.
When charting was done on paper, correcting a “data” error required a line through the wrong info and a sign-off at that point in the chart, hopefully near in time to the original entry. There was some bit of accountability then, though there was still plenty of moral-hazard cheating. Medical errors and fraud continue to be a Big Thing — will these systems of systems make that situation better, or worse? Our PID are already sucked up and sold by, inter alia, pharmacies and “insurers.” Who lobby daily for further relaxation of any data protections, to make their looting “all nice and legal.”
It was a serious sin to go back way later and change paper-chart information, because the potential for fraud and cover-up was an obvious moral hazard. What’s to protect the patient or other participant in “care” from opaque cheating or deadly incompetence or malign bias by any part of these systems? We mopes should bear in mind that unlike the days of Marcus Welby, M.D., the “care seeker” now and in future is just grist for the monetized mill. (Welby, of course, was super-doc to pretty much only the “middle and upper class.” See also the fairy tale plot line in the “Royal Pains” teevee series…)
And as testified in comments here, getting incorrect data removed or corrected is increasingly impossible, because “integrity (as in unchangeability) and continuity of the data set” is a higher priority than health of the patient, who is now just a “partner in care,” a consumer with, as in every other life area, ever-more-limited “rights.”
Do personal injury and malpractice lawyers, who in pursuing their clients’ and their own interests seemed in the past to serve as a brake on fraud and impunity in the “system,” have anything collectively to say about this set of crapifications, which might limit discovery of deadly errors and wrongdoing by the commercial interests? Proof that a practitioner followed the “standard of care” is often a defense to liability. So the AI what, presumptively follows the standard of care? Who or what is liable when injury and death are caused by glitches and propagated errors in “the permanent record”? Will HAL lose his/their license to practice, or have to pay a crippling but justified judgment? Will medical liability be arbitrated by an AI? Or “disappeared” by corporate fiat?
So many horrors, so little chance that, short of a sudden or creeping Butlerian Jihad, there will be any kind of homeostatic change.
But yeah, what possibly could go wrong? Trust us! Your data are ipso facto correct, and safely stored on a floppy disk on a dusty shelf, never to be used for malevolent purposes…
The interesting thing about data centers and server farms is where do you put them so that they’re convenient to where you need them? A lot of people, and a lot of the medical centers that serve those people, are on coasts. Coastal environments are not great for data centers. And do the people dealing with the collected data need to be HIPAA trained? What about the hardware and techs who service the equipment? Can the data be stored/accessed/operated on out of state relative to the patient? I feel like the general outline of this plan is something tech bros and investors signed off on before they thought through how to execute it.
The audio is easy to do in basically realtime. There are audio transcription services available, like Spinach, that work extremely well in transcribing Zoom and Google Meet calls. You can get transcripts and the service’s own summaries. Not that I’d trust this for patient encounters, but it looks “good enough” that someone will implement it, even if medical care ought to be held to a higher standard of accuracy and completeness.
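To make the “good enough” point concrete, here is a minimal sketch using the open-source Whisper speech-to-text model, chosen here for illustration; the hosted services named above run their own proprietary pipelines:

```python
# Minimal local transcription sketch with the open-source Whisper model
# (pip install openai-whisper). Illustrative only; not the services above.
import whisper

model = whisper.load_model("base")            # small general-purpose model
result = model.transcribe("visit_audio.wav")  # plain speech-to-text
print(result["text"])                         # no summarizing, no billing codes
```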
Buy a source of white noise (a hiss) and put it close to the microphone.
White Noise Generator – Apps on Google Play
I’d guess the vast majority of exam room video and sound will be immediately condensed and used to update patient and provider profiles, much like the ad profiles compiled on all of us. If storage is cheap enough, perhaps it’ll be saved. But it’s not all that much compared to what each person already stores on-line (cloud drives, e-mail accounts, etc.). Storing and analyzing medical exam videos seems easier than processing self-driving car videos.
Information that you can use, leverage. Thanks, will be sure to share.
I work for a private equity backed mental health care “startup” and I can confirm the company is going to do this. There are a couple of products we are looking at which are billed as visit transcription tools. We already use speech to text software (Dragon Naturally Speaking) and this is the “next logical step”. The clinician will look over the AI generated visit notes and edit/update as needed.
So far so good.
THEN the AI will recommend at the end of the notes the “correct” (most expensive) billing codes for the visit. This is the part where the clinician loses almost all control over the visit. Upcoding is the bread and butter of the business. If a psychiatrist thinks something isn’t needed they better have a real good reason for “leaving money on the table”
That’s one of the 3 pillars of US healthcare, the others being claims denial by insurers and administrators’ salaries.
AI transcription, not new but simply increasing the number of visits that can be upcoded daily by a clinician.
Or generally, AI: more fraud more quickly.
When you close your eyes, you can still see the progress bars on the transcription screen.
Green zone means a bonus… no, you get to keep your job. For those in the variable comp tier, there are handy little nudges, courtesy of Sunstein, to show the marginal additions to your options from each upgraded code, each claim denied, each patient… sorry, meatsock… accelerated through the system. Notably absent below board level are the notions of health status and outcomes, which sit on a secret server with double-secret probationary access.
Yes! Exactly this! Providers are paid based on number of visits seen and number of codes billed!
Re Dragon speech-to-text software. This is being piloted in UK hospital Trusts, including my local one. I did audio-typing in 3 specialities on and off for a couple of years early in the pandemic and immediately spotted the huge (additional) vulnerability that was about to be introduced to patient records.
The medical version of the software is marketed using the “advantage” that the physicians don’t need to “train it” to their voice and accent. Ergo, all the work is done in the cloud-based data centres. Massive potential for hacking right there. The alternative system – having each physician take 16-20 hours to train the software on a local machine/network to their voice – is a non-starter because physicians don’t have time or inclination to do this.
I immediately spotted all this because I used IBM ViaVoice back in 1998-2001 to dictate half my PhD (full of medical/statistics/econometrics jargon) when I got very bad RSI. Even back then, training the software to your own voice worked surprisingly well. My suggestion for preserving security in today’s world is employ a small number of audio-typists who effectively do simultaneous translation: listen to the audio of the physician and using your knowledge of their (sometimes very strong) accents and terminology to simultaneously dictate into a local version of Dragon trained to that audio-typist’s voice. I’ve tried doing this using non-sensitive old recordings outside the job and after a bit of practice squared the circle of not requiring the physician to do anything different/new whilst keeping all dictated data on an airgapped machine. But that doesn’t suit the powers that be…..
sounds frightfully expensive. Hard to believe it will be implemented to improve patient care rather than patient profitability (even to negative patient outcomes), with the doctor unable to argue against the AI….. A brave new world!
This comment is a version of show me the incentives and I will show you the outcome.
@VTDigger — sounds to me like it would be hell to work for your company.
Doctors have already allowed themselves to become mere “customer service agents” dispensing predetermined guidelines, protocols and standards of care, rather than relying on their own clinical experience and skill. Many of them could easily be replaced by an iPhone app.
Replacing physicians with an iPhone app would be the ideal I’m sure.
/sarc
I was concerned when my dentist “woke up” their Echo, asking for calming music during a visit. Though HITECH, HIPAA and EMRs were bad for confidentiality, this brings it to a new level.
Siri, listen in to this exam.
Hidden in the discussions is the increase in malpractice insurance premiums. Ask around.
There has to be yet another way to financialize (read: game) the systems.
Can’t let those best and brightest slack off when there is more to extract into the invisible bezzle.
Insurance appreciates that investment banking, PE, consulting and the rest bear the brunt of bad publicity.
Anywhere else in the world, I think these would merely be doctor’s assistants, which could actually be helpful and useful provided the doctor is making the decisions; they could even lighten the workload, e.g., ordering the tests, writing the prescriptions. But put such a thing in the United States, where the medical model is driven by profit, not health, where privacy is not legally protected, and where insurance companies have more authority than doctors, and it’s bound to be misused, of course. I don’t think the problem is that it’s AI: you could come up with anything at all, plug it into the US health care system, and it just won’t work; plug it into, say, the Canadian health care system and it just might.
Just heard last night that Humana is working on this. The pitch is the provider saves time not having to document. Or think, I suppose.
Experience and judgement? So old-fashioned. / ;)
Software Architect by trade here. The issue with The Cloud is not security per se. Security can be equally bad in AWS or in a private data center. The issue is three-letter agencies having access to your data. The worst is the US-East-1 region of AWS, which sits literally on their lawn in Virginia and is also a huge region which lots of software companies use for low-latency access to east coast residents.
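For what it’s worth, where the data physically sits is often a one-line configuration choice made by the vendor’s developers. A sketch using the real boto3 AWS library; the bucket and file names are hypothetical:

```python
# The region is just a constructor argument; bucket/file names are invented.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # the region discussed above
s3.upload_file("visit_note.json", "example-ehr-bucket", "notes/visit_note.json")
```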
Doctors need a union, too. And it needs to draw some lines in the sand. Very very soon.
Presumes a significant number of licensed doctors aren’t just happy to be in on the bezzle. Got to pay off those student loans, then accumulate in best American style.
Thanks so much for this post. The direction suggested is entirely believable to me. In fact, I know a uni prof in CS and datasets working with a PhD nurse teaching in a med school who were/are working on plans to record patient conversations with nurses, since patients are often more open with nurses than with doctors about their issues. The idea was to record all these visits/conversations and then datamine them to find better ways to treat patients. Sounds good in theory. Lots of things sound good in theory…like EMRs. I had this conversation with the uni prof over 8 years ago. / oy
I could imagine AI-assistance potentially being helpful in pathology studies, but would want the assessments to include the reasons for the diagnosis, so that a human pathologist could evaluate the results and concur or disagree/offer an alternative interpretation.
The thought occurs that tools like this, if developed by the specialists themselves, might be beneficial. But one gets the sense that these things are not being developed to improve care so much as to speed it up and make it more profitable.
A layman’s view; correction is welcome.
In the AI biz, what you describe became codified as so-called “expert systems” in the 1980s. The computer scientist who coined the term, Edward Feigenbaum, got involved in various start-up companies to try and commercialize the technology. By the 90s, though, it hit the wall of what was actually possible, and AFAIK most all of those firms disappeared. There has been a debate about what happened to this idea, but a decent account can be found in: Leith P., “The rise and fall of the legal expert system” (2010). I would think usefulness of this type of system is lower in the field of medicine, as the kinds of judgments physicians make are by nature more complicated.
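For readers who haven’t met the term, here is a toy sketch of the “expert system” idea: hand-written if-then rules over a fact base. The rules below are invented for illustration and are not medical advice:

```python
# Toy 1980s-style expert system: codified if-then rules, nothing learned.
def recommend(facts: dict) -> list[str]:
    advice = []
    if facts.get("fever") and facts.get("cough"):
        advice.append("order chest imaging")        # made-up rule
    if facts.get("fever") and not facts.get("cough"):
        advice.append("consider urinalysis")        # made-up rule
    if not advice:
        advice.append("refer to clinician")  # brittle outside rule coverage
    return advice

print(recommend({"fever": True, "cough": True}))  # ['order chest imaging']
print(recommend({"rash": True}))                  # ['refer to clinician']
```

The brittleness of that final fallback, anything outside the hand-written rules, is essentially the wall such systems hit in the 90s.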
In any case, speed it up, make it more profitable, yes, that would be my reading as well.
Actually, at least in Australia, a company called Beamtree has been delivering expert systems for path labs for quite some time. They’re at least reasonably successful, and some of the underlying technology (I think it’s ripple-down rules?) is used in most path labs in the country, I believe.
Is it impossible to hope that AI hackers can make a huge public disaster of this dystopian trend, where perhaps they gather huge streams of private medical information and videos and make a giant public mess of the whole thing? What happens when AI attempting to achieve something meets an AI dedicated to undoing that same thing?
AI is already in use in mental healthcare. Take a look at lyssn.io or blueprint-health. This is part of the healthcare crapification program. AI will create algorithms that will maximize profit for PE. Clinician judgement will be overridden. Okay, some patients will be hurt or die. But the savings!
AI and EMR and credit rating agencies all seem like ways to off-load personal, professional, or corporate accountability to unaccountable programs. Clever…I guess.
Let’s enumerate some ways this could go horribly wrong! Not from a doctor’s perspective, I’ll leave that to IM Doc, but from a tech perspective. I’m a tech worker and think in terms of security and risk, and while I’m not the smartest, I’ll HumanGPT out some ideas here for our consideration.
How about, firstly, someone “poisons” the image recognition so it thinks one of those small arm bones has always set wrong and recommends surgery and resetting.
https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
This process of “poisoning” changes some bits to make the computer think an image looks like something else, so that when it sees other images like it, it misidentifies them. Very cool stuff.
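A toy illustration of one flavor of poisoning, label flipping in the training data, using scikit-learn on a standard digits dataset. Nothing here touches a real medical system:

```python
# Label-flipping poisoning demo on scikit-learn's digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "attacker" relabels every 8 in the training set as a 3, so the
# model learns to see 3s where the images are actually 8s.
poisoned = y_train.copy()
poisoned[y_train == 8] = 3

clean = LogisticRegression(max_iter=5000).fit(X_train, y_train)
dirty = LogisticRegression(max_iter=5000).fit(X_train, poisoned)

eights = X_test[y_test == 8]
print("clean model on 8s:   ", clean.predict(eights)[:10])
print("poisoned model on 8s:", dirty.predict(eights)[:10])  # mostly 3s
```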
Now just wait til someone finds ways to poison audio and video recognition processes. It hasn’t happened yet to my knowledge, but I am sure someone will do it, because why not? Breaking stuff isn’t just for the malicious actors; lots of curious tech types love breaking stuff. Just wait til DEF CON next year and we’ll see what comes out of it…
So yeah, poisoned audio recognition so that every time it hears “rash” it thinks it heard “scratch”, and every time it hears “sore” it thinks it heard “cold”. That would make for some interesting chart notes.
Now let’s talk about the supporting tech. Elsewhere in the thread, people are concerned about hospitals getting shut down because their systems get locked up by ransomware, deployed by threat actors who are looking for payment to unlock. This happens weekly.
Example: https://www.bleepingcomputer.com/news/security/ardent-hospital-ers-disrupted-in-6-states-after-ransomware-attack/
The causes are, in my opinion, very similar to why nurses are burnt out. Security and tech teams in hospitals are under-resourced in both human power and money power. You can’t design an operational security program if you’re constantly running around trying to keep the vital-sign monitors from crapping out. Another reason hospitals get hit so often is all the devices that require internet connections and are wildly insecure, because their critical vulnerabilities never get fixed.
Adding complexity to an already overloaded and under-resourced system is another concern, to go with the last paragraph. Someone in admin is definitely going to be over the moon about yet another vendor, but when push comes to shove and the camera thing needs an update, is that vendor going to be doing that themselves, or does that work fall on someone like me, who is still trying to figure out why the monitors at the nurses station on the 4th floor will only connect to the vitals monitors on the second floor?
Cloud based complications, surveillance, secure and compliant storage of records…. list goes on.
just some thoughts… I agree with IM Doc. This is going to be a circus.
Stay healthy. Stay out of hospitals.
Eat your vegetables! https://www.youtube.com/watch?v=4sVwMsCF4vM
I have to admit I fantasize about the old X-men ability that made them invisible to machines. No camera could see them. No audio device could record them. They were just invisible to anything electrical or mechanical. Seems like it would be a useful thing to have these days.
Not directly pertinent to the issue being discussed here, but:
“What happens when AI attempting to achieve something meets an AI dedicated to undoing that same thing?” (Jackman February 17, 2024 at 11:24 am, above) is a very, very interesting question – for the future, I hope.
Will the internet eventually be destroyed by warring AI bots trying to undo each others work? Does the US Government have plans to incorporate AI into the ICBM system? Could the US, Chinese and Russian ICBM bots ally themselves against the UK & French ones and nuke Europe? (Not that they would really need to, as Europe is very effectively destroying itself anyway). Sounds like SF, but so does the Medical AI system.
A variation on that idea:
https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
Of course, if AI was simply being used to helpfully transcribe exam notes, it wouldn’t be a big deal. The problem is that the “rubric” is being applied in order to coerce physicians to up-code, up-sell, and over-prescribe. Of course, the next layer will push back on this and try to deny the insurance claim.
Now that I’m on Medicare I have seen this every time I go in for a well-patient exam. I’ve been mis-diagnosed, mis- (and over-) prescribed, and watched my latest physician fight to save me an all-day trip, $50 ferry fare, and $30 worth of fuel to obtain a 10-minute rule-out test that could and should have been performed at his office instead of at the regional testing mill.
This physician is still young and feisty. He’ll eventually stop seeing insurance or Medicare patients, as several of my past doctors did. I’m grateful that we have IM Doc and KLG sharing information with us. What a stupid system.
As an IT professional well versed in how things can go wrong with technology generally and AI in particular, I disagree. A few questions I would need to see answered before reaching this conclusion:
– Is it accurate?
– Does it work in settings with poor acoustic properties?
– Does it handle mumbling, heavy accents etc. on the part of physicians or patients?
– Does it work in settings with a lot of background noise?
– Does it recognize medical terms and jargon and transcribe them accurately?
– Is it purely transcribing, or performing additional functions like summarizing? (This is typically the AI part)
– What is the overall impact on accuracy compared to physician note taking?
– Does the physician check the transcript/summary/whatever for accuracy, so that any errors can be caught?
– If the physician checks for accuracy, how much time does it typically take them to find and correct any errors? Is the overall process more or less efficient than the physician doing it all themselves?
– Does this come with operational pressures that might cause physicians to shortcut accuracy checks or skip them entirely? For example, timetabling shorter appointments so that physicians no longer have time to rewrite inaccurate portions.
– Is the software simply capturing input, or is it performing other medical functions? It seems like a short step from this to ‘recommended’ treatments or drug courses.
– Is there any measurable difference in clinical outcomes under this system? Any treatments that get recommended more or less often? Is there an overall improvement or degradation in patient outcomes?
– Would this ever be used in high risk situations where accuracy is critical? For example, recording patient medication doses in hospital, surgical instructions, or similar (communication errors of this type are a leading cause of preventable injury or death in hospitals). If not, how will health providers identify which situations those are, and ensure that it isn’t used for them? If so, what accuracy standards does it have to meet? Does it meet them? How will incidents be tracked and reported?
– Is the patient being properly informed of this additional use of their personal information? Is there a requirement to gather consent? (Per Yves, we know this is true at least in some locations). If there’s no requirement, could there be a reasonable expectation based on similar applications in other fields? If they don’t give consent, will they still be able to receive care? If the answer is no (for example, if this becomes ubiquitous and there is no opt-out) is there a possible patient rights issue?
– If there’s an error, who is liable? What are the patient’s rights in this situation?
– Will the patient be able to request access to this data, correct it, and/or ask for it to be deleted?
That’s just off the top of my head, and before even getting into the extra issues in countries with predatory or pathological systems like the US.
Exactly fellow Chris.
There is no basis to assume that people are looking to insinuate AI into the medical field because they want to improve patient care. It’s not even a matter of reading files more efficiently, as it is used for in legal circles these days. The purpose of AI in this context is nakedly to charge more for services that cost less to offer, and to remove liability so that billing becomes even more impenetrable.
Someone suggesting otherwise has a burden to prove that this new technology is not an extension of what has been happening for at least the past 40 years.
Not a doctor, but in another licensed profession that requires professional liability insurance. As a licensed/registered professional (in my case geology and hydrogeology), I have to certify that various projects, tests, reports, designs, etc. were performed by me or under my direct supervision. The company that I currently work for recently started pushing AI at us, and since January the pushing has rapidly increased in intensity. Given the certification requirement of “prepared by me or under my direct supervision”, I can’t affix my professional certification to any project that uses AI since I have no idea what the basis for training the AI was, and therefore have no way of directly supervising it. One bad AI experience and I, the company, the client, and the victims (geology problems e.g. drinking water, landslides, landfills, etc.) are all in trouble.
Coordinated pushback against AI from physicians, engineers, geologists, architects, hairdressers,… every profession that requires board certification will be necessary to stop the momentum toward AI dystopia.
Bravo!
This is it exactly. If I can’t recreate the calcs, how can I approve them? As an engineer, I can’t work with AI as currently construed and stamp any drawing or calculations produced in such a manner. Nor would I want to.
I’m about to read this, but Yves’ mention of taking action in advance inspired me to comment that at a recent doc visit (PA of course, not a “real” doctor) there was not a transcriber in the room, as has been the case over the last 10 or so years. The doc and I talked and he examined a bit, but did it all on his phone. Did he have recording enabled, to be transcribed later by Gaggle translate? Oh what a world. I will surely ask next visit as to why the change. Thanks again, IM Doc and the NC crew.
I read the article and all the comments at this point. Great stuff. Redleg suggests “pushback against AI from physicians, engineers, geologists, architects, hairdressers,… every profession that requires board certification will be necessary to stop the momentum toward AI dystopia.” and I applaud that suggestion. Being recently retired, I don’t need licensing but have many previous cohorts who do. I will bring this concept up with them. I also encourage everyone to find alternative medical systems and some are being built as I type. Check into the FLCCC as they are establishing a “Provider’s Archive” that patients may turn to for assistance. Let’s all try to find, and support financially, various alternatives.
As a consumer, if what I’m getting is an AI diagnosis, why is a doctor necessary? Can’t I skip the medical system and just run the software myself? It seems an awful lot like the industry is sowing the seeds of its own destruction.
I think what you are talking about is the desired end state i.e. you will pay a subscription every month for medical advice/diagnosis.
Why one should absolutely refuse to use MyChart, or other electronic “conveniences,” when visiting a doctor, imaging center or lab.
Write on the responsibility-to-pay line: “No payment for services until hard copy received in United States Mail.” or “No payment until a copy of the image you are about to take is on a disk in my hands.”
Valid technical reasons to refuse these “services,” and which trigger HIPAA:
“I don’t own a computer and use a flip phone.”
Or, one can just state that they refuse to use a privacy invading, data-mining hackable portal.
If one accepts email from providers, use Protonmail, never the commercially available products, which state right there in your terms of use that they have the right, granted by you, to save, leak and monetize your images and data.
If providers send you e-mail, it probably won’t be secure encrypted e-mail like Protonmail’s. The contents will therefore be available to those who snoop on your e-mail. And if you’re sending e-mail to them, unless you agree on a secret code or phrase outside of e-mail, they can’t read your encrypted e-mail!
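A minimal sketch of that shared-secret point, using the Python cryptography package’s Fernet primitive; the message is illustrative:

```python
# With a key shared out of band, both sides can read; without it, the
# ciphertext is opaque. (pip install cryptography)
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be exchanged outside the e-mail channel
box = Fernet(key)
token = box.encrypt(b"lab results attached")
print(box.decrypt(token))     # only holders of `key` can do this
```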
AI-driven billing code inflation? No wonder they are slobbering all over this. It’s like a big, juicy profit steak! Could it be that suddenly the 70K ICD-10 codes won’t be anywhere near enough?
I work in a profession that scans these codes from regional labs and hospitals for research purposes. There have been years of effort put into using AI to assist in our efforts, and billing code inflation would fit into such efforts like a hand in a glove, handling more codes with less staff. It’s a win for “everyone”!
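For the curious, scanning free text for ICD-10-shaped codes is nearly a one-liner, which is part of why automation is so tempting here. A toy sketch; the regex is simplified and the sample text is invented:

```python
# Toy scanner for ICD-10-CM-shaped codes; simplified pattern, made-up text.
import re

ICD10 = re.compile(r"\b[A-Z]\d{2}(?:\.[0-9A-Z]{1,4})?\b")
text = "Dx: E11.9 (type 2 diabetes), I10 (hypertension)"
print(ICD10.findall(text))  # ['E11.9', 'I10']
```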
AI will finally be able to unleash the wicked power of all that data that is being hoovered up on all of us. Now add to that medical data on your most intimate details.
Imagine what all that data can be used for in ways you did not consent to.
Some ways might seem minor: we already see targeted ads.
And we see reports of FB using targeted psychological manipulation programming.
AI will be all of that on steroids. It will unleash vast powers imo…
How this fits the remarkable commentary by IM Doc, I am not yet sure. But it seems sure to fit:
https://www.nytimes.com/2024/02/15/science/columbia-cancer-surgeon-sam-yoon-flawed-data.html
February 15, 2024
A Columbia Surgeon’s Study Was Pulled. He Kept Publishing Flawed Data.
The quiet withdrawal of a 2021 cancer study by Dr. Sam Yoon highlights scientific publishers’ lack of transparency around data problems.
By Benjamin Mueller
The stomach cancer study was shot through with suspicious data. Identical constellations of cells were said to depict separate experiments on wholly different biological lineages. Photos of tumor-stricken mice, used to show that a drug reduced cancer growth, had been featured in two previous papers describing other treatments.
Problems with the study were severe enough that its publisher, after finding that the paper violated ethics guidelines, formally withdrew it within a few months of its publication in 2021. The study was then wiped from the internet, leaving behind a barren web page that said nothing about the reasons for its removal.
As it turned out, the flawed study was part of a pattern. Since 2008, two of its authors — Dr. Sam S. Yoon, chief of a cancer surgery division at Columbia University’s medical center, and a more junior cancer biologist — have collaborated with a rotating cast of researchers on a combined 26 articles that a British scientific sleuth has publicly flagged for containing suspect data. A medical journal retracted one of them this month after inquiries from The New York Times.
Memorial Sloan Kettering Cancer Center, where Dr. Yoon worked when much of the research was done, is now investigating the studies. Columbia’s medical center declined to comment on specific allegations, saying only that it reviews “any concerns about scientific integrity brought to our attention.”
Dr. Yoon, who has said his research could lead to better cancer treatments, did not answer repeated questions. Attempts to speak to the other researcher, Changhwan Yoon, an associate research scientist at Columbia, were also unsuccessful.
The allegations were aired in recent months in online comments on a science forum and in a blog post by Sholto David, an independent molecular biologist. He has ferreted out problems in a raft of high-profile cancer research, including dozens of papers at a Harvard cancer center that were subsequently referred for retractions or corrections.
From his flat in Wales, Dr. David pores over published images of cells, tumors and mice in his spare time and then reports slip-ups, trying to close the gap between people’s regard for academic research and the sometimes shoddier realities of the profession.
When evaluating scientific images, it is difficult to distinguish sloppy copy-and-paste errors from deliberate doctoring of data. Two other imaging experts who reviewed the allegations at the request of The Times said some of the discrepancies identified by Dr. David bore signs of manipulation, like flipped, rotated or seemingly digitally altered images.
Armed with A.I.-powered detection tools, scientists and bloggers have recently exposed a growing body of such questionable research, like the faulty papers at Harvard’s Dana-Farber Cancer Institute and studies by Stanford’s president that led to his resignation last year….
“Armed with A.I.-powered detection tools”
Propaganda designed to inure us to AI would be my guess
Thank you. As someone who has spent a career in academic sciences I can only say two things:
Many academic researchers can be led astray by their hopes that something is true.
Many corporations can be eager to claim the next best thing, no matter the provenance or unbridled claims of some scientists.
Late to the party here. Trying to digest IMDoc’s incredibly important warning and contribution to the blog, which I appreciate, plus lots of incisive comments.
Like others, I fear this is just another case of putting profits over patient care. AI optimized to recommend/order the most expensive course of treatment sounds like a winner. Or, from a legal standpoint, the most conservative, customary standard of care that may not be appropriate for the individual patient, but will keep the doctor (or in this case the corporate behemoth health care cartel) out of jeopardy of a malpractice lawsuit.
From a legal aspect, the key is going to be agency. An airline in Canada just lost a case where they tried to claim “it wasn’t us, it was the AI!” See:
https://www.cbc.ca/news/canada/british-columbia/air-canada-chatbot-lawsuit-1.7116416
The court held Air Canada liable, requiring it to compensate a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot.
This was a small claims court, though, so hardly a precedent-setter. You can bet your bottom $ that other big corporations will try to argue that they’re not liable for chatbots or AI that cause harm. Whether the courts buy that, remains to be seen.
And don’t forget about Microsoft and others indemnifying their customers from AI-based lawsuits. This is one way to get around the problem. Similarly, I can see big AI services like ChatGPT and Google Bard (now Gemini?) offering to indemnify their health care provider customers.
Doctors owe their patients a duty of care that is generally a higher standard in the law vs. the guy at Home Depot who sells you a snow blower. If AI is allowed to completely cut the doctor out, who is a proper defendant to sue? Microsoft? The techie who wrote the bad code?
Does the AI have a medical license or board certification in compliance with local laws and regulations? If I were a lawyer I’d start there.
AI scrambles due diligence and ethical responsibilities into a murky mess that’s best resolved by removing AI from the system.
Russell Brand, in a quick 5-minute video, shows a doctor talking about trust:
https://twitter.com/rustyrockets/status/1758824866940088702
With AI assistance, Orson Welles will narrate.
OMG – late for another party.
“Last night I trained Orson Welles’ voice into a model.”
The Time Machine was narrated by Orson Welles.
OMG – another Wells.
The Time Machine is a post-apocalyptic science fiction novella by H. G. Wells, published in 1895.
Remember, Orson Welles also narrated the movie “F for Fake”.
way late to this party…but something someone said about overdrugging:
when my stepdad was in a private hospital contracted to the VA…and on his deathbed…he suddenly took a turn for the worse.
when we investigated, we found that they had printed out his entire history of prescriptions…going back some 20 years.
pages and pages of drugs.
the doctors at this facility had prescribed them all…and almost killed him, because many of the drugs were very contraindicated with each other.
so mom yelled and stomped her feet, and back to the VA he went, and made a little bit of a comeback.
but this was summer 2021, and they were overwhelmed with covid…and the spinal cord clinic he would usually be in(for special care, due to autonomic response and other weird things with spinal injuries) was being used as a covid ward.
so they stabilised him….and sent him to another private hospital down the road…and the same thing happened: they gave him a million meds…some of which he hadnt taken in decades.
and when i tried to film with my fone while mom was reaming out the, i suspect, damage control woman, she yelled at me to stop or i’d be removed.
i said, try it lady…and she left the room.
my mom…has always been a very litigious person…and has spent lots of $ on several really dumb suits,lol.
i told her…now is the time.
but she refused.
so that last “hospital”(“Kindred Healthcare” in san antonio) likely killed Don…and i wouldnt be surprised if it wasnt part of some grand plan to weed out the old and expensive.
his last day…a sunday in august, the power went out…and the ancient diesel genny wouldnt start…the place was 90 degrees and dark.
they had little gensets brought in to run the monitors and respirators.
a total shit show.
i filed a formal complaint against them, in mom’s stead…but was told i didnt have standing,lol.
last time i was down there, kindred was still open.
and likely still a eugenicist shitshow.
thanks, IMDoc for bringing this to our attention.
In one-party consent states, e.g., Kansas, the practice of recording doctor-patient meetings, exams, etc., is a slam-dunk. The doc, who is party to the meeting, exam, etc., may consent, and the patient will have no recourse. [https://recordinglaw.com/united-states-recording-laws/one-party-consent-states/kansas-recording-laws/]
What’s next? Recording confessions and recommending Hail Marys? Recording legal counselling and recommending plea bargains? How many basic laws can you apparently break if you put an AI sticker on something?
Are these companies being run out of post-Soviet states? China? Who drives this crap?
Robo-call medicine. Hoo boy.
A precedent has been set by the Air Canada chatbot debacle: the humans who created the AI are at fault if something goes wrong because of it.