Good afternoon and welcome to my first rough draft of Coffee Break, which will be an ongoing part of our week here. This will be a project with an unknown evolutionary trajectory, depending on feedback from the community. Comments, criticism, and reader input are most welcome. Details are still in the making, so patience is requested. –KLG
Part the First: AI Kills Your Critical Thinking Skills. Medical students are very enthusiastic about AI – Algorithmic Intelligence (yes, that is what I call it, because I can). I remember very well the moment two years ago when ChatGPT popped its head up and students in my tutorial group were fairly thrilled at the bright, shiny new shortcut to knowledge right there in their phones. They have gone from books and paper to laptops to tablets to a phone that can be carried in one hand, all the time. The shadow medical curriculum now is the large thing casting shadows. Is this a good thing? Or a bad thing? Or just a thing? I tend to believe (OK, hope for) the latter. However, I do know from first-hand experience with students that the further they are away from their data (in this case the knowledge required to become a wise and effective physician) the more likely they are to miss the point entirely. This is not something we want in a physician, scientist, historian, psychologist, or philosopher.
Recently while I was thinking about how to improve our curriculum for preclinical medical students, up pops a link from Gizmodo, Microsoft Study Finds Relying on AI Kills Your Critical Thinking Skills. Key Point: “Over the course of the study, a pattern revealed itself: the more confident the worker was in the AI’s capability to complete the task, the more often they could feel themselves letting their hands off the wheel.” That is exactly what Elon Musk and others want us to do, isn’t it! So, whatever could be the problem?
The underlying study by a group of scientists at Carnegie Mellon University (nice pair of American Oligarchs, those two, but they did leave a tangible legacy – especially the first) and at Microsoft in Cambridge (not the one across the river from Boston) is here: The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. The results are “self-reported,” which is probably the only kind of data available for this project. The “user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted…” I thought the purpose of GenAI was to vitiate the need for critical thinking, but on the other hand, this study does “reveal new design challenges and opportunities for developing GenAI tools for knowledge work.” The words “knowledge” and “work” are doing a lot of work here, but we shall see. The scientists at Microsoft might be on to something. If it doesn’t kill us first.
In the meantime, we would do well to remember T.S. Eliot from The Rock:
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
The cycles of Heaven in twenty centuries
Bring us farther from God and nearer to the Dust.
Part the Second: Progress on COVID-19? Differential protection against SARS-CoV-2 reinfection pre- and post-Omicron. This open access paper is very technical but somewhat promising. From the Abstract:
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has rapidly evolved over short timescales, leading to the emergence of more transmissible variants such as Alpha and Delta…the Omicron variant marked a major shift (and) raised concerns regarding (the) potential impact on immune evasion, disease severity and the effectiveness of vaccines and treatments…Before Omicron, natural infection provided strong and durable protection against reinfection…(but)…during the Omicron era, protection was robust only for those recently infected, declining rapidly over time and diminishing within a year. These results demonstrate that SARS-CoV-2 immune protection is shaped by a dynamic interaction between host immunity and viral evolution, leading to contrasting reinfection patterns before and after Omicron’s first wave. This shift in patterns suggests a change in evolutionary pressures, with intrinsic transmissibility driving adaptation pre-Omicron and immune escape becoming dominant post-Omicron, underscoring the need for periodic vaccine updates to sustain immunity. (emphasis added)
Yes, we know that. We also know that durable immunity to coronaviruses has been a chimera, so far, in birds, cats, and people. One thing about this work, though: the first thing to do when reading a biomedical research paper, even before reading the abstract, is to check the acknowledgments. The authors seem to have had access to a good database, and they were not required to grub for grants to get the work done. One might note that Al-Jazeera is also funded by the government of Qatar, whose leader seems after all these years to allow the reporters and editors to do their jobs. But of course, YMMV, especially around these parts.
Part the Third: The New and Improved Department of Health and Human Services. I looked: the Secretary of Health and Human Services has generally been a politician of one sort or another, going back to when it was the Department of Health, Education, and Welfare (two out of three are bad words today; no wonder the name was changed). But few of them have been quite the lightning rod that is the current incumbent. He has quite a following among citizens of various stripes, and he is on a mission. The physicians at Science-Based Medicine are not favorably impressed, however. Based on my priors, they are correct.
The Chinese proverb “May you live in interesting times” is probably neither Chinese nor a proverb. Its most likely origin lies in something said by Sir Austen Chamberlain, half-brother to the much more famous Neville. But I am getting a bit tired of this, in a working life that has been coextensive with the rise of the Neoliberal Dispensation. Nevertheless, “You are not obligated to complete the work, but neither are you free to desist from it.” (Pirkei Avot: Ethics of Our Fathers, 2:21). We have work to do!
Frist.
Passing the torch!
Thank you, Lambert.
I think this is the first “First!” comment I’ve seen on NC. Cute.
fr0st p0st?
We like science. Here’s a new Construction Physics on how to make a jet engine. Lotsa pictures.
https://www.construction-physics.com/p/why-its-so-hard-to-build-a-jet-engine
Thank you, KLG, for your contributions.
Ditto!
Thank you for your previous writing, and I’m looking forward to seeing this weekly column.
Welcome!
aye.
but do give yerself time to grow into it(large waders, and all).
what Lambert calls “workflow”, etc, i call “set and setting”,lol.
i hafta actively set aside a period of time to sit down and write(i mean, besides my random comments)…ritualise it, turn off fone,look at the weather forecast, etc.
hard to do, even for me…so much work, so little body to do it,lol.
and i set my own agenda and schedule.
cripplehood taught me the value of some structure…preferably predictable…in ones ‘works and days’…
but i reckon you’ve got the chops,lol.
(and are likely better organised than i ever have been)
i’m pullin fer ya.
I had to look this up to make sure I had it right:
https://en.wikipedia.org/wiki/Set_and_setting
Thanks for the advice Amfortas and welcome KLG!
Re; “Where is the Life we have lost in living?”
120 years ago, there were no freezers or refrigerators in people’s homes. Maybe an icebox if you could afford to buy ice?
So, people preserved food by drying, fermenting, pickling, salting (“the pork barrel” was a barrel filled with salted pork)….smoking… burying in root cellars…
There were no “use by dates”. We used our knowledge, hunger, sense of smell and vision to decide what to eat, or not to eat.
The end result was we were surrounded by yeast, bacteria, fungi ……. in our food, in our guts and on our skin. And this is how we were engineered by God, or by god or by evolution…. to be literally swimming in life. (like a fish in water)
The salinity of the ocean is about 3.5% salt. When I make sauerkraut, I use between 2% and 4% salt of the total weight (water and cabbage). Wow.
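For the curious, the 2%–4% rule above is just simple arithmetic on the total batch weight; here is a minimal sketch in Python (the 2000 g batch is a made-up example figure, not from the comment):

```python
def salt_needed(total_weight_g, percent):
    """Grams of salt for a given total batch weight (cabbage + water) and salt percentage."""
    return total_weight_g * percent / 100.0

# Hypothetical 2000 g batch at the low (2%) and high (4%) ends of the range
low = salt_needed(2000, 2)    # 40.0 g
high = salt_needed(2000, 4)   # 80.0 g
```

Either way, the result stays at or below the ~3.5% salinity of seawater mentioned above.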
Sauerkraut, natto, kvas, beer, kefir, yogurt, kimchi……..
This is a very good “how to” youtube channel
https://www.youtube.com/@CleanFoodLiving
Something to chew on. RFK Jr. moves to eliminate public comment on HHS decisions, STAT.
Welcome (back) and thanks for agreeing to serve the NC community in this way. I’ve enjoyed your previous contributions, especially for the way in which they’ve conveyed the real world conditions prevailing in academic science. I’m retired from academia myself, though not in a biomedical field, and your descriptions of the way that things actually work always rang pretty true.
Following on from your item #2, this newest posting from the “Independent SAGE continues” got my attention: The rare flip-side of the immunological coin: Post-vaccine syndrome (PVS).
I don’t have the background to evaluate the technical content, so the part that impressed me came at the end where it deals with how to handle findings of rare but severe side-effects, when any such discussion occurs in a polluted information environment (my wording not theirs) full of bad-faith actors. My own career was in a field with a lot less public visibility, so I never had to deal with such dilemmas. If you have the time, I’d be interested in your take on this.
Wow. If this is indeed your “first rough draft”, we are all in for an incredible ride! You knocked it out of the park, IMNSFHO. Looking forward to next week.
Would like your comments on the TV show “House”, where Dr. House applies critical thinking to cure diseases that confound other doctors. I’m sure this show couldn’t be made today because Dr H questioned “The Science (TM)”
Welcome back and looking forward to reading more from you!
Thank you! Look forward to your future thoughts on science.
Thank you for stepping up to contribute concerning a field that is literally “life or death” for many of us. (I speak as a late sixty-something who fits many of the ‘markers’ for being “Death prone.”)
Stay safe while helping the rest of us do the same.
i recently took to using chatgpt to help with the coding portion of my homework. it gave me incredibly wrong answers that in no way matched what i was trying to model. but it was helpful in giving me the proper syntax, as it has been a few years since i worked in python.
i agree that the further one gets from the data, the more one will miss the point. it’s really important to know the physical meanings of numbers. i dont think this is just an issue in the medical field either. bit of an allegory of the cave situation. the more you look at representations of things (be it data, essays, or shadows on the wall), the less you look at what’s actually happening outside the cave. people love to blame phones for everything wrong with society, and while i don’t think they’re to blame for everything, i think looking at representations instead of reality all day is really bad for the mind.
Thanks KLG!
While we’re musing on the flailings of AI, I’ll observe that our Large Language Models are expert at generating convincing and plausible statements that may not be factual. Some call these hallucinations, but all LLM output is hallucination; some of it just accords with our knowledge.
So there really is a problem of trust. If you don’t know enough to know whether the information is inaccurate, then you can’t trust it; if you do, you don’t need it.
Maybe if we view AI as a paralegal or an undergrad research intern it becomes more useful. IDK. I mentioned this once before: I recently overheard a PhD student who is working on their dissertation say that, regarding the lit review, AI was terrible at searching for articles, but very good at identifying whether selected articles qualified by the metrics provided. I’ve seen first hand that AI is very good at admin-speak and generating boilerplate responses for “make-work” requests often foisted upon faculty by administration, and this should make a few people here smile.
Thank you for launching this — really looking forward to your posts and the discussions in the comments.
And thank you for diving in with such an on-point introduction to the issue with generative “AI” (yes, scare quotes). I teach in the Humanities and this has been a burning issue since the first avalanche of hype around ChatGPT hit a couple of years ago. It’s been something of a headache for many of my colleagues, and I know one person who actually quit teaching over this. In response to the arrival of these tools, many universities have formulated policies on the use of generative “AI” but generally they push off responsibility onto the faculty, by not taking any position at an institutional level and only suggesting boilerplate text for syllabi, to the effect of “the use of generative AI for homework is: (a) acceptable; (b) acceptable under certain conditions; (c) never acceptable.” The faculty then get to choose one option.
Having some background in computer science and familiarity with the fortunes of “AI”, I have opted for (c), i.e., that its use is never acceptable in a Humanities course. However, it is one thing to impose rules and it is another to persuade the students that generative AI is really not something they should be messing with. There’s a lot to be said about this and I’m looking forward to hearing more on the science surrounding the impact of these new tools on critical thinking.
I admired KLG’s previous posts and loved how he shared his knowledge. I am looking forward to more wisdom and insight.
Due to my rare visual problems, I stopped watching movies and television entirely. However, my memory of the movies I have previously watched is still pretty good. Do you remember that scene in the movie The Matrix where Neo points to a helicopter and asks Trinity, “Can you fly that thing?” Trinity replies, “Not yet.” She then tells the operator that she needs a pilot program for this helicopter model. The operator proceeds to upload the knowledge and experience for flying a helicopter into Trinity’s brain, and magically enough, she suddenly knows how to fly a helicopter.
That’s what I need! I need a shortcut to knowledge because I learn things slowly and I never seem to have enough time to learn new things because my medical problems make doing ordinary things difficult. Unfortunately, life doesn’t work that way. You have to put in the time to acquire knowledge. It took me a day or two to digest the latest medical paper I read, entitled “Early life stress impairs VTA coordination of BLA network and behavioral states.” [link] Certainly, there are readers here who have intellectual capabilities beyond what I have and could digest, learn, and apply the findings of Stone et al. a whole lot faster. Do share your shortcuts to knowledge, especially those that don’t involve phones, AI, and pie in the sky technology.
By the way, the ensuing scene from The Matrix was the greatest action sequence in movie history. Best action scene evah! From the rooftop, Neo and Trinity fly the helicopter, with Trinity as pilot and Neo manning a machine gun mounted to the helicopter, to the top floor of the building where the agents have taken Morpheus hostage. As Neo unleashes a barrage of bullets at the agents, Morpheus awakens from his stupor, realizing that this is Neo and Trinity’s rescue operation. Bullet casings literally rain from the helicopter, water from the fire sprinklers rain onto the agents and Morpheus, and an unbelievable number of bullets riddle through the agents. Matrix bullet time ensues as Morpheus attempts to run and escape by jumping from the building to the hovering helicopter. It is at this moment that Neo realizes Morpheus doesn’t have the momentum to make the jump (presumably by seeing the slow motion bullet hit Morpheus in the leg), and Neo reacts accordingly and jumps out of the helicopter to save and catch Morpheus. It is also at this moment that we the audience and the characters[1] in the movie realize that Neo is The One—that is, a deity who possesses divine powers in the matrix. But Neo is not done. Neo is still holding the rope that is tethered to the about-to-crash helicopter. The only way to save Trinity is for Neo to secure the rope to himself, brace for the full force of the falling helicopter, and hope Trinity figures out a way to sever the rope from the helicopter, which she miraculously does because…Hollywood.
I got carried away in this comment. Here is some more. The Morpheus rescue scene reminded me of the Jimi Hendrix song “Machine Gun.” This song has the most amazing guitar solo. Do take a listen.
[1]: In the preceding scene on the rooftop, Trinity was surprised at how agile Neo is in dodging Agent Smith’s bullets. “How did you move like that? You moved like they did.”
2 more lines:
With three times the pain
And your own self to blame
KLG, welcome to the show!
I’m torn between the horrors and the virtues of LLM’s. As a retired English teacher, a wave of despair comes over me when I think of the extra hours that LLM’s are forcing on careful graders of high school and college humanities papers.
Personal use of LLM’s has been beneficial for me, however. I haven’t found it hard at all to separate BS LLM from quality LLM. When in doubt, verify elsewhere. LLM’s can be such time savers and mind expanders. This much seems undeniable. Along these lines, this Noema piece struck me as striking a decent balance between human and algorithmic (yes!) intelligence.
The use of LLM’s by unformed or malicious minds, however, is the rub. Not to mention the quality of the tiny number of human minds responsible for creating and owning the LLM’s, all racing each other at breakneck speeds to see who can make the biggest buck by next week.
As a non-retired member of a Humanities faculty, my vantage point is that those “extra hours” just aren’t happening. Perhaps you taught at a highly-paid private school, but for most in the profession those extra hours are not compensated with overtime pay. In the absence of “no-AI” policies, the extra effort is endless. Ergo, not happening. And I say this as one who has spent many hours every semester giving detailed written feedback on student writing.
Yes, there are detection tools like Turnitin, but there have been highly-publicized cases in which its use led to a lawsuit. Everybody I know started saying “I can spot the AI content,” but then later conceded “it’s complicated.” Realistically, then, I think the most egregious cases are getting flagged, while any grey or edge cases are just getting waved through.
And what sorts of students will be produced by this regime? What can we say about all of those who used “AI” tools instead of actually learning real skills, and were waved through by an overworked, underpaid regime? They got their B.A., but can they write even a seven-page research paper without using LLMs? How could we say, since they were waved through the process?
If your goal is creative writing and writers’ block has been an issue for you, then yes I suppose LLMs could be seen as a “mind expander”, though for those of us in the Humanities working on research and the production of knowledge, they appear nothing short of inimical to the development of critical thinking. From X:
https://x.com/danish037/status/1894428793194123541
My policy now is that LLMs should be treated as a secondary source, and that means they must be cited correctly (e.g. Chicago, Turabian, MLA, whatever). Failure to do this is plagiarism, which is an automatic “F” for the assignment, and possibly the class. Moreover, just as Wikipedia is not an acceptable secondary source (because, among other failings, it can and does contain plagiarized text), neither are LLMs.
No surprise that AI kills your thinking skills. Take a look at spelling. By being such an avid reader in earlier years, I was pretty good at spelling, and in my family I am the one that people ask how to spell a word. But then spell-check came along and I can see my own spelling degrade as I use it in comments. You get mortified when, after a comment is posted and the 5-minute window passes, you spot some glaring, idiotic spelling mistake that you made. But in hindsight, you mostly know because a word looks ‘wrong.’
But if kids start to rely on AI from the get-go, they will never know if an answer is wrong or not, as they will not have that solid basis of knowledge to compare it against. Or they will just accept that the AI is right, in the same way that lawyers have several times gone into court with AI-generated legal arguments only to find that the AI had just made up some random cases, because a) they figured that the AI would be truthful and b) they were too lazy to look up the legal precedents the AI quoted to see if they were real or not. For people who learn the old-fashioned way, that may give them an edge over their colleagues who depend on AI.
I would add another example of people “un-learning” skills because of new technology.
The high percentage of people I know now who seem to have no idea “where” they are, how they got there, or how they are going to drive to the next place in exact terms, because they have never looked at a map and learned where everything is in relation to everything else, geographically.
So many people now rely on the car/phone navigation system, wherever they go. If you ask them about route options, they just say they go wherever the device tells them.
It is like the loss of knowing people’s phone numbers, because they are “in your phone”. The majority of phone numbers I know off the top of my head are old landlines from forty years ago, that mostly probably don’t exist anymore.
Dead reckoning, a dying art.
You could drop me anywhere in the southern Sierra and I’d have a good idea of where I was at based upon the peaks, the terrain and a lifetime of living on the land, if only temporarily.
Rarely do I see somebody younger pull out a good old fashioned topo map, as you can do it so easily on a smartphone, but the former really allows you to glimpse the big picture of everything around, while you mostly get it in small snatches on a phone.
Welcome, kind and knowledgeable sir. All good things are headed our way. May your offering bring the Music of the Spheres to my world. All the best, all~ways🌅
The Gene Hackman death is a science/medical story. In the latest from the LAT they say Hackman’s pacemaker stopped nine days ago so that is the likely time of death.
https://www.latimes.com/california/story/2025-02-28/what-we-know-about-hackman-wife-dog-death
Carbon Monoxide has been ruled out and toxicology tests may take months.
What is being called AI threatens to strip the humanity from customer service, remove the humanity — such as there is — from the insurance coverage we are compelled to buy for our homes, our cars, and our health, and now to de-school our physicians from the critical thinking skills and intuition vital to their profession. In exchange for these declines, AI is a drag on our crumbling electric grid. My understanding of present manifestations of AI is that it is an oversold pattern recognition system often ‘trained’ with faulty and ambiguous ‘training data’ scooped up from the web and any other exploitable source. This leads me to question whether the problem is the ‘AI’ pattern recognition algorithms or the way our barely human overseers are misusing a potentially valuable tool.
However, I have read of other applications of the so-called AI pattern recognition algorithms that are yielding, and promise, great advances in understanding protein structure and function, and in purposeful de novo protein design. I am thinking of the successes of the AlphaFold programs and work done at the Institute for Protein Design at the University of Washington, which I believe is also dependent on ‘AI’. While I am not sure how much of the protein design effort is over-hyped, AlphaFold appears to be an advance of sorts. Given that there is success in applying computer-driven pattern recognition to proteins, I fear much of the potential of computer-aided pattern recognition could be lost to human laziness and the relentless push for ‘efficiency’ and research ‘productivity’. I suspect too little effort has been devoted to gaining understanding of the structure and mechanism of the patterns that are discovered — the heart of how they work and their ‘meaning’ — the kinds of effort described in yesterday’s link about June Huh’s work in mathematics. How many researchers in protein research could spend a couple of years attempting to unravel the mysteries of protein structure the same way that June Huh was able to investigate his mathematics? In a similar vein, I suspect too little effort is devoted to understanding how computer pattern recognition works and to making the recognition process more transparent — so that a human can follow and understand how a pattern was found and how the pattern operates — its heart.
Thank you for agreeing to make your new entry into the weekly posting schedule; looking forward to the ongoing reveals and a peek from within the medical industry, from professionals who frequently will and have dropped useful knowledge.
To borrow a phrasing from film: in the excellent movie Margin Call there is a high-pressure, impromptu board meeting where Jeremy Irons saunters in as the board chairman. To paraphrase his directive to the young analyst portrayed by Zachary Quinto, we can all share from the above expertise.
Based only on the paragraph selected from the paper referred to in this post I come to a different conclusion than the “underscoring the need for periodic vaccine updates to sustain immunity.” Perhaps the “change in evolutionary pressures, with intrinsic transmissibility driving adaptation pre-Omicron and immune escape becoming dominant” is an indication that many of the existing animal immune systems could prove no match for the ability of the new strains of virus to adapt around them. Instead of efforts to craft more vaccines in an endless race against new virus strains the time is well past when we should fall back on some old-fashioned masking and some new fashioned air filtration systems … and while we are at it, how about some sick leave to encourage those who are sick to stay home. I believe it is time to begin more basic research into the fine details of how various immune systems, including those in our flora, operate and how viruses evade them. I also believe that research might do well to study approaches for dealing with the looming threat of fungal diseases as the increasing temperatures select for heat tolerant fungi which may begin breaching the mammalian body temperature barriers to infection.
I was speaking with a British radiologist working with the NHS and (of course) some private company claiming to bring AI to radiology. One interesting thing this doctor said was how radiology is far beyond simply looking at an MRI scan and identifying tumors and such, and how this task forms the basis of training for more specialized care. While these algorithms are flawed, the much more worrying thing is that young doctors are shown to become reliant on them and, therefore, less proficient at the core skillset needed to become a specialist. It will take many years to see the effect of this, but once the ball is rolling, it is hard to fix. I asked, “If this is having such a negative effect on the skillset of new doctors and can cause catastrophe in a decade, why the hell are we going forward with this?”
He just shrugged and said, “Well, we don’t have enough doctors.” Insanity.
I’m British (and technically 50% Aussie) and one of my best friends from my uni days is now a consultant radiologist. I’ve not gone into detail about this topic recently when chatting to him but am 99% sure he’d say the same thing, based on past conversations.
They’re definitely understaffed and he is counting down the days till he can take early retirement.
Belated, but I want to flag this, from the part two abstract: the claim that “Before Omicron, natural infection provided strong and durable protection against reinfection” isn’t meaningfully true. There were serious waves with obviously large amounts of reinfections through 2021, most notably in Manaus (Brazil) and India iirc. I remember GM arguing this forcefully and persuasively with various Herd Immunity goobers at the time.
The sheer mayhem of the delta waves worldwide in 2021, at a time of at least partial notional (if variably effective due to technical errors) protections, and after earlier waves of mass infection in the preceding year and a bit, make this an extremely dubious claim. Certainly not a claim that merits the confidence with which it is breezily asserted in the abstract. Especially considering that wild type lineages were mostly out of the picture by northern summer 2022, further reducing the opportunity to test the idea of ‘strong and durable protection’.