Why Must Humans Compete for Electric Power with AI Bullshit Generators Programmed by Ritual Incantations?

This is Naked Capitalism fundraising week. 826 donors have already invested in our efforts to combat corruption and predatory conduct, particularly in the financial realm. Please join us and participate via our donation page, which shows how to give via check, credit card, debit card, PayPal, Clover, or Wise. Read about why we’re doing this fundraiser, what we’ve accomplished in the last year, and our current goal, karōshi prevention.

By Lambert Strether of Corrente.

As readers have understood for some time, AI = BS. (By “AI” I mean “Generative AI,” as in ChatGPT and similar projects based on Large Language Models (LLMs)). What readers may not know is that besides being bullshit on the output side — the hallucinations, the delvish — AI is also bullshit on the input side, in the “prompts” “engineered” to cause the AI to generate that output. And yet, we allow — we encourage — AI to use enormous and increasing amounts of scarce electric power (not to mention water). It’s almost as if AI is a waste product all the way through!

In this very brief post, I will first demonstrate AI’s enormous power (and water) consumption. Then I will define “prompt engineering,” looking at OpenAI’s technical documentation in some detail. I will then show the similarities between prompt “engineering,” so-called, and the ritual incantations of ancient magicians (though I suppose alchemists would have done as well). I do not mean “ritual incantations” as a metaphor (like Great Runes) but as a fair description of the actual process used. I will conclude by questioning the value of allowing Silicon Valley to make any society-wide capital investment decisions at all. Now let’s turn to AI power consumption.

AI Power Consumption

From the Wall Street Journal, “Artificial Intelligence’s ‘Insatiable’ Energy Needs Not Sustainable, Arm CEO Says” (ARM being a chip design company):

AI models such as OpenAI’s ChatGPT “are just insatiable in terms of their thirst” for electricity, Haas said in an interview. “The more information they gather, the smarter [sic] they are, but the more information they gather to get smarter, the more power it takes.” Without greater efficiency, “by the end of the decade, AI data centers could consume as much as 20% to 25% of U.S. power requirements. Today that’s probably 4% or less,” he said. “That’s hardly very sustainable, to be honest with you.”

From Forbes, “AI Power Consumption: Rapidly Becoming Mission-Critical“:

Big Tech is spending tens of billions quarterly on AI accelerators, which has led to an exponential increase in power consumption. Over the past few months, multiple forecasts and data points reveal soaring data center electricity demand, and surging power consumption. The rise of generative AI and surging GPU shipments is causing data centers to scale from tens of thousands to 100,000-plus accelerators, shifting the emphasis to power as a mission-critical problem to solve… The [International Energy Agency (IEA)] is projecting global electricity demand from AI, data centers and crypto to rise to 800 TWh in 2026 in its base case scenario, a nearly 75% increase from 460 TWh in 2022.

From the World Economic Forum,

AI requires significant computing power, and generative AI systems might already use around 33 times more energy to complete a task than task-specific software would.

As these systems gain traction and further develop, training and running the models will drive an exponential increase in the number of data centres needed globally – and associated energy use. This will put increasing pressure on already strained electrical grids.

Training generative AI, in particular, is extremely energy intensive and consumes much more electricity than traditional data-centre activities. As one AI researcher said, ‘When you deploy AI models, you have to have them always on. ChatGPT is never off.’ Overall, the computational power needed for sustaining AI’s growth is doubling roughly every 100 days.
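The quoted figures are worth a back-of-the-envelope check. A doubling every 100 days compounds to roughly a twelvefold increase per year, and the IEA’s 460 TWh to 800 TWh projection is indeed “nearly 75%.” A minimal sketch (my arithmetic, not the WEF’s or the IEA’s):

```python
# Back-of-the-envelope check of the quoted figures.
# "Doubling roughly every 100 days" implies a yearly growth factor of 2^(365/100).
doubling_days = 100
yearly_factor = 2 ** (365 / doubling_days)   # roughly 12.6x per year

# IEA base case: 460 TWh (2022) -> 800 TWh (2026).
increase = (800 - 460) / 460                 # roughly 0.74, i.e. "nearly 75%"

print(f"Implied compute growth: ~{yearly_factor:.1f}x per year")
print(f"IEA demand increase 2022->2026: {increase:.0%}")
```

Nothing in the grid build-out pipeline grows at anything like twelvefold per year, which is the whole point.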

And from the Soufan Center, “The Energy Politics of Artificial Intelligence as Great Power Competition Intensifies“:

Generative AI has emerged as one of the most energy-intensive technologies on the planet, drastically driving up the electricity consumption of data centers and chips…. The U.S. electrical grid is extremely antiquated, with much of the infrastructure built in the 1960s and 1970s. Despite parts of the system being upgraded, the overall aging infrastructure is struggling to meet our electricity demands–AI puts even more pressure on this demand. Thus, the need for a modernized grid powered by efficient and clean energy is more urgent than ever…. [T]he ability to power these systems is now a matter of national security.

Translating, electric power is going to be increasingly scarce, even when (if) we start to modernize the grid. When push comes to shove, where do you think the power will go? To your Grandma’s air conditioner in Phoenix, where she’s sweltering at 116°F, or to OpenAI’s data centers and training sets? Especially when “national security” is involved?

AI Prompt “Engineering” Defined and Exemplified

Wikipedia (sorry) defines prompt “engineering” as follows:

Prompt engineering is the process of structuring an instruction that can be interpreted and understood [sic] by a generative AI model. A prompt is natural language text describing the task that an AI should perform: a prompt for a text-to-text language model can be a query such as “what is Fermat’s little theorem?”, a command such as “write a poem about leaves falling”, or a longer statement including context, instructions, and conversation history.

(“[U]nderstood,” of course, implies that the AI can think, which it cannot.) Much depends on how the prompt is written. OpenAI has “shared” technical documentation on this topic: “Prompt engineering.” Here is the opening paragraph:

As you can see, I have helpfully underlined the weasel words: “Better,” “sometimes,” and “we encourage experimentation” don’t give me any confidence that there’s any actual engineering going on at all. (If we were devising an engineering manual for building, well, an electric power generating plant, do you think that “we encourage experimentation” would appear in it? Then why would it here?)

Having not defined its central topic, OpenAI then goes on to recommend “Six strategies for getting better results” (whatever “better” might mean). Here’s one:

So, “fewer fabrications” is an acceptable outcome? For whom, exactly? Surgeons? Trial lawyers? Bomb squads? Another:

“Tend” how often? We don’t really know, do we? Another:

Correct answers not “reliably” but “more reliably”? (Who do these people think they are? Boeing? “Doors not falling off more reliably” is supposed to be exemplary?) And another:

“Representative.” “Comprehensive.” I guess that means keep stoking the model ’til you get the result the boss wants (or the client). And finally:

The mind reels.
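For the curious, the “strategies” in such guides boil down to string concatenation: paste some instructions, paste some worked examples, paste the question. A minimal sketch of the genre — the template text and function below are my own illustration, not anything from OpenAI’s documentation:

```python
# What "prompt engineering" amounts to in practice: concatenating strings.
# The instruction line and examples are hypothetical, for illustration only.

def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a 'few-shot' prompt: an instruction, worked examples, then the task."""
    lines = ["Answer concisely. If unsure, say 'I don't know'."]  # the "incantation"
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {task}\nA:")          # the model continues from the dangling "A:"
    return "\n\n".join(lines)

prompt = build_prompt(
    "What is Fermat's little theorem?",
    examples=[("What is 2 + 2?", "4")],
)
print(prompt)
```

Note that there is no compiler, no test, no spec — just text, and the hope that this arrangement of text coaxes out fewer fabrications than some other arrangement.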

The bottom line here is that the prompt engineer doesn’t know how the prompt works, why any given prompt yields the result that it does, or even whether AI works at all. In fact, the same prompt doesn’t even give the same results each time! Stephen Wolfram explains:

[W]hen ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word.

Like glorified autocorrect, and we all know how good autocorrect is. More:

But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.

The fact that there’s randomness here means that if we use the same prompt multiple times, we’re likely to get different essays each time. And, in keeping with the idea of voodoo, there’s a particular so-called “temperature” parameter that determines how often lower-ranked words will be used, and for essay generation, it turns out that a “temperature” of 0.8 seems best. (It’s worth emphasizing that there’s no “theory” being used here; it’s just a matter of what’s been found to work [whatever that means] in practice [whose?].)
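The sampling loop Wolfram describes can be sketched in a few lines. This is a toy with made-up probabilities, not a real language model; the word list and the rescaling trick (raise each probability to 1/T and renormalize, a standard softmax-temperature move) are my illustration of the idea, nothing more:

```python
# Toy version of temperature sampling: rescale the model's next-word
# probabilities by a "temperature" T, then draw one word at random.
import random

def sample_next_word(probs: dict[str, float], temperature: float,
                     rng: random.Random) -> str:
    """Pick the next word; T -> 0 always takes the top word, higher T gambles more."""
    if temperature == 0:
        return max(probs, key=probs.get)
    # Sharpen (T < 1) or flatten (T > 1) the distribution, then renormalize.
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for word, w in weights.items():
        r -= w
        if r <= 0:
            return word
    return word  # fallback for floating-point rounding

probs = {"the": 0.5, "a": 0.3, "voodoo": 0.2}  # made-up next-word probabilities
rng = random.Random(42)
print(sample_next_word(probs, temperature=0.0, rng=rng))            # always "the"
print([sample_next_word(probs, temperature=0.8, rng=rng) for _ in range(5)])
```

Run it twice with different seeds and the second line changes — which is exactly why the same prompt yields a different essay each time.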

This really is bullshit. These people are like an ant pushing a crumb around until it randomly falls in the nest. The Hacker’s Dictionary has a term for what Wolfram is exuding excitement about, and it covers prompt “engineering” as well:

voodoo programming: n.

[from George Bush Sr.’s “voodoo economics”]

1. The use by guess or cookbook of an obscure or hairy system, feature, or algorithm that one does not truly understand. The implication is that the technique may not work, and if it doesn’t, one will never know why. Almost synonymous with black magic, except that black magic typically isn’t documented and nobody understands it. Compare magic, deep magic, heavy wizardry, rain dance, cargo cult programming, wave a dead chicken, SCSI voodoo.

2. Things programmers do that they know shouldn’t work but they try anyway, and which sometimes actually work, such as recompiling everything.

I rest my case.

AI “Prompt” Engineering as Ritual Incantation

From Velizar Sadovski (PDF), “Ritual Spells and Practical Magic for Benediction and Malediction: From India to Greece, Rome, and Beyond (Speech and Performance in Veda and Avesta, I.)”, here is an example of an “Old Indian” Vedic ritual incantation (c. 900 BCE):

The text boxed in red is a prompt — natural language text describing the task — albeit addressed to a being even less scrutable than a Large Language Model. The expected outcome is confusion to an enemy. Like OpenAI’s ritual incantations, we don’t know why the prompt works, how it works, or even that it works. And as Wolfram explains, the outcome may be different each time. Hilariously, one can imagine the Vedic “engineer” tweaking their prompt: “two arms” gives better results than just “arms,” binding the arms first, then the mouth works better; repeating the bindings twice works even better, and so forth. And of course you’ve got to ask the right divine being (Agni, in this case), so there’s a lot of professional skill involved. No doubt the Vedic engineer feels free to come up with “creative ideas”!

Conclusion

The AI bubble — pace Goldman — seems far from being popped. AI’s ritual incantations are currently being chanted in medical data, local news, eligibility determination, shipping, and spookdom, not to mention the Pentagon (those Beltway bandits know a good thing when they see it). But the AI juice has to be worth the squeeze. Cory Doctorow explains the economics:

Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn’t optional – investor disillusionment is an inevitable part of every bubble).

Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors (“hallucinations”). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don’t care about the odd extra finger. If the chatbot powering a tourist’s automatic text-to-translation-to-speech phone tool gets a few words wrong, it’s still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.

There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company’s perspective – is that these aren’t just low-stakes, they’re also low-value. Their users would pay something for them, but not very much.

For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:

Cory Doctorow: What Kind of Bubble is AI?

Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.

But why would anybody build a “high stakes” product on a technology that’s driven by ritual incantations? Airbus, for example, doesn’t include “Lucky Rabbit’s Foot” as a line item for a “fully loaded” A350, do they?

There’s so much stupid money sloshing about that we don’t know what to do with it. Couldn’t we give consideration to the idea of putting capital allocation under some sort of democratic control? Because the tech bros and VCs seem to be doing a really bad job. Maybe we could even do better than powering your Grandma’s air conditioner.

This entry was posted in Guest Post, Ridiculously obvious scams, Technology and innovation.

About Lambert Strether

Readers, I have had a correspondent characterize my views as realistic cynical. Let me briefly explain them. I believe in universal programs that provide concrete material benefits, especially to the working class. Medicare for All is the prime example, but tuition-free college and a Post Office Bank also fall under this heading. So do a Jobs Guarantee and a Debt Jubilee. Clearly, neither liberal Democrats nor conservative Republicans can deliver on such programs, because the two are different flavors of neoliberalism (“Because markets”). I don’t much care about the “ism” that delivers the benefits, although whichever one does have to put common humanity first, as opposed to markets. Could be a second FDR saving capitalism, democratic socialism leashing and collaring it, or communism razing it. I don’t much care, as long as the benefits are delivered. To me, the key issue — and this is why Medicare for All is always first with me — is the tens of thousands of excess “deaths from despair,” as described by the Case-Deaton study, and other recent studies. That enormous body count makes Medicare for All, at the very least, a moral and strategic imperative. And that level of suffering and organic damage makes the concerns of identity politics — even the worthy fight to help the refugees Bush, Obama, and Clinton’s wars created — bright shiny objects by comparison. Hence my frustration with the news flow — currently in my view the swirling intersection of two, separate Shock Doctrine campaigns, one by the Administration, and the other by out-of-power liberals and their allies in the State and in the press — a news flow that constantly forces me to focus on matters that I regard as of secondary importance to the excess deaths. What kind of political economy is it that halts or even reverses the increases in life expectancy that civilized societies have achieved? 
I am also very hopeful that the continuing destruction of both party establishments will open the space for voices supporting programs similar to those I have listed; let’s call such voices “the left.” Volatility creates opportunity, especially if the Democrat establishment, which puts markets first and opposes all such programs, isn’t allowed to get back into the saddle. Eyes on the prize! I love the tactical level, and secretly love even the horse race, since I’ve been blogging about it daily for fourteen years, but everything I write has this perspective at the back of it.

7 comments

  1. JBird4049

    >>Overall, the computational power needed for sustaining AI’s growth is doubling roughly every 100 days.

    Every hundred days? May I assume that power usage is also doubling every hundred days? Power production certainly isn’t increasing. Yes, and the greedy, corrupt, and incompetent buffoons at Pacific Gas & Electric keep my utilities nicely exorbitant because reasons; I also assume that the CPUC, which has been owned by PG&E et alii for over a century, will go along with this, also because reasons as the profits, bonuses, and dividends must ever increase. Well, hopefully we won’t get rolling blackouts again if they divert energy from domestic customers to the more affluent business customers.

    A major problem in our economy is the lack of funding from workers’ income to actual small (and I assume medium) business loans to start or expand businesses, which also chokes local revenue for state and local governments that depend on their constituents having income to tax; yet, we have gobs of cash to waste on destructive BS.

    Reply
  2. Acacia

    Thanks, Lambert. It’s kind of sad to think that students are being sold “prompt engineering”, as if this is going to be lucrative in the future.

    And as for the future of all this, may I humbly suggest that we start with the past, specifically, the opening paragraphs of Kant’s famous essay on the Enlightenment, published in 1784 (apologies for not using ChatGPT to summarize):

    An Answer to the Question: What is Enlightenment?

    1. Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another. This immaturity is self-imposed when its cause lies not in lack of understanding, but in lack of resolve and courage to use it without guidance from another. Sapere Aude! “Have courage to use your own understanding!” — that is the motto of enlightenment.

    2. Laziness and cowardice are the reasons why so great a proportion of men, long after nature has released them from alien guidance (naturaliter maiorennes), nonetheless gladly remain in lifelong immaturity, and why it is so easy for others to establish themselves as their guardians. It is so easy to be immature. If I have a book to serve as my understanding, a pastor to serve as my conscience, a physician to determine my diet for me, and so on, I need not exert myself at all. I need not think, if only I can pay: others will readily undertake the irksome work for me. The guardians who have so benevolently taken over the supervision of men have carefully seen to it that the far greatest part of them (including the entire fair sex) regard taking the step to maturity as very dangerous, not to mention difficult. Having first made their domestic livestock dumb, and having carefully made sure that these docile creatures will not take a single step without the go-cart to which they are harnessed, these guardians then show them the danger that threatens them, should they attempt to walk alone. Now this danger is not actually so great, for after falling a few times they would in the end certainly learn to walk; but an example of this kind makes men timid and usually frightens them out of all further attempts.

    This, then, is where AI appears to be taking us: to a docile, cowardly, uncurious, techno-feudal society.

    Reply
  3. Alan Sutton

    As Michael Hudson says: “All economies are planned. The argument is about who gets to do the planning”.

    This is another illustration of governments outsourcing their responsibility to at least identify priorities for resource allocation.

    And also another illustration of how the “market” is actually not very efficient except in terms of, maybe, maximising profit rather than ensuring good social outcomes. Except in this case it doesn’t look like there is any certainty of any real profit yet.

    It turns out that handing over planning to private finance ensures outcomes that are not even rational, let alone equitable. How unexpected!

    Reply
    1. jsn

      A market is only as good as its design.

      A market designed to maximize short term yields facilitates maximal fraudulent “innovation”.

      A market maximizing fraudulent “innovation” backed by a fiat issuing Central Bank is, well, what we have: a funnel of public money to private con persons, mostly men who will convert their cash to some offshore asset somewhere leaving the public holding the bag. The logical conclusion of “the efficient markets hypothesis.”

      Reply
  4. aletheia33

    intoning: with this incantation i bind all the gears of war.
    i unleash the gears of peace, and of providing food, good healthcare, education, a safe home, a humane old age, and so on to everyone.

    …one can hope at least that the delusional “implementation” of “AI” might hasten the material/technological collapse of the war “machine” (including the “biological warfare” departments).
    perhaps one aspect of the incipient collapse that might mitigate the disaster.

    thank you lambert for this great “sendup.” not to trivialize your work or your points–but it IS also very funny.

    Reply
  5. Reader Keith

    If you’d like to spend 45 minutes of your life you’ll never get back listening to prompt engineers discuss their craft, here ya go.

    – These young people are well-intentioned (and well paid), I wish them a long and successful career
    – All they are trying to do is “get the model to do things!”
    – Be honest with the model! Don’t expect too much!
    – Give the model good directions! (aka, tell it exactly what to do while burning $$ and getting sub-par results)
    – Did I mention not to expect too much?
    – Some voodoo and incantations may be required
    – Some experts have gotten LLMs to not play a Gameboy!
    – If at first you don’t succeed, spend a weekend writing better prompts!
    – If you make it past the first 15 minutes of this video, you may be prompt engineering material!

    https://www.youtube.com/watch?v=T9aRN5JkmL8

    Reply
