By Lambert Strether of Corrente.
AI = BS (results on Covid and Alzheimers). As I wrote: “I have no wish to advance the creation of a bullshit generator at scale.” Despite this, I have never filed Artificial Intelligence (AI) stories under “The Bezzle,” even though all the stupid money sloshed into it, once it became apparent that Web 3.0, crypto, NFTs etc. were all dry holes. That’s because I expect AI to succeed, by relentlessly and innovatively transforming every once-human interaction, and all machine transactions, into bullshit, making our timeline even stupider than it already is. Probably a call center near you is already working hard at this!
Be that as it may, the Biden Adminstration came out last week with a jargon-riddled and prolix Executive Order (EO) on AI: “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (“Fact Sheet“). One can only wonder whether an AI generated the first paragraphs:
My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.
In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built. I firmly believe that the power of our ideals; the foundations of our society; and the creativity, diversity, and decency of our people are the reasons that America thrived in past eras of rapid change. They are the reasons we will succeed again in this moment. We are more than capable of harnessing AI for justice, security, and opportunity for all.
The administrative history of the EO is already disputed, with some sources crediting long-time Democrat operative Bruce Reed, and others [genuflects] Obama. (Characteristically, Obama drops his AI reading list, without actually summarizing it.) Biden is said to have been greatly impressed by watching Mission: Impossible – Dead Reckoning: Part One at Camp David (“[a] powerful and dangerous sentient AI known as ‘The Entity’ goes rogue and destroys a submarine”), and by being shown fake videos and images of himself and his dog. (Presumably Biden knew the video was fake because Commander didn’t bite anyone.)
I’ll present the best summary of the EO I could find shortly; curiously, I couldn’t find a simple bulleted list that didn’t take up half a page. Mainstream coverage was generally laudatory, though redolent of pack journalism. Associated Press:
… creating an early set of guardrails that could be fortified by legislation and global agreements …
The Biden administration’s AI executive order has injected a degree of certainty into a chaotic year of debate about what legal guardrails are needed for powerful AI systems.
And TechCrunch:
The fast-moving generative AI movement, driven by the likes of ChatGPT and foundation AI models developed by OpenAI, has sparked a global debate around the need for guardrails to counter the potential pitfalls of giving over too much control to algorithms.
As readers know, I loathe the “guardrails” trope, because implicit within are the value judgments that the road is going to the right destination, the vehicle is the appropriate vehicle, the driver is competent and sober, and the only thing that’s needed for safety is guardrails. It’s hard to think of a major policy initiative in the last few decades where any of those judgments were correct; the trope is extraordinarily self-satisfied.
Coverage is not, however, in complete agreement on the scope of the EO. From the Beltway’s Krebs Stamos Group:
Reporting requirements apply to large computing clusters and models trained using a quantity of computing power just above the current state of the art and at the level of ~25-50K clusters of H100 GPUs. These parameters can change at the discretion of the Commerce Secretary, but the specified size and interconnection measures are intended to bring only the most advanced “frontier” models into the scope of future reporting and risk assessment.
So my thought was that the EO is really directed at ginormous, “generative” AIs like ChatGPT, and not (say) the AI that figures out how long the spin cycle should be in your modern washing machine. But my thought was wrong. From EY (a tentacle of Ernst & Young):
Notably, the EO uses the definition of “artificial intelligence,” or “AI,” found at 15 U.S.C. 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Therefore, the scope of the EO is not limited to generative AI; any machine-based system that makes predictions, recommendations or decisions is impacted by the EO.
So the EO could, at least in theory, cover that modern washing machine.
Nor was coverage in complete agreement on the value of regulation per se, especially in the Silicon Valley and stock picking press. From Steven Sinofsky, Hardcore Software, “211. Regulating AI by Executive Order is the Real AI Risk.”
Instead, this document is the work of aggregating policy inputs from an extended committee of interested constituencies while also navigating the law—literally what is it that can be done to throttle artificial intelligence legally without passing any new laws that might throttle artificial intelligence. There is no clear owner of this document. There is no leading science consensus or direction that we can discern. It is impossible to separate out the document from the process and approach used to ‘govern’ AI innovation. Govern is quoted because it is the word used in the EO. This is so much less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.
Sinofsky will get no disagreement from me in his aesthetic judgement of the EO as a deliverable. However, he says “slow[ing] innovation” like that’s a bad thing. Ditto “throttl[ling] artificial intelligence.” What’s wrong with throttling a bullshit generator?
Silicon Valley’s other point is that regulation locks in incumbents. From Stratechery:
The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm [“the risk of human extinction”] in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.
On the bright side, from Barron’s, if you play the ponies:
First, I want to make it clear I’m not antiregulation. You need rules and enforcement; otherwise you have chaos. But what I’ve seen in all my years is that many times the incumbent that sought to be regulated had such a hand in the creation of the regulation they tilt the scales in favor of themselves.
There’s a Morgan Stanley report where they studied five large pieces of regulatory work and the stock performance of the incumbents. It proved it’s a wonderful buying opportunity, when people fear that the regulation is going to hurt the incumbent.
So that’s the coverage. The best summary of the EO I could find is from The Verge:
The order has eight goals: to create new standards for AI safety and security, protect privacy, advance equity and civil rights, stand up for consumers, patients, and students, support workers, promote innovation and competition, advance US leadership in AI technologies, and ensure the responsible and effective government use of the technology.
Several government agencies are tasked with creating standards to protect against the use of AI to engineer biological materials, establish best practices around content authentication, and build advanced cybersecurity programs.
The National Institute of Standards and Safety (NIST) will be responsible for developing standards to ‘red team’ AI models before public release, while the Department of Energy and Department of Homeland Security are directed to address the potential threat of AI to infrastructure and the chemical, biological, radiological, nuclear and cybersecurity risks. Developers of large AI models like OpenAI ‘s GPT and Meta’s Llama 2 are required to share safety test results.
Do you know what that means? Presumably the incumbents and their competitors know, but I certainly don’t. More concretely, from the Atlantic Council:
What stands out the most is not necessarily the rules set out for industry or broader society, but rather the rules for how the government itself will begin to consider the deployment of AI, with security being at the core. As policy is set, it will be extremely important for government bodies to “walk the walk” as well.
Which makes sense, given that the Democrats are highly optimized for spookdom (as is Silicon Valley itself, come to think of it). And not especially optimized for you or me.
Now let’s turn to the detail. My approach will be to list not what the EO does, or what its goals (ostensibly) are, but what is missing from it; what it does not do (and I’m sorry if there’s any disconnect between the summary and any of the topics below; the elephant is large, and we are all blind).
Missing: Teeth
From TechCrunch, there’s an awful lot of self-regulation and voluntary compliance, and in any case an EO is not regulation:
[S]ome might interpret the order as lacking real teeth, as much of it seems to be centered around recommendations and guidelines — for instance, it says that it wants to ensure fairness in the criminal justice system by ‘developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
And while the executive order goes some way toward codifying how AI developers should go about building safety and security into their systems, it’s not clear to what extent it’s enforceable without further legislative changes.
For example, the EO requires testing. But what about the test results? Time:
One of the most significant elements of the order is the requirement for companies developing the most powerful AI models to disclose the results of safety tests. [The EO] doesn’t, however, set out the implications of a company reporting that its model could be dangerous. Experts are divided—some think the Executive Order solely improves transparency, while others believe the government might take action if a model were found to be unsafe.
Axios confirms:
It’s not clear what action, if any, the government could take if it’s not happy with the test results a company provides.
A venture capitalist remarks:
“Without a real enforcement mechanism, which the executive order does not seem to have, the concept is great but adherence may be very limited,” [Bradley Tusk, CEO at Tusk Ventures] said.
(Of course, to a venture capitalist, lack of compliance — not sure about that watered-down “adherence” — might be a good thing.)
Missing: Transparency
From AI Snake Oil:
There is a glaring absence of transparency requirements in the EO — whether pre-training data, fine-tuning data, labor involved in annotation, model evaluation, usage, or downstream impacts. It only mentions red-teaming, which is a subset of model evaluation.
IOW, the AI is treated as a black box. If the outputs are as expected, then the AI tests out positive. Did we just try that, operationally, with Boeing, and find discover that not examining the innnards of aircraft didn’t work out that well? That’s not how we build bridges or buildings, either. In all these cases, the “model” — whether CAD, or blueprint, or plan — is knowable, and the engineering choices are documented. (All of which could be used to make the point that software engineering, whatever it may be, is not, in fact engineering; Knuth IMSNHO would argue it’s a subtype of literature.)
Missing: Finance Regulation
From the Brookings Institution:
Sometimes what is not mentioned is telling, and this Executive Order largely ignores the Treasury Department and financial regulators. The banking and financial market regulators are not mentioned once, while Treasury is only tasked with writing one report on best practices among financial institutions in mitigating AI cybersecurity risks and provided a hardly exclusive seat along with at least 27 other agencies on the White House AI Council. The Consumer Financial Protection Bureau (CFPB) and Federal Housing Finance Agency heads are encouraged to use their authorities to help regulated entities use AI to comply with law, while the CFPB is being asked to issue guidance on AI usage that complies with federal law.
In a document as comprehensive as this EO, it is surprising that financial regulators are escaping further push by the White House to either incorporate AI or to guard against AI’s disrupting financial markets beyond cybercrime.
Somehow I don’t think finance is being ignored because we could abolish investment banking and private equity with AI. A cynic might urge that AI would be very good at generating supporting material or even algorithms for accounting control fraud, and it’s being left alone for that reason.
MIssing: Labor Protection
From Variety:
Among other things, Biden’s AI executive order directs federal agencies to “develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing [what on earth does “addressing” mean?] job displacement; labor standards; workplace equity, health, and safety; and data collection.” In addition, it calls for a report on “AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.”
A report! My goodness! As Variety gently points out:
In its deal reached Sept. 24 with studios, the WGA secured provisions including a specification that “AI-generated material can’t be used to undermine a writer’s credit or separated rights” in studio productions. Writers may choose to use AI, but studios “can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services,” per the agreement.
Joe Biden is, of course, a Friend To The Working Man, but from this EO, it’s clear that a union is a much better friend.
Missing: Intellectual Property Protection
From IP Watchdog:
The EO prioritizes risks related to critical infrastructure, cybersecurity and consumer privacy but it does not establish clear directives on copyright issues related to generative AI platforms….
Most comments filed by individuals argued that AI platforms should not be considered authors under copyright law, and that AI developers should not use copyrighted content in their training models. “AI steals from real artists,” reads a comment by Millette Marie, who says that production companies are using AI for the free use of artists’ likeness and voices. Megan Kenney believes that “generative AI means a death of human creativity,” and worries that her “skills are becoming useless in this capitalistic hellscape.” Jennifer Lackey told the Copyright Office her concerns about “Large Language Models… scraping copyrighted content without permission,” calling this stealing and urging that “we must not set that precedent.”
In other words, the Biden Administration and the authors of the EO feel that hoovering up terabytes of copyrighted material is jake with the angels; their silence encourages it. That’s unfortunate, since it means that the entire AI industry, besides emitting bullshit, rests on theft (or “original accumulation,” as the Bearded One calls it).
Missing: Liability
Once again from AI Snake Oil:
Fortunately, the EO does not contain licensing or liability provisions. It doesn’t mention artificial general intelligence or existential risks, which have often been used as an argument for these strong forms of regulation.
I don’t know why the author thinks leaving out liability is good, given that one fundamental “innovation” or AI is stealing enormous amounts of copyrighted material, for which the creators ought to be able to sue. And if the AI nursemaid puts the baby in the oven and the turkey in the crib at Thanksgiving, we ought to be able to sue for that, too.
Missing: Rights
From, amazingly enough, the Atlantic Council:
In October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights. The Blueprint suggested that the United States would drive toward a rights-based approach to regulating AI. The new executive order, however, departs from this philosophy and focuses squarely on a hybrid policy and risk-based approach to regulation. In fact, there’s no mention of notice, consent, opt-in, opt-out, recourse, redress, transparency, or explainability in the executive order, while these topics comprised two of the five pillars in the AI Bill of Rights.
“[T]here’s no mention of notice, consent, opt-in, opt-out, recourse, redress, transparency, or explainability.” Wow, that’s odd. I mean, every EULA I’ve ever read has all that. Oh, wait….
Missing: Privacy
From TechCrunch:
For example, the order discusses concerns around data privacy — after all, AI makes it infinitely more easy to extract and exploit individuals’ private data at scale, something that developers might be incentivized to do as part of their model training processes. However, the executive order merely calls on Congress to pass “bipartisan data privacy legislation” to protect Americans’ data, including requesting more federal support to develop privacy-preserving AI development techniques.
Punting to Congress. That takes real courage!
Conclusion
Here’s Biden again, from his speech on the release of the EO:
We face a genuine inflection point, one of those moments where the decisions we make in the very near term are going to set the course for the next decades … There’s no greater change that I can think of in my life than AI presents.
What, greater than nuclear war? Surely not, though perhaps Biden doens’t “think of” that. Reviewing what’s missing from the EO, it seems clear to me that despite glibertarian bro-adjacent whinging about regulation, the EO is “light touch.” You and I, however, deserve and will get no protection at all. “Inflection point” for whom? And in what way?
Just a half of a way through the post and I see, from ‘TechCrunch’: “… surveillance, crime forecasting and predictive policing,…”
Ah ha! ‘Pre Crime’ in all it’s dystopian glory.
The AI isn’t going to be the “adversary.’ It’s users will.
As for financial matters; AI will be a whizz at “front running.”
Oh, and as for the last point about “excessive radiation release events,” anybody remember this line?
“Shall we play a game?”
What fun! When Dabney Coleman comes off as the best person to be running things, then we are well and truly….
What was Matthew Broderick thinking?
https://youtu.be/-1F7vaNP9w0
“ ‘Global thermonuclear war’ — ♡♡♡ yeah, that ought to impress her ♡♡♡ …”
Also: the story of the IMSAI 8080 used in filming:
https://www.imsai.net/the-wargames-imsai/
I’ve been thinking about that movie a lot in the last year. Thinking maybe we should force the top people in the Biden Administration and Congress to watch it.
But then I remember Rice and Rumsfeld testifying that no one imagined people would use airplanes as missiles.
As you said “We are well and truly….”
I tripped over that passage too. Yikes.
In re intellectual property:
As the IP Watchdog article mentions, the EO kicks the copyright can down the road, mentioning a “consultation” with the U.S. Copyright Office (which, it’s worth remembering, is part of the Library of Congress and thus not an executive branch agency, although it often functions like one) after the Office digests the nearly-10,000 comments it got in response to its notice of inquiry on AI and IP (comments period closed October 30).
At Section 5.2(c)(iii) it says:
“within 270 days of the date of this order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes later, consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.”
> the EO kicks the copyright can down the road mentioning a “consultation”
Let’s not be too cynical. Maybe, after the consultation, they’ll write a report!
Thanks, Lambert, for this deep dive.
It sounds like this EO is more about the appearance of “doing something” but will actually do pretty much nothing to slow the incoming tsunami of deepfakes, AI-enabled revenge p*rn, and other dodgy AI “use cases”.
Doing nothing or nearly nothing but with great fanfare and “societal concerns” with lots of law articles and hundreds of references to other laws seems to be the motto of our globalist neolibs.
Ironically, the justification for this EO describes general American values as the basis for past successes, but appears not to assign any of the success to EOs, regulation or ‘guardrails’, which I find quite telling.
As for cybersecurity, I wonder if the EO takes a position on software backdoors, as I consider injected responses in general AI to be the actual greatest threat. The increased human intervention/locking down of possible responses is visible in chatgpt vs. the initial weeks of operation. Who can argue with the output of a logic machine?
Oh those Bidenites. They think they are so smart, that no one has noticed their MO.
Release some big pile of words some of which that might indicate they are noticing public concern and wishes and dealing with the problems the public sees or are experiencing. Leave much of it purposely vague. Do not back up legally anything remotely regulatory or publicly beneficial. If the public is stupid enough to make use of any supposed benefits do not point out any fine print issues or more often how little real foundation there is. Wait for the legal challenges in 3…2…1. The court will throw out everything not corporate or big money friendly usually leaving the public worse off than before the Bidenites crafted Joe’s orders.
In my younger years I would have expected the majority of the press to be all over things like this. But now that most of our newspapers and news media are just subsidiaries of corporate 800 lb gorillas either the editors have already been told their is no reason to have human beings craft news “stories” from press releases OR the writers just haven’t figured out they are disposable.
Yes, very well put. Great piece, Lambert, thank you. When I hear “self-regulation and voluntary compliance”, as well as reading the blah blah bullsh$t especially the “eight goals”:
“AI safety and security, protect privacy, advance equity and civil rights, stand up for consumers, patients, and students, support workers..”
Where do we do this at all? Consumer, patient student and support worker support?? (Try no support, much less twice in one sentence. “Equity and civil rights”? More worthless claptrap from this guy who has clout, he’s the President, but fails to use it effectively or at all.
My ask is to take the time to learn about this stuff. Mozilla has a good start
https://ai-guide.future.mozilla.org
Sounds kinda like you’re implying that we haven’t done our homework.
I’ve been in the industry and have been following the fortunes of “AI” for over 40 years.
Hbu?
But isn’t that already a given?
In fact, of course, this is just another case of this guy’s mouth being way out in front of the feeble entity they call his brain. In fact it makes you wonder how he manipulated his way to the presidency – until you remember that he was selected by his controllers, and docile controllability is the chief qualifying characteristic. They don’t care what he says, they only care that he does what he’s told to do.