Yves here. With science (particularly medicine) corrupted by commercial interests, yet elite authorities and mouthpieces insisting that the masses defer to “the science,” debates and decisions over safety are moving more and more into the political realm. That normally would not be a bad thing. Most favor risk avoidance with respect to large-scale experiments on the public, consistent with the dictates of the precautionary principle. However, many views are now influenced by finely tuned PR campaigns… again on behalf of monied interests. So even with more informed layperson input on novel and potentially dangerous technologies… who can mind the minders if the minders are very clever at cherry-picking and spinning relevant information?
Please note that the post includes three infographics that are useful but not critical to the post. Sometimes I can see in the code how to resize them, but these had no obvious clues. If any helpful readers can advise, please e-mail me at yves-at-nakedcapitalism-dot-com with “Resize” in the subject line. Or you can view them at the original location.
By Michael Schulson, a contributing editor for Undark whose work has been published by Aeon, NPR, Pacific Standard, Scientific American, Slate, and Wired, among other publications, and Peter Andrey Smith, a senior contributor at Undark whose stories have been featured in Science, STAT, The New York Times, and WNYC Radiolab. Originally published at Undark
The project was so secret, most members of Congress didn’t even know it existed.
In 1942, when an elite team of physicists set out to produce an atomic bomb, military leaders took elaborate steps to conceal their activities from the American public and lawmakers.
There were good reasons, of course, to keep a wartime weapons development project under wraps. (Unsuccessfully: Soviet spies learned about the bomb before most members of Congress.) But the result was striking: In the world’s flagship democracy, a society-redefining project took place, for about three years, without the knowledge or consent of the public or their elected representatives.
After the war, one official described the Manhattan Project as “a separate state” with “a peculiar sovereignty, one that could bring about the end, peacefully or violently, of all other sovereignties.”
Today’s cousins to the Manhattan Project — scientific research with the potential, however small, to cause a global catastrophe — seem to be proceeding more openly. But, in many cases, the public still has little opportunity to consent to the march of scientific progress.
Which specific experiments are safe, and which are not? What are acceptable levels of risk? And is there science that simply should never be done? Such decisions are arguably among the most politically consequential of our time. But they are often made behind closed doors, by small groups of scientists, executives, or bureaucrats.
In some cases, critics say, the simple decision to do the research at all — no matter how low-risk a given experiment may be — advances the field toward riskier horizons.
In the text and graphics that follow, we attempt to illuminate some of the key people who are currently entrusted with making these weighty decisions in three fields: pathogen research, artificial intelligence, and solar geoengineering. Identifying such decision makers is necessarily a subjective exercise. Many names are surely missing; others will change with the incoming administration of Donald Trump. And in every field, decisions are rarely made in isolation by any one person or even small group of persons, but as a distributed process involving varying layers of input from formal and informal advisers, committees, working groups, appointees, and executives.
The extent of oversight also varies across disciplines, both domestically and across the globe, with pathogen research being much more regulated than the more emergent fields of AI and geoengineering. For AI and pathogen research, our focus is limited to the United States — reflecting both a need to limit the scope of our reporting, and the degree to which American science currently leads the world in both fields, even as it faces stiff competition on AI from China.
With those caveats in mind, we offer a sampling — illustrative but by no means comprehensive — of people who are part of the decision-making chain in each category as of late 2024. Taken as a whole, they appear to be a deeply unrepresentative group — one disproportionately White, male, and drawn from the professional class. In some cases, they occupy the top tiers of business or government. In others, they are members of lesser-known organizational structures — and in still others, the identities of key players remain entirely unknown.
Pathogen Research
Most research with dangerous bacteria and viruses poses little risk to the public. But some experiments, often called gain-of-function work, involve engineering pathogens in ways that may make them better at infecting and harming human beings.
The scientists who do this work say their goal is to learn how to prevent and fight future pandemics. But, for a portion of such experiments, an accidental lab leak could have global repercussions.
Today, many experts are convinced that Covid-19 jumped from an animal to a person — and most evidence collected to date points squarely in that direction. Still, some scientists and U.S. government analysts believe that the Covid-19 pandemic may have originated at a Chinese laboratory that received U.S. funding.
Whatever the reality, the possibility of a lab leak has heightened public awareness of risky pathogen research.
One of the secretive committees that makes decisions about potential gain-of-function research is housed within the National Institutes of Health. The other is part of the Administration for Strategic Preparedness and Response within the Department of Health and Human Services (HHS). Spokespeople for both offices declined to share details about the committees’ memberships, or even to specify which senior officials coordinate and oversee the committees’ activities.
“I think some of this is for good reason, like preserving the scientific integrity and protecting science from political interference,” said one former federal official who worked outside of HHS, in response to a question about why details about oversight are often difficult to pin down. (The official spoke on condition of anonymity because the views expressed may not reflect those of their current employer.) “I think some of this is also driven by an inability of HHS to understand how to navigate increasing public scrutiny of this kind of work,” the official added, describing the lack of transparency around the special HHS review panel as “totally crazy.”
Artificial Intelligence
If pathogen research is mostly funded and overseen by government agencies, AI is the opposite — a massive societal shift that has, in recent years, been led by the private sector.
The consequences of the technology are already far-reaching: Automated processes have denied people housing and health care coverage, sometimes in error. Facial recognition algorithms have falsely tagged women and people of color as shoplifters. AI systems have also been used to generate nonconsensual sexual imagery.
Other risks are hard to predict. For years, some experts have warned that a hyperintelligent AI could pose profound risks to society — harming human beings, supercharging warfare, or even leading to human extinction. Last year, a group of roughly 300 AI luminaries issued a one-sentence warning: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Many other experts, especially in academia, characterize those kinds of warnings principally as a marketing stunt, intended to deflect concern from the technology’s more immediate consequences. “The very same people who are making and profiting by AI are the ones who are trying to sell us on an existential threat,” said Ryan Calo, a co-founder of the University of Washington’s Center for an Informed Public.
“It’s cheaper to guard against existential threat that is future speculative,” he said, “than it is to actually solve the problems that AI is creating today.”
Despite calls for regulatory scrutiny, no federal agency comparable to the U.S. Food and Drug Administration conducts pre-market approval for AI systems, requiring developers to prove the safety and efficacy of their product prior to use.
Federal regulatory agencies have made limited moves to oversee specific applications of the technology, such as when the Federal Trade Commission banned Rite Aid from using face-recognition software for five years. At the state level, California’s governor recently vetoed a controversial bill that would have curbed the tech’s development.
Solar Geoengineering
In theory, injecting particles into the atmosphere could reflect sunlight, cooling the planet and reversing some of the worst effects of climate change. So could altering clouds over the ocean so that they reflect more light.
In practice, critics say, solar geoengineering could also bring harms, both directly (for example, by changing rainfall patterns) and indirectly (by sapping resources from more fundamental climate solutions like reducing greenhouse gas emissions). And once interventions are underway, they may be difficult or dangerous to stop.
Right now, the science on geoengineering largely consists of computer models and a handful of small-scale tests. But in 2022, worried about where the field was trending, hundreds of scientists and activists called for a moratorium on most research. Some experts suggest that even small, harmless real-world tests are paving the way for future, riskier interventions.
Within the U.S., no single government agency exercises clear-cut control over the decision of whether to test or use that technology, although certain outdoor experiments could plausibly trigger regulators’ attention — for example, if they affect endangered species. Globally, experts say, it remains unclear how existing international treaties or agencies could limit solar geoengineering, which could allow a single country or company to unilaterally alter the global climate.
“It’s a very small group of people” making decisions about solar geoengineering, said Shuchi Talati, founder of the Alliance for Just Deliberation on Solar Geoengineering. “It’s a very elite space.”
Racist, reactionary MICIMATT required a white-flight, suburbanite bourgeoisie, addicted to gas, oil, coal, uranium & shiny cheap/slave labor imports to fuel forever-wars; with nuclear subs, carriers; unlimited budgets with poison pills to keep “the help” red-lined into sacrifice zones, until urban renewal removed any competition, mass transit, ethnic, union or lefty press & silenced contradictory fact? Refugees built THEM the final say? If Trump’s Tech Boi plutocrats need SMR, fracked CNG & bitumen to power AI, they’ll simply have academic pundits & SCIENCE dictate a plethora of bio-fuel, geoengineering, GE monoculture, carbon sequestration and bridge fuel boondoggles to obviate any need for nascent AGW-mitigation imports from China?
Wow. That’s a powerful, condensed commentary. I think I agree.
Matter of perspective?
SERIOUSLY, best wishes to ALL!
MICIMATT: Military-Industrial-Congressional-Intelligence-Media-Academia-Think-Tank
Thank you.
I served for 15 years on my university’s Research Ethics Review Board. Such panels have become very good at what they do, but there are glaring weaknesses as well. All scientists–both natural and social–think that THEIR research is of pressing importance, and there are always cowboys who think that rules don’t apply to them. At least in regard to human subjects, review boards have become very good at protecting minors and securing informed consent. There are, however, two structural weaknesses of review boards. The first is their scope: there are areas of research that simply aren’t covered by the board at all. Most of what the article described would not be covered by ethics review. Second, there is an assumption that researchers have the right to do research, so long as their work is compliant with ethics rules. Almost never is the question raised as to whether the research should be done at all.
Readers using the Firefox browser can right click on the image, select “This Frame” and select “Open this Frame in a new Tab”. The image will be larger, easily legible, and there will be ‘>’ indications to view a small stack of associated images. This is not exactly what you requested but it might do the trick for some who use Firefox. I do not know whether other browsers have similar capabilities.
This does not answer what Yves requested but it might help users of Firefox.
Yes, works for me in FFox. Full screen, facial images recognizable in New Tab.
(Off topic, but related: a few days ago a commenter suggested a ‘finger-printing’ blocker available for FFox browser (CanvasBlocker). I installed it and it works. It also allows me to view blocked URLs (RT website).) YMMV
Neat trick, thank you.
Geoengineering.
This is proven fact, not theory. Why? Scientists have been able to see the ramifications of a number of volcanoes that have put material into the upper atmosphere, and then watched the effects worldwide over years. That is geoengineering, by nature.
There is solid data on ocean cloud formation due to sulfur emissions from ships. Sulfur in marine fuel was sharply restricted worldwide in 2020. There has been a noticeable reduction in cloud formation along the sea routes that ships take. This has also been measured as increased sea temperatures along those routes, with estimates of the resulting worldwide temperature increases.
And finally, we humans are already doing geoengineering in the form of CO2, methane, and black carbon, as well as a host of other chemicals, which have been shown to be changing our climate, including rainfall patterns and temperatures. Of course there are those who don’t agree with that premise, but well over 90 percent of climate scientists do think this is true.
Could doing actual proactive geoengineering to reduce climate temperature increases create problems? Yes, sure. But it has to be compared to the changes we are already causing and the changes and damages that will occur from doing nothing.
One aspect of man-made proactive geoengineering is that every one of the processes is very short-lived and requires continuous replenishment, allowing for fine-tuning of the location and quantity of the material to better target the desired effects.
There is no question that massive changes are happening, but is moving to low-carbon fuels going to happen fast enough to stop the worst of the changes?