Yves here. Given the willingness of far too many businesses (and one assumes government agencies) to implement at best flukey and at worst error-producing AI with a lack of adult supervision, one can understand why many are calling for regulation. But the wee problem is the AI overlords have also been engaging in AI scaremongering…so as to get regulation. Why? They are afraid that too many AI implementations have low barriers to entry (as in could be developed with pretty small and specific training sets). Having to comply would limit who could develop AI by imposing compliance costs. So odds greatly favor that any regulatory regime will be designed to further enrich AI titans.
My preferred remedy is liability. Make AI developers and implementers liable, with treble damages and recovery of legal costs if they could have foreseen the bad outcomes. This can be done by statute. That would lead AI entrepreneurs to be a lot more careful before they foisted their creations on the public at large.
By Tom Valovic, a writer, editor, futurist, and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and cultural issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in-chief of Telecommunications magazine for many years. Tom has written about the effects of technology on society for a variety of publications including Common Dreams, Counterpunch, The Technoskeptic, the Boston Globe, the San Francisco Examiner, Columbia University’s Media Studies Journal, and others. He can be reached at jazzbird@outlook.com. Originally published at Common Dreams
By virtue of luck or just being in the right place at the right time, I was the first journalist to report on the advent of the public internet.
In the early 1990s, I was editor-in-chief of a trade magazine called Telecommunications. Vinton Cerf, widely considered to be the “father of the internet,” was on our editorial advisory board. One Sunday afternoon, Vint contacted me to let me know that the federal government was going to make its military communication system, ARPANET, available to the general public. After reading his email, I more or less shrugged it off. I didn’t think much of it until I started investigating what that would really mean. After weeks of research and further discussions, I finally grasped the import of what Vint had told me, with its deeper implications for politics, society, culture, and commerce.
As the internet grew in size and scope, I started having some serious concerns. And there was a cadre of other researchers and writers who, like myself, wrote books and articles offering warnings about how this powerful and incredible new tool for human communications might go off the rails. These included Sven Birkerts, Clifford Stoll, and others. My own book Digital Mythologies was dedicated to such explorations.
While we all saw the tremendous potential that this new communications breakthrough had for academia, science, culture, and many other fields of endeavor, many of us were concerned about its future direction. One concern was how the internet could conceivably be used as a mechanism of social control—an issue closely tied to the possibility that corporate entities might actually come to “own” the internet, unable to resist the temptation to shape it for their own advantage.
The beginning of the “free service” model augured a long slow downward slide in personal privacy—a kind of Faustian bargain that involved yielding personal control and autonomy to Big Tech in exchange for these services. Over time, this model also opened the door to Big Tech sharing information with the NSA and many businesses mining and selling our very personal data. The temptation to use free services became the flypaper that would trap unsuspecting end users into a kind of lifelong dependency. But as the old adage goes: “There is no free lunch.”
Since that time, the internet and the related technologies it spawned, such as search engines, texting, and social media, have become all-pervasive, creeping into every corner of our lives. By default, and without the due process of democratic participation or consent, these services are rapidly becoming a de facto necessity for participation in modern life. Smartphones mediate these amazing capabilities and are now often essential tools for navigating both government services and commercial transactions.
Besides the giveaway of our personal privacy, the problems with technology dependence are now becoming all too apparent. Placing our financial assets and deeply personal information online creates significant stress and insecurity about being hacked or tricked. Tech-based problems then require more tech-based solutions in a kind of endless cycle. Clever scams are increasing and becoming more sophisticated. Further, given the global CrowdStrike outage, it sometimes seems like we’re building this new world of AI-driven digital-first infrastructure on a foundation of sand. And then there’s the internet’s role in aggravating income and social inequality. Unfortunately, this technology is inherently discriminatory, leaving seniors and many middle- and lower-income citizens in the dust. To offer a minor example, in some of the wealthier towns in Massachusetts, you can’t park your car in public lots without a smartphone.
Will AI Wreck the Internet?
Ironically, the Big Tech companies working on AI seem oblivious to the notion that this technology has the potential to be a wrecking ball. Conceivably, it could diminish everything that’s been good and useful about the internet while creating unprecedented levels of geopolitical chaos and destabilization. Recent trends with search engines offer a good example. Not terribly long ago, search results yielded a variety of opinions and useful content on any given topic. Searchers could then decide for themselves what was true or not true by making an informed judgment.
With the advent of AI, this has changed dramatically. Some widely used search engines are herding us toward specific “truths” as if every complex question had a simple multiple-choice answer. Google, for example, now offers an AI-assisted summary when a search is made. The summary becomes tempting to use because manual search now yields an annoying truckload of sponsored ad results, which must then be systematically ploughed through, rendering the search process difficult and unpleasant.
This shift in the search process appears to be by design, intended to steer users toward habitually using AI for search. The implicit assumption that AI will provide the “correct” answer, however, nullifies the whole point of having a user-empowered search experience. It also radically reverses the original proposition of the internet, i.e., to be a freewheeling tool for inquiry and personal empowerment, threatening to turn the internet into little more than a high-level interactive online encyclopedia.
Ordinary citizens and users of the internet will be powerless to resist the AI onslaught. The four largest internet and software companies (Amazon, Meta, Microsoft, and Google) are projected to invest well over $200 billion this year in AI development. Then there’s the possibility that AI might become a kind of “chaos agent” mucking around with our sense of what’s true and what’s not true—an inherently dangerous situation for any society to be in. Hannah Arendt, who wrote extensively about the dangers of authoritarian thinking, gave us this warning: “The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist.”
Summing up, we need to radically reassess the role of the internet and associated technologies going forward and not abandon this responsibility to the corporations that provide these services. If not, we risk ending up with a world we won’t recognize—a landscape of dehumanizing interaction, even more isolated human relationships, and jobs that have been blithely handed over to AI and robotics with no democratic or regulatory oversight.
In 1961, then FCC Chairman Newton Minow spoke at a meeting of the National Association of Broadcasters. He observed that television had a lot of work to do to better uphold public interest and famously described it as a “vast wasteland.” While that description is hardly apt for the current status of the internet and social media, its future status may come to resemble a “black forest” of chaos, confusion, misinformation, and disinformation with AI only aggravating, not solving, this problem.
What then are some possible solutions? And what can our legislators do to ameliorate these problems and take control of the runaway freight train of technological dependence? One of the more obvious actions would be to reinstate funding for the Congressional Office of Technology Assessment. This agency was established in 1974 to provide Congress with reasonably objective analysis of complex technological trends. Inexplicably, the office was defunded in 1995 just as the internet was gaining strong momentum. Providing this kind of high-level research to educate and inform members of Congress about key technology issues has never been more important than it is now.
Curious that there was no mention of Suchir Balaji’s death/murder. Ian Carroll @IanCarrollShow had an interesting tweet on the individuals who make up the board of OpenAI.
Congressional Office of Technology Assessment. I’m sure that will take care of the problem!
The same people leading that department will rotate into their next position at the very companies they were supposed to regulate. Isn’t that how it works now?
I do think he unconsciously knows where this is heading though when he worried: “we risk ending up with a world we won’t recognize—a landscape of dehumanizing interaction, even more isolated human relationships, and jobs that have been blithely handed over to AI and robotics with no democratic or regulatory oversight.”
So more of the same, only on steroids.
Good comment Blue (sounds like I’m cheering on the Dodgers). The chances of a congressional OTA are slim and none. How many people have had their SSI numbers, date of birth (etc.) leaked? Nearly everyone. And what did our government do? I raised the question a couple days ago in Water Cooler asking does anyone have ideas on strategies for getting off the internet.
I am sometimes reminded of Rosa Parks. If everyone refused to use a “smart” phone to park in Mass., would the parking lot stay in business? One person in line saying, “No, I’m not going to use this QR code to order my dinner” could easily cause the entire restaurant to empty as people agree, “Yes, this is BS, I’m outta here too.” Actions speak louder than words – Rosa Parks showed us that.
I have my small bit of resistance – I refuse to use QR codes to order from a menu or do anything else. I recently had lunch at a fancy San Diego beachside restaurant with a group of friends from high school, and they wanted us to scan a QR code stuck to the table to see a menu. The QR code was stuck to one side of a table for six. This is a restaurant where it would be easy to spend at least $75 EACH on lunch (shared appetizers, mimosa x 2, entree and tip). I went full Karen and demanded to speak to the manager, making it clear I was not annoyed with the server. Guess what – they found perfectly nice paper menus for us and comped us appetizers.
I saw a lost dog sign with a QR code instead of a phone number when I was out running.
Many of these tech “solutions” don’t seem to have much function. Why is it helpful to not be able to use a menu? My parents are pretty on the ball with basic tech and they can’t manage with these silly things. Imagine if I saw your dog but couldn’t get in touch because I didn’t know how to use a QR code.
Apologies to anyone named Karen, every one I know named Karen is a sweetheart. I wish there was a better term.
OTOH it does afford new opportunities. In Edinburgh, UK, for example, there is a relatively new popup restaurant market: a large location with space for many small food suppliers. You sit at a table, scan the QR code, and can then order any item from any food supplier and have it brought to your table – you could have a drink from one place, a starter from another, a main from a third, and dessert from a fourth if you wanted. Hard to do without using the Internet. The menus and food suppliers change frequently, so trying to do it all on paper would be difficult.
However, the main issue for me is that everything arrives randomly, especially if you are with a group and all order from different places.
In general though, you are absolutely right. My Mum refuses to have a mobile phone, and is gradually being disenfranchised from society all in the name of “progress”.
There was a newish restaurant here in Ann Arbor that started out thinking it was hip, groovy and cool to base menu-access on a qr code. If you needed a physical menu, they would bring you a heavy scrollable tablet-device menu scanner.
Apparently rebellion was widespread and sullen enough that the restaurant abandoned qr code and tablet menus just in time and brought in good old paper menus.
This is the restaurant. You can see from the photo how groovy, cool and hip it is. Good food, though.
https://www.experience4m.com/
In 2013, post Snowden’s whistleblowing and other lapses, the US military via then General Alexander promised to weaponize the internet.
https://www.nsa.gov/Press-Room/Speeches-Testimony/Article-View/Article/1619508/transcript-of-remarks-by-gen-keith-alexander-commander-us-cyber-command-uscyber/
And here we are.
Imo, the corporations are doing what the US military wants them to do. The internet came out of military development and funding to begin with. Most of the big internet tech companies live off govt contracts.
Have a print library of older reference material: an old pre-2010 dictionary, an old encyclopedia set, etc. AI can’t muck with analog print. When I read Canada wanted to get rid of all pre-2008 books in their school libraries (in the name of social justice) a big red flag went up for me. Whitney Webb is thinking of putting out a subscribers print newsletter. / my 2 cents
I keep making the point that if AI crapifies the Internet to the point where it merely spits out “AI generated” truths, then we will see science abandon the Internet. Science always requires skepticism and questioning the status-quo to move forward. The day humans stop questioning, science dies.
If that forces science back to peer-reviewed print journals along with physical libraries and labs, all the better, but I remember the dawn of the Internet when the big hope was that the Internet would democratize information.
Oh well, it was nice while it lasted.
The day humans stop questioning, science dies.
ChrisFromGA: I don’t think humans have stopped questioning. The problem is that they are not allowed to question anything without fear of losing their job or being labelled a terrorist. IMHO science died a long time ago once it became about access to funding.
I do agree about what you say here: Science always requires skepticism and questioning the status-quo to move forward. Einstein would give you a thumbs-up!
Maybe science will have to move over to the Dark Web to keep it safe-
https://en.wikipedia.org/wiki/Dark_web
It wasn’t “Canada”, it was Ontario’s Peel District School Board (the second largest school board in Canada). The policy in question was an “equitable distribution cycle” which was part of a larger response to a controversy that included widespread allegations of systemic discrimination in hiring within the district. It appears that some of the schools just decided to remove everything published before 2008 as that would be easier than reading through the policy.
Of course, not saying that the policy itself wasn’t deeply flawed.
Thanks.
Not only was Newton Minow right, but television has become an even vaster wasteland, with infotainment instead of news and double the amount of advertising. So it’s hardly surprising that the internet is going in the same direction.
But unlike the television age, we now do at least have a greater ability to talk back to the would-be manipulators, and in fact I and others are doing it right now.
Meanwhile the tech bros stand triumphantly on quicksand and the bs nature of their latest AI fad will become obvious soon enough. It’s the destructive side of tech seen in weapons and carbon spewing lifestyles that we and the planet really have to worry about. It’s a little late to worry about losing our minds.
If we go by the evidence, the internet has been a major disaster for humanity that only looks to get worse. The best course would be to shut it down completely and turn it back over to the engineers to rework into something that has security, integrity and privacy protection built in.
Since that won’t happen, the two alternatives would be big liabilities for letting people’s personal data be leaked or stolen, and passing a Right To Remain Offline bill, one that says things like: no insistence on using the internet for your personal business (so, you have the right to a paper bill in postal mail at no extra cost, you can’t be limited to paying online via a “portal”, you have a right to all network connectivity being removed from a vehicle you buy at no charge and without compromising the vehicle’s ability to get you places, you can’t be required to download an app to interact with your town/state/federal services, etc.).

The liabilities for allowing people’s data to be stolen from your systems by hackers should be big ones, like $100,000.00 per person per incident. After the first few companies go bankrupt because of that, we should see businesses being more serious about security. Right now there is really no penalty.

And the above should also apply to AI. You have the right to have any data about yourself not be included in AI training sets, including pictures and DNA. You have the right to always be able to talk to a human rather than an AI when contacting a business (or government).

In terms of internet “content” degradation, that’s really no different than the bot crapification of everything we’ve been living with for a decade or more. AI is just another type of bot generating noise in the communication system. The internet needs an “immune system” of some sort that recognizes bot-generated traffic and neutralizes it.
The open internet, with its lack of an editor-in-chief, was supposed to usher in a golden era of mass participation in information creation, dissemination, and exchange. With the barriers to creating and posting content torn down and the marginal cost of distributing said content plummeting to zero, we have mass participation alright: bots, scammers, extremists, trolls, etc. are in on the action too, and the result is a drastic decline in the qualitative mean of online discourse.
With the avalanche of synthetic AI generated content that’s about to be unleashed on the open web, the already heavy burden of separating fact from fiction that internet users have to carry is about to get heavier, much heavier. AI is the coup de grace for the small vestiges of trust left on the web and anybody expecting that the corporations that own it will adopt an empathetic posture towards users and somehow build firewalls between us and the torrent of lies, fake content and misinformation headed our way is sadly deluded. The gross tonnage of capex dollars being ploughed into moving the capability frontier of this technology forward means the internet will become almost entirely synthetic (with authenticity an anachronism reserved for its niche corners). The pendulum of culture will hopefully swing towards “made by humans” content becoming a differentiating, competitive moat for new vertical platforms that embrace it and make it an explicit value proposition to users.
I see this as the only hope: vertical, human-centric platforms that put authenticity at the centre of the user experience will one day go from niche to mainstream as humans collectively tire of, and repudiate, the AI regime that our tech overlords are foisting upon us.
Not that I want to just call this guy a complete idiot, but… you really do have to be mentally stuck in about 1992 to think that reviving the Congressional Office of Technology Assessment would do anything at all. Does anyone, anywhere really think that a circa 2024 US congressman is going to take the recommendation of an OTA paper over the wishes of a valley CEO?
The internet of today is already pretty terrible and pointless compared to 15 or even 10 years ago. My hope, and increasingly I do think that this is a realistic and plausible scenario, is that the firehose of AI bullshit is going to render the internet sufficiently useless that people are going to get a lot more offline in response. Facebook and Twitter are both currently serving as examples of just that dynamic playing out.
I completely agree with your intro Yves. Marc Andreessen alleged on Joe Rogan’s show that the Biden admin told the big three that it wants to regulate them so that it only has to deal with those three. If the blob wants to finish automating the surveillance and mediation of everything that passes over the internet then such a public/private arrangement could make sense. I don’t trust Andreessen much but maybe a bit more than the Biden admin.
Same thing that happened to other media. Radio and TV started out with the idea of uplifting society with educational and cultural shows. Symphony concerts and jazz. Educational TV (PBS) was used in classrooms before it was bought out by corporations. Before they took money from coal mining interests to promote MTM. That didn’t go over well with viewers like you.
The early internet was open to anybody who could take a day to learn HTML, so millions of people had personal websites. I had three and built some for others. The code got more complicated than most people wanted to mess with. The chances of showing up in a search diminished.
You used to be able to say anything. The troll wars were glorious. As long as nobody got shot anything went. True freedom of speech.
The internet is harder to cage than broadcast media so there should always be independent voices if we can find them.
I use Yandex more often. It seems to help.
I always thought some version of the AI problems could be seen from the early days of the internet, although I’ll confess that the particular form it would eventually take may not have been apparent.
The trouble with the internet was always that, even if fully realized, anybody could write anything for the whole world to see, so to speak. That’s a lot of information of dubious veracity to wade through if anyone was going to make sense of it. Some means of organizing all the information into bite-sized pieces that people could digest was going to be “necessary” for anyone to make any use of it, and that was going to be immensely powerful. I remember thinking that search engines were going to be the key, back in the heyday of Yahoo, then thought differently when Google swiftly supplanted Yahoo. But, in many ways, Google took on the role that I expected of Yahoo. The role of AI is somewhat unexpected (or, at least, the form of influence that AI seems to have taken—but that’s probably not finalized yet by a long shot), but the big outlines should have been apparent from early on, I think.
Anything the establishment can sink its claws into is going to have problems; however, as long as sites, such as Naked Capitalism, are around, I say that the Internet continues to have much to offer.
I think what will happen as online content gets further detached from “reality” isn’t “online content will be viewed as a dumpster fire and abandoned” instead, “reality” will be “abandoned in favor of the online dumpster fire”.
People are heavily addicted to “online content,” to the point at which consuming it isn’t just a habit, it’s a dedicated part of their everyday life. (To most it’s probably in the same category as sleep or eating.) That sort of regularity isn’t done away with because the “utility diminishes” or the “harm increases”. Not even a catastrophic consequence can shake most people loose. So they will just lash out at “reality” simply for not being the thing they are addicted to.
So..
If (and it’s a big if) there is going to be a change it’s going to be in some younger generation that looks at the downsides of the online addiction and says “They are crazy, that is Not going to be me”.
From my understanding, much of the story is in the first three paragraphs of the article. A publicly-funded project was handed over to private capital, and it was inevitable, or at least likely that this would become consolidated and so made worse for the user to the profit of the owner, whichever giant company emerged from the early fights.
One can still avoid the corporate internet, if you’ve memorized or written down the URLs of the few sites worth checking, and have developed the critical literacy skills necessary to draw from the other sites what you can. I credit, partially, the internet with having given many this skillset.
The radical reassessment called for will not happen at the level required to affect industry wide changes, because of the vested interests involved. When people are engaged in a profitable activity, they will justify it in sometimes wild and entertaining ways, but they will justify it, no matter how crazy or inconsistent, in order to keep doing it.
Reminds me of the “Futurama” episode.
“Behold! The Internet!”
“My God! It’s full of… ads!”
https://www.youtube.com/watch?v=3Lp_EaVd0xU
«My preferred remedy is liability. Make AI developers and implementers liable, with treble damages and recovery of legal costs if they could have foreseen the bad outcomes.»

Perhaps, “if a reasonable person would have avoided the bad outcome.” A reasonable poison pill, IMHO, because AI developers cannot articulate a reasonable thought process for their artificial agents. Among other problems, artificial agents have no personal understanding of what is “harm” to a real, human person.
I’ve been on the Net since the ARPANET days circa 1980 and, like the author, started thinking this would be something big in the 90s. For me, the awakening was probably seeing the first NCSA Mosaic browser, and how www sites started multiplying rapidly.
Agree with the others, above, that a revived Congressional Office of Technology Assessment is not going to stop the current juggernaut.
I don’t see “A.I.” as a passing fad. We shake our heads at tech bros pushing this new unwanted tech on hundreds of millions of people, crapifying search, and seemingly-bonkers initiatives such as populating Meta with countless new bots and fake profiles, e.g., a “queer, African-American ‘woman’ residing in the AI space”, but what if, in fact, the Tech Bros HAVE gamed this out and know that most people will just put up with it?
Everyday life has become deeply mediated by different forms of technology, and how many people are really going to cut the cord on all that? For many people, I would guess it will be some version of “meh, I’m okay with Twitter/Reddit/whatever, but Meta is just a bridge too far.”
We are merging at high speed into a P.K. Dick-esque world of fakes and sexy, candy-colored simulations, and I don’t see anything that can stop this.
Agree with you totally Acacia.
>We are merging at high speed into a P.K. Dick-esque world of fakes and sexy, candy-colored simulations
I’ve been wondering what happens when the AI training data (internet scrapes) becomes earlier AI slop. Some ugly recursive idiocy.
Isn’t it saner to allow self-censorship, to credit your people with enough intelligence to see what is right or wrong … say, to not elect an assassin (for another country, no less) as your leader.
Else you must obey what some stranger from wherever says, maybe even Rupert
This web is already too censored, has been since day one
Younger people will never know the magic of connecting to ARPANET with a 2400 baud modem and being able to access universities all over the planet. Heady days when HTML became the language of choice and free Apache servers became available! The promise of free and open communication did not last long as the advertising model drove development, and that is the model we now have. It is too bad we couldn’t have decided to pony up a few bucks/month to pay for the servers and service, but ‘free’ was compelling, and too many wanted to get rich.
New laws won’t matter if they aren’t enforced. Just charge AI companies an interest rate of 10% on their loans and this whole problem would disappear. High interest rates and AI will both lead to high unemployment, so might as well increase interest rates now, so that we will just have high unemployment but no AI.
There is a beauty in Yves’ suggestions I find appealing. I would add jail time, however. I find that money is a cost of doing business, but jail time is a real-time impediment to doing business.
I find myself opting out of Win or MacOS as they start to keylog and record and look to Linux more than hitherto
I tire of AI hype and crappy YouTube derivative videos voiced by AI drones. The fact is the Tech Sector is mature and is now cannibalising its customer base to sell to Regime Overseers like Peter Thiel
I like that Russia has banned Telegram for all government communications.