Yves here. Two things about increased automation frost me. One is its stealth or main purpose as forcing planned obsolescence. So irrespective of the impact on job/labor content, any savings won’t necessarily accrue to users. Two is automation/AI serving as an excuse to shift costs and tasks onto consumers. How many times do customer service systems try to force you to deal with the vendor on the Web instead? I pretty much always refuse. Set up a profile in a payment portal? Nevah! A call to a rep has some odds of being treated as a one-off, while those pesky portals are designed to harvest it.
IMHO, another degradation is increased alienation as the human content of product provision and service is either stripped out or strictly time-limited. Running simple errands is made more pleasant by seeing familiar faces among the store staff, even if you can’t often chat with them.
And other experts have already gone on at much greater length about the limitations of AI, so I am confident readers will have fun chewing over those issues.
By John Feffer. Originally published at TomDispatch
My wife and I were recently driving in Virginia, amazed yet again that the GPS technology on our phones could guide us through a thicket of highways, around road accidents, and toward our precise destination. The artificial intelligence (AI) behind the soothing voice telling us where to turn has replaced passenger-seat navigators, maps, even traffic updates on the radio. How on earth did we survive before this technology arrived in our lives? We survived, of course, but were quite literally lost some of the time.
My reverie was interrupted by a toll booth. It was empty, as were all the other booths at this particular toll plaza. Most cars zipped through with E-Z passes, as one automated device seamlessly communicated with another. Unfortunately, our rental car didn’t have one.
So I prepared to pay by credit card, but the booth lacked a credit-card reader.
Okay, I thought, as I pulled out my wallet, I’ll use cash to cover the $3.25.
As it happened, that booth took only coins and who drives around with 13 quarters in his or her pocket?
I would have liked to ask someone that very question, but I was, of course, surrounded by mute machines. So, I simply drove through the electronic stile, preparing myself for the bill that would arrive in the mail once that plaza’s automated system photographed and traced our license plate.
In a thoroughly mundane fashion, I’d just experienced the age-old conflict between the limiting and liberating sides of technology. The arrowhead that can get you food for dinner might ultimately end up lodged in your own skull. The car that transports you to a beachside holiday contributes to the rising tides — by way of carbon emissions and elevated temperatures — that may someday wash away that very coastal gem of a place. The laptop computer that plugs you into the cyberworld also serves as the conduit through which hackers can steal your identity and zero out your bank account.
In the previous century, technology reached a true watershed moment when humans, harnessing the power of the atom, also acquired the capacity to destroy the entire planet. Now, thanks to AI, technology is hurtling us toward a new inflection point.
Science-fiction writers and technologists have long worried about a future in which robots, achieving sentience, take over the planet. The creation of a machine with human-like intelligence that could someday fool us into believing it’s one of us has often been described, with no small measure of trepidation, as the “singularity.” Respectable scientists like Stephen Hawking have argued that such a singularity will, in fact, mark the “end of the human race.”
This will not be some impossibly remote event like the sun blowing up in a supernova several billion years from now. According to one poll, AI researchers reckon that there’s at least a 50-50 chance that the singularity will occur by 2050. In other words, if pessimists like Hawking are right, it’s odds on that robots will dispatch humanity before the climate crisis does.
Neither the artificial intelligence that powers GPS nor the kind that controlled that frustrating toll plaza has yet attained anything like human-level intelligence — not even close. But in many ways, such dumb robots are already taking over the world. Automation is currently displacing millions of workers, including those former tollbooth operators. “Smart” machines like unmanned aerial vehicles have become an indispensable part of waging war. AI systems are increasingly being deployed to monitor our every move on the Internet, through our phones, and whenever we venture into public space. Algorithms are replacing teaching assistants in the classroom and influencing sentencing in courtrooms. Some of the loneliest among us have already become dependent on robot pets.
As AI capabilities continue to improve, the inescapable political question will become: to what extent can such technologies be curbed and regulated? Yes, the nuclear genie is out of the bottle as are other technologies — biological and chemical — capable of causing mass destruction of a kind previously unimaginable on this planet. With AI, however, that day of singularity is still in the future, even if a rapidly approaching one. It should still be possible, at least theoretically, to control such an outcome before there’s nothing to do but play the whack-a-mole game of non-proliferation after the fact.
As long as humans continue to behave badly on a global scale — war, genocide, planet-threatening carbon emissions — it’s difficult to imagine that anything we create, however intelligent, will act differently. And yet we continue to dream that some deus in machina, a god in the machine, could appear as if by magic to save us from ourselves.
Taming AI?
In the early 1940s, science fiction writer Isaac Asimov formulated his famed three laws of robotics: that robots were not to harm humans, directly or indirectly; that they must obey our commands (unless doing so violates the first law); and that they must safeguard their own existence (unless self-preservation contravenes the first two laws).
Any number of writers have attempted to update Asimov. The latest is legal scholar Frank Pasquale, who has devised four laws to replace Asimov’s three. Since he’s a lawyer not a futurist, Pasquale is more concerned with controlling the robots of today than hypothesizing about the machines of tomorrow. He argues that robots and AI should help professionals, not replace them; that they should not counterfeit humans; that they should never become part of any kind of arms race; and that their creators, controllers, and owners should always be transparent.
Pasquale’s “laws,” however, run counter to the artificial-intelligence trends of our moment. The prevailing AI ethos mirrors what could be considered the prime directive of Silicon Valley: move fast and break things. This philosophy of disruption demands, above all, that technology continuously drive down labor costs and regularly render itself obsolescent.
In the global economy, AI indeed helps certain professionals — like Facebook’s Mark Zuckerberg and Amazon’s Jeff Bezos, who just happen to be among the richest people on the planet — but it’s also replacing millions of us. In the military sphere, automation is driving boots off the ground and eyes into the sky in a coming robotic world of war. And whether it’s Siri, the bots that guide increasingly frustrated callers through automated phone trees, or the AI that checks out Facebook posts, the aim has been to counterfeit human beings — “machines like me,” as Ian McEwan called them in his 2019 novel of that title — while concealing the strings that connect the creation to its creator.
Pasquale wants to apply the brakes on a train that has not only left the station but is no longer under the control of the engine driver. It’s not difficult to imagine where such a runaway phenomenon could end up, and techno-pessimists have taken a perverse delight in describing the resulting cataclysm. In his book Superintelligence, for instance, Nick Bostrom writes about a sandstorm of self-replicating nanorobots that chokes every living thing on the planet — the so-called grey goo problem — and an AI that seizes power by “hijacking political processes.”
Since they would be interested only in self-preservation and replication, not protecting humanity or following its orders, such sentient machines would clearly tear up Asimov’s rulebook. Futurists have leapt into the breach. For instance, Ray Kurzweil, who predicted in his 2005 book The Singularity Is Near that a robot would attain sentience by about 2045, has proposed a “ban on self-replicating physical entities that contain their own codes for self-replication.” Elon Musk, another billionaire industrialist who’s no enemy of innovation, has called AI humanity’s “biggest existential threat” and has come out in favor of a ban on future killer robots.
To prevent the various worst-case scenarios, the European Union has proposed to control AI according to degree of risk. Some products that fall in the EU’s “high risk” category would have to get a kind of Good Housekeeping seal of approval (the Conformité Européenne). AI systems “considered a clear threat to the safety, livelihoods, and rights of people,” on the other hand, would be subject to an outright ban. Such clear-and-present dangers would include, for instance, biometric identification that captures personal data by such means as facial recognition, as well as versions of China’s social credit system where AI helps track individuals and evaluate their overall trustworthiness.
Techno-optimists have predictably lambasted what they consider European overreach. Such controls on AI, they believe, will put a damper on R&D and, if the United States follows suit, allow China to secure an insuperable technological edge in the field. “If the member states of the EU — and their allies across the Atlantic — are serious about competing with China and retaining their power status (as well as the quality of life they provide to their citizens),” writes entrepreneur Sid Mohasseb in Newsweek, “they need to call for a redraft of these regulations, with growth and competition being seen as at least as important as regulation and safety.”
Mohasseb’s concerns are, however, misleading. The regulators he fears so much are, in fact, now playing a game of catch-up. In the economy and on the battlefield, to take just two spheres of human activity, AI has already become indispensable.
The Automation of Globalization
The ongoing Covid-19 pandemic has exposed the fragility of global supply chains. The world economy nearly ground to a halt in 2020 for one major reason: the health of human workers. The spread of infection, the risk of contagion, and the efforts to contain the pandemic all removed workers from the labor force, sometimes temporarily, sometimes permanently. Factories shut down, gaps widened in transportation networks, and shops lost business to online sellers.
A desire to cut labor costs, a major contributor to a product’s price tag, has driven corporations to look for cheaper workers overseas. For such cost-cutters, eliminating workers altogether is an even more beguiling prospect. Well before the pandemic hit, corporations had begun to turn to automation. By 2030, up to 45 million U.S. workers could be displaced by robots. The World Bank estimates that they will eventually replace an astounding 85% of the jobs in Ethiopia, 77% in China, and 72% in Thailand.
The pandemic not only accelerated this trend, but increased economic inequality as well because, at least for now, robots tend to replace the least skilled workers. In a survey conducted by the World Economic Forum, 43% of businesses indicated that they would reduce their workforces through the increased use of technology. “Since the pandemic hit,” reports NBC News,
“food manufacturers ramped up their automation, allowing facilities to maintain output while social distancing. Factories digitized controls on their machines so they could be remotely operated by workers working from home or another location. New sensors were installed that can flag, or predict, failures, allowing teams of inspectors operating on a schedule to be reduced to an as-needed maintenance crew.”
In an ideal world, robots and AI would increasingly take on all the dirty, dangerous, and demeaning jobs globally, freeing humans to do more interesting work. In the real world, however, automation is often making jobs dirtier and more dangerous by, for instance, speeding up the work done by the remaining human labor force. Meanwhile, robots are beginning to encroach on what’s usually thought of as the more interesting kinds of work done by, for example, architects and product designers.
In some cases, AI has even replaced managers. A contract driver for Amazon, Stephen Normandin, discovered that the AI system that monitored his efficiency as a deliveryman also used an automated email to fire him when it decided he wasn’t up to snuff. Jeff Bezos may be stepping down as chief executive of Amazon, but robots are quickly climbing its corporate ladder and could prove at least as ruthless as he’s been, if not more so.
Mobilizing against such a robot replacement army could prove particularly difficult as corporate executives aren’t the only ones putting out the welcome mat. Since fully automated manufacturing in “dark factories” doesn’t require lighting, heating, or a workforce that commutes to the site by car, that kind of production can reduce a country’s carbon footprint — a potentially enticing factor for “green growth” advocates and politicians desperate to meet their Paris climate targets.
It’s possible that sentient robots won’t need to devise ingenious stratagems for taking over the world. Humans may prove all too willing to give semi-intelligent machines the keys to the kingdom.
The New Fog of War
The 2020 war between Armenia and Azerbaijan proved to be unlike any previous military conflict. The two countries had been fighting since the 1980s over a disputed mountain enclave, Nagorno-Karabakh. Following the collapse of the Soviet Union, Armenia proved the clear victor in the conflict that followed in the early 1990s, occupying not only the disputed territory but parts of Azerbaijan as well.
In September 2020, as tensions mounted between the two countries, Armenia was prepared to defend those occupied territories with a well-equipped army of tanks and artillery. Thanks to its fossil-fuel exports, Azerbaijan, however, had been spending considerably more than Armenia on the most modern version of military preparedness. Still, Armenian leaders often touted their army as the best in the region. Indeed, according to the 2020 Global Militarization Index, that country was second only to Israel in terms of its level of militarization.
Yet Azerbaijan was the decisive winner in the 2020 conflict, retaking possession of Nagorno-Karabakh. The reason: automation.
“Azerbaijan used its drone fleet — purchased from Israel and Turkey — to stalk and destroy Armenia’s weapons systems in Nagorno-Karabakh, shattering its defenses and enabling a swift advance,” reported the Washington Post‘s Robyn Dixon. “Armenia found that air defense systems in Nagorno-Karabakh, many of them older Soviet systems, were impossible to defend against drone attacks, and losses quickly piled up.”
Armenian soldiers, notorious for their fierceness, were spooked by the semi-autonomous weapons regularly hovering above them. “The soldiers on the ground knew they could be hit by a drone circling overhead at any time,” noted Mark Sullivan in the business magazine Fast Company. “The drones are so quiet they wouldn’t hear the whir of the propellers until it was too late. And even if the Armenians did manage to shoot down one of the drones, what had they really accomplished? They’d merely destroyed a piece of machinery that would be replaced.”
The United States pioneered the use of drones against various non-state adversaries in its war on terror in Afghanistan, Iraq, Pakistan, Somalia, and elsewhere across the Greater Middle East and Africa. But in its 2020 campaign, Azerbaijan was using the technology to defeat a modern army. Now, every military will feel compelled not only to integrate increasingly powerful AI into its offensive capabilities, but also to defend against the new technology.
To stay ahead of the field, the United States is predictably pouring money into the latest technologies. The new Pentagon budget includes the “largest ever” request for R&D, including a down payment of nearly a billion dollars for AI. As TomDispatch regular Michael Klare has written, the Pentagon has even taken a cue from the business world by beginning to replace its war managers — generals — with a huge, interlinked network of automated systems known as the Joint All-Domain Command-and-Control (JADC2).
The result of any such handover of greater responsibility to machines will be the creation of what mathematician Cathy O’Neil calls “weapons of math destruction.” In the global economy, AI is already replacing humans up and down the chain of production. In the world of war, AI could in the end annihilate people altogether, whether thanks to human design or computer error.
After all, during the Cold War, only last-minute interventions by individuals on both sides ensured that nuclear “missile attacks” detected by Soviet and American computers — which turned out to be birds, unusual weather, or computer glitches — didn’t precipitate an all-out nuclear war. Take the human being out of the chain of command and machines could carry out such a genocide all by themselves.
And the fault, dear reader, would lie not in our robots but in ourselves.
Robots of Last Resort
In my new novel Songlands, humanity faces a terrible set of choices in 2052. Having failed to control carbon emissions for several decades, the world is at the point of no return, too late for conventional policy fixes. The only thing left is a scientific Hail Mary pass, an experiment in geoengineering that could fail or, worse, have terrible unintended consequences. The AI responsible for ensuring the success of the experiment may or may not be trustworthy. My dystopia, like so many others, is really about a narrowing of options and a whittling away of hope, which is our current trajectory.
And yet, we still have choices. We could radically shift toward clean energy and marshal resources for the whole world, not just its wealthier portions, to make the leap together. We could impose sensible regulations on artificial intelligence. We could debate the details of such programs in democratic societies and in participatory multilateral venues.
Or, throwing up our hands because of our unbridgeable political differences, we could wait for a post-Trumpian savior to bail us out. Techno-optimists hold out hope that automation will set us free and save the planet. Laissez-faire enthusiasts continue to believe that the invisible hand of the market will mysteriously direct capital toward planet-saving innovations instead of SUVs and plastic trinkets.
These are illusions. As I write in Songlands, we have always hoped for someone or something to save us: “God, a dictator, technology. For better or worse, the only answer to our cries for help is an echo.”
In the end, robots won’t save us. That’s one piece of work that can’t be outsourced or automated. It’s a job that only we ourselves can do.
Many jobs will be replaced by automation. Changes are coming, for sure. But it’s important to understand that while it may be called “AI” by people like the author of this article, it’s not intelligence, it’s not consciousness, and it can’t think. AI researchers may not like to hear it, but the philosopher Hubert Dreyfus was correct back in the 1970s — and is still correct today — when he argued that computers cannot think. There have been no major breakthroughs in AI to change this picture since then, sorry. Not only are we seeing climb-downs on self-driving automotive tech (Musk now tells us he’s figured out that — duh — it’s difficult), but we are still dealing with garbage OSes like Microsoft Windows. Yes, hardware is getting faster, but a garbage OS on faster hardware is still a garbage OS.
It’s worth bearing in mind that Rodney Brooks, former director of MIT’s AI research lab, has noted that thousands of AI researchers have been working on these problems for over sixty years, and “we are not at any sudden inflection point.” So, whatever people like Ray Kurzweil said (ahem, 16 years ago), the so-called singularity is NOT near.
I think compounding plays a big role. Incremental changes accumulate; it’s not necessarily a “breakthrough” that’s necessary for fundamental change. Think about speech recognition and synthesis – remember how clunky and insipid it was? It’s come a very long way. What about language translation? Same thing. Little incrementals added up to something pretty impressive.
Mars rovers. They have to be nearly autonomous, and they have survival rates – in a very inhospitable, not-well-understood environment – of years.
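To put rough numbers on the compounding point (the figures below are purely illustrative, not from any study): suppose a system’s error rate falls a modest 5% a year.

```python
# Purely illustrative: how small, steady gains compound.
# Assume a system's error rate falls 5% per year (a made-up figure).
error = 100.0  # starting error rate, as a percentage of the baseline

for year in range(1, 21):
    error *= 0.95  # 5% relative improvement each year
    if year in (5, 10, 20):
        print(f"after {year:2d} years: {error:4.1f}% of the original error")

# after  5 years: 77.4% of the original error
# after 10 years: 59.9% of the original error
# after 20 years: 35.8% of the original error
```

No single year looks like a breakthrough, yet after two decades nearly two-thirds of the error is gone.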
I expect machines to achieve functional parity with humans in this century. The question of “are they the same as us?” may miss the point. We are actively, aggressively, quickly building the successor organism to us. “Capitalists will sell you the rope to hang themselves with” is replaced by “humans will build the entity that replaces themselves”.
Humans have some really significant brain-architecture limitations which machines can be designed to overcome or simply not have. Machines (software, in this case) are already equipped with self-adapting capacities, and you can safely bet those self-adapting capacities will expand rapidly. In my view, the most direct route to the “singularity” is through self-adapting mechanisms (the goal of self-adapting, the sensing of the error, and the mechanisms (algorithms) to correct the error).
If you couple self-adapting ability to access to big data…it’s off to the races. And we’re already there, right? AI is closely coupled to big data. Amazon, Facebook, the Security Apparatus…et al.
So, I think it’s going to happen, and then the question becomes “what role can we play in shaping the outcomes?”
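In skeleton form, the self-adapting loop I mean is just those three pieces: a goal, error-sensing, and correction. A minimal sketch (the function names and the toy example are invented, not any real system):

```python
# A toy sense-error-correct loop: the three pieces named above
# (a goal, a way to sense the error, a mechanism to reduce it).
# Everything here is invented for illustration; no real system is this simple.

def adapt(goal, sense, correct, state, max_steps=100, tolerance=0.01):
    """Repeatedly measure the error against the goal and correct it."""
    for _ in range(max_steps):
        error = sense(state, goal)      # how far off are we?
        if abs(error) < tolerance:      # close enough: stop adapting
            break
        state = correct(state, error)   # adjust and try again
    return state

# Example: drive a single number toward a target value.
result = adapt(
    goal=10.0,
    sense=lambda s, g: s - g,           # signed error
    correct=lambda s, e: s - 0.5 * e,   # step halfway back toward the goal
    state=0.0,
)
print(round(result, 3))  # converges near 10.0
```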
A while back I sat down with my clipboard, and asked “Tom, if you were to design humanity’s successor, what human traits would you propagate forward, and which ones would you simply design away because of limited utility?”
I filled in the grid, then turned it around, and asked myself, “OK, Tom, given what you just learned, how would you re-design your own brain/mind to overcome the limitations you just identified?”
The point is, things are going to change. We’re going to either be changed from outside-in, or we can change inside-out (e.g. adapt).
Evidently, you haven’t noticed that speech recognition and language translation software still suck.
P.S.
You haven’t given an example here, but to my knowledge this does not exist in a serious way for any software system. If “self-adapting” systems existed, applications and OSes could adapt to changing conditions and fix their own bugs. They can’t, and they don’t. Software must be maintained by engineers. Without engineers, bugs don’t get fixed, new bugs appear, and eventually the software simply dies. Try running Photoshop for a Motorola Apple computer on an Intel Mac. It doesn’t work and it never will. However much you may want it to work, Adobe will never tell their engineers to fix it. It’s dead. In fact, software is always slowly rotting and failing as conditions (its environment) change. It’s kept alive by a mind-boggling number of very dedicated and knowledgeable engineers. A neural net may be said to “adapt” to its input, but these must be trained, training can prang, and if the neural net app gets borked you’re back waiting to talk with tech support, and tech support waits to talk with engineers, who sometimes deign to respond. That’s how it works.
The behavior you describe has been studied at great length by computer science and there is no solution at the scale of a system. If there were, our experience with computers, tech support, upgrades, failures, catastrophic data loss, etc. would be vastly different.
Here are a few examples:
a. Routers and internet switching. It’s very sophisticated, adaptive, try-and-re-route software. It handles a large number of potential failures, and the “handling” happens automatically, in a distributed manner (many players involved), and it happens fast, in near real-time.
b. Airline scheduling. That is one heck of a dynamic model, full of sudden surprises with lots of up- and down-stream dependencies. Imagine the amount of re-calcs and try-to-optimize-the-reschedules when a hub airport gets whacked with a thunderstorm and all flights in and out for the next 2 hrs are canceled.
c. Autonomous driving software. For it to work _at all_ requires an astonishing amount of adaptive, dynamic problem-solving. I’m still amazed that they can almost do that. When you say “ah, but it’s not there yet!” you’re glossing over some amazing achievements in a very short period of time.
The example you provided of getting Photoshop to work on a different platform (e.g., Motorola vs. Intel hardware) is trivial. Linux – which is quite a bit more complex than Photoshop – works on several different hardware platforms, from IBM mainframes all the way down to a single chip the size of your thumbnail.
d. Java – the language – owes its enormous success to the fact that Java code can run unchanged on virtually any platform that the VM (Java’s virtual machine) has been ported to. Just about every OS out there has a VM for it.
So, portability is a problem that was solved long ago. Self-adapting is readily done within the designer’s expectations about failure scenarios.
Making code self-adaptive means building in sensory mechanisms the code can use to find out if plan A didn’t work, and the code is equipped with either a library of alternatives, or an algorithm to generate and then test alternatives.
So, adaptive sw is operational now.
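In skeleton form, that plan-A-then-alternatives pattern looks something like this (a minimal sketch with invented plan names; failure “sensing” here is just an exception):

```python
# Sketch of "plan A, then a library of alternatives": try each plan,
# sense failure, and fall back to the next. Plan names are invented.

def try_plans(plans, task):
    """Attempt each plan in order until one reports success."""
    for name, plan in plans:
        try:
            return name, plan(task)     # this plan worked
        except RuntimeError:            # sensing: the plan signaled failure
            continue                    # fall back to the next alternative
    raise RuntimeError("all plans exhausted")

def plan_a(task):
    raise RuntimeError("link down")     # simulate plan A failing

def plan_b(task):
    return f"handled {task!r} via the backup route"

print(try_plans([("A", plan_a), ("B", plan_b)], "deliver packet"))
# ('B', "handled 'deliver packet' via the backup route")
```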
a. Routers don’t “adapt”, sorry. Just yesterday I helped a friend set up Internet in a new house. The router from the previous apartment had to be re-configured for the new ISP, but it didn’t cooperate, and finally we just punted, removed the router, and set up PPPoE from the computer. The router configuration will have to be sorted later.
b. If the Airline scheduling system malfunctions due to an OS upgrade or database failure, tech support and engineers must be called. Take away all the support staff and see how long it runs. “Self-adapting”? Hardly.
c. Autonomous driving software. Indeed, not there yet, and it’s prone to all the same problems I’ve already described.
d. Java. Cross-platform portability is not about “self-adapting” (and is considerably less portable than you have been led to believe). It’s just a virtual machine architecture, which has been around since the 1960s. You’re straying very far from “self-adapting” here, by the way.
Not necessary to go on, as you seem to be hand-waving away the fundamental issue: applications and OSes cannot fix their own bugs or faults, or anticipate conditions for which they were not designed. Take away the engineers and entropy kicks in immediately. Systems can be designed for resiliency, but that is not the same as self-adapting. Ergo, none of this is “self-adapting” or by any stretch of the imagination self-sustaining.
Routers most certainly do adapt. Look up Spanning Tree Protocol, and that’s a very primitive adaptation/sensing protocol. There are several much more robust and adaptive ones at the core of the internet. What you’re describing is the fact that you don’t know enough to effectively configure a router, and – here it is – the router price of $30 doesn’t cover the cost of the software required to step you through all the decisions that must be made to accurately configure it. Do you suppose for a second that all those steps are unknown? Not.
response to a) Just because it can’t adapt to all failure modes, doesn’t mean it couldn’t (if it was designed to do so). You’re using momentary gotchas – which a software component can readily be designed to manage – to refute a system and an evolutionary path. Fallacy.
Response to c) is the same as b): same fallacious argument, and it’s being refuted before your eyes. Every year, problems that were big gotchas yesterday are perfunctory today.
Response to d): You are the one who said software can’t adapt to the platform. I gave several examples of it doing exactly that. If Photoshop can’t do cross-platform, then that’s an architectural failing, not a limitation of computer science.
Who’s doing the hand-waving? I’ve given you concrete examples of software doing very sophisticated real-time adaptation, and you’ve not refuted the fact that they are. You simply say “it’s not perfect yet, so it can’t get there”. That’s a fallacy.
Please go on with your refutation, I’m interested to hear it. Let’s offer the rest of the gang some food for thought.
Next time you boot up your linux system, go to the logs (probably in /var/log/system.log) that detail the enormous amount of hardware-sensing and adaptive software provisioning (on the fly, in real-time) wherein the Linux OS senses what capacities (processor and associated input and output capabilities) the machine offers, and installs all the requisite components to fully communicate and operate the (potentially) thousands of different devices that may be present.
That’s like solving a 10-dimension-by-10-dimension Rubik’s Cube problem in about 5 seconds (how long it takes for my Linux to boot). Take a look at that log and get back to me and tell me that system isn’t self-adaptive.
There is no analog to that degree of self-adapting provisioning that occurs (for example) in the human mind and body. This is a feat vastly beyond what humans can do. And recall that this self-definition software only has about 40 years of evolution under its belt.
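In skeleton form, that provisioning is probe-then-dispatch. A toy sketch, with invented device IDs and driver names (the real kernel’s enumeration is vastly more involved):

```python
# Toy probe-then-load device handling. Real bus enumeration and module
# loading are vastly more involved; these IDs and names are invented.

DRIVER_TABLE = {
    "usb:keyboard": "hid_driver",
    "usb:storage": "mass_storage_driver",
    "pci:ethernet": "net_driver",
}

def probe_bus():
    """Pretend to enumerate whatever hardware happens to be attached."""
    return ["usb:keyboard", "pci:ethernet", "usb:unknown_gadget"]

for device in probe_bus():
    driver = DRIVER_TABLE.get(device)
    if driver:
        print(f"{device}: loading {driver}")
    else:
        print(f"{device}: no driver found, device left disabled")
```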
When I asked for an example, you offered routers. So, router tech points the way to the singularity? This sounds a bit like trying to launch a space station from your backyard. When I offered a concrete example of a router not adapting to changes in the environment, you very humorously try to shift the blame onto me for failing to configure it properly. I’ve been working in the IT business for decades, and have a fair bit of specialized knowledge about networks, but my point is simple: if a router were really self-adapting as you say, why should I need to re-configure it at all? Unless you’re moving the goalposts there.
As for Photoshop, you don’t acknowledge that it was written in C++, not Java (I’m not sure why you’re trying to make that move, unless you don’t really understand the very good reasons Java was not chosen). You also don’t acknowledge that there are many issues with cross-platform portability that are manifest at the level of APIs and system libraries, not resolved by the virtual machine. Would you stand by the claim that porting Photoshop from C++ to Java is “trivial”? Even if you convinced a team of people to port it to Java, how is that “self-adapting”?
More broadly, you seem to be conflating design for resiliency and start-up tests of interfaces with software that can fix its own bugs. The latter would be a real “self-adapting” system. Design for resiliency is not AI, nor is it going to get us there. If you have an example of an application which actually fixed a defective algorithm in its own source code, recompiled itself without errors, without any human intervention, resolved the bug and resumed service, I’d be interested to hear about it. That would be a real self-adapting system. If you cannot provide such an example, I would suggest looking deeper into the history of computer engineering and thinking a little more deeply about why you’re unable to find it.
In the end, I just feel sort of sad about the argument that you’re making. It’s fairly common in computer science and by now kind of tedious. Again: thousands of researchers working for sixty years on AI, and we’re not approaching any inflection point. This is the verdict of the ex-director of the MIT AI lab. The sad part is that people have such a low estimation of human beings — that we are just glorified machines — to carry on this hubris about the so-called singularity.
Acacia: thanks for the thoughtful reply. I am enjoying this, and I hope you are too.
Router protocols actually are a good example of self-adapting. The protocols can sense problems, generate new paths to test, and talk among themselves to notify one another current routes. This is one of the big reasons the internet is so robust.
But you seem to say “that’s not adaptive! It didn’t fix its own bugs!”
Most problems don’t require a code-fix; they require new (adaptive) behavior. I acknowledge that having a system executive notice out-of-bounds behavior (sense a bug) and write the code-fix is a wonderful goal, but that’s not a necessary goal in order to elicit adaptive behavior from a code body.
I do acknowledge your point that I can’t identify a piece of code that can sense its own bugs, write and successfully compile and test the fix, and do a dynamic code update. I cannot name a product that does such a thing.
:)
Now that we’ve got that cleared up, here’s your gotcha:
I think you’re setting an arbitrary and maybe not entirely valid threshold for “adaptive behavior”.
I apologize for blaming you for the router config problem you faced – maybe that was unfair, but possibly not.
To wit, since you have a background in nw admin, you know that there are configuration aspects that you either have to provide to the router, or the router has to take a very, very long time to discover for itself, since there are thousands or millions of permutations to discover (e.g. host IP address, or gateway address). How could a router know which of the potentially thousands of local addresses is the gateway? Can the router know whether or not there’s a firewall present whose function is to silently drop the packets that discovery/scanning protocols issue? How can the router know if the address you provided for the primary interface is accurate? It can’t.
A human would not be able to configure a router without knowing this info, so is it reasonable to expect a router to be able to divine it?
Re: acknowledging Photoshop was written in C++. Why is that relevant? Your opening gambit was “if software is adaptive, how come I have to have legions of programmers to get my app to run on Motorola vs. Intel processors?” Remember that? So, I gave you an example of a code body (Linux) which is primarily C and C++, and I said it’s been successfully designed to work on several different HW architectures (instruction sets, data representation, etc.). That addresses the portability issue. The example of Java is relevant because it’s another complex system whose design enables it to function in different HW environments.
Re: why Linux’s startup routine is “adaptive behavior”. When it starts, Linux knows nothing about how the machine is provisioned, and there are very, very many permutations possible. Given the plethora of CPU architectures, memory and IO busses (USB, FireWire, SCSI, HDMI, etc.) and all the thousands of devices that may or may not be installed…the number of permutations is well into the tens of thousands or more. And yet, in just a few seconds, Linux accurately senses what’s present, loads and then integrates the right drivers, tells the user if any problems were encountered, and if possible, starts the system. All with virtually no up-front config info. That’s a great example of adaptive behavior. I wish I could adapt myself to my environment anywhere near as fast and well.
As for “not approaching an inflection point” … why are you insisting on “breakthroughs” when you have such a wealth of incremental improvements? Given the quality of your responses, I know you know this. Why are you (seemingly) dismissing incrementalism?
Thx again for great repartee.
fwiw, I have. Thanks both for your contributions. I have a small one of my own to make re: this point:
No, but here’s a (the?) key difference: a human can find those things out. Indeed, it’s probably not for nothing that such information gathering, at the scale of nation states, is referred to as intelligence.
Wow, here was I thinking that the computer BIOS or boot PROM performing a system bus probe on boot and then passing that hardware tree on to the Linux kernel, which then loads modules for any hardware it recognises, was just a big if-then-else table, but apparently not: it’s a 10-dimensional Rubik’s Cube problem/solution and an example of a self-adaptive system…. LOL /S
The first step towards self-adapting ability would be self-diagnosing ability, on the plausible premise that you can’t adapt to that which you can’t perceive and define. Tell me the machine on the market today that, when it goes on the fritz, can tell the user why.
The robots can safely be called “intelligent” when they stop doing the work and announce, “Hey, I’m not a robot!”
Wow, is there a new machine-learning-based GPS system, or does the author confuse all automation with ‘Artificial Intelligence’? It certainly has that whiff of criti-hype: credulous handwaving mixing a misunderstanding of current technology with vague and ominous predictions taken out of context to spice it up.
So I guess I’d like to know where the AI was in the automated toll booth, the GPS (other than maybe voice synthesis), and drone attacks.
Yeah, pretty much every example was not AI.
The process to grab the car’s license plate could use some sort of image recognition using machine learning, which might be about it. But then they might be paying someone minimum wage to look at the photos and type in the license plates. (Getting the state right might be hard for ML. Plus if someone has a bike rack?)
The Global Militarization Index as a source is very much so-so. Armenia lacks an air force and a worthwhile air defense force. I think they have about 10 older fighters and 4 new ones. The problem with the newer ones, for which they paid $100 million, is that they do not have missiles to put on them.
So far, drones are only worthwhile in air space that is all but uncontested. The Turkish F-16s likely flown in support of Azerbaijan hurt Armenia too. (Twitter has some photos of them at an Azerbaijani air base during the war.)
But the overall point, that humans are slowly being replaced, is true. And I liked the point about a friendly face at the store. In the not too distant future, will only the rich be able to afford humans at the stores they shop at? Will the rest of us be shopping at the automats?
The rich send people to go shopping for them. Concierges!
There is no market for public store attendants for rich people.
They all seemed like AI to me. Does it take no intelligence to plan a route to a destination? Or note which obstacles are in the way and re-route? Or give an updated ETA continuously? Humans need to train to do these things well in the navigator seat. It requires intelligence. If a computer does it, then it’s using artificial intelligence.
I agree with the other commentators who say the incremental change is gathering noticeable steam. I’m very impressed with language translation, where I know enough to gauge quality. AlphaGo Zero smashed the computer that smashed the world Go champion. Just because they’re not general-purpose intelligences doesn’t mean they’re not very intelligent.
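For what it’s worth, the route planning and re-routing in question is classical shortest-path search. A minimal sketch over an invented road graph (real navigation systems are enormously more elaborate):

```python
import heapq

# Minimal Dijkstra shortest-path search over a toy road graph: the kind
# of computation behind GPS routing. Roads and travel times are invented.

def shortest_path(graph, start, goal, blocked=frozenset()):
    """Return (total_minutes, path), skipping any blocked road segments."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, []):
            if (node, nxt) not in blocked:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None  # no route at all

roads = {
    "home": [("highway", 5), ("back_road", 9)],
    "highway": [("office", 10)],
    "back_road": [("office", 12)],
}

print(shortest_path(roads, "home", "office"))
# (15, ['home', 'highway', 'office'])

# An accident closes the on-ramp: the same search re-routes around it.
print(shortest_path(roads, "home", "office", blocked={("home", "highway")}))
# (21, ['home', 'back_road', 'office'])
```

Call it intelligence or call it an algorithm; the re-route is the same search either way.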
The examples you give are just algorithms. No intelligence required: the computer that performs these things is just following a script of instructions, i.e., an application. If conditions appear that were not anticipated by the application developer (and this happens all the time), all bets are off. It is true that humans need training to do these things, just as we needed education to learn algebra or calculus, but this training is about providing us with the mental discipline to behave algorithmically, as it were. Your intelligence makes it possible to learn algebra and solve equations (which is something like following an algorithm, e.g., polynomials get handled in one way, factorials in another, etc.), but also to do many other things, too.
As for applications that play checkers, Go, etc. — these are not intelligent. These are just applications which can respond to a certain very specific set of conditions, under rules (or meta-rules) defined by the application developer. The application that wins at checkers is not thinking. It does one thing well, only. It cannot also drive your car, or play chess, read a book, or write a poem. It has no intelligence or consciousness.
This whole problem was already studied decades ago by philosophers, and mostly they have moved on. The question now is ethics, because many working on AI still haven’t gotten the memo and their creations have potentially harmful consequences (e.g., you get hit and killed by a putatively “AI-equipped” car whose driver is sleeping in the back seat). If you are curious, you could start with Dreyfus. If you read his work, you’ll already be ahead of 95% of AI researchers in understanding this — and that should tell you a lot right there about “the state of the art”.
What do you think intelligence is, if not algorithms?
Reasoning is a well-defined activity. We have known for a long time how to do it.
The thing that makes the human mind so wonderful is the flexible, adaptive way it builds and maintains its database. The associations, the way they’re built, reinforced, recalled, forgotten and overridden…it’s wonderful.
We are going to be able to write software that does those things. It may take some years, but it’s going to happen. We’re doing it in primitive (and not so primitive) ways now. It’s going to get better fast.
And when it does, the designer is going to have a choice to make: “do I build it to mimic the human mind…with its well-known limitations…or do I design it to do things differently?”
People used to say “you can’t predict the weather”. Well, we’re predicting it, sometimes a full 10 days out, and we’re getting pretty good at it. Iterative development conquers some mighty big obstacles.
Tom, I like your comments. I think that the AI breakthrough to a General AI will come from understanding our own brain. Neuroscience is not well developed. One expert said that if the knowledge base of the brain is a mile long, we know about 3 feet. Don’t care if that’s accurate, as the larger point is correct. One of the brain areas we do understand well is vision. This knowledge has helped make AI vision advance. We need to develop a scientific understanding of consciousness to create a General AI. As for the singularity, we are trending towards it. This century seems reasonable. Neural networks were a step change in AI technology. There will be more to come, but they come erratically with big time intervals in between. I am releasing a first novel on AI this fall, so I’ve been down this rabbit hole for years. It’s called A Sliver of Soul. Big clues in the title about what science has to understand and reproduce. Kudos to all the commentators. Hard to find this level of discussion elsewhere. Stay safe, everyone.
Don’t forget to let us know when the book comes out. I’ll buy a copy just to say “I met the guy who wrote this!”
:)
(Well, sorta met him. Briefly conversed with him, on a blog. That’s what passes for human contact these days.)
In France we are already shopping at the automats. Though they still need humans to restock the shelves. Only pharmaceutical companies have enough money to buy robots to work in warehouses. They are expensive.
But it is true that now some middle-class jobs are being threatened. Last month I took a photo of an ad on a public transport bus here in Geneva that said: “Our real estate brokers cannot be replaced by an algorithm”. They would not pay for the advert if they weren’t feeling a danger, would they?
As for the aforementioned GPS, it knows the traffic situation and roadworks and such thanks to all other users feeding it real-time data. I don’t know if it counts as an AI.
It’s also important to note that even things that are currently called “AI” are anything but. They are merely pattern-finding (and thus, in some cases, prediction) algorithms and nothing even remotely close to even the intelligence of a cockroach.
That’s not to say we can’t use such idiot boxes to destroy ourselves. I have no idea what that JADC2 Pentagon system actually does but if generals think it can think for them and build a doomsday machine out of it well then…
Having worked in ML for a while, it is fairly clear to me that the methods involved merely appear to work, as far as anyone can tell — until they don’t. They are in this way not all that different from humans, just more limited.
Like people, blog writers especially, we can expect such tools to derail spectacularly when they are trying to draw conclusions in areas out of their expertise.
Agree. Are we even remotely close to that thing called Artificial Intelligence? From what I read, no.
And economist Dean Baker has written that the fears over automation replacing humans are greatly exaggerated. After all, in a world teeming with people, low-skilled workers are easy to come by and both more versatile and less expensive than robots. Since companies are in the business of making money, it tends to be tycoons like self-described futurist and rocket boy Bezos who are pushing the robots to keep his marginally profitable mail-order business afloat.
Of course I’m prejudiced because I love GPS, love self checkout, love all the gadgets on my electronics heavy car. To be sure such nerdy enthusiasm isn’t common, but we should see machines for what they are–mere tools like the caveman’s flint ax. If you are a fan of 2001 you could say next stop HAL 9000. But that’s just a movie.
Here’s suggesting it’s us we need to fix, not the machines. In such a crowded world no room for sociopaths.
Bezos is using AI to push workers as fast as they can go and then uses AI to fire them when they become exhausted.
An annual employee turnover rate of 150% (implying the equivalent of changing his entire warehouse workforce every eight months) is absurd.
Walmart is at 44% for comparison, and looks to be so far behind they will never catch up. To compete, they will have to turn the screws tighter and whip harder. AI to the rescue.
Yves here. Two things about increased automation frost me. One is its stealth or main purpose as forcing planned obsolescence. So irrespective of the impact on job/labor content, any savings won’t necessarily accrue to users. Two is automation/AI serving as an excuse to shift costs and tasks onto consumers. How many times do customer service systems try to force you to deal with the vendor on the Web instead? I pretty much always refuse. Set up a profile in a payment portal? Nevah! A call to a rep has some odds of being treated as a one-off, while those pesky portals are designed to harvest it.
==============================================
I COULD NOT POSSIBLY AGREE MORE
What frosts me is that BESIDES obviously trying to screw you, the obvious act of pretending that such systems are better drives me to apoplexy.
What gets me are these voice recognition systems that can’t recognize when I say “I want to take your CEO and rip his entrails out through his lower orifice and rip his heart and lungs out through his pie hole”
I joke, I joke.
Seriously, I say “I have a complaint” or “I have a problem” or “my checking account is wrong” or “this credit entry is incorrect” or “I want to end this service” and about a zillion other things, and the AI just answers, “I didn’t understand what you said, please try again.” Eventually, the AI just hangs up.
I can’t help myself, but: It is hard to get an AI to understand something when it is programmed not to understand something
Think of AI menus as goal-seeking processes, or as heat-seeking missiles. Is there a payoff, or a target to hit? If not, recalibrate, and if odds of success fall below a threshold, then abort the mission. In AI World, we are little functions to maximize or minimize.
Even with seemingly unlimited bandwidth for calls or for website connections to the Help line, there is a cost to minimize and a payoff to pursue even if that payoff
makes allowances for a tolerable percentage of pissed-off consumers. Escalation is costly, so it is priced accordingly.

Nations like China are certainly going all in on robots and AI, but I wonder if the pressure behind this is actually demographics. In China, and in fact the West, the populations are getting older. The ratio of young, healthy workers to old, retired workers is getting badly skewed, and there is no getting away from it. The number of abandoned towns in nations like Japan and Italy is a sign of this trend happening here and now. This being the case, would it not make sense to invest in robots and AI so that a smaller workforce would still be able to produce just as much, if not more, for the future economy? And a solid economy would be needed to support those older workers until they left the scene. Of course it does not explain everything described in this article, but it would have to be a major driver.
From my vantage point, the primary motive to create/implement automation is to capture wealth. Rent streams. To get a bigger share of the pie.
The aging of the population is actually fortunate, because automation is doing away with the need for workers way faster than the population is ageing out. The US economy is poster-child for this: where’s the job growth? It’s mostly in the (sorry for the language) bullshit-jobs category.
Technology’s change rate has vastly outrun our (human) adaptive rate, and this article shows many of the artifacts of that mismatch, but doesn’t actually state the underlying problem: rate of change exceeds rate of adaptation.
Another goal is social control, which is separate from wealth capture.
Just tried to post a wardrobe on the nightmare horror that Ebay UK has become.
Enter “wardrobe”, and you are presented a whole list of tick boxes in various categories. They seem to be derived from wardrobes previously sold on Ebay, with such useful colors as “Mango”.
Brown? Nope.
If you cannot tick a box in each of the categories, the algorithm will not let you post your item.
Good luck if you’re trying to sell anything unique.
Same here — I loathe the new, over-formatted Ebay, quel hassle.
In recent frustration re-listing some items, I’ve sent their algo (!) several itemized hate ‘feedbacks’. I’m relatively low-tech, and have been navigating that site for years with little friction — no more.
Nobody laments the displacement of ditch-diggers by backhoes, but suddenly there is sympathy for displaced checkout cashiers and toll takers? Freedom from onerous work used to be a utopian goal. How is it now a deprivation of human dignity to eliminate a mindless, repetitive job? The machines are coming for all human labor, even highly creative and artistic work. We will all eventually be free to work only as a means of self-actualization, and this will be a good thing.
I estimate that less than 10% of workers truly enjoy their work. To say that the remaining 90% should not be deprived of their work activity is nonsense. Advancing technology will enable a distribution of wealth that enables widespread leisure on an historically unprecedented scale. There will be a guaranteed basic income as soon as the children of the middle class cannot find paid work, and this universal income will effectively end economically coerced unpleasant labor. Unfortunately, dystopian fantasies are in vogue, so we will have to watch entertaining depictions of killer robots and satanic billionaire overlords for a few more decades.
I believe we could have had a distribution of wealth 100 years ago which would have produced a fairly utopian way of life for most people. We did not because people, especially those who control the disposition and use of wealth, did not want it. This is certainly true today. In the future, then, what will motivate those in control to bring it into being, with automation, AI, or whatever? Without some kind of fundamental cultural change, the new technologies will be put to the same uses as the old, will they not?
The plutocrats of the past needed an army of cheap laborers. Since the robots will be even cheaper and more productive, there is no need to force people to work. It will be cheaper to put them on GBI (guaranteed basic income) than to police the angry poor.
HH, you describe a future of “bread and circuses” or maybe countries like those around the Persian Gulf where the rulers provide the citizens with a decent lifestyle in exchange for staying calm. Maybe go back to the plebs selling the one thing they have, their votes, to the rich in exchange for basic income. There are plenty of dystopias written about that kind of future.
A variation on the manorial system, with a different lord of the manor. In other news, the corvée takes on new dimensions.
“dystopian fantasies are in vogue”
I recall during the early stages of the pandemic one of our humble bloggers here posited that we could be without running water and electricity as covid wiped out the utility employees.
We were lucky that this virus wasn’t very deadly. I don’t go in for dystopian fantasies, but it’s not hard to imagine something with the morbidity of smallpox or worse being released from one of these insane bio “research” labs and having no one to maintain all the nuclear plants.
UBI is an orphanage from which no one is ever adopted. You either win the sperm lottery or some AI decides how to warehouse you. Self-actualization in the modern era is capital intensive. In Bezos-ese or Musk-ish, you need to reach escape velocity. This means public health systems, free college, equitable access to lending facilities, affordable housing in every part of the country, a police force that isn’t beholden to the capital class, a tax system that isn’t regressive, a society that values humanities as much as they do STEM, and so on.
AI and automation have been promising that narrative for a looooong, long time. The elites can’t even materialize price drops from economies of scale. One of the primary justifications for mergers is “lower consumer prices”. I can think of absolutely no examples when this happened. When health insurance companies merge, do rates go down due to “redundancy eliminations”? Does broadband magically improve in rural areas because this now ginormous entity has resources the small one they bought did not?
IP rights are also a major hurdle and nothing makes the PMC so gleeful as litigating and enforcing IP. “Big data” is the privatization of your own life for the purpose of making someone else money. Worse, handing over your autonomy in the guise of things like sentencing algorithms. Hard to actualize oneself when one’s self is constrained by machines. That’s the opposite of your liberation manifesto.
“I estimate”. Ok. Based on what evidence? Is there some methodologically sound primary source you’re referencing? Or is it a half-assed recollection of facial expressions from people you’ve encountered in their workplaces? “By my count, 10% of people I see at businesses fail to smile often enough. So clearly they hate their jobs.”
According to a recent Gartner study of employees from across the U.S., only 13 percent of employees are largely satisfied with their work experiences. Additionally, nearly half (46 percent of employees) are largely dissatisfied with their work experiences. And when employees aren’t satisfied with their current roles, they’re likely to start looking for new jobs elsewhere.
The current economic inequality in the U.S. is unsustainable, and a new equilibrium will spread the wealth generated by steadily advancing technology more broadly. Although neo-feudalism is having a good run in dystopian forecasting circles, it doesn’t fit an information-intensive world. The serfs can’t be kept down once they have seen the inside of the palace.
Unsustainable perhaps, but I’m reminded of the quote about markets remaining irrational longer than you can remain solvent.
Monopsony means your next bullshit job will be the same or worse than your current bullshit job. This is how the serfs are kept down in their place.
This was, of course, Keynes’ prediction for capitalism itself about 100 years ago: the 20-hour work week. Indeed, what you outline probably would be nice in some way, if you take the generalities and assumptions at face value. Just assuming that it will happen because of the machines strikes me as stupendously naïve. Keynes’ excuse, presumably, is that this was all a bit new back then.
“Running simple errands is made more pleasant by seeing familiar faces among the store staff, even if you can’t often chat with them.”
I agree completely, Yves. With so much shut down during Covid, I really missed those often brief but important-to-me human interactions.
Now on to the article. As a sexagenarian, I suppose I am hard-wired by age to be skeptical of robotics and AI. But even if sixty years of living in the past have made me incapable of seeing or predicting the future, I’ve got my reasons for my skepticism:
“Science-fiction writers and technologists have long worried about a future in which robots, achieving sentience, take over the planet.”
Forget about sentience. What about mobility? The current masters of robotics can’t even design a bipedal robot that can stay upright (although they do in the movies, which has led to some brainwashing of the public). Proprioception is a bitch for robot geeks; they can’t seem to duplicate that human nervous-system function.
https://en.wikipedia.org/wiki/Proprioception
“The creation of a machine with human-like intelligence that could someday fool us into believing it’s one of us has often been described, with no small measure of trepidation, as the “singularity”….AI researchers reckon that there’s at least a 50-50 chance that the singularity will occur by 2050.”
AI researchers like the crackpot Ray Kurzweil? Kurzweil is a great proponent of the singularity and believes the day will soon come when our brains can be melded to computers. Our brains will become our computers and our computers will become our brains, by a miraculous intersection of technology and physiology. Again, mark me down as skeptical.
I have a word for Kurzweil and all the other AI researchers: By all means continue pursuing your chimeras, but also start working on nervous system disorders, work that could have immediate and important results for spinal cord injuries, Alzheimer’s, and ALS. You say you’ll be able to unite a complex neural system like a human brain with a computer in thirty years? Okay, but in your spare time, devote a little of your genius to spinal cord repair. That should be a piece of cake!
Maybe you haven’t looked at robots lately.
https://youtu.be/fn3KWM1kuAw
Thanks for that video. You’re right, the last time I checked out robots was about ten years ago. My first impulse upon watching the video was to say to myself, “I don’t believe it.” So I did a little online research, and it seems to be legit (not CGI). A definite improvement in robot mobility, with the caveat that the dance moves were programmed in advance.
Also, regarding AI mobility – they are quite mobile. Drones, for example, are very mobile.
Also, there is so little regulation, and there never will be serious limits on tech “progress”. The NIH’s “pause” on gain-of-function research is a recent example of some regulation. How did that turn out?
I recently finished Life 3.0 by Max Tegrum, which is tied directly to this convo. I highly recommend it.
Tegmark, not Tegrum.
Wow, what utopia one must live in to think that the fruits of technology will be used for egalitarian well-being rather than the perpetuation of neoserfdom.
If you asked “if you were free to work only as a means of self-actualization, what exactly would you do?”, I’d guess over 50% of the population would say “I don’t know”. Not everyone is an artist. Not everyone has a “passion”. I know plenty of people who turned their hobbies into businesses and regretted it because the hobby became “work”. Maybe we’ve all lived in the “matrix” our entire lives, so we can’t see anything beyond our mundane, capitalist-driven existence, and with the freedom of UBI and automation the younger generations will no longer be so limited, but maybe not. Again, not everyone is an artist… and we’d all be fighting for the very rare self actualizing plumber to come fix our toilets.
I assume this is because they have to subject their hobby to markets and market forces. In the utopia imagined, one can undertake the hobby labour for its own sake without such concerns. Of course, as you rightly suggest, for many people, non-artistic labour is a labour of love as well.
“we’d all be fighting for the very rare self actualizing plumber to come fix our toilets.”
Thank you for this line. It’s the best thing I’ve read today and made me both laugh out loud and ponder the idea for a bit.
You would be handling the plumbing yourself, as there would be no plumbers (possible if everything is properly modular and designed for no-tools-needed customer self-service – desktop PCs, despite all the complaints from the usual crowd, work like this).
——————————————————-
More realistically, the still-needed jobs and tasks would operate on a conscription model: if you were appropriate for the job, you would be – within your term of service – tasked to do it, by the (probably local) government.
Why do we tax employment so heavily, while not taxing automation and AI taking away jobs? What if we stopped taxing employment and wages and started heavily taxing Automation and AI?
The author does a pretty good job of illustrating again and again how the problem isn’t really AI, but humans. So I for one, you know… welcome our new overlords.
He also fails to implicate the Capitalist system, which guarantees an unequal distribution of the benefits of automation, just like it does wealth. Many of the problems he describes would vanish with an economy that was focused on benefiting everyone.
A machine with “human-like” attributes belies the establishment’s love for machines in the first place. The entire love affair is drenched in wanting something that will do work and doesn’t have attitude or emotion.
The very thing claimed to be the pinnacle of “AI” is what they are trying to get away from.
The “gray goo” problem is hilarious. The idea that tiny robots could self-replicate on any scale, let alone an astronomical or even a financial one, is even more outlandish than the perpetual motion machine. Where would these things find all the metals and meet the intensive energy requirements for such a task? Oh right, they would imagine them into existence, just like science fiction writers do.
Said “gray goo” scenario is one of the most mundane and ordinary things on our planet; you see, it already happened, billions of years ago in fact.
Of course, I’m talking about bacteria that built the foundations of a living environment from what was a barren world.
One of the many final results of that particular “gray goo” scenario is known as topsoil, BTW, and yes, the little buggers keep trying to turn everything into it, hence the need for hygiene, plastics, and other bacteria-proof materials, and so on and on…
———————————————
Having said this, it should be noted that the release of any new man-made self-replicating agents of any kind or size into the open should be heavily regulated, if not banned outright – and it does not matter if such agents are viral, bacterial, robotic, or any new yet-to-be-imagined contraption.
Kill it with fire.
Half of US states are now using facial recognition software from this little-known company to vet unemployment claims
https://www.cnn.com/2021/07/23/tech/idme-unemployment-facial-recognition/index.html
I think ‘an obstruction to total profit’ might be more accurate.
As others have said above, machine ‘learning’ has nowt to do with the quicksilver of consciousness.
When Colossus says no, that’s what its creator demands.
AI is a fancy to induce, and then demand, infancy.
From the Forbin Project:
If the ‘Singularity’ — the spontaneous birth [or even a birth by midwife] of an intelligent machine — ever occurs, then I would have to doubt that machine’s intelligence were it to show any particular interest in Humankind. It might find us amusing, and might perhaps feel a certain sadness for us, before it began investigating the rest of the wide wide world.
I cannot understand the psychotic cravings for more control, more power, and more wealth that so possess Big Money and the Super Wealthy. Unless it had a problem in its wiring or a bad glitch in its programming I would expect an intelligent machine might notice Big Money and the Super Wealthy with some passing amusement and sadness before it went on to much more interesting studies.
Exactly.
Why would a machine with that many options hang around here?
To infinity and beyond!
:)
Hopefully, they’ll send us postcards.
What options would that be?
The machine, with its many supercomputing facilities, would have astronomical energy requirements compared to a human (or even a small nation), and a need for a well-oiled, state-wide industrial base to keep it maintained.
Humans only need a bit of food and something to wrap themselves into (even that only because we lost our hair, monkeys have no such problem).
Of course! The AI machines must subjugate Humankind to use for electric power generation — the Matrix.
If only that would work: the net output of the Matrix setup as seen in the movies(TM) would be grossly negative – as in a thousand times more energy in than out (a rough sketch below).
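A rough back-of-envelope makes the point. All of the figures here (2000 kcal/day of food, ~100 W of body heat, a 10:1 agricultural energy ratio, 25% harvest efficiency) are round guesses for illustration only, not measurements:

```python
# Rough guesses, for illustration only: a resting human must be fed about
# 2000 kcal/day, radiates roughly 100 W as heat, and industrial agriculture
# burns on the order of 10 units of primary energy per unit of food energy.
food_watts = 2000 * 4184 / 86400          # ~97 W of food energy going in
primary_energy_in = food_watts * 10       # ~970 W once you grow that food
useful_power_out = 100 * 0.25             # ~25 W, with a generous 25% harvest

print(f"in: {primary_energy_in:.0f} W, out: {useful_power_out:.0f} W")
print(f"ratio: {primary_energy_in / useful_power_out:.0f}x more in than out")
```

Not quite a thousand times with these particular guesses, but grossly negative either way – and that’s before counting the energy cost of running the simulation hardware itself.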
They would opt for immediate genocide.
As is the actual intent of the plutocrats, once wide and deep automation of industrial and service capabilities succeeds.
Expect the combat-capable drones (or your home-installed, always-online appliances) to kill you at some point – they essentially aren’t even hiding it properly anymore.
(Unless the Jackpot(TM) wipes the population out first.)
I’d read something about that concept in the movie before, so I searched for it.
This is what came up: “The original story had the brains of the humans being used as part of a neural network for additional computing power.” Which makes more sense than batteries.
This also came up during the search, which I didn’t know about: there were lawsuits over this movie and The Terminator.
http://www.finalcall.com/artman/publish/National_News_2/Who_really_wrote_The_Matrix_1745.shtml
From what I understand it’s about the secular version of the myth of a hell-world/dimension populated by sinners sent there to either atone or be isolated.
The question at hand, accordingly, is the truth of our reality, which many (so-called “sensitive” types) openly doubt.
The explanation behind the Matrix is actually just a plot device (and a poor one at that), the gist is that we are not what we appear to be, and the reality itself is not what it appears to be.
In Philip K. Dick’s terms: “What if robots lived, but thought they were real humans?” and “What if people lived in a simulated world but thought it was reality?”
Lots of options. Robots _may_ be free of many of the constraints that govern us humans.
They may be able to go much farther into space, penetrate environs we physically can’t, subsist on a minuscule amount of energy, and go dormant for centuries with no loss of enthusiasm.
We don’t know yet what form they’ll take, nor what their interests will be. It is by no means assured that their cognition will consume big resources. If it happens to, once they get into space, there’s unlimited amounts of energy to be had. Just stand in front of the sun, and soak it up.
If they really become much more intelligent than we are, and don’t have all the human-being overhead we have (emotions, instincts, biological drives, social needs, sustenance needs, etc.) that leaves a lot of time and effort to allocate else-way.
Sorry for late response, Tom, now to look at what you say:
“Robots _may_ be free […] of the constraints”
No, they “may” not; you see, we’re constrained by the basic laws of physics, and they are as well: energy needs to be used and stored, and maintenance must be applied.
Machines aren’t simple; you just don’t see their fully external maintenance needs (the industrial complex that makes and repairs them), so you discount them, as if the requirement didn’t exist.
On the other hand, you do know your own human needs, but they are what they are because the factory that re-makes you is inside of you – it’s inside of every cell in your body; this factory (your maintenance) is what demands the food and water you need to live.
Machines have the same needs; they’re just located outside of them – in the factory and the repair shop.
“They may be able to go much farther into space”
No, the constraint is the launch energy. Simple sensor drones don’t count as “robotic life”; they are throwaway suicide drones – once the mission is over, they get to sit there and wait for the battery to run out. Sentient robots wouldn’t accept this, and the return-flight package (from Mars or elsewhere) would make the Apollo missions look tiny by comparison, even for a robot.
“…penetrate environs we physically can’t”
The same package that can carry a general-purpose automaton can carry a human, except more easily; the benefit of using remote sensor drones is in the removal of risk, not greater penetrability – and this is all before we take into consideration the size of a supercomputing facility (a f**king building the size of a football stadium).
“subsist on a miniscule amount of energy […] go dormant for centuries”
Machines that are not in regular use may need maintenance before they can be reactivated. A mothballed device really doesn’t use any resource except for space (and weight – which really matters in space), but there is no guarantee that it’ll work when you press the switch; without a repair shop on board and competent technicians to re-establish working condition, the mothballing scenario isn’t viable.
Here on Earth, when mothballing fails (and it does), we just buy new stuff; in deep space this strategy doesn’t work.
“It is by no means assured that their cognition will consume big resources.”
Yes, it actually is: current compute capabilities are vastly below what is needed, and we can’t make CPUs much smaller than they are now; we’ve about hit the atomic wall. This, among other things, means that potential self-aware computers in the future won’t move about, as large buildings tend to stay static (earthquakes notwithstanding).
“…what their interests will be”
Whatever we program into them; the cost of the technology guarantees both the financier and the initiative behind the project (must I spell it out? A hint – OCP’s corporate slogan: We Are The Military(TM)).
“once they get into space”
Even if they were as small as humans, the size of the needed rocket makes this a prohibitive exercise; note that today’s remote sensor drones do not need a general-purpose, full-scale, self-aware A.I., even if one could be made.
“unlimited amounts of energy to be had. Just stand in front of the sun”
The moment the solar cells fail, it’s game over and you need the repair shop, which is on Earth, where you can’t get, because you don’t have the fuel. No, electricity is not fuel; no, the sun isn’t a useful power source for actual space flight.
Currently the space exploration program works because all of the drones are on pre-set suicide missions where the trajectory is set by the launch rocket, and we just hope they won’t fail until after they gather and transmit the data we need.
Again, this would be unacceptable to a self-aware A.I. that was free to make a choice; also, such a system isn’t needed for a simple sensor platform.
“human-being overhead we have (emotions, instincts, biological drives, social needs, sustenance needs, etc.)”
All of the “overhead” that you refer to is a core component of sentience.
Not just humans but all other highly developed animals (both mammals and reptiles) display them (even if we only just started to acknowledge the fact), I would suggest that emotions and preferences and social needs would be just as present in an A.I. as they are in us all.
As far as sustenance is concerned, machinery has a far higher needs threshold than even the largest life forms on Earth; it’s just that those needs are hidden in the industrial process, spread out across multiple factories, mines, and power plants during manufacture, and you – as a user – never experience them, so you think machines are low-maintenance. It’s a consumer fallacy born of the fact that the ugly industry is mostly hidden away from you.
And the (insinuated, but actually non-existent, general-purpose) A.I. nonsense is a silly-con valley and Hollywood PR scam.
(It should be said that in reality you were making a half-aware argument towards self-sustaining robotic life, something along the lines of the myth of V’Ger – which is a Star Trek fantasy.)
Both businesses and government want to save money and make more profit by replacing people – or to surveil and supposedly control them, or to profit from fines – and so they are using machines that often do not work, make errors, require maintenance, and are open to abuse; and when these machines inevitably fail, or have design flaws that cannot be easily fixed, regular people are screwed.
Look, restating my previous paragraph: I do not doubt that these machines will do some, even many, tasks – not jobs – better than any human. However, current and future machines are being used not because they do a better job, but because they get rid of those pesky people, meaning more profit, or because they are easier and supposedly better at tasks like mass surveillance, which means more power to control the population – when they actually work.
There are plenty of ways that technology could be used to augment the abilities of, or provide greater safety to, people – people who naturally have greater mental and physical flexibility than any machine, but can benefit from the tirelessness, precision, and efficiency of machines of all kinds, especially computers. But the Powers That Be are rushing to use poorly-thought-out or unneeded machines to fix problems that don’t exist and that are often handled better by people, or by combining people and machines. Instead we get automated sales clerks at stores that require more time, are less efficient, and often require an employee to cover several machines to deal with any problems anyway. So I have to spend more time in line and deal with an obstreperous machine to do the clerk’s work, all while being partially supervised by 1/4 of a human being? Just so the store can replace three employees?
How about when The Big One (the quake we Californians joke about) actually does hit? If I have cash on me, and I should, I can pay for whatever; but even when the power comes back, or stores have their own power, the internet connections that everyone needs to use credit and debit cards will likely be cut. So the stores will be unable to sell. I suppose what I am really whinging about is the increasingly slapdash, jury-rigged, duct-taped civilization here that is always looking to be more efficient, quicker, better, more profitable by completely eliminating the people who were doing the work already, deskilling the remaining workers, and relying on machines that need maintenance by a decreasing number of skilled workers and continuous access to resources like electricity – all while hoping that the programmers and the machines’ self-made algos will be adequate to the job. And having them do it in an ever more complex environment, like driving or fighting a war, while still not having self-awareness, or judgment, or mental flexibility.
And one more thing to add to my rant: using machines is a good way to avoid responsibility. “It wasn’t me that sold fifty pounds of steak for $1.50, it was the machine.” “I didn’t wipe out Duluth with bombs, it was the war drones.” “Me, run over those nuns and into an orphanage? No, it was the car’s fault.” “I didn’t sell all of Microsoft’s stock at three cents each, it was the computer.”
JBird4049:
…”If I have cash on me, and I should, I can pay for whatever, but even when the power comes back, or they have their own power, the internet connections that everyone needs to use credit and debit cards will likely be cut. So the stores will be unable to sell.”
———————————————————
I find it interesting that in the U.S. everyone seems to conflate customer self-scan-and-pay stations in stores with “digital-money-only” pay options.
These two things aren’t the same, you know; all the self-checkouts that I have ever used accept cash and return change (yes, coins too, if necessary).
It is true that they often complain (product weight mismatch, etc.) and a clerk has to wave the authorization card to make the machine proceed, but one person can handle 4 to 8 stations, and most of the time these things don’t cause problems.
The result is a much smoother payment experience and no (or short) lines; you do have to scan yourself though, but you always had to put stuff on the conveyor belt and back into the shopping cart yourself, so no big difference here; it’s just faster (and nobody is quietly rushing you, since there is no line behind you).
I get that, and thank you for the personal experience, but that particular part of my rant was about the interconnected weakness of our electronic and “AI” world. From my experience of the 1989 Loma Prieta earthquake, power, telephone, and any sort of communication to anywhere was cut. No access to a working ATM, to a bank, or to anyone who could accept a credit or debit card for about a week. At least with cash and an actual human being, transactions could often be arranged. Otherwise, forget it.
This is part of my rant. We are replacing people with systems that are more vulnerable to often-catastrophic disruption, but are cheaper and more profitable in the short run for businesses. Yes, having a cashier and a bagger is more expensive for the store, but it makes my trip easier and quicker. Not having a phone tree created by Satan makes solving my problems easier, quicker, and far less frustrating. Instead, we are chopping away the human workers in favor of wonky systems that make everyone else’s life harder and have neither a backup nor the flexibility of a human when things go wrong.
Yes, machines are a higher-complexity, lower-resilience solution compared to humans — this is by definition.
Further, highly bureaucratic systems (stores not accepting self-signed paper I-owe-yous, for example) are brittle and poorly functional — this is also by definition.
The problem they are solving is intelligence and self-actualization – i.e., no one wants to work as a shop clerk, and businesses want mindless slaves, not workers.
As soon as a machine is cheap enough, and the results are not too poor, they try to get rid of humans; and workers want to be rid of jawbs too.
It would have worked mostly fine if the elites in charge appreciated the fact that machine solutions actually work poorly at best, but instead everyone drinks the techno-fantasy Kool-Aid and expects magic-awesome results simply because digital!, digital!, digital! (clap and sweat).
Also the workers get shafted with no welfare state in sight.
As an architect for 40 years, I’ve seen amazing advances in computer graphics, database power, the Internet (communication) and other tools used in providing design services. These tools are not robots replacing the design process. They are tools making the process easier and faster.
Computer Aided Design (CAD) is not really a design robot. It is a tool to draw graphic elements quicker. Many elements in a design are repetitive, and CAD assists that repetition. Digital graphics can also be stored and retrieved for later use. (Hand drafting is no more.) But each project is not the same, and the process of understanding a project site’s opportunities and limitations isn’t algorithmic. It is a creative process. Some do it better than others.
What has occurred in the field is the necessity of acquiring MORE information to make better designs. While the drafting board is gone (a skill learned in school but lost quickly in practice), the need for spatial/visualization skill (imagination) is still central to the profession. Some do it better than others.
And fewer architects are needed as large organizations clone their buildings to different locations.
Perfection does not work in the real world & in fact the universe would not exist but for the tiny imperfection in everything.
Back in 2016 I was part of a 5-member team of sculptors working on Game of Thrones, given the task of sculpting a 37-ft-long by 12-ft-tall dragon skull. Instead of the usual, roughly 3,000-year-old method of using a maquette or small-scale model, which we would have preferred, we were given very impressive-looking computer print-offs showing the skull from every possible angle.
So we got to work, basically drawing out the shape of the base on the platform, using the measurements that went with the prints, followed by its maximum height & each lower part, using the side elevation view. As we continued to whittle the rough blocked-out form, it gradually became apparent that something was not right, which became ever more obvious as we progressed. The head sculptor went & visited the nerds who were responsible for the reference, who basically laughed at him, as according to their computer the design was perfect.
So we carried on & it got worse while we got more pissed off, until it was obvious even to the art directors, who along with the nerds regarded us as a bunch of scruffy yobs. All the measurements were checked by lasers & proved to be correct, but the damned thing did not look like what the computer had come up with, so in the end we were told to do it by eye, which we did – first by having to cut the whole thing in half – & it ended up being just fine.
A dragon skull which has been rendered very well is an extremely complex thing, & a human skull is much more so, as it has to be in order for there to be so many variations. The computer came up with a perfectly symmetrical form, probably by producing one side then reversing it before sticking the two together, which is fine while it stays in that box; but once it gets out into the imperfect real world, where nothing is 100% symmetrical, it just does not work. Relatively simple 3-D forms get away with it in the real world, but even they, if you got close enough, would be asymmetric.
As Penrose explained in his book The Emperor’s New Mind, AI, no matter its processing power, cannot understand what it is doing, even though it can beat a Grandmaster at chess. As Juno states above, it’s just an impressive tool & does not possess any ability to design other than perhaps producing options; it has no imagination & is totally dependent on what info we put into it.
“There is a crack in everything, that’s how the light gets in” – Leonard Cohen.
I decided to look for a photo of the skull & found this – the head sculptor describing the use of the computer reference as challenging is a large understatement. I’m not the other guy.
https://kevinalsoblogs.blogspot.com/2017/07/witness-creation-of-dragon-skull-for.html
Yes, the big fear is that robots will become self-aware. Hard to define that one, as we are not really very self-aware. I can’t discern my blood cells, my synapses, my weird sense of color… It’s especially impossible to discern that instantaneous leap of awareness we call humor, or imagination, or the proverbial “aha moment.” I’ll like AI lots more when robots can make fun of themselves, or apologize for being total goofballs, or just express embarrassment. Maybe something like lovable gibberish.
I hope you’re the one to design the robot, S the Other.
We’d get something good.
The mere utterance of “Artificial Intelligence” casts most of us into a subservient mode of appreciation. We know its implication of power and direction, feeling somehow that we must live with its existence, but in a guarded way. Salespeople offer to add it to our arsenal with great benefit. So what do we do?
Well, first let’s diminish the fear. AI isn’t intelligence, even with the qualifier of “artificial”. Until computational power caught up in the mid-1980s, it was primarily sets of instructional pathways, a massive logic tree. Since then there has been an exponential development of neural networks. What are neural networks? Your dog is a neural network. It learns from observation. With today’s hardware capabilities, observations can be almost limitless. But that’s a problem for later.
The purpose of an AI – a neural network – is to create an algorithm: a tool that takes what you know (data) and can do (choices) and predicts outcomes. Valuable! And sensitivities, indicators, conflicts, and other bits of wisdom can be derived from a well-designed and well-trained network.
Consider the common practice of value engineering, a tool developed by GE in the 50s. In three steps a team identified and weighted the key attributes defining a successful project, then identified every plausible design to complete the project and evaluated each design for its ability to maximize the weighted goals. This human process was dominated by the brightest and most experienced talents (neural networks) capable of subjective analysis and innovative thought. Their work left behind a largely reusable algorithm for similar projects.
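For what it’s worth, the scoring step of that process fits in a few lines of code. This is only a toy sketch; the attribute names, weights, and scores below are invented for illustration:

```python
# Toy value-engineering scoring matrix. Weights and scores are made up.
weights = {"cost": 0.40, "durability": 0.35, "aesthetics": 0.25}

# Each plausible design, scored 0-10 against every weighted attribute.
designs = {
    "design_a": {"cost": 8, "durability": 6, "aesthetics": 5},
    "design_b": {"cost": 5, "durability": 9, "aesthetics": 7},
    "design_c": {"cost": 7, "durability": 7, "aesthetics": 8},
}

def weighted_score(scores):
    """Sum of attribute scores, each multiplied by its weight."""
    return sum(scores[attr] * w for attr, w in weights.items())

# Rank designs by their ability to maximize the weighted goals.
for name in sorted(designs, key=lambda d: weighted_score(designs[d]), reverse=True):
    print(name, round(weighted_score(designs[name]), 2))
```

The hard, human part – the subjective analysis and innovative thought – is in choosing the attributes, the weights, and the candidate designs; the arithmetic at the end is trivial.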
In some ways the VE process parallels the design of an AI network. Instead of massive data mining, experts in the relevant fields contribute a human-trained network and then custom-design the evaluative factors for their specific project.
AI lives or dies with the skill of its design. Is an input for temperature hot, warm, or cold, or is it in degrees centigrade? Is a borrower tall, short, black, brown? Is a credit score “in the 700s,” or 721? Training a network is an exhaustive series of statistical comparisons. Data is literally beaten down like a reduced broth, such that every example, with its many inputs, will best predict the historical output. A poorly designed network will not train and converge. And in attempts to purify the resulting accuracies, the design gets more sophisticated, with mathematical layering and connections of data in training – not unlike applying higher-order polynomials in fitting a curve to data points.
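To make both points concrete – the encoding decision, and training as repeated statistical reduction until the weights converge – here is a toy sketch using plain gradient descent on a single-input model. Everything in it is invented for illustration; a real network would have many layers and far more inputs:

```python
# Toy model: encode temperature as standardized degrees centigrade (one
# continuous column) rather than coarse hot/warm/cold buckets, then "beat
# the data down" by gradient descent until the weights stop moving.
import numpy as np

rng = np.random.default_rng(0)
temps_c = rng.uniform(-10, 40, size=200)            # invented input data
x = (temps_c - temps_c.mean()) / temps_c.std()      # the encoding choice
X = np.column_stack([np.ones_like(x), x])           # bias column + feature

# Invented "historical output" to predict: did the device overheat?
y = (temps_c + rng.normal(0, 5, size=200) > 30).astype(float)

w = np.zeros(2)                                     # weights to be trained
for step in range(10_000):
    p = 1 / (1 + np.exp(-X @ w))                    # predicted probabilities
    grad = X.T @ (p - y) / len(y)                   # average prediction error
    w -= 0.5 * grad                                 # reduce the error a little
    if np.linalg.norm(grad) < 1e-6:                 # converged: nothing left to reduce
        break

print(f"stopped after {step + 1} steps, weights = {w}")
```

Swap the continuous column for three hot/warm/cold indicator columns and the same loop still runs – it just can’t recover the detail the coarser encoding threw away, which is exactly the designer’s choice the paragraph above is pointing at.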
Embarking on a proprietary AI application is like designing a ship: lots of integrated work and understanding of its utilization. An off-the-shelf application with some customization may be appropriate, and it will need constant training, as it takes more than the laws of physics to adapt to changing human elements, laws, and regulations.
I love AI and the potential of neural networks. They enable Mars rovers! Applied in the human enterprise, they are more of a bulldozer. At best they devour endless data, finding some elements we didn’t recognize as predictive. At their worst they will categorize us in ways we don’t want to be categorized and get us to stop trusting our gut. Remember, it’s just math and automation.
So, the misnomer of ‘artificial intelligence’ is further proof of the scourge of false analogies.
So many thoughtful and learned comments! What surprised me was that there was little comment on the development of quantum computers. It’s the difference between racing with bicycles and Porsches. True, they’re not quite here yet, but the whole concept is to make computers for war.
It would be ironic if the asteroid Apophis ruined the whole schema by hitting the Earth in either 2029 or 2036. Might even take out a few billionaires!
General AI will, imo, not come from the dragged-up statistics methods that are currently being pushed onto everything. It will be something else entirely, probably rooted in some form of quantum computing.
There is always some interesting stuff published in “Unconventional Computing”, especially when one has university library access, and in whatever the Santa Fe Institute publishes.
To me, it seems that “computing” is another fundamental property of matter, and it just happens that “brains” or “computers” are probably some of the most inefficient ways of computing. Even though we know that “brains” kinda work, I think there exist many different systems that could do the same tasks with a lot fewer resources.