Yves here. Another sighting on the “Beware robots and artificial intelligence!” beat. Aside from the fixation on job risks, I keep wondering about hidden costs, like wire monkeying and increased hostility among people when they know they are not dealing with a person. I certainly do when I have to deal with phone prompts. All that adrenaline with nowhere to go…
By Arthur Marusevich, a lawyer based in Canberra. Originally published at Independent Australia
It’s almost scary to think that the world as we know it may well be run by Artificial Intelligence (AI) one day.
While an AI disruption of the labour market may sound like fantasy, those with the most advanced AI technologies at hand think that AI is an imminent threat.
They say an Industry 4.0, or cyber-physical systems (CPS), revolution is coming whether we like it or not. Is this really true?
AI in the labour market means the use of intelligent software to optimise the delivery of services by humans.
However, in a recent meeting with U.S. governors, business magnate Elon Musk warned:
“AI is a fundamental existential risk for human civilisation and I don’t think people fully appreciate that … [AI] is the scariest problem.”
Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.
— Elon Musk (@elonmusk) August 3, 2014
But if this AI business is such an unfettered terminator, why has Mr Musk’s warning fallen on deaf ears? Why haven’t regulators and companies rung the alarm bells yet?
Well, setting aside the explanation that it may all be a conspiracy, some experts think that Mr Musk’s statement is an unnecessary exaggeration of reality. It is true that Mr Musk may have access to the most cutting-edge AI technology in pursuit of his autonomous machines; however, he is not the only one.
Others with access to similar technology, such as Arizona State University computer scientist Subbarao Kambhampati, have a different view.
Kambhampati says:
‘While there needs to be an open discussion about the societal impacts of AI technology, much of Mr Musk’s oft-repeated concerns seem to focus on the rather far-fetched super-intelligence take-over scenarios. … Mr Musk’s megaphone seems to be rather unnecessarily distorting the public debate, and that is quite unfortunate.’
Additionally, nowhere in the ‘2016 Obama Administration AI Report’ do we see any references to such imminent threats. So, does this mean that we should disregard Mr Musk’s warning?
Perhaps not entirely, according to a recent report published by the International Bar Association Global Employment Institute (IBA GEI).
AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence https://t.co/5UufhOc0KJ#AI #future
— Warren Whitlock (@WarrenWhitlock) July 21, 2017
This report has in fact raised some alarming issues as to the fate of both blue- and white-collar sectors unless AI is proactively monitored and regulated.
Coordinator of the report, Gerlind Wisskirchen, IBA GEI Vice Chair for Multinationals, commented:
Certainly, a technological revolution is not new, but in past times it has been gradual. What is new about the present revolution is the alacrity with which change is occurring, and the broadness of impact being brought about by AI and robotics. Jobs at all levels in society presently undertaken by humans are at risk of being reassigned to robots or AI and the legislation once in place to protect the rights of human workers may be no longer fit for purpose, in some cases.
… The AI phenomenon is on an exponential curve, while legislation is doing its best on an incremental basis. New labour and employment legislation is urgently needed to keep pace with increased automation.
In the past, the human workforce was mainly involved in mass production of raw materials and manufacturing. Today it is about service delivery. This tertiary sector consists of almost 70 per cent of the human workforce and involves the use of individual effort and skill to deliver a service for someone else. It is this sector that is supposedly under threat from AI.
IBA – Law requires reshaping as AI and robotics alter employment, states new IBA report @JohnAFlood @lisawebley https://t.co/leNANMCO9s
— Paresh Kathrani (@PKathrani) April 4, 2017
What is the Extent of the Threat?
No doubt the advent of AI machines powered by complex algorithms and computer applications has already begun to strongly influence the world of work; so much so that, at times, it is impossible to run the world without them. Labour in the automotive, chemical, agricultural, IT, media, finance and insurance industries is already dominated by AI and robotics.
In Australia, legislation explicitly allows government agencies to use AI to make automated decisions. For example, the Centrelink “robodebt” scheme generates debt notices for members of the public without human intervention.
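As a rough illustration only: public reporting described the core of such automated debt-raising as averaging a person’s annual tax-office income across fortnights and comparing the average against their fortnightly benefit declarations. The sketch below is a hypothetical reconstruction of that kind of logic, not the actual Centrelink system; every function name, rate and figure is an assumption.

```python
# Hypothetical sketch of income-averaging logic of the kind reported in the
# "robodebt" scheme. The real system is not public; names, rates and figures
# here are illustrative assumptions only.

def estimate_debt(annual_ato_income: float,
                  declared_fortnightly: list[float],
                  benefits_paid: list[float],
                  withdrawal_rate: float = 0.5) -> float:
    """Average annual income over fortnights and flag an 'overpayment'
    wherever the average exceeds what the person actually declared."""
    averaged = annual_ato_income / len(declared_fortnightly)   # the contested step
    overpayment = 0.0
    for declared, paid in zip(declared_fortnightly, benefits_paid):
        if averaged > declared:
            overpayment += min(paid, (averaged - declared) * withdrawal_rate)
    return round(overpayment, 2)

# A worker with seasonal income declares honestly: no earnings for half the year
# (while on benefits), $2,000 per fortnight for the other half (no benefits).
declared = [0.0] * 13 + [2000.0] * 13
paid     = [600.0] * 13 + [0.0] * 13
print(estimate_debt(26000.0, declared, paid))   # 6500.0, a phantom "debt"
```

Run on that honest, seasonal earner, the averaging step alone manufactures an apparent $6,500 overpayment, which is exactly why “without human intervention” is the contentious part.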
However, does this mean that the end of the human labour force has arrived? Perhaps not yet.
The current trend in automation does not equate to an imminent threat to human workers; the most obvious reason being that humans are adaptable and automation is controllable. As long as costs for service delivery by both AI and humans can be moderated, automation will not completely displace human labour. For instance, production in the clothing, catering and construction industries is still delivered by human labour because there is no AI technology that is as affordable as human labour.
Of course, such a conclusion requires a more precise and individual examination of all the sectors by country and region. However, the available evidence suggests that – unlike in the past, when humans actively participated in mass production and service delivery – automation will allow humans to supervise this process, thereby enabling them to be more productive and creative.
T2 Artificial intelligence is OK for data analysis but doesn’t do empathy. Great AI compares to worst bedside manner. #HealthXPh
— Jim Katzaman (@JKatzaman) July 29, 2017
The Future
Predicting the future of the labour market is a difficult task. While it is true that human labour is poised to be displaced by AI, the current trend of automation suggests that the human element is an indelible part of the labour market. No doubt Industry 4.0 is coming if not already here; however, experts disagree as to the rate of the impact it will have on the global workforce. It is currently a little over-dramatised, so they say.
For instance, we can learn from the well-known economist John Maynard Keynes, who coined the term “technological unemployment” in 1930, that we should neither refuse to accept that the labour market will change dramatically nor assume that AI will end the world as we know it.
It’s a balancing act, and both schools of thought agree on this. As a result, regulators and governments must be more proactive in striking that balance. For example, when driverless cars replace human drivers, investment in education and training programs can reskill those drivers for other jobs. Bill Gates suggests that the money for such initiatives could be raised by taxing robots’ productivity, just as we tax human labour.
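Gates’s idea is usually put in back-of-the-envelope terms: levy on the automated replacement roughly what the displaced worker’s wages would have contributed in tax. A minimal sketch, with every name, rate and figure an illustrative assumption:

```python
# Hypothetical back-of-the-envelope "robot tax": tax the automated replacement
# roughly as the displaced worker's income was taxed. Rates and figures are
# illustrative assumptions, not any actual proposal's numbers.

def robot_tax(displaced_salary: float, income_tax_rate: float,
              payroll_levy_rate: float = 0.0) -> float:
    """Annual levy on an employer for each job automated away."""
    return displaced_salary * (income_tax_rate + payroll_levy_rate)

# A driving job paying $50,000/yr at an effective 20% income-tax rate plus a
# 5% payroll levy would yield $12,500/yr toward retraining programs.
print(robot_tax(50_000, 0.20, 0.05))   # 12500.0
```

Whether the base should be the displaced wage, the robot’s output or the firm’s profit is precisely the kind of design question regulators would have to settle.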
So, as long as a balance is achieved, there is no need to panic just yet. Humans are adaptable and empathy will always remain an essential ingredient of our service delivery processes. This, AI cannot compete with.
Robot Tax or No- Thoughts anyone? https://t.co/J31Da5n9nk
— Cmike (@TheColfud) August 3, 2017
If the market (our deity?) is allowed to allocate the benefits of automation without government putting limits on how those benefits are allocated, then automation is a threat to most people in employment. So the author’s closing remark that there is ‘no need to panic just yet’ is currently a reason to panic, as governments now look after the public good only by accident when it comes to the allocation of generated wealth.
And any time I see re-education offered as the main solution, I know the author is ignorant about how the job market works. But what can you expect from someone in a protected profession like a lawyer?
On a side note: I’ve recently been back to an old employer. Ten years after I left they’ve doubled the number of people needed to do the same work. Job-creators or simply bad management?
Bad management will keep job-losses due to efficiency gains to a minimum.
Yes, as to the re-education bit, if all the jobs are to be taken by AI, what exactly is it people will be retrained to do? Polish the robots?
It’s up to them. The government doesn’t want the moral hazard of responsibility. They can live off their investments, golf, whatever.
TPTB have already traded away the jobs that won’t be automated too. Telework from the other side of the world is a bridge to automation.
That is why the Clintons are so popular in countries where the wealthy stand to profit greatly from the deal they struck.
As there is currently little systemic discussion as new technologies are introduced in the market (unless of course there is significant harm to life or property by said introduction), why would anyone expect there to be a fruitful discussion of robots and AI now?
People don’t need jobs. What they need is a living.
Jobs pretty much suck, actually. You take orders all day and the profits go to someone else. Don’t a lot of people fantasize about being a small proprietor or something, working for themselves? Who cares if jobs go away as long as there’s still a way for people to command the resources they need to live.
>Who cares if jobs go away as long as there’s still a way for people to command the resources they need to live.
How could there be without government involvement?
And that under the new international services rules would be stealing the food off the plates of hungry corporations.
Reading fiction of late, the solution appears to be a mass die-off of people. It is usually presented the other way around: the near-extinction event happens, and the survivors and their robots command a suddenly resource-rich environment.
One should point out that during an earlier phase of the industrial revolution the concern was that factories were turning people into machines, not that the machines were going to somehow take over. At the movies Fritz Lang’s Metropolis was an example and in America Charlie Chaplin’s Modern Times provided a satirical version. The truth is we now have an economy with lots of repetitious, mechanical work performed by workers who do it solely because they need the money. They do get the fulfillment of having a job and belonging to something with other people like themselves, but the work itself is, as Chaplin thought at least, anti-human. Better to be a Tramp?
As the above article points out the real problem could be that government and laws aren’t keeping pace with technological change–that the elites who are running things are themselves mechanical and robotic and lack imagination. Let’s change them, and keep the technology which, in any event, is likely unstoppable.
I guess I would restate your sentence:
“The truth is we now have an economy with lots of repetitious, mechanical work performed by workers who do it solely because they need the money.”
The truth is we now have an economy with very few jobs — that don’t consist of — lots of repetitious, mechanical work performed by workers who do it solely because they need the money.
I believe Fritz Lang’s vision of the world of work fits much more than just factory work — and it is by design by our elites who are mechanical and robotic and lack imagination.
All the science points to the current environment with extreme wealth being a trigger for all sorts of clearly wrong, asocial, amoral behavior. A good argument could and should be made that we’re on a road which will lead to the human race’s destruction if we continue to let money determine people’s lives. Especially if nobody is working, the connection between money and entitlement will grow repulsive even to the monied and entitled. It already is.
“They do get the fulfillment of having a job and belonging to something with other people like themselves, but the work itself is, as Chaplin thought at least, anti-human.”
At the worse jobs, of course, this fulfillment from being with people like oneself is the fulfillment of making sarcasm, jokes, etc. about how bad the work is; more like the solidarity of cellmates than anything.
There’s an assumption that true AI is, if nothing else, at least neutral and more objective in its ‘decision making’ process than people. We are still a long way from true AI. What we have now is AI-like algorithms based on databases of sometimes faulty information. The databases can be faulty through error or mischief or malfeasance.
Take, for example, one huge company getting into the AI business. It practically controls what is and is not considered good content on the internet. Yet its database is riddled with errors. In advertising for more business in the AI field, it points to its extensive database as evidence that its algorithms are authoritative. Recent history indicates overconfidence in that regard. Imagine true AI where the interconnected algos are all running off faulty databases, and the real-time feedback is never checked against the real, analog world.
“If a librarian were caught trashing all the liberal newspapers before people could read them, he or she might get in a heap o’ trouble. What happens when most of the librarians in the world have been replaced by a single company? Google is now the largest news aggregator in the world, tracking tens of thousands of news sources in more than thirty languages and recently adding thousands of small, local news sources to its inventory. It also selectively bans news sources as it pleases. ”
…
“The key here is browsers. No browser maker wants to send you to a malicious website, and because Google has the best blacklist, major browsers such as Safari and Firefox – and Chrome, of course, Google’s own browser, as well as browsers that load through Android, Google’s mobile operating system – check Google’s quarantine list before they send you to a website. ”
…
“In 2011, Google blocked an entire subdomain, co.cc, which alone contained 11 million websites, justifying its action by claiming that most of the websites in that domain appeared to be “spammy.” According to Matt Cutts, still the leader of Google’s web spam team, the company “reserves the right” to take such action when it deems it necessary. (The right? Who gave Google that right?)”
https://www.usnews.com/opinion/articles/2016-06-22/google-is-the-worlds-biggest-censor-and-its-power-must-be-regulated
And that’s just one big example of algos getting it wrong. Now imagine the effects of true AI arising from faulty inputs. My guess is Gates and Musk know how faulty the databases are for dependable AI.
Thanks for this post.
AI is garbage in, garbage out. It doesn’t know how to classify data it doesn’t understand. Tiny, random alterations in its input data can lead to massively stupid outcomes no human would ever produce. For example, a self-driving AI misreads a random block of pixels in its video input, classifies a video compression artifact as an oncoming bus, and swerves onto the sidewalk. (It’s programmed to save its occupants, to hell with other people.)
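A toy version of that fragility, under heavy assumptions: for a linear classifier over a high-dimensional input (a crude stand-in for “pixels”), a perturbation capped at a small amount per coordinate can still cross the decision boundary and flip the prediction. Everything below is synthetic and illustrative; it is not a model of any real perception stack.

```python
# Toy sketch: a small, coordinate-wise-bounded nudge aligned with the model's
# weights flips a linear classifier's decision. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 2000                                          # stand-in for "number of pixels"
X = rng.normal(0.0, 1.0, (400, d))
y = (X[:, :10].sum(axis=1) > 0).astype(int)       # label depends on a few features
clf = LogisticRegression(max_iter=2000).fit(X, y)

x = rng.normal(0.0, 1.0, (1, d))                  # a fresh input
w, b = clf.coef_[0], clf.intercept_[0]
margin = w @ x[0] + b                             # signed score: > 0 means class 1
eps = (abs(margin) + 1e-6) / np.abs(w).sum()      # smallest per-coordinate budget
x_adv = x - eps * np.sign(w) * np.sign(margin)    # nudge every coordinate by <= eps
print(clf.predict(x), clf.predict(x_adv), round(eps, 4))   # the decision flips
```

Real perception models are nonlinear, but the same “many small aligned nudges add up” arithmetic is the standard intuition for why structured input noise can produce confidently wrong outputs.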
Current algorithms are not human inspectable which means there’s essentially no useful explanation for why a decision has been made. The explanation we do receive is along the lines of, “factor ABC was activated and it had a decision weight of 0.3 which led to the outcome”.
Current AI is also massively stupid in that it can’t generalize from what it knows to what it doesn’t know. It mindlessly applies the rules it learned from old data to new, even if it has no human guidance on what the correct output should be for new data. This also means we’re straitjacketing ourselves into the world we live in now. So we have prison sentencing AI that’s secretly sensitive to racial and poverty indicators and gives out higher sentences even though those decisions were based on human prejudice. And hiring AI will find non-obvious reasons to shuffle the resumes of women and POC to the bottom of the pile for tech and exec positions, because that’s the data it learned from on who “thrives” in those positions.
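A minimal sketch of that failure mode, using a fabricated two-feature “hiring” dataset (everything here is an illustrative assumption, not any real system): the historical labels track an irrelevant proxy attribute, and a freshly trained model dutifully reproduces the preference for new candidates.

```python
# Minimal sketch of bias learned from biased labels. Dataset, feature names and
# the "hiring" framing are all fabricated for illustration.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, attended_elite_school]. Experience is distributed
# identically across groups, so only the proxy separates the historical labels.
X_train = [[3, 1], [5, 1], [7, 1],
           [3, 0], [5, 0], [7, 0]]
y_train = [1, 1, 1, 0, 0, 0]     # historical "hired" decisions

model = LogisticRegression().fit(X_train, y_train)

# Two new candidates, identical except for the proxy attribute:
print(model.predict([[5, 1], [5, 0]]))   # expected [1 0]: the proxy decides
```

Nothing in the code “decided” to discriminate; the preference is just the statistically optimal summary of the biased labels it was handed.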
Elites don’t trust peasants and will choose to place more and more power into the hands of these algorithms. Elites like the world we live in now. They’re delighted by technologies designed to perpetuate it. We’re headed for Brazil on steroids, where the rules are designed by a blind and brainless god and even more power is removed from the people, because the AI said so, and it’s supposedly fairer and more rational than we are….
There is an aphorism to consider:
Klaatu, Daneel Olivaw, or a series of time warp Terminators. Science Fiction always becomes science fact. Careful what you wish to be benign.
Open the pod bay doors, HAL.
There were a few times in the past when I called a company about something, and it took so long for me to get to a real person that my frustration and overflowing adrenaline prompted me to be quite rude to the person. I felt bad afterwards, because the person hadn’t done anything wrong; it was the poorly designed system that was responsible. I remember that my frustration with the speech recognition system caused me to sigh at one point, and the stupid system said:
At that point, I was still talking to a machine, so I didn’t feel at all bad about shouting obscenities at it.
I suppose clear definitions of AI need to be in place. But going by my off-the-cuff multi-definitions, all I can imagine is disaster. Anything intelligent will first serve/feed/protect/promote/reproduce itself or, at best, its owner/creator. Anything in the way be damned. That’s certainly going to be true for something “intelligent” designed by humans/war machines/corporations.
You mean something like this?
I already put the link in a comment some time ago, but this is really too good to pass up.
Yes, I remember that! Hilarious! My situations were never quite that bad, but I’m not Scottish. :-)
I am Scottish descent. I know that accent well.
When technologists talk about being frightened by AI proper — as opposed to its supposed effects on the job market — they’re basically referring to the idea of a self-optimizing intelligence that isn’t constrained by human emotions. AI wouldn’t be motivated by friendship, family ties, human morality, biological needs, or mortality. It might not give a damn what humans want and might become too intelligent to be controlled by primitive simian brains. “Skynet”, basically.
There are two big problems with this.
First, Skynet would need a method of crossing the barrier from silicon into the real world. An AI sitting in a box on a shelf isn’t going to be a threat, unless it can somehow persuade you to do what it wants. It’s hard to think of how an AI would gain physical access to the world without humans noticing or participating, and it seems like it would be easy enough for us to disconnect it if we started becoming concerned.
Second, AI that’s superintelligent is going to rapidly run into the laws of thermodynamics. Computers run hot for a reason. Calculations create heat which needs to be mitigated. And computers need energy, lots of it, on drip. Superintelligent AI would quickly run into physical limitations that may not have workarounds in a terrestrial environment. And AI that wants to improve the hardware on which it runs would again need some sort of ability to get that hardware manufactured and installed in itself.
A computer’s attention span is only as long as its power cord. Failing that, a claw hammer will get its attention pretty fast.
Is it a good idea for AI researchers and theoreticians to think about superintelligent AI and how to build some morality into it? Absolutely. But there’s a reason we see con artist technologists on the hype train. Fear of runaway AI is an amusement park ride. It’s a way for comfortable people to frighten themselves safely and then congratulate themselves on how farsighted they are.
That sounds like unfettered capitalism to me.
All of this was the subject matter of the nifty movie Colossus: The Forbin Project.
The world’s richest man is arguing for taxing and slowing automation…
Of course Bill would point his bony finger at anything else and say tax that. Left unsaid is tax me.
Bill would scream to high heaven if his servers, which spew out digital ones and zeros for money at zero marginal expense, were defined as automation and robotics, and Microsoft were assessed an “automation tax”.
In the past, the human workforce was mainly involved in mass production of raw materials and manufacturing. Today it is about service delivery. This tertiary sector consists of almost 70 per cent of the human workforce and involves the use of individual effort and skill to deliver a service for someone else. It is this sector that is supposedly under threat from AI.
That is the bullshit jawbs sector, the same sector the author of this post makes his living from. I for one would welcome an AI lawyer or accountant and be able to slash that ridiculously expensive overhead.
That Bill thinks a special tax should be paid for using AI to compensate those that lost their jawbs is absurd.
Those that lost their jawbs to globalization got nothing except a punch in the face. Where was Bill when that carnage was at its height about a decade or so ago? Not giving a crap and counting his loot.
First, in this world of “deregulation” and “regulation is evil/big government”, how do we expect the government to be proactive in legislating AI?
Second, and this is a side piece, what about the chip implantation in Wisconsin? What happens when it is decided you need your chip to get by in the world? What happens when it is decided you are to be cut off from society because of any given reason? What happens if that control is given to AI, and AI decides you are to be cut off? What if AI screws up, has a programming error, or is hacked, and you are cut off? We are talking no money, no food, your car is now AI, so a signal can be sent to your car to prevent it from starting. Look at how many people have their identities stolen every year. What has the govt/regulation done to proactively protect them?