Yves here. I’ve always thought this remote-controlled self-driving car concept was madness, but it is still useful to have a well-argued takedown.
By Kevin Cashman, who currently works at the Center for Economic and Policy Research (CEPR). In the past, he has been employed in climate change/environmental protection and as a bicycle mechanic. The views expressed are his own. Originally published at his website
A Phantom Auto employee remotely driving a vehicle, via Phantom Auto.
Although hype about self-driving vehicles is everywhere, the actual technology behind them is starting to disappoint. Early predictions about when fully autonomous vehicles — that is, vehicles that would never need a human driver because a computer could handle every situation that might arise — would be available are starting to be proven wrong. Self-driving technology has also claimed its first life: in Tempe, Arizona, a test vehicle plowed into a pedestrian and killed her under what would seem to be ideal conditions for a computer. Some preliminary evidence suggests that the technology is, today, less safe than human operation.
As investors and others see signs of this failure, there is increasing interest in technologies that could bridge the gap between partially autonomous operation and fully autonomous operation. One of these technologies is “teleoperation,” in which a human remotely takes over driving when the computer can no longer drive safely. This means that even though the technology is not fully autonomous, passengers still would not need to be involved in the operation of the vehicle at all.
The New York Times profiled one company peddling this technology, Phantom Auto. Shai Magzimof, the chief executive of Phantom Auto, says that his company “[…] want[s] to be the OnStar for the autonomous industry” to address the so-called “edge cases” where computers need help driving. To do this, Phantom Auto talks about how it reduces latency — the time it takes for data to pass between the human driver sitting far away and the car that he’s driving — by combining signals from various cellular networks.
Combining signals from various providers is an interesting partial solution for situations where one network loses coverage or suffers an interruption, but it isn’t clear how it would reduce a network’s overall latency, which has a floor set by the network’s infrastructure (and it isn’t likely Phantom Auto can change this). That performance is generally sufficient for things like web browsing and watching videos, but it remains to be seen whether it is adequate for driving. If a car is traveling at high speed, for example, very low latency becomes even more crucial to safe operation. Any delay in response time is dangerous, which is one major reason one cannot drive safely under the influence of alcohol or other drugs.
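To put rough numbers on this point: even a modest network delay translates into meaningful distance at speed. The sketch below is a back-of-the-envelope illustration; the latency figures are assumptions for the sake of argument, not measurements of any real cellular network or of Phantom Auto’s system.

```python
# Back-of-the-envelope sketch of how far a car travels while a remote
# operator's view and commands are still in transit. The latency values
# below are illustrative assumptions, not measurements of any actual network.

def distance_during_latency(speed_mph: float, latency_ms: float) -> float:
    """Feet traveled during one round-trip network delay."""
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * (latency_ms / 1000)

if __name__ == "__main__":
    for speed_mph in (25, 45, 70):
        for latency_ms in (50, 250, 500):  # cellular latency varies widely
            d = distance_during_latency(speed_mph, latency_ms)
            print(f"{speed_mph} mph with {latency_ms} ms delay: {d:.1f} ft traveled blind")
```

At 70 mph, a half-second of round-trip delay means the car covers more than 50 feet — several car lengths — before a remote correction can even arrive, and that is on top of the operator’s own human reaction time.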
Even setting aside the problems with Phantom Auto’s claims about latency, teleoperation cannot solve the other problems with self-driving technology if customers expect continuous operation of their vehicles. One obvious issue is the availability of a data connection. Remote operators won’t be able to take control in rural areas with no networks at all, when all networks are overloaded, or when certain kinds of technical problems occur.
Another problem is the time it would take a remote operator to connect and then become aware of the situation the computer needs help with. Certainly, there will be less serious edge cases in which an operator will have time to connect, assess the situation, and start to operate the vehicle. However, many cases need to be addressed right away. If a computer driving a vehicle doesn’t know that it needs help until the last second before it crashes, a human will not be able to intervene and avoid the situation. There are also likely edge cases where the computer is not even aware that it needs intervention before it enters a potentially disastrous situation. Phantom Auto’s technology does nothing to help with these last two types of edge cases, which are the most important and serious. It wouldn’t have saved the pedestrian in Tempe, for example.
Phantom Auto’s approach to managing the staff who would handle interventions also seems to dramatically misunderstand their role. Magzimof says in another article that “[…]one remote driver at Phantom can handle five vehicles at a time, perhaps moving up to 10 in a year and eventually to a thousand as AI gets progressively better at eliminating more corner cases.” While this sounds like how a customer service call center might operate, it’s not how a teleoperation service could function. Call centers can tolerate high demand in various ways: they can put callers on hold, ask more employees to come in, reallocate resources, simply stop responding to customers, and so on.
Teleoperation has none of these luxuries. If a snowstorm completely confounds self-driving technology, thousands of remote operators need to be immediately available to take over for those hapless vehicles, even if self-driving technology has eliminated most edge cases. These remote operators must also take over instantly, meaning they all need to be sitting at their desks, attentively waiting for a situation that may or may not arise. This would be a very taxing job and, like that of backup drivers (who are supposed to sit behind the wheel and do nothing as self-driving cars are tested), likely impossible for most people to do well all the time.
Teleoperation is one of the latest developments meant to excuse the lack of progress on self-driving technology overall. Rather than a viable way to address that technology’s failings, it’s a way to paper over the problems of an industry that is not achieving what it said it would. As a transition technology, teleoperation has more drawbacks than a self-driving car that simply allows passengers to take over. In situations where there are no passengers able or willing to drive, the technology cannot take over fast enough to avoid the worst edge cases. In these contexts, continuous operation would need to be sacrificed for teleoperation to be useful even for less serious edge cases. Noncontinuous operation, such as a driverless truck pausing whenever it needed teleoperation, would present new safety concerns and would probably require significant public investment in infrastructure to be feasible on a wide scale.
In general, fully autonomous self-driving technology has the potential to save many lives and be very useful — in certain contexts. Policymakers should focus on the steady development and safe testing of the technology until it can meet our needs, rather than rush to get it on the road for the sake of industry profits and expectations.
Leaving aside the technical considerations, which this piece sums up nicely, what on earth is the point of teleoperation? Presumably I would have to pay for this teleoperation service, and if I want to pay someone to drive me around, I can do that today.
When you, fortunately, are 95 years old? I’d rather have a chauffeur than a computer, come on!
When I’m 95 years old, I’ll want someone to help me in and out of the car, and stow my bags in the trunk…
It is a lot like the motivation for the self-driving car: conning investors. Riding on the masterful hype pumps of Uber and Tesla, smaller companies have discovered that they too can find ~~suckers~~ investors eager to get in on the ground floor of a ~~collapsing hi-rise~~ ‘unique investment opportunity’.

I still don’t get who is supposedly demanding this self-driving tech in the first place, other than the tech bros running the con.
The idea that these vehicles will save lives is extremely dubious and simply cannot be known ahead of time. I really want to punch the next person who is so sure that they will. The best way to reduce traffic accidents is to have less traffic and giving everyone their own ‘self driving’ car won’t solve that issue. Public transportation will. If I don’t care to drive myself, I can always call a cab or take the bus.
This whole stupid industry is nothing but a solution in search of a problem.
Won’t somebody think of the children!
I find safety-based justifications to be as disingenuous as they are unconvincing.
“Won’t somebody think of the children!”
We should start with school buses as the guinea pigs /s
Won’t somebody think of the children!
https://www.youtube.com/watch?v=RybNI0KB1bg
Last I heard, it’s the alcohol industry that would really like cars that could drive their inebriated owners home.
On a side note: imagine a self-driving car with a tele-operated backup driver. The snowstorm hits, the robot car slows to a stop, and the voice starts: “Your business is very important to us. Please continue to hold and an operator will be with you shortly. The average wait time is…”
Uber, of course, would add “demand pricing” to the operator’s availability, and charge a premium to jump the queue.
Another problem highlighted, but not articulated, by this article is the overall cost of this type of technology. There is a whole lot of infrastructure that needs to get built out, in order to perform the same functions that one human driver can do right now. That doesn’t seem to be very efficient – especially considering that, right now, it’s not even any safer than that one human driver.
That is a very good point & I cannot imagine how it could be done or be in any way cost effective in the rural environment I am currently living in, which comprises narrow & often twisting country lanes. Winter nights are extremely difficult in various forms of inclement weather; visibility is bad, to say the least, when having to negotiate hazards such as sharp bends where the oncoming traffic could be a large truck, a cyclist or even a pedestrian (at least at night headlights give some warning). Add the odd falling tree or large branch, potholes where there were none before, patches of black ice, & rabbits, foxes, badgers & cats, & the occasional dog, all suddenly appearing out of nowhere.
Many of those who could probably afford one of these machines live out in these kind of environments, which would I suspect add to the possible hazards for the likes of me in my old citroen, & I don’t suppose that the life of an apparently suicidal rabbit running around in a circle of panic would be deemed as being important enough to risk the diversion of such a piece of expensive kit.
Well this could prove interesting. If there is a fatal accident, will the teleoperator find themselves facing manslaughter charges? Be sued maybe for wrongful death? Will their company pick up any of the tab for this or will it pretend that its teleoperators are independent contractors?
How about if one of these people had five cars to control and all five cars found themselves in a sudden snowstorm, as they are in the same area? What’s their next move? Will these people try to control numerous cars in a sort of loose pack on the highway?
The only job that I can relate this to is air traffic controllers and the amount of stress in this job is legendary. When there is a crash of an airplane, an air traffic controller has to be immediately relieved as they get messed up pretty bad. Will a company have in place similar procedures for their teleoperators?
Now for the $64,000 question. A teleoperator finds that, because of bad weather conditions, they have to tell the drivers that they have three seconds to take control of their own cars. You are one of the drivers. You take two seconds to understand the situation and find that you are basically between a rock and a hard place. You now have one second left to decide what you are going to do. Does that work for you?
High tech Rube Goldberg operation.
I can believe that someone could come up with a half baked idea like this and try to promote it. I find it harder to believe that anyone with a lick of common sense would ever invest in it.
Edge cases. A tech bro’s idiotic way to describe “don’t know what to do”. I keep reading it over and over again when it comes to self driving cars and it’s a phrase used to hide reality.
Phantom Auto is a really good descriptive name for this service for one obvious reason. Won’t be there when you need it.
Kevin trots out the “saving lives” schtick for AVs. These AI devices that learn as they go have worse vision than Mr. Magoo. I see no evidence these things will be any safer than a person paying attention, and they could be worse too. As the guy from MIT explains, the most valuable thing about self-driving cars is trapped attention, so that they can sell you things.
What would you like a tech bro to sell you during your hour of trapped attention today? A Juiceroo?
“Coffee is for closers only.”
I’m all in, baby!
Perhaps I am simply dull of mind or ignorant, but I can see no possible point to deploying something that is at best unproven and at worst unworkable. Until the robotic controllers reach the perfection posited by Asimov’s Three Laws of Robotics, I think it folly to loose such on the unsuspecting public. Was it Vermont that once had a law that an automobile must be preceded by a man riding a horse and ringing a bell? I applaud that spirit if not the details until the proponents can prove it poses no more hazard than a person.
Great point.
However, think about the ways they alleviated the dangers of the automobile: they got rid of horses, ran a massive media campaign against pedestrians, turning leisurely walkers into evil “jaywalkers,” and set into motion the acceptability of killing more than 10,000 people per year, plus climatic ruination via uncontrolled suburban sprawl, with the enticement of “freedom of the road.”
Now we are accepting the killing of increasing numbers of pedestrians and cyclists with the equivalent of drunken driving, by allowing drivers to operate cell phones. Free speech?
Media campaigns do matter, as they seem so successful at directing the masses into their own doom. Never underestimate the power of a bad idea.
To punt or not to punt.
If they can come up with an algorithm smart enough to accurately and consistently figure out when to pass control to a human, then they can come up with an algorithm that doesn’t need the human in the first place.
And if it’s a hung over, half asleep human that’s supposed to do it when the wifi finally makes a connection, heaven help us all.
Scam.
It’s a scam.
Why is that so hard to understand?
I do recommend that they change the company name to “Phantom Blockchain.”
It will do wonders for their valuation.
Saving lives? How about all the resources we’re consuming at the cost of the lives of, for example, Congolese child miners, who provide the cobalt and rare earths for the batteries for all this electronic trash? How about the lives that will be lost as we cook the planet to produce the electricity to make these and all the other electronic trash we don’t really need?
Isn’t this just another example of how capitalist society has reached a dead end? Capitalist organization only considers how to maintain itself, not solve social needs or problems. These are solutions looking for a problem in order to generate income. Income is the purpose of this endeavor.
Talk about perpetuating myths. People BAD, Machines GOOD.
I was talking with a colleague yesterday about the Uber crash, and his response was, “you know, people are really bad drivers”- his whole point being that these courageous tech people are trying valiantly to improve our lives. All reason has left these people. What makes driving safe is proper road design, education, proper regulation, and law enforcement. When the profit motive passes a certain threshold of tolerance, all hell will break loose. It’s corruption plain and simple.
Driverless vehicles do have a purpose in society, but mainly in making delivery of goods and services more efficient. But those savings have to be realized and distributed in a just manner, which is NOT capitalism. Driverless vehicles need to operate on dedicated tracks or roads, not intermingled with common traffic. The infrastructure needed to build a safe system would be a massive social undertaking. But then, the debate should be about proper mass transit systems, and that is a debate that cannot happen because economical, socially subsidized mass transit cuts into the profit motive.
The choice boils down to wanting a properly functioning society or capitalism. The needs of the capitalist system run contrary to definable social needs. The mayhem that driverless vehicles will cause if improperly implemented into the social fabric is a win for capitalist thinking. The few will make lots of money, while the many will bear most of the negative costs. All the while, “investors” will drive the process forward, playing on people’s greed to cloud their better judgement. Savings will be misallocated by professionals too blinded by their own short-term interests to seek a larger good.
In a weird way, being crushed under the wheels of a driverless truck is a very appropriate metaphor for our current state of affairs. Full steam ahead!
And by the way, I have a $13 billion aircraft carrier to sell you. I will even give you a discount if you buy two!
Don’t believe that Commie Putin when he says Russia isn’t afraid of those carriers because they have figured out how to sink them with missiles. He lies. Anyway, the point isn’t security, but profit generation.
“Isn’t this just another example of how capitalist society has reached a dead end? Capitalist organization only considers how to maintain itself, not solve social needs or problems.”
Marginal returns on investment. Capitalism has run out of things to make that actually make sense to people. So we have to convince everyone that revolutionary change through technology is just around the corner. Because if the masses understood that the iPhone and Facebook are as good as it is ever going to get, they might want some real revolutionary change.
Wouldn’t do anything for the other autopilot victim, where the car mistook the concrete highway architecture for a bonus center lane.
Arguably, remote drivers could help train the AI. What’s gonna happen is they’ll change road-building standards to make the problem solvable (bury RFID in the road markings, build bigger clearances like the interstates, etc.). $$. Opportunity for firms with vision to Disrupt the highway construction business :-/
Hey Uber, here’s my road, it has 698 significant curves in the 25 miles and 7,000 vertical feet it climbs. There are many sections where there is room for 1 1/2 cars, and if there is an impasse with another driver, one of them has to put it in reverse, sometimes for hundreds of feet, to allow passing room. Blind curves are a dime a dozen, if not cheaper.
https://www.youtube.com/watch?v=0abPq6uw-xo&t=51s
OMG!!! Your road is terrifying.
“This road is the “king” of dangerous roads. In California, there are few roads that are as dangerous as Mineral King. The journey down this strip is a daunting 24 miles long and includes very narrow roads that are hard to see around. These narrow roads include sharp turns that can be impossible to make when fast drivers are passing around other vehicles. This road is notorious for causing drivers to spin out of control head on into other drivers who are coming from the other direction. Believe it or not, large semi-trucks still use this road for transporting goods back and forth between states. The roads are considered too narrow and it becomes impossible to drive it. Mineral King Road is home to many complaints, accidents and mishaps down its 24 mile stretch. The road is often not large enough for two vehicles to pass in some areas and must be taken with extreme caution. Mineral King can be avoided and should be if you are unfamiliar with the route. Protect yourself and take another route to your destination.”
http://www.smlawca.com/californias-3-dangerous-roads/
Pre-recorded voice from the Phantom Car service as your AV is about to drive off a cliff:
“We’re sorry. All of our operators are currently busy taking control of other cars or you’re in an area with a poor connection. Please try again later. Have a nice day and thank you for choosing Phantom.”
And, don’t forget, your call is important!
In reference to automation and AI let me ask you if you have ever gone into a modern, high-tech restroom, and, while trying to place the rear-end-interface (A22) gasket, the bulk receiving device, upon your righting and rotating the human component, flushes itself, taking the just placed A22 gasket into the processing system before the rear-end-interface receiving device connection can be completed? Fortunately, when the above happens, after several attempts, the human component can adjust its approach/geometry/timing to accomplish the connection and sit down, having earned the right to relax while the intended purpose is realized. And this automation/AI is supposed to drive our cars?
The obvious solution for software is ALWAYS to redefine the problem when deadlines loom. This might help them get started:
The trick in software is set up the problem correctly to start with. The rest just seems to fall (or slam) into place.
Yes, as someone who regularly programs software that deals with remote objects, this idea is simply nonsense. Authentication alone is enough to kill the idea. Even if your AV is dragging a fiber optic cable behind it, authenticating someone who is trying to connect to the vehicle with a non-trivial encryption algorithm will take long enough for the impending disaster to occur.
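To make the arithmetic behind that point concrete: before any control data flows, a secure connection has to be negotiated over the network. The sketch below is an illustration only; the round-trip counts loosely follow how TLS-style handshakes work (TLS 1.2 needs roughly two network round trips before application data flows, TLS 1.3 roughly one), and the RTT values are assumptions, not measurements.

```python
# Rough time budget for establishing an authenticated session before any
# control data can flow. Round-trip counts approximate TLS-style
# handshakes; the RTT values are illustrative assumptions.

def session_setup_ms(rtt_ms: float, handshake_round_trips: int) -> float:
    """Milliseconds spent on network round trips alone, ignoring
    certificate validation and key-exchange computation."""
    return rtt_ms * handshake_round_trips

if __name__ == "__main__":
    for rtt in (50, 150, 300):  # plausible cellular round-trip times, ms
        for trips, label in ((2, "TLS 1.2-style"), (1, "TLS 1.3-style")):
            ms = session_setup_ms(rtt, trips)
            print(f"{label} at {rtt} ms RTT: ~{ms:.0f} ms before any control data")
```

Even in the optimistic case, tens to hundreds of milliseconds pass before the remote operator can send a single steering command — and that is before any video of the scene starts streaming back.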
As mentioned above, liability issues are also going to be a major impediment. No insurance company that wants to stay in business is going to issue policies to cover the phantom operators without -very- hefty premiums, assuming they are dumb enough to even risk it. Anybody that self-insures will simply go bankrupt after one or two catastrophic incidents.
I suppose the idea might be feasible for forklifts, or perhaps bulldozers.
They do it with bulldozers and other construction equipment now. Or, rather, they can do it.
But, the operator is looking at the piece of equipment, while standing near it.
It’s normally only done when putting a person inside the piece of equipment would expose that person to some sort of safety hazard.
Bulldozers also don’t move at 60 mph.
And that’s the kind of scenario where it makes sense. Another case I could see for regular passenger vehicles would be if the driver was incapacitated due to a medical event or something similar.
Trying to do it remotely, purely via instruments, for constant regular driving operation is insanity. I think I would actually feel safer sharing the road with a drunk driver than I would with a car being operated in this manner.
The comment about authentication is interesting, didn’t occur to me until you mentioned that.
As someone who’s regularly written software that is extremely sensitive to latency, I also find the idea of running the remote control over any type of wireless network with their fairly inconsistent latency rather concerning.
A real problem is that at least one class of problems that would make self-driving inoperative would also have a real negative impact on teleoperation: bad weather. Heavy rain or snow can make driving too difficult for a computer AND can randomly knock out communications. Anybody with satellite TV has seen their picture go out in a storm. So this is no help in un-bricking self-driving trucks in bad weather.
Take the Tesla challenge:
For off-the-wall [sorry] fun, it beats swallowing Tide pods. :-)
Tele-operation = no skin in the game
For a certain unsound state of mind, this is a deadly temptation: crank that semi up to 85 mph and smash it into the biggest target available.
So I should start a business where I make an app that will call a taxi. A self-employed phantom driver will be sitting at home driving one of his many self-purchased AVs to pick you up. He can have AVs all over the world! It will always be peak billing time somewhere! And my drivers can be from all over the world! They’re willing to drive pretty cheaply in Iraq and Yemen! This just makes sense. I’m starting my first funding round. Who’s in for $10 million?
I have always found the self driving booster crowd confusing. They seem to have a near religious faith in the idea of self driving cars that blinds them to the obvious problems involved. It’s almost as if they have a deep emotional investment in self driving cars. Lost in the imaginary world of possibility in five years where these cars work perfectly and there are no problems or malfunctions at all, they don’t bother to consider the very real problems involved and the limitations of computer learning or infrastructure.
Talk about out of touch. Most of the younger people I know already have or are pursuing lifestyles that do not require cars.
What we need is better mass transit, and infrastructure designed to reduce the need for private vechicles.
All this effort looks like the last gasp of a dead end technology. The problem is not that we have to drive the cars, it is that we have to use cars at all.
When did we vote on “self-driving” cars? Seems to me to be another step in the direction of a totalitarian government that will “decide” if you need to take that trip. For the record, and for what it’s worth (probably nothing), I vote NO!
The ‘logic’ chain of these ideas is anything but.
“While we can’t manage one instance of a self driving car, or even a remote controlled car, we can only expect that after *miracle* that we’ll be able to claim massive scale advantages and be able to self drive / remote control all of them because they’ll be networked!”
How can they “network” these cars together if they can’t rely on that network to be able to drive one car?
“while we are having trouble with one, we will undoubtedly have no problems at all with many.”
Are we talking about securitization?
If we can’t figure out how to keep trains from running into each other on the same set of tracks, or get a jetway to connect to a plane’s door, how can we let these cars loose on our highways and streets?
Ah. Well. What to say.
The current establishment meme is: “Robots are stealing our jobs and human workers are going to become obsolete and we are running out of workers and people need to breed like cattle or we need to import the surplus labor from the overpopulated third-world to prevent the crops from rotting in the fields and we will all starve because we MUST have more workers even though robots mean that we need ever fewer.’ If you can spot the contradiction, kudos! You are a racist. Or whatever.
Bottom line: it’s all rubbish. Current public discourse on this subject is incoherent. “Telepresence” is basically like the “Mechanical Turk” – an alleged chess-playing automaton that was really a dwarf concealed in the cabinet. “Oh look a self-driving car pay no attention to the Bangladeshi getting paid 50 cents an hour remotely piloting it because robots are taking our jobs and there is no need for human labor except for that needed to pretend that robots are actually taking human jobs” – we are so beyond satire.
Really, is anyone paying attention anymore?
Argument against….
This article reminds me of my moment of elucidation with regard to nuclear power. Anybody planning to authorise nuclear facilities needs to consider how they are going to ensure the recruitment and maintenance of the suicide squads* required to go in and clear up the body melting mess when the inevitable happens. Put the cost as a line item into the spreadsheet and see if it adds up economically. I’m thinking Fukushima here as a recent and ongoing capitalist example.
Pip-Pip!
*If not the military.
Fukushima was a design and planning problem. Your country is Earthquake Central and Tsunami Central as well. So build your plant right on the water, put your spent fuel pools up high in the building, and your emergency generators at ground level, and, for good measure, run the cables under ground. All to save money. It’s Casino Capitalism; throw the dice and take your chances.
I’m not an advocate of nuclear power, but -every- major incident at commercial plants has been due to operator error or design flaws, or a combination of the two. All were avoidable.
I’ll admit, I was surprised when I heard people advocating giving guns to teachers, but now I see it’s just a sign of the times.
When common sense fails, idiocy is there to replace it.
. .. . .. — ….
Hey, I’ve got an idea from this article. We already have drone operators, right? Well, how about we have teleoperators for passenger airliners? Most of the stress is at take-off and landing, and most of a trip is just cruising along, so in theory a teleoperator could control dozens of aircraft, amiright? I’m sure that most passengers would have no qualms about boarding an aircraft knowing that the pilot is safe in some room somewhere with no skin in the game.
Actually, I told a pork pie here. I did not just get this idea from this article. A few nights ago there was a story on the news about how, if there were ever a nuclear war, people should not be worried, as the President would be safe aboard his personal evacuation aircraft along with the Cabinet and all the staffers. The General giving the story also mentioned that the military boys would be safe in their hardened bunkers from any nuclear attacks, so everything was hunky-dory. Nothing like having no skin in the game to be able to make all these life-critical decisions, right?
One point I haven’t seen brought up is mental health. There are quite a lot of studies dealing with PTSD and similar mental health issues among military drone pilots that could apply here. Liability and similar, uh, “problems” could potentially be dealt with by the corporate legal team; however, nothing can fix reaction latency or the small window of time in those “edge cases” where the algorithm can’t decide whether to call for help or not.
Imagine someone going to work, only to witness a massive car crash on their first assignment, just because they couldn’t react fast enough. Depending on the placement of cameras, or rather, the POV of the operator, they might see even worse things, and even if the footage is cut immediately, their imagination could carry on “watching”.
Obviously, these kinds of HR issues could be solved by employing the “right kind” of people, but psychopaths aren’t the most responsible with others’ lives, as far as I know.
Can robotic cars be courteous? Will they stop in the rain and signal to a mother and her baby to cross a street even though they are not near a crosswalk?
Yes and Yes.
I think AI is reaching the limits of its epicyclic pattern matching: Human level cognition is not down this road.
You cannot separate intelligence from emotions.
Embodied, Emotional, Intelligence: Gotta choose one from each column.
I could be wrong, but why let them on the road unless the answers are yes?