By Lambert Strether of Corrente
Usage note: I’m going to start saying “robot cars,” instead of “self-driving cars” (they don’t have selves), or “autonomous vehicles” (it’s too long to type, and anyhow “auto” implies a self, too. I guess we’ve had this category error for some time, come to think of it). And by “robot car,” I mean a fully autonomous Level 5 vehicle.
Let’s start by quoting Governor Cuomo’s deceptive press release (which I’ve helpfully annotated):
Governor Cuomo Announces Cruise Automation Applying to Begin First Fully Autonomous Vehicle Testing in New York State
Governor Andrew M. Cuomo today announced General Motors and Cruise Automation are applying to begin the first sustained testing of vehicles in [1] fully autonomous mode in New York State in early 2018. Through Governor Cuomo’s recent legislation allowing the testing of autonomous technology, GM and Cruise are applying to begin testing in Manhattan, where mapping has begun in a [2]geofenced area. All testing will include [3]an engineer in the driver’s seat to monitor and evaluate performance, and a second person in the passenger seat…. Cruise’s planned testing would be the first time [4]Level 4 autonomous vehicles will be tested in New York State…
At [2] and [3] we get some detail indicating that the testing is going to be carefully circumscribed, so the autonomy is "full" only for some definition of "fully." However, [1] and [4] are contradictory: "Fully autonomous" robot cars are, in the jargon of the field, "Level 5," not "Level 4," as we explained here. In lay terms:
From Dr. Steve Shladover of Berkeley’s Partners for Advanced Transportation Technology:
[High Automation]: [Level 4] has multiple layers of capability, and it could allow the driver to, for example, fall asleep while driving on the highway for a long distance trip…
That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.
[Full Automation]: Level 5 is where you get to the automated taxi that can pick you up from any origin or take you to any destination… If you’re in a car sharing mode of operation, you want to reposition a vehicle to where somebody needs it. That needs Level 5.
Level 5 is really, really hard.
And the Daily Mail, amazingly enough, provides better coverage than all the stories I am about to look at, including a chart with a more technical definition of the levels:
Level Four – The system can cope with all situations automatically within defined use [for some definition of "defined use" –lambert] but it may not be able to cope with all weather or road conditions. System will rely on high definition mapping
Level Five – Full automation. System can cope with all weather, traffic and lighting conditions. It can go anywhere, at any time in any conditions
Furthermore, GM’s Cruise Automation hasn’t even developed Level 5 software. (Neither has Tesla.) Fully autonomous, despite a lot of wishful thinking, means just that: “Fully.” “Autonomous.” It doesn’t mean “autonomous only in some areas and with a human in the loop.” Fully autonomous = Level 5 ≠ Level 4. So Cuomo’s press release is deceptive.
* * *

Now let’s look at how some major journalistic enterprises covered the story. We’ll see that most of them fell for Cuomo’s initial “fully autonomous,” and didn’t read on to see “Level 4.” Some of them also add interesting details that aren’t in the press release, or in other stories.
GM to test self-driving cars in N.Y. in early 2018: Gov. Cuomo
General Motors Co (GM.N) plans to test vehicles in fully autonomous mode in New York state in early 2018, according to New York Governor Andrew Cuomo.
The planned testing by GM and its self-driving unit, Cruise Automation, will be the first by a Level 4 autonomous vehicle in the state, Cuomo said in a statement.
Reuters, to its credit, read Cuomo’s whole press release, and mentions both “fully autonomous” and “Level 4,” but without noticing they contradict each other.
General Motors to test self-driving cars in New York City
The Detroit-based automaker, whose shares have risen 25 per cent in recent weeks on investor expectations that it could beat rivals to the introduction of a mass market autonomous vehicle service, will test Chevrolet Bolt fully autonomous electric cars in its most complex market so far: lower Manhattan.
The Financial Times falls for “fully autonomous,” and doesn’t mention “Level” anywhere in the story. Careless. Also, the hard problem is not Manhattan as a “complex market,” but Manhattan as a complex streetscape; exactly the sort of category error one would expect an organ like the FT — much as I love the pink paper — to make.
GM to Test Fleet of Self-Driving Cars in New York — Update
GM will deploy a fleet of self-driving Chevrolet Bolt electric cars early next year in a 5-square-mile section of lower Manhattan that engineers are mapping, said Kyle Vogt, chief executive of Cruise Automation, the driverless-car developer GM acquired last year…. [Manhattan, like San Francisco] offers a congested environment with a high concentration of hairy situations that fully automated cars must learn to navigate.
Dow Jones, like the FT, doesn’t mention levels at all, emitting instead vague terms like “self-driving” and “driverless,” and implying, without actually saying, that the Chevy Bolts will be “fully automated” (as opposed to “autonomous”). They do pick up, however, that GM’s testing will take place in a 5-square-mile area, which GM is mapping. (Existing maps won’t do, then?)
Self-Driving Cars Could Come to Manhattan
The driverless trials will include two passengers: an engineer sitting behind the wheel to monitor and evaluate performance, and a second person in the passenger seat, according to the governor’s announcement.
The New York Times doesn’t get into Levels either — I guess they didn’t read that far down in the press release, although, to their credit, they did interview some cab drivers[1]. Like Dow Jones, they equivocate with “driverless” and “self-driving” (though I suppose we could get into the semantics of what “self-driving” can mean with two people in the car, one of whom is an engineer).
Self-driving Chevy Bolts will roam New York City streets next year
General Motors will operate a handful of semi-driverless Chevy Bolts within a five-square-mile area in lower Manhattan for at least a year to test the technology.
The legislation also requires state police escorts to accompany the test cars, but how that’s implemented is still being worked out. Each car being tested in New York must also have a $5 million insurance policy.
Like Dow Jones and The Times, Recode doesn’t get into levels. Unlike them, it further qualifies the already qualified “driverless” with “semi” (because of the engineer behind the wheel, I suppose). The detail on the state police escort (!) and the $5 million insurance policy per car is not in the press release, however, or in any other story I read, so kudos.
GM will be the first company to test self-driving cars in New York City
Cruise Automation, the self-driving unit of General Motors, announced today that it will test its autonomous Chevy Bolts in one of the most torturously congested cities in the world: New York City. According to New York Governor Andrew Cuomo, the company will be the first to test Level 4 autonomous vehicles in the state.
To its credit, the Verge mentions “Level 4,” and does not say fully autonomous, unlike Cuomo. (And most of Manhattan is a grid; it may be torture, but it is not tortuous.)

General Motors PAC donated $17G to Cuomo months before picked to test self-driving cars in New York[2]
The company will be the first to test fully automated, or “level four” vehicles in the state, Cuomo’s office said Tuesday.
Conclusion
I know robot cars are seen as a technological inevitability, so why spend time on the story? But what we’re seeing is a test taking place in a five-square-mile area that has yet to be mapped[3], in a “Level 4” vehicle that is not “fully autonomous,” at least as the Society of Automotive Engineers defines the term, with an engineer behind the wheel, a passenger with carefully undefined duties, trailed by a police car, and with a five million dollar insurance policy. Frankly, this is an impressive enough technical achievement without going all giddy. Is it really too much to ask our famously free press to read beyond the press release and get the basics right?
NOTES
[1] The lead: “Pity the poor taxi drivers. First came Uber, now comes no one.” Paging Thomas Frank!
[2] Of course.
[3] And how come GM gets to pick its own test area, anyhow?
Semi-driverless seems to be a fairly accurate term here. My question is: how is that all that much different from checking one’s mobile while driving a regular car and only paying attention to the vehicle when absolutely necessary, as is currently the fashion?
Jetpacks or bust!
Semi-driverless car is a great descriptor for many of the hordes of vehicles circulating hell-A’s sclerotic arterial routes.
If cab drivers are replaced by self-driving cars, who will Tom Friedman interview for pithy insights into the way the world works?
(/snark)
More seriously, I’m not sure anyone’s thought this through. What happens when a long line of lidar/radar-equipped cars are all sending out laser/microwaves in the same direction at the same time? Do we run a first-year physics experiment into wave frequency additions/interference?
What happens when a nation-state (or a skilled hacker) sends out the command for the cars to misbehave – either by bricking them, or causing collisions? How about GPS jammers?
It’s easy to be dazzled by new shiny technology – right up until it bites you. Remember how the computer industry was thought to be “clean” because there were no smokestacks? Now, we worry about the hazardous solvents those “clean” companies released into the ground water…
I’ve seen some buzz from radar suppliers about introducing modulation techniques (in a similar manner to how 3G works) to mitigate cross-talk, but I’m not sure how far they’ve come on implementing this and how scalable they are. No idea how LIDAR would handle this, it’s not really my area.
GPS jamming and spoofing is a hot topic, especially in the defense industry after Iran tricked a US drone into landing on hostile ground in 2011. There are a lot of things you can do to identify spoofers or jammers and mitigate them, but I don’t think there’s any magic solution. Which is bad news for Pokemon Go…
I think this is the worst idea ever. The hackers will have a field day with this. And privacy? What’s that? Everyone has lost their minds. It’s just more tracking; someday these young people will wake up and it will be too late!
Technology is not looking out for us; it’s just Big Brother watching every single thing. Everyone has lost their minds.
Marketing fraud, like all of AI. It is possible to have trams … even trams that look like cars, that follow fixed routes (on wheels instead of rails). Think Uber without exploited Uber drivers. And those car/trams are coordinated with the street signals. But there is a liability issue with driverless trams today … which is why there is always a driver … that way the driver can be blamed, not the tram manufacturer/owner.
I can’t wait to see how these things operate when it’s snowing.
Or when people pop out between cars to hail cabs or jaywalk..or open car doors on the street side (which happens more often than it ought to).
Actually, they handle the “people popping out between cars” situation fairly well. At least, the one I rode in did (and that was 7 years ago). It was CMU’s DARPA Urban Challenge vehicle on their test track in Pittsburgh and that was one of the specific situations they had to deal with. Was pretty DAMN cool!
In other words, robot cars are really easy to game?
Game as “jump out in front to watch the overfed passenger bounce off their shoulder belt as the vehicle slammed on the brakes”??? Yep. You’re right. Safer would be to throw a beachball into traffic. Would have the same effect.
I’m not quite sure if you are trying to make a point, though.
agreed, which is one of the reasons this will only happen if the roads are given over to the robots and the humans are kept well away
Like that, yes. From the NC post linked to above:
In other words, the better that robot cars are at reacting to the environment, the more people will be able to game them by manipulating the environment.
As Clive points out, that logic leads to keeping robot cars on their own roadways, far away from people. So why not just build trains?
Adversarial testing for fun and profit!
How about when some pigeon poop gets on the robot’s optical sensor?
Maybe this is one of the positive aspects of the huge reduction in flying insects — there’s less likelihood of something splatting on the sensor. Still, I don’t think that robo-cars are important enough to compensate for a large scale ecological crisis.
For a mining application where there was lots of dirt and dust everywhere, a truck I saw had a little mini washer-wiper system for the LIDARs, kind of like some trucks have for their headlights. Different sensors all have their weaknesses, and I think snow is one of the most brutal things. GNSS and inertial navigation can still help a vehicle orient itself, but lane detection cameras become worthless, lidars and radars can get confused, and cameras will have a hell of a time especially when it’s dark.
Hah! Vitamin A and other carotenoids can help a human with night vision, but they won’t do squat for machinery.
Take the initiative and drop ol’ papa legba’s veve on them. The guardian of the crossroads is not amused by these soulless contrivances
Has anyone tested robo-cabs with blotto drunks or dementia sufferers? Every human cab driver has faced these, and it is taxing even with intuition as a guide.
As you suspected, that’s indeed a common fallacy: that car sharing / ride hailing won’t need labor. I sat in a cafe next to an Autolib’ bank of recharging stations in Paris: a constant parade of staff cleaning cars, which required that the driver be dropped off and then picked up because they couldn’t do a proper cleaning on the street. And then (electric but not autonomous cars) they had to move them from low to high demand locations, as the model is pick up anywhere, drop off anywhere. All very labor intensive. Not conducive to actually making any money. Since EVs are expensive, Autolib’ also has a slow replacement cycle, so most of the cars are visibly very well used, er, abused.
Torturously was the adverb used in the article in The Verge. Meaning painfully congested.
I stand corrected (I misread torturous as tortuous. English is the best language!)
Your comment does make me wonder, however, whether Manhattan’s grid pattern is the sort of special case that might make programming/training a lot easier, and the whole testing program a special case, since most of the turns are 90 degrees. Rather like testing self-driving Indianapolis 500 cars out on the track, where they only turn left, or is it right?
I don’t know! I am not a programmer by any means, but I can’t help but think that city congestion, in general, can lead to unforeseen problems. I grew up in a more country/suburban sprawl/beach-tourist-trap hybrid area that was relatively easy to navigate compared to the urban center I now inhabit. What I deal with now (turning lanes that jump out of nowhere, cars lined up on the side of the street at unpredictable locations, sometimes the rightmost lane just becomes a parking lane by some sort of default, not to mention a few five-point intersections) is all in a downtown, relatively grid-like area. I think a bird’s-eye view of the situation fails to take into account some of the on-the-ground complexity of the street.
Also, New York traffic patterns may be a little more leveled out compared to where I am (the Triangle area of North Carolina), which is just exploding in population. There’s almost no public transit to top that off.
I think Manhattan is a pretty ambitious test environment, even with the way they’ve artificially bounded the problem. It’s an urban canyon and crowded with mixed traffic (pedestrians, bicyclists).
I’m curious to know if they plan on testing these in the dark or in the snow.
I’m still trying to figure out what transportation problem is actually going to be solved by these vehicles.
That being said, it seems like people think that these will be just like regular cars, except that they’ll be able to drive on their own. It seems like the model being implemented will require significant back-end support, just to replicate what one human driving a car can do. How is that more efficient? For that matter, how is that safer? It’s just introducing more things that can break.
Moar monay for Uber and its ilk, by removing the expensive human?
> what transportation problem is actually going to be solved
Not being with other people, especially cab drivers.
There is the argument that the 37,000 lives lost annually to auto accidents will be saved. I should unpack that at some point. I would start by pointing out that rural states are over-represented in drunk driving deaths, and that robot cars aren’t likely to make it to rural states if any significant infrastructure investment at all is required.
Those numbers are overstated, I think. They’re assuming that these robot cars will always operate perfectly, and in perfect conditions, while comparing it to human drivers operating in all conditions.
what transportation problem is actually going to be solved by these vehicles
It’s not solving a problem; it’s three-card monte… look at the shiny geegaw your manipulator insists is your future while he and his associates fleece you blind.
You may recognize them by their snappy and frequently slightly reichy/Bond-villainy names… Otto, Uber, Elon, etc.
I can think of five major ones right off the bat:
1. Elderly and handicapped drivers (including the blind) can be mobile at low cost;
2. Drug/alcohol impaired driving (a major cause of accidents) would largely go away;
3. Sleep-deprived driving (another major cause of accidents) becomes a thing of the past;
4. Texting while driving (another major cause of accidents) would become irrelevant; and
5. The car could go park itself away from where you are dropped off – searching for parking spots near your destination is a major source of urban traffic.
That’s a good list, but I can see problems:
1. Elderly and handicapped (blind) drivers won’t be able to take over in emergencies, i.e., exactly when lives are at stake, so Level 4 won’t help them. And Level 5, so far, is a fantasy.
2. Drunk driving deaths aren’t evenly distributed geographically. I doubt the flyover areas where they disproportionately occur will get the infrastructure improvements needed to make the algos work. (I’m recalling the company president who complained that his robot cars didn’t work because the white lines on the road were faded or non-existent.)
3. On sleep, same argument as #1.
4. True, with a decrement for missing an emergency in Level 4. And surely the clever engineers in Silicon Valley could disable text functions when the phone “senses” it’s in a vehicle, so we don’t have to spend squillions of dollars on robot cars?
5. I’m not sure the driver would be comfortable with that. I suppose if the robot car is hired by the hour, who cares? But sending a very expensive piece of property I own off to park itself, I know not where, seems like a risky proposition. And that’s assuming the car seeking a parking spot isn’t gamed by, say, kids painting white lines on the road to lure it into a “robot car trap,” and then stripping it.
The original autonomous project, Europe’s Eureka Prometheus Project, was intended to address congestion. I recall sitting in lectures from traffic engineers back in the late 1980s being told that this was the alternative to building more highways. The promise was that autonomous cars would move in a more rational and predictable manner than human drivers, resulting in significant capacity increases for existing highways. The selling point was that the cost of implementing the project would be offset by not having to build more road capacity. Nobody seemed to consider that one obvious result would be people choosing longer commutes. But road engineers never liked having it pointed out that increasing traffic speeds only resulted in greater numbers of cars.
Safety was another issue that was regularly brought up, although from my memory of the lectures, it was very much secondary to getting people to work quicker and more reliably.
Prometheus was a public sector initiative, heavily led by transport engineers, hence the extreme focus on ‘efficiency’ in a narrow sense. I think it was only in the last decade that Silicon Valley clued into the potential for making money from it.
Flying into Instrument Meteorological Conditions requires a great deal of mandated redundant equipment, in addition to a rated pilot.
The lack of published equipment minimums is a clear tell that nobody knows what they are doing.
Where is the NTSB on accident and incident reporting requirements?
Airplanes require certified mechanics. Instrumentation requires frequent calibration and certification intervals.
So many missing details …
Don’t worry. The private sector is going to handle all this through innovation. “Light touch” regulation is the way to go.
So the standardization necessary to make this scheme work will be the sole property of whichever entities are able to withstand the liability challenges their reckless course of invention causes?
Once proven on the public roadways, it would be most efficient for the competitors to use the same open system. Otherwise they have to solve the problem of black-box algorithms second-guessing each other.
But maybe market conditions make the technically superior cooperative solution impossible?
Is there a parallel between Level 4 robot cars and the reported drop in piloting skills on airplanes with robot flying (aka autopilot)?
The humans become less able to respond in an emergency (skills require practice, and decay without continuing use).
Expecting humans to be able to snooze, yet also requiring them to be vigilant in case the robot meets a situation it cannot handle, appears to me to be such a contradiction as to make potential accidents worse.
Or am I completely missing the point?
Yes, I think that this reasoning applies.
I mean, the idea that “the vehicle drives itself until ZOMG LOOK OUT LOOK OUT I’VE GOTTA–” [screech crash]”… I mean, “until the human takes over in emergencies” won’t work out real well.
Sounds like Level 4 driving would be “hours of boredom punctuated by moments of extreme terror.” Who wants that?
Two articles: the first is on Nvidia’s new self-driving computer from 1/2016, bragging it has the power of 150 MacBook Pros. Next up is a similar article from earlier this month on the NEW New Nvidia auto computer, bragging that it has 10 times the power of the earlier one. In essence, that’s an admission from Nvidia that, despite their expertise, they underestimated the difficulty of the job by at least a factor of 10.
The article also mentions that “Most car companies have said they will probably skip Level 3 and 4 because it’s too dangerous, and go right to Level 5.” Meaning the levels are hogwash.
Better: Level 1: vehicle operating in a cooperative environment. Amazon warehouse robots might be a good example of this, but note lots of room for improvement. Level 2: vehicle operating in a benign environment. The shuttle described in the OP is an excellent example of this:
That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.
The fact there are not multiple commercial deployments of such systems speaks volumes on where development actually stands. Level 3: Operation in the natural world. That is to say, the vehicle is driving on crowded icy roads right after a football game, surrounded by drunks, dogs and revelers, while at the same time Ukrainian mobsters are hacking into your car’s computer to mine Bitcoin… :-)
That could also be a low-speed shuttle that would operate within a confined area, like a retirement community or a resort or shopping complex, where the interactions with other vehicles might be limited so that that helps keep it safe.
Isn’t that a train?
Not really because the scale is different.
Nevertheless, it would be funny if GM bet the company on what turns out to be a niche market: Self-driving golf carts in retirement communities, driving very slowly.
Great point, but I’m sure the greed-head/techno-utopians behind this robot car push are unfazed. Humans with severely degraded or never-cultivated driving skills means ‘TINA’ when it comes to Level 5 cars. Widespread adoption of Level 4 cars creates an impetus for Level 5 automation as humans lose their driving skills. The interesting question is how many people die in automation-related accidents while the technology for safe, functioning autonomous cars is developed? Does society have the tolerance for a sizable number of deaths attributable to buggy automation and end-user automation screw-ups? Can the court system or our corrupt elected officials stop the Waymos and Musks of the world? Sounds like we are going to find out.
As an airline pilot flying new model aircraft equipped with the latest automation technology the aviation world has to offer, I must admit I am skeptical. After decades on the market and constant refinements, the automation technology in airliners is quite buggy and frequently requires human intervention. The automation is just good enough to lull a trusting or lazy person into not paying attention, which is precisely where the danger lies. Based on my experience with automation and human factors I see a very painful rollout of this new technology. Mix in some shameless advertising and grandiose marketing promises, lax regulation, a poorly understood complex system embraced by a distracted and self-medicated public and it’s easy to imagine a bloodbath. So, yes, count me among those skeptical of this entire experiment.
> The interesting question is how many people die in automation related accidents while the technology for safe, functioning autonomous cars is developed?
As many as necessary, Jerry! What’s wrong with you?
The press release points out that:
I say bring those robot cars out to rural Michigan for some real novel situations. How about trying to navigate a severely pot-holed dirt road that requires slalom-like steering, while encountering a large combine coming at you from the opposite direction, just after a momma deer has leapt across the road but before its trailing fawns have made the crossing. That’s not unusual – that’s a morning commute.
One of my more cynical scenarios for infrastructure spending is that much of it goes for improving roadways in big cities, so we can have lots of robot Ubers (and not take the subway or buses with smelly proles). Your unusual situations in Michigan would then remain exactly the same. No robot cars for the flyover states!
My own cynical suspicion is that this is all an immense propaganda campaign to get trillions of public dollars spent on infrastructure that will allow the less-than-autonomous vehicles, that can actually be produced (eventually), to operate and make scads of $$$ for their makers.
I think you attribute far too much competence to the people putting out press releases. My experience in the industry is that there are a whole lot of people trying to climb the ladder and be “visionaries” who aren’t all that interested in learning about the technology itself, and they really believe the absurd stuff they say at conferences or in press releases.
There’s a bit of an interesting dynamic with “robot driving” technology, in that it’s super easy to build demonstrators that show something cool (2-3 engineers with the right competences could implement something like Mercedes’ autonomous runway clearing system in less than a year) but it is INCREDIBLY difficult to build robust systems that will operate on public roads under many different conditions, and harder yet if malicious actors are going to be taken into consideration. I always grit my teeth when I read press releases about stuff I’ve worked on, because the caveats are inevitably missing and no one sees all the ugly workarounds that went into getting the demo working.
And in what is becoming a predictable pattern, those damn russkies will manage to negate all that perfectly executed planning, technology, insight, dedication and funneling of billions to the deserving few through the use of some small band of underfunded trolls armed with all the surplus street paint they could acquire.
I call for the immediate banning of all white materials that could threaten the right to be driven by an infallible autonomous agent the Creator and owner of which you have waived any rights to sue (see section 3200.12, subsections g-z, of the end user loser license agreement you agreed to when purchasing this service) before a single one of your Betters is delayed in their urgent and important business of running you down.

No, you’re right: Air France flight 447, which crashed off the coast of Brazil a number of years ago, went down for pretty much exactly the reasons you describe.
Sorry, my comment was directed to Synoia
I used to have a link to a nicely written article about the Air France crash, discussing autopilot/autodriving design. (And the AF pilots had 2+ minutes to figure out the right thing to do {dive} and still did the wrong thing. I don’t think autodriving “failures” will allow the same leeway.) Of course I can’t find it, but while searching the web for it, I came across this one:
Artificial Stupidity: Fumbling The Handoff From AI To Human Control with some nice “money” quotes:
“That human-machine handoff is a major stumbling block for the Pentagon’s Third Offset Strategy, which bets America’s future military superiority on artificial intelligence.” Boy I feel *so* much safer now.
“…the combination of human and artificial intelligence is more powerful than either alone. To date, however, human and AI sometimes reinforce each other’s failures.” Mutually assured destruction?
“Handing off to the human in an emergency is a crutch for programmers facing the limitations of their AI,” said Igor Cherepinsky, director of autonomy programs at Sikorsky: “For us designers, when we don’t know how to do something, we just hand the control back to the human being… even though that human being, in that particular situation (may) have zero chance of success.”
“You can get lulled into a sense of complacency because you think, ‘oh, there’s a person in the loop,’” said Scharre. When that human is complacent or inattentive, however, “you don’t really have a human in the loop,” he said. “You have the illusion of human judgment.”
“The inherent difficulty of integrating humans with automated components has created a situation that has come to be known as the ‘dangerous middle ground’ of automation – somewhere between manual control and full and reliable automation.” It’s the worst of both worlds.
So yeah level 4 is great…until it’s level 0 and you’re not paying attention and have forgotten how to drive. How many lives is this supposed to save again? Illusion of judgement = delusion of benefit.
Great comment.
“Handing off to the human in an emergency is a crutch for programmers facing the limitations of their AI,” said Igor Cherepinsky, director of autonomy programs at Sikorsky: “For us designers, when we don’t know how to do something, we just hand the control back to the human being… even though that human being, in that particular situation (may) have zero chance of success.”
Yep, program computers to navigate the routine tasks. Train human users to expect situational expertise/”awareness” from the computers. Combine. Yikes!
adding: If companies called this Advanced Automated Switching (A2S) instead of Artificial Intelligence (AI) they would be more accurate, even if the more accurate name had less PR woo. Greater accuracy in naming would lead to clearer thought about both deployments and human (pilots, in this instance) training in the use of the machinery, imo.
So robot cars will degrade our driving skills, making us stupider drivers, while simultaneously handing us control during sudden emergencies. Surprise! What could go wrong?
Some good things in here, though the problems being described are generally familiar. E.g.:
This is a good point, rarely heard:
I think the point here is that *automated* systems are necessarily *more complex* systems, not only in terms of parts count but also in terms of the variety and difficulty of interactions expected of the operator, and the (explicit–see above) assumption that the operator is expected to be the backstop for all manner of system faults and situations that the system designers could not automate well. So an automated system requires *more and deeper* training. This runs exactly contrary to the normal capitalist assumption that automation allows dumber/cheaper humans to man the system, or ideally no humans at all.
This is really insightful:
You can read far and wide in automation literature and not find this point: The proper goal for automation is to *complement* the human operator and thus make the overall system *better*, not haphazardly *displace* the behavior of the human operator along with his or her strengths. This obviously requires deep and creative thinking, and a willingness to be honest about what both humans and machines do best.
+1
Excellent analysis XXYY.
I finally found the original article I was searching for: Crash: how computers are setting us up for disaster. Just a few revisits of the themes:
“This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.” ”
So, summarizing the paradox of automation:
1) Helps the less-skilled still do the task (under normal conditions)
2) Removes the need for practice, so the less skilled do not become more skilled and the more skilled become less skilled
3) Fails in “unusual” situations, precisely when a more skilled/most skilled response is needed.
4) Reliance on algorithms blunts our efforts to solve the problem other ways (since it solves some of the problems/part of the problem), which leads to even greater reliance on algorithms, etc.
“We fail to see that a computer that is a hundred times more accurate than a human, and a million times faster, will make 10,000 times as many mistakes. This is not to say that we should call for death to the databases and algorithms. There is at least some legitimate role for computerised attempts to investigate criminal suspects, and keep traffic flowing. But the database and the algorithm, like the autopilot, should be there to support human decision-making. If we rely on computers completely, disaster awaits.”
“An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?
When humans are asked to babysit computers, for example, in the operation of drones, the computers themselves should be programmed to serve up occasional brief diversions. Even better might be an automated system that demanded more input, more often, from the human – even when that input is not strictly needed. If you occasionally need human skill at short notice to navigate a hugely messy situation, it may make sense to artificially create smaller messes, just to keep people on their toes.”
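The “artificially create smaller messes” proposal above can be sketched as a simple hand-off scheduler: the computer, not the human, decides when manual control is exercised, and deliberately assigns the human a share of routine driving so skills stay fresh. This is purely illustrative; all names and the 20% figure are invented for the sketch.

```python
import random

class AttentionKeeper:
    """Hypothetical sketch: instead of the human passively monitoring the
    computer, the computer periodically hands the human a real segment of
    the task so manual skills stay practiced."""

    def __init__(self, manual_fraction=0.2, seed=None):
        # Fraction of routine segments deliberately given to the human,
        # even though the automation could handle them.
        self.manual_fraction = manual_fraction
        self.rng = random.Random(seed)

    def choose_controller(self, segment_is_routine):
        # Unusual segments always go to the human (that is where skill is
        # needed most); routine segments are occasionally handed over too.
        if not segment_is_routine:
            return "human"
        return "human" if self.rng.random() < self.manual_fraction else "computer"

keeper = AttentionKeeper(manual_fraction=0.2, seed=42)
trip = [True] * 8 + [False] * 2  # 8 routine segments, then 2 unusual ones
assignments = [keeper.choose_controller(s) for s in trip]
print(assignments)
```

The design choice mirrors the article’s point: the human is never a passive backstop, because the schedule guarantees regular manual practice regardless of how well the automation performs.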
So maybe we should regard Level 2 as the highest we *should* go, and require a certain % of driving to be level 1, and work to have the best Levels 1/2 we can?
From an online dictionary: Roam: move about or travel aimlessly or unsystematically, especially over a wide area. “tigers once roamed over most of Asia”
Using roam to describe a tightly-controlled driving area seems inappropriate.
Perhaps the buried metaphor comes from cellphones, not tigers?
Took a look at the State of New York applications for Autonomous Vehicle Demonstration and Testing and found a few more interesting tidbits.
First of all, page two of the application clearly specifies what Levels are allowed to be demo’d or tested:
So it may not even be legally possible to test fully automated, Level 5 vehicles in New York at this time.
To increase the gap between a real-world, Level 5 test and what is really being proposed here, Part II of the two-part application states:
What fun is that? (And how hard is it going to be to find a route without those two complications?)
Bonus factoid: The NY State Police Dept. is going to bill testers such as GM $92.73 per hour ($131.67 overtime) plus 53.5¢ per mile to supervise the ongoing festivities.
Excellent catch. Thank you!
So, apparently these things are going to be ready in a couple of years even though they can’t be legally tested in school zones. I can’t find a map of Manhattan school-zoned streets but here is a map of the schools:
So presumably the five-square-mile area won’t include streets near these schools. (Interestingly, a search on “schools” in Google Maps produced charter schools, but not all the public schools in the map above; and the search on “public schools,” as you see, doesn’t include elementary schools.)
Makes me think that mapping needs to be done from scratch.
The public school density in Lower Manhattan is greater than that shown on the map by at least a factor of three. Add in the few remaining Catholic schools, charter and other private schools (and no, they’re not public schools, no matter what their promoters say), and there’s no way to maintain the prohibition and conduct the tests simultaneously. They could perhaps skirt that by conducting tests on weekends when school is out, but that negates the whole point of “real-world” testing.
Every day that goes by makes me think this whole machine-governed car thing, at least according to the timetable(s) the Hype Machine is using, is part hustle, part mass delusion, part sinister social/urban engineering (readers please add to the list) …
Overlay a map of Manhattan road closures and you’re really talking some fun and games. GM is really going to need to do some next-level planning. According to the application, the testers of the robot car must submit detailed route specifications, including:
It’s going to look like a gerrymandered fiefdom.
This is the idea, yeah. To implement a base layer that maybe resembles something like you see there, and to have layers on top with things like curbs, lamp posts, lane markings, buildings and other things with high-precision that correspond to what a vehicle’s cameras, radars and lidars and such will perceive. Then each vehicle should synchronize with the map database and upload the picture of the world that it perceives to identify map changes.
It’s probably going to be extremely difficult to do this in places like construction zones (part of the reason why they’ve artificially changed the inputs in this article), and while you can maybe use it for orienting yourself in absolute coordinates, it’s no guarantee that a new obstacle won’t pop up for the first time. Plus doing any sort of meshing is going to be rife with bugs for the infinite corner cases of what sensors can spit out.
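The map-synchronization loop described above (shared base map, high-precision feature layers, vehicles uploading perceived discrepancies) can be sketched roughly as a diff between what the sensors report and what the map layer records. Everything here — segment names, feature categories, the flat-dict schema — is invented for illustration, not any vendor’s actual format.

```python
# Hypothetical sketch of the layered-map idea: a base map carries
# high-precision feature counts per road segment; each vehicle compares
# what it perceives against the map and uploads only the discrepancies.

BASE_MAP = {
    "5th_ave_block_12": {"curbs": 2, "lamp_posts": 4, "lane_markings": 3},
}

def diff_perception(segment_id, perceived, map_db):
    """Compare perceived feature counts against the map layer and return
    the discrepancies worth uploading for map-change review."""
    known = map_db.get(segment_id, {})
    changes = {}
    for feature, count in perceived.items():
        if known.get(feature) != count:
            changes[feature] = {"map": known.get(feature), "seen": count}
    return changes

# A construction zone removes a lamp post and obscures a lane marking:
seen = {"curbs": 2, "lamp_posts": 3, "lane_markings": 2}
print(diff_perception("5th_ave_block_12", seen, BASE_MAP))
```

Even this toy version shows where the trouble lives: a transient discrepancy (a parked truck, a sensor glitch) is indistinguishable from a genuine map change without further filtering, which is exactly the construction-zone problem the comment raises.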
I think a good part of this obfuscation (besides the laziness of the MSM), is designed to make “driverless cars” seem to be a foregone conclusion in the minds of the public. No way to fight progress marching on; etc. The decision has been made that we want and need this regardless of our own opinions.
Seems inevitable to me that some terribly tragic series of accidents will follow, due to owners stretching the capabilities of their Level 3 car to Level 5, or a glitch in the software, causing the car to enter wrong-way traffic on a freeway or turnpike.
And God help us when they really get rolling with those driverless semi trucks.
“Robot Car Killed My Baby!”
Which will actually happen. Of course, auto accidents kill people, too, but I don’t for a moment believe a real-world robot car implementation will eliminate all those deaths.
And “who” will be liable? The car company or the driver or the other person/car ? What will insurance rates be on “AI” driven cars? Is the whole AI thing an arbitrage on “who” (driver/car/other/car) is responsible. if courts defer to claims of algorithm perfection, and 2 car companies make that claim, how will courts decide. etc. (I begin to think basic coding in any language you like should be part of general education, the way English and maths and history are part of general education.)
And “who” will be liable? The car company, the driver, or the other person/car? What will insurance rates be on “AI”-driven cars? Is the whole AI thing an arbitrage on “who” (driver/car/other car) is responsible? If courts defer to claims of algorithmic perfection, and two car companies make that claim, how will courts decide? Etc. (I begin to think basic coding in any language you like should be part of general education, the way English and maths and history are.)
adding: To be clear, I think a general coding course should be required in general high school and college education to demystify the claims made for computer algorithms. Currently, too many in positions in law and government do not understand the arguments (through no fault of their own; it is the technical transition of the times), and cannot subject the arguments and claims to the “reasonable man” test, which in essence is the “common sense” test.
A general coding course is a course in LOGIC. Period. End of Story.
Not entirely. It is also a course in clearly expressing the logic, testing its implementation, and integrating it into the real world (making it usable).
Alright, you got me laughing out loud.
Okay, I am going to lob something in here, something I have brought up before in discussions about autonomous vehicles. If I am talking to AV zealots, they wave me away (or worse). But I would like a considered answer.
In their fullest flowering, AVs are truly driverless. As in (an oft-cited example): “You can send the car to your daughter’s school, empty, and she can get in and be driven back home.” (Leaving aside what happens when the offspring refuses to get in the car for now…)
If this kind of car ever exists, why can I not place an IED (triggered remotely or via timer) on the front seat (or in the trunk), and send the car off to some crowded street in New York, Delhi, or Manila?
If I were being hopelessly snarky, I guess I could say that this sort of automation puts suicide bombers out of work. But more seriously, is an AV the terrorist’s perfect weapon?
Cars are one of the most highly regulated areas of modern life for a reason and that reason is that they were and are part of most criminal enterprises. Ideas like “shotgun” and “getaway driver” are pretty much ingrained in our language by now as weaponized uses for cars.
But you’re thinking a step too far. You could just drive the autonomous car into a crowd of people. Or the kid could. Or the computer could. Or a hacker could. Or whatever.
I don’t see autonomous cars existing in twenty years. One is going to plow through a marathon and that’s going to be the end of it.
Very good points.
i.e., insurance companies are going to spreadsheet the probabilities and set insurance premium rates accordingly. imo.
We are now at what I call Peak Auto. That shows up in the Boards of car companies pushing CEOs to try to catch the euphoria around Tesla and Uber. Autonomy is another such play.
In a previous Peak Auto car companies (Japan, Europe, US) all bought car rental companies. All then unloaded them, at a loss. This time they’re buying taxi companies, which I expect to end even worse, as per your reporting on Uber.
But GM’s shares have risen with the ongoing tech makeover, and that will get other Boards to double down on their bets. The reality is the companies are all caught in a prisoner’s dilemma, spending billions now but with any revenue stream years away, and profits even further. After all, if everyone has the latest ADAS features, automated emergency braking and lane keeping and all that, then no one can charge a premium for them. And they’re fast becoming standard features, not ways to differentiate your products.
Raise costs, not revenues is a bad business model. What you are describing goes way beyond sloppy journalism.
> Raise costs, not revenues is a bad business model. What you are describing goes way beyond sloppy journalism.
Everything goes way beyond sloppy journalism :-) But I wanted to get a reading on how far I could trust the coverage. And the answer is: Not at all. I mean, when the Daily Mail blows away the competition from the Times, the FT, Reuters, etc., we truly are living in Bizarro World.
> Raise costs, not revenues is a bad business model
I wonder if Uber has so many internal pathologies because, given its business model, the only way to make money is to cheat. And I wonder how many other Silicon Valley companies are like that (Amazon getting its start by evading state and local taxes, for example).
I seem to recall a protest right after the national 55 mph speed limit was imposed after the oil shortages in the ’70s. Three fellows drove their cars side by side at exactly 55 mph across country on Rt. 80, not letting anyone pass them. The traffic jam behind them was miles and miles long. People were furious: late to work, trucks delayed, etc. I can’t help but think a similar thing will happen with all these computer-controlled cars all driving at the legal limit.
Here in Jersey the GSP is posted at 65 mph… but if you’re not going 80 you’ll get passed… somehow it works…
They don’t have selves! Excellent point. “Robot Cars” it is then. Though “Zombie cars” is also good:
https://www.opednews.com/articles/Google-Is-Building-A-Zombi-by-Anthony-Kalamar-Automobiles_Capitalism_Commodities_Consumerism-130902-903.html
Earthquakes, floods, fire, and riots?
I remember driving after the last serious Bay Area earthquake, and between the downed, damaged, blocked, coned-off roads, freeways, and overpasses, the fires, the lack of power, plus days (weeks?) of no traffic lights, just how would these autonomous vehicles do? It was not even that bad of an earthquake. Rather a mild one, for all that some died. How would these nice self-driving cars do in those situations?
I understand the dream. Heck, how many science fiction stories have them? It would be so cool. But. Instead of these toys, what about catching up on all the deferred maintenance like potholes, doing all the repair work, actually doing all the planned road, freeway, and mass transit (including bus, BART, and light rail) expansions that have been in the works for decades, some since the 1960s, and maybe rebuilding and expanding the problematic conventional rail, as well as building a complete high speed rail system? Oh, and doing the same with the energy grid, starting by jailing the entire senior management of PG&E, so it could reliably function and support the increased energy demands.
If we did all this, I could see working on autonomous vehicles. To make them work at all requires at least some real work on the roads. Maybe they would work at least in some areas. Bring back mobility to some. And it would be cool to see.
But all of that would require taxes, bonds, serious detailed long-term planning, funding, and construction, like adults in a functional government at the state, regional, and municipal level. It’s much easier to do light fluff research and investment by our Lords of Silicon, and fluffier advertising by the “news” media.
Can you imagine what amazing public transportation we could have with existing Level 4 technology? Automated vans running in dedicated lanes would allow more responsive service (fewer stops mean faster trips) on existing routes and new service on routes and in communities that can’t support normal buses.
In the wealthy Anglosphere countries we still live in a profit-driven, rather than a people-driven, society, and so that won’t happen. But maybe Denmark or somewhere? Or if Corbyn wins big?