- Consumers should question the callousness of automakers using a human driver as a “component” of their safeguards.
What’s at stake: If you think a faulty sensor triggered a BMW to automatically accelerate to 110mph on a U.K. country road, think again. The problem is systemic. The incident exposes the inability of many carmakers to understand the relationship among individual modules to ensure system-level safety.
By now, we hope a Sunday Times of London report, BMW cruise control ‘took over and tried to reach 110mph’, has become required reading for every system engineer developing AI-embedded ADAS vehicles, and for consumers eager to embrace automated vehicle features. The story’s alarming subhead reads, “A motorist was sent hurtling over the limit when his car’s technology misread signs.”
Far from a one-off glitch, the incident demonstrates that auto sensors can misread speed limit signs. An advanced automated feature – BMW’s Speed Limit Assist – enabled the car to act autonomously, accelerating the BMW X5 toward 110mph on a 30mph village road in the U.K. county of Essex.
I’m focusing on the BMW incident because the story is, on many levels, full of teachable moments. If we learn anything from this fiasco, the lessons should apply beyond BMW to every car OEM and top-tier supplier developing ADAS features.
The easy way out for carmakers is to attribute a failure to an individual component and its software. That’s BMW’s alibi. As The Times reported, a BMW representative told the driver – who experienced the trauma of his vehicle “taking over” without permission – that “there was ‘no fault with the car’.” The problem, according to BMW, involved a sensor “picking up writing or numbers on the side of the road.”
In that statement, BMW unwittingly acknowledged that it had screwed up its system-level engineering. The incident underscores shortfalls in OEMs’ system-level design, testing, verification, and validation of autonomous vehicles and of ADAS cars loaded with AI-driven features.
Cross-checking
Among carmakers’ minimum responsibilities is cross-checking ADAS components to determine whether they function together as intended.
Missy Cummings, an engineering professor at George Mason University, told The Ojo-Yoshida Report: “My concern about this and related incidents is why there is no cross-checking of the speed limit with both the known speed limit on that road….” A digital map would have provided the local speed limit, and sensors could have detected local conditions such as time of day and weather.
Phil Koopman, a safety expert and associate professor at Carnegie Mellon University, agreed. “A vision-based speed limit sign system will have a substantive error rate, and the OEM knew this.”
In other words, BMW was aware this could happen.
Cummings continued: “The National Highway Traffic Safety Administration’s Standing General Order is replete with ADAS cars getting into accidents where the speed is too high either for the road type or too high for the weather conditions.” The U.S. regulator issued its Standing General Order last June, requiring crash reporting where automated driving or Level 2 advanced driver assistance systems are involved.
With ample data publicly available, carmakers have had time to add cross-checks to their vehicles to catch sensor errors. What have automakers done since last June? Their position remains “that the driver is responsible for mitigating dangerous failures of the feature,” noted Koopman.
The offense here is the callousness of automakers using human drivers as a safety “component.” The objective is shielding the company from liability rather than protecting drivers.
Recommended: Phil Koopman’s 2023 AV Predictions
Closer look
Colin Barnden, principal analyst at Semicast Research, shared his experience with technology and speed.
My car has traffic sign recognition and satellite navigation with GPS speed advisory. Two systems can advise me on the road speed limit. I have had many occasions where they have said different speeds, and they were BOTH wrong from the posted speed signs. Under U.K. law, I am required to obey the signs and that is the legal limit, not what my car tech tells me.
Barnden shared another unsettling anecdote:
I was driving on a motorway (speed limit 70mph) when the traffic sign recognition changed to 5 (5mph). The instrument cluster turned red. But the system was advisory, and I ignored it because it was wrong. I drove for maybe 30 miles before I passed a speed traffic sign and the system caught up with what was going on. There was no problem because the speed warning was advisory, and I was able to ignore it. But had the system been mandatory and either slowed the car (or worse applied the brakes) I would have been doing 5mph on a national motorway with no way to accelerate to the speed limit. This would have been an extraordinarily dangerous situation, particularly with the risk of being rear-ended by a heavy truck.
As Koopman clarified, BMW’s Speed Limit Assist “is not the usual safety ADAS-type feature that intervenes to reduce the severity of a mishap” such as automatic emergency braking. Instead, the feature “is actively controlling the vehicle. Going beyond normal cruise control, [the feature] has authority to increase risk automatically” by accelerating.
The issues then come down to when it’s acceptable for vehicles to take control and initiate changes as well as how—and how fast—drivers must consent to vehicle decisions.
“Technology for speed adaptation is not a panacea,” Barnden added. “It is not safe to automatically adjust the vehicle speed to whatever is on the sign without understanding if it is safe to do so. Equally, tech cannot override the judgment of the driver and enforce a maximum speed limit, because it too can be wrong.”
ADAS, automated driving differences
What Koopman has emphasized, via his podcast and his 2023 AV forecast in The Ojo-Yoshida Report, is the question of where to draw the line between ADAS and “automated driving.” In the case of BMW’s Speed Limit Assist, Koopman said, “When an automation feature can initiate significant change in behavior, such as dramatic acceleration, that is no longer driver assistance; that is an automated driving system that should be held to a higher standard of safety to the degree it introduces new hazards, especially when the driver has limited time to react to avoid an incident.”
He added, “We already know phantom braking is a problem. [The BMW incident] amounts to phantom acceleration, and in a high-powered car that is surely a problem as well.”
Asked what OEMs could have done to mitigate these defects without blaming the driver, Koopman advocated “sanity checks and cross-checks the OEMs apparently decided not to include.”
These include:
- Comparison to a national speed limit;
- Comparison to the speed limit on a map (even low-definition maps have speed limits, although they’re not perfect);
- Taking road type into account (number of lanes, turns, upcoming intersections);
- Gauging the speed of other vehicles, including those traveling in the opposite direction and on cross streets to calibrate the neighborhood speed limit;
- Remembering the speed limit from the car’s last trip on the same road.
In short, Koopman concludes, the speed-crazed BMW “could have taken the minimum of several sources of data, but it did not.”
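None of this requires exotic technology. A minimal sketch, in Python and using entirely hypothetical function and field names, of the “minimum of several sources” arbitration Koopman describes might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeedEvidence:
    """Candidate speed limits (mph) from independent sources; None = unavailable."""
    camera_sign: Optional[float]       # vision-based sign recognition (known to be error-prone)
    map_limit: Optional[float]         # digital map / navigation database
    national_max: float                # statutory maximum for the country (70 in the U.K.)
    road_type_cap: Optional[float]     # inferred from lane count, turns, upcoming intersections
    traffic_flow: Optional[float]      # median speed of surrounding vehicles
    last_trip_limit: Optional[float]   # limit remembered from the car's last trip on this road

def arbitrated_limit(ev: SpeedEvidence) -> float:
    """Never trust one sensor: take the minimum of all available sources."""
    candidates = [ev.camera_sign, ev.map_limit, ev.road_type_cap,
                  ev.traffic_flow, ev.last_trip_limit]
    available = [c for c in candidates if c is not None]
    # The statutory maximum always applies, even if every other source fails.
    return min(available + [ev.national_max])
```

Under this logic, a camera that hallucinates “110” on an Essex village road loses to the map limit, the road-type cap, and the 70mph national maximum alike.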
Cummings noted, “It is not difficult to develop code that cross-checks commanded speeds with either mapped speeds or reasonable speeds for known weather conditions. If your car is smart enough to turn on its windshield wipers when it rains, it should know better than to take an off-ramp at an unsafe speed, especially in the rain.”
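Her point about rain can be made concrete in a few lines. A minimal sketch, with hypothetical inputs (the wiper signal standing in for rain detection, and an illustrative, uncalibrated derating factor):

```python
def safe_commanded_speed(commanded_mph: float,
                         mapped_limit_mph: float,
                         wipers_on: bool,
                         rain_derate: float = 0.8) -> float:
    """Clamp a commanded speed to the mapped limit, derated when it's raining.

    rain_derate is illustrative, not a calibrated engineering value.
    """
    ceiling = mapped_limit_mph * (rain_derate if wipers_on else 1.0)
    return min(commanded_mph, ceiling)

# A 110mph command on a mapped 30mph road in the rain is clamped to 24mph.
print(safe_commanded_speed(110, mapped_limit_mph=30, wipers_on=True))  # 24.0
```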
Semicast Research’s Barnden spotted something else in the U.K. report: “I noted that the car was accelerating towards 110mph. That is above the U.K. speed limit [70mph], but there are signs in Germany for 110kph. The system is not even smart enough to distinguish between inputs that would be in mph versus those that would be in kph. There is no 110mph speed sign in the U.K. So the system doesn’t even ask the driver to confirm acceleration to a speed which is far above the legal speed limit in the country. It just goes with whatever it sees, or even with whatever it thinks it sees. That is pattern matching, not intelligent speed adaptation.”
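Barnden’s observation translates into another cheap sanity check: a recognized value that appears on no legal sign in the current country should be rejected, or at most offered to the driver for confirmation. A sketch, with an illustrative (not exhaustive) sign table:

```python
# Speed-limit values that actually appear on posted signs, by country.
# Illustrative subset only; a production table would come from regulators.
POSTED_LIMITS = {
    "UK": {20, 30, 40, 50, 60, 70},             # mph
    "DE": {30, 50, 60, 70, 80, 100, 110, 120},  # km/h
}

def plausible_sign(country: str, value: int) -> bool:
    """Reject recognized values that no legal sign in this country displays."""
    return value in POSTED_LIMITS.get(country, set())

assert plausible_sign("DE", 110)      # 110km/h signs exist in Germany...
assert not plausible_sign("UK", 110)  # ...but no 110mph sign exists in the U.K.
```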
Added Cummings, “It is a mystery to me why there aren’t more integrated efforts to understand more holistically what constitutes ‘reasonable’ operating limitations, and then have the car stick to them.”
AI must operate with humans
As long as human drivers rely on ADAS-equipped vehicles, there must be a better way for machines to collaborate with humans, rather than using humans to bear the brunt of machines’ mistakes.
Bryan Reimer, an MIT research scientist, noted during a CES 2023 panel, “Our [humans’] decision processes are not as fast as an AI system. And yet, humans respond to gray information much better than AI systems, which are programmed much more in black and white.”
A human strength absent in AI, Koopman notes, is “common sense.” Humans know that 100mph is unreasonable for all but a very few specific roads and racetracks. Humans also presume lower speed limits in urban areas. Without seeing a road sign, drivers understand it is reckless to “tear through a small town at highway speeds.”
Koopman proposes “better integration of the human driver into ensuring safety.” For example, ADAS can ask drivers to confirm speed changes before taking action. Machines can also be tuned to accelerate gradually, or not at all, once a car reaches the speed limit, unless the driver confirms a speed change. “This gives drivers time to react and override,” Koopman said.
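A minimal sketch of that confirm-before-accelerating behavior, using hypothetical interfaces:

```python
def next_set_speed(current_mph: float,
                   proposed_mph: float,
                   driver_confirmed: bool,
                   max_step_mph: float = 5.0) -> float:
    """Raise the set speed only with driver consent, and only gradually.

    Reductions apply immediately; increases require confirmation and are
    rate-limited so the driver has time to react and override.
    """
    if proposed_mph <= current_mph:
        return proposed_mph                 # slowing down needs no confirmation
    if not driver_confirmed:
        return current_mph                  # hold speed until the driver agrees
    return min(proposed_mph, current_mph + max_step_mph)  # small steps only
```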
Larger problems in systems engineering
Beyond measures carmakers could have taken in deploying automated features, Cummings said “a larger problem lies in a lack of mature systems engineering concepts in autonomous vehicle design.” She warned: “Well known to aviation, the need to look across individual modules to ensure functional safety as an integrated concept is still woefully lacking in ADAS/ADS design, especially in the area of testing and validation.”
Automakers must consider “how the inclusion of embedded AI could or should change the way systems should be conceived, designed and [tested], [including] the system engineering process.”
In a technical paper to be published in Frontiers in Neuroergonomics, Cummings writes that little tangible progress has been made by industry or government regulators to adapt testing practices to address AI blind spots.
A section about software updates stood out for me:
One major issue is the constant updating of software code that is a necessary byproduct of agile software development. In such design approaches, software can change in seemingly small ways, but then lead to unexpected outcomes.
In other words, as Cummings noted, “Without principled testing, particularly for software that can have a derivative effect on human performance, the stage will be set for potential latent systems failures.”
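One form such principled testing could take is a regression suite that pins safety invariants down, so that the “seemingly small” changes of agile development cannot silently alter them. A sketch, reusing the hypothetical arbitration code above and runnable with pytest:

```python
from speed_arbitration import SpeedEvidence, arbitrated_limit  # the sketch above

def test_sign_misread_never_raises_limit():
    """A wildly implausible camera reading must not raise the arbitrated limit."""
    ev = SpeedEvidence(camera_sign=110, map_limit=30, national_max=70,
                       road_type_cap=30, traffic_flow=28, last_trip_limit=30)
    assert arbitrated_limit(ev) <= 30

def test_losing_every_sensor_falls_back_to_national_max():
    """Even with no usable inputs, the limit never exceeds the statutory maximum."""
    ev = SpeedEvidence(camera_sign=None, map_limit=None, national_max=70,
                       road_type_cap=None, traffic_flow=None, last_trip_limit=None)
    assert arbitrated_limit(ev) == 70
```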
Bottom line
A decade ago, consumers marveled at the ability of automated systems to read road signs. That AI euphoria ended when carmakers started blaming human drivers for failing to mitigate automation mistakes. Today, BMW simply repeats the platitude that it “takes every customer concern, especially those that relate to safety, very seriously.” Sticking to its talking points, the company deflects: “BMW Speed Limit Assist functionality is a driver aid and is not designed or marketed as an autonomous driving function, and the driver remains responsible for ensuring they do not exceed the permitted speed limit.” Our advice? Caveat emptor!
Junko Yoshida
Editor In Chief, The Ojo-Yoshida Report
This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.