If one were to inquire into the chief pathways by which technological change arises, there would be no shortage of avenues by which to arrive at a decisive answer.
And though I do not dispute that our culture’s mediums of communication are a force worthy of our consideration, other avenues of technological change are not difficult to notice.
The world of transportation is just one area of culture that is easy to observe.
And though I have covered some of the deficits of so-called “smart” vehicles once before, other developments merit our attention.
Why the standard gas-powered motor vehicle is now insufficient to accommodate our culture’s transportation needs, what consequences might arise from a human driver being ushered out of the driver’s seat, why our culture must adopt this experimental technology in spite of its documented risks, in what ways a human driver’s skillset is subordinate to a machine’s, why peak efficiency is more important than driver freedom, who decided that this reinvention of travel is necessary and desirable, and whose ends this transformation ultimately serves are questions not commonly addressed.
The point is that we, collectively, do not pause to consider technology’s capacity to take as well as it can give.
Indeed, those who laud technology’s benefaction without restraint believe, to borrow the words of Postman, that “technological innovation is synonymous with human progress.”
But we will find that the foundation of this mindset begins to crack under the weight of honest inspection.
Perhaps referencing some recent technological events in the world of transportation might be useful.
On August 10, 2023, a California state agency voted to permit Alphabet’s Waymo and General Motors’ Cruise to transport paying passengers day or night through the city of San Francisco. “The California Public Utilities Commission on Thursday sided with the companies in the face of vigorous opposition from some residents and city agencies,” read a report from Reuters. “Commissioners heard more than six hours of public comment from residents and special interest groups supporting or opposing the measure that would expand paid autonomous vehicle service…In favor of the expansion were technologists and residents who said they felt the cars offer a safer alternative to human drivers and are a critical boost to San Francisco's economy.”
The article goes on to say: “The companies, who applied with the commission for permits to expand taxi service, have said their vehicles are safer than distractible human drivers and have yet to cause a life-threatening injury or death….The vehicles, with empty driver seats and self-turning steering wheels, have become a common sight around San Francisco. Locals frequently document their driving hiccups on social media. The robotaxi proposal had divided San Francisco between locals who resent their city being used to test what they say is an unproven technology and those who say they feel the symbolic technology capital ought to be the leader in developing what could lead to fewer traffic accidents and injuries.”
In an attempt to halt the vote, city officials “wrote letters and spoke at hearings to bring attention to a string of incidents in recent months: A car stopping near the scene of a mass shooting, another getting tangled in caution tape and downed wires after a major storm and another blocking a firetruck from exiting a station for several minutes.”
It is interesting to note that one of these so-called “smart” vehicles can be immobilized by the simple act of placing a traffic cone on its hood while it is stopped. The vehicle then detects a recognized object to be avoided that is also registered as lying inside its own perimeter, which traps it in an instruction loop: it cannot proceed while the obstacle remains, and no movement it makes can ever clear the obstacle.
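To make that failure mode concrete, here is a minimal sketch of the logic involved. This is hypothetical Python with invented names (`plan_next_action`, `intersects`), not any vendor’s actual code; it assumes only a planner that refuses to move while an obstacle overlaps the vehicle’s footprint:

```python
# Hypothetical sketch of why a cone on the hood can deadlock a planner.
# Names and logic are illustrative assumptions, not vendor code.

def intersects(obstacle, footprint):
    # Axis-aligned bounding-box overlap test; boxes are (x0, y0, x1, y1).
    ox0, oy0, ox1, oy1 = obstacle
    fx0, fy0, fx1, fy1 = footprint
    return ox0 < fx1 and fx0 < ox1 and oy0 < fy1 and fy0 < oy1

def plan_next_action(detected_obstacles, vehicle_footprint):
    """Return 'drive' only when no obstacle overlaps the safety zone."""
    for obstacle in detected_obstacles:
        if intersects(obstacle, vehicle_footprint):
            # The cone lies *inside* the vehicle's own footprint, so no
            # motion can ever clear it: the planner waits indefinitely.
            return "wait"
    return "drive"

footprint = (0.0, 0.0, 2.0, 5.0)        # vehicle outline
cone_on_hood = (0.5, 4.0, 1.0, 4.5)     # sits inside the footprint
print(plan_next_action([cone_on_hood], footprint))  # prints "wait"
```

Because the obstacle travels with the car, every planning cycle reaches the same “wait” decision, and only outside intervention (removing the cone, or a remote operator) breaks the loop.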
According to The Washington Post’s account of the vote, “There was also a more organic protest movement that stemmed from residents. In videos that went viral on Twitter, a group of people found that placing traffic cones on the nose of the vehicles disables them and causes them to stall. Their goal: to highlight how easy it is to confuse the technology, and also pressure state regulators to halt the expansion of these cars on San Francisco’s streets.”
The paper adds that, “According to the data analyzed by The Post, there have been at least 236 collisions reported by companies with cars operating in fully autonomous mode in California since 2019 — most relatively minor. That does not include the many other examples of issues the cars have run into when they were operating in manual mode, or after the autonomous car was taken over by a human driver…Just a few months after a Chinese start-up, Pony.ai, scored a permit to test its driverless cars on California’s roads in 2021, one of its cars rolled over a center divider in Fremont, Calif., and mangled a traffic sign.”
The article further noted that, “That same year, an Apple car near its campus in Cupertino bumped into a curb at about 13 mph. No one was hurt, but it misaligned a wheel on the car. In two separate instances in May and June of this year, Amazon’s Zoox was involved in two minor crashes in San Francisco that caused injuries to the human drivers in the car.”
As succinctly put by San Francisco officials, in comments made before the vote took place, “Waymo driverless AVs have committed numerous violations that would preclude any teenager from getting a California Driver’s License.”
Of course, there is no shortage of past incidents that have effectively illustrated the technology’s shortcomings.
In 2021, a Waymo self-driving taxi was stranded at an intersection in Chandler, Arizona while transporting a passenger. According to reports, a tech support team arrived to free the vehicle, but the car then suddenly drove away and pulled over again, in turn blocking a three-lane road.
We are told that the self-driving errors, in this instance, were attributed to the presence of plastic traffic cones surrounding a construction site.
“First, the Waymo vehicle paused at a stop sign rather than turning onto a street lined with cones,” read an article from CNN. “Waymo told CNN Business that guidance provided from one of its employees to revise the car’s trajectory was ‘improper,’ and declined to elaborate…The car then completed the turn, but soon stopped in the road, blocking part of a lane of traffic. Construction sites are known to be a challenge for fully autonomous vehicles because they rely on detailed maps of their environment to navigate safely. When the car’s environment changes, such as with traffic cones or lane closures, it can struggle to operate at its best. Following a four-minute stop, it backed up slightly, further blocking a traffic lane. Human motorists had to cross a double yellow line to go around the Waymo vehicle. Some honked.”
The article continues: “A few minutes later, the Waymo car pulled away, surprising a Waymo worker who was explaining to the van’s passenger… that roadside assistance was on its way…’Are we moving?’ the worker asked in a confused tone. Further down the road, the Waymo van halted again, amid yet more cones…’You better hurry up, it’s going to escape,’ [the passenger] warned the Waymo worker. Then, as the human driver approached, the Waymo car drove away again, but only a short distance. ‘I don’t even know what’s going on anymore,’ [the passenger] said in the video.”
“The first [error] was understandable. The second was strange. The third one was jaw-dropping and the fourth one I threw up my hands,” remarked Noah Goodall, a University of Virginia scientist who researches vehicle communication and automation, to CNN Business.
The passenger arrived roughly twenty minutes late to his destination and received a Waymo refund.
Earlier this year, a report by the San Francisco Standard found that incidents involving autonomous vehicles owned by Waymo and Cruise more than tripled between January and April 2023, rising from 24 to 87.
On May 21 in eastern San Francisco, a Waymo self-driving vehicle struck and fatally injured a small dog.
Other defects in self-driving technology are also important to grasp.
As detailed by a study from researchers at the University of Michigan, University of Florida, and the University of Electro-Communication in Japan, a laser aimed at the LIDAR guidance system of a self-driving car can disrupt its sensors and trick it into not seeing a pedestrian or other obstacle in its way.
“Self-driving cars, like the human drivers that preceded them, need to see what’s around them to avoid obstacles and drive safely,” remarked the science and technology news release website EurekAlert!. “The most sophisticated autonomous vehicles typically use lidar, a spinning radar-type device that acts as the eyes of the car. Lidar provides constant information about the distance to objects so the car can decide what actions are safe to take. But these eyes, it turns out, can be tricked.”
As put by the paper, “The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions. In this paper, we present a method invisible to the human eye that hides objects and deceives autonomous vehicles' obstacle detectors by exploiting inherent automatic transformation and filtering processes of LiDAR sensor data integrated with autonomous driving frameworks. We call such attacks Physical Removal Attacks…In our moving vehicle scenarios, we achieve a 92.7% success rate removing 90% of a target obstacle's cloud points.”1
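The mechanism the researchers describe can be illustrated with a toy model. The sketch below is a deliberately simplified assumption, not the paper’s attack code: it supposes that each LiDAR beam reports only its nearest echo, and that the perception pipeline discards very-near returns as sensor noise, so spoofed close-range echoes end up erasing the real obstacle behind them.

```python
# Toy illustration (assumption, not the paper's code) of a "Physical
# Removal Attack" against a simplified LiDAR pipeline.

MIN_RANGE = 2.0  # meters; assumed filter that drops very-near returns

def closest_return(real_range, spoofed_range):
    """A LiDAR beam reports only its nearest echo."""
    if spoofed_range is None:
        return real_range
    return min(real_range, spoofed_range)

def perceived_points(beams):
    """Apply the minimum-range filter the detector relies on."""
    return [r for r in beams if r >= MIN_RANGE]

# A pedestrian stands 10 m away, spanning five beams.
real = [10.0] * 5
# An attacker's laser injects 1 m echoes on three of those beams.
spoof = [1.0, 1.0, 1.0, None, None]

beams = [closest_return(r, s) for r, s in zip(real, spoof)]
visible = perceived_points(beams)
print(visible)  # prints [10.0, 10.0]: 3 of 5 pedestrian points erased
```

The injected echoes first displace the pedestrian’s returns on those beams, then fall below the range filter and vanish entirely, leaving a hole in the point cloud where the obstacle should be.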
And, according to an award-winning study from a team at the University of Copenhagen, published on April 19, 2023, in the proceedings of the ACM CHI Conference on Human Factors in Computing Systems, autonomous vehicles have difficulty interpreting the subtle social cues that influence driving choices. As the study reveals, the inability of these vehicles to interpret social interactions, including body language and other signals, has significant implications for their performance and safety.2
“The ability to navigate in traffic is based on much more than traffic rules. Social interactions, including body language, play a major role when we signal each other in traffic. This is where the programming of self-driving cars still falls short,” observed Professor Barry Brown, from Copenhagen’s Department of Computer Science, in a university release. “That is why it is difficult for them to consistently understand when to stop and when someone is stopping for them, which can be both annoying and dangerous.”
The researchers reviewed 18 hours of 70 different YouTube videos taken by people testing the vehicles from the back seat. “One of the videos,” as detailed by the study research website Study Finds, “features a family of four waiting to cross a residential street in the United States. Despite the absence of a pedestrian crosswalk, the family wishes to cross the road. As the driverless car approaches, it slows down, prompting the two adults in the family to gesture for the car to continue. However, the car stops next to them for 11 seconds. As the family starts to cross the road, the car begins to move again, causing them to leap back onto the sidewalk.”
According to Professor Brown, who has studied self-driving car behavior for the past five years, “The situation is similar to the main problem we found in our analysis and demonstrates the inability of self-driving cars to understand social interactions in traffic. The driverless vehicle stops so as to not hit pedestrians, but ends up driving into them anyway because it doesn’t understand the signals. Besides creating confusion and wasted time in traffic, it can also be downright dangerous.”
Nonetheless, as is typically the case in technologically sophisticated cultures, such real-world technological consequences must take a backseat to what is to be reaped from these creations.
Naturally, I am not suggesting that our culture should deny self-driving vehicles the capability of ferrying humans in a novel manner, or prevent individuals from utilizing such technology of their own free will.
I mean only that it can be useful to be aware of technology’s veritable framework.
For my goal here has not been to doubt that engrossing excursions can be undertaken through autonomous vehicle technology.
But it is a fair thing for the foresighted to contemplate where this newfangled self-driving journey will ultimately take us.
1. https://arxiv.org/abs/2210.09482
2. Barry Brown, Mathias Broth, and Erik Vinkhuyzen. 2023. The Halting problem: Video analysis of self-driving cars in traffic. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 12, 1–14. https://doi.org/10.1145/3544548.3581045