Self-driving cars will soon be able to hide in plain sight. We shouldn’t let them.

Self-driving cars will soon become easy to hide in plain sight. The rooftop lidar sensors that currently mark many of them will likely get smaller. Mercedes vehicles with the new, partially automated Drive Pilot system, which mounts lidar sensors behind the car’s front grille, are already indistinguishable from normal human-driven vehicles.

Is this a good thing? As part of the Driverless Futures project at UCL, my colleagues and I recently completed the largest and most comprehensive survey of citizens’ attitudes toward autonomous vehicles and road rules. After more than 50 in-depth interviews with experts, one of the questions we decided to ask was whether self-driving cars should be labelled. The consensus from our sample of 4,800 UK citizens was clear: 87% agreed with the statement “if a vehicle is driving itself, other road users must know” (only 4% disagreed, with the rest unsure).

We sent the same survey to a small group of experts. They were less convinced: 44% agreed and 28% disagreed that a vehicle’s status should be advertised. The question is not straightforward. There are valid arguments on both sides.

One could argue that, as a matter of principle, humans should know when they are interacting with robots. That was the argument made in a 2017 report commissioned for the UK government by the Engineering and Physical Sciences Research Council. “Robots are artefacts,” it said. “They should not be designed in a deceptive way to exploit vulnerable users; instead, their machine nature should be transparent.” If self-driving cars are genuinely being tested on public roads, then other road users could be seen as subjects of that experiment, who should give something like informed consent. Another argument for labelling, a pragmatic one, is that, as with a car driven by a learner driver, it is safer to give a wide berth to a vehicle that may not behave like one driven by a well-practiced human.

There are also arguments against labels. A label could be seen as a waiver of the innovator’s responsibility, implying that it is up to others to recognize and accommodate self-driving cars. And it could be argued that a new label, without a clear shared understanding of the technology’s limitations, would only add confusion to roads already saturated with distractions.

From a scientific point of view, labels can also affect data collection. If a self-driving car is learning to drive, and other road users know this and behave differently, that could contaminate the data it collects. Something like this seems to have been on the mind of a Volvo executive who told a reporter in 2016 that, “just to be on the safe side,” the company would use unmarked cars for its self-driving trial on UK roads. “I’m pretty sure people would challenge them if they were marked, by braking very hard in front of an autonomous car or blocking it in the road,” he said.

On balance, the arguments for labelling, at least in the short term, are more convincing. This debate is about more than just self-driving cars. It cuts to the heart of how novel technologies should be regulated. Developers of emerging technologies often portray them as disruptive and world-changing at first, but once regulators come knocking, it is tempting to paint them as merely incremental and unproblematic. Yet novel technologies do not just fit into the world as it is; they reshape it. If we are to realize their benefits and make good decisions about their risks, we need to be honest about them.

To better understand and manage the deployment of autonomous vehicles, we need to dispel the myth that computers will drive just like humans, only better. Management professor Ajay Agrawal, for example, has argued that self-driving cars basically just do what drivers do, but more efficiently: “Humans take data in through sensors, cameras on our faces and microphones on the sides of our heads; the data comes in, we process it with our monkey brains, and then we take actions, and our actions are very limited: we can turn left, we can turn right, we can brake, we can accelerate.”