“Autonomous car hits bus”
This is a headline that is not unexpected; accidents happen,
and, whatever the hype, autonomous vehicles are still at the testing stage.
“Google said the crash took place in Mountain View on Feb.
14 when a self-driving Lexus RX450h sought to get around some sandbags in a
wide lane… The vehicle and the test driver believed the bus would slow or allow
the Google (autonomous vehicle) to continue…But three seconds later, as the
Google car in autonomous mode re-entered the center of the lane, it struck the
side of the bus”
However, “our test driver believed the bus was going to slow
or stop to allow us to merge into the traffic”; hence Google agreed that they
“clearly bear some responsibility”.
But what is, to me, scary is what Google has learnt from the
accident: “From now on, our cars will more deeply understand that buses (and
other large vehicles) are less likely to yield to us than other types of
vehicles”. It sounds like the algorithm will identify the type of approaching
vehicle and assign a different probability of the incoming vehicle slowing
down depending on its type and size.
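To make the point concrete, the behaviour described above could be sketched roughly as follows. This is purely a hypothetical illustration of the idea as I read it, not Google's actual model; the vehicle types, probability values, and threshold are all invented for the example.

```python
# Hypothetical sketch of "yield probability depends on vehicle type/size".
# All values here are invented for illustration; nothing is from Google's system.
YIELD_PROBABILITY = {
    "car": 0.7,
    "suv": 0.6,
    "truck": 0.4,
    "bus": 0.3,   # "less likely to yield to us than other types of vehicles"
}

def expect_yield(vehicle_type: str, threshold: float = 0.5) -> bool:
    """Decide whether to merge on the assumption the incoming vehicle
    will yield, when its estimated yield probability exceeds a threshold.
    Unknown vehicle types fall back to a neutral 0.5."""
    return YIELD_PROBABILITY.get(vehicle_type, 0.5) > threshold

print(expect_yield("car"))  # True: the planner counts on the car slowing
print(expect_yield("bus"))  # False: the planner waits for the bus
```

Under this kind of policy, the same manoeuvre is attempted against a car but not against a bus, which is exactly the size-dependent behaviour I find worrying.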
To me, that’s not a brilliant idea.
I do not think it is safe driving, or courteous driving, to
cause an incoming vehicle to slow down to avoid an accident with you.
Instead, you should at most assume that the incoming vehicle will not
accelerate, and that entering its lane will be safe for both vehicles (and
their occupants).
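The more conservative assumption I am arguing for could be sketched like this, again as a hypothetical illustration rather than any real planner: merge only if, at its current speed (no acceleration, but crucially no slowing either), the incoming vehicle cannot reach your position before you have cleared the lane, plus a safety margin. The function name, parameters, and margin are all invented for the example.

```python
def safe_to_merge(gap_m: float, incoming_speed_mps: float,
                  time_to_clear_s: float, margin_s: float = 1.0) -> bool:
    """Conservative merge check (illustrative only).

    Assume the incoming vehicle holds its current speed: we do not
    count on it slowing down, only on it not accelerating. Merge is
    safe if it arrives strictly after we have cleared the lane,
    with an extra time margin."""
    time_to_arrive_s = gap_m / incoming_speed_mps
    return time_to_arrive_s > time_to_clear_s + margin_s

# 60 m gap at 10 m/s -> arrives in 6 s; we need 3 s + 1 s margin: safe.
print(safe_to_merge(gap_m=60, incoming_speed_mps=10, time_to_clear_s=3))  # True
# 25 m gap at 10 m/s -> arrives in 2.5 s: not safe, wait.
print(safe_to_merge(gap_m=25, incoming_speed_mps=10, time_to_clear_s=3))  # False
```

The point of the contrast is that this check never depends on the other vehicle's type or size, only on geometry and current speeds.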
Assuming the incoming vehicle will slow down is a recipe for
accidents. Refining that assumption based on the size of the incoming vehicle
will only encourage people to buy larger vehicles.
This brings me to another question: who bears responsibility
for vehicular accidents involving autonomous vehicles, especially between one
autonomous vehicle and one ‘traditional’ human-driven vehicle? Will the AI
provider pick up the tab? In this case the crash was certainly based on a
decision by the AI.
So if this happens when autonomous vehicles are in
production, not just in testing, who will pick up the tab? If it is, say,
Google, would an individual (or an insurance company) try to sue Google, or
would Google just settle? In that case, would we end up with a two-speed
justice system?
I think there is a lot of potential in autonomous vehicles,
but more thought has to be put into the legislation and its implications
(especially in the insurance domain, because accidents will happen), and we
have to be very careful about what is being tweaked in the models of
autonomous drivers, and about the behaviours we are creating.
At the risk of sounding like the NRA: “It’s not the
technology, it’s the people using the technology.”
Source article:
http://www.reuters.com/article/us-google-selfdrivingcar-idUSKCN0W22DG