I do not understand why self-driving cars should ever be allowed as just OTS software.
I work for a company that develops fail-safe systems that force trains to obey the signaling system so that no collisions or unsafe movements can occur (we make fail-safe signal systems as well).
To reach a fail-safe state, there is a level of hardware, software, and design-principle complexity involved that it's clear these robot cars do not have. Clue: viewing footage from a camera feed could NEVER be fail-safe. Having to fall back to a driver is NOT fail-safe.
In the railway industry, this is what is expected to ensure the safety of both life and property. I will never understand how this will ever be acceptable from a safety perspective. Systems will fail; ensuring they cannot fail in a way that allows accidents is the trick. Unless the cars have some means to communicate with the environment beyond "cameras", someone will get hurt or die, and this pipe dream will all come crashing down.
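To make the distinction concrete, here is a toy sketch of what "fail to a safe state" means in software terms. This is not our actual code and every name in it is made up; the point is only that the fault path commands the safe state itself instead of handing the problem to a human:

```python
# Toy sketch only; every name here is hypothetical, not any vendor's real code.
# The railway-style rule: ANY detected fault drives the system to its safe
# state on its own. "Ask the human to take over" is not a safe state.

SAFE_STATE = "FULL_BRAKE"  # the one command we can always reach


class SensorTimeout(Exception):
    """Raised when a vital sensor cannot produce a trusted reading in time."""


def read_speed_sensor():
    # Stand-in for a vital sensor read; raises on any doubt about the value.
    raise SensorTimeout("no valid reading within deadline")


def compute_command(speed):
    return "HOLD" if speed < 1.0 else "SERVICE_BRAKE"


def control_loop(apply_output):
    try:
        speed = read_speed_sensor()
        apply_output(compute_command(speed))
    except Exception:
        # Fail-safe path: on any fault, command the safe state directly.
        # Note there is no "hand control back to the driver" branch.
        apply_output(SAFE_STATE)


if __name__ == "__main__":
    control_loop(lambda cmd: print("output:", cmd))  # prints: output: FULL_BRAKE
```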
I agree with all of the above except the bolded.
This is all wishful thinking. The current Tesla technology does the handoff quite well, especially when the system is overwhelmed.
And that also presented challenges, because they are selling that technology more as a third party to auto manufacturers. It is much harder than the incremental approach Tesla is using.

There's a reason Google completely abandoned the notion of split responsibility altogether last year: it doesn't work.
It may "work well" for current Teslas because it has so many "low confidence" factors that it will tell the driver to take the wheel nearly every 3 minutes - this is glorified cruise control.
I drive around them almost daily in Shadyside here in Pittsburgh... Never seen any issues. But who knows, they are on the road at all times.

Is Pittsburgh PA having any similar problems with the self-driving Uber cars?
If Pittsburgh isn't, I'm inclined to believe Uber and that it probably is driver error and not the self driving cars.
So a human ran the red light.
Well, it is Uber. Absolutely must love that company for some reason.

I love how corporate malfeasance is being dismissed just because this is a pet technology that many are emotionally attached to.
At the ultimate ends, yes. But this isn't what Uber is doing. The Tesla application is more appropriate within given regulations.

I agree, but it's also the safest, IMO. As soon as you ask the average driver to monitor an AI that isn't capable of handling everything on its own, you're asking for trouble.
I haven't kept up to date with Uber at all lately; why are they bleeding money?
I'm confused; Uber is saying it was human error. Like, was there a human driver at the time, or was it actually self-driving?
At the ultimate ends, yes. But this isn't what Uber is doing. The Tesla application is more appropriate within given regulations.
It seems that you swallowed the technobabble and not the data that every manufacturer, and every future third party, has.
I trust the incremental approach because it gives us real data.
Car manufacturers do share critical data. Just as Tesla sold its tech to Toyota and others for their approach to EVs.
As in: "When it works it's the car and when it fails it's the human."
I know you're being facetious but this has pretty much been the case.
I was wondering: how do police pull over a driverless car? Let's say it blows through a construction site: who gets the ticket? Who gets their license suspended? Who gets charged in case of a death?
Edit: Another thought: if there were a fleet of self-driving cargo trucks, could you just put up detour signs and direct them to "an abandoned warehouse"? How does the system read a detour sign and know where to go?
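I can only guess, but I'd hope it's something like cross-checking a detected detour against the planned route before obeying it. A made-up sketch of that kind of sanity check:

```python
# Pure speculation on my part, no vendor specifics: only accept a detour if
# the re-route still rejoins the planned route and doesn't add too much distance.
from dataclasses import dataclass


@dataclass
class Route:
    waypoints: list  # ordered (x, y) tuples for the planned trip


def detour_is_plausible(route, rejoin_point, detour_length_km, max_extra_km=5.0):
    """Reject detours that never rejoin the plan or add excessive driving."""
    return rejoin_point in route.waypoints and detour_length_km <= max_extra_km


if __name__ == "__main__":
    planned = Route(waypoints=[(0, 0), (1, 0), (2, 0)])
    print(detour_is_plausible(planned, rejoin_point=(2, 0), detour_length_km=3.0))  # True
    print(detour_is_plausible(planned, rejoin_point=(9, 9), detour_length_km=3.0))  # False: never rejoins
```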
On Wednesday night, the California DMV (Department of Motor Vehicles) issued a statement saying it would revoke the registrations of 16 cars owned by Uber, which the company had been using to test its self-driving system. The DMV said that the registrations were improperly issued for these vehicles because they were not properly marked as test vehicles.
They are taking their ball and going home.

Tonight, Uber e-mailed a statement saying that rather than apply for the permit required by the DMV, Uber would cancel its pilot program in the state. "We're now looking at where we can redeploy these cars but remain 100 percent committed to California and will be redoubling our efforts to develop workable statewide rules," an Uber spokesperson said.
And that's even with the DMV extending an olive branch and offering their full support to expedite the process.

In an e-mail to the press, a DMV spokesperson said, "Uber is welcome to test its autonomous technology in California like everybody else, through the issuance of a testing permit that can take less than 72 hours to issue after a completed application is submitted. The department stands ready to assist Uber in obtaining a permit as expeditiously as possible."