
Utopia Talk / Politics / Tesla allegedly staged Autopilot video
murder
Member
Tue Jan 17 19:08:48
Tesla staged Autopilot demo video, says director of software

Tesla’s much-hyped video of its Autopilot driver-assist system “driving by itself” from 2016 was not actually driving itself, according to Ashok Elluswamy, Tesla’s director of Autopilot software.

In a recent deposition, Elluswamy said that the video titled “Full Self-Driving Hardware on All Teslas” was intended to “portray what was possible to build the system” rather than what customers could actually expect the system to do.

The video, which Tesla CEO Elon Musk tweeted a link to saying that “Tesla drives itself,” shows a Tesla driving and parking itself, avoiding obstacles, and obeying red and green lights. The video starts with a title card saying that “the person in the driver’s seat is only there for legal reasons” and that “he is not doing anything. The car is driving by itself.”

But according to Elluswamy, the demo was “specific to some predetermined route,” unlike the production version of the tech, which relied only on input from cameras and sensors. “It was using additional premapped information to drive,” he said, after telling lawyers that the route the car followed had previously been 3D mapped. At the time the video was being made, Elluswamy was an engineer on the team that helped with the video.

In other words, Tesla’s Autopilot was not capable of dynamic route planning, instead requiring the company’s engineers to map out the route it would take for the purposes of the promotional video.

The New York Times had previously reported the premapping, pointing out that consumers using the system wouldn’t have that luxury, but now, we have it on the record from a Tesla official. The deposition, which you can read in full below, was taken as part of a lawsuit filed by the family of Wei “Walter” Huang, who died in 2018 when his Model X with Autopilot engaged crashed into a highway barrier.

Elluswamy also said that the version of Autopilot that was available when the video was produced had “no traffic-light-handling capability,” despite it being shown in the video. What isn’t clear is how exactly the video was made; Elluswamy says he doesn’t recall whether the person in the driver’s seat controlled any acceleration or braking or if the car did it. It’s also not clear if the car was running software capable of recognizing traffic signals.

The admission isn’t the only part of Elluswamy’s deposition that’s raising eyebrows. Mahmood Hikmet, head of research and development at Ohmio Automation, highlighted parts of the transcript where Elluswamy said he doesn’t know about fundamental safety considerations, such as Operational Design Domain, also known as ODD.

The phrase refers to situations, like geography or weather, in which an autonomous vehicle is allowed to operate. For example, if an autonomous vehicle is only capable of driving in a specific city in ideal weather conditions, then a rainy day in a different city would be outside of its ODD.
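
Purely as an illustration of the concept (the class, field names, thresholds, and cities below are hypothetical and are not drawn from Tesla, Waymo, or SAE material), an ODD can be thought of as a machine-checkable envelope of operating conditions:

    from dataclasses import dataclass

    @dataclass
    class OperationalDesignDomain:
        """Hypothetical ODD: the envelope of conditions a system is designed for."""
        allowed_regions: set        # e.g. geofenced cities
        max_speed_mph: float        # roads above this limit are out of scope
        requires_dry_weather: bool  # no rain or snow handling
        daylight_only: bool

        def permits(self, region: str, speed_limit_mph: float,
                    raining: bool, is_daytime: bool) -> bool:
            """Return True only if the current situation is inside the ODD."""
            if region not in self.allowed_regions:
                return False
            if speed_limit_mph > self.max_speed_mph:
                return False
            if self.requires_dry_weather and raining:
                return False
            if self.daylight_only and not is_daytime:
                return False
            return True

    # A system limited to one (placeholder) city in ideal conditions:
    odd = OperationalDesignDomain(allowed_regions={"Phoenix"}, max_speed_mph=45,
                                  requires_dry_weather=True, daylight_only=True)
    print(odd.permits("Phoenix", 35, raining=False, is_daytime=True))  # True: inside ODD
    print(odd.permits("Seattle", 35, raining=True, is_daytime=True))   # False: rainy day, different city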

While you wouldn’t expect the phrase to come up in everyday conversation or appear in marketing materials, it is definitely something you’d expect the person directing the Autopilot program to know about. The Society of Automotive Engineers (SAE), the organization behind the levels of autonomy that Tesla itself has referenced, calls ODD “the key to autonomous vehicle safety,” and Waymo put out an entire study evaluating its software’s performance in a specific domain.

Musk seemed to show disregard for thinking about ODD during a podcast appearance with Lex Fridman. He said that the acronym “sounds like ADD,” then proceeded to answer a question about the philosophy behind Tesla’s wide-ranging ODD (compared to other systems like GM’s Super Cruise, which will only work in certain conditions) by saying that it’s “pretty crazy” to let humans drive cars instead of machines.

http://www...ideo-pre-mapped-traffic-lights
williamthebastard
Member
Wed Jan 18 11:23:33
Even the staged demo failed and the car crashed while parking, apparently, despite following a pre-programmed, cleared pathway.
williamthebastard
Member
Wed Jan 18 11:29:50
A few more tidbits like that and Elon begins to come closer to the realm of involuntary manslaughter than supreme defender of babies against pedophilic marxist liberals
murder
Member
Wed Jan 18 11:35:51

He's definitely guilty but he'll never be prosecuted because he's been adopted by the DoD.

And the title of this thread says "allegedly", but he totally did it. :o)

williamthebastard
Member
Wed Jan 18 11:40:18
Fortune just published an article that straight up describes him as a fraud and possible criminal. Things have changed.
Sam Adams
Member
Wed Jan 18 13:59:56
Meh. A marketing video made to look better before a product was ready isn't that bad.

Calling something "autopilot" or "full self driving" and saying it will be ready each year when it isn't is marketing fraud for sure though.
Sam Adams
Member
Wed Jan 18 14:02:49
The key stat, which I have yet to see, is the accident rate under autopilot or fsd versus the average human rate on a similar road.
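
For what that comparison would even look like, a minimal sketch (every number below is a placeholder, not a real Tesla or NHTSA figure); the comment flags the "similar road" caveat, since driver-assist miles skew heavily toward highways:

    def crashes_per_million_miles(crashes: int, miles_driven: float) -> float:
        """Normalise crash counts by exposure so the two populations are comparable."""
        return crashes / (miles_driven / 1_000_000)

    # Placeholder numbers for illustration only.
    # NOTE: a fair comparison needs the *same* road mix -- autopilot miles are
    # mostly highway miles, which are already the safest miles humans drive.
    autopilot_rate = crashes_per_million_miles(crashes=5,  miles_driven=10_000_000)
    human_rate     = crashes_per_million_miles(crashes=12, miles_driven=10_000_000)

    print(f"Autopilot: {autopilot_rate:.2f} crashes per million miles")
    print(f"Human:     {human_rate:.2f} crashes per million miles")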
murder
Member
Wed Jan 18 14:11:43

You'll never see that data because technically no accident is the result of autopilot or fsd since drivers are supposed to be alert and ready to take over if autopilot or fsd fails.

I said a while back that the thing that will keep fsd from happening is liability, because juries would not be kind to auto companies. Congress or state legislatures will essentially have to grant automakers immunity from liability in order for it to happen and I don't think Republicans want that much heat.

williamthebastard
Member
Wed Jan 18 14:15:07
Fortune argues that Tesla's entire value has basically been built on FSD promises which are looking more and more like outright lies. Of course, one must remember that Fortune is a marxist pedophilic liberal freedom-hating rag
williamthebastard
Member
Wed Jan 18 14:21:12
probably a sound point that no auto-maker is ever going to want to take on that kind of liability
Seb
Member
Wed Jan 18 14:23:28
Sam:

An accident by a human driver is often the liability of the driver, and covered by their insurance.

An accident by FSD is whose liability? And who will underwrite it?
williamthebastard
Member
Wed Jan 18 14:25:37
Yeah, they're likely always going to want to include some fine print explaining that the driver must always have both hands on the wheel etc etc etc, completely negating the adjective "full" in FSD
Seb
Member
Wed Jan 18 14:27:38
murder exactly
Sam Adams
Member
Wed Jan 18 14:48:55
"An accident by FSD is whose liability? And who will underwrite it?"

Same with human drivers. Insurance. Hence my question about the accident rate. If accident rate is lower than human drivers, it makes sense to implement, and the overall cost to the system would be lower.

"Fortune argues that Teslas entire value has basically been built on FSD promises which are more and more looking like outright lies."

Not the entire value, but a significant chunk of it. Those unfulfilled promises are obviously worth a partial refund to any tesla owner of the last 5ish years.
williamthebastard
Member
Wed Jan 18 14:57:09
I wonder if he could also break the record for most indebted man in history?
williamthebastard
Member
Wed Jan 18 15:09:46
According to Fortune, Tesla was 1 month from bankruptcy before he started making unfulfilled FSD promises. That could mean that Tesla's entire existence today is predicated on those alleged falsehoods
Seb
Member
Thu Jan 19 01:56:20
Sam:

Insurance insures the entity that is liable.

So first you need to answer the question as to who is liable when a FSD car crashes.

Until you do that, you can't discuss insurance.

Seb
Member
Thu Jan 19 02:00:28
It gets a heck of a lot more complicated if it turns out FSD provider is liable.

Even if the accident rate is lower than average, because you start to move into the realm of questions of negligence Vs accident and professional indemnity. Which is typically more expensive.

That's one of the reasons Tesla was, for a while, trying to look at whether it needed to run its own insurance scheme.

TheChildren
Member
Thu Jan 19 03:21:10
anyone surprised?

lie cheat and steal is literally ur culture

jergul
large member
Thu Jan 19 03:32:48
It may not be. If FSD can demonstrate lower accident rates than humans, then it seems likely that producers can either self-insure, or get insurance coverage.
Seb
Member
Thu Jan 19 04:06:17
jergul:

There are issues with the current technology.

The key issue with FSD is the difference between accident and negligence.

So if an FSD car - for example - accelerates dramatically because it badly misreads a speed sign and crashes as a result; is that an accident or is that negligence? It's a failure mode very different from a human failure mode, because a human wouldn't misread a 20 as a 100, and if they did, they wouldn't suddenly accelerate on a road where contextually it is obviously not appropriate to do so*.
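
A minimal sketch of the kind of patch alluded to in the footnote - a contextual plausibility check on a perceived speed limit. The function, road classes, and thresholds are made up purely to make the failure mode concrete:

    # Hypothetical guard: don't act on a single misread sign (e.g. a 20 read as 100).
    # Road classes and thresholds are illustrative only.
    ROAD_CLASS_MAX_MPH = {"residential": 30, "urban": 50, "motorway": 70}

    def plausible_speed_limit(perceived_mph: float, road_class: str,
                              recent_limit_mph: float) -> bool:
        """Accept a perceived limit only if it is consistent with context."""
        if perceived_mph > ROAD_CLASS_MAX_MPH.get(road_class, 70):
            return False                    # no road of this class allows that speed
        if perceived_mph > recent_limit_mph * 2:
            return False                    # limits rarely double between signs
        return True

    # A 20 mph residential limit misread as 100 mph gets rejected:
    print(plausible_speed_limit(100, "residential", recent_limit_mph=20))  # False
    print(plausible_speed_limit(30,  "residential", recent_limit_mph=20))  # True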

Secondly, there is an issue with explainability - with systems based on deep learning it's quite hard to exhaustively predict how the trained algorithm will respond in edge situations and more importantly *why*. There is some improvement in this space, but I would expect demands for that kind of testing and for insight into the training data used, etc.

Thirdly - and this I have not covered at all yet, but it's coupled with the above - adversarial images that look innocuous but are designed to futz with the classifier and produce mad results. While the previous point requires some degree of transparency on what is under the hood, that actually makes it easier to design adversarial attacks. We will need to get classifiers MUCH more resistant to adversarial attack before we roll FSD out properly.
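
The adversarial point can be shown on a toy model. The sketch below uses a plain linear "classifier" as a stand-in for a real vision network and applies a fast-gradient-sign-style perturbation: each feature changes by a tiny amount, yet the predicted class flips. It illustrates the general phenomenon only, not any actual Autopilot component:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear "classifier": score = w . x + b, predict class 1 if score > 0.
    dim = 1000
    w = rng.normal(size=dim)
    b = 0.0

    # An input the model classifies confidently as class 1.
    x = 0.01 * np.sign(w) + 0.001 * rng.normal(size=dim)
    score = float(w @ x + b)
    print("clean score:", round(score, 2), "-> class", int(score > 0))

    # FGSM-style perturbation: nudge every feature slightly against the weights.
    eps = 0.02
    x_adv = x - eps * np.sign(w)
    adv_score = float(w @ x_adv + b)
    print("adversarial score:", round(adv_score, 2), "-> class", int(adv_score > 0))

    # No single feature moved by more than eps, yet the decision flips,
    # because the tiny per-feature changes accumulate across all dimensions.
    print("max per-feature change:", float(np.abs(x_adv - x).max()))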

I suspect we are going to end up stuck somewhere in the space of supervised autopilot for quite a long time before the evidence base exists for robust FSD that fits within existing legal frameworks. I can imagine driving and vehicle regulations changing to accommodate this new world to some extent, but as I recall there are some deep, far-reaching fundamental points around liability that mean it would be hard to fix everything "just" for driving without creating problems elsewhere.

*This is an example of a real issue, and we can imagine the various patches we could lay on top to stop this specific mode, but the underlying problem is there; and in a way the fact we can think of easy fixes for difficult-to-imagine problems is the point: the manufacturer needs to argue that the fact they haven't covered that issue and fixed it in testing is not negligence.
Sam Adams
Member
Thu Jan 19 09:53:25
"The key issues with FSD are the difference between accident and negligence."

No. Most human caused crashes are negligence.
Sam Adams
Member
Thu Jan 19 10:29:15
"Insurance insures the entity that is liable.

So first you need to answer the question as to who is liable when a FSD car crashes."

This is meaningless bureaucrat/lawyer bullshit. The same meaningless rhetoric has been bandied about by weenies for every safety item ever. In reality it doesn't matter which entity is liable... they will be covered by some sort of insurance if it is safer than humans.

Whether it beats humans is the only metric that ultimately matters.
Seb
Member
Thu Jan 19 11:00:13
Sam:

So yeah, it is not actually as simple as you think. If you had asked me in 2018 about this, I'd have probably given you the same answer you are giving me now. However, it turns out not to be the right answer.

Yup, but there is a distinction between negligence of an individual - which is an uncorrelated risk and can be averaged over - and negligence in design which is a systemic risk and the kind of thing that is much harder to insure for but if it materialises, materialises across the entire product range all at once. It is really fucking hard to get insurance against the possibility that a product you are putting out there is faulty because of negligent design.

If I drive dangerously several times and cause enough damage - I will get disqualified from insurance. So let's say a Tesla causes a massive pileup - do you think the insurance provider is going to continue to cover recurrence of that issue after it is identified? It would need to: you can't afford to suspend the entire fleet (both from a systemic point of view and a product point of view) - so your premium here is going to be much larger than the premium covering every individual driver even if the accident rate is the same.

"This is meaningless bureaucrat/lawyer bullshit."
See, and now you had to go and ruin a perfectly fine conversation by saying something stupid. Sure. Meaningless except for determining who ends up carrying billions of dollars of damages.

"they will be covered by some sort of insurance if it is safer than humans."
You can't analytically prove it is safer, and good luck proving it is safer than humans without hundreds of thousands of hours of real world data in real life context when you legally cannot run these cars in such an environment - and even then you still aren't really sure on the tail risk (or adversarial attack).

But even if you could quantify the risk, you are still missing a key point on liability: if a person drives recklessly and causes death, they are liable. If a self driving car drives in a manner that would - for a person - be considered reckless, who is criminally liable? The driver as owner? The regulator for approving it into the market? The manufacturer?

That either needs to be defined in statute, or very well established in courts. Until it is, then ALL parties need to insure for the risk.

Increases the cost dramatically.

Ok, so what if the manufacturer unilaterally takes the risk - well we haven't even got to criminal liability yet - just financial liability.

When you drive recklessly and kill someone, you have a criminal liability. How will that work in a FSD mode? That's going to require statutory change. I would imagine in most jurisdictions without specific statutory provision for use of FSD, the car's occupant would be considered criminally liable whether it was in FSD or not (in much the same way as if you get into a car's passenger seat, start the engine, you are 'in charge of the vehicle' and will still be liable for running someone over, and not be able to claim you were not in fact driving the vehicle and nobody was driving the vehicle).

And that statutory change is going to require some kind of regulation of the FSDs themselves to establish their fitness for purpose, and that is going to be challenging in the context of how deep neural nets work.

It will bring us back to very large proven track record, which has a strong chicken and egg element to it.

I think the path will end up looking like companies running FSD in the background and comparing what the car would have done to what drivers actually did do. But it is hard to show the consequences of what any divergence between the two would have been, and indeed which one was safer.



Sam Adams
Member
Thu Jan 19 11:17:27
"and negligence in design which is a systemic risk and the kind of thing that is much harder to insure for but if it materialises, materialises across the entire product range all at once."

A tesla style update could introduce a bug all at once, but that's a retarded way to do things once it's proven, and then no, it cannot show up suddenly.

"When you drive recklessly and kill someone, you have a criminal liability. How will that work in a FSD mode?"

Turning on a good FSD won't be considered reckless.

It all comes back to: is it good enough? I am not aware of such stats yet. Those stats are the key.
Sam Adams
Member
Thu Jan 19 11:25:23
"It will bring us back to very large proven track record"

Well ya. Or more precisely a track record large enough to be significant in showing superiority to humans.
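
How large "large enough" is can itself be put in numbers. A rough sketch using the classic rule-of-three bound and an assumed round figure of about one fatality per 100 million human-driven miles (an order-of-magnitude assumption, not a precise statistic):

    # Rule of three: if zero fatal crashes are observed in N miles, the ~95%
    # upper confidence bound on the true fatality rate is roughly 3 / N.
    human_fatal_rate = 1 / 100_000_000   # fatalities per mile (assumed round figure)

    # To claim at ~95% confidence that the FSD rate is below the human rate,
    # you need 3 / N < human_fatal_rate, i.e. N > 3 / human_fatal_rate,
    # and that is with *zero* fatalities observed along the way.
    required_miles = 3 / human_fatal_rate
    print(f"Fatality-free miles needed: {required_miles:,.0f}")   # 300,000,000

    # Any fatality during that period pushes the requirement further out, which
    # is the chicken-and-egg problem: those miles can't be accumulated without
    # already running the system at scale on public roads.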
murder
Member
Thu Jan 19 12:40:12

An FSD Tesla with a single passenger inside is traveling down the road at 80 mph. The car in front, with an unknown number of occupants, slams on the brakes unexpectedly for unknown reasons. The FSD Tesla has 3 options ...

1. Slam on the brakes and crash into the car in front.

2. Veer to the right into moving traffic.

3. Veer left into a canal.

What does the FSD Tesla do and why? How does Tesla avoid getting sued to oblivion? Is the car acting to save the life of the passenger or to minimize Tesla's liability?

Sam Adams
Member
Thu Jan 19 13:09:03
Why would the tesla with computer reaction time be unable to match the deceleration rate of the car ahead?
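
The reaction-time point can be put in numbers with basic kinematics; the reaction times and braking rate below are illustrative assumptions, not measured values:

    # Stopping distance = v * t_react + v^2 / (2 * a).
    MPH_TO_MS = 0.44704

    def stopping_distance_m(speed_mph: float, reaction_s: float,
                            decel_ms2: float = 8.0) -> float:
        v = speed_mph * MPH_TO_MS
        return v * reaction_s + v ** 2 / (2 * decel_ms2)

    human    = stopping_distance_m(80, reaction_s=1.5)  # assumed human reaction time
    computer = stopping_distance_m(80, reaction_s=0.2)  # assumed sensor-to-brake latency

    print(f"Human-driven car:   {human:.0f} m to stop from 80 mph")
    print(f"Computer reaction:  {computer:.0f} m to stop from 80 mph")
    print(f"Extra margin:       {human - computer:.0f} m")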
murder
Member
Thu Jan 19 13:20:06

Assuming it can ... what does it do?

Slam on the brakes and get rear ended?

murder
Member
Thu Jan 19 13:36:17

Who is the AI responsible to? Who is it responsible for?

If an AI takes an action that endangers the lives of people outside the vehicle, it can't claim to have done so to save its own life or out of fear. It's making a cold calculation, potentially about who dies.

Without legislative protection, juries will bankrupt Tesla and any other FSD automakers.
Sam Adams
Member
Thu Jan 19 13:57:24
"Slam on the brakes and get rear ended?"

Ya. First off, the car behind you should also stop in time if they are competent.

Even if not, rear end collisions tend to be much safer. The closing speed is lower, and there's more car to absorb the hit on either end before you get to the humans.
Sam Adams
Member
Thu Jan 19 14:05:45
"Without legislative protection, juries will bankrupt Tesla and any other FSD automakers."

Ya, you might need to tweak a law so that punitive damages can't be awarded for isolated cases within a good system. So that activist juries can't fuck things up.
Seb
Member
Thu Jan 19 14:08:00
Sam:

"A tesla style update could introduce a bug all at once but thats a retarded way to do things once its proven, and then no, it cannot show up suddenly."

Yeah it can - deep learning nets do not "think" and "understand" things the way we do. What you will find is some weird failure mode that was *there all along* but comes out in certain circumstances.

You may not even be able to precisely diagnose what it is that's at fault.

So once that manifests and is a known issue, will you be able to continue to use that algo? Will it continue to be covered?

My bet is not - because the flaw is systemic.

"Turning on a good FSD wont be considered reckless."

Ok, so describe to me the path for how this change in law happens. Because as it stands I'm 100% sure that no insurance company is going to insure you for FSD right now, and 100% certain that existing laws would view you as in charge of the vehicle if you start it and will not regard liability as switching to a piece of software in the vehicle.

Someone is going to have to change the law to shift the liability *in statute*.

So yeah, we can say "magically, there will come a time when FSD arrives that is good, and magically the law will change, and magically insurance services will agree to underwrite its use, because it will just be that good, it will" - but that's just working backwards from the assertion.

With this tech, as it stands now, I don't see an easy path to FSD due to the combination of the way the tech works, and the required changes to law and business models that aren't really in enough people's interest.

"Or more precisely a track record large enough to be significant in showing superiority to humans."

Which requires it to be used in real life situations for many tens of thousands of hours or more. However, it is not legal to use it in that situation. Rinse, repeat.







Seb
Member
Thu Jan 19 14:10:29
And murder raises another point.

Anything the AI does to respond to a correctly perceived threat is by definition a premeditated and calculated decision.

There's always a mens rea, which from a liability perspective is difficult.

It is going to be quite hard to fit this technology into existing legal systems without a dedicated new statutory framework.

And yes, we can imagine what logical ones might look like, but that's going to be really hard to get in place because there will be opponents to any particular flavour.

Meanwhile, inertia favours "sure, have your FSD, but legally you better have your hands on the wheel because if you don't, you aren't insured and you are criminally liable".

Sam Adams
Member
Thu Jan 19 14:13:57
"business models that aren't really in enough people's interest."

What are you talking about. 50 million people with long commutes want this tech. 200 million people driving home from a bar a little drunk want it. No one wants asians or old women driving. The companies that sell this tech are going to be ultra rich. I'm willing to pay 20k for this.

So much interest and money.

Itll get done.
williamthebastard
Member
Thu Jan 19 14:20:22
I'm still waiting for someone to produce an FSD scooter
TheChildren
Member
Thu Jan 19 14:21:00
asians have lowest insurance rates. asians r greatest drivers. get owned, idiots

Seb
Member
Thu Jan 19 14:50:12
Sam:

"What are you talking about."

Explain to me how this pitch works for changing the law:

"We are going to provide legal indemnity right now for drivers using FSD right now, which will grant them immunity from criminal and civil liability they might face from running you over in order to get the tech on the road so than in five to ten years time we will (god willing) have evidence for insurers to underwrite multinational car companies. If you do get run over, the manufacturers will have civil liability."

Nah, no way. It would never pass and if it did it won't survive the first dead granny mowed down by a drunk senior exec on his way back from necking champagne after end of year bonus time.

You will have all sorts of lobby groups coming out against that, not least from car manufacturers that feel they don't have the technology.

Which brings us back to square 1 - how do you prove it's safe in a real world context without using it at scale in a real world context, and who is going to use or even allow it to be used at scale in a real world context until criminal *and* civil liability is firmly placed on the manufacturer?

We need another leap in AI tech I reckon. The current image recognition stuff isn't good enough.

And there will need to be a fuck load of legislation covering how a car should behave when it detects a collision is about to happen (to murder's point) and rules/principles on who bears the physical risks and consequences.

And that is going to be something where everyone will have a view and nobody will be able to agree.
Seb
Member
Thu Jan 19 14:52:17
*and when I say right now, I mean at any arbitrary point in time.

Until we get deep learning algos where we can somehow be very sure about how they will behave under very broad real world scenarios in the same way we can be very sure about the performance of other machines via inspection and lab based testing - you have this chicken and egg situation.

Sam Adams
Member
Thu Jan 19 14:56:11
"how do you prove its safe in a real world context without using it at scale in a real world context"

The same as what we are doing. Monitored self driving, then build up a big set of stats proving it's safe, then limited self driving, then fsd.

"and who is going to use or even allow it to be used at scale in a real world context until criminal *and* civil liability is firmly placed on the manufacturer?"

Think how retarded the average human is. Think what bad drivers they are. What sane society is going to let those fuckheads keep driving once FSD is proven better?
Sam Adams
Member
Thu Jan 19 15:04:53
The same argument seb is using now was used against aircraft automation a few decades ago. Now a flight is 99% automated. An airbus even more.

And flights are much much safer.

Now you couldn't even get a new plane certified if you wanted pilots to fly it more than a few minutes.
Sam Adams
Member
Thu Jan 19 15:11:41
You don't need deep learning at every stage of fsd.

You can make it explainable to a regulator.
Seb
Member
Thu Jan 19 16:45:40
Sam:

"once FSD is proven better"
Begging the question. The question is how you prove it's safe.

"Monitored self driving" "then limitted self driving"

Except:
A. That doesn't prove it's safe, because by definition it's not real world. How are you going to measure this? The number of times the driver intervenes and differs from the FSD? Like I said earlier, I think you are going to get stuck for a very long time. And you still need the statutory change.

B. Define limited self driving.

"Now a flight is 99% automated."

And has any country in the world decided to get rid of expensive, annoying pilots? Or do we still in fact insist on two pilots in the plane? And if the autopilot starts to do something stupid, who is legally liable for intervening? Would we ever allow a commercial jet liner to be crewed by a pilot 'on his way back from the pub'?

This is a terrible analogy.

Unless it's changed in the last few years, I don't think any country's civil aviation authority has come up with a framework for permitting fully autonomous unmanned aircraft without a human with the equivalent of their hands on the wheel, and that's a far more mature area. And the stumbling block is exactly the same: liability issues.



Seb
Member
Thu Jan 19 16:48:56
What the aircraft analogy shows is even when the technology is actually capable of operating autonomously - we don't let it.

I'm pretty sure a commercial jet liner absolutely could fly itself for most of its journey and the pilot and copilot could just rest up away from the cockpit.

Good luck getting that permitted in regulation.
Sam Adams
Member
Thu Jan 19 17:18:32
Passenger aviation is held to a standard of safety vastly higher than cars. And if a plane fucks up it can't just stop. A confused car can stop anywhere. It is much easier to be safe enough in a car than in a passenger plane.

Non-passenger automated aircraft fly above you already, including some large ones.

I would still pay a lot of money for limited self driving... say interstates only.

You look at the accident rate of the human and the machine together. You test it. Assuming the machine's monitored accident rate (or emergency intervention rate) is superior enough you give it more authority and less monitoring. Rinse and repeat. Eventually you get full self driving that is trusted or something very close.

Remember, beating the average human driver is not a difficult target.
williamthebastard
Member
Thu Jan 19 17:25:18
I don't know much about planes but I'm gonna guess that the only dangerous moments (actually a pilot told me this) are landing and taking off, and that these two portions of the flight are very manual procedures compared to the rest of the time the plane spends in the air
Sam Adams
Member
Thu Jan 19 18:57:08
Autolands are a thing these days.

The big difference between that and a car is price. The equipment we use to make a 777 or a350 land itself at a high trust level is... expensive. Only your biggest international flights rate that kind of gear, usually.

Cars have to have much cheaper gear, but can also fail much more often. The safety level planes aim for is a fatal crash every billion hours. A car has a fatal crash every million hours or so? Maybe even more often. A much different level of safety.
williamthebastard
Member
Sat Jan 21 10:12:54
But the plane's extremely expensive FSD is also only employed under optimal conditions. The crucial instances, such as landings, are performed under highly regulated and supervised conditions and on runways that are meticulously cleared of all obstacles etc
williamthebastard
Member
Sat Jan 21 10:18:51
And what instructions do the pilots have during landings/take offs using FSD? Surely they are instructed to be on full standby at the very least?
Seb
Member
Sat Jan 21 10:21:38
Sam:

Yes, I said it was a stupid comparison for you to have made.

"Non-passenger automated aircraft fly above you already, inluding some large ones"

Remote piloted is not the same as automated.

Last time I checked, there's no allowance for fully automated, no-human-supervision heavier-than-air flight.

Certainly the UK CAA and European equivalents have no such allowance.

"Assuming the machine's monitored accident rate (or emergency intervention rate) is superior enough you give it more authority and less monitoring."

How exactly do you do that? Have someone drive, but allow the machine to overrule it in some circumstances? You are describing something in the abstract that you can't do in the real world.

"Remember, beating the average human driver is not a difficult target."

Like I said, the way the models work and the way human cognition works is very very different and the risk that a given accident rate implies is very different. One is an uncorrelated risk and when an individual causes a few accidents they become uninsurable so the ongoing risk for an insurer can be cut.

By definition an "accident" caused by error of an algorithm *is* correlated, and unless you want to suspend the algorithm, remains open ended.

williamthebastard
Member
Sat Jan 21 10:29:25
"Many airlines officially require a minimum of two pilots in the cockpit at all times and there are no certified single pilot or pilotless transport-category aircraft. This means insurers can't yet provide cover for self-flying planes, making them too risky for airlines to fly."

https://www.aircharterserviceusa.com/about-us/news-features/blog/self-flying-planes-and-the-future-of-air-travel
williamthebastard
Member
Sat Jan 21 10:33:00
However, since pilots earn good salaries, we can probably rest assured that the aviation industry is working full time to replace them at some point.
Seb
Member
Sat Jan 21 11:53:15
WtB:

Exactly. The technology exists, but regulators, insurers prevent them being used.

The exception is military flights which normally come under a different regulatory framework.

As far as I am aware and last time I checked properly (a few years ago, working for a consultancy firm, looking at exactly this issue, and why I changed my mind from Sam's position) - no regulator has a framework for actually ALLOWING autonomous flights - and I think that is true even for unmanned vehicles.

Even the civilian tests are done with a person in the loop to intervene.

So equivalent to "FSD, but really we mean human supervised autopilot and the supervisor is liable for any failure to intervene if the autopilot gets it wrong".

This has some benefits, e.g. unmanned vehicles and/or one pilot covering multiple vehicles.

williamthebastard
Member
Sat Jan 21 12:01:40
I read a study that explains that few states even use the same definitions with regards to different kinds of car accidents, and the differences in definition alone lead to a 25% discrepancy. These definitions apparently have to be legally coordinated universally for insurance companies to even begin to take comparative studies under consideration. The legal framework looks like its a decade away at minimum.
Sam Adams
Member
Sat Jan 21 12:25:10
"Surely they are instructed to be on full standby at the very least?"

Yes. Very much so. I'm not saying FSD is ready to achieve airliner safety standards. The combination of a good computer and a good pilot has a nearly perfect safety record. But the average driver has a much much lower safety record, much easier to beat.

Seb,

"and when an individual causes a few accidents they become uninsurable so the ongoing risk for an insurer can be cut. "

And a bug or a corner case can be fixed.
Seb
Member
Sat Jan 21 14:00:08
Sam:

"But the average driver has a much much lower safety record, much easier to beat."

However, for a computer driving a car is actually much much harder than flying a plane.

"And a big or a corner case can be fixed."
If you can diagnose the cause. Which is not always straightforward with deep learning or other ml systems. And while it is being fixed, you have an auto manufacturer with millions of cars on the road with a known issues and a much higher risk profile than previously assumed. Why will an insurer agree to cover that?

This is the problem: currently you insure a *person* who has some kind of average risk profile. Essentially, many random dice, but you can very quickly determine the pdf of the average die. And if one die turns out to be way outside the pdf, you can quickly chuck it out of the pool, and the bet on that die is small.

With fsd you are insuring an algorithm that has a specific risk profile in a given context - you are trying to guess the pdf of the die - but there's only one die. And if you fucked up and underestimated the risk you are carrying, it's one big die.


So the premium you want is going to be much much larger.

Cf. Kelly criterion.


It's really not at all as simple as "if accident rate of machine is lower than accident rate of human, then insurable".
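
The dice analogy can be simulated directly. The sketch below uses made-up loss numbers purely to show the shape of the problem: a pool of independent drivers versus a single algorithm whose failure hits the whole fleet at once. The expected loss is the same; the tail the insurer has to price for is not:

    import numpy as np

    rng = np.random.default_rng(1)

    N_POLICIES = 100_000      # insured cars (made-up number)
    LOSS_PER_CRASH = 50_000   # made-up loss per crash, in dollars
    P_CRASH = 0.01            # crash probability per car-year (made up)
    N_YEARS = 10_000          # simulated underwriting years

    # Case 1: human drivers -- crashes are independent, so the pool averages out.
    independent_losses = rng.binomial(N_POLICIES, P_CRASH, size=N_YEARS) * LOSS_PER_CRASH

    # Case 2: one algorithm in every car -- same *expected* crash rate, but the
    # failure is a single fleet-wide event: the "one big die".
    fleet_event = rng.random(N_YEARS) < P_CRASH
    correlated_losses = np.where(fleet_event, N_POLICIES * LOSS_PER_CRASH, 0)

    for name, losses in [("Independent drivers ", independent_losses),
                         ("Correlated algorithm", correlated_losses)]:
        print(f"{name}: mean ${losses.mean():,.0f}, "
              f"std ${losses.std():,.0f}, worst year ${losses.max():,.0f}")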
Sam Adams
Member
Sat Jan 21 14:08:17
"really not at all as simple as "if accident rate of machine is lower than accident rate if human, then insurable."

Of course it is. Lawyers and bureaucrats such as yourself will attempt to muck it up of course, but in the grand scheme of things it is that simple.
Seb
Member
Sat Jan 21 14:35:20
Nope, to insurers it is not the same.

That's what we discovered.

No risk pooling = much higher premium.
Seb
Member
Sat Jan 21 14:36:52
It's quite sad you keep blaming your own ignorance and lack of statistical competence on "lawyers and bureaucrats" etc.
Sam Adams
Member
Sat Jan 21 23:47:14
No risk pooling? Wtf are you talking about. That a couple of designs that dominate the market can't be insured? Guess we better shut down all airlines because some wankoff bureaucrat named seb thinks the 737 and a320 can't be insured because the risk pool is too narrow.

Lol.
Seb
Member
Sun Jan 22 04:44:32
Sam:

Wow. I didn't think you'd fail to understand the difference between an algorithm - which runs the same everywhere - and hundreds of different physical entities built to the same design but each independent.

There are examples of entire fleets of planes being grounded when a bug was found in flight control software.

The difference is you can find and diagnose such bugs with conventional code but you cannot do so easily with ml.

Anyway. Sure. You keep complaining about the "bureaucrats" and the "lawyers" (as if these rules aren't informed by engineers, technical specialists etc all working together) running the actual real world from the comfort of your armchair.
Sam Adams
Member
Sun Jan 22 10:30:17
"an algorithm - which runs the same everywhere"

Like the completely automated flight control algorithms on every airbus? Those have been certified and insured for decades. That's half the fleet.

Lol seb... tell me again how you understand nothing about this subject.
Sam Adams
Member
Sun Jan 22 10:32:30
"conventional code but you cannot do so easily with ml."

ML isn't the basis for most self driving.
Seb
Member
Sun Jan 22 13:08:08
Sam:

"Like the completely automated flight control algorithms on every airbus? those have been certified and insured for decades. Thats half the fleet."

Do you actually read my posts? I pointed exactly that out in the same post you are responding to, and explained the difference earlier in the thread.

You can certify, review and inspect conventional code.

You can't really unit test ML in quite the same way, and therein lies the problem.

Risk around ML code is neither like conventional software, which can be inspected and certified on that basis, nor is it like humans because the risk is correlated.

"Ml isnt the basis for most self driving"

Actually, it is.




Seb
Member
Sun Jan 22 13:09:24
At least the FSD type. Everything else needs supervising for e.g. signs on the side of the road indicating temporary speed limits etc.
Sam Adams
Member
Sun Jan 22 13:39:55
"Actually, it is."

Wrong. There are some simple hardcoded rules that make up the basis of driving. Stay between the lines. Stop at stop signs. Don't exceed the speed of the car in front of you. Identifying a stop sign or speed limit is the most complex of these tasks, but is still a relatively simple image classifier.
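
That hybrid picture - hardcoded driving rules sitting on top of a perception layer - might be sketched like this (every name here is hypothetical; how much of a real stack is learned versus hardcoded is exactly what is in dispute):

    from dataclasses import dataclass

    @dataclass
    class Perception:
        """Output of the (learned) perception layer -- the part that is ML."""
        stop_sign_ahead: bool
        speed_limit_mph: float
        lead_car_speed_mph: float  # speed of the car in front
        lane_offset_m: float       # lateral offset from lane centre

    def rule_based_policy(p: Perception) -> dict:
        """Hardcoded rules of the kind described above, consuming perception output."""
        target_speed = min(p.speed_limit_mph, p.lead_car_speed_mph)  # don't exceed the car ahead
        if p.stop_sign_ahead:
            target_speed = 0.0                     # stop at stop signs
        steer_correction = -0.5 * p.lane_offset_m  # stay between the lines
        return {"target_speed_mph": target_speed, "steer": steer_correction}

    # The rules are simple and inspectable; the catch is that their inputs
    # come from a learned classifier that can misread the scene.
    print(rule_based_policy(Perception(False, 30.0, 25.0, 0.2)))
    print(rule_based_policy(Perception(True, 30.0, 25.0, 0.0)))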

"nor is it like humans because the risk is correlated"

Gtfo with this nonsense. For about 8 hours every week, from 11-3am friday and saturday night, almost every human driver on the road is impaired to some degree. But but humans are safer!

Seb
Member
Sun Jan 22 17:40:53
Sam:

" Stop at stop signs."

The ml - and this may shock you - is the bit that recognises stop signs. And speed limit signs.

"but is still a relatively simple image classifier."

And it still regularly gets it wrong in some cases we don't understand (hence a number of cases of cars suddenly braking or accelerating).

All of this was explicitly covered up-thread.

You are either too unfamiliar or too stupid to recognise it.
Seb
Member
Sun Jan 22 17:42:18
"But but humans are safer!"

The best 'FSD' can't reliably navigate an intersection at rush hour at the moment.

So yeah, even on a statistical basis, humans are safer.
Sam Adams
Member
Mon Jan 23 09:35:34
Pats seb on the head. Yes, right now fsd isn't yet ready. Way to change the subject. No one was claiming it is ready now.
Seb
Member
Mon Jan 23 09:56:43
"For about 8 hours every week, from 11-3am friday and saturday night, almost every human driver on the road is impaired to some degree. But but humans are safer!"

Your words, not mine.