In the piece — titled “Can You Fool a Self Driving Car?” — Rober found that a Tesla car on Autopilot was fooled by a Wile E. Coyote-style wall painted to look like the road ahead of it, with the electric vehicle plowing right through it instead of stopping.

The footage was damning enough, with slow-motion clips showing the car not only crashing through the styrofoam wall but also hitting a mannequin of a child. The Tesla was also fooled by simulated rain and fog.

  • FuglyDuck@lemmy.world · +286/-2 · 2 days ago

    As Electrek points out, Autopilot has a well-documented tendency to disengage right before a crash. Regulators have previously found that the advanced driver assistance software shuts off a fraction of a second before making impact.

    This has been known.

    They do it so they can evade liability for the crash.

    • fibojoly@sh.itjust.works · +31/-2 · edited · 1 day ago

      That makes so little sense… It detects it’s about to crash, then gives up and lets you sort it out?
      That’s like the opposite of my Audi, which does detect I’m about to hit something and gives me either a warning or just actively hits the brakes if I don’t have time to handle it.
      If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.

      • Red_October@lemmy.world · +27 · 1 day ago

        The point is that they can say “Autopilot wasn’t active during the crash.” They can leave out that Autopilot was active right up until the moment before, or that Autopilot directly contributed to it. They’re just purely leaning into the technical truth that it wasn’t on during the crash. Whether it’s a courtroom defense or their own next published set of data, the line becomes: “Autopilot was not active during any recorded Tesla crashes.”

      • FuglyDuck@lemmy.world · +4 · edited · 20 hours ago

        Even your Audi is going to dump to human control if it can’t figure out what the appropriate response is. Granted, your Audi is probably smart enough to be like “yeah, don’t hit the fucking wall,” but eh… it was put together by people who actually know what they’re doing, and care about safety.

        Tesla isn’t doing this for safety or because it’s the best response. The cars are doing this because they don’t want to pay out for wrongful death lawsuits.

        > If this is true, this is so fucking evil it’s kinda amazing it could have reached anywhere near prod.

        It’s Musk. He’s fucking vile, and this isn’t even close to the worst thing he’s doing, or has done.

    • NotMyOldRedditName@lemmy.world · +14 · edited · 21 hours ago

      Any crash within 10s of a disengagement counts as it being on, so you can’t just do this.

      Edit: added the time unit.

      Edit2: it’s actually 30s not 10s. See below.

      • FuglyDuck@lemmy.world · +4 · 21 hours ago

        Where are you seeing that?

        There’s nothing I’m seeing as a matter of law or regulation.

        In any case, liability (especially civil liability) is an absolute bitch. It’s incredibly messy and likely will not ever be so cut and dried.

        • NotMyOldRedditName@lemmy.world · +5 · edited · 21 hours ago

          Well, it’s not that it was a crash caused by a Level 2 system, but that they’ll investigate it.

          So you can’t hide the crash by disengaging it just before.

          Looks like it’s actually 30 seconds, not 10, or maybe it was 10 once upon a time and they changed it to 30?

          The General Order requires that reporting entities file incident reports for crashes involving ADS-equipped vehicles that occur on publicly accessible roads in the United States and its territories. Crashes involving an ADS-equipped vehicle are reportable if the ADS was in use at any time within 30 seconds of the crash and the crash resulted in property damage or injury

          https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-06/ADAS-L2-SGO-Report-June-2022.pdf
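For readers skimming the thread, the quoted rule boils down to a simple condition. A minimal sketch in Python (the function and parameter names are invented for illustration, not taken from the NHTSA submission format):

```python
def is_reportable(seconds_between_ads_use_and_crash: float,
                  property_damage_or_injury: bool) -> bool:
    """Per the quoted General Order language: a crash is reportable if the
    automated system was in use at any time within 30 seconds of the crash
    and the crash resulted in property damage or injury."""
    return seconds_between_ads_use_and_crash <= 30.0 and property_damage_or_injury

# Disengaging a fraction of a second before impact does not dodge the rule:
print(is_reportable(0.1, True))   # True  -> still reportable
print(is_reportable(45.0, True))  # False -> outside the 30-second window
```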

          • oatscoop@midwest.social · +1 · 1 hour ago

            I get the impression it disengages so that Tesla can legally say “self driving wasn’t active when it crashed” to the media.

          • FuglyDuck@lemmy.world · +5 · 20 hours ago

            Thanks for that.

            The thing is, though, the NHTSA generally doesn’t make a determination on criminal or civil liability. They’ll make the report about what happened, keep it to the facts, and let the courts sort out who’s at fault. They might not even actually investigate a crash unless it comes to it. It’s just saying “when your car crashes, you need to tell us about it,” and they kinda assume they comply.

            Which Tesla doesn’t want to comply with, and it’s one of the reasons Musk/DOGE is going after them.

            • NotMyOldRedditName@lemmy.world · +4 · edited · 19 hours ago

              I knew they wouldn’t necessarily investigate it, that’s always at their discretion, but I had no idea there was no actual bite to the rule if they didn’t comply. That’s stupid.

              • AA5B@lemmy.world · +1 · 6 hours ago

                Generally, things like that are meant more to identify a pattern. It may not be useful to an individual, but it’s very useful for determining a recall or supporting a class action.

    • bazzzzzzz@lemm.ee · +46/-2 · 2 days ago

      Not sure how that helps in evading liability.

      Every Tesla driver would need superhuman reaction speeds to respond in 17 frames, about 680 ms (I didn’t check the recording frame rate, but 25 fps is the slowest reasonable), less than a second.
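For a rough check of that arithmetic, a quick sketch (the 17-frame count and the assumed 25 fps figure come from the comment above, not from any verified recording spec):

```python
def frames_to_ms(frames: int, fps: float) -> float:
    """Convert a frame count at a given frame rate to milliseconds."""
    return frames / fps * 1000.0

print(f"17 frames @ 25 fps = {frames_to_ms(17, 25.0):.0f} ms")  # 680 ms, well under a second
```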

      • orcrist@lemm.ee · +54/-3 · 1 day ago

        They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.

        And then that creates a discussion about how much time the human driver has to have in order to actually solve the problem, or gray areas about who exactly controls what and when, and it complicates the situation enough that maybe Tesla can pay less money for the deaths they are obviously responsible for.

        • jimbolauski@lemm.ee · +11/-4 · 1 day ago

          > They’re talking about avoiding legal liability, not about actually doing the right thing. And of course you can see how it would help them avoid legal liability. The lawyers will walk into court and honestly say that at the time of the accident the human driver was in control of the vehicle.

          The plaintiff’s lawyers would say the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going to disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.

          • FuglyDuck@lemmy.world · +8 · 21 hours ago

            > The plaintiff’s lawyers would say the autopilot was engaged, made the decision to run into the wall, and turned off 0.1 seconds before impact. Liability is not going to disappear when there were 4.9 seconds of making dangerous decisions and peacing out in the last 0.1.

            These strategies aren’t about actually winning the argument; they’re about making it excessively expensive to have the argument in the first place. Every motion requires a response by the counterparty, which requires billable time from the counterparty’s lawyers, and delays the trial. It’s just another variation on “defend, depose, deny”.

          • michaelmrose@lemmy.world · +16 · 1 day ago

            They can also claim with a straight face, in public, in ads, etc., that Autopilot has an artificially lowered crash rate, without it technically being a lie.

          • FauxLiving@lemmy.world · +11 · 1 day ago

            Defense lawyers can make a lot of hay with details like that. Nothing that gets the lawsuit dismissed, but turning the question into “how much is each party responsible” when it was previously “Tesla drove me into a wall” can help reduce settlement amounts (as these things rarely go to trial).

      • FuglyDuck@lemmy.world · +65/-1 · 2 days ago

        It’s not likely to work, but their swapping to human control after the system has determined a crash is going to happen isn’t accidental.

        Anything they can do to mire the proceedings they will do. It’s like how corporations file stupid junk motions to force plaintiffs to give up.

      • FuglyDuck@lemmy.world · +3 · 21 hours ago

        So, as others have said, it takes time to brake. But also, generally speaking, autonomous cars are programmed to dump control back to the human if there’s a situation they can’t see an ‘appropriate’ response to.

        What’s happening here is the “oh shit, there’s no action that can stop the crash” moment, because braking takes time (hell, even coming to that decision takes time; activating the whoseitwhatsits that activate the brakes takes time). The normal thought is, if there’s something it can’t figure out on its own, it’s best to let the human take over. It’s supposed to make that decision well before, though.

        However, as for why tesla is doing that when there’s not enough time to actually take control?

        It’s because liability is a bitch. Given how many Teslas are on the road, even a single ruling of “yup, it was Tesla’s fault” is going to start creating precedent, and that gets very expensive, very fast. Especially for something that can’t really be fixed.

        For some technical perspective, I pulled up the frame rates on the camera system (I’m not seeing the frame rate on the cabin camera specifically, but it seems to be either 36 in older models or 24 in newer).

        14 frames @ 24 fps is about 0.6 seconds; @ 36 fps, it’s about 0.4 seconds. For comparison, the average human reaction time to just see a change and click a mouse is about 0.3 seconds. If you add in needing to assess the situation… that’s going to be significantly more time.
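Putting those numbers side by side in a short sketch (the 14-frame figure, the 24/36 fps camera rates, and the ~0.3 s simple-reaction estimate are all taken from the comment above, not from any Tesla documentation):

```python
SIMPLE_REACTION_S = 0.3  # roughly: notice a change and click a mouse

def takeover_window_s(frames: int, fps: float) -> float:
    """Time the human has if control is handed back `frames` frames before impact."""
    return frames / fps

for fps in (24.0, 36.0):
    window = takeover_window_s(14, fps)
    margin = window - SIMPLE_REACTION_S
    print(f"14 frames @ {fps:.0f} fps = {window:.2f} s "
          f"({margin:+.2f} s left after a bare {SIMPLE_REACTION_S} s reaction, "
          f"before any assessment or braking)")
```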

      • NotMyOldRedditName@lemmy.world · +3 · edited · 21 hours ago

        AEB braking was originally designed not to prevent a crash, but to slow the car when an unavoidable crash was detected.

        It’s since gotten better and can also prevent crashes now, but slowing the speed of the crash was the original important piece. It’s a lot easier to predict an unavoidable crash than to detect a potential crash and stop in time.

        Insurance companies offer a discount for having any type of AEB as even just slowing will reduce damages and their cost out of pocket.

        Not all AEB systems are created equal though.

        Maybe disengaging AP if an unavoidable crash is detected triggers the AEB system? Like maybe for AEB (which should always be running) to take over, AP has to be off?
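The “slowing the crash still matters” point is just kinetic energy scaling with the square of speed. A back-of-the-envelope sketch (the 100 km/h starting speed and ~0.8 g braking figure are generic dry-road assumptions, not an AEB spec):

```python
import math

G = 9.81  # m/s^2

def impact_speed(v0_ms: float, decel_ms2: float, braking_distance_m: float) -> float:
    """Speed at impact if braking starts `braking_distance_m` before the obstacle."""
    v_squared = v0_ms ** 2 - 2.0 * decel_ms2 * braking_distance_m
    return math.sqrt(max(0.0, v_squared))

v0 = 100 / 3.6      # 100 km/h in m/s
decel = 0.8 * G     # ~0.8 g of hard braking on dry pavement
for d in (10.0, 20.0, 30.0):
    v = impact_speed(v0, decel, d)
    energy_ratio = (v / v0) ** 2  # kinetic energy scales with v^2
    print(f"braking {d:>4.0f} m out: impact at {v * 3.6:5.1f} km/h "
          f"({energy_ratio:.0%} of the original crash energy)")
```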

      • GoodLuckToFriends@lemmy.today · +7 · 1 day ago

        Because even braking can’t avoid the crash. Unavoidable crash means bad juju if the ‘self driving’ car image is meant to stick around.

      • Trainguyrom@reddthat.com · +4 · 23 hours ago

        Brakes require a sufficient stopping distance given the current speed, driving surface conditions, tire condition, and the amount of momentum at play. This is why trains can’t stop quickly despite having brakes (and very good ones at that, with air brakes on every wheel): there’s so much momentum at play.

        If Autopilot is being criticized for disengaging immediately before the crash, it’s pretty safe to assume it’s too late to stop the vehicle and avoid the collision.
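For a sense of the stopping distances involved, a rough sketch using the standard reaction-plus-braking formula (the friction coefficients are generic textbook values for dry, wet, and icy asphalt, and the 1 s reaction time is an assumption; none of this is measured data):

```python
G = 9.81  # m/s^2

def stopping_distance_m(speed_kmh: float, mu: float, reaction_s: float = 1.0) -> float:
    """Reaction distance plus braking distance: v*t + v^2 / (2*mu*g)."""
    v = speed_kmh / 3.6
    return v * reaction_s + v ** 2 / (2.0 * mu * G)

for surface, mu in (("dry asphalt", 0.8), ("wet asphalt", 0.5), ("ice", 0.15)):
    print(f"100 km/h on {surface:<11}: ~{stopping_distance_m(100, mu):.0f} m to stop")
```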

        • filcuk@lemmy.zip · +8 · 23 hours ago

          This autopilot shit needs a regulated audit log in a black box, like what planes or ships have.
          In no way should this kind of manipulation be legal.

    • Simulation6@sopuli.xyz · +15/-1 · 1 day ago

      If the disengage-to-avoid-legal-consequences feature does exist, then you would think there would be some false-positive incidents where it turns off for no apparent reason. I found some with a search, which are attributed to bad software. Owners are discussing new patches fixing some problems and introducing new ones. None of the incidents caused an accident, so maybe the owners never hit the malicious code.

      • AA5B@lemmy.world · +2 · edited · 5 hours ago

        The given reason is simply that it will return control to the driver if it can’t figure out what to do, and all evidence is consistent with that. All self-driving cars have some variation of this. However, yes, it’s suspicious when it disengages right when you need it most. I also don’t know of data to support whether this is a pattern or just a feature of certain well-publicized cases.

        Even in those false positives, it’s entirely consistent with the AI being confused, especially since many of these scenarios get addressed by software updates. I’m not trying to deny it, just saying the evidence is not as clear as people here are claiming.

      • FuglyDuck@lemmy.world · +4 · 20 hours ago

        If it randomly turns off for unapparent reasons, people are going to be like “oh, that’s weird” and leave it at that. Tesla certainly isn’t going to admit that their code is malicious like that. At least not until the FBI is digging through their memos to show it was. And maybe not even then.

        • AA5B@lemmy.world · +1 · 5 hours ago

          When I tried it, the only unexpected disengagement was on the highway, but it just slowed and stayed in lane, giving me lots of time to take over.

          Thinking about it afterwards, possible reasons include

          • I had cars on both sides, blocking me in. Perhaps it decided that was risky or that they occluded vision, or perhaps one moved toward me and there was no room to avoid it.
          • It was a little over a mile from my exit. Perhaps it decided it had no way to switch lanes while being blocked in.

      • Dultas@lemmy.world · +5 · 1 day ago

        I think Mark (who made the OG video) speculated it might be the ultrasonic parking sensors detecting something and disengaging.