• skisnow@lemmy.ca · 4 points · 1 day ago

    a data analytics tool that will help advance the agency’s modernization objectives for aviation safety.

    SMART will cost $12 billion, and will supposedly help flight controllers schedule flights weeks in advance to cut down on delays.

    “This software will say, ‘well, listen, we can see this 45 days out. Let’s move some of those flights a little bit later, or five, seven, 10 minutes earlier, and we can resolve the issue. And so then you are not delayed,'” Duffy said.

    Nothing in the facts as reported there suggests the use of language models, except the editorialising in the summary about how LLMs hallucinate things, which makes me wonder how competent Futurism’s tech journalism is.

  • Blackmist@feddit.uk · 30 points · 2 days ago

    We don’t have enough air traffic controllers.

    We use AI to reduce their workload. <---- We are here

    We don’t need as many air traffic controllers.

    We sack more air traffic controllers.

    We don’t have enough air traffic controllers.

  • flop_leash_973@lemmy.world · 9 points · 1 day ago

    Well, once the mistakes start to pile up, I’ll probably get a lot less judgement from others about my apprehension about flying.

  • skozzii@lemmy.ca · 11 points · 1 day ago

    I tried to use AI to install a reverse osmosis water system yesterday. I asked it to look at the manual and match up the hose colors for me; I figured it would save me a few minutes.

    After an hour of it not working and trying all sorts of nonsense, I looked in the manual myself, only to find it had given me all the wrong information for a simple task.

    I can’t wait to have people’s lives reliant on this technology.

    • NιƙƙιDιɱҽʂ@lemmy.world · 8 up, 1 down · 1 day ago

      AI is a pretty big catch-all term. If they mean specially designed and trained deep learning neural nets, maaaaybe it’ll be okay. If they mean typical LLMs we’re straight up fucked.

      • RogueJello@lemmy.world · 1 point · 1 day ago

        Exactly. With a broad enough term, those computerized screens showing the position of all the planes are “AI”.

    • phx@lemmy.world · 3 points · 1 day ago

      I just saw an ad for using ChatGPT to “come up with new recipes and baking ideas”

      Yeah I’m sure having a bunch of people decide to eat whatever a hallucinating AI comes up with isn’t going to be dangerous at all…

      • buddascrayon@lemmy.world · 2 points · edited · 1 day ago

        I’ll look it up and try to find it, but I’m pretty sure there’s a YouTube video where they actually did ask ChatGPT to come up with new recipes and baking ideas, then tried to make them, with the results you would expect.

        Edit: OK, so it looks like there are a whole lot of YouTubers making AI recipes, with the expected results. So Google away.

  • bearboiblake [he/him]@pawb.social · 45 points · 2 days ago

    My mistake, you’re absolutely right – I neglected to ensure the runway was clear before scheduling that landing. Please accept my apologies for causing those deaths. I’m really glad to be working with you, it’s reassuring that you’ll always keep me honest. You’re not just an assistant traffic controller – you’re a friend.

  • GreenBeanMachine@lemmy.world · 30 points · 2 days ago

    Let’s say the error rate is 0.1%. Pretty low, right? But that’s one mistake per thousand flights. Are they really okay with one plane in a thousand potentially crashing? There are certain industries and jobs where AI simply cannot and should not be used.

    • BarneyPiccolo@lemmy.today · 10 points · 2 days ago

      Each day, about 100-120 people die in car crashes in America.

      Over 45,000 planes fly in America every day, and over 5,000 are in the air at any given moment. With a crash rate of 1 in a thousand, we’d be having dozens of plane crashes, with thousands of people killed, every day; a single crash could easily match or surpass that daily car-crash toll.

      1 out of a thousand? I’d never fly again. NOBODY would ever fly again.
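
      Running that comparison as explicit arithmetic (a quick sketch using the flight figures quoted above):

```python
# Back-of-envelope check on the numbers above: at a 1-in-1000 crash rate
# and ~45,000 US flights per day, how many crashes per day would we expect?
daily_flights = 45_000   # flights per day in America (figure cited above)
crash_rate = 1 / 1_000   # one crash per thousand flights

expected_crashes_per_day = daily_flights * crash_rate
print(expected_crashes_per_day)  # 45.0 -- dozens of crashes every single day
```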

      • Whats_your_reasoning@lemmy.world · 2 points · 1 day ago

        The worst part would be that it doesn’t matter if you fly or not - as long as a plane can fly above you, you’re at risk. None of us are safe.

        • BarneyPiccolo@lemmy.today · 1 point · 1 day ago

          Normally, I would scoff at being worried about airborne debris, but if 1 out of 1000 were crashing, and there were 45k flights a day, that’s enough crashes to worry about.

          The vast majority of those crashes would be around airports, though, so just keep away from the airports, and your chance of being clobbered by a black box goes down significantly.

          It’s almost comical to think about major airports having a half dozen crashes a day. At least the AI won’t have any trouble sleeping at night.

    • Aceticon@lemmy.dbzer0.com · 5 points · 2 days ago

      Even further: the biggest problem with AI, and thus the biggest factor in whether it’s suitable for a task, is that its failures are distributed uniformly with respect to consequences. It is no less likely to err in ways with grievous consequences than in ways with minor ones.

      In other words, unlike humans, who actively try to avoid making the nastiest and deadliest mistakes, when AI fails it can fail just as easily in the most horrible and deadly ways as in the most minor ones.

      That’s why there are so many instances of LLMs giving advice that any human would recognize as obviously dangerous, like telling people to put glue on pizza to make it look good, or telling those with suicidal thoughts to kill themselves. Unlike a human, the AI has no mechanism to detect that an output it’s about to produce is “obviously dangerous” and generate a different output instead.

      This is why using AI to generate fluff filler for emails is fine, but not in systems where errors can easily cost lives.

    • Napster153@lemmy.world · 7 points · 2 days ago

      Sarcasm:

      But think of the insurance people! Look at how many claims are waiting to be denied and robbed!

      More importantly, we can justify every other profit increase, because our economies are built on literal exploitation, just as they were a couple hundred years ago!

      Modern exploitation problems require modern idol solutions.

      • Heikki2@lemmy.world · 2 points · 2 days ago

        Sadly, part of the population will view that as a valid argument. Faux News, Newsmax, OAN, and all the conservative talk radio will feed it to them.

  • rumba@lemmy.zip · 12 points · 1 day ago

    Fuck AI for this, but there’s a lot of room in ATC for further automation. To be perfectly honest, if the planes can more or less land themselves, and they’re all fly-by-wire, I could see nearly automating the whole thing. Phase it in over a 10-year plan… computers HAVE to be able to be better at this than one unpaid, overworked, under-rested controller.

    • Echo Dot@feddit.uk · 11 points · 1 day ago

      I’m all for automation if it works and if it improves safety, but as far as I know they haven’t proven that yet. I’d like to see an AI air traffic controller running in a simulation for many, many years of simulation time before we even begin to talk about implementing it on real hardware.

      • limelight79@lemmy.world · 2 points · 1 day ago

        That’s the problem. No one wants to test AI like that. Just dive right in and use it; I’m sure it’s great!

      • fira@lemmy.today · 1 point · 1 day ago

        Could test it out at small, low-volume/non-commercial airports first & go from there

        • T00l_shed@lemmy.world · 3 points · 1 day ago

          I’d start with computer sims before putting people’s lives on the line, and then move on to your suggestion.

          • village604@adultswim.fan · 1 point · 1 day ago

            The question is whether the AI or the human is more prone to mistakes. Unfortunately, that’s hard to establish without real-world tests.

            Like self-driving cars: of course they’re going to be involved in crashes where people die, but humans are such terrible drivers that the computers are better (except for Tesla, which just has mislabeled lane assist).

    • BlackAura@lemmy.world · 7 points · 1 day ago

      Counterpoint: just look at the Air Canada crash that happened recently, where a controller let a fire truck cross into the path of a landing aircraft.

      Planes may have all this technology, but it only covers what’s happening in the air, not on the ground.

      Maybe all ground-crew vehicles could be equipped with transponders and tracked as well, but there are also incidents of people randomly ending up on the runways/taxiways, or animals, or non-airport vehicles.

      • piranhaconda@mander.xyz · 4 points · 1 day ago

        With the number of AI-powered cameras being put up in cities around the world… yeah, they could use tech like that to monitor runways too.

    • vithigar@lemmy.ca · 1 point · 1 day ago

      AI is fine for this… assuming we’re talking about a specifically trained machine learning model that is actually made to handle ATC and not just shoehorning an LLM into a job it was never intended to do.

      • rumba@lemmy.zip · 3 points · 1 day ago

        Honestly, I’d put it at too high a risk for weighted models. We have tons of pathfinding and navigation code out there that could solve this outright on a Raspberry Pi :) Not that I’d recommend the Pi…
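
        The deterministic, rules-based style of automation alluded to above can be sketched in a few lines. Everything here (the callsigns, positions, and the 5 NM threshold) is made up purely for illustration:

```python
# Toy sketch of a deterministic separation check: flag any pair of aircraft
# closer than a lateral separation minimum. All values are illustrative.
from itertools import combinations
from math import hypot

SEPARATION_NM = 5.0  # hypothetical lateral separation minimum, in NM

def conflicts(aircraft):
    """Return pairs of aircraft violating the separation minimum.

    `aircraft` maps callsign -> (x, y) position in nautical miles.
    """
    return [
        (a, b)
        for a, b in combinations(sorted(aircraft), 2)
        if hypot(aircraft[a][0] - aircraft[b][0],
                 aircraft[a][1] - aircraft[b][1]) < SEPARATION_NM
    ]

traffic = {"ACA101": (0.0, 0.0), "DAL202": (3.0, 2.0), "UAL303": (40.0, 10.0)}
print(conflicts(traffic))  # [('ACA101', 'DAL202')] -- about 3.6 NM apart
```

        A real system would of course work on predicted trajectories rather than instantaneous positions, but the point stands: this kind of logic is exhaustive and auditable, with no model weights involved.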