LOL. Fuck that. I’m not flying.
Forget flying, you’ll be getting Donnie Darkoed in your bedroom.
So when, and I do mean when, this results in a crash, who will be held responsible?
Biden
Hillary with her butterymales?
Well, we have already had deaths due to the current crunch of US air traffic controllers (and not paying them).
https://www.cbc.ca/news/world/laguardia-collision-air-traffic-control-ntsb-9.7140479
And who was blamed for those? Oh yeah, the traffic controllers! So when Grok starts seeing how many 737s can fit in the same physical space, it will be the controllers’ fault. As you can imagine, this will make those controllers want to quit, meaning more pressure to use shit like AI tools.
Liberals.
Obviously it’s the DEI
Fortunately the world is going to run out of aviation fluid next week so we won’t have to find out.
If it’s the ATC then it’s their fault, if it’s AI then it’s no one’s.
Powered by Grok?
“a data analytics tool that will help advance the agency’s modernization objectives for aviation safety.”
SMART will cost $12 billion, and will supposedly help flight controllers schedule flights weeks in advance to cut down on delays.
“This software will say, ‘well, listen, we can see this 45 days out. Let’s move some of those flights a little bit later, or five, seven, 10 minutes earlier, and we can resolve the issue. And so then you are not delayed,'” Duffy said.
Nothing in any of the facts as reported there suggests the use of language models, except for the editorialising in the summary about how LLMs hallucinate things, which makes me wonder how competent Futurism’s tech journalism is.
We don’t have enough air traffic controllers.
We use AI to reduce their workload. <---- We are here
We don’t need as many air traffic controllers.
We sack more air traffic controllers.
We don’t have enough air traffic controllers.
Yet another reason not to go to the USA.
Well, once the mistakes start to pile up, I will probably get a lot less judgement from others about my apprehension about flying.
We just need one rich asshole in a private jet to crash due to ATC failure for them to care.
I tried to use AI to install a reverse osmosis water system yesterday. I asked it to look at the manual for hose colors to match them; I figured it would save me a few minutes.
After an hour of it not working and trying all sorts of nonsense, I looked in the manual myself, which showed me it had given me all the wrong information for a simple task.
I can’t wait to have people’s lives reliant on this technology.
AI is a pretty big catch-all term. If they mean specially designed and trained deep learning neural nets, maaaaybe it’ll be okay. If they mean typical LLMs we’re straight up fucked.
Exactly. With a broad enough term those computerized screens showing the position of all the planes is “AI”.
I just saw an ad for using ChatGPT to “come up with new recipes and baking ideas”
Yeah I’m sure having a bunch of people decide to eat whatever a hallucinating AI comes up with isn’t going to be dangerous at all…
I’ll look it up and try to find it. But I’m pretty sure there’s a YouTube video where they actually did ask ChatGPT to come up with new recipes and baking ideas, and then they tried to make them, with the results you would expect.
Edit: ok, so it looks like there are a whole lot of YouTubers making AI recipes, with the expected results. So Google away.
My mistake, you’re absolutely right – I neglected to ensure the runway was clear before scheduling that landing. Please accept my apologies for causing those deaths. I’m really glad to be working with you, it’s reassuring that you’ll always keep me honest. You’re not just an assistant traffic controller – you’re a friend.
HAL-9000 if it was made today
Well, at least the AI seemed sincere in their apology.
when you have the pilot and microslop copilot:
for entertainment purposes only
Let’s say the error rate is 0.1%. Pretty low, right? But that’s one mistake per thousand flights. Are they really okay with one plane out of a thousand potentially crashing? There are certain industries and jobs where AI simply cannot and should not be used.
Each day, about 100-120 people die in car crashes in America.
Over 45,000 planes fly in America every day, and over 5,000 are in the air at any given moment. With a crash rate of 1 out of a thousand, we’d be having multiple plane crashes every day, with thousands of people killed. A single plane crash could easily match or surpass that daily car crash number.
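The back-of-envelope math above checks out; here it is in a few lines (the 45,000 flights/day figure and the 0.1% rate are the thread’s hypothetical numbers, not real aviation statistics):

```python
# Hypothetical figures from this thread, NOT real-world aviation data.
daily_flights = 45_000   # approx. US flights per day (thread's figure)
error_rate = 0.001       # the hypothetical 0.1% = 1-in-1000 crash rate

expected_crashes_per_day = daily_flights * error_rate
print(expected_crashes_per_day)  # 45.0 crashes per day at that rate

# Compare against the ~100-120 daily US car-crash deaths mentioned above:
# at an assumed ~150 passengers per plane, 45 crashes/day would dwarf it.
deaths_per_day = expected_crashes_per_day * 150
print(deaths_per_day)  # 6750.0
```

So even a "low" 0.1% error rate would kill roughly 50x more people per day than cars do.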
1 out of a thousand? I’d never fly again. NOBODY would ever fly again.
The worst part would be that it doesn’t matter if you fly or not - as long as a plane can fly above you, you’re at risk. None of us are safe.
Normally, I would scoff at being worried about airborne debris, but if 1 out of 1000 were crashing, and there were 45k flights a day, that’s enough crashes to worry about.
The vast majority of those crashes would be around airports, though, so just keep away from the airports, and your chance of being clobbered by a black box goes down significantly.
It’s almost comical to think about major airports having a half dozen crashes a day. At least the AI won’t have any trouble sleeping at night.
Even further: the biggest problem with AI, and thus the biggest factor in deciding its suitability for a task, is that its failures are distributed uniformly with respect to consequence; it is just as likely to err in ways with grievous consequences as in ways with minor ones.
In other words, unlike humans, who actively try to avoid making the nastiest and deadliest mistakes, when AI fails, it can fail just as easily in the most horrible and deadly ways as in the most minor of ways.
That’s why you have lots of instances of LLMs giving advice that to humans is obviously dangerous, like telling people to put glue on pizza to make it look good, or telling those with suicidal thoughts to kill themselves. Unlike humans, AI has no mechanism to detect “obviously dangerous” in an output it’s about to produce and generate a different output instead.
This is why using AI to generate fluff filling for e-mails is fine, but it’s not fine in systems where errors can easily cost lives.
Sarcasm:
But think of the insurance people! Look at how many insurance claims are waiting to be denied and robbed!
More importantly, we can justify every other profit increase, because our economies are built on literal exploitation, just as they were a couple hundred years ago!
Modern exploiting problems require modern idol solutions.
Sadly, there is a part of the population that will view that as a valid argument. Faux News, Newsmax, OAN, and all the conservative talk radio will feed it to them.
Oh helllll to the nawh nawh nawh
Prompt unclear, plane stuck in skyscraper.
“you’re absolutely right! I have revised my response, here are some better instructions…”
Revisions unclear… A second plane has struck a skyscraper.
The next revision led to invading Antarctica to hold them responsible for the attack on the US.
Oh wait, this already basically happened with tariffs:
https://www.yahoo.com/news/fact-check-trumps-tariffs-still-100000498.html
Fuck AI for this, but there’s a lot of room in ATC for further automation. To be perfectly honest, if the planes can more or less land themselves, and they’re all fly-by-wire, I could see nearly automating the whole thing. Phase it in over a 10-year plan… computers HAVE to be able to be better at this than one unpaid, overworked, under-rested controller.
I’m all for automation if it works and if it improves safety but as far as I know they haven’t proven that yet. I’d like to see an AI air traffic controller running in a simulation for many many years of simulation time first before we would even begin to talk about implementing it in real hardware.
That’s the problem. No one wants to test Ai like that. Just dive right in and use it, I’m sure it’s great!
Could test it out at small, low-volume/non-commercial airports first & go from there.
I’d start with computer sims before putting people’s lives on the line, and then go from your suggestion.
And when someone dies, and they will, we decide to roll it out everywhere? As long as there’s profit in it!
The question is whether the AI or the human is more prone to mistakes. It’s hard to determine that without real-world tests, unfortunately.
Like self driving cars. Of course they’re going to be involved in crashes where people die, but humans are such terrible drivers that the computers are better (except for Tesla which just has mislabeled lane assist)
Counterpoint: just look at the recent Air Canada crash, where a controller let a fire truck cross into the path of a landing aircraft.
Planes may have all this technology but that only involves what’s happening in the air, not on the ground.
Now maybe all ground crew could have vehicles equipped with transponders and tracked as well, but there are also incidents of people randomly ending up on the runways / taxiways, or animals, or non airport vehicles.
With the amount of AI powered cameras being put up around cities around the world… Yea they could use tech like that to monitor runways too
AI is fine for this… assuming we’re talking about a specifically trained machine learning model that is actually made to handle ATC and not just shoehorning an LLM into a job it was never intended to do.
Honestly, I’d put it at too high a risk for weighted models. We have tons of pathfinding navigation code out there that could solve this outright on a Raspberry Pi :) not that I’d recommend the Pi…