I keep picking instances that don’t last. I’m formerly known as:
@EpeeGnome@lemm.ee
@EpeeGnome@lemmy.fmhy.net
@EpeeGnome@lemmy.antemeridiem.xyz
@EpeeGnome@lemmy.fmhy.ml

  • 0 Posts
  • 17 Comments
Joined 7 months ago
Cake day: June 5th, 2025


  • When I was in college, my network programming class had a semester-final coding assignment. I forget now what all it was supposed to do, but I do recall my friend in the same class spent a week of free time writing his. I forgot about the assignment completely until he asked me how it was going for me a few hours before it was due. With no other options, I simply punched in an example program from the textbook, replaced about 10 lines in the middle, and tested.

    It seemed to work. So, I very clearly labeled which code was mine, and which was copied, so that I technically wasn’t doing a plagiarism. Then I turned it in, and hoped for the best. I got a 100 on it. My friend was pissed, because he only got a 99.



  • AI doesn’t have the ability to critically think

    That is absolutely correct, and the deeper problem is it typically “writes” in a tone that fools people into feeling like it can do so.

    just outputs what it can find on the internet

    That is what this one, the Google search overview, is set up to do. Other LLMs don’t, or may only do so when prompted to.

    What they all do is analyze the patterns of all the words that have already been input, such as the initial system prompt, the user’s prompt, any sources it’s referencing and any replies it’s already generated, and then it uses that to predict what words ought to come next. It uses an enormous web of patterns pulled from all of its initial training data to make that guess. Patterns like a question is usually followed by an answer on the same topic, a sentence’s subject is usually followed by a predicate, writing tone usually doesn’t change, etc. All the rules of grammar it follows and all the facts it “knows” are just patterns of meaningless symbols to it. Essentially, no analysis, logic, or comprehension of any kind is part of its process.
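    To make that "predict what words ought to come next" idea concrete, here's a toy sketch. Real LLMs use neural networks over token embeddings, not word counts, but the core loop is the same: given the words so far, score candidate next words using patterns from training data and pick one. The corpus here is invented for illustration.

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up "training data".
    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat ate the fish ."
    ).split()

    # Count how often each word follows each word (a bigram model --
    # the simplest possible version of "learning patterns from text").
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat" -- it follows "the" most often here
    print(predict_next("sat"))  # "on" -- "sat" is always followed by "on"
    ```

    Notice the model has no idea what a cat is; it only knows which symbols tend to follow which other symbols, which is exactly the point above.
    
    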


  • EpeeGnome@feddit.online to People Twitter@sh.itjust.works · Oh thank god (5 days ago)

    This is the Google search overview AI. It just reads the top results and summarizes them together. You don’t directly prompt it; it’s already prompted to just do that. The problem with that arrangement, as demonstrated here, is that it will confidently and uncritically summarize parody, idiotic rambling, intentional misinformation, and any other sort of nonsense the search algorithm pulls up.



  • “AI should always be a choice—something people can easily turn off.” “It will evolve into a modern AI browser and support a portfolio of new and trusted software additions.”

    How does he not get how contradictory these positions sound? It’s a real missed opportunity to brand themselves as the browser without AI bullshit and gain users who want to get away from that crap. Sure, they promise it’ll have an off switch, but even if that’s true, they’re still wasting a lot of their very limited budget pursuing it. Really shows where their priorities are.


  • My high school only had one pay phone. It had a bad connection in the handset, so sound cut in and out constantly. People rarely ever bothered making calls on it. The coin return also had some sort of obstruction inside it. If you inserted a quarter and then hit the coin return lever, you’d hear it fall, but it didn’t actually come out. When enough quarters built up, though, they would all flood out into the return tray at once. Naturally, it got used as a slot machine. Drop in a quarter, pull the tiny lever, and see if you hit the jackpot.


  • If all 6 got the same answer multiple times, then that means your query very strongly correlated with that reply in the training data used by all of them. Does that mean it’s therefore correct? Well, no. It could mean there were a bunch of incorrect examples of your query that they all used to come up with that answer. It could mean that the examples they’re working from seem to follow a pattern your problem fits into, but the correct answer doesn’t actually fit that seemingly obvious pattern. And yes, there’s a decent chance it could actually be correct. The problem is that the only way to rule out those other, still quite likely possibilities is to actually do the problem, at which point asking the LLM accomplished nothing.
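    A toy demonstration of why agreement isn’t evidence of correctness when the models share training data (the numbers and the “flawed examples” here are invented):

    ```python
    import statistics

    # Pretend the shared training data contains several flawed examples
    # all claiming that 7 * 8 = 54 (the truth is 56).
    shared_training_answers = [54, 54, 54]

    # Six "models" each learn the most common answer from that same data.
    models = [statistics.mode(shared_training_answers) for _ in range(6)]

    print(models)                 # unanimous agreement across all six...
    print(models[0] == 7 * 8)     # ...and unanimously wrong
    ```

    Unanimity here measures how strongly the answer correlates with the shared data, not whether it’s true.
    
    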


  • Wow, this takes me back. This was one of my favorite sets to play with, and I had forgotten it. I was always a little disappointed that the magnet arm couldn’t actually reach the little magnet cargo box in the base. I usually put the robot on the torso-docking hover-platform, and the human in the head/spaceship, on account of the human being safer in an enclosed cockpit, while the robot was less at risk riding exposed to the vacuum of space. I also added some upgraded thrusters to the head from some other set so it could fly faster when detached.