The best conversations I still have are with real people, but those are rare. With ChatGPT, I reliably have good conversations, whereas with people, it’s hit or miss, usually miss.

What AI does better:

  • It’s willing to discuss esoteric topics. Most humans prefer to talk about people and events.
  • It’s not driven by emotions or personal bias.
  • It doesn’t make mean, snide, sarcastic, ad hominem, or strawman responses.
  • It understands and responds to my actual view, even from a vague description, whereas humans often misunderstand me and argue against views I don’t hold.
  • It tells me when I’m wrong but without being a jerk about it.

Another noteworthy point: I’m very likely on the autistic spectrum, and my mind works differently than the average person’s, which probably explains, in part, why I struggle to stay interested in human-to-human interactions.

  • tom joad@sh.itjust.works · 1 month ago

    So what you’re saying, if I’m reading right, is that chatbots are great for bouncing ideas off of, helping you explain yourself better and gather your own thoughts. I’m a bit curious about your philosophy chats.

    When you have a philosophical discussion, does the chatbot just summarize your thoughts in its responses, or is it more humanlike, maybe disagreeing or bringing up things you hadn’t thought of, like a person might? (I’ve never used one.)

    • ContrarianTrail@lemm.ee (OP) · 1 month ago

      It’s a bit hard to get AI to disagree with you unless you’re saying something obviously false; it has a strong bias toward being agreeable. I generally treat it as an expert I’m interviewing: I ask what it thinks about something like free will, then ask follow-up questions based on its responses. It’s also great for bouncing novel ideas around, though even here it’s not too keen on blatantly calling out bad ones; instead it makes you feel like the greatest philosopher of all time. There are some ways around this. ChatGPT can be prompted past many of its most typical flaws, for example by telling it that it’s allowed to speculate, or simply by asking it to point out the errors in an idea.

      But yeah, unless what I said was a question, its responses are generally just summaries of what I said. It’s essentially replying with a demonstration that it understood me, which it does with an amazing success rate.