Next month, AI will enter the courtroom, and the US legal system may never be the same.

An artificial intelligence chatbot, technology programmed to respond to questions and hold a conversation, is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear wireless headphones, which will relay what the judge says to the chatbot run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The headphones will then play the chatbot’s suggested responses to the judge’s questions, which the individuals can then choose to repeat in court.
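
DoNotPay has not published how the relay works, but the description above amounts to a three-stage loop: speech-to-text on the judge’s words, a chatbot query, and text-to-speech into the headphones. Below is a minimal, hypothetical sketch of that loop; every function name is a placeholder assumption, not DoNotPay’s actual code.

```python
# Hypothetical sketch of the courtroom relay loop described above.
# All three backends are stubs to be swapped for real services.

from typing import Iterable, List


def transcribe(audio_chunk: bytes) -> str:
    """Speech-to-text stub: turn courtroom audio into text."""
    raise NotImplementedError("plug in a real speech-recognition backend")


def suggest_response(judge_said: str, history: List[str]) -> str:
    """Chatbot stub: ask a language model for a suggested reply."""
    raise NotImplementedError("plug in a real chatbot backend")


def play_in_earpiece(text: str) -> None:
    """Text-to-speech stub: read the suggestion into the headphones."""
    raise NotImplementedError("plug in a real text-to-speech backend")


def relay_loop(audio_stream: Iterable[bytes]) -> None:
    """Relay what the judge says to the chatbot and play back its suggestion.

    The defendant hears the suggestion privately and decides whether to
    repeat it aloud; the loop itself never addresses the court.
    """
    history: List[str] = []
    for chunk in audio_stream:
        judge_said = transcribe(chunk)                      # 1. capture the judge's words
        history.append(judge_said)
        suggestion = suggest_response(judge_said, history)  # 2. query the chatbot
        play_in_earpiece(suggestion)                        # 3. play it to the wearer
```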

  • sj_zero
    1 year ago

    I think there are some real dangers of having non-humans involved with court proceedings.

    First, there’s the obvious slippery slope: your lawyer is an AI, then the prosecutor is an AI, then the judge is an AI, and suddenly we’re living entirely under the dictates of an AI system arguing with itself.

    Second, there’s the fact that no AI is a human. This might not seem important, but there’s a lot of truth that a human can perceive and an AI can’t. The law isn’t computer code; it’s extremely squishy. That squishiness is important to the law being just, but it also means you can’t just enter text into a prompt and expect to get the results you want out of the system. There’s a big difference between the same question asked by a judge who appears convinced by your argument and by a judge who appears skeptical of it.

    You might argue that it’s just traffic violations, but there’s a slippery slope there as well. First it’s traffic violations; eventually you might have poor people relying on the AI for serious crimes, because the scope creeps by degrees: “oh, it’s just a traffic violation; oh, it’s just a low-level possession charge; oh, it’s just for crimes with a guilty plea anyway; oh, it’s just a tort claim; oh, it’s just a real estate case…”

    Another thing: as AI expands, you suddenly get a potential risk from hackers. If a court case is important enough, it might be worth paying someone to break into the AI and sabotage it so you win the case.

    • nicfabOPM
      1 year ago

      I agree with you. The topic is complex and would deserve much more space to explore in depth. Some of the issues relate, for example, to bias: there have been several cases decided wrongly because of AI bias, especially in the USA.