Next month, AI will enter the courtroom, and the US legal system may never be the same.
An artificial intelligence chatbot — technology programmed to answer questions and hold a conversation — is expected to advise two individuals fighting speeding tickets in courtrooms in undisclosed cities. The two will wear wireless headphones, which will relay what the judge says to the chatbot run by DoNotPay, a company that typically helps people fight traffic tickets through the mail. The headphones will then play the chatbot's suggested responses to the judge's questions, which the individuals can choose to repeat in court.
Welcome to the NicFab Community Lemmy instance!
Please be kind.
All communities in this space should be at least related to Privacy and innovation.
This is a community space for projects and users interested in privacy, data protection, cybersecurity, and innovative solutions.
You can also reach this Privacy Community on Matrix by clicking here.
Please abide by the code of conduct.
To report a CoC violation, message one of the admins.
Here you can find our Privacy Policy.
I think there are some real dangers in having non-humans involved in court proceedings.
First, there's the obvious slippery slope: first your lawyer is an AI, then the prosecutor is an AI, then the judge is an AI, and suddenly we're living entirely under the dictates of an AI system arguing with itself.
Second, there's the fact that no AI is a human. This might not seem important, but there's a lot of truth a human can perceive that an AI can't. The law isn't computer code; it's extremely squishy. That squishiness matters to the law being just, but it also matters because you can't just enter text into a prompt and expect the system to produce the results you want. There's a big difference between the same question asked by a judge who appears convinced by your argument and one asked by a judge who appears skeptical of it.
You might argue that it's just traffic violations, but there's a slippery slope there as well. First it's traffic violations; eventually poor people might be relying on the AI for serious crimes, because by degrees you go "oh, it's just a traffic violation; oh, it's just a low-level possession charge; oh, it's just for crimes with a guilty plea anyway; oh, it's just a tort claim; oh, it's just a real estate case…"
Another thing: as AI expands, you suddenly create a target for hackers. If the stakes of a court case are high enough, it might be worth paying someone to break into the AI and sabotage it so you win the case.
I agree with you. The topic is complex and deserves much more space to be explored in depth. Some of the issues relate, for example, to bias: there have been several cases decided wrongly because of AI bias, especially in the USA.