See here: https://notes.nicfab.it/en/pages/about/
The subjective scope of the NIS 2 Directive is complex and is governed by Article 2. Our interpretation, derived from a reading of the specific provisions, is described in the article, which clarifies the meaning of the topic.
Letting regulators nose under the tent is bad. It might feel good to gotcha Twitter and Facebook, but they’re always coming for us next. :(
Indeed! It’s a dangerous game, and bigger than anyone realizes. At certain levels there are great pressures, and sometimes also a lack of technical competence.
Certainly, anyone who exposes self-hosted services should know something about security. However, NIS 2 deals with other issues, above all the one covered in the article. Given the declared intention of the European institutions to achieve European digital sovereignty and to intervene in the field of cybersecurity, the framework of the NIS 2 Directive seems to cover every domain, including private individuals who make services available for free, thus running the risk of imposing heavy limitations. There would be much to discuss…
lol, lmao. Will they ever learn? Relatedly, get into webauthn. And don’t make it someone else’s responsibility.
Indeed! 🤣 MFA/2FA, but IMHO the best overall is FIDO2
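Since WebAuthn came up: FIDO2’s web-facing half is WebAuthn, which the browser exposes through the navigator.credentials API. A minimal registration sketch in TypeScript (the rp and user values here are placeholders; in a real flow the challenge comes from the server, and the returned attestation must be verified server-side):

```typescript
// Minimal WebAuthn registration sketch (browser-side).
// The rp/user values are illustrative placeholders, not a real deployment.
async function registerPasskey(challenge: Uint8Array): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge,                                  // random bytes issued by the server
      rp: { name: "Example Service" },            // relying party (your site)
      user: {
        id: new TextEncoder().encode("user-123"), // opaque user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
      authenticatorSelection: { userVerification: "preferred" },
      timeout: 60_000,
    },
  });
}
```

The private key never leaves the authenticator; the server only ever stores the public key, which is what makes the login phishing-resistant.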
If possible, always self-hosted.
I think there are some real dangers of having non-humans involved with court proceedings.
First, there’s the obvious slippery slope: first your lawyer is an AI, then the prosecutor is an AI, then the judge is an AI, and suddenly we’re living entirely off the dictates of an AI system arguing with itself.
Second, there’s the fact that no AI is a human. This might not seem important, but there’s a lot that a human can perceive and an AI can’t. The law isn’t computer code; it’s extremely squishy. That squishiness matters for the law being just, and it also matters because you can’t just enter text into a prompt and expect to get the result you want out of the system. There’s a big difference between the same question asked by a judge who appears convinced by your argument and by a judge who appears skeptical of it.
You might argue that it’s just traffic violations, but there’s a slippery slope there as well. First it’s traffic violations; eventually poor people might end up using the AI for serious crimes, because by degrees you go: “oh, it’s just a traffic violation; oh, it’s just a low-level possession charge; oh, it’s just for crimes with a guilty plea anyway; oh, it’s just a tort claim; oh, it’s just a real estate case…”
Another thing: as AI expands, you suddenly get a potential risk from hackers. If you have a really important court case, it might be worth paying someone to break into the AI and sabotage it so you win the case.
I agree with you. The topic is complex and would deserve much more space to be explored in depth. Some issues relate, for example, to bias; there have been several cases decided wrongly because of AI bias, especially in the USA.
I don’t know whether the encryption protocol used by Signal represents the state of the art. There are probably other valid encryption protocols; I am referring, for example, to the one Matrix is based on (Olm/Megolm).
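For what it’s worth, both the Signal protocol and Matrix’s Olm build on the Double Ratchet idea: each message key is derived from an advancing chain key, so compromising the current state doesn’t reveal past keys. A minimal conceptual sketch of the symmetric ratchet step in TypeScript for Node.js (the key labels, lengths, and AES-GCM framing are illustrative assumptions, not Signal’s or Olm’s actual format; the DH ratchet half is omitted entirely):

```typescript
// Conceptual sketch of a symmetric key ratchet, the idea behind the
// Double Ratchet used by Signal and Matrix's Olm. Illustrative only.
import { hkdfSync, randomBytes, createCipheriv } from "node:crypto";

// One ratchet step: derive the next chain key and a one-time message key
// from the current chain key. Old chain keys are discarded, so past
// message keys cannot be recomputed (forward secrecy).
function ratchetStep(chainKey: Buffer): { nextChainKey: Buffer; messageKey: Buffer } {
  const nextChainKey = Buffer.from(hkdfSync("sha256", chainKey, Buffer.alloc(0), "chain", 32));
  const messageKey = Buffer.from(hkdfSync("sha256", chainKey, Buffer.alloc(0), "message", 32));
  return { nextChainKey, messageKey };
}

// Usage: encrypt two messages, advancing the ratchet for each one.
let chain = randomBytes(32); // in a real protocol this comes from a DH handshake
for (const text of ["hello", "world"]) {
  const { nextChainKey, messageKey } = ratchetStep(chain);
  chain = nextChainKey; // previous chain key is forgotten
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", messageKey, iv);
  const ciphertext = Buffer.concat([cipher.update(text, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // GCM authentication tag
  console.log(iv.toString("hex"), ciphertext.toString("hex"), tag.toString("hex"));
}
```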
Nothing escapes your notice. I usually talk about app-related issues. The choice of one solution or another is based on trust, and personally, after several trials with different solutions, I trust Apple. I am certainly aware that Apple is one of the big players and is not exempt from criticism, but the policy it has adopted in recent years is user-friendly. It is worth mentioning that in 2018, during the International Conference of Data Protection and Privacy Commissioners, Tim Cook said he wished the U.S. had a privacy regulation like the GDPR. This is not the appropriate venue, but your comment will prompt me to post something on the point you raise.
Well, that sounds huge. I wonder what consequences this will have. Only fines or actually more privacy in the future?
It isn’t easy to make forecasts. It’s an appropriate step, indeed; we should pay attention to what comes next.
We retrieved the article from the Internet; we didn’t write it. That news seemed interesting to us. Feel free to do what you want, even downvote it.
It is really unbelievable how people continue to use WhatsApp, especially for work (which is very serious), without bothering to check whether data protection regulations are being followed, especially by the controller (that is, WhatsApp). What has happened shows how high the risks are for the personal data of users who are not given control over their data. Join our awareness campaign on the conscious and correct use of IM apps that respect data protection and privacy.
I agree with you. Most people do not know the Fediverse.
I think the prerequisite is to comply with the law. Corporations have to respect the law like everyone else. It can be considered “normal” for lawyers or consultants to identify pathways that achieve a company’s goals without violating the legislation; this is legal. Determining that a behavior is illegal is up to a judge, based on evidence.
That may be possible, but they should comply with the GDPR or other applicable legislation.
All companies collect data, including personal data. They should respect privacy legislation (in the EU, the GDPR) and users’ rights. Notably, personal data should be processed according to the purposes stated in the information provided to clients. I don’t think Apple would expose itself to the risk of simply misusing personal data.
Hi, thank you for writing to us. At the moment, this community hosts content in both Italian and English. The content in Italian is scarce compared to that in English. Anyway, we will consider your proposal.
I agree with you. Thank you for suggesting that resource.
There are several issues around AI-generated content and copyright. In the USA, a class action lawsuit has already been filed over the most relevant issues related to AI-generated content in the context of visual art. See https://stablediffusionlitigation.com