They/Them
Network Guardian Angel. Infosec.
Antispeciesist.
Anarchist.
You should hide scores on Lemmy. They are bad for you.
https://github.com/gomods/athens/blob/723c06bd8c13cc7bd238e650a559258ff7e23142/pkg/module/go_get_fetcher.go#L145-L148
https://github.com/gomods/athens/blob/723c06bd8c13cc7bd238e650a559258ff7e23142/pkg/module/go_get_fetcher.go#L163-L165
So, two things:
(I would appreciate it if the downvoters were able to express their disagreement with words. Maybe I’m wrong, but then, please do me the favor of explaining to me how. Also, I’m not a SourceHut hater; I even give money to Drew every month, because I like the idea of SourceHut. I just think Drew is wrong on this matter.)
I don’t think that a robots.txt file is the appropriate tool here.
First off, robots.txt files are just hints for respectful crawlers. Go proxies are not crawlers. They are just that: caching proxies for Go modules. If all Go developers were to use direct mode, I think SourceHut’s traffic would be higher, not lower.
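To make the proxy-versus-direct distinction concrete, here is a sketch of the `GOPROXY` settings a Go developer would use (the module path `git.sr.ht/~user/module` is hypothetical, purely for illustration):

```shell
# Default mode: module requests go to the Google proxy's cache first,
# and the origin forge (e.g. git.sr.ht) is only hit on a cache miss:
export GOPROXY="https://proxy.golang.org,direct"
# go get git.sr.ht/~user/module   # (network call, shown for illustration)

# Direct mode: every developer's machine clones from the forge itself,
# which is why universal direct mode would mean MORE origin traffic, not less:
export GOPROXY=direct
# go get git.sr.ht/~user/module
```
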
Second, let’s assume that Go devs would be willing to implement something to honor robots.txt or Retry-After indications. Would attackers? Of course not.
If legitimate, albeit quite aggressive, traffic is DDoSing SourceHut, that is primarily a SourceHut issue. A 503 response does not have to be respected by the client, because there is nothing for the client to respect: the server simply chooses to say “I don’t want to answer that request. Goodbye.” This is certainly not a response that is costly to generate. Now, if the server tries to honor all requests and is poorly optimized, then the fault lies with the server, not the client.
I have not read the Go proxy implementation in detail, to be truthful. I don’t know how it would react if SourceHut answered with a 503 status code every now and then, when the fetching strategy is too aggressive. I would simply guess that it would retry later and serve Go developers a stale version of the module.
I don’t get it. Public endpoints are public. Go proxies (there are alternatives to direct mode or Google’s proxy, such as Athens) may legitimately query these public endpoints, as aggressively as they want. That’s not polite, but that’s how the open Internet works and always has.
I don’t get why SourceHut does not have any form of DDoS protection or rate limiting. I mean, the HTTP 503 status code and the Retry-After header are standard HTTP. That Drew chose a public outcry over implementing basic application-level DDoS protection seems a very questionable strategy. What would happen to the SourceHut content if attackers launched a DDoS against SourceHut tomorrow? Would Drew post another public outcry on their blog?
SourceHut is still in alpha. This feels like a sign that it is still not mature enough to be a prod service for anyone.
Very good question. Thank you for asking.
To sign documents, I would recommend using signify or minisign.
To encrypt files, I guess one could use age.
If you need a crypto library, I would recommend NaCl or libsodium. In Go, I use nacl a lot. If you need to encrypt or sign very large files, I wrote a small library based on nacl.
Emails are the tricky part. It really depends on your workflow. When I was working for a gov infosec agency, we learned never to use any integrated email crypto solution. Save the blob, then decrypt the blob in a secure environment. This helps significantly against leaks, and against creating an oracle for the attacker’s benefit.
For data containers, I would use dm-crypt and dm-verity + a signed root. But that’s just me and I would probably not recommend this to other people :)
OpenPGP is rarely used in messaging protocols, but if it were, I would probably advise leveraging a double-ratchet library.
One example of issues with OpenPGP implementations that are a direct consequence of the poor format design: https://www.ssi.gouv.fr/uploads/2015/05/format-Oracles-on-OpenPGP.pdf
Does anyone know if and how the private key is secured during cloud sync? Do they have access to it, or is it encrypted before sync using the… user password?
Also, how is it different from Duo Push? (edit: I am talking workflow, here. I know about the FIDO part)
I don’t think this argument is valid in a world where a global observer can already distinguish Tor traffic using timing and volume analysis.
Today, the best defense a VPN has to offer, privacy-wise, is protection against observers close to the victim, on a hostile local network. Self-hosted VPNs can do that as well as any paid VPN service. The only reason I’m using a paid service myself is to circumvent geo-restrictions. That’s basically the only valid use case.
I agree with all of your points :)
Can you elaborate on how this is FUD, please?
Introducing socialist millionaire verification to ease fingerprint verification does not seem like a bad idea.
Using phone numbers as identifiers is a well-known Signal flaw.
And while CBC is indeed less robust than GCM against certain types of attacks, it is true that up-to-date CBC implementations have no known vulnerability. Yet, would you claim that TLS 1.3 is FUDing by dropping CBC support as well?
I am not promoting mesibo, which I had never heard of before. I am just trying to understand how this criticism of Signal would be invalid, or FUD.
Well, that’s not entirely wrong: the website owner is responsible for contracting with Cloudflare in the first place.