This is a Reddit post, not my own work.
Rescue Mission Links
Elsevier and the USDOJ have declared war against Sci-Hub and open science. The era of Sci-Hub and Alexandra standing alone in this fight must end. We have to take a stand with her.
On May 7th, Sci-Hub’s Alexandra Elbakyan revealed that the FBI has been wiretapping her accounts for over 2 years. This news comes after Twitter silenced the official Sci_Hub Twitter account because Indian academics were organizing on it against Elsevier.
Sci-Hub itself is currently frozen and has not downloaded any new articles since December 2020. This rescue mission is focused on seeding the article collection in order to prepare for a potential Sci-Hub shutdown.
Alexandra Elbakyan of Sci-Hub, bookwarrior of Library Genesis, Aaron Swartz, and countless unnamed others have fought to free science from the grips of for-profit publishers. Today, they do it working in hiding, alone, without acknowledgment, in fear of imprisonment, and even now wiretapped by the FBI. They sacrifice everything for one vision: Open Science.
Why do they do it? They do it so that humble scholars on the other side of the planet can practice medicine, create science, fight for democracy, teach, and learn. People like Alexandra Elbakyan would give up their personal freedom for that one goal: to free knowledge. For that, Elsevier Corp (RELX, market cap: 50 billion) wants to silence her, wants to see her in prison, and wants to shut Sci-Hub down.
It’s time we sent Elsevier and the USDOJ a clearer message about the fate of Sci-Hub and open science: we are the library, we do not get silenced, we do not shut down our computers, and we are many.
If you have been following the story, then you know that this is not our first rescue mission.
A handful of Library Genesis seeders are currently seeding the Sci-Hub torrents. There are 850 Sci-Hub torrents, each containing 100,000 scientific articles, for a total of 85 million scientific articles (77 TB). This is the complete Sci-Hub database. We need to protect this.
Wave 1: We need 85 datahoarders to store and seed 1 TB of articles each, 10 torrents apiece. Download 10 random torrents from the scimag collection, then load the torrents into your client and seed for as long as you can. The articles are indexed by DOI and stored in zip files.
Wave 2: Reach out to 10 good friends and ask them to grab just 1 random torrent each (100 GB). That’s 850 more seeders, one per torrent. We are now the library.
Final Wave: Development of an open-source Sci-Hub. freereadorg/awesome-libgen is a collection of open-source achievements built on the Sci-Hub and Library Genesis databases. Open-source decentralization of Sci-Hub is the ultimate goal here, and it begins with the data, but it is going to take years of developer sweat to carry these libraries into the future.
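For anyone scripting the Wave 1 step, a minimal sketch of "grab 10 random torrents" might look like this. The filename pattern below is only an assumption for illustration; use the actual torrent index you download from the scimag collection:

```python
import random

# Hypothetical filenames for the 850 scimag torrents; each torrent
# covers a block of 100,000 articles. Swap in the real torrent list
# from whatever index you download.
torrents = [
    f"sm_{i * 100_000:08d}-{i * 100_000 + 99_999:08d}.torrent"
    for i in range(850)
]

# Pick 10 torrents at random; random.sample draws without replacement,
# so you never grab the same torrent twice.
my_share = random.sample(torrents, 10)
for name in my_share:
    print(name)
```

If every volunteer picks independently at random like this, 85 volunteers give roughly even coverage across all 850 torrents without any central coordination.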
Heartfelt thanks to the /r/datahoarder and /r/seedboxes communities, seedbox.io and NFOrce for your support for previous missions and your love for science.
This is actually pretty serious. I’m not sure how government investigative agencies work, so I’m not quite sure why the recent disclosure order would indicate such a high risk of the data being taken offline, but it’s much better to be safe than sorry.
If you have a server at home, or even just your desktop or laptop, go download a few of the files, as many as you’re comfortable with. I think there are plans to later unzip all of the files and, after properly structuring them, upload them to IPFS, which makes more sense in terms of data preservation, ease of availability, and display/access.
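The unzip-and-restructure step could be sketched like this. The zip layout and DOI-based member names are assumptions, not the project's confirmed format; the resulting tree could then be published with the real `ipfs add -r` CLI command:

```python
import zipfile
from pathlib import Path

def extract_articles(torrent_dir: str, out_dir: str) -> int:
    """Unzip every article archive found in a seeded torrent directory.

    Assumed layout: the torrent directory holds .zip files whose members
    are article PDFs named by DOI (with '/' escaped). Returns the number
    of extracted members. The resulting tree could then be published
    with e.g. `ipfs add -r <out_dir>`.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for zpath in sorted(Path(torrent_dir).glob("*.zip")):
        # One sub-directory per zip keeps the extracted tree browsable.
        with zipfile.ZipFile(zpath) as zf:
            zf.extractall(out / zpath.stem)
            count += len(zf.namelist())
    return count
```

Extracting into one sub-directory per zip keeps directory sizes manageable and makes it easy to verify each archive independently before publishing.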
You probably already know this, but Sci-Hub is of paramount importance to researchers, scientists, students, doctors, etc. worldwide. It’s very important that this data is not lost to the greedy scientific journals that frankly stole this work from the people who actually performed the research, peer review, and so on.
This effort will also probably lead to additional clones and Sci-Hub-esque projects, which would help decentralize this data even further and hopefully make the job of the journals and agencies harder by splitting their focus across multiple entities.
I would sticky this post, tbh. It’s pretty important.
Yeah, I will try to help with what I can tomorrow, but it’s not much since I only have about 25-50 GB free right now. Still, something is something.
@email@example.com what do you think about stickying it?