

We’ve all been there. If you do this stuff for a living, you’ve done that way more than once.
You’d think you’d learn from your mistakes.
Yes, that’s what you’d think. And then you’ll sit in front of a blank terminal once again after making some trivial mistake yet again.
A friend of mine (working at a decent-sized ISP 20+ years ago) developed a habit of setting up a scheduled reboot for everything 30 minutes out, no matter what he was about to do. The hardware back then (I think it was mostly Cisco) had a ‘running config’ and a ‘stored config’, which were two separate instances. Log in, set up the scheduled reboot, do whatever you’re planning to do, and if you mess up and lock yourself out, the system will restore the previous config in a while and then you can avoid the previous mistake. Rinse and repeat.
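On Cisco IOS the pattern looks roughly like this (a sketch from memory; exact syntax varies by platform and software version):

```
! Schedule an automatic reload 30 minutes from now
reload in 30

! ...do your risky changes to the running config...

! Still able to log in afterwards? Cancel the reload
! and persist the changes to the stored config.
reload cancel
copy running-config startup-config
```

If you lock yourself out, you just wait out the 30 minutes and the box comes back with the old startup config.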
And, personally, I think that’s one of the best ways to differentiate actual professionals from the ‘move fast and break things’ crowd. Once you’ve locked yourself out of a system literally halfway across the globe one time too many, you’ll eventually learn to think about the next step and the failovers. I’m not that much of a network guy, but I have shot myself in the foot enough times that whenever there’s a dd, mkfs or something similar on a root shell, I automatically pause for a second to confirm the command before hitting enter.
And while gaining experience teaches you how to avoid the pitfalls, the more important part (at least for me) is learning to think ahead. The constant mindset of thinking about processes, connectivity, what you can actually do if you fuck up and so on becomes a part of your workflow. Accidents will happen, no matter how much experience you have. The really good admins just know that something will go wrong at some point in the process and build things so that when you fuck up you still have access to fix it, instead of calling someone 6 timezones away in the middle of the night to clean up your mess.
Supporting a wide array of all kinds of hardware is pretty much the thing with Debian. A select few have maintained the compatibility code up to the next release. For me personally that doesn’t do much, but I’m writing this on a machine released in 2010, so if they suddenly dropped support for anything over 10 years old, I’d be out of luck with the machines scavenged from work that let me run a simple desktop anywhere I wish, for cheap.
25 years might be pushing it a bit, but maybe some poor kid in India could use Debian (or a computer at all) with a setup built from parts dug out of dumpsters or something like that. Or maybe there are a few active youngsters in some other poor country who have cut their teeth keeping old stuff running because they couldn’t afford anything better, and they’re building the next Meta in Latvia.
Who knows. The main point is that keeping that support going really isn’t taking resources away from anything else. People maintain what they want to maintain, and dropping support for anything as a project would only push those few away from Debian instead of shifting their focus to the nouveau driver, 4090 Ti support or other fancy new stuff.
Debian is not a centrally led organization where any single individual or team could mandate anything. If someone thinks it’s important enough to keep Debian running on an old SPARC, they can do that (as long as it meets the commonly agreed quality standards and other criteria), and that’s it.
And no, I’m not a member of the project, just a happy user since 2000 or somewhere near that, so I’m not too familiar with how the community makes decisions, but I think I’m not too far off.
Yep. Even if the data I’m backing up doesn’t really change that often. Perhaps I should start backing up files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.
The AMD Geode they’re using as an example was released in 1999. So, if you happen to have 25+ year old hardware still running, the latest Debian might not work for you.
Does your storage include any kind of RAID? If not, that’s something I’d personally add to the mix to avoid service interruptions. Also, 32 GB of RAM is not much, so don’t use ZFS on Proxmox; it eats up your memory, and once you run out, everything becomes stupidly slow (personal experience speaking here, my Proxmox server has 32 GB as well).
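If you end up with ZFS anyway, you can at least cap the ARC so it doesn’t eat everything. A minimal sketch (the 8 GiB value is just an example, tune it for your box):

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 8 GiB (value is in bytes)
options zfs zfs_arc_max=8589934592
```

Then refresh the initramfs (update-initramfs -u) and reboot for it to take effect.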
Also, that’s quite a lot of stuff to maintain, but you do you. Personally I wouldn’t want a stack that big for my everyday needs, but I have a wife, kids, kids’ hobbies and a ton of other stuff going on, so I have barely enough personal capacity to run my own Proxmox, Pi-hole, Immich and Home Assistant, and none of those are in perfect condition. Especially the HA setup badly needs some TLC.
And then there’s the obvious. A personal mail server on a home-grade uplink is a beast of its own to manage, and if you don’t really know what you’re getting into, I’d recommend against it. And I’m rooting for every mail server which is not owned by Alphabet/Microsoft/Apple/etc. It’s just a complicated thing to do right, and email is quite an essential part of everyday life today, so be aware. If you know what’s coming (or are willing to eat the mistakes and learn from them), then by all means, go for it. If not, I’d suggest paying someone to make it happen.
And then the backups. I’ve made the mistake a few times of thinking it’d be fine to set up backups at some point in the future. And that has bitten me in the rear. You either have backups in the pipeline coming Very Soon™ or you lose your data. And even if they’re coming Very Soon, you’re still risking your data.
Plus, with backups, if you don’t test recovery from them, then you don’t have backups. Although for a home gamer a full blank-slate recovery test is often a bit much to ask, so I’ve at least settled on a scenario where I know for sure I can recover from any disaster happening in the home lab, even without testing it end to end, as I don’t have enough spare hardware to run that test fully.
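Even a partial restore test beats nothing, though. A minimal sketch of what I mean, assuming restic as the backup tool (swap in whatever you actually use; the paths are made up):

```
# Verify repository structure and read back a sample of the actual data
restic -r /mnt/backup/repo check --read-data-subset=5%

# Restore one known file to a scratch dir and eyeball it
restic -r /mnt/backup/repo restore latest \
    --target /tmp/restore-test --include /home/me/important.txt
```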
Beyond that, just have fun. Recently I ran into an issue where my Proxmox server needed some hardware maintenance/changes, and that took my Pi-hole server down, so the whole LAN was out of DNS services. Not the end of the world for me, but a problem anyway, and I’ve been planning a remedy for that but haven’t done anything concrete yet.
The one thing I always forget, no matter how many DNAT setups or whatever I write with iptables.
I changed my Proxmox server from a ZFS raid pool to software RAID with mdadm. Saved me a ton of RAM, and cheap SSDs don’t really like ZFS, so it’s a win-win. And while messing around with the drive setups I also changed the system around a bit. Previously it had only a single SSD with LVM plus 7x4TB drives with ZFS, but as I don’t really need that much storage, it’s now running 3x1TB SSD + 4x4TB HDD, both as software RAID5, so 2TB of fast(ish, they’re still SATA drives) storage and 12TB (or 10.6 in the real world, TB vs TiB) of spinning rust.
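For reference, the mdadm side of a setup like that is roughly this (a sketch; the device names are examples, double-check yours with lsblk before running anything destructive):

```
# 3x 1TB SSD as RAID5 -> ~2TB of usable fast storage
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

# 4x 4TB HDD as RAID5 -> ~12TB of usable bulk storage
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[defg]

# Watch the initial sync and persist the array config
cat /proc/mdstat
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```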
Well enough for my needs, and I finally have enough fast storage for my Immich server to hold all the photos and videos from 20+ years. Took “a while” to copy ~5TB over 1-gig LAN to another system and back, but it’s done now, and the copying didn’t need babysitting in the first place, so not too big of a deal. The biggest unexpected issue was that my 3.5″ HDD hot-swap cradles didn’t have an option to mount 2.5″ drives, so I had to shut down the server and open the case to mount the drives.
And while doing that my Pi-hole was down, so the whole network was without a DNS server. I’d need to either set up another Pi-hole server or just set up some scripts on the router to change the DNS servers offered to DHCP clients while the Pi-hole is down, and shorten the lease time to a few minutes.
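Something like this on the router could do the trick (a rough sketch assuming a dnsmasq-based router; the IPs and paths are made up for illustration):

```
#!/bin/sh
# If the Pi-hole stops answering, hand out the router itself as DNS via DHCP.
PIHOLE=192.168.1.53
FALLBACK=192.168.1.1

if ping -c 2 -W 1 "$PIHOLE" >/dev/null 2>&1; then
    DNS="$PIHOLE"
else
    DNS="$FALLBACK"
fi

# dnsmasq DHCP option 6 = DNS servers offered to clients
echo "dhcp-option=6,$DNS" > /etc/dnsmasq.d/dns-failover.conf
/etc/init.d/dnsmasq restart
```

Run it from cron every few minutes, and with short leases the clients pick up the change fairly quickly.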
You can also set your profile picture to a ‘missing/broken image’ icon.
I personally prefer printed books of our photos. We are missing quite a few years due to life getting in the way, but the end goal is to have actual photo books with titles like ‘Our family in 2018’ and ‘Sports of our firstborn in 2022’. In Europe we have a company called ‘ifolor’ where you can design and order printouts of your photos. They’re not exactly cheap, but the quality is pretty damn good. And their offerings go up to pretty decent-sized photo albums, up to A3 size and 180 pages (which is over 200€). So, not cheap, but at least so far their quality has been worth the money.
And they have cheaper options too, but personally I think it’s worth the money to get the best print quality you can. And even the smallest and cheapest option is far superior to having nothing at all due to a hardware failure or whatever.
I still listen to the Thriller album sometimes, and though it is always freighted with the context of what came later
I’m very much aware of the controversy around MJ, but in the end he was cleared of all the charges. There’s obviously a ton of things which are problematic, to say the least, but in my personal opinion he was a victim of the system too. There are absolutely things to condemn him for, but I don’t think he was a bad person in the end. Just someone who really needed help which wasn’t there. Britney Spears would be a better comparison than Kanye.
And there’s quite a big gap between being a problematic human being who (arguably) created some of the best art around and someone who straight up wants to make a statement of being a bigger nazi than Elon.
After reading the previous discussion, I think you should get more than a single drive to store cold backups. That way you can at least spread out the risk of a single drive failing. 2TB spinning drives are pretty cheap today, and if you have, for example, 4 of them, you can buy one now, write your backups to it, buy another in 6 months, write data to that, and so on.
This way you’ll have drives a year or two apart in purchase date, so it’s pretty unlikely they’ll all fail at once, and each drive gets powered on and checked every other year or so. My personal experience is that spinning drives are pretty stable on the shelf, but I wouldn’t rely on them for decades. And of course, even with multiple drives you’ll still want to replace each of them every 3-5 years. Plus, with multiple drives, if I were building a setup like that, I’d write some scripts or another solution where I can just plug the thing in, double-click an icon on the desktop to refresh the data, and maybe automatically get a notification when the drive I’m using should be replaced.
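A sketch of what that could look like (the label, paths and age threshold are all made-up examples):

```
#!/bin/sh
# Hypothetical cold-backup refresh: mount the drive, sync the data,
# warn if the drive is getting old.
MOUNT=/mnt/coldbackup
SOURCE=/srv/photos/
PURCHASED="2023-01-15"          # written on the drive's sticker
MAX_AGE_DAYS=$((3 * 365))

mount LABEL=coldbackup "$MOUNT" || exit 1
rsync -a --delete "$SOURCE" "$MOUNT/photos/"

AGE_DAYS=$(( ( $(date +%s) - $(date -d "$PURCHASED" +%s) ) / 86400 ))
[ "$AGE_DAYS" -gt "$MAX_AGE_DAYS" ] && \
    echo "WARNING: this drive is ${AGE_DAYS} days old, consider replacing it"

umount "$MOUNT"
```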
And for actual long-term storage, printouts are the way to go. At least around here you can get books made of photo paper with your pictures. That’s one medium which is actually stable over a long period, and using it doesn’t require a lot of technical knowledge or hardware. But I’d still keep digital copies around, as the printouts aren’t resistant to things like house fires or water damage.
Personally I’m running a postfix+dovecot+amavis setup managed by ISPConfig 3 on a Debian VPS from Hetzner. I think having a clean domain name attached to the IP is more important than having a ‘clean’ IP itself, but your mileage may vary.
This seems to be a common point of view for email self hosting.
However, my own experience is a whole different story. Sure, my hosts have been on every spam list imaginable, mostly with Microsoft, but just a week ago I migrated the whole setup to a new VPS and, while there’s still a thing or two left to iron out, the emails are running just fine. The biggest issue was that I forgot to add IPv6 DNS records for the VPS and thus got blocked by Gmail, but they gave a clear error explaining why, and once I fixed the problem it’s been smooth sailing.
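If your server talks IPv6, Gmail expects the forward and reverse DNS to line up for both address families. A quick sanity check (mail.example.com and the addresses are placeholders):

```
# Forward records for the mail host
dig +short A mail.example.com
dig +short AAAA mail.example.com

# Reverse (PTR) records for the addresses the server actually sends from
dig +short -x 203.0.113.10
dig +short -x 2001:db8::10
```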
With the current domains I’ve been running things since 2016 or 2018, even commercially. It’s mostly problem-free and things just work, Microsoft being the biggest ass to work with. For example, last October/November they decided to reject everything from one of my servers, while both their JMRP portal and support claimed there was nothing wrong with our server. It took a couple of days to clear, without any definitive explanation. But beyond that, across various environments since 2009 (I think), it’s been mostly problem-free hosting.
Sure, hosting email for anyone requires at least some understanding of how things should work (both technically and ethically/legally), and the skillset needed is a bit more complex than for hosting a website on the public internet, but it’s still something practically anyone can do if they really want to.
And sure, there’s a ton of stuff you need to get right. And then there are the cases where you miss something, your ‘Contact me’ web form becomes a spammer haven, your servers end up sending a few million viagra ads around the net, and your IP/domain lands on every shitlist there is. It takes some persistence and time to clean that up and learn from the experience, but it’s not the end of the world.
Self-hosting your email is perfectly viable, it can be done regardless of Google/Microsoft, and I highly recommend doing it. Email is one of the last “old” fronts of the net where everything is not centralized to one or a few actors. But you really need to know what you’re doing. Copy’n’pasting commands to set up whatever the latest hot stuff on Docker containers is just isn’t enough.
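At minimum that means getting the authentication records right. Roughly something like this in the zone (example.com, the selector and the truncated key are placeholders):

```
; SPF: only the domain's MX hosts may send mail for it
example.com.                  IN TXT "v=spf1 mx -all"

; DKIM: public key for the signatures your mail server creates
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBg..."

; DMARC: tell receivers what to do when SPF/DKIM fail
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```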
True, but more often than not Mozilla will have newer packages in their repository than any distribution. And the main problem is still that Ubuntu changed apt and threw snap into the mix where it doesn’t belong.
But it’s not obvious either. When I say ‘apt install firefox’, especially after adding their repository to sources.list, I’d expect to get a .deb from Mozilla. Silently overriding my commands rubs me the wrong way.
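On Ubuntu you have to pin Mozilla’s repo explicitly, or apt can still prefer the snap transition package. Roughly what Mozilla’s own install instructions suggest (from memory, double-check against their docs):

```
# /etc/apt/preferences.d/mozilla
Package: *
Pin: origin packages.mozilla.org
Pin-Priority: 1000
```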
If only there was some other alternative than throw my old stuff in the bin.
Edit: I missed the ‘un’ on ‘unsupported’. It’s supposed to be a joke.
I kinda-sorta finalized my migration to a smaller setup for my mail+web server. I’ve been running a small MSP business for several years, and as customers flee left and right, mostly to Microsoft (due to 365 bundle pricing), it’s been in decline for quite a while. So I finally pulled the plug, shut down the business side of things, and downscaled to a single VPS with a handful of domains, email service and a few simple WordPress sites.
Also, I kinda-sorta moved my whole 20+ year photo archive to Immich and set up a backup scheme for it, which is for now (only) 2-1-1. I also need more storage for that thing, but that has to wait a few days until payday, and after that migration I can finish importing all the photos I have lying around. That also requires some reconfiguration of my disk arrays and copying a couple of terabytes from one system to another and back again, which is a relatively easy thing to do, it just takes “a while” to accomplish.
After that there’s a long list of things to do, but in the immediate future I’ll mostly be spending my free time and money on improving the current setup.
True. And there are also a ton of devices around which don’t trust Let’s Encrypt either. There are always edge cases. For example, take a bit older photocopier and it’s more than likely it doesn’t trust anything on this planet anymore, and there’s no easy way to update the CA list even if the hardware itself is still perfectly functional.
That doesn’t mean your self-signed CA, in itself, would be technically any less secure than the most expensive Verisign certificate you can find. And yes, there are a ton of details and nuances here and there, but I’m not going to go through every technical detail of how certificates work. I’m not an expert in that field by any stretch, even if I do know a thing or two, and there’s plenty of material online if you want to dig deep into the topic.
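For completeness, rolling your own CA is just a couple of openssl commands. A bare-bones sketch (no intermediates, no revocation; the file names and subjects are examples):

```
# Create a CA key and a self-signed CA certificate valid for ~10 years
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
    -keyout ca.key -out ca.crt -subj "/CN=My Home Lab CA"

# Issue a server certificate signed by that CA
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=server.lan"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -out server.crt
```

Then you import ca.crt as trusted on your devices, which is the actual hard part.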
I doubt a soccer goal net is enough to trigger the payload of a drone. Depending on how the drone was built it might, but even if it doesn’t, you still have only two guys in a golf cart, which is a pretty easy target even with an AK-47 at over 300 meters.