Hello people,

I am looking for tips on how to make my self-hosted setup as safe as possible.

Some background: I started self-hosting some services about a year ago, using an old Lenovo thin client. It’s plenty powerful for what I’m asking it to do, and it’s not too loud. Hardware-wise, I am not expecting to change things up any time soon.

I am not expecting anyone to take the time to baby me through the process; I will be more than happy with some links to good articles and the like. My main problem is that there’s so much information out there that I just don’t know where to start or what to trust.

Anyways, thank you for reading.

N

  • atzanteol@sh.itjust.works · 10 months ago

    You’re going to get a lot of bad or basic advice with no reasoning (use a firewall) in here… And as you surmised this is a very big topic and you haven’t provided a lot of context about what you intend to do. I don’t have any specific links, but I do have some advice for you:

    First - keep in mind that security is a process, not a thing. 90% of your security will come from being diligent about applying patches, keeping software up-to-date, and paying attention to security news. If you’re not willing to apply regular patches then don’t expose anything to the internet. There are automated systems that simply scan the internet for known vulnerabilities. Self-hosting is NOT “set it and forget it”. Figuring out ways to automate this helps make it easy to do and thus more likely to get done. Check out things like Ansible for that.
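
    To make the “automate your updates” point concrete, here’s a rough sketch of the kind of thing you’d automate (assuming a Debian/Ubuntu-style host with apt and root access; in practice Ansible’s apt module or unattended-upgrades does this job better):

```python
#!/usr/bin/env python3
"""Minimal sketch: check for and apply pending package updates on a
Debian/Ubuntu-style host. Illustrative only - Ansible's apt module or
unattended-upgrades are the real tools for this. Needs root."""
import subprocess

def run(cmd):
    # Run a command, echo it, and fail loudly if it errors out.
    print(f"+ {' '.join(cmd)}")
    return subprocess.run(cmd, check=True, text=True, capture_output=True)

def main():
    run(["apt-get", "update"])
    # Dry-run first so you can log what *would* change before applying it.
    pending = run(["apt-get", "--dry-run", "upgrade"]).stdout
    upgrades = [line for line in pending.splitlines() if line.startswith("Inst ")]
    print(f"{len(upgrades)} package(s) pending upgrade")
    if upgrades:
        run(["apt-get", "-y", "upgrade"])

if __name__ == "__main__":
    main()
```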

    Second is good authentication hygiene. Choose good passwords - better yet, long passphrases. Enable MFA and other additional protections where you can. And BE SURE TO CHANGE ANY DEFAULT PASSWORDS for software you set up. Often there is some default ‘admin’ user.
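
    If you want a cheap way to generate long passphrases, a diceware-style script is only a few lines (this assumes a word list at /usr/share/dict/words, which many distros ship; any long word list will do):

```python
#!/usr/bin/env python3
"""Sketch of a diceware-style passphrase generator using the stdlib
'secrets' module for cryptographically secure randomness."""
import secrets

WORDLIST = "/usr/share/dict/words"  # assumption: adjust to whatever word list you have

def passphrase(n_words: int = 6) -> str:
    with open(WORDLIST) as f:
        words = [w.strip() for w in f]
    # Keep simple lowercase words so the phrase is easy to type and remember.
    words = [w for w in words if w.isalpha() and w.islower()]
    return " ".join(secrets.choice(words) for _ in range(n_words))

if __name__ == "__main__":
    print(passphrase())
```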

    Beyond that your approach is “security in depth” - you take a layered approach to security, understanding what your exposure is and what will happen should one of your services / systems be hacked.

    Examples of security in depth:

    • Proper firewalling will ensure that you don’t accidentally expose services you don’t intend to expose (adds a layer of protection). Sometimes there are services running that you didn’t expect.
    • Use things like “fail2ban” that will add IP addresses to temporary blocklists if they keep trying usernames/passwords that don’t work. This could stop a bot from finding that “admin/password” account on your Nextcloud server that you haven’t changed yet… (see the sketch right after this list for the core idea)
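
    To demystify what fail2ban automates, here’s a stripped-down sketch of that core idea - count failed SSH logins per source IP and block repeat offenders. The log path and the “banned” nftables set are assumptions; fail2ban itself handles timed unbans, many services and all the edge cases, so use it rather than this:

```python
#!/usr/bin/env python3
"""Stripped-down illustration of the fail2ban idea: count failed SSH
logins per source IP in the auth log and block repeat offenders.
Needs root; fail2ban does this properly - prefer it."""
import re
import subprocess
from collections import Counter

LOGFILE = "/var/log/auth.log"   # Debian/Ubuntu location; varies by distro
THRESHOLD = 5                   # failed attempts before we block an IP
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def failed_attempts():
    counts = Counter()
    with open(LOGFILE) as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

def block(ip):
    # Assumes an nftables set named "banned" already exists and an input
    # rule drops traffic whose source address is in that set.
    subprocess.run(["nft", "add", "element", "inet", "filter", "banned",
                    f"{{ {ip} }}"], check=True)

if __name__ == "__main__":
    for ip, hits in failed_attempts().items():
        if hits >= THRESHOLD:
            print(f"blocking {ip} after {hits} failed logins")
            block(ip)
```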

    Minimize your attack surface area. If it doesn’t need to be exposed to the internet then don’t expose it. VPNs can help with the “I want to connect to my home server while I’m away” problem and are easy to set up (Tailscale and WireGuard being two popular options). If your service needs to be “public” to the internet, understand that this is a bigger step and that everything here should be taken more seriously.

    Minimize your exposure. Think through the question of “if a malicious person got this password, what would happen and how would I handle it?” Would they have access to files from other services running on the same server (having separation between services can help with this)? Would they have access to unencrypted files with sensitive data? It’s all theoretical, until it isn’t…

    If you do expose services to the internet, monitor your logs to see if there is anything “unusual” happening. Be prepared to see lots of bots attempting to hack your services. It may be scary at first, but it’s relatively harmless if you’ve followed the above recommendations. “Failed logins” by the thousands are fine. fail2ban can help cut that down a bit though.

    Overall I’d say start small and start “internal” (nothing exposed to the internet). Get through a few update/upgrade cycles to see how things go. And ask questions! Especially about any specific services and how to deploy them securely. Some are more risky than others.

    • Nester@feddit.uk (OP) · 10 months ago

      Wow, thank you so much for taking the time to answer. I really do appreciate it.

      Going off of what you said, I am going to take what I currently have, scale it back, and attempt to get more separation between services.

      Again, thank you!

      • atzanteol@sh.itjust.works · 10 months ago

        Happy to help.

        Going off of what you said, I am going to take what I currently have, scale it back, and attempt to get more separation between services.

        Containerization and virtualization can help with the separation of services - especially in an environment where you can’t throw hardware at the problem. Containers like Docker/Podman and LXD/LXC aren’t “perfect” (isolation-wise) but do provide a layer of isolation between what runs in the container and the host (as well as other services). A compromised service would still need to find a way out of the container (adding a layer of protection). But they all still share the same physical resources and kernel, so a kernel vulnerability potentially affects everything on the host (keep your systems up-to-date). A full VM like VirtualBox or VMware will provide greater separation at the cost of using more resources.
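
        As a concrete illustration of “add a layer, drop what the service doesn’t need”, here’s a sketch using the Docker SDK for Python (pip install docker); the image, port and limits are placeholders, and some images need specific capabilities added back to run at all:

```python
#!/usr/bin/env python3
"""Sketch: start a service container with a few extra layers of isolation.
Requires the Docker SDK for Python (pip install docker). The image, port
and limits below are placeholders - adjust to whatever you actually host."""
import docker

client = docker.from_env()

container = client.containers.run(
    "nginx:stable",                              # placeholder image
    detach=True,
    read_only=True,                              # root filesystem becomes read-only
    tmpfs={"/run": "", "/var/cache/nginx": ""},  # writable scratch space the image needs
    cap_drop=["ALL"],                            # drop Linux capabilities...
    cap_add=["CHOWN", "SETUID", "SETGID", "NET_BIND_SERVICE"],  # ...re-add only what nginx needs
    security_opt=["no-new-privileges:true"],
    mem_limit="256m",                            # cap resource usage
    pids_limit=100,
    ports={"80/tcp": 8080},                      # expose only what you need
    restart_policy={"Name": "unless-stopped"},
)
print(f"started {container.short_id}")
```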

        Docker’s isolation is generally “good enough” for most self-hosted setups though. Your attackers are more likely to be botnets scanning for low-hanging fruit (poorly configured services, known exploits, default admin passwords, etc.) rather than targeted attacks by state-funded hackers anyway.

  • ElusiveClarity@lemmy.world · 10 months ago

    I’m no expert, but for the sake of getting some discussion going:

    Don’t open ports on your router to expose services to the open internet.

    Use a VPN when torrenting and make sure your torrent client is set to only use the VPN’s network adapter. This way, if your VPN drops out, the torrent client can’t reach the internet.

    I keep everything local and use Tailscale to access things while I’m away from home.

    • genie@lemmy.world · 10 months ago

      Couldn’t agree more! Tailscale also lets you use Mullvad (up to 5 devices per Mullvad account, across all clients) as an exit node.

  • TCB13@lemmy.world · 10 months ago

    Your basic requirements are:

    • Some kind of domain / subdomain, paid or free;
    • Preferably a home ISP that provides public IP addresses - no CGNAT BS;
    • Ideally a static IP at home, but you can do just fine with a dynamic DNS service such as https://freedns.afraid.org/.

    Quick setup guide and checklist:

    1. Create your subdomain on the dynamic DNS service https://freedns.afraid.org/ and install the update daemon on the server - it will update your domain with your dynamic IP whenever it changes;
    2. List what ports you need remote access to;
    3. Isolate the server from your main network as much as possible. If possible, have it on a different public IP, either using a VLAN or better yet an entirely separate physical network just for that - this avoids VLAN-hopping attacks and keeps a DDoS against the server from taking your home internet down as well;
    4. If you’re using VLANs then configure your switch properly. Decent switches allow you to restrict the WebUI to a certain VLAN / physical port - this makes sure that if your server is hacked, the attacker won’t be able to reach the switch’s UI and reconfigure their own port to access the entire network. Note that cheap TP-Link switches usually don’t have a way to specify this;
    5. Configure your ISP router to assign a static local IP to the server and port forward what’s supposed to be exposed to the internet to the server;
    6. Only expose required services (nginx, game server, program X) to the internet. Everything else, such as SSH, configuration interfaces and whatnot, can be moved to another private network and/or a WireGuard VPN you can connect to when you want to manage the server;
    7. Use custom ports with 5 digits for everything - something like 23901 (up to 65535) to make your service(s) harder to find;
    8. Disable IPv6? Might be easier than dealing with a dual stack firewall and/or other complexities;
    9. Use nftables / iptables / another firewall and set it to drop everything but those ports you need for services and management VPN access to work - 10 minute guide (there’s also a minimal nftables sketch after this list);
    10. Configure nftables to only allow traffic coming from public IP addresses (IPs outside your home network IP / VPN range) to the Wireguard or required services port - this will protect your server if by some mistake the router starts forwarding more traffic from the internet to the server than it should;
    11. Configure nftables to restrict what countries are allowed to access your server. Most likely you only need to allow incoming connections from your own country (more details here).
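
    For steps 9 and 10, here’s a minimal nftables sketch loaded atomically via “nft -f -”: default-drop on input, WireGuard allowed from anywhere, SSH only from the VPN range. The port number and subnet are placeholders for whatever you actually use:

```python
#!/usr/bin/env python3
"""Sketch for a default-drop nftables policy: only WireGuard is reachable
from the internet and SSH is limited to the VPN range. The port and subnet
are placeholders - adapt them to your own setup. Needs root."""
import subprocess

RULESET = """
flush ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    ct state invalid drop
    iif "lo" accept
    udp dport 51820 accept                      # WireGuard (use your custom port)
    ip saddr 10.66.66.0/24 tcp dport 22 accept  # SSH only from the VPN range
  }
  chain forward {
    type filter hook forward priority 0; policy drop;
  }
}
"""

if __name__ == "__main__":
    # 'nft -f -' replaces the whole ruleset from stdin in one atomic step.
    subprocess.run(["nft", "-f", "-"], input=RULESET, text=True, check=True)
```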

    Realistically speaking, if you’re doing this just for a few friends, why not require them to access the server through a WireGuard VPN? This will reduce the risk a LOT and probably won’t impact performance. Here’s a decent setup guide, and you might use this GUI to add/remove clients easily.

    Don’t be afraid to expose the WireGuard port: if someone tries to connect and doesn’t authenticate with the right key, the server will silently drop the packets.

    Now if your ISP doesn’t provide you with a public IP / port forwarding abilities, you may want to read this in order to find out why you should avoid Cloudflare tunnels and how to set up an alternative / more private solution.

  • MigratingtoLemmy@lemmy.world · 10 months ago

    Are you talking about security for your homelab? It essentially comes down to good key hygiene, network security and keeping everything updated.

    Don’t open ports, use a good firewall at the border of the network, use a seedbox for torrenting. Use ACLs alongside VLANs in your network. Understand DNS in terms of how your requests are forwarded and how they are processed.

    • Big P@feddit.uk · 10 months ago

      What does using a good firewall mean exactly? As I understand it a port is either open or closed right? So what does a good firewall do that a bad one doesn’t?

      • MigratingtoLemmy@lemmy.world · 10 months ago

        Projects like OpenWRT and OPNsense take care to maintain their code and address security issues in firewall/router software that can be exploited. Perhaps “firewall” might not have been the best way to put it, but companies like TP-Link aren’t really the most scrupulous with their software.

  • ikidd@lemmy.world · 10 months ago

    Check out the “Open Source Security Podcast” with Kurt Seifried and Josh Bressers. It’s not about specifics so much as how to build a mindset around security for IoT and hosting, generally dealing with open-source offerings.

  • passepartout@feddit.de · 10 months ago

    It’s true that you shouldn’t open ports to the internet. If you still want your services to be accessible from outside the local network, you can install a WireGuard server on your thin client that has access to the services you want. And if you really want to harden it, you can restrict WireGuard clients from SSH and other admin interfaces.

    You will need to open one port on the router for your WireGuard server though. WireGuard uses UDP and silently drops packets that don’t authenticate, so attackers will not even know there is an open port on your router.

    Edit: Tailscale and ZeroTier are good external solutions to this as well, without needing to open a port at all.

  • naeap@sopuli.xyz · 10 months ago

    I’ve set up WireGuard because it’s only me and an employee using the services. With that, externally I don’t even seem to have a port open. And WireGuard comes online so fast that I’m effectively always connected as soon as I’m online - using a domain and an IP update script.

    • BearOfaTime@lemm.ee · 10 months ago

      Something like WireGuard, Tailscale (which uses WireGuard but provides easier administration), a reverse proxy, or a VPN are the best approaches.

      Since OP doesn’t need anyone else to have access, I’d use Tailscale (or WireGuard if you want a little more effort). Tailscale has a fully self-hosted option with Headscale, though I have no problem with letting them provide discovery.

      With Tailscale, you don’t even need the client on devices to access your Tailscale network if you enable the Funnel feature. This does something similar to a reverse proxy: Tailscale hosts a web-exposed endpoint which then routes traffic (encrypted) to your Tailscale network.

      • naeap@sopuli.xyz · 10 months ago

        Yeah, but then I have a web-exposed service, and I want to keep as low a profile as possible with what I’m exposing. So I guess as long as there aren’t many users to manage, WireGuard (or a Tailscale configuration) could work out for OP.

        • BearOfaTime@lemm.ee · 10 months ago

          As far as I can tell.

          I assume that will eventually change as people start using it. Especially if they run data-intensive stuff over it.

          If I were to want to share video, I’d probably use a VPS as my own Tailscale/WireGuard funnel. Or use some high-performance VPN between the VPS and my media server.

          • ULS@lemmy.ml · 10 months ago

            I saw the feature. Seems a bit over my head… Or at least I’m too lazy to figure it out… Cool it’s there though.

  • Decronym@lemmy.decronym.xyz [bot] · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    CGNAT           Carrier-Grade NAT
    DNS             Domain Name Service/System
    HTTP            Hypertext Transfer Protocol, the Web
    IP              Internet Protocol
    LXC             Linux Containers
    NAT             Network Address Translation
    PiHole          Network-wide ad-blocker (DNS sinkhole)
    SSH             Secure Shell for remote terminal access
    UDP             User Datagram Protocol, for real-time communications
    VPN             Virtual Private Network
    VPS             Virtual Private Server (opposed to shared hosting)
    ZFS             Solaris/Linux filesystem focusing on data integrity
    nginx           Popular HTTP server


  • genie@lemmy.world · 10 months ago

    I’ll assume you mean what I mean when I say I want to be safe with my self hosting – that is, “safe” but also easily accessible enough that my friends/family don’t balk the first time they try to log in or reset their password. There are all kinds of strategies you can use to protect your data, but I’ll cover the few that I find to be reasonable.

    1. Port Forwarding – as someone mentioned already, port forwarding raw internet traffic to a server is probably a bad idea based on the information given. Especially since it isn’t strictly necessary.

    2. Consumer Grade Tunnel Services – I’m sure there are others, but Cloudflare Tunnels can be a safer option for exposing a service to the public internet.

    3. Personal VPN (my pick) – if your number of users is small, it may be easiest to set up a private VPN. This has the added benefit of making things like PiHole available to all of your devices wherever you go. Popular options include Tailscale (easiest, but relies on trusting Tailscale) or Wireguard/OpenVPN (bare bones with excellent documentation). I think there are similar options to tailscale through NordVPN (and probably others), where it “magically” handles connecting your devices but then you face a ~5 device limit.

    With Wireguard or OpenVPN you may ask: “How do I do that without opening a port? You just said that was a bad idea!” Well, the best way that I have come up with is to use a VPS (providers include DigitalOcean and Linode, to name a few) where you typically get a public IP address for free (as in free beer). You still have a public port open in your virtual private network, but it’s an acceptable risk (in my mind, for my threat model) given it’s on a machine that you don’t own or care about. You can wipe that VPS machine any time you want; the only cost is time.

    It’s all a trade-off. You can go to much further lengths than I’ve described here to be “safer” but this is the threshold that I’ve found to be easy and Good Enough for Me™.

    If I were starting over I would start with Tailscale and work up from there. There are many many good options and only you can decide which one is best for your situation!

    • atzanteol@sh.itjust.works · 10 months ago

      Port Forwarding – as someone mentioned already, port forwarding raw internet traffic to a server is probably a bad idea based on the information given. Especially since it isn’t strictly necessary.

      I don’t mean to take issue with you specifically, but I see this stated in this community a lot.

      For newbies I can agree with the sentiment “generally” - but this community seems to have gotten into some weird cargo-cult style thinking about this. Port forwarding is not a universally bad idea. It’s a bad idea to expose a service when you haven’t taken any security precautions, or on a system that is not being maintained. But exposing a WireGuard service on a system which you keep up-to-date is not inherently a bad thing. Bonus points if the VPN is all it does and it has restricted local accounts.

      In fact, of all the services homegamers talk about running in their homelab, WireGuard is one of the safest to expose to the internet. It has no “well-known port” so it’s difficult to scan for. It uses UDP, which is also difficult to scan for. It has great community support, so there will be security patches. It’s very difficult to configure in an insecure way (I can’t even think of how one could). And it requires public/private key auth rather than allowing user-generated passwords. It doesn’t even allow you to pick insecure encryption algorithms like other VPNs do. It’s a great choice for a home VPN.

      • genie@lemmy.world · 10 months ago

        You make a great point. I really shouldn’t contribute to the boogeyman-ification of port forwarding.

        I certainly agree there is nothing inherently wrong or dangerous with port forwarding in and of itself. It’s like saying a hammer is bad. Not true in the slightest! A newbie swinging it around like there’s no tomorrow might smack their fingers a few times, but that’s no fault of the hammer :)

        Port forwarding is a tool, and is great/necessary for many jobs. For my use case I love that Wireguard offers a great alternative that: completes my goal, forces the use of keys, and makes it easy to do so.

        • atzanteol@sh.itjust.works · 10 months ago

          Glad you didn’t take my comment as being “aggressive” since it certainly wasn’t meant to be. :-)

          Wireguard is a game-changer to me. Any other VPN I’ve tried to set up makes the user make too many decisions that require a fair amount of knowledge. Just by making good decisions on your behalf and simplifying the configuration, they’ve done a great job of helping to secure the internet. An often overlooked piece of security is that “making it easier to do something the right way is good for security.”

          • genie@lemmy.world · 10 months ago

            Right!! Just like anything there’s a trade-off.

            Glad you phrased the well-intentioned (and fair) critique in a kind way! I love it when there’s good discourse around these topics

    • TCB13@lemmy.world · 10 months ago

      With Wireguard or OpenVPN you may ask: “How do I do that without opening a port? You just said that was a bad idea!”

      There’s a BIG difference here. Don’t be afraid to expose the WireGuard port: if someone tries to connect and doesn’t authenticate with the right key, the server will silently drop the packets.

      We require authentication in the first handshake message sent because it does not require allocating any state on the server for potentially unauthentic messages. In fact, the server does not even respond at all to an unauthorized client; it is silent and invisible. The handshake avoids a denial of service vulnerability created by allowing any state to be created in response to packets that have not yet been authenticated. https://www.wireguard.com/protocol/

      OpenVPN is very noisy and you’ll know if someone is running it on a specific port, while with WireGuard you’ll have no way to tell it’s running.
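
      You can see that “silent and invisible” behaviour with a trivial UDP probe - from the outside, a WireGuard port that drops your unauthenticated packet looks the same as a firewall silently dropping everything (the host and port below are placeholders):

```python
#!/usr/bin/env python3
"""Tiny probe illustrating why you can't tell WireGuard is running:
unauthenticated packets get no reply. Host and port are placeholders."""
import socket

HOST, PORT = "vpn.example.org", 51820       # placeholders

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(b"hello?", (HOST, PORT))        # garbage, not a valid WireGuard handshake
try:
    data, _ = sock.recvfrom(1024)
    print(f"got a reply: {data!r}")         # a chattier service may answer
except socket.timeout:
    print("no reply - indistinguishable from a firewall silently dropping packets")
```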

  • Matej@matejc.com · 10 months ago

    Software:

    • firewall - allow no inbound connections and restrict outbound ones
    • use an immutable OS
    • full disk encryption (keep in mind that in many setups you will need to be physically at the computer after a restart to unlock it)

    Hardware:

    • put it in a trusted datacenter (home hardware is not safe from teenagers or from someone who needs the computer’s electrical socket for a vacuum cleaner)

    • TCB13@lemmy.world · 10 months ago

      use immutable OS

      Just no.

      Immutable distros are all about making things that were easy into complex, “locked down”, “inflexible” bullshit, to justify jobs and paid tech stacks and a soon-to-be-released proprietary solution.

      Security isn’t even a valid argument for immutable distros, because we already had Ansible, containers, ZFS and BTRFS providing all the immutability we needed, but someone decided that it is time to transform proven development techniques in the hopes of eventually selling some orchestration and/or other proprietary repository / platform / BS like Docker / Kubernetes does.

      “Oh but there are truly open-source immutable distros”… true, but this hype is much like Docker, and it will invariably and inevitably lead people down a path that will then require some proprietary solution or dependency somewhere, only because the “new” technology alone doesn’t deliver as others did in the past. As with the CentOS fiasco or Docker, it doesn’t really matter if there are truly open-source and open ecosystems of immutable distributions, because in the end people/companies will pick the proprietary / closed option just because “it’s easier to use” or some other specific thing that is good in the short term and very bad in the long term. This happened with CentOS vs Debian, is currently unfolding with Docker vs LXC/RKT, and will happen with Ubuntu vs Debian for all those who moved from CentOS to Ubuntu.

      We had good examples of immutable distributions and architectures before this new hype. We’ve been using MIPS routers and/or IoT devices that are usually immutable, and there are also reasons why people are moving away from those towards more mutable ARM architectures.

      • Guenther_Amanita@feddit.de · 10 months ago

        Dude… It’s the hundredth time you’ve posted this copypasta.
        Image-based OSs aren’t locked down and also don’t depend on proprietary services.

        You can just read my post I made about immutable systems, maybe we can discuss it there.

        But I wouldn’t choose an image-based OS for servers right now either. At least not yet.
        I’m just worried about compatibility, because many installers and services might still rely on access to the root file system. Debian is currently the best choice as a server OS, but that might change in the future.

        • TCB13@lemmy.world · 10 months ago

          Image-based OSs aren’t locked down and also don’t depend on proprietary services.

          I’m sure we’ve been over this. It’s just a question of time until those solutions become unmanageable at scale and for the more professional users, and then a magic proprietary solution that fixes it all will appear. Exactly the same thing that happened with Docker/DockerHub/Kubernetes.

          I’m just afraid about compatibility, because many installers and services might rely on access to the root file system for now. Debian is right now the best choice as server OS, but that might change in the future.

          Use BTRFS/ZFS snapshots to roll back if anything breaks. Either way, you can use LXD/LXC containers to run your stuff; they are easy to set up and will resolve the root filesystem issue.

  • Atemu@lemmy.ml · 10 months ago

    One “hammer” mitigation for most threats you could conceivably face when self-hosting is to never expose your services to the internet in the first place (enforced with a firewall). “Securing” your services against a small circle of guests/friends/family members in your home network is a lot simpler than securing them against the entire world.
    If you need to access your services remotely, there are ways to achieve that without permanently opening a single port to the internet such as Tailscale or ZeroTier.

    Otherwise, commonly used tools in self-hosting such as Docker or VMs usually offer quite decent separation even if a service is compromised.

    Nothing replaces good security hygiene though. Keep your stuff up-to-date. Use secure methods of authentication such as hard-to-guess passwords or better. Make frequent backups (3-2-1). The usual.