14 years of self-hosting

My IT rack in my basement

Self-hosting is the practice of running your own server to host your own Internet services, especially email. As of this writing, I have been self-hosting for 14 years, and I figured it could be useful to document my current setup and what I have learned along the way.

Nowadays I value a stable, boring system and my current choices reflect that. When hosting critical services that others depend on (e.g. email for your significant other), reliability is paramount.

Things I learned

In no particular order...

  • Have good backups: emails, tax receipts or family pictures should not be one disaster away from oblivion.
  • RAID is not a backup: it helps reliability in the face of disk failure, but will not save you from fat fingers or natural disasters.
  • Linux software RAID-1 is good: easy and reliable.
  • SMART is useful after all: most of my disk failures were announced by SMART pre-fail warnings (see the snippet after this list).
  • Reliable email delivery is hard (and Gmail is the worst offender).
  • Keep it simple: complexity kills you. As an engineer I sometimes have to restrain myself from over-engineering.
  • Stick to the defaults: too much customization will make upgrading or reinstalling more complex.
  • Be conservative: obscure hardware and software tend to be buggy, and you do not have to maintain a service you do not run.
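
On the SMART point, here is a quick sketch of what that monitoring looks like in practice with smartmontools (the device name and email address below are placeholders, not my actual configuration):

  # one-off health check of a disk
  smartctl -H /dev/sda    # overall health self-assessment
  smartctl -A /dev/sda    # attributes: watch Reallocated_Sector_Ct,
                          # Current_Pending_Sector and other pre-fail values

  # or let smartd watch continuously and email you on trouble, in /etc/smartd.conf:
  # DEVICESCAN -a -m admin@example.com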

Hardware

You do not need the latest, most powerful hardware to self-host: any 10-year-old PC should do. It depends on what you plan to run, but for email and a low-traffic website that is more than enough.

Some recommendations:

  • memory (RAM): at least 2GB is preferable - PHP apps such as NextCloud can consume quite a lot of memory
  • disks (HDD): 2 or more for RAID-1/5/6 (no RAID-0). Use consumer-grade models, but try to have at least 2 entirely different models in case a manufacturer ships a bad series, and stick to CMR (no SMR)
  • platforms: use x86 Intel or AMD. Exotic hardware such as ARM SBCs or VIA processors can be fun but tend to either be ill-supported (proprietary drivers preventing kernel upgrades) or break in unexpected and subtle ways (I've been bitten by DMA heisenbugs on VIA). They also tend to be less versatile/upgradeable.

Here is the hardware I use:

  • Motherboard: Intel D201GLY2
  • CPU: Intel Celeron 220 (Conroe @1.20 GHz, integrated on the motherboard)
  • Memory: 1x 2GB DDR2 Samsung M378T5663QZ3-CF7
  • Disks: 2x 2TB SATA HDD (currently TOSHIBA HDWD120 and WDC WD20EZRZ-00Z)
  • Network: 1Gbps Intel 82541PI

A fairly old and slow setup by today's standards, but it has been humming happily for more than a decade.

Software

I use Debian stable, which provides a good balance between stability and usability, with 3 partitions: system (Linux), data (home directories etc.) and swap. Having a separate data partition is useful when you want to reinstall your system, but anything more granular usually complicates things (as in "crap! my /boot partition is too small to upgrade my kernel").

Software RAID-1

Linux ext4 and software RAID-1 are used for the entire system. The main benefit is of course reliability: if a disk fails, the system keeps running unaffected. It also makes HDD upgrades a lot easier: when a disk needs to be replaced, I can shut down the system, swap the old and new disks, power it back on and ask Linux to rebuild the array in the background - everything is done in a few minutes.

To achieve this:

  1. make a single primary partition (DOS/MBR partition table) on each disk
  2. create an md RAID-1 array using the 2 partitions (1 on each disk)
  3. partition and install Linux on the exposed md device (e.g. /dev/md0)
  4. install the GRUB bootloader in the MBR of both disks: the MBR is not covered by the RAID-1 array, so either disk can boot. If the 1st disk fails, the 2nd disk will boot transparently

No LVM: it just adds complexity for no real gain. Disabling the md badblocks list is also a good idea, although not strictly necessary.
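
The Debian installer can handle most of this for you, but for reference here is a minimal sketch of the manual equivalent (assuming two blank disks /dev/sda and /dev/sdb; device names are examples):

  # 1. one primary partition per disk, MBR/DOS label, flagged as RAID
  parted --script /dev/sda mklabel msdos mkpart primary 1MiB 100% set 1 raid on
  parted --script /dev/sdb mklabel msdos mkpart primary 1MiB 100% set 1 raid on

  # 2. assemble the two partitions into a RAID-1 array
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # 3. partition /dev/md0 (system, swap, data), install Debian on it and make
  #    sure the array is recorded so it is assembled at boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

  # 4. install GRUB in the MBR of both disks so either one can boot alone
  grub-install /dev/sda
  grub-install /dev/sdb

  # optional: drop the badblocks list (array must be stopped and re-assembled)
  mdadm --assemble /dev/md0 --update=no-bbl /dev/sda1 /dev/sdb1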

Because a good picture is better than a thousand words:

sda
 |
 `-- sda1 --\
            |
sdb         |
 |          |
 `-- sdb1 --+
            |
            `-- md0
                 |
                 +-- md0p1  system  50G
                 |
                 +-- md0p2  swap    1G
                 |
                 `-- md0p3  data    1.8T
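
And when a disk does need to be swapped, the replacement procedure described above is only a handful of commands (a sketch, assuming /dev/sdb is the disk being replaced):

  # check array health at any time
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # mark the old disk as failed and remove it from the array, then shut down
  # and physically swap the disks
  mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

  # after rebooting with the new disk: copy the partition table from the
  # surviving disk, re-add the partition and let the rebuild run in background
  sfdisk --dump /dev/sda | sfdisk /dev/sdb
  mdadm --manage /dev/md0 --add /dev/sdb1
  grub-install /dev/sdb    # make the new disk bootable too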

Internet services

Here are the public-facing services I run:

  • email (the killer app of self-hosting)
  • www (where you can read me mumbling about how Gmail is detrimental to the email ecosystem)
  • NextCloud (to share your files between computers, phones and people)

In the last few years, I settled on using Yunohost instead of configuring those myself. Yunohost is a derivative of Debian specialized for self-hosting, and they do a great job at packaging, configuring and maintaining the various services with good defaults (including Let's Encrypt certificate management etc.).

The downside is that when you delegate the configuration of your server to some automated system, it is better to avoid customization beyond what the tools allow. Doing otherwise just makes upgrades painful.

To solve this, I run Yunohost in a separate LXC container. The added bonus is that I can run it as an unprivileged container and isolate it from my private network (think DMZ), which is always nice for Internet-facing services.
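
A rough sketch of that container setup with the plain LXC tools shipped by Debian (the container name ynh, the Debian release and the br-dmz bridge are assumptions; unprivileged operation also needs uid/gid mappings, omitted here, and Yunohost itself is then installed with its own script inside the container):

  # create a Debian container that will host Yunohost
  lxc-create --name ynh --template download -- \
      --dist debian --release bookworm --arch amd64

  # attach it to a dedicated bridge, isolated from the private LAN (DMZ-style),
  # by editing /var/lib/lxc/ynh/config:
  #   lxc.net.0.type = veth
  #   lxc.net.0.link = br-dmz
  #   lxc.idmap = ...          # uid/gid mappings for unprivileged mode

  lxc-start --name ynh
  lxc-attach --name ynh        # then run the Yunohost install script inside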

Internal services

I am also running private services, to be accessed from my local network only:

  • DNS
  • Samba (Windows shares)
  • Backups (Borg)
  • Misc. (smartmontools, logwatch...)

They run on the host itself, not in the Yunohost container. The philosophy is: the host is managed by me and the Yunohost container by Yunohost tools.

Coping with downtime

When self-hosting, downtime is inevitable: you will face hardware issues, your ISP will screw up, or maybe you will just move to a new place. Those interruptions can range from a few minutes to weeks, and you must be ready to cope with that.

In my case, the only critical service is email, and fortunately email is designed to be robust against downtime of a few hours - in other words, most outages are transparent. When the interruption extends over several days, however, it becomes an issue.

To mitigate that, I have configured the exact same email addresses I am hosting at an external email provider and declared it as a secondary MX for my domain. If my server is not available, the sender will fall back to the secondary MX (the external provider), which will accept the email. I can then read and send my emails using the external service (through webmail etc.) while my server is unavailable.
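
Concretely, this is just a matter of publishing two MX records with different priorities (example.com and the host names below are placeholders, not my actual zone):

  ; primary MX: my own server; secondary MX: the external provider
  example.com.   IN MX 10 mail.example.com.
  example.com.   IN MX 20 mx.provider.example.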

Fetchmail is used to transparently retrieve emails from the external provider, either once my server is back up or whenever some emails happen to be delivered to the secondary MX.
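
The fetchmail side fits in a few lines of ~/.fetchmailrc. A sketch with a placeholder host and credentials (the protocol, polling interval and so on depend on the provider):

  # poll the external provider over IMAP with TLS, hand the messages to the
  # local MTA, and delete them remotely once fetched
  set daemon 300                      # check every 5 minutes
  poll mx.provider.example protocol imap
       user "me@example.com" password "secret"
       ssl
       smtphost localhost
       no keep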

My external provider used to be Gandi; after more than 14 years of great service, Gandi decided to multiply their pricing by 10, so I now use OVH for my domain, with an MX plan.

About email delivery

The main curse of self-hosting is really reliable email delivery. It is becoming harder and harder as email providers fight spam, and the concentration into a few big providers (Google, Microsoft) does not help, to the point that it is almost impossible to get emails accepted reliably when sending from a personal Internet connection.

<rant>Gmail is the worst offender: even if you do everything by the book (SPF, DMARC, TLS...), it might refuse your email (but at least in that case you get a bounce) or worse, just silently put all your emails in spam. And to add insult to injury, if you follow their process and try their postmaster tools, they will tell you your server does not send enough volume, so there is nothing you can do about it anyway.</rant>

I resorted to using an external email provider to relay my outgoing emails (OVH again), and could not have been happier since. Do not forget to update your SPF record to authorize the relay server to send emails on your behalf, though. Of course this means delegating delivery to a 3rd party, but you must already trust the receiving server anyway, and you still receive emails destined to your server directly.
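
For illustration, the resulting SPF record looks something like this (example.com and the include: target are placeholders; use the exact value from your relay provider's documentation):

  ; authorize my own MX plus the provider's relay to send mail for the domain
  example.com.   IN TXT "v=spf1 mx include:_spf.provider.example ~all"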

As a side note, I prefer to use OVH rather than my ISP's relay because OVH is more reliable in my experience, and it also means my configuration is independent from my ISP (I can change ISP without reconfiguring my relay).

Backups

As I wrote earlier, nothing can replace good backups. You should back up your data and you should make sure your backups work.

I am now using BorgBackup with 1 local backup and 1 remote backup at rsync.net (sometimes referred to as a 3-2-1 backup strategy).
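
For the record, the backup job itself is a small script along these lines (repository locations, the path list and passphrase handling are assumptions; each repository is initialised once beforehand with borg init):

  #!/bin/sh
  # nightly BorgBackup run: one local repository, one remote at rsync.net
  export BORG_PASSPHRASE='...'

  for REPO in /backup/borg me@xxx.rsync.net:borg; do
      # deduplicated, compressed archive named after host and date
      borg create --stats --compression lz4 \
          "$REPO::{hostname}-{now:%Y-%m-%d}" \
          /home /etc /var/mail

      # keep 7 daily, 4 weekly and 6 monthly archives
      borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
  done

  # and verify from time to time that backups actually restore:
  # borg check, or a test borg extract of a few files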

Split DNS

To access my services from my internal network, I use a split-DNS setup: my public DNS is managed by OVH, while the DNS pushed through DHCP on my local network is a private local one. The local private DNS advertises both private and Internet services with local private addresses, while OVH only advertises Internet services (www, MX, SPF, DMARC...) with my public address.

It makes my private network completely independent from my Internet access, including addressing: if the Internet goes down, I can still access my services. If my public address changes (because I moved or changed ISP, which has happened several times since I started self-hosting), all I have to do is update my public DNS records.
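
To make the split concrete, the same name resolves differently depending on which resolver answers (example.com, the addresses and the extra host are purely illustrative):

  ; public zone at OVH: Internet-facing services only, public address
  www.example.com.   IN A  203.0.113.10
  example.com.       IN MX 10 mail.example.com.

  ; private zone on the local DNS (handed out via DHCP): the same names plus
  ; internal-only services, all resolving to private addresses
  www.example.com.   IN A  192.168.1.10
  nas.example.com.   IN A  192.168.1.10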