Rich Gibbs

The Indie Founder's VPS Security 101

vps · security · linux · indie-founder · ubuntu · debian · sysadmin

You shipped the thing. It runs on one Linux box at DigitalOcean or Hetzner or wherever. Customers are starting to show up, and somewhere in the back of your head a little voice is asking: is this thing actually safe?

This guide is for that voice.

It’s written for solo founders and very small teams who are not security professionals but can copy a command into a terminal. The goal is “secure enough that you can sleep” — not “audit-grade fortress.” Those are different jobs, and treating one like the other is how you waste a weekend installing seven intrusion detection tools and shipping nothing for a month.

What “secure enough” looks like for one box

For a single VPS running your SaaS, “secure enough” is a short list:

  1. Nobody can log in as root from the internet.
  2. Logging in requires a key you have, not a password someone could guess.
  3. Only the ports you actually use are open.
  4. The OS gets security patches automatically.
  5. You’d notice if something obviously bad started happening.
  6. If the disk caught fire tomorrow, you could rebuild from a backup before the end of the day.

That’s the whole bar. Everything else is optimization. Hit those six and you’ve already done more than the majority of small-team production servers I’ve seen.

First-day setup

Do these once, when the server is fresh. They take about twenty minutes.

1. Create a non-root user with sudo

Logging in as root is a footgun. One typo and you’ve nuked the box. Make a normal user instead.

# As root, on a fresh server
adduser deploy
usermod -aG sudo deploy

Pick a real password for deploy even though you’ll be using SSH keys — you’ll need it for sudo prompts.

2. Set up SSH keys and disable password login

On your laptop, if you don’t already have a key:

ssh-keygen -t ed25519 -C "you@laptop"

Copy it to the server:

ssh-copy-id deploy@your.server.ip

Now log in as deploy and confirm sudo works:

ssh deploy@your.server.ip
sudo whoami   # should print: root

Once you’re sure key login works, lock down SSH. Edit /etc/ssh/sshd_config (or drop a file in /etc/ssh/sshd_config.d/):

sudo tee /etc/ssh/sshd_config.d/99-hardening.conf >/dev/null <<'EOF'
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
EOF

sudo sshd -t          # test config — must print nothing
sudo systemctl reload ssh

Do not close your existing SSH session yet. Open a second terminal and confirm you can log in fresh. If that works, you’re good. If it doesn’t, you’ve still got the first session to fix things.

3. Turn on the firewall

Ubuntu ships with ufw, which is a friendly wrapper around iptables/nftables. Default-deny inbound, allow only what you need:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose

If you don’t run a web server on this box, drop the 80/443 lines. The rule is simple: open a port only when something on the box actually needs to listen on it.

4. Enable automatic security updates

Most successful attacks are not clever zero-days — they’re known bugs in software you forgot to patch. Let the OS patch itself.

sudo apt update
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # answer "Yes"

Then check /etc/apt/apt.conf.d/50unattended-upgrades and make sure security updates are uncommented. On Ubuntu the default config already covers ${distro_id}:${distro_codename}-security, which is what you want.

For peace of mind, make it tell you when reboots are needed and when to install them:

sudo tee /etc/apt/apt.conf.d/51auto-reboot >/dev/null <<'EOF'
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
EOF

Pick a time when nobody’s using the app. Yes, this means the box reboots itself sometimes. That’s fine. Your app should already survive a reboot — and if it doesn’t, that’s a bigger problem than security.

That’s first-day setup. Non-root sudo user, keys-only SSH, default-deny firewall, automatic patching. You’re now ahead of a lot of production servers.


Worried you missed something on first-day setup?

Run a free QuickCheck on your server →

It’s a read-only scan that flags the boring stuff: SSH still allows passwords, port 22 open to the world, no automatic updates configured, sketchy listening services, and so on. No agent, no signup wall. Here’s a sample report if you’d like to see the format first.


What to actually monitor

You don’t need a SIEM. You need a few things you can eyeball once a week (or get a tiny script to email you about). For a single VPS, this short list catches almost everything that matters.

Failed logins

If somebody is hammering your SSH port, this shows it:

sudo journalctl -u ssh --since "24 hours ago" | grep -i "failed\|invalid"

A handful of attempts per day is internet background noise. Thousands per hour from one IP is worth blocking with ufw, or a sign it's time to install fail2ban so bans happen automatically.
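If you do reach for fail2ban, the packaged defaults need almost no tuning. A minimal jail looks something like this (the retry/ban numbers are a reasonable starting point, not gospel; `backend = systemd` matters on newer systems that don't write a classic auth.log):

```shell
sudo apt install -y fail2ban
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled  = true
backend  = systemd   # read the journal; needed where /var/log/auth.log doesn't exist
maxretry = 5         # failures allowed...
findtime = 10m       # ...within this window...
bantime  = 1h        # ...before the IP is banned this long
EOF
sudo systemctl enable --now fail2ban
sudo fail2ban-client status sshd   # should list the jail and any banned IPs
```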

Listening ports

What’s actually accepting connections on this box? Run this every so often and make sure nothing surprising is there:

sudo ss -tulpn

You’re looking for things bound to 0.0.0.0 or [::], i.e. listening on every interface. Anything bound to 127.0.0.1 is fine; only your box can talk to it. The classic mistake: running a dev database with bind = 0.0.0.0 and no password. Don’t do that.
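If you'd rather automate the eyeballing, a small filter over the ss output does it. The function name here is mine, not a standard tool:

```shell
# flag_world_listeners: filter `ss -tulnH` output down to sockets bound
# to all interfaces (0.0.0.0, [::], or *). Loopback-only lines are dropped.
flag_world_listeners() {
  awk '$5 ~ /^(0\.0\.0\.0|\[::\]|\*):/ {print $1, $5}'
}

# Typical use on the server (root needed to see which process owns what):
#   sudo ss -tulnH | flag_world_listeners
```

Empty output means nothing is world-reachable; every line it prints should be a service you deliberately exposed.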

Disk free

Servers don’t usually die from hackers. They die from full disks at 3 AM.

df -h /
du -sh /var/log /var/lib/docker 2>/dev/null

If / is over 80% full, plan on cleaning it up before it hits 100% and your database refuses to write.
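The threshold check is a few lines of shell if you want it automated. The 80% cutoff and the sourcing path in the comment are just illustrative:

```shell
# disk_alert THRESHOLD: print a warning and return nonzero when the root
# filesystem is at or above THRESHOLD percent full (default 80).
disk_alert() {
  threshold=${1:-80}
  # df -P gives stable POSIX output; field 5 is "use%" on the data line
  used=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')
  if [ "$used" -ge "$threshold" ]; then
    echo "WARNING: / is ${used}% full (threshold ${threshold}%)"
    return 1
  fi
  return 0
}

# Cron it daily; cron mails you any output, so silence means all is well:
#   0 7 * * * . /usr/local/lib/checks.sh && disk_alert 80
```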

Package updates available

Even with unattended-upgrades, it’s worth a manual sanity check now and then:

sudo apt update
apt list --upgradable 2>/dev/null

And: is a reboot pending after a kernel update?

[ -f /var/run/reboot-required ] && cat /var/run/reboot-required

If yes, schedule one. A patched kernel that hasn’t been booted into is just a download.

You can wire any of these into a weekly cron that emails you a one-page digest. Five lines of bash. Don’t overthink it.
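Here is roughly what that digest looks like, pulled together from the commands above. Each section tolerates a missing tool, and the cron line in the comment assumes a working `mail` command (i.e. a configured MTA); swap in whatever notification channel you actually use:

```shell
# weekly_digest: a one-page health summary, meant to be piped into mail
# from cron. Stderr is suppressed per section so a missing tool just
# leaves that section empty instead of breaking the report.
weekly_digest() {
  echo "== Failed SSH logins (7 days) =="
  journalctl -u ssh --since "7 days ago" 2>/dev/null | grep -ci "failed\|invalid"
  echo "== Listening ports =="
  ss -tulnH 2>/dev/null
  echo "== Disk =="
  df -h / 2>/dev/null
  echo "== Pending updates =="
  apt list --upgradable 2>/dev/null
  [ -f /var/run/reboot-required ] && echo "REBOOT REQUIRED"
  return 0
}

# Example cron entry (Monday 08:00; the digest.sh path is hypothetical):
#   0 8 * * 1 . /usr/local/lib/digest.sh; weekly_digest | mail -s "weekly digest" you@example.com
```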

Backups and restore drills

This is the boring section everyone skips. Skip it and you have a hobby project, not a business.

The minimum viable backup setup for a single VPS:

  • Database: nightly dump (pg_dump, mysqldump, or your equivalent), encrypted, sent off-box. To S3, B2, or any object store. Keep at least 7 daily and 4 weekly copies.
  • User-uploaded files: same deal — sync to object storage on a schedule. restic and rclone both work fine.
  • Config: keep it in git. If your nginx.conf lives only on the server, it’s already half-lost.
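The database and uploads bullets fit in one short script. This is a sketch: the database name, bucket, and uploads path are placeholders, and it assumes pg_dump and restic are installed with the restic repo password and S3 credentials already in the environment. It's written to /tmp here for illustration; in practice put it in /etc/cron.daily or a root crontab:

```shell
cat > /tmp/nightly-backup.sh <<'EOF'
#!/bin/sh
set -eu
STAMP=$(date +%F)
DUMP="/var/backups/appdb-$STAMP.sql.gz"
pg_dump appdb | gzip > "$DUMP"                   # nightly database dump
restic -r s3:s3.example.com/my-backups backup \
  "$DUMP" /srv/uploads                           # dump + user files, off-box
restic -r s3:s3.example.com/my-backups forget \
  --keep-daily 7 --keep-weekly 4 --prune         # retention: 7 daily, 4 weekly
rm -f "$DUMP"
EOF
chmod +x /tmp/nightly-backup.sh
# Cron it nightly, e.g.:  30 2 * * * /tmp/nightly-backup.sh
```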

That’s the easy part. Here’s the part people skip:

Actually do a restore. From scratch. On a fresh VPS. Once.

Spin up a new box. Pull last night’s backup. Restore the database. Boot the app. Did it work? How long did it take? What did you forget? (Spoiler: an environment variable, an SSL cert, a cron job, or a system package.)

If you’ve never done this drill, you don’t have backups. You have files you hope will work. There is a meaningful difference, and you really, really don’t want to discover it during an outage.

Re-do the drill at least once a year, or any time you make a big infrastructure change.

Don’t overdo it

There is a tempting path where, in the name of “being thorough,” you install:

  • An intrusion detection system
  • A second intrusion detection system in case the first one misses something
  • A file integrity monitor
  • A custom auditd ruleset you found on a blog
  • An EDR agent
  • A SIEM forwarder

…on a VPS that hosts one Rails app and gets 200 visitors a day.

Don’t. Each of these has a cost: CPU, memory, alert noise you’ll learn to ignore, and your time. For one small box, the basics in this article handle 95% of realistic risk. Adding more tools without tuning them often makes you less secure, because real signals get buried in junk alerts you stop reading.

If your business actually grows into the territory where you need that stuff (regulated data, big customer base, real compliance), you’ll know — and at that point you’ll also have the budget to do it properly. Until then: keep the surface small, keep it patched, and keep watching the four things in the monitoring section.

Common mistakes

The same handful of things bite small-team servers over and over:

  • Port 22 open to the entire internet with password login still enabled. This is the #1 thing scanners look for. Even with a strong password, you’re contributing to the noise. Keys only.
  • Logging in as root. Either directly, or via a sudoers rule that means a single mistake takes the whole box down. Make a real user.
  • Skipping reboots after kernel updates. A patched-but-not-rebooted kernel still runs the old, vulnerable kernel. unattended-upgrades with Automatic-Reboot "true" fixes this for free.
  • IMDSv1 left enabled on AWS. If you’re on EC2/Lightsail, the legacy instance metadata endpoint can be reached by anything that can make an outbound HTTP request from the box — including a bug in your app. Enforce IMDSv2 (HttpTokens=required) so a token is required to read instance credentials.
  • Dev services bound to 0.0.0.0. Postgres, Redis, MongoDB, Elasticsearch, a debug UI, that one Jupyter notebook you spun up “just for a sec” — anything that listens on all interfaces with no auth is a free shell waiting to happen. Bind to 127.0.0.1, or at minimum require a password and put it behind the firewall.
  • No backups, or backups that have never been restored. See previous section. This is the one that ends businesses.
  • Storing secrets in committed .env files. You’ll forget, push to a public repo, and your API keys are now public. Check in a .env.example that lists the keys without values, and keep the real .env in .gitignore.
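That last pattern in a throwaway demo (the /tmp paths and fake keys are just for illustration; your real repo is wherever it is):

```shell
# Keep the real secrets out of git: ignore .env, commit a key-only template.
mkdir -p /tmp/envdemo
printf 'API_KEY=sk_live_secret\nDB_URL=postgres://user:pw@localhost/app\n' > /tmp/envdemo/.env
sed 's/=.*/=/' /tmp/envdemo/.env > /tmp/envdemo/.env.example   # keep keys, drop values
echo ".env" >> /tmp/envdemo/.gitignore
cat /tmp/envdemo/.env.example   # prints: API_KEY= and DB_URL=
# In a real repo you'd now: git add .gitignore .env.example
```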

None of these are exotic. All of them are still everywhere.

Worth a free second opinion?

Even after a careful first-day setup, things drift. A teammate enables password auth “just for a minute.” A new service starts listening on 0.0.0.0. Auto-updates silently break and stop running. The point of a periodic external check is to catch that drift before it matters.

Run a QuickCheck on your VPS → (read-only, no install, takes a few minutes). Or look at a sample report first to see what it covers.

What this is not

This article is a sensible starting checklist for one Linux VPS run by one person or a tiny team. It is not:

  • A replacement for security advice from someone who knows your specific stack and threat model.
  • A compliance program. If you handle health data, payment data, or anything else regulated, you need more than a blog post.
  • A guarantee. Nothing in security is. The goal is to make yourself a much less appealing target than the millions of other servers on the internet that haven’t done any of this.

Do the basics, do them well, then go back to building the actual product. That’s the job.

About Tuck Sentinel

Tuck Sentinel is a small operation focused on practical security checks for indie founders and small teams running production on a VPS. We build QuickCheck, a free read-only scan that highlights the boring-but-important configuration issues most one-person ops teams miss. No agents, no upsell maze — just the things worth fixing.
