Rich Gibbs

Ubuntu/Debian EC2 hardening checklist (2026)

ubuntu · debian · ec2 · hardening · security · devops · sysadmin · aws

You spun up an EC2 instance, pointed a domain at it, and now real traffic — and real bots — can reach it. Most “hardening guides” online are either copy-paste cargo cult from 2014 or vendor whitepapers selling a SIEM. This is the version I actually run on Ubuntu 22.04, Ubuntu 24.04, and Debian 12 boxes, written for solo founders and small teams who don’t have a dedicated security person.

Work through it top to bottom on a fresh box. On an existing box, treat it as a diff: read each section, run the audit command, fix the gap, move on.

Why this checklist

The threats most small EC2 fleets actually get hit by aren’t APTs. They’re:

  • SSH brute force from random botnets
  • Exposed services you forgot were listening (Redis, Postgres, Docker API, an old admin panel)
  • Stolen IAM credentials via SSRF on a misconfigured app reaching the EC2 metadata service
  • An unpatched kernel or library with a known CVE
  • A compromised dependency or container image that opens a reverse shell

Everything below is aimed at those concrete risks. There’s no checklist on earth that makes you “secure” — but a tight baseline closes the cheap, automated attack paths so an attacker has to actually work.

Threat model assumptions

Before any commands, make these explicit:

  • This is a single-tenant Linux server (or small fleet) on AWS EC2.
  • You are the only admin, or there’s a tiny ops team with shared SSH keys.
  • The instance runs a public-facing web app and/or some background workers.
  • You’re not in a regulated environment yet (PCI/HIPAA/SOC 2 controls are not what this checklist gives you).
  • You can tolerate a few minutes of downtime to reboot for kernel updates.

If any of those don’t match, adjust before applying.

1. SSH

SSH is still the single biggest “front door” on a Linux server.

Use keys, not passwords. Disable root login. Limit who can log in.

Edit /etc/ssh/sshd_config (or drop a file in /etc/ssh/sshd_config.d/ on Ubuntu 22.04+):

PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
PermitEmptyPasswords no
X11Forwarding no
MaxAuthTries 3
LoginGraceTime 20
ClientAliveInterval 300
ClientAliveCountMax 2
AllowUsers ubuntu deploy

Replace ubuntu deploy with the actual non-root accounts you use. Then validate and reload:

sudo sshd -t
sudo systemctl reload ssh

Optional but worth it on small boxes:

  • Move SSH off port 22. It doesn’t stop a determined attacker, but it cuts log noise from internet-wide scanners by ~95%. If you do this, update the EC2 security group too.
  • Restrict the SSH security group to your office/VPN IP, your home IP, or a bastion. 0.0.0.0/0 on port 22 is a choice, not a default.
  • Install fail2ban for cheap brute-force throttling:

sudo apt-get update && sudo apt-get install -y fail2ban
sudo systemctl enable --now fail2ban
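fail2ban's packaged defaults for the sshd jail are reasonable; if you want to tune them, overrides go in /etc/fail2ban/jail.local. The values below are illustrative, not the package defaults:

```ini
# /etc/fail2ban/jail.local — minimal sketch; values are illustrative
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Reload with sudo systemctl reload fail2ban and check sudo fail2ban-client status sshd to see current bans.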

Audit:

sudo sshd -T | grep -Ei 'permitrootlogin|passwordauth|pubkeyauth|allowusers|port'

2. Firewall and listeners

The cheapest mistake on EC2 is a service binding to 0.0.0.0 that you thought was on 127.0.0.1. Defense in depth: lock it down at the OS and at the security group.

See what’s actually listening:

sudo ss -tulpn

Anything bound to 0.0.0.0 or :: that isn’t your web server, SSH, or something you explicitly want public is a finding. Common offenders: Redis (6379), Postgres (5432), MySQL (3306), Docker API (2375/2376), Elasticsearch (9200), Memcached (11211), node dev servers.

Bind to localhost in the service config (e.g. bind 127.0.0.1 in /etc/redis/redis.conf, listen_addresses = 'localhost' in postgresql.conf).
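If you want that check scripted, here is a sketch that flags only the well-known ports listed above (adjust the list for your stack; input format matches sudo ss -tulpnH, i.e. with the header suppressed):

```shell
# Print listeners on all interfaces that match well-known internal-service
# ports (Redis, Postgres, MySQL, Docker API, Elasticsearch, Memcached).
flag_risky() {
  awk '$5 ~ /^(0\.0\.0\.0|\[::\]|\*):(6379|5432|3306|2375|2376|9200|11211)$/ {print $1, $5}'
}
```

Pipe sudo ss -tulpnH through flag_risky; empty output is the goal.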

Then layer UFW on top:

sudo apt-get install -y ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose

On the AWS side, the security group is your real perimeter. Rules of thumb:

  • One SG per role (web, db, worker), not one giant SG that allows everything internally.
  • DB and cache SGs accept traffic only from the app SG, never from 0.0.0.0/0.
  • SSH SG limited to known IPs or a bastion/VPN SG.
  • No 0.0.0.0/0 on anything except 80/443 on the public web tier.

Audit:

sudo ss -tulpn | awk '$5 ~ /0\.0\.0\.0|\[::\]/'
sudo ufw status numbered

Cross-check the AWS console / CLI:

aws ec2 describe-security-groups \
  --query 'SecurityGroups[].{Name:GroupName,Ingress:IpPermissions}' \
  --output json

3. OS updates and reboots

Unpatched kernels and OpenSSL/libc libraries are the most boring and most common way servers get owned.

Enable unattended security upgrades:

sudo apt-get install -y unattended-upgrades apt-listchanges
sudo dpkg-reconfigure -plow unattended-upgrades

Check that /etc/apt/apt.conf.d/50unattended-upgrades includes the security pocket, and set Unattended-Upgrade::Automatic-Reboot deliberately. On a single box with a real user, automatic reboots at 3am can be fine; on production-critical workers, prefer notification and a manual reboot window.
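As a sketch, a deliberate reboot policy can live in a local drop-in (the filename and time here are illustrative; files in apt.conf.d are read in lexical order, so a higher-numbered file wins):

```
// /etc/apt/apt.conf.d/52unattended-upgrades-local (hypothetical filename)
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:30";
```

Run sudo unattended-upgrade --dry-run --debug to confirm the merged config does what you expect.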

Patch now:

sudo apt-get update
sudo apt-get -y dist-upgrade
sudo apt-get -y autoremove --purge

Detect a needed reboot:

[ -f /var/run/reboot-required ] && cat /var/run/reboot-required.pkgs

If the kernel was updated, schedule a reboot. Live-patching (Ubuntu Pro / Livepatch) is great if you’re paying for it, but it doesn’t cover everything — you’ll still need occasional reboots.

Audit:

apt list --upgradable 2>/dev/null
uname -r

4. Admin surface

Every account that can sudo is part of your admin surface. Trim it.

getent group sudo
getent group adm
awk -F: '($3 == 0) {print}' /etc/passwd   # any extra UID 0 account is a finding

Rules:

  • One sudo user per real human, no shared logins where avoidable.
  • Service accounts (www-data, postgres) should not have a shell or sudo. Use usermod -s /usr/sbin/nologin <user> if needed. A deploy user that logs in over SSH is fine, but keep it out of the sudo group.
  • Rotate or remove SSH keys when someone leaves the team. ~/.ssh/authorized_keys for every login user is your source of truth — review it.
  • Disable cloud-init’s default password if any (cloud-init shouldn’t set one on official AMIs, but check).
  • If you must allow sudo without a password for automation, scope it to specific commands in /etc/sudoers.d/, not blanket NOPASSWD: ALL.

Audit:

for u in $(awk -F: '$7 ~ /sh$/ {print $1}' /etc/passwd); do
  h=$(getent passwd "$u" | cut -d: -f6)   # handles /root and nonstandard homes
  echo "== $u =="; sudo cat "$h/.ssh/authorized_keys" 2>/dev/null
done

5. EC2 metadata service (IMDSv2)

This one is non-negotiable in 2026. The EC2 instance metadata service hands out IAM role credentials. With IMDSv1 enabled, any server-side request forgery (SSRF) bug in your app can pop those credentials and walk into your AWS account.

First, check what the instance currently accepts. The IMDSv2 token path (run this on the instance):

TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
curl -sH "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id

If that works but the same call without a token also works, you’re still on IMDSv1.

Enforce IMDSv2 only, with a low hop limit (run from your laptop with the AWS CLI):

aws ec2 modify-instance-metadata-options \
  --instance-id i-xxxxxxxxxxxxxxxxx \
  --http-tokens required \
  --http-endpoint enabled \
  --http-put-response-hop-limit 1

hop-limit 1 means a container or proxy can’t trivially relay a request to the metadata service. If you run Docker with bridge networking, you may need 2 — but start at 1, raise only if needed, and never to 64.

Also: the IAM role attached to the instance should be least privilege. “Read this one S3 bucket and write to this one log group” beats AdministratorAccess every time.
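As a sketch of what least privilege looks like in policy form (the bucket and log-group names are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
      "Resource": "arn:aws:logs:*:*:log-group:/example/app*"
    }
  ]
}
```

Attach something shaped like this to the instance role instead of a managed admin policy; an SSRF that steals these credentials gets one bucket and one log group, not your account.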

Audit:

aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,MetadataOptions.HttpTokens,MetadataOptions.HttpPutResponseHopLimit]' \
  --output table

Anything where HttpTokens is not required is a finding.


Mid-article CTA

If you’d rather have someone else go through this list on your servers and hand you back a clear report, that’s exactly what Tuck Sentinel QuickCheck does: a one-shot, read-only audit of a single Linux box with prioritized findings and copy-pasteable fixes. You can see what the output looks like in this sample report before deciding.

Back to the checklist.


6. Logging and time sync

You can’t investigate what you didn’t record, and you can’t correlate logs that disagree on what time it is.

Time sync. Ubuntu 22.04+ and Debian 12 ship systemd-timesyncd by default; chrony is a common replacement. Either is fine, just make sure exactly one of them is running:

timedatectl
# or
chronyc tracking

If you’re on AWS, the Amazon Time Sync Service at 169.254.169.123 is reliable and low-latency. chrony config example:

server 169.254.169.123 prefer iburst minpoll 4 maxpoll 4

Logging. journald is the default. A few sane settings in /etc/systemd/journald.conf:

Storage=persistent
SystemMaxUse=1G
SystemMaxFileSize=128M
ForwardToSyslog=no

Then:

sudo systemctl restart systemd-journald

For anything beyond a single box, ship logs off the instance — CloudWatch Logs, a Loki/Grafana stack, or any hosted log service. The reason isn’t compliance, it’s that the first thing an attacker tries to do is rm /var/log/* and journalctl --rotate --vacuum-time=1s.

Auditd is worth installing if you want a record of which user ran which command:

sudo apt-get install -y auditd
sudo systemctl enable --now auditd

You don’t need elaborate rules to start; the defaults plus shipping /var/log/audit/audit.log off-box is already a huge upgrade.
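If you ship via the CloudWatch agent, collecting that file is one config stanza (the log group name is a placeholder):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/audit/audit.log",
            "log_group_name": "/example/ec2/auditd",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

The instance role then needs logs:CreateLogStream and logs:PutLogEvents on that log group, and nothing more.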

Audit:

journalctl --disk-usage
timedatectl | grep 'System clock synchronized'

7. Backups and restore drills

A backup you’ve never restored is a wish, not a backup.

For a small EC2 setup:

  • Use AWS Backup or scheduled EBS snapshots for the volume(s).
  • For databases, also take logical backups (pg_dump, mysqldump) on a schedule and copy them to S3 with versioning + lifecycle to Glacier.
  • Encrypt at rest (EBS encryption + S3 SSE-KMS). On modern AWS regions/accounts, EBS encryption-by-default should be on — check it.
  • Keep at least one backup copy in a different AWS account or region. Ransomware-style attackers will delete in-region snapshots if they get the chance.
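The logical-backup bullet can be sketched as a small shell function (the database name, bucket, and paths are all placeholders; assumes pg_dump and a configured AWS CLI):

```shell
# Nightly logical-backup sketch: dump, compress, date-stamp, upload.
# appdb, example-backups, and /var/backups are placeholders.
backup_pg() {
  stamp=$(date -u +%Y-%m-%dT%H%M%SZ)
  file="/var/backups/appdb-${stamp}.sql.gz"
  pg_dump appdb | gzip > "$file"
  aws s3 cp "$file" "s3://example-backups/pg/appdb-${stamp}.sql.gz"
}
```

Call it from a script in /etc/cron.daily/ or a systemd timer; S3 versioning plus a lifecycle rule handles retention so the function stays dumb.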

Restore drill — once a quarter, on a throwaway instance:

  1. Pick a recent snapshot/dump.
  2. Spin up a new instance/volume from it.
  3. Verify the app starts and recent data is present.
  4. Time how long it took. That’s your real RTO.

If you’ve never done step 4, you don’t know your RTO; you have a hope.

Audit:

aws ec2 describe-snapshots --owner-ids self \
  --query 'Snapshots[?StartTime>=`2026-01-01`].[SnapshotId,StartTime,VolumeSize,Description]' \
  --output table

8. Docker basics (if applicable)

If you don’t run Docker on the box, skip this. If you do, the most common foot-guns:

  • Don’t expose the Docker daemon over TCP. 2375 unauthenticated is root-on-box for anyone who can reach it. Use the local socket (/var/run/docker.sock) and SSH for remote control.
  • Mind the -p flag. -p 5432:5432 binds to 0.0.0.0 and bypasses UFW on most Docker setups (Docker writes its own iptables rules). If you only need the port locally, use -p 127.0.0.1:5432:5432.
  • Run containers as non-root where possible. USER directive in your Dockerfile, or --user 1000:1000 at runtime.
  • Pin base images to a digest (FROM ubuntu:24.04@sha256:...) for production, and rebuild on a schedule to pick up CVE fixes.
  • Don’t bind-mount the Docker socket into containers unless you fully understand that’s equivalent to giving that container root on the host.
  • Set --read-only and --cap-drop=ALL for containers that don’t need to write to their filesystem or hold extra capabilities; add back only what’s needed.
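Pulling those flags together, a hypothetical locked-down run command might look like this (the container name, port, user ID, and image are placeholders; --read-only and --cap-drop=ALL don't suit every image, so relax them per service):

```shell
# Sketch combining the flags above: loopback-only publish, non-root user,
# immutable root filesystem with tmpfs scratch space, zero capabilities.
run_locked_down() {
  docker run -d --name app \
    -p 127.0.0.1:8080:8080 \
    --user 1000:1000 \
    --read-only --tmpfs /tmp \
    --cap-drop=ALL \
    example/app:1.2.3
}
```

If the image fails to start, add back only the specific capability it complains about (--cap-add=NET_BIND_SERVICE and friends), not the whole set.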

A useful audit one-liner:

docker ps --format '{{.Names}} {{.Ports}}' | grep -E '0\.0\.0\.0|:::'

Anything in that list is reachable from the public internet (modulo the security group). Decide if that’s intentional.

For containerd/k8s setups this barely scratches the surface — but on a single EC2 box running a few containers, those bullets close ~80% of the cheap holes.

What this is not

Be honest with yourself about what a checklist like this does and doesn’t do.

  • It is not a penetration test. Nobody is exploiting your application logic, your auth flows, or your business rules here. A pentest is a different (and more expensive) thing.
  • It is not compliance. SOC 2, HIPAA, PCI, ISO 27001 all require documented policies, evidence collection, access reviews, vendor management, and a lot more. A hardened box is part of that, not a substitute.
  • It is not a guarantee. New CVEs ship every week. Your application code changes. Someone leaks a key on GitHub. Hardening is a continuous practice, not a one-time event.
  • It is not opinionated about your app stack. TLS configuration, WAF rules, secrets management, dependency scanning, CI/CD security — all out of scope here.

What it does do: dramatically reduce the set of “stupid ways your server gets owned by a bot at 3am” and give you a baseline you can re-run on every new instance.

End-article CTA

If you got this far and want to skip the manual audit, that’s exactly what I built Tuck Sentinel QuickCheck for: a single-instance, read-only Linux audit that runs the kind of checks above and produces a prioritized report with concrete fixes — no agent left behind, no ongoing access. Take a look at the sample report to see exactly what you’d get.

Either way: run the checklist. Future-you will thank present-you.

About Tuck Sentinel

Tuck Sentinel is a small, focused security tooling project from indie operator Rich Gibbs. It produces practical, no-nonsense audits and content for solo founders and small teams running their own Linux infrastructure — the kind of work most SOC platforms ignore because the deal size is too small. Start with QuickCheck if you want a one-shot review of a single server.

{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Ubuntu/Debian EC2 hardening checklist (2026)",
  "description": "A practical 2026 hardening checklist for Ubuntu and Debian EC2 instances: SSH, UFW, IMDSv2, updates, logging, backups, and Docker basics.",
  "author": {
    "@type": "Person",
    "name": "Rich Gibbs",
    "url": "https://richgibbs.dev/"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Tuck Sentinel",
    "url": "https://richgibbs.dev/"
  },
  "mainEntityOfPage": {
    "@type": "WebPage",
    "@id": "https://richgibbs.dev/blog/ubuntu-debian-ec2-hardening-checklist-2026/"
  },
  "image": "https://richgibbs.dev/og/ubuntu-debian-ec2-hardening-2026.png",
  "datePublished": "2026-05-10",
  "dateModified": "2026-05-10",
  "keywords": "ubuntu, debian, ec2, hardening, security, devops, sysadmin, aws, imdsv2, ssh, ufw",
  "inLanguage": "en"
}