Tagged in: projects, dns, networking

I work as a security consultant, and I see firsthand that anything hindering usability (even in the name of security) will be bypassed.

With that in mind, how does one secure their own home and devices, while being invisible and silent?


Pi-hole is a well-known all-in-one system that does DNS ad blocking, statistics, and more. But after reading their gravity.sh script, I chose to write a portable alternative; I already wrote about blocking (b)ad servers at the DNS level.

The next step is to publish the DNS server's address in the DHCP leases I manage on my network; see dhcpd.conf(5). Now every device is protected, with no overhead and no overly complicated guides to follow.

option domain-name-servers 192.168.1.1;   # address of your local resolver

§ Bad hosts

§ known-bad

In the same vein as lie-to-me, my blackhole script downloads and compiles a list of known-bad IPs. You then only have to feed it to your favorite firewall (pf) to prevent local devices from reaching those IPs.

# Crontab: rebuild the blacklist, then reload the table
0   0   *   *   *   blackhole -o /etc/badips
0   1   *   *   *   pfctl -t badips -T replace -f /etc/badips

# pf.conf
table <badips> persist file "/etc/badips"
block out log quick to <badips>

§ bruteforcers

Block communication from bad, bad people. (You shouldn’t try to bruteforce my SSH server, BTW.)

table <bruteforce> persist
block drop in quick from <bruteforce>

# Among others
pass in log on $ext_if inet  proto tcp from any to ($ext_if) port ssh \
 flags S/SA keep state \
 (max-src-conn 100, max-src-conn-rate 15/3600, \
 overload <bruteforce> flush global)

BTW, to avoid shooting yourself in the foot, use SSH’s multiplexing capabilities so that you don’t open a new TCP connection every time you reach your server (each new connection counts toward the 15/3600 limit).
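A minimal ~/.ssh/config sketch (the hostname and socket path are placeholders): all sessions to the same host ride over one master connection, so repeated logins don’t eat into max-src-conn-rate.

```
# ~/.ssh/config -- reuse one TCP connection per host
Host myserver.example.org
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

ControlPersist keeps the master alive for a while after the last session closes, so a quick scp right after a login is free.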

# expire bruteforcers after 1 week
@daily  /sbin/pfctl -t bruteforce -T expire 604800

§ sharing bad IPs

Still a work in progress. Probably a mix of ssh(1) and pfctl(8) magic.

The idea is to have multiple nodes each capture a list of (SSH|HTTP|whatever) bruteforcers and send it over the network to the other nodes. Those lists also need to expire: bruteforcers should be purged after one week.

NB: we can’t simply mirror the whole table over the link: when one node expires an entry, the others would still hold it, and they’d send the supposedly-expired host right back to the first node.
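One possible sketch, assuming key-based SSH between nodes and a hypothetical peer named node2: each node pushes its table to its peers, and since -T add only inserts addresses, every node keeps its own expiry clock for each entry.

```
# Push the local bruteforce table to a peer; xargs turns the listing
# into arguments for the remote pfctl. A very large table could exceed
# the argument-length limit, in which case a temporary file would do.
pfctl -t bruteforce -T show | ssh node2 "xargs pfctl -t bruteforce -T add"
```

Run from cron on each node, this gives eventual convergence without the expiry echo problem above, since expiry stays a purely local decision.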

§ data protection

Protecting against data loss, in the most user-friendly way. The key word for avoiding data loss is redundancy: backups, synced devices, snapshots sent over the network…

§ local “cloud”

A NAS with raidz is a good thing to have, but it’s even better if people not familiar with a shell can use it. Deploy some kind of “cloud” on that bad boy (e.g. Nextcloud) and now you can tell your friends to also keep a copy of their Ph.D. thesis on there: you know, just in case Windows decides to wipe their data (2018-10), or they ever run an untrustworthy piece of code on their machine (2015-01). Or their own hardware decides to go full 切腹 on them.

§ syncing things

I already wrote about using syncthing to sync multiple machines. The startup guide is so simple that even your (grand-)parents could do it. It will work with an arbitrary number of machines, as long as any one machine can send its changes to at least one other in the network.

There are even steps to add syncthing as a daemon to your standard Windows machine so that grandma doesn’t need to worry, and the service just keeps running.

§ snapshots

Syncing data is great, but if one mirror goes FUBAR, data corruption will likely spread. Keeping snapshots of the data in their respective locations sounds sane.

On my Linux systems, I use my own butter snapshot management tool; on FreeBSD, I use zfsnap. The concepts are similar: keep doing snapshots, and cleanup the old/stale ones.

With zfsnap(8), we issue snapshot commands at regular intervals, and one cleanup command at another regular interval:

# Keep data around for N days
@daily  /usr/local/sbin/zfsnap snapshot -r -a 15d zroot/var/data/documents
@daily  /usr/local/sbin/zfsnap snapshot -r -a 15d zroot/var/data/images
@daily  /usr/local/sbin/zfsnap snapshot -r -a 7d zroot/var/data/music

# Daily cleanup
0   2   *   *   *   /usr/local/sbin/zfsnap destroy -r zroot

With butter(8), we set up a retention strategy and keep firing it:

# butter add /,/home hourly
# butter set /home snapshot.hourly.max 24
# while :; do butter snapshot ALL type hourly; sleep 3600; done # better use cron

After a while, we get:

% butter snaplist
MOUNTPOINT      TYPE    DATE                           UUID
/home           hourly  Tue Oct 30 08:00:07 CET 2018   c4ea6b95-fdfe-4efb-9520-6005321d0502
/home           hourly  Tue Oct 30 13:00:07 CET 2018   30914b0e-b836-4210-8638-ecf2fc939956
/home           hourly  Tue Oct 30 13:11:37 CET 2018   801fabae-a3b3-4776-89f0-db4bbc4fa0b1
/home           hourly  Tue Oct 30 14:00:11 CET 2018   b02dd7d4-1ab5-4fec-b195-75d2e3c04962
/home           hourly  Tue Oct 30 15:00:11 CET 2018   9153d2b1-9621-4e93-a883-0936db5f7460
/home           hourly  Tue Oct 30 16:00:11 CET 2018   14c708e5-c0af-4a22-81a4-c0e2c582f5d0
/home           hourly  Tue Oct 30 17:00:01 CET 2018   e440540a-c958-4f0f-8e14-1bb8ee102b2e
/home           hourly  Tue Oct 30 18:00:11 CET 2018   e95ca9c6-5280-4796-a1ac-120db0bb5ec1
/home           hourly  Tue Oct 30 19:00:11 CET 2018   70ff689d-04de-4e42-8217-75456322eab2
/home           hourly  Tue Oct 30 20:00:05 CET 2018   e164d32e-a359-43d6-aa87-c41b36a0825f
/home           hourly  Tue Oct 30 21:00:10 CET 2018   682699a9-5330-473a-b81a-8bce531f60a6
/home           hourly  Tue Oct 30 22:00:11 CET 2018   62afd621-3360-4e23-9948-304c83ba89a8
/home           hourly  Tue Oct 30 23:00:01 CET 2018   03747215-6bff-4dfa-8507-3656764fe9d7
/home           hourly  Wed Oct 31 00:00:11 CET 2018   00215616-5159-48e6-8527-a5fdb50ef46b
/home           hourly  Wed Oct 31 01:00:11 CET 2018   6148ed7f-8532-4893-96f0-bbf31330b582
/home           hourly  Wed Oct 31 02:00:11 CET 2018   a50ff8fb-d170-460e-afb4-2ea5024202d4
/home           hourly  Wed Oct 31 03:00:11 CET 2018   52af7d8c-ed32-4124-abb6-d452dbedd5bf
/home           hourly  Wed Oct 31 04:00:11 CET 2018   af4c43b9-e361-4afe-a6fc-9f70cf6746d8
/home           hourly  Wed Oct 31 05:00:11 CET 2018   29cda7fa-0bd3-4b34-8c4a-9c6da3035d8b
/home           hourly  Wed Oct 31 06:00:11 CET 2018   c568c29e-7ec7-4a69-9943-b436966d9901
/home           hourly  Wed Oct 31 07:00:11 CET 2018   8871b0ad-ed58-4486-94f0-436cdbcfbb35
/home           hourly  Wed Oct 31 08:00:11 CET 2018   0e76538b-c051-4d1c-86f5-9b87b00f4a6c
/home           hourly  Thu Nov  1 08:42:55 CET 2018   56f750fb-934b-4729-bca7-e8b562accde2
/home           hourly  Thu Nov  1 09:00:10 CET 2018   e3563dd0-2649-4860-8799-c3c4a9a1fd8c
/               hourly  Wed Oct 31 06:00:10 CET 2018   99e432a5-2635-4afe-8c11-c19b5cb6196a
/               hourly  Wed Oct 31 07:00:10 CET 2018   9b24e291-e790-4538-89f8-2d71bdadba5c
/               hourly  Wed Oct 31 08:00:10 CET 2018   cfe01f7d-5ecc-4964-bb9c-0e93e4318f3a
/               hourly  Thu Nov  1 08:42:48 CET 2018   5d6b9e0f-858b-4cc3-b832-36acb185c51e
/               hourly  Thu Nov  1 09:00:09 CET 2018   641ea743-87dd-4a88-a2a4-738ddaaa14c3

The TYPE is completely arbitrary, and defaults to default when unset. Nothing prevents your weekly snapshots from being named daily or whatever.
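For instance, reusing the same sub-commands shown above (a sketch — the “weekly” label and the per-type max knob are just how I’d apply the pattern, not a full reference):

```
# A "weekly" retention class on the same filesystem; the label is
# purely cosmetic -- only the per-type max drives cleanup.
# butter add /home weekly
# butter set /home snapshot.weekly.max 8
# butter snapshot /home type weekly
```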

Anyway, that’s it. I’ll write about butter(8) at length, but that’s for another time.