Commit 77a79ca
Patrick M committed: add posts
1 parent ff4a40c commit 77a79ca

5 files changed: +275 -0 lines changed

Lines changed: 223 additions & 0 deletions

---
layout: post
title: 'Pi-hole setup with High Availability'
date: 2022-12-18 01:00:00 -0500
category: 'Service Setup'
tags: ['pihole', 'setup', 'high availability', 'dns', 'spam']
---

This is a step-by-step guide to setting up Pi-hole in a high availability environment. Previously, I was using a lone Raspberry Pi 3B to run Pi-hole. The issue with this setup was that if that Pi went down, DNS was down on my network, which is definitely unacceptable. So let's make it better!

<!--more-->

## Prerequisites

Since I am running this in a Proxmox LXC, I need to install curl and rsync. A more typical Debian or Ubuntu install should already have these utilities installed.

```bash
sudo apt update && sudo apt upgrade -y && sudo apt install curl rsync -y
```

Once curl is installed, I can continue with the install.

## Installing Pi-hole

I prefer to run Pi-hole natively as an application, rather than in Docker. To do this I typically follow their [official install documentation](https://docs.pi-hole.net/main/basic-install/). Basically though, it boils down to running this command <u>with sudo</u>.

```bash
sudo curl -sSL https://install.pi-hole.net | bash
```

When you run the install command, a text-based setup wizard will appear and guide you through the install process. Remember that you will need a static IP to correctly host Pi-hole, so either set one in your environment or use a static DHCP reservation. I use the default settings for the rest of the install, and once it is complete, I always reset the password for the Pi-hole admin panel using the following command.

```bash
pihole -a -p
```

Lastly, you'll want to add your user to the pihole group so that you can edit configuration files without needing sudo. This will be useful later.

```bash
sudo usermod -a -G pihole <username>
```
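
Note that group changes only take effect on your next login. As a quick check, you can look at the groups of your current session (a small hypothetical snippet, not part of Pi-hole):

```shell
# Check whether the current session is in the pihole group.
# usermod changes only apply after logging out and back in.
in_group() {
  id -nG | tr ' ' '\n' | grep -qx "$1"
}

if in_group pihole; then
  echo "current session is in the pihole group"
else
  echo "not in the pihole group yet (log out and back in)"
fi
```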

## Configuring Pi-hole

Always enable dark mode in <u>Settings</u> > <u>API / Web interface</u> > <u>Web interface settings</u>.

Because my network sets DNS per client, and not just per gateway, each client will make DNS requests directly to my Pi-hole instance. This is better for logging, but it means that Pi-hole needs to be behind a firewall and must permit all origins. This can be configured in <u>Settings</u> > <u>DNS</u> > <u>Interface Settings</u>.

![Pi-hole interface settings](/assets/img/install-pihole-ha-2.png)

I also like to turn on DNSSEC in <u>Settings</u> > <u>DNS</u> > <u>Advanced DNS settings</u>. This adds a little extra assurance on DNS lookups.

![Pi-hole DNSSEC setting](/assets/img/install-pihole-ha-3.png)

The last change I make is to add the hostname I will use for this instance to the authorized hosts array in the web interface's PHP file. I do this so that when I access my instance by the friendly name I have set up in DNS or a reverse proxy, I don't have to remember the /admin suffix. To do this, you will need to edit Pi-hole's index.php file.

```bash
sudo nano /var/www/html/pihole/index.php
```

In this file, I edit the authorizedHosts array.

```php
$authorizedHosts = [ "localhost", "pihole.local" ];
```

> index.php is likely overwritten whenever Pi-hole is updated, and these changes will need to be reapplied
{: .prompt-warning }

## High Availability with keepalived

To have a high availability cluster, you will need more than one Pi-hole instance running. Once you have them running, you can configure `keepalived` to set up a virtual IP between them using a technology called VRRP. It allows the servers to share a virtual IP, swapping instantly when one of them goes down. Because this is more of a "hot spare" methodology, one node will be primary, and the rest will be secondary. To get started you will need to install two packages.

```bash
sudo apt install keepalived libipset13 -y
```

Once installed, edit the configuration file.

```bash
sudo nano /etc/keepalived/keepalived.conf
```

Here's an example of the configuration file. Let's break it down.

```conf
vrrp_instance pihole {
    state <MASTER|BACKUP>
    interface ens18
    virtual_router_id 30
    priority 150
    advert_int 1
    unicast_src_ip 192.168.1.51
    unicast_peer {
        192.168.1.52
        192.168.1.53
    }

    authentication {
        auth_type PASS
        auth_pass <password>
    }

    virtual_ipaddress {
        192.168.1.50/24
    }
}
```

| Line | Description |
|---|---|
| 1 | The first thing to configure is the instance name. I have it set to `pihole`. |
| 2 | You will need to decide the node's default disposition, whether it is the master node or a backup. Keep in mind, the node's disposition will change as necessary based on other nodes. If another node enters the cluster with a higher priority, it will always become the master node. |
| 3 | The name of the interface that the virtual IP will be bound to. It can be found using `ip a`. |
| 5 | The priority configures which node is the master. The master node will always be the node with the highest priority. |
| 6 | The advertisement interval in seconds. |
| 7 | The node's own IP. |
| 8 | The other nodes' IPs. |

> Never set an IP reservation for the virtual IP, or set it as a static address for another device
{: .prompt-warning }

Also keep in mind, this is set up for unicast, but it can be configured for multicast. I just like to be explicit. You can find more details about [keepalived configuration here](https://keepalived.readthedocs.io/en/latest/configuration_synopsis.html).
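
For reference, a backup node's configuration differs only in its state, priority, and the unicast addresses. A sketch following the example above, with the second node at 192.168.1.52 and an illustrative lower priority:

```conf
vrrp_instance pihole {
    state BACKUP
    interface ens18
    virtual_router_id 30
    priority 100
    advert_int 1
    unicast_src_ip 192.168.1.52
    unicast_peer {
        192.168.1.51
        192.168.1.53
    }

    authentication {
        auth_type PASS
        auth_pass <password>
    }

    virtual_ipaddress {
        192.168.1.50/24
    }
}
```

The `virtual_router_id`, `auth_pass`, and virtual IP must match across all nodes, or they won't form one cluster.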

Once it's configured, restart the service.

```bash
sudo systemctl restart keepalived
```

You can also check on the service with the following command.

```bash
sudo systemctl status keepalived
```

![systemctl status keepalived](/assets/img/install-pihole-ha-1.png)
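
To confirm which node currently holds the virtual IP, you can look for it on the node's interfaces. A quick sketch, using the VIP from the example configuration above (the `vip_status` helper is hypothetical):

```shell
# Report whether this node currently holds the keepalived virtual IP.
vip_status() {
  vip="$1"
  # keepalived adds the VIP as an extra address on the bound interface
  if ip -4 addr 2>/dev/null | grep -q "inet ${vip}/"; then
    echo "holds VIP (master)"
  else
    echo "no VIP (backup, or keepalived is down)"
  fi
}

vip_status "192.168.1.50"
```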

## Configuring Local DNS

I also use Pi-hole as my local DNS service, so I will need to add my local DNS records. This can be done in the web admin panel at <u>Local DNS</u> > <u>DNS Records</u>, but for initial configuration, it is quicker to add records to the custom.list file. This file is for A/AAAA records only.

```bash
sudo nano /etc/pihole/custom.list
```

Records are added as `ip` `hostname`.

```text
192.168.1.5 proxmox.local
192.168.1.50 pihole.local
192.168.1.51 pihole1.local
192.168.1.52 pihole2.local
192.168.1.53 pihole3.local
```
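
A typo in this file is easy to make, so a quick shape check can help before syncing it to other nodes. A minimal sketch (the `valid_record` helper is hypothetical, not part of Pi-hole, and only checks the basic `<ipv4> <hostname>` shape, not octet ranges):

```shell
# Check that a custom.list line has the basic "<ipv4> <hostname>" shape.
valid_record() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3} [A-Za-z0-9._-]+$'
}

valid_record "192.168.1.50 pihole.local" && echo "ok"
valid_record "not-an-ip pihole.local" || echo "rejected"
```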

CNAME records can be edited using the web admin panel at <u>Local DNS</u> > <u>CNAME Records</u>, or manually in a different file in `dnsmasq.d`.

```bash
sudo nano /etc/dnsmasq.d/05-pihole-custom-cname.conf
```

Entries here follow a different format: `cname=<alias>,<a-record>`

```
cname=pihole.ha.local,pihole.local
```
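
If you are adding several aliases, a tiny helper can keep the format consistent (a hypothetical sketch, not part of Pi-hole):

```shell
# Print a dnsmasq CNAME entry in the cname=<alias>,<a-record> format.
cname_entry() {
  printf 'cname=%s,%s\n' "$1" "$2"
}

cname_entry pihole.ha.local pihole.local
# cname=pihole.ha.local,pihole.local
```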

## Synchronizing Local DNS

Now, a critical part of this is that the configuration you set up on your primary node is distributed to the other nodes, so that in the event of a failover your local DNS records still resolve. If you don't use local DNS, or want to keep things synchronized manually, you can skip this next bit. If not, I'll show you how to synchronize these files using `rsync`.

Also, keep in mind there is a premade service out there called [Gravity Sync](https://github.com/vmstan/gravity-sync). There are lots of guides on how to use it, but for simply synchronizing these two files, I prefer to use rsync.

### SSH Keys

To get started, we will need to set up SSH keys for rsync to use on the primary node. You will need to make sure you generate them as the user that will be running rsync. You will also need to create an `.ssh` folder for the keys to go into.

```bash
mkdir ~/.ssh/
ssh-keygen -t rsa -b 4096
```

I use the default file location/name and do not set a passphrase. When you are done, you should see two files.

```bash
ls ~/.ssh
# id_rsa id_rsa.pub
```

Now all you need to do is export the key to the backup nodes. This can be done with `ssh-copy-id`.

```bash
ssh-copy-id -i ~/.ssh/id_rsa <username>@<host>
```

More about ssh-keygen and ssh-copy-id can be found [here](https://www.ssh.com/academy/ssh/keygen) and [here](https://www.ssh.com/academy/ssh/copy-id). Now you can confirm SSH works without a password.

```bash
ssh <username>@<host>
```

### rsync

Now for the last step: add an rsync file to `cron.d` with the rsync commands.

```bash
sudo nano /etc/cron.d/rsync
```

```
* * * * * <primary-node-username> rsync /etc/pihole/custom.list <username>@<host>:/etc/pihole/custom.list
* * * * * <primary-node-username> rsync /etc/dnsmasq.d/05-pihole-custom-cname.conf <username>@<host>:/etc/dnsmasq.d/05-pihole-custom-cname.conf
```

To break this down, `* * * * *` will ensure the command runs every minute. This can be adjusted to your liking. `<primary-node-username>` is the name of the user on the primary node that rsync will run under. This should be the same user that created and copied the keys to the other nodes. `<username>` and `<host>` should be the user and host you configured for SSH in the last step.
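
For example, to run the sync every five minutes instead of every minute, only the schedule field changes (placeholders kept from the entries above):

```
*/5 * * * * <primary-node-username> rsync /etc/pihole/custom.list <username>@<host>:/etc/pihole/custom.list
```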

> You should manually run the rsync commands in a terminal first, to save the host fingerprint and ensure the commands work
{: .prompt-tip }

Congratulations, you should now have a high availability Pi-hole cluster!

_posts/2022-12-18-lxc-set-up.md

Lines changed: 52 additions & 0 deletions

---
layout: post
title: 'LXC: First commands on a new CT'
date: 2022-12-18 00:00:00 -0500
category: 'Service Setup'
tags: ['proxmox', 'lxc']
---

A list of the first commands I run on a new LXC to homogenize and secure my new environment.

<!--more-->

## Utilities

```bash
apt update && apt upgrade -y
```

```bash
apt install curl nano openssl rsync -y
```

## Don't use root

It is critical that you don't use root for SSH or for typical CLI tasks. I always create a new user for that reason.

```bash
useradd -m -g users -G sudo <username>
passwd <username>
chsh -s /bin/bash <username>
```

## SSH Configuration

I always disallow root login over SSH and allow password logins for other users. To do this, edit `/etc/ssh/sshd_config`. You're looking to uncomment and modify the following lines:

```conf
# Authentication:
LoginGraceTime 2m
PermitRootLogin no
StrictModes yes
MaxAuthTries 6
MaxSessions 2

# ...

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication yes
PermitEmptyPasswords no
```

Once you've made the changes, you can restart the LXC and use SSH with your new user.
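
Before restarting, it can be worth validating the edited file, since a syntax error in sshd_config can lock you out of SSH. A small sketch (the `check_sshd_config` helper is hypothetical; `sshd -t` generally needs to run as root to read the host keys):

```shell
# Validate an sshd config file; prints a one-line status.
check_sshd_config() {
  if ! command -v sshd >/dev/null 2>&1; then
    echo "sshd not found on this host"
  elif sshd -t -f "$1" 2>/dev/null; then
    echo "config OK: $1"
  else
    echo "config has errors: $1"
  fi
}

check_sshd_config /etc/ssh/sshd_config
```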

assets/img/install-pihole-ha-1.png (79.2 KB)

assets/img/install-pihole-ha-2.png (35.1 KB)

assets/img/install-pihole-ha-3.png (54.4 KB)
