Now, a critical part of this is that the configuration you set up on your primary node is distributed to the other nodes, so that in the event of a failover your local DNS records still resolve. If you don't use local DNS, or want to keep things synchronized manually, you can skip this bit. If not though, I'll show you how to synchronize these files using [Gravity Sync](https://github.com/vmstan/gravity-sync).

In the past I tried to keep instances synchronized with rsync, but that proved too fragile over time. Gravity Sync does a very robust job and just works.

To install, follow the installation guide in the repo; as an overview, you will need to run the curl command it provides.
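
For reference, here is a sketch of the install one-liner as it appeared in the Gravity Sync README when I set this up; the exact URL may change between versions, so verify it against the repo before running anything.

```bash
# Assumed Gravity Sync install one-liner (check the repo's README for the
# current command and for which node(s) your version wants it run on)
curl -sSL https://gravity.vmstan.com | bash
```
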
The install script will prompt you for the remote machine. For my usage, my auxiliary instances pull their configuration from the primary instance. Once a connection is made, run the pull command.
```bash
gravity-sync pull
```
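
If you want to sanity-check what a sync would do before committing to it, the docs also describe a compare mode. This is an assumption on my part, so confirm the subcommand exists in the version you installed.

```bash
# Assumed: reports differences between the local and remote Pi-hole
# configuration without changing anything
gravity-sync compare
```
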
Then you can configure it to run automatically by running the automate command.
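
A minimal sketch of that step, assuming the automate subcommand behaves as the Gravity Sync docs describe (it schedules the sync as a recurring cron job):

```bash
# Assumed: sets up a recurring sync job via cron; it will prompt for,
# or accept, the frequency you want it to run at
gravity-sync automate
```
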
Automate will use the direction of the last successful connection made, pull or push.
Congratulations, you should now have a high-availability Pi-hole cluster!