`_posts/2021-03-23-outlook-spam-rule.md` (2 additions, 2 deletions)
From here click `Add new rule`. A create rule screen will appear and you can start…
Add the action to `Move to` the `Junk Email` folder, and for an added nicety, add another action to set `Mark as read` too.
Lastly, and most importantly, set the exception. The rules here can be tailored to meet your needs, but for me, `Sender address includes` and then my {@organization.com} did the trick.
## Finally
Now my inbox is much leaner. Occasionally I will check the junk folder just to see if I missed anything, but so far I haven't.
`_posts/2021-06-16-autostart-api-spa-app.md` (1 addition, 1 deletion)
It's also important to declare the script you want Node to run in the scripts section…
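As a quick illustration, a minimal `scripts` section might look like the following; `server.js` is a placeholder here, since the post only names `npm run start`:

```
{
  "scripts": {
    "start": "node server.js"
  }
}
```

With this in place, `npm run start` launches the declared entry point.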
## Humble beginnings
For my first attempt at making my start script, I created a batch file that started two scripts: one new PowerShell window for `dotnet watch run` and another for `npm run start`. This had some hiccups initially. The windows would close if PowerShell stopped running, which was problematic for collecting errors. Also, the command prompt window would linger. After a few iterations, I was able to solve those problems and landed on this:
This is a step-by-step guide to set up Pi-hole in a high availability environment. Previously I was using a lone Raspberry Pi 3B to run Pi-hole. The issue with this setup was, if that Pi went down, DNS was down on my network, which is definitely unacceptable. So let's make it better!
To have a high availability cluster, you will need more than one Pi-hole instance running. Once you have them both running, you can configure `keepalived` to set up a virtual IP between them using a technology called VRRP. It allows both servers to share a virtual IP between them, swapping instantly when one of them goes down. Because this is more of a "hot spare" methodology, one node will be primary, and the rest will be secondary. To get started you will need to install two packages.
```bash
sudo apt install keepalived libipset13 -y
```
| Line | Description |
|---|---|
| 1 | The first thing to configure is the instance name. I have it set to `pihole`. |
| 2 | You will need to decide the node's default disposition, whether it is the master node or a backup. Keep in mind, the node's disposition will change as necessary based on other nodes. If another node enters the cluster with a higher priority, it will always become the master node. |
| 3 | The name of the interface that the virtual IP will be bound to. Can be found using `ip a`. |
| 5 | The priority determines which node is the master. The master node will always be the node with the highest priority. |
| 6 | The advertisement timespan in seconds. |
> Never set an IP reservation for the virtual IP, or set it as a static address for another device
{: .prompt-warning }
Also keep in mind, this is set up for unicast, but can be configured for multicast. I just like to be explicit. You can find more details about [keepalived configuration here](https://keepalived.readthedocs.io/en/latest/configuration_synopsis.html).
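Tying the pieces above together, a unicast `vrrp_instance` sketch for the primary node might look like the following; the interface name, router ID, priorities, and IP addresses here are assumptions for illustration:

```
vrrp_instance pihole {
    state MASTER                 # default disposition; BACKUP on secondary nodes
    interface eth0               # interface the virtual IP binds to (check with `ip a`)
    virtual_router_id 55         # must match on every node in the cluster
    priority 150                 # highest priority becomes master
    advert_int 1                 # advertisement timespan in seconds
    unicast_src_ip 192.168.1.10  # this node's address
    unicast_peer {
        192.168.1.11             # the other Pi-hole node
    }
    virtual_ipaddress {
        192.168.1.5/24           # the shared virtual IP clients point to
    }
}
```

A secondary node would use `state BACKUP` and a lower `priority`, with the unicast addresses swapped.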
Now, a critical part of this is that the configuration you set up on your primary node is distributed to the other nodes, so that in the event of a failover your DNS local records still resolve. If you don't use local DNS, or want to keep things synchronized manually, you can skip this bit. If not though, I'll show you how to synchronize these files using [Gravity Sync](https://github.com/vmstan/gravity-sync).
In the past I tried to keep instances synchronized with rsync, but that proved to be too fragile over time. Gravity Sync does a very robust job and just works.
To install, follow the installation guide in the repo, but as an overview, you will need to run the curl command.
The install script will prompt you for the remote machine. For my usage, my auxiliary instances pull their configuration from the primary instance. Once a connection is made, run the pull command.
```bash
gravity-sync pull
```
```bash
gravity-sync auto
```
Auto will use the last successful connection made, pull or push.
Congratulations, you should now have a high availability Pi-hole cluster!
`_posts/2022-12-18-lxc-plex-setup.md` (6 additions, 6 deletions)
After setting up a new server, I wanted to migrate my Plex install to the more p…
## Provisioning
Previously I was running Plex as a container in Unraid, then as a container on another VM. Both were somewhat problematic for me because Plex is a hog and takes up all the resources of the VM during transcoding. So I wanted to try and install it in an LXC environment instead. To start, I provisioned the environment with 2 CPU cores, 3 GB of RAM, 16 GB of disk space, and a static IP. While this seemed like enough at first, I doubled the CPU core count to 4 as it was running steadily at 98% utilization with only 2. I also had to convert the environment to a privileged container to get CIFS automount to work correctly.
The final provisioned LXC environment is as follows:
## Mounting Media from Network Share
Mounting my media share from a storage device was easy enough, once I realized I had to make the container privileged. I configured `fstab` to automount the share when the environment started, and used a credential file stored in `/root` for security.
> Privileged Container must be set to true to mount a network share
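The actual `fstab` entry isn't shown in this excerpt; a sketch of a CIFS automount line using a credential file might look like the following, where the server address, share name, and ownership options are assumptions:

```
# /etc/fstab — server IP and share name are placeholders
//192.168.1.20/media  /mnt/media  cifs  credentials=/root/.smbcredentials,iocharset=utf8,uid=1000,gid=1000  0  0
```

The credential file referenced here would contain `username=` and `password=` lines and should be readable by root only (`chmod 600 /root/.smbcredentials`).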
Once you've added the configuration, save the file and run
```bash
sudo mount -a
```
You should now see your media mounted under /mnt/media. Restart the environment to make sure it is remounted after a reboot.
## Creating /transcode
```bash
sudo nano /etc/fstab
```

```
tmpfs /mnt/transcode tmpfs rw,size=2G 0 0
```
I set the size to 2 GB, but this can be configured to whatever you like. During heavy transcoding, or when Plex is handling multiple streams, this will certainly fill up. Plex will recycle memory as needed based on memory pressure and available space.
Once Plex finishes installing, you can access it from the static IP configured a…
## Migrating Configuration
As I mentioned in the beginning, I am migrating from an existing Plex environment, and thus I want to move my cache to the new environment rather than recreate it. The benefit of this is that I won't lose all of my custom metadata, nor collections and other settings. To make this move, you will need to find your Library folders and copy the content to the new environment. I used rsync to do this, but you can use WinSCP or any other method you like. I found my Library files in the config folder I mounted for the container I was using. Installing Plex in the LXC node, I found it in `/var/lib/plexmediaserver/`.
I would recommend stopping Plex as a service before you migrate the files.
`_posts/2023-06-16-back-up-rpi-live.md` (6 additions, 4 deletions)
My criteria for success, in order, have been:
1. Automatic
1. Fast
Obviously the most important thing about any backup is that it is accurate and can be successfully restored. My first thought was just pulling the flash drive they boot off of and using Clonezilla to make an image. This would require me to have the discipline and memory to shut down the device regularly and pull the flash drive for backup. I would prefer something a bit more automatic, but I know options like `dd` are not viable running against an active device.
That's when I came across `image-utilities`. It spawned from a Raspberry Pi forum post in 2019, but has been lightly maintained since then [in GitHub](https://github.com/seamusdemora/RonR-RPi-image-utils). It uses a bash script and rsync to make a copy of the running device, and is even able to make incremental backups.
To install it, follow the guide on the GitHub page, but here is a simplified version.
## Scripts Install
> Don't just take my word for it. Always inspect the code that will be running on your machines, especially from an untrusted and unsigned source.
The backup run time will depend on your device and how much data it needs to copy. It is surprisingly fast though; 15 GB usually runs for 2+ minutes on a Raspberry Pi 4B.
> Backup can be pretty large, ~15GB depending on how much you have running on your Pi
{: .prompt-warning }
Once you have a completed backup, you can run an incremental backup by running `image-backup` and providing an existing backup to update.
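To keep the backups automatic, the incremental run can be scheduled from root's crontab; the schedule and image path below are assumptions, and `image-backup` is assumed to be on root's `PATH`:

```
# run an incremental backup nightly at 02:30
30 2 * * * image-backup /mnt/backups/rpi.img
```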