
Commit f264138

Patrick M committed (merge of 2 parents: e64ec2b + 56e09e7)

8 files changed: +26 -24

_config.yml

Lines changed: 1 addition & 1 deletion
@@ -114,7 +114,7 @@ assets:
 env: # [development|production]
 
 pwa:
-  enabled: true # the option for PWA feature
+  enabled: false # the option for PWA feature
 
 paginate: 10

_posts/2021-03-23-outlook-spam-rule.md

Lines changed: 2 additions & 2 deletions
@@ -28,10 +28,10 @@ From here click `Add new rule`. A create rule screen will appear and you can sta
 
 ![Rules](/assets/img/outlook-spam-rules.png)
 
-Add the action to `Move to` the `Junk Email` folder, and for an added nicety, Add anohter action to set `Mark as read` too.
+Add the action to `Move to` the `Junk Email` folder and, for an added nicety, add another action to set `Mark as read` too.
 
 Lastly, and most importantly, set the exception. The rules here can be tailored to meet your needs, but for me, `Sender address includes` and then my {@organization.com} did the trick.
 
 ## Finally
 
-Now my inbox is much leaner. Occasionally I will check the junk folder just to see if I miseed anything, but so far I haven't.
+Now my inbox is much leaner. Occasionally I will check the junk folder just to see if I missed anything, but so far I haven't.

_posts/2021-06-16-autostart-api-spa-app.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ it's also important to declare the script you want node to run in scripts sectio
 
 ## Humble beginnings
 
-For my first attempt at makin my start script, I created a batch file that started two scripts. One new Powershell window for `dotnet watch run` and another for `npm run start`. This had some hiccups initially. The windows would close if the Powershell stopped running, which was problematic for collecting errors. Also the command prompt window would linger. After a few iterations, I was able to solve those problems and landed on this:
+For my first attempt at making my start script, I created a batch file that started two scripts: one new PowerShell window for `dotnet watch run` and another for `npm run start`. This had some hiccups initially. The windows would close if PowerShell stopped running, which was problematic for collecting errors. Also, the command prompt window would linger. After a few iterations, I was able to solve those problems and landed on this:
 
 ```batch
 cd ./src/api
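The committed script is cut off here by the diff context; a minimal sketch of the approach the paragraph describes, with `./src/app` as an assumed SPA path, could look like:

```batch
@echo off
REM Launch the API in its own PowerShell window; -NoExit keeps the window open so errors stay visible
start "api" powershell -NoExit -Command "Set-Location ./src/api; dotnet watch run"
REM Launch the SPA dev server in a second window (./src/app is an assumed path)
start "app" powershell -NoExit -Command "Set-Location ./src/app; npm run start"
REM Exit immediately so the spawning command prompt window doesn't linger
exit
```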

_posts/2022-12-18-install-pihole-ha.md

Lines changed: 8 additions & 8 deletions
@@ -6,7 +6,7 @@ category: 'Service Setup'
 tags: ['pihole', 'setup', 'high availability', 'dns', 'spam']
 ---
 
-This is a step by step guide to set up Pi-hole in a high availabilty environment. Previously I was using a lone Raspberry Pi 3B to run Pi-hole. The issue with this setup was, if that pi went down, DNS was down on my network, which is definitely unacceptable. So let make it better!
+This is a step-by-step guide to setting up Pi-hole in a high availability environment. Previously I was using a lone Raspberry Pi 3B to run Pi-hole. The issue with this setup was that if that Pi went down, DNS was down on my network, which is definitely unacceptable. So let's make it better!
 
 <!--more-->

@@ -66,7 +66,7 @@ url.redirect = ("^/$" => "/admin" )
 
 ## High Availability with keepalived
 
-To have a high availabilty cluster, you will need more than one Pi-hole instance running. Once you have them both running, you can configure `keepalived` to set up a virtual IP between them using a technology called VRRP. It allows both servers to share a virtual IP between them, swapping instantly when one of them goes down. Because this is more of a "hot spare" methodology, one node will be primary, and the rest will be secondary. To get started you will need to install two pacakges.
+To have a high availability cluster, you will need more than one Pi-hole instance running. Once you have them both running, you can configure `keepalived` to set up a virtual IP between them using a technology called VRRP, which lets both servers share a virtual IP and swap instantly when one of them goes down. Because this is more of a "hot spare" methodology, one node will be primary, and the rest will be secondary. To get started you will need to install two packages.
 
 ```bash
 sudo apt install keepalived libipset13 -y
@@ -107,7 +107,7 @@ vrrp_instance pihole {
 | Line | Description |
 |---|---|
 | 1 | The first thing to configure is the instance name. I have it set to `pihole`. |
-| 2 | You will need to decide the node's default disposition, whether it is the master node or a backup. Keep in mind, the node's disposition will change as necessary based on other nodes. If another node enters the cluser with a higher priorty, it will always become the master node. |
+| 2 | You will need to decide the node's default disposition, whether it is the master node or a backup. Keep in mind, the node's disposition will change as necessary based on other nodes. If another node enters the cluster with a higher priority, it will always become the master node. |
 | 3 | The name of the interface the virtual IP will be bound to. It can be found using `ip a`. |
 | 5 | The priority configures which node is the master. The master node will always be the node with the highest priority. |
 | 6 | The advertisement timespan in seconds. |
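The `vrrp_instance` block the table annotates sits outside the changed lines; a minimal sketch consistent with the table (the interface name, router ID, and IP addresses are placeholders) might be:

```
vrrp_instance pihole {
    state MASTER                  # line 2: default disposition
    interface eth0                # line 3: interface the virtual IP binds to
    virtual_router_id 50
    priority 150                  # line 5: highest priority becomes master
    advert_int 1                  # line 6: advertisement timespan in seconds
    unicast_src_ip 192.168.1.11   # this node
    unicast_peer {
        192.168.1.12              # the other Pi-hole node
    }
    virtual_ipaddress {
        192.168.1.5/24            # the shared virtual IP
    }
}
```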
@@ -119,7 +119,7 @@ vrrp_instance pihole {
 > Never set an IP reservation for the virtual IP, or set it as a static address for another device
 {: .prompt-warning }
 
-Also keep in mind, this is set up for unicast, but can be configured for multicast. I just like to be explict. You can find more details about [keepalived configuration here](https://keepalived.readthedocs.io/en/latest/configuration_synopsis.html).
+Also keep in mind, this is set up for unicast, but it can be configured for multicast. I just like to be explicit. You can find more details about [keepalived configuration here](https://keepalived.readthedocs.io/en/latest/configuration_synopsis.html).
 
 Once it's configured, restart the service:
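The restart command itself falls outside this hunk; on a systemd-based install it would presumably be:

```bash
sudo systemctl restart keepalived
```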

@@ -167,17 +167,17 @@ cname=pihole.ha.local,pihole.local
 
 ## Synchronizing Local DNS
 
-Now, a critical part of this is that the configuration you set up on your primary node is distributed to the other nodes, so that in the event of a failover your DNS local records still resolve. If you don't use local DNS, or want to keep things syncronized manually, you can skip this bit. If not though, I'll show you how to syncronize these files using [Gravity Sync](https://github.com/vmstan/gravity-sync).
+Now, a critical part of this is that the configuration you set up on your primary node is distributed to the other nodes, so that in the event of a failover your local DNS records still resolve. If you don't use local DNS, or want to keep things synchronized manually, you can skip this bit. If not, I'll show you how to synchronize these files using [Gravity Sync](https://github.com/vmstan/gravity-sync).
 
-In the past I tried to keep instances syncronized with rsync, but that proved to be too fragile over time. Gravity sync does a very robust job and just works.
+In the past I tried to keep instances synchronized with rsync, but that proved to be too fragile over time. Gravity Sync does a very robust job and just works.
 
 To install, follow the installation guide in the repo, but as an overview, you will need to run the curl command.
 
 ```bash
 curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash
 ```
 
-The install script will prompt you for the remote machine. For my usage, my auxiallary instances pull their configuration from the primary instance. Once a connection is made, run the pull command.
+The install script will prompt you for the remote machine. For my usage, my auxiliary instances pull their configuration from the primary instance. Once a connection is made, run the pull command.
 
 ```bash
 gravity-sync pull
@@ -191,4 +191,4 @@ gravity-sync auto
 
 Auto will use the last successful connection made, whether pull or push.
 
-Congratulations, you should now have a high availabilty Pi-hole cluster!
+Congratulations, you should now have a high availability Pi-hole cluster!
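For context, the two Gravity Sync commands these hunks reference, as they would be run on an auxiliary node:

```bash
gravity-sync pull   # one-time pull of the primary's configuration
gravity-sync auto   # schedule recurring syncs; reuses the last successful direction
```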

_posts/2022-12-18-lxc-plex-setup.md

Lines changed: 6 additions & 6 deletions
@@ -12,7 +12,7 @@ After setting up a new server, I wanted to migrate my plex install to the more p
 
 ## Provisioning
 
-Previously I was running plex as a container in unraid. Then as a container on another VM. Both were somewhat problematic for me because plex is a hog and takes up all the resources of the VM during transcoding. So I wanted to try and install it in a LXC environment instead. To start, I provitioned the environment with 2 CPU cores, 3 GB of RAM, 16 GB of disk space, and a static IP. While this seemed like enough at first, I doubled the CPU coure count to 4 as it was running steadily at 98% utilization with only 2. I also had to convert the environment to a privledged container to get CIFS automount to work correctly.
+Previously I was running Plex as a container in Unraid, then as a container on another VM. Both were somewhat problematic for me because Plex is a hog and takes up all the resources of the VM during transcoding. So I wanted to try installing it in an LXC environment instead. To start, I provisioned the environment with 2 CPU cores, 3 GB of RAM, 16 GB of disk space, and a static IP. While this seemed like enough at first, I doubled the CPU core count to 4 as it was running steadily at 98% utilization with only 2. I also had to convert the environment to a privileged container to get CIFS automount to work correctly.
 
 The final provisioned LXC environment is as follows:

@@ -24,7 +24,7 @@ Final provisioned LXC environment is as follows:
 
 ## Mounting Media from Network Share
 
-Mounting my media share from a storage device was easy enough, once I realized I had to make the container privledged. I configured `fstab` to automount the share when the environment started, and used a credential file stored in /root for security.
+Mounting my media share from a storage device was easy enough once I realized I had to make the container privileged. I configured `fstab` to automount the share when the environment started, and used a credentials file stored in `/root` for security.
 
 > Privileged Container must be set to true to mount a network share
 {: .prompt-tip }
@@ -48,7 +48,7 @@ sudo nano /etc/fstab
 ```
 
 ```
-//192.168.1.10/media /mnt/media cifs uid=0,credentials=/root/.cifscreds,iocharset=utf8,vers=3.0,noperm 0 0
+//192.168.1.10/media /mnt/media cifs vers=3.0,credentials=/root/.cifscreds,uid=1000,gid=1000 0 0
 ```
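The `.cifscreds` file referenced by the entry is not part of the diff; the standard `mount.cifs` credentials format, with placeholder values, is:

```
username=mediauser
password=changeme
domain=WORKGROUP
```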
 
 Once you've added the configuration, save the file and run
@@ -57,7 +57,7 @@ Once you've added the configuration save the file and run
 sudo mount -a
 ```
 
-You should now see your media mounted under /mnt/media. Restart the environment to make sure it is remounter after a reboot.
+You should now see your media mounted under `/mnt/media`. Restart the environment to make sure it is remounted after a reboot.
 
 ## Creating /transcode

@@ -71,7 +71,7 @@ sudo nano /etc/fstab
 ```
 
 ```
-tmpfs   /mnt/transcode   tmpfs   rw,size=2G   0   0
+tmpfs /mnt/transcode tmpfs rw,size=2G 0 0
 ```
 
 I set the size to 2 GB, but this can be configured to whatever you like. During heavy transcoding, or when Plex is handling multiple streams, this will certainly fill up. Plex will recycle memory as needed based on memory pressure and available space.
@@ -106,7 +106,7 @@ Once Plex finishes installing, you can access it from the static IP configured a
 
 ## Migrating Configuration
 
-As I mentioned in the beginning, I am migrating from an existing plex environment, and thus I want to move my cache to the new environment rather than recreate it. The benefit of this is that I won't lose all of my custom metadata, nor collections and other settings. To make this move, you will need to find your Library folders and copy the content to the new environment. I used rsync to do this but, you can use winscp or any other method you like. I found my Library files in the config folder I mounted for the container I was using. Installing Plex in the LXC node, I found it in `/var/lib/plexmediaserver/`
+As I mentioned in the beginning, I am migrating from an existing Plex environment, and thus I want to move my cache to the new environment rather than recreate it. The benefit of this is that I won't lose my custom metadata, collections, or other settings. To make this move, you will need to find your Library folders and copy the content to the new environment. I used rsync to do this, but you can use WinSCP or any other method you like. I found my Library files in the config folder I mounted for the container I was using. After installing Plex in the LXC node, I found it in `/var/lib/plexmediaserver/`.
 
 I would recommend stopping Plex as a service before you migrate the files.
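A sketch of the rsync copy described above, with an assumed source path and destination host name:

```bash
# Stop Plex so the library database isn't copied mid-write
sudo systemctl stop plexmediaserver

# -a preserves ownership and permissions, -P shows progress; paths are placeholders
rsync -aP /mnt/old-plex/config/Library/ root@plex-lxc:/var/lib/plexmediaserver/Library/

sudo systemctl start plexmediaserver
```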

_posts/2023-01-01-lxc-docker-setup.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ A quick guide to getting docker running on a Debian CT
 
 ## CT Configuration
 
-First make sure your container is running in privledged mode and nested is enabled in Options > Features
+First, make sure your container is running in privileged mode and that nesting is enabled in Options > Features.
 
 ![features](/assets/img/lxc-docker-setup-1.png)

_posts/2023-06-16-back-up-rpi-live.md

Lines changed: 6 additions & 4 deletions
@@ -18,15 +18,16 @@ My criteria for success, in order, has been:
 1. Automatic
 1. Fast
 
-Obviously the most important thing about any backup is that it is accureate and can be successfully restored. My first thought was just pulling the flash drive they boot off of and using clonezilla to make an image. This would require me to have the disipline and memory to shutdown the device regularly and pull the flash drive for backup. I would prefer something a bit more automatic but I know options like `dd` are not viable running against an active device.
+Obviously the most important thing about any backup is that it is accurate and can be successfully restored. My first thought was just pulling the flash drive they boot off of and using Clonezilla to make an image. This would require me to have the discipline and memory to shut down the device regularly and pull the flash drive for backup. I would prefer something a bit more automatic, but I know options like `dd` are not viable against an active device.
 
-That's when I came accross `image-utilities`. It spawned from a rasperry pi forum post in 2019, but has been slightly maintained since then [in GitHub](https://github.com/seamusdemora/RonR-RPi-image-utils). It uses a bash script and rsync to make a copy of the running device, and is even able to make incremental backups.
+That's when I came across `image-utilities`. It spawned from a Raspberry Pi forum post in 2019, but has been lightly maintained since then [on GitHub](https://github.com/seamusdemora/RonR-RPi-image-utils). It uses a bash script and rsync to make a copy of the running device, and is even able to make incremental backups.
 
 To install it, follow the guide on the GitHub page, but here is a simplified version.
 
 ## Scripts Install
 
 > Don't just take my word for it. Always inspect the code that will be running on your machines, especially from an untrusted and unsigned source.
+{: .prompt-warning }
 
 ```bash
 git clone https://github.com/seamusdemora/RonR-RPi-image-utils.git ./image-utils
@@ -50,11 +51,12 @@ Now you should be able to use `image-backup`.
 sudo image-backup --initial /mnt/backup/$(date +"%Y-%m-%d").img,,5000
 ```
 
-The backup run time will depend on your device and how much data it needs to copy. It is surprisingly fast though. 15GB ususually runs for 2+ minutes on a Raspberry Pi 4B.
+The backup run time will depend on your device and how much data it needs to copy. It is surprisingly fast though. 15 GB usually runs for 2+ minutes on a Raspberry Pi 4B.
 
 > Backup can be pretty large, ~15GB depending on how much you have running on your Pi
+{: .prompt-warning }
 
-Once you have a completed backup, you can run an incremental backup by running `image-backup` and providing an exisiting backup to update.
+Once you have a completed backup, you can run an incremental backup by running `image-backup` and providing an existing backup to update.
 
 ```bash
 image-backup <image_name.img>
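Since the first success criterion is that backups be automatic, a root crontab entry along these lines could drive the incremental run; the schedule and image path are assumptions, and `image-backup` must be on root's PATH:

```bash
# Weekly incremental update of an existing image, Sundays at 03:00
0 3 * * 0 image-backup /mnt/backup/latest.img
```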

_posts/2023-06-16-mounting-smb-share-at-boot.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ tags: ['linux', 'fstab', 'smb']
 
 Very frequently I need to mount SMB2 or SMB3 shares on my Linux devices. To do so I usually use `fstab`.
 
-## Depenedencies
+## Dependencies
 
 You will need to install Samba and CIFS utilities:
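The install command itself sits outside the changed lines; on a Debian-based system it would presumably be:

```bash
sudo apt install samba cifs-utils -y
```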
