Commit 45782be

docs fixes, post about IaaC
1 parent f5d5385 commit 45782be

File tree

3 files changed: +138 -33 lines changed

adminforth/build.log

Lines changed: 0 additions & 12 deletions
This file was deleted.

adminforth/documentation/blog/2025-02-19-compose-ec2-deployment-ci-registry/index.md

Lines changed: 132 additions & 15 deletions
@@ -1,11 +1,30 @@
 ---
 slug: compose-ec2-deployment-github-actions-registry
-title: Deploy AdminForth to EC2 with terraform on GitHub actions with self-hosted Docker Registry
+title: IaC deploy setup to Amazon EC2 with GitHub Actions, Docker, Terraform and a self-hosted Docker Registry
 authors: ivanb
 tags: [aws, terraform, github-actions]
 ---

-This guid shows how to deploy AdminFforth to Amazon EC2 with Docker and Terraform involving Registry.
+This guide shows how to deploy your own Docker apps (with AdminForth as an example) to an Amazon EC2 instance with Docker and Terraform, using a self-hosted Docker registry.
+
+Required resources:
+- A GitHub Actions Free plan, which includes 2,000 minutes per month (1,000 two-minute builds, more than enough for many projects if you are not running tests). Extra build minutes cost `$0.008` each.
+- An AWS account where we will auto-spawn an EC2 instance. We will use a t3a.small instance (2 vCPUs, 2 GB RAM), which costs `~$14` per month in the `us-east-1` region (the cheapest region).
+- ~$2 per month for 20 GB of EBS gp2 storage for the EC2 instance.
+
+That is it: the registry is auto-spawned on the EC2 instance, so it adds no extra cost, and GitHub storage is not used, so there are no storage costs either.
+
+The setup has the following features:
+- The build process follows the IaC approach with HashiCorp Terraform, so almost no manual actions are needed from you. Every resource, including the EC2 server instance, is described in code committed to the repo; nothing has to be clicked through manually.
+- The Docker build process runs on GitHub Actions, so the EC2 server is not overloaded.
+- Infrastructure changes, including changing the server type, adding an S3 bucket, or resizing the server disk, are also made in code and applied by committing to the repo.
+- Docker images and build cache are stored on the EC2 server, so no extra Docker registry costs are needed.
+- Total build time for an average commit to an AdminForth app (with Vite rebuilds) is around 2 minutes.
+
+<!-- truncate -->
+
+# Building on CI versus building on EC2?

 Previously we had a blog post about [deploying AdminForth to EC2 with Terraform without registry](/blog/compose-ec2-deployment-github-actions/). That method might work well but has a significant disadvantage - the build process happens on EC2 itself and uses EC2 RAM and CPU. This can be a problem if your EC2 instance is already well-loaded, without extra free resources. Moreover, low-end EC2 instances have a small amount of RAM and CPU, so a build process which involves vite/tsc/etc can be slow or even fail.

@@ -22,9 +41,6 @@ Quick difference between approaches from previous post and current post:
 | Disadvantages | Build on EC2 requires additional server RAM / overloads CPU | More terraform code is needed. Registry cache might require a small amount of extra space on EC2 |


-
-<!-- truncate -->
-
 ## Challenges when you build on CI

 A little bit of theory.
@@ -35,11 +51,11 @@ When you move the build process to CI you have to solve the following challenges:

 ### Delivering images

-### Exporing images to tar files
+#### Exporting images to tar files

 The simplest option you can find is to save docker images to tar files and deliver them to EC2. We can easily do it in terraform (using the `docker save -o ...` command on CI and the `docker load ...` command on EC2). However, this option has a significant disadvantage - it is slow. Docker images are big (they always include all layers, without any options), so it takes an eternity to do the save/load and another eternity to transfer the files to EC2 (via relatively slow rsync/SSH and a relatively slow GitHub Actions outbound connection).
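
To make the tar-based flow above concrete, here is a minimal hedged sketch; the image name, key path, and host are illustrative placeholders, not files from this post:

```bash
# on CI: export the image with all of its layers into a tar file
docker save -o myadmin.tar myadmin:latest
# transfer the tar to the server (slow for multi-GB images)
rsync -avz -e "ssh -i ./.keys/id_rsa" myadmin.tar ubuntu@<your_ec2_ip>:/tmp/
# on EC2: import the image from the tar file
ssh -i ./.keys/id_rsa ubuntu@<your_ec2_ip> "docker load -i /tmp/myadmin.tar"
```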

-### Docker registry
+#### Docker registry

 The second and right option, which we will use here, involves a Docker registry. A Docker registry is a repository which stores docker images. It stores them in a smart way - as layers - so if you update the last layer and push the image from CI to the registry, only the last layer is pushed, and only it is then pulled to EC2.
 To give you a rough comparison: a whole image with all layers might take `1GB`, but the last layer created by the `npm run build` command might take only `50MB`. For most builds you change only the last layer, so pushing/pulling it is roughly 20 times faster than moving the whole image.
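
A minimal sketch of that push/pull cycle, assuming a self-hosted registry listening on the default port `5000`; the image name and host are placeholders:

```bash
# on CI: tag the built image for the self-hosted registry and push it;
# only layers the registry does not already have are uploaded
docker tag myadmin:latest <your_ec2_ip>:5000/myadmin:latest
docker push <your_ec2_ip>:5000/myadmin:latest

# on EC2: pull downloads only the layers missing locally
docker pull <your_ec2_ip>:5000/myadmin:latest
```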
@@ -65,8 +81,7 @@ So when the built-in Docker cache can't be used, there is one alternative - Docker B
 So BuildKit allows you to connect external storage. There are several options, but the sweetest one for us is using the Docker registry as cache storage (not only as image storage). However, a drawback appears here.
 Previously we used docker compose to run our app; it can be used to both build and deploy images, but it has [issues with external cache connection](https://github.com/docker/compose/issues/11072#issuecomment-1848974315). While they are not solved, we have to use the `docker buildx bake` command to build images. It is not so bad, but it is another point of configuration, which we will cover in this post.
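
For orientation, a hedged sketch of what a `docker buildx bake` target with a registry-backed cache can look like; the file name, target name, and registry address are assumptions, not the exact configuration from this post:

```hcl
# docker-bake.hcl (illustrative)
target "myadmin" {
  context    = "./myadmin"
  dockerfile = "Dockerfile"
  tags       = ["127.0.0.1:5000/myadmin:latest"]
  # reuse layers cached in the registry by previous CI runs
  cache-from = ["type=registry,ref=127.0.0.1:5000/myadmin:buildcache"]
  # write all produced layers back to the registry cache
  cache-to   = ["type=registry,ref=127.0.0.1:5000/myadmin:buildcache,mode=max"]
}
```

On CI such a target would then be built and pushed with `docker buildx bake --push`.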

-# Practice - deploy AdminForth to EC2 with terraform on GitHub actions with self-hosted Docker Registry
-
+# Practice - deploy setup

 Assume you have your AdminForth project in `myadmin`.

@@ -86,7 +101,6 @@ RUN --mount=type=cache,target=/tmp npx tsx bundleNow.ts
 CMD ["npm", "run", "startLive"]
 ```

-
 ## Step 2 - compose.yml

 Create folder `deploy` and create file `compose.yml` inside:
@@ -144,6 +158,7 @@ Create `deploy/.gitignore` file with the following content:
 *.tfstate.*
 *.tfvars
 tfplan
+.env.live
 ```

 ## Step 5 - Main terraform file main.tf
@@ -158,7 +173,7 @@ Create file `main.tf` in `deploy` folder:

 locals {
   app_name   = "<your_app_name>"
-  aws_region = "eu-central-1"
+  aws_region = "us-east-1"
 }

@@ -252,7 +267,7 @@ resource "aws_key_pair" "app_deployer" {

 resource "aws_instance" "app_instance" {
   ami           = data.aws_ami.ubuntu_linux.id
-  instance_type = "t3a.small"
+  instance_type = "t3a.small" # just change it to another type if you need, check https://instances.vantage.sh/
   subnet_id     = data.aws_subnet.default_subnet.id
   vpc_security_group_ids = [aws_security_group.instance_sg.id]
   key_name      = aws_key_pair.app_deployer.key_name
@@ -266,7 +281,7 @@ resource "aws_instance" "app_instance" {
   }

   root_block_device {
-    volume_size = 40 // Size in GB for root partition
+    volume_size = 20 // Size in GB for root partition
     volume_type = "gp2"

     # Even if the instance is terminated, the volume will not be deleted, delete it manually if needed
@@ -508,7 +523,7 @@ terraform {
   backend "s3" {
     bucket  = "<your_app_name>-terraform-state"
     key     = "state.tfstate" # Define a specific path for the state file
-    region  = "eu-central-1"
+    region  = "us-east-1"
     profile = "myaws"
     use_lockfile = true
   }
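
Side note: an S3 backend expects the state bucket to already exist before `terraform init` runs. If the post does not create it elsewhere, a one-time hedged example using the AWS CLI (bucket name and profile mirror the config above):

```bash
# create the bucket that will hold the Terraform state (one-time action)
aws s3 mb s3://<your_app_name>-terraform-state --region us-east-1 --profile myaws
```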
@@ -580,6 +595,7 @@ jobs:
       VAULT_AWS_SECRET_ACCESS_KEY: ${{ secrets.VAULT_AWS_SECRET_ACCESS_KEY }}
       VAULT_SSH_PRIVATE_KEY: ${{ secrets.VAULT_SSH_PRIVATE_KEY }}
       VAULT_SSH_PUBLIC_KEY: ${{ secrets.VAULT_SSH_PUBLIC_KEY }}
+
     run: |
       /bin/sh -x deploy/deploy.sh

@@ -613,6 +629,9 @@ EOF

 chmod 600 ./.keys/id_rsa*

+# init .env.live
+echo "" > .env.live
+
 # force Terraform to reinitialize the backend without migrating the state.
 terraform init -reconfigure
 terraform plan -out=tfplan
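
The rest of `deploy.sh` is not shown in this diff; in standard Terraform usage a saved plan like this is applied with the following command (an assumption, not a line quoted from the script):

```bash
# apply exactly the plan that was reviewed, with no re-planning
terraform apply tfplan
```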
@@ -629,4 +648,102 @@ Go to your GitHub repository, then `Settings` -> `Secrets` -> `New repository se
 - `VAULT_SSH_PUBLIC_KEY` - run `cat ~/.ssh/id_rsa.pub` and paste the output to GitHub secrets


-Now you can push your changes to GitHub and see how it will be deployed automatically.
+Now you can push your changes to GitHub and see how it will be deployed automatically.
+
+
+### Adding secrets
+
+Once you have sensitive tokens/passwords in your app, you have to store them in a secure way.
+
+The simplest way is to use GitHub secrets.
+
+Let's imagine you have an `OPENAI_API_KEY` which will be used by one of the AI-powered plugins of AdminForth. We can't put this key into the code, so we have to store it in GitHub secrets.
+
+Open your GitHub repository, then `Settings` -> `Secrets` -> `New repository secret` and add `VAULT_OPENAI_API_KEY` with your key.
+
+Now open the GitHub Actions workflow file and add it to the `env` section:
+
+```yml title=".github/workflows/deploy.yml"
+- name: Start building
+  env:
+    VAULT_AWS_ACCESS_KEY_ID: ${{ secrets.VAULT_AWS_ACCESS_KEY_ID }}
+    VAULT_AWS_SECRET_ACCESS_KEY: ${{ secrets.VAULT_AWS_SECRET_ACCESS_KEY }}
+    VAULT_SSH_PRIVATE_KEY: ${{ secrets.VAULT_SSH_PRIVATE_KEY }}
+    VAULT_SSH_PUBLIC_KEY: ${{ secrets.VAULT_SSH_PUBLIC_KEY }}
+//diff-add
+    VAULT_OPENAI_API_KEY: ${{ secrets.VAULT_OPENAI_API_KEY }}
+```
+
+Next, add it to the `deploy.sh` script:
+
+```bash title="deploy/deploy.sh"
+
+//diff-remove
+echo "" > .env.live
+//diff-add
+cat <<EOF > .env.live
+//diff-add
+OPENAI_API_KEY=$VAULT_OPENAI_API_KEY
+//diff-add
+EOF
+```
+
+
+In the same way you can add any other secrets to your GitHub Actions workflow.
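
For such a variable to actually reach the container, the compose file has to load `.env.live`; a minimal hedged sketch, assuming a service named `myadmin` (the exact `compose.yml` from Step 2 may differ):

```yml
services:
  myadmin:
    # inject variables written by deploy.sh into the container environment
    env_file:
      - .env.live
```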
+
+
+### Out of space on EC2 instance? Extend EBS volume
+
+
+To increase the EBS volume size, take the following steps.
+
+In the `main.tf` file:
+
+```hcl title="main.tf"
+root_block_device {
+//diff-remove
+  volume_size = 20 // Size in GB for root partition
+//diff-add
+  volume_size = 40 // Size in GB for root partition
+  volume_type = "gp2"
+}
+```
+
+And run the build.
+
+This will increase the physical size of the EBS volume, but you have to grow the partition and the filesystem too.
+
+Log in to the EC2 instance:
+
+```bash
+ssh -i ./.keys/id_rsa ubuntu@<your_ec2_ip>
+```
+
+> You can find your EC2 IP in the AWS console by visiting EC2 -> Instances -> Your instance -> IPv4 Public IP
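
Alternatively, a hedged AWS CLI one-liner to list the public IPs of running instances (profile and region mirror the Terraform config above; add a tag filter if you run more than one instance):

```bash
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicIpAddress" \
  --output text --profile myaws --region us-east-1
```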
+
+
+Now run the following command:
+
+```bash
+lsblk
+```
+
+It will show something like this:
+
+```bash
+NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
+loop0          7:0    0 99.4M  1 loop /snap/core/10908
+nvme0n1      259:0    0   40G  0 disk
+└─nvme0n1p1  259:1    0   20G  0 part /
+```
+
+Here we see that `nvme0n1` is our disk and `nvme0n1p1` is our partition.
+
+Now, to extend the partition and the filesystem, run:
+
+```bash
+sudo growpart /dev/nvme0n1 1
+sudo resize2fs /dev/nvme0n1p1
+```
+
+This will extend the partition to the full disk size. No reboot is needed.
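
To verify the result, a quick check that the root filesystem now sees the full disk:

```bash
# report the size and usage of the root filesystem
df -h /
```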

adminforth/documentation/docs/tutorial/05-Plugins/11-oauth.md

Lines changed: 6 additions & 6 deletions
@@ -20,16 +20,16 @@ You need to get the client ID and client secret from your OAuth2 provider.
 For Google:
 1. Go to the [Google Cloud Console](https://console.cloud.google.com)
 2. Create a new project or select an existing one
-3. Go to "APIs & Services" -> "Credentials"
+3. Go to `APIs & Services` -> `Credentials`
 4. Create credentials for OAuth 2.0 client IDs
 5. Select application type: "Web application"
 6. Add your application's name and redirect URI
-7. Set the redirect URI to `http://your-domain/oauth/callback`
+7. In "Authorized redirect URIs", add the following URIs: `https://your-domain/oauth/callback`, `http://localhost:3500/oauth/callback`. Please remember to include the BASE_URL in the URI if you are using it in the project, e.g. `https://your-domain/base/oauth/callback`
 8. Add the credentials to your `.env` file:

 ```bash
-GOOGLE_CLIENT_ID=your_google_client_id
-GOOGLE_CLIENT_SECRET=your_google_client_secret
+GOOGLE_OAUTH_CLIENT_ID=your_google_client_id
+GOOGLE_OAUTH_CLIENT_SECRET=your_google_client_secret
 ```

 ### 2. Plugin Configuration
@@ -46,8 +46,8 @@ plugins: [
   new OAuthPlugin({
     adapters: [
       new AdminForthAdapterGoogleOauth2({
-        clientID: process.env.GOOGLE_CLIENT_ID,
-        clientSecret: process.env.GOOGLE_CLIENT_SECRET,
+        clientID: process.env.GOOGLE_OAUTH_CLIENT_ID,
+        clientSecret: process.env.GOOGLE_OAUTH_CLIENT_SECRET,
         redirectUri: 'http://localhost:3000/oauth/callback',
       }),
     ],
