- Purely is a cloud-first microservices web application showcasing Kubernetes. The application is a web-based e-commerce app where users can browse items, add them to the cart, and purchase them.
- The architecture leverages Spring Boot microservices, Spring Cloud Gateway, and Eureka Service Registry, with a React.js frontend and MongoDB databases.
- The solution is containerized and deployed to AWS Elastic Kubernetes Service (EKS) using Helm and automated via GitHub Actions CI/CD pipelines.
- Project Tree
- Development Set up
- Deployment Set up
- How to run locally?
- How to deploy to AWS?
- Demo video
fullstack-E-commerce-web-application/
├── .github/
│ └── workflows/
│ ├── ci-cd-auth.yml
│ ├── ci-cd-cart.yml
│ ├── ci-cd-category.yml
│ ├── ci-cd-gateway.yml
│ ├── ci-cd-ingress.yml
│ ├── ci-cd-notification.yml
│ ├── ci-cd-order.yml
│ ├── ci-cd-product.yml
│ ├── ci-cd-registry.yml
│ ├── ci-cd-user.yml
│ └── ci-cd-web.yml
├── assets/
├── frontend/
│ ├── nginx/
│ ├── public/
│ ├── src/
│ │ ├── api-service/
│ │ ├── assets/
│ │ ├── components/
│ │ ├── contexts/
│ │ ├── pages/
│ │ ├── routes/
│ │ ├── App.jsx
│ │ └── main.jsx
│ ├── Dockerfile
│ └── index.html
├── helm-charts/
│ ├── api-gateway/
│ ├── auth-service/
│ ├── cart-service/
│ ├── category-service/
│ ├── ingress-alb/
│ ├── notification-service/
│ ├── order-service/
│ ├── product-service/
│ ├── service-registry/
│ ├── user-service/
│ └── web-app/
├── microservice-backend/
│ ├── api-gateway/
│ ├── auth-service/
│ ├── cart-service/
│ ├── category-service/
│ ├── notification-service/
│ ├── order-service/
│ ├── product-service/
│ ├── service-registry/
│ └── user-service/
├── sample-data/
│ ├── purely_category_service.categories.json
│ └── purely_product_service.products.json
├── terraform/
│ ├── common-data.tf
│ ├── common-provider.tf
│ ├── common-variables.tf
│ ├── ecr-registries.tf
│ ├── eks-access-entries.tf
│ ├── eks-alb-controller.tf
│ ├── eks-cluster-autoscaler.tf
│ ├── eks-cluster.tf
│ ├── eks-metrics-server.tf
│ ├── eks-node-groups.tf
│ ├── eks-openid-connect-provider.tf
│ ├── policies/
│ │ ├── AWSLoadBalancerControllerIAMPolicy.json
│ │ └── EKSClusterAutoscalerIAMPolicy.json
│ ├── vpc-internet-gateway.tf
│ ├── vpc-nat-gateway.tf
│ ├── vpc-route-tables.tf
│ ├── vpc-subnets.tf
│ └── vpc.tf
└── README.md
- Microservices Architecture: Independent services for User, Auth, Product, Category, Cart, Order, and Notification.
- Service Discovery: Centralized Eureka Service Registry manages dynamic discovery of microservices within the cluster. Simplifies communication and load balancing between services.
- API Gateway: Built using Spring Cloud Gateway. Acts as the single entry point for all client requests.
- Frontend: Developed in React.js, providing a responsive user interface. Communicates with the backend exclusively via API Gateway.
- Databases: Each microservice uses a dedicated MongoDB database.
- The Service Registry serves as a centralized repository for information about all the available services in the microservices architecture.
- This includes details such as IP addresses, port numbers, and other metadata required for communication.
- As services start, stop, or scale up/down dynamically in response to changing demand, they update their registration information in the Service Registry accordingly.
- The API Gateway acts as a centralized entry point for clients, providing a unified interface to access the microservices.
- The API Gateway is the traffic cop of the architecture: it routes each incoming request to the appropriate microservice instance based on predefined rules and configurations.
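In Spring Cloud, both Eureka registration and gateway routing are driven by configuration rather than code. A minimal sketch is shown below; the service name, port, and route path are illustrative, not this project's actual values, and the two halves live in different services (the first in a microservice, the second in the api-gateway):

```properties
# Illustrative Eureka client settings for a backend microservice
spring.application.name=product-service
server.port=8083
eureka.client.service-url.defaultZone=http://localhost:8761/eureka

# Illustrative gateway route (in the api-gateway): forward /product/** to
# whichever product-service instance Eureka reports (lb:// = load-balanced lookup)
spring.cloud.gateway.routes[0].id=product-service
spring.cloud.gateway.routes[0].uri=lb://product-service
spring.cloud.gateway.routes[0].predicates[0]=Path=/product/**
```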
- The Auth Service is responsible for securely verifying user identities and facilitating token-based authentication.
- The Category Service provides centralized data management and operations for product categories.
- The Product Service provides centralized data management and operations for available products.
- The Cart Service provides centralized data management and operations for user carts.
- The Order Service provides centralized data management and operations for orders.
- The Notification Service provides centralized operations for sending emails to users.
| HTTP Method | Route Path | Description |
|---|---|---|
| | `/notification/send` | Send email |
- OpenFeign, a declarative HTTP client library for Java, is used to simplify the process of making HTTP requests to other microservices.
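As an illustration, a declarative OpenFeign client for calling another service might look like the sketch below. The interface, endpoint path, and DTO name are hypothetical, not this project's actual code; the `name` attribute is resolved through Eureka:

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Hypothetical client: Spring generates the HTTP plumbing at runtime,
// and "product-service" is looked up in the Eureka registry.
@FeignClient(name = "product-service")
public interface ProductClient {

    // Maps to GET /product/{id} on a product-service instance
    @GetMapping("/product/{id}")
    ProductDto getProduct(@PathVariable("id") String id);
}
```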
- Each component (frontend, service-registry, api-gateway, and the other microservices) has its own Dockerfile and is packaged into a Docker image.
- Images are pushed to Amazon Elastic Container Registry (ECR).
- Each service is deployed as a separate Helm chart under the `/helm-charts` directory.
- Each chart includes the Kubernetes resources: `Deployment`, `HorizontalPodAutoscaler`, `Service`, `ConfigMap`, and `Secret`.
- All components (Ingress, frontend, service-registry, api-gateway, and the other microservices) are deployed with the `ClusterIP` service type.
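As a sketch, the Service template in each chart boils down to something like the manifest below (the name and ports are illustrative, assuming a service listening on 8080):

```yaml
# Illustrative ClusterIP Service: reachable only inside the cluster,
# so external traffic must come in through the Ingress / ALB.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-service
  ports:
    - port: 8080
      targetPort: 8080
```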
- A dedicated VPC across two Availability Zones (AZs).
- Subnets:
- 2 Public subnets (1 in each AZ).
- 2 Private subnets (1 in each AZ).
- Internet Gateway: Attached to the VPC to provide internet access for the public subnets.
- NAT Gateway: Deployed in one public subnet, allowing outbound internet access for resources in private subnets (e.g., EKS worker nodes pulling Docker images).
- Route Tables:
- Public route table routes internet-bound traffic via Internet Gateway.
- Private route table routes internet-bound traffic via NAT Gateway.
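The private routing rule above amounts to a single default route through the NAT gateway. A minimal Terraform sketch (resource names here are hypothetical, not necessarily those used in `terraform/`):

```hcl
# Hypothetical private route table: internet-bound traffic exits via the NAT gateway,
# so worker nodes can pull images without being reachable from the internet.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main.id
  }
}
```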
- EKS Cluster deployed within the above VPC.
- EKS Node Group (managed worker nodes) spread across the two AZs for high availability. Worker nodes are deployed in private subnets, ensuring they are not exposed directly to the internet.
- Application Load Balancer Controller is installed within the EKS cluster to route external traffic into the cluster via Ingress resources.
- Metrics-server is installed within the EKS cluster so the `Horizontal Pod AutoScaler` can read the current CPU/memory usage of each Pod.
- Cluster AutoScaler is installed within the EKS cluster, automatically adjusting the number of worker nodes based on pending pods.
Horizontal Pod AutoScaler (HPA) is a Kubernetes resource that automatically scales the number of pods in a Deployment, ReplicaSet, or StatefulSet. It continuously watches pod resource metrics (CPU %, memory %, or custom metrics) reported by metrics-server, and raises or lowers the pod count when usage crosses the defined threshold.
Cluster Autoscaler (CA) is a Kubernetes component that automatically adjusts the number of worker nodes in the cluster. If HPA scales up pods but no nodes have enough resources to run them, CA adds new nodes. If nodes are scaled down, it removes nodes to save cost.
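The HPA scaling decision follows the standard formula `desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric)`, which can be worked through with a quick calculation:

```shell
# Ceiling division implements ceil(current * usage / target) in integer arithmetic
desired_replicas() {
  local current=$1 usage=$2 target=$3
  echo $(( (current * usage + target - 1) / target ))
}

# 3 pods averaging 90% CPU against a 60% target scale up to ceil(4.5) = 5 pods
desired_replicas 3 90 60
```

If those 5 pods no longer fit on the existing nodes, they sit in `Pending` until the Cluster Autoscaler adds a node.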
- Infrastructure provisioned using Terraform, ensuring reproducibility and automation.
- Terraform manages:
- VPC (subnets, Internet Gateway, NAT Gateway, route tables).
- EKS Cluster (Control Plane, Managed Node Groups, Access Entry, Metrics-server, Application Load Balancer Controller, Cluster Autoscaler).
- ECR Repositories for storing Docker images.
- Separate workflow files per service for isolation and independent deployments.
- Workflow stages:
- Build & test
- Build Docker image and push to ECR
- Deploy/update Helm release on EKS
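Condensed, each per-service workflow file follows roughly the shape below. The step names, action versions, and paths are illustrative, not this repository's exact workflow contents:

```yaml
# Illustrative per-service workflow: build, push to ECR, upgrade the Helm release
name: ci-cd-product
on:
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build & test
        run: mvn -f microservice-backend/product-service package
      - name: Log in to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push Docker image
        run: |
          docker build -t "$ECR_REGISTRY/$ECR_PRODUCT_REPOSITORY:latest" microservice-backend/product-service
          docker push "$ECR_REGISTRY/$ECR_PRODUCT_REPOSITORY:latest"
      - name: Deploy Helm release to EKS
        run: helm upgrade --install product-service helm-charts/product-service
```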
Make sure you have the following tools installed locally:
- Java Development Kit (JDK 21)
- Maven
- Node.js
- npm
- Git
- Fork the repository to your GitHub account.
- Clone the forked repository to your local machine.

  ```shell
  git clone https://github.com/<your-username>/Fullstack-E-commerce-web-application
  ```

- Create the following databases in MongoDB Atlas:
  - `purely_auth_service`
  - `purely_category_service`
  - `purely_product_service`
  - `purely_cart_service`
  - `purely_order_service`
- You can find sample data for products and categories to get started in the `sample-data/` directory.
- In the `notification-service`, configure the following credentials in the `application.properties` file to enable email sending:

  ```properties
  spring.mail.username=YOUR_USERNAME
  spring.mail.password=YOUR_PASSWORD
  ```

  Replace `YOUR_USERNAME` and `YOUR_PASSWORD` with your actual email service credentials.
- First run the `service-registry` and access the Eureka dashboard at `http://localhost:8761`. Then run the other services.

  ```shell
  mvn spring-boot:run
  ```
- Make sure all the services are up and running in the Eureka Dashboard as below.
- Navigate to the frontend directory.
  ```shell
  cd ./frontend
  ```
- Install dependencies.
  ```shell
  npm install
  ```
- Update `API_BASE_URL` in `apiConfig.js`.

  ```javascript
  const API_BASE_URL = "http://localhost:8080";
  ```

- Run the app.
  ```shell
  npm run dev
  ```
Access the application at http://localhost:5173/
Make sure you have the following tools installed locally:
- kubectl
- Helm
- AWS CLI
- eksctl
- Terraform
- Each component (frontend, service-registry, api-gateway, and the microservices) has its own Dockerfile.
- You don't need to change anything here. The images are built and pushed to Amazon ECR automatically when the CI/CD pipelines run.
- Each service is deployed as a separate Helm chart under the `/helm-charts` directory. Leave them as they are.
- There is no need to modify the chart structure unless you are adding new services or debugging.
- AWS resources are provisioned using the Terraform manifests in the `terraform/` directory.
- By default, you can't directly access an EKS cluster without the `AmazonEKSClusterAdminPolicy`.
- For each user who needs access (root, the GitHub Actions IAM user, the local AWS CLI user), you must create an access entry in the cluster.
- In this project, access entries are defined in `terraform/eks-access-entries.tf`.
- Update the IAM usernames for GitHub Actions and the local CLI in `terraform/common-variables.tf`.
- Then, run the following commands:

  ```shell
  terraform init
  terraform plan
  terraform apply
  ```
- This will create a VPC, subnets (2 public, 2 private), an Internet Gateway, a NAT Gateway, and route tables. You can verify the networking setup in the AWS console under `VPC > Resource Map`.
- This will deploy an EKS cluster (purely-cluster), EKS node groups, Application Load Balancer controller, Metrics server, and Cluster autoscaler.
- After Terraform finishes, update your kubeconfig (ensure the local AWS CLI user has an access entry in the EKS cluster):
  ```shell
  aws eks update-kubeconfig --region YOUR_REGION --name YOUR_CLUSTER_NAME
  ```
- Next, ensure that nodes, Application Load Balancer controller, Metrics server, and Cluster autoscaler are installed properly.
- IAM User for CI/CD
- Create an IAM user with permissions to EKS and ECR.
- Ensure this user has an access entry in the EKS cluster.
- Add the following secrets to your GitHub repository:
| Secret | Value |
|---|---|
| `AWS_ACCESS_KEY_ID` | Access key of the IAM user |
| `AWS_REGION` | `us-east-1` (unless you're using a different AWS region) |
| `AWS_SECRET_ACCESS_KEY` | Secret access key of the IAM user |
| `ECR_AUTH_REPOSITORY` | `purely_auth_registry` (unless you're using a different name for the ECR repository of the Auth service) |
| `ECR_CART_REPOSITORY` | `purely_cart_registry` (unless you're using a different name for the ECR repository of the Cart service) |
| `ECR_CATEGORY_REPOSITORY` | `purely_category_registry` (unless you're using a different name for the ECR repository of the Category service) |
| `ECR_GATEWAY_REPOSITORY` | `purely_gateway_registry` (unless you're using a different name for the ECR repository of the API Gateway) |
| `ECR_NOTIFICATION_REPOSITORY` | `purely_notification_registry` (unless you're using a different name for the ECR repository of the Notification service) |
| `ECR_ORDER_REPOSITORY` | `purely_order_registry` (unless you're using a different name for the ECR repository of the Order service) |
| `ECR_PRODUCT_REPOSITORY` | `purely_product_registry` (unless you're using a different name for the ECR repository of the Product service) |
| `ECR_REGISTRY_REPOSITORY` | `purely_service_registry` (unless you're using a different name for the ECR repository of the Service Registry) |
| `ECR_USER_REPOSITORY` | `purely_user_registry` (unless you're using a different name for the ECR repository of the User service) |
| `ECR_WEB_REPOSITORY` | `purely_web_registry` (unless you're using a different name for the ECR repository of the Frontend) |
| `EKS_CLUSTER` | `purely-cluster` (unless you're using a different name for the EKS cluster) |
| `SPRING_DATA_MONGODB_URI_AUTH` | Database URI of the auth service from MongoDB Atlas |
| `SPRING_DATA_MONGODB_URI_CART` | Database URI of the cart service from MongoDB Atlas |
| `SPRING_DATA_MONGODB_URI_CATEGORY` | Database URI of the category service from MongoDB Atlas |
| `SPRING_DATA_MONGODB_URI_ORDER` | Database URI of the order service from MongoDB Atlas |
| `SPRING_DATA_MONGODB_URI_PRODUCT` | Database URI of the product service from MongoDB Atlas |
| `SPRING_MAIL_PASSWORD` | Your mail app password |
| `SPRING_MAIL_USERNAME` | Your mail address |
- Each service has its own workflow file (ensuring isolation). Trigger workflows from GitHub Actions. Once completed, services will be live in your EKS cluster.
✅ Deployment Complete!
- Verify cluster resources:
- Nodes
- Deployment
- Horizontal Pod Autoscaler
- Service
- Ingress
- Verify the Eureka server via port forwarding
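For example, port forwarding might look like this, assuming the Eureka service is named `service-registry` and listens on port 8761 (adjust to the actual Service name in your cluster):

```shell
# Forward local port 8761 to the in-cluster Eureka service,
# then open http://localhost:8761 in a browser
kubectl port-forward svc/service-registry 8761:8761
```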
Copy the Ingress DNS address from the `kubectl get ingress` output and open it in your browser to view the live application.













