diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/_index.md index cbd31a174f..689005c3dd 100644 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/_index.md @@ -1,28 +1,29 @@ --- -title: Deploy RabbitMQ on Google Cloud C4A (Arm-based Axion VMs) +title: Deploy RabbitMQ on Arm64 Cloud Platforms (Azure & GCP) minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for software engineers and platform engineers migrating messaging and event-driven workloads from x86_64 to Arm-based servers, specifically on Google Cloud C4A virtual machines powered by Axion processors. +who_is_this_for: This is an introductory topic for software engineers and platform engineers migrating messaging and event-driven workloads from x86_64 to Arm-based servers, specifically on Microsoft Azure Cobalt 100 Arm processors and Google Cloud C4A virtual machines powered by Axion processors. learning_objectives: + - Provision Arm-based Linux virtual machines on Google Cloud (C4A with Axion processors) and Microsoft Azure (Cobalt 100) - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) - - Install and configure RabbitMQ on a SUSE Arm64 (C4A) instance - - Validate RabbitMQ deployment using baseline messaging tests - - Implement real-world RabbitMQ use cases such as event-driven processing and notification pipelines + - Install and configure RabbitMQ on Arm64 Linux (SUSE SLES on GCP and Ubuntu Pro 24.04 on Azure) + - Build and configure required Erlang versions for RabbitMQ on Arm64 + - Validate RabbitMQ deployments using baseline messaging and connectivity tests + - Implement practical RabbitMQ use cases such as event-driven processing and notification pipelines on Arm-based infrastructure prerequisites: + - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100-based instances (Dpsv6). - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled - Basic understanding of message queues and messaging concepts (publishers, consumers) - Familiarity with Linux command-line operations - - Basic knowledge of Python for the use case examples author: Pareena Verma ##### Tags skilllevels: Introductory -subjects: Containers and Virtualization -cloud_service_providers: Google Cloud +subjects: Databases armips: - Neoverse @@ -45,6 +46,11 @@ further_reading: link: https://cloud.google.com/docs type: documentation + - resource: + title: Azure Virtual Machines documentation + link: https://learn.microsoft.com/azure/virtual-machines/ + type: documentation + - resource: title: RabbitMQ documentation link: https://www.rabbitmq.com/documentation.html diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_baseline.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_baseline.md new file mode 100644 index 0000000000..cf53643d05 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_baseline.md @@ -0,0 +1,141 @@ +--- +title: RabbitMQ Baseline Testing +weight: 5 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Run a Baseline Test With RabbitMQ +This section validates a working **RabbitMQ 4.2.0** installation with **Erlang OTP 26** on an **Azure Ubuntu Arm64 VM**. + +All steps are **CLI-only** and suitable for baseline verification. 
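If you are not already connected to the VM, open an SSH session first. The command below is a sketch: the private key path and public IP are the values you downloaded and noted when creating the VM, and `azureuser` is the administrator username assumed throughout this Learning Path.

```console
ssh -i ~/.ssh/your-key.pem azureuser@<vm-public-ip>
```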
+ +### Verify RabbitMQ Service Status + +```console +sudo systemctl status rabbitmq +``` + +### Verify Erlang Version +RabbitMQ depends on Erlang. This step ensures the broker is using Erlang OTP 26. + +```console +erl -eval 'io:format("~s~n", [erlang:system_info(system_version)]), halt().' -noshell +``` + +### Verify RabbitMQ Version +Confirm the installed RabbitMQ version. + +```console +rabbitmqctl version +``` + +### Verify Enabled Plugins +List all enabled plugins and confirm that the management plugins are active. + +```console +rabbitmq-plugins list -e +``` + +```output +Listing plugins with pattern ".*" ... + Configured: E = explicitly enabled; e = implicitly enabled + | Status: * = running on rabbit@lpprojectubuntuarm64 + |/ +[E*] rabbitmq_management 4.2.0 +[e*] rabbitmq_management_agent 4.2.0 +[e*] rabbitmq_web_dispatch 4.2.0 +```` + +This confirms that: + +- The management UI is enabled +- Required supporting plugins are running + +### Check RabbitMQ Node Health +Retrieve detailed runtime and resource information for the RabbitMQ node. + +```console +rabbitmqctl status +``` +This confirms that: + +- Node is running +- No alarms are reported +- Erlang version matches OTP 26 + +### Ensure RabbitMQ Configuration Directory Permissions +RabbitMQ requires write access to its configuration directory for plugin management. + +```console +sudo mkdir -p /opt/rabbitmq/etc/rabbitmq +sudo chown -R azureuser:azureuser /opt/rabbitmq/etc/rabbitmq +``` + +### Create a Baseline Test Virtual Host +Create an isolated virtual host for baseline testing. + +```console +rabbitmqctl add_vhost test_vhost +rabbitmqctl set_permissions -p test_vhost guest ".*" ".*" ".*" +``` + +This ensures: + +- Tests do not interfere with default workloads +- Full permissions are available for validation + +### Download RabbitMQ Admin CLI +Download the `rabbitmqadmin` CLI tool from the management endpoint. + +```console +wget http://localhost:15672/cli/rabbitmqadmin -O ~/rabbitmqadmin +chmod +x ~/rabbitmqadmin +``` + +This CLI is used to perform queue and message operations. + +### Declare a Test Queue +Create a non-durable test queue in the test virtual host. + +```console +~/rabbitmqadmin -V test_vhost declare queue name=test durable=false +``` + +### Publish a Test Message +Publish a sample message to the test queue using the default exchange. + +```console +~/rabbitmqadmin -V test_vhost publish \ + exchange=amq.default \ + routing_key=test \ + payload="Hello RabbitMQ" +``` + +This validates: + +- Message routing +- Exchange-to-queue binding behavior + +### Consume The Test Message +Retrieve and remove the message from the queue. + +```console +~/rabbitmqadmin -V test_vhost get queue=test count=1 +``` + +You should see an output similar to: + +```output ++-------------+----------+---------------+----------------+---------------+------------------+------------+-------------+ +| routing_key | exchange | message_count | payload | payload_bytes | payload_encoding | properties | redelivered | ++-------------+----------+---------------+----------------+---------------+------------------+------------+-------------+ +| test | | 0 | Hello RabbitMQ | 14 | string | | False | ++-------------+----------+---------------+----------------+---------------+------------------+------------+-------------+ +``` + +- Message payload: Hello RabbitMQ +- Queue becomes empty after consumption + +This baseline validates a healthy RabbitMQ 4.2.0 deployment running on Erlang/OTP 26 on an Azure Ubuntu Arm64 VM. 
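Before wrapping up, you can also confirm that the management HTTP API responds locally. The request below assumes the default `guest` account, which RabbitMQ only allows to connect from localhost; a successful call returns a JSON document describing the broker, including its version.

```console
curl -u guest:guest http://localhost:15672/api/overview
```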
Core components, plugins, and node health were verified, followed by successful message publish and consume operations. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_installation.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_installation.md new file mode 100644 index 0000000000..703a0a3311 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_installation.md @@ -0,0 +1,155 @@ +--- +title: Install RabbitMQ on Microsoft Azure Cobalt 100 +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Install RabbitMQ on Azure Cobalt 100 +This guide describes the end-to-end installation of RabbitMQ 4.2.0 on an Azure Cobalt 100 (Arm-based) Ubuntu Pro 24.04 virtual machine. It covers system preparation, Erlang installation, RabbitMQ setup, service configuration, and validation with the management plugin enabled. + +### Update system and install build dependencies +This step ensures the operating system is up to date and installs all required packages needed to build Erlang and run RabbitMQ reliably. + +```console +sudo apt update +sudo apt install -y build-essential libssl-dev libncurses-dev libtinfo-dev \ + libgl1-mesa-dev libglu1-mesa-dev libpng-dev libssh-dev \ + unixodbc-dev wget tar xz-utils git +``` + +### Build and install Erlang OTP 26 +RabbitMQ 4.2.0 requires Erlang OTP 26. This section builds Erlang from source to ensure full compatibility on Arm64. + +```console +# Clone Erlang source +git clone https://github.com/erlang/otp.git +cd otp + +# Checkout OTP 26 branch +git checkout OTP-26 + +# Clean previous builds +make clean + +# Configure build with SSL/crypto support +./configure --prefix=/usr/local/erlang-26 \ + --enable-smp-support \ + --enable-threads \ + --enable-kernel-poll \ + --with-ssl + +# Build and install +make -j$(nproc) +sudo make install +``` +### Make Erlang PATH persistent (IMPORTANT) +This step ensures the Erlang binaries are permanently available in the system PATH across sessions and reboots. + +```console +echo 'export ERLANG_HOME=/usr/local/erlang-26' | sudo tee /etc/profile.d/erlang.sh +echo 'export PATH=$ERLANG_HOME/bin:$PATH' | sudo tee -a /etc/profile.d/erlang.sh +``` + +### Download and install RabbitMQ +This section downloads the official RabbitMQ 4.2.0 generic Unix distribution and installs it under `/opt/rabbitmq`. + +```console +cd ~ +wget https://github.com/rabbitmq/rabbitmq-server/releases/download/v4.2.0/rabbitmq-server-generic-unix-4.2.0.tar.xz +sudo mkdir -p /opt/rabbitmq +sudo tar -xvf rabbitmq-server-generic-unix-4.2.0.tar.xz -C /opt/rabbitmq --strip-components=1 + +# Create directories for logs and database +sudo mkdir -p /var/lib/rabbitmq /var/log/rabbitmq +sudo chown -R $USER:$USER /var/lib/rabbitmq /var/log/rabbitmq +``` + +#### Update PATH environment variable +This step makes RabbitMQ CLI tools available in the current shell and should be persisted for future sessions. + +```console +export PATH=/usr/local/erlang-26/bin:/opt/rabbitmq/sbin:$PATH +``` + +Add this line to `~/.bashrc` or `~/.profile` for persistence. + +### Configure RabbitMQ systemd service +This section configures RabbitMQ to run as a managed systemd service, enabling automatic startup and controlled lifecycle management. 
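The unit file below assumes `azureuser` is the administrator account you created with the VM; if you chose a different username, adjust the `User`, `Group`, and `HOME` values to match. You can write the file with any editor, for example:

```console
sudo nano /etc/systemd/system/rabbitmq.service
```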
+ +Create `/etc/systemd/system/rabbitmq.service`: + +```ini +[Unit] +Description=RabbitMQ broker +After=network.target + +[Service] +Type=simple +User=azureuser +Group=azureuser + +Environment=HOME=/home/azureuser +Environment=RABBITMQ_HOME=/opt/rabbitmq +Environment=RABBITMQ_MNESIA_BASE=/var/lib/rabbitmq +Environment=RABBITMQ_LOG_BASE=/var/log/rabbitmq +Environment=PATH=/usr/local/erlang-26/bin:/opt/rabbitmq/sbin:/usr/bin + +ExecStart=/opt/rabbitmq/sbin/rabbitmq-server +ExecStop=/opt/rabbitmq/sbin/rabbitmqctl shutdown + +Restart=on-failure +RestartSec=10 +LimitNOFILE=65536 + +[Install] +WantedBy=multi-user.target +``` + +Reload systemd and start RabbitMQ: + +```console +sudo systemctl daemon-reload +sudo systemctl enable rabbitmq +sudo systemctl start rabbitmq +sudo systemctl status rabbitmq +``` + +### Enable RabbitMQ management plugin +This step enables the RabbitMQ management plugin, which provides a web-based UI and HTTP API for monitoring and administration. + +```console +# Ensure config directory exists +sudo mkdir -p /opt/rabbitmq/etc/rabbitmq +sudo chown -R $USER:$USER /opt/rabbitmq/etc/rabbitmq + +# Enable management plugin +rabbitmq-plugins enable rabbitmq_management +``` + +### Verify installation +This section validates that both Erlang and RabbitMQ are installed correctly and running with the expected versions. + +**Erlang version:** + +```console +erl -eval 'io:format("~s~n", [erlang:system_info(system_version)]), halt().' -noshell +``` + +You should see an output similar to: +```output +Erlang/OTP 26 [erts-14.2.5.12] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit] +``` + +**Verify RabbitMQ version:** + +```console +rabbitmqctl version +``` + +You should see an output similar to: +```output +4.2.0 +``` +RabbitMQ 4.2.0 is successfully installed on an Azure Cobalt 100 Ubuntu Pro 24.04 Arm64 VM with systemd management, persistent storage, logging, and the management plugin enabled. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_instance.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_instance.md new file mode 100644 index 0000000000..14e189efa0 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/azure_instance.md @@ -0,0 +1,50 @@ +--- +title: Create an Arm based cloud virtual machine using Microsoft Cobalt 100 CPU +weight: 3 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Introduction + +There are several ways to create an Arm-based Cobalt 100 virtual machine: the Microsoft Azure console, the Azure CLI tool, or using your choice of IaC (Infrastructure as Code). This guide will use the Azure console to create a virtual machine with Arm-based Cobalt 100 Processor. + +This learning path focuses on the general-purpose virtual machine of the D series. Please read the guide on [Dpsv6 size series](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series) offered by Microsoft Azure. + +If you have never used the Microsoft Cloud Platform before, please review the microsoft [guide to Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu). + +#### Create an Arm-based Azure Virtual Machine + +Creating a virtual machine based on Azure Cobalt 100 is no different from creating any other virtual machine in Azure. To create an Azure virtual machine, launch the Azure portal and navigate to "Virtual Machines". +1. 
Select "Create", and click on "Virtual Machine" from the drop-down list. +2. Inside the "Basic" tab, fill in the Instance details such as "Virtual machine name" and "Region". +3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture. +4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines") + +5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine. +6. Fill in the Administrator username for your VM. +7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key. +8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules") + +9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following: + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM") + +10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private Key and Create Resources" button. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources") + +11. Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details. + +![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal") + +{{% notice Note %}} + +To learn more about Arm-based virtual machines in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure). + +{{% /notice %}} diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/background.md index 6200205e33..1a5a3afe13 100644 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/background.md +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/background.md @@ -1,11 +1,17 @@ --- -title: Learn about RabbitMQ and Google Axion C4A +title: Technology Stack Overview weight: 2 layout: "learningpathall" --- +## Cobalt 100 Arm-based processor + +Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. 
These include web and application servers, data analytics, open-source databases, caching systems, and other related technologies. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance. + +To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353). + ## Google Axion C4A Arm instances in Google Cloud Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. @@ -22,4 +28,4 @@ RabbitMQ helps decouple application components, improve scalability, and increas RabbitMQ is widely used for **event-driven architectures**, **background job processing**, **microservices communication**, and **notification systems**. It integrates easily with many programming languages and platforms. -Learn more from the [RabbitMQ website](https://www.rabbitmq.com/) and the [RabbitMQ documentation](https://www.rabbitmq.com/documentation.html). +Learn more from the [RabbitMQ official website](https://www.rabbitmq.com/) and the [official documentation](https://www.rabbitmq.com/documentation.html). diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/firewall_setup.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/firewall_setup.md deleted file mode 100644 index 30b776551e..0000000000 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/firewall_setup.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Create a firewall rule on GCP -weight: 3 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Overview - -In this section, you create a firewall rule within Google Cloud Console to expose TCP port 15672. - -{{% notice Note %}} -For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/). -{{% /notice %}} - -## Create a firewall rule in GCP - -To expose TCP port 15672, create a firewall rule. - -Navigate to the [Google Cloud Console](https://console.cloud.google.com/), go to **VPC Network > Firewall**, and select **Create firewall rule**. - -![Screenshot showing the VPC Network Firewall page in Google Cloud Console with the Create firewall rule button highlighted alt-txt#center](images/firewall-rule.png "Create a firewall rule") - -Next, create the firewall rule that exposes TCP port 15672. -Set the **Name** of the new rule to "allow-tcp-15672". Select your network that you intend to bind to your VM (default is "autoscaling-net" but your organization might have others). - -Set **Direction of traffic** to "Ingress". Set **Allow on match** to "Allow" and **Targets** to "Specified target tags". Enter "allow-tcp-15672" in the **Target tags** text field. Set **Source IPv4 ranges** to your IP address so that only you can access the application. 
- -![Screenshot showing the firewall rule configuration interface with target tag set to allow-tcp-15672 and TCP port 15672 specified in the protocols and ports section alt-txt#center](images/network-rule.png "Creating the TCP/15672 firewall rule") - -Finally, select **Specified protocols and ports** under the **Protocols and ports** section. Select the **TCP** checkbox, enter "15672" in the **Ports** text field, and select **Create**. - -![Screenshot showing the Protocols and ports section with TCP checkbox selected and port 15672 entered in the ports field alt-txt#center](images/network-port.png "Specifying the TCP port to expose") - -## What you've accomplished and what's next - -You've successfully created a firewall rule to allow TCP traffic on port 15672 for the RabbitMQ management interface. This firewall rule will be applied to your virtual machine using network tags. Next, you'll provision the Google Axion C4A Arm virtual machine. \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_baseline.md similarity index 59% rename from content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/baseline.md rename to content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_baseline.md index c8252fc51a..4a001c85b1 100644 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_baseline.md @@ -1,14 +1,14 @@ --- -title: Validate RabbitMQ installation -weight: 6 +title: RabbitMQ Baseline Testing on Google Axion C4A Arm Virtual Machine +weight: 9 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## RabbitMQ baseline validation on GCP SUSE Arm64 VM - -In this section you'll validate your RabbitMQ installation on the Google Cloud SUSE Linux Arm64 virtual machine by confirming: +## RabbitMQ Baseline Validation on GCP SUSE Arm64 VM +This document defines a **baseline validation procedure** for RabbitMQ installed on a **Google Cloud SUSE Linux Arm64 virtual machine**. +The purpose of this baseline is to confirm: - RabbitMQ service health - Management plugin availability @@ -21,22 +21,19 @@ Verify that the RabbitMQ node is operational and healthy. ```console sudo rabbitmqctl status ``` - -The command returns detailed status information. Verify that: - - Node status reports RabbitMQ is running - No active alarms - Listeners are active on ports 5672 and 15672 - Memory and disk space are within safe limits ### Verify enabled plugins -Confirm that the RabbitMQ management plugins are enabled: +Confirm that the RabbitMQ management plugins are enabled. ```console sudo rabbitmq-plugins list | grep management ``` -The output is similar to: +You should see an output similar to: ```output [ ] rabbitmq_federation_management 4.2.0 [E*] rabbitmq_management 4.2.0 @@ -46,13 +43,13 @@ The output is similar to: ``` ### Validate RabbitMQ listeners -Ensure RabbitMQ is listening on the required ports: +Ensure RabbitMQ is listening on the required ports. 
```console sudo rabbitmqctl status | grep -A5 Listeners ``` -The output is similar to: +You should see an output similar to: ```output Listeners @@ -61,54 +58,49 @@ Interface: [::], port: 25672, protocol: clustering, purpose: inter-node and CLI Interface: [::], port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 ``` -### Download RabbitMQ admin CLI tool - -The `rabbitmqadmin` command is a Python script to manage and monitor RabbitMQ. - -Download the CLI tool from the local management endpoint to the virtual machine. You can also download and run `rabbitmqadmin` on your local computer, but you need to have `python3` installed, including `pip3`. +### Download RabbitMQ Admin CLI tool +Download the rabbitmqadmin CLI tool from the local management endpoint. ```console curl -u guest:guest http://localhost:15672/cli/rabbitmqadmin -o rabbitmqadmin ``` - -Make the tool executable: +**Make the tool executable:** ```console chmod +x rabbitmqadmin ``` - ### Validate queue creation -Create a test queue to validate write operations: +Create a test queue to validate write operations. ```console ./rabbitmqadmin declare queue name=testqueue durable=false ``` -The output is similar to: +You should see an output similar to: ```output queue declared ``` ### Publish a test message -Send a test message to the queue: +Send a test message to the queue. ```console ./rabbitmqadmin publish exchange=amq.default routing_key=testqueue payload="hello world" ``` -The output is similar to: +You should see an output similar to: ```output Message published ``` ### Consume message from queue -Retrieve messages from the queue to verify read functionality: +Retrieve messages from the queue to verify read functionality. ```console ./rabbitmqadmin get queue=testqueue ``` -The output is similar to: +You should see an output similar to: ```output +-------------+----------+---------------+-------------+---------------+------------------+------------+-------------+ | routing_key | exchange | message_count | payload | payload_bytes | payload_encoding | properties | redelivered | @@ -118,21 +110,29 @@ The output is similar to: ``` ### Verify queue state -Confirm that the queue is empty after consumption: +Confirm that the queue is empty after consumption. ```console ./rabbitmqadmin list queues name messages ``` -The output is similar to: +You should see an output similar to: ```output -+-----------+----------+ -| name | messages | -+-----------+----------+ -| testqueue | 1 | -+-----------+----------+ ++--------------+----------+ +| name | messages | ++--------------+----------+ +| jobs | 0 | +| order.events | 1 | +| testqueue | 1 | ``` -## What you've accomplished and what's next +### Baseline validation summary + +- RabbitMQ node is running and healthy +- The management plugin is enabled and accessible +- Queue creation is successful +- Message publishing works as expected +- Message consumption functions correctly +- CLI tools operate without error -You've successfully validated RabbitMQ on your Google Cloud SUSE Arm64 virtual machine. The node is running and healthy, the management plugin is enabled and accessible, and queue operations (creation, publishing, consumption) work correctly. Next, you'll explore practical use cases that demonstrate RabbitMQ's capabilities for event-driven architectures and notification systems. +This confirms a successful baseline validation of RabbitMQ on a GCP SUSE Arm64 virtual machine. 
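As an optional extra check, you can repeat the same publish and consume round trip from Python with the `pika` client (`pip install pika` after installing `python3-pip`, as shown in the use case sections that follow). The snippet below is a minimal sketch that assumes the broker runs on this VM and the default `guest` account is still present; it clears the test queue, publishes one message through the default exchange, and reads it back.

```python
import pika

# Connect to the local broker with the default guest account
# (assumption: the broker runs on this VM and guest has not been removed).
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

# Reuse the non-durable test queue created earlier in this baseline.
channel.queue_declare(queue="testqueue", durable=False)

# Remove any message left over from the earlier CLI test.
channel.queue_purge(queue="testqueue")

# Publish one message through the default exchange.
channel.basic_publish(exchange="", routing_key="testqueue", body="hello from pika")

# Fetch it back and acknowledge it automatically.
method, properties, body = channel.basic_get(queue="testqueue", auto_ack=True)
print("received:", body.decode() if body else None)

connection.close()
```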
diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_firewall_setup.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_firewall_setup.md new file mode 100644 index 0000000000..4efe1e666a --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_firewall_setup.md @@ -0,0 +1,36 @@ +--- +title: Create a Firewall Rule on GCP +weight: 6 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Overview + +In this section, you create a Firewall Rule within Google Cloud Console to expose TCP port 15672. + +{{% notice Note %}} +For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/). +{{% /notice %}} + +## Create a Firewall Rule in GCP + +To expose TCP port 15672, create a firewall rule. + +Navigate to the [Google Cloud Console](https://console.cloud.google.com/), go to **VPC Network > Firewall**, and select **Create firewall rule**. + +![Create a firewall rule](images/firewall-rule.png "Create a firewall rule") + +Next, create the firewall rule that exposes TCP port 15672. +Set the **Name** of the new rule to "allow-tcp-15672". Select your network that you intend to bind to your VM (default is "autoscaling-net" but your organization might have others). + +Set **Direction of traffic** to "Ingress". Set **Allow on match** to "Allow" and **Targets** to "Specified target tags". Enter "allow-tcp-15672" in the **Target tags** text field. Set **Source IPv4 ranges** to "0.0.0.0/0". + +![Create a firewall rule](images/network-rule.png "Creating the TCP/15672 firewall rule") + +Finally, select **Specified protocols and ports** under the **Protocols and ports** section. Select the **TCP** checkbox, enter "15672" in the **Ports** text field, and select **Create**. + +![Specifying the TCP port to expose](images/network-port.png "Specifying the TCP port to expose") + +The network firewall rule is now created and you can continue with the VM creation. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_installation.md similarity index 56% rename from content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/installation.md rename to content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_installation.md index 54d4afd6eb..6cc1836d35 100644 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/installation.md +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_installation.md @@ -1,14 +1,13 @@ --- -title: Install RabbitMQ -weight: 5 +title: Install RabbitMQ on GCP SUSE Arm64 VM +weight: 8 ### FIXED, DO NOT MODIFY layout: learningpathall --- ## Install RabbitMQ on GCP SUSE Arm64 VM - -In this section you'll install RabbitMQ on a Google Cloud Platform SUSE Linux Arm64 virtual machine using RPM packages for both Erlang and RabbitMQ Server. +This guide describes a **step-by-step installation of RabbitMQ** on a **Google Cloud Platform SUSE Linux Arm64 virtual machine**, using **RPM packages** for both **Erlang** and **RabbitMQ Server**. RabbitMQ needs Erlang to be installed before setting up the server. @@ -20,14 +19,14 @@ RabbitMQ needs Erlang to be installed before setting up the server. - Outbound internet access ### Refresh system repositories -Update the system's package list to get the latest available software from repositories. 
+This step updates the system’s package list so the operating system knows about the latest software available from its repositories. ```console sudo zypper refresh ``` ### Install required system utilities -Install the basic tools needed to download and manage packages. +You can install the basic tools needed to download and manage packages. ```console sudo zypper install -y curl wget gnupg tar socat logrotate @@ -42,19 +41,19 @@ sudo rpm -Uvh erlang-26.2.5-1.el8.aarch64.rpm ``` ### Verify Erlang installation -Confirm that Erlang is installed correctly: +Confirm that Erlang is installed correctly. ```console erl -eval 'io:format("~s~n", [erlang:system_info(system_version)]), halt().' -noshell ``` -The output is similar to: +You should see an output similar to: ```output Erlang/OTP 26 [erts-14.2.5] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [jit] ``` -### Download RabbitMQ server RPM +### Download RabbitMQ Server RPM Download the RabbitMQ Server RPM package. ```console @@ -64,8 +63,7 @@ sudo rpm -Uvh rabbitmq-server-4.2.0-1.el8.noarch.rpm {{% notice Note %}} RabbitMQ version 3.11.0 introduced significant performance enhancements for Arm-based architectures. This version needs Erlang 25.0 or later, which brings Just-In-Time (JIT) compilation and modern flame graph profiling tooling to both x86 and Arm64 CPUs. These features result in improved performance on Arm64 architectures. - -View the [release notes](https://github.com/rabbitmq/rabbitmq-server/blob/main/release-notes/3.11.0.md) for more information. +You can view [this release note](https://github.com/rabbitmq/rabbitmq-server/blob/main/release-notes/3.11.0.md) The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends RabbitMQ version 3.11.0, the minimum recommended on Arm platforms. {{% /notice %}} @@ -86,22 +84,6 @@ sudo systemctl status rabbitmq-server The service should be in an active (running) state. -```output -● rabbitmq-server.service - Open source RabbitMQ server - Loaded: loaded (/usr/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: disabled) - Active: active (running) since Fri 2026-01-09 14:50:52 UTC; 3s ago - Main PID: 3953 (beam.smp) - Tasks: 53 - CPU: 2.432s - CGroup: /system.slice/rabbitmq-server.service - ├─ 3953 /usr/lib64/erlang/erts-14.2.5/bin/beam.smp -W w -MBas ageffcbf -MHas ageffcbf -MBlmbcs 512 -MHlmbcs 512 -MMmcs 30 -pc unicode -P 1048576 -t 5000000 -stbt db -zdbbl > - ├─ 3967 erl_child_setup 32768 - ├─ 4014 /usr/lib64/erlang/erts-14.2.5/bin/inet_gethost 4 - ├─ 4015 /usr/lib64/erlang/erts-14.2.5/bin/inet_gethost 4 - ├─ 4024 /usr/lib64/erlang/erts-14.2.5/bin/epmd -daemon - └─ 4077 /bin/sh -s rabbit_disk_monitor -``` - ### Enable RabbitMQ management plugin Enable the RabbitMQ management plugin to access the web-based dashboard. @@ -117,13 +99,13 @@ sudo systemctl restart rabbitmq-server ``` ### Verify RabbitMQ version -Confirm the installed RabbitMQ version: +Confirm the installed RabbitMQ version. ```console sudo rabbitmqctl version ``` -The output is similar to: +You should see an output similar to: ```output 4.2.0 @@ -140,10 +122,6 @@ sudo rabbitmqctl set_user_tags admin administrator sudo rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" ``` -{{% notice Warning %}} -Replace `StrongPassword123` with a strong, unique password. For production environments, use environment variables or a secrets management system instead of hardcoding passwords. -{{% /notice %}} - **Log in to Management UI** Now, test it from outside the VM. 
Open a web browser on your local machine (Chrome, Firefox, Edge, etc.) and enter the following URL and credentials in the address bar: @@ -156,19 +134,6 @@ Replace `` with the public IP of your GCP VM. If everything is configured correctly, you see a RabbitMQ login page in your browser that looks like this: -![Screenshot showing the RabbitMQ management UI login interface with username and password input fields and a login button#center](images/rabbitmq.png "RabbitMQ Login page") - -## What you've accomplished and what's next - -You've successfully installed RabbitMQ on your Google Cloud Arm64 VM with: -- Erlang and RabbitMQ Server installed via RPM packages -- RabbitMQ Management UI enabled and accessible -- Administrative user configured for UI access - -Next, you'll validate your RabbitMQ installation and verify it's functioning correctly. +![RabbitMQ page alt-text#center](images/rabbitmq.png "Figure 1: RabbitMQ Login page") This confirms that your RabbitMQ management dashboard is operational. - -## What you've accomplished and what's next - -You've successfully installed RabbitMQ on a Google Cloud SUSE Arm64 virtual machine, enabled the management plugin, created an admin user, and verified access to the web-based management interface. Next, you'll validate the RabbitMQ installation with baseline messaging tests to ensure all components are functioning correctly. \ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_instance.md similarity index 53% rename from content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/instance.md rename to content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_instance.md index 7063cb18e5..98e144074e 100644 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/instance.md +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_instance.md @@ -1,6 +1,6 @@ --- title: Create a Google Axion C4A Arm virtual machine on GCP -weight: 4 +weight: 7 ### FIXED, DO NOT MODIFY layout: learningpathall @@ -8,34 +8,33 @@ layout: learningpathall ## Overview -In this section, you provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console. +In this section, you provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console. +We will then use this GCP VM to execute a few RabbitMQ use cases. {{% notice Note %}} For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](/learning-paths/servers-and-cloud-computing/csp/google/). {{% /notice %}} -## Provision a Google Axion C4A Arm VM in Google Cloud console +## Provision a Google Axion C4A Arm VM in Google Cloud Console To create a virtual machine based on the C4A instance type, navigate to the [Google Cloud Console](https://console.cloud.google.com/) and go to **Compute Engine > VM Instances**. Select **Create Instance**. Under **Machine configuration**, populate fields such as **Instance name**, **Region**, and **Zone**. Set **Series** to `C4A` and select `c4a-standard-4` for machine type. 
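If you prefer to script the provisioning instead of clicking through the console, a roughly equivalent `gcloud` command is shown below. Treat it as a sketch: the instance name, zone, image family, and image project are illustrative values, so confirm the current SLES Arm64 image name (for example with `gcloud compute images list --project suse-cloud`) before running it.

```console
# Illustrative values; C4A instances boot from Hyperdisk volumes.
gcloud compute instances create rabbitmq-c4a \
  --zone=us-central1-a \
  --machine-type=c4a-standard-4 \
  --image-family=sles-15-arm64 \
  --image-project=suse-cloud \
  --boot-disk-type=hyperdisk-balanced \
  --tags=allow-tcp-15672
```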
- ![Screenshot showing the machine configuration interface in Google Cloud Console with C4A series selected and c4a-standard-4 machine type highlighted, displaying 4 vCPUs and 16 GB memory specifications alt-txt#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") + ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**. Select "Pay As You Go" for the license type and press **Select**. Under **Networking**, enable **Allow HTTP traffic** and add "allow-tcp-15672" as a network tag in the **Network tags** text field. -![Screenshot showing the Networking section of the VM configuration with allow-tcp-15672 entered in the Network tags field alt-txt#center](images/network-config.png "Adding the TCP/15672 firewall rule to the VM") +![Adding the TCP/15672 firewall rule to the VM](images/network-config.png "Adding the TCP/15672 firewall rule to the VM") Select **Create** to launch the instance. Once created, you see an **SSH** option and the public IP address for your VM in the list of VM instances. Save the public IP address as you need it in the next step. Select the **SSH** option to launch an SSH shell into your VM instance. -![Screenshot showing the VM instances list with columns for name, zone, machine type, internal IP, external IP, and an SSH button for each instance alt-txt#center](images/gcp-pubip-ssh.png "Invoke an SSH session into your running VM instance") +![Invoke an SSH session via your browser alt-text#center](images/gcp-pubip-ssh.png "Invoke an SSH session into your running VM instance") A window opens from your browser and you see a shell into your VM instance. -![Screenshot showing a browser-based terminal window with a command prompt connected to the Google Cloud VM via SSH alt-txt#center](images/gcp-shell.png "Terminal shell in your VM instance") +![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance") -## What you've accomplished and what's next - -You've successfully provisioned a Google Axion C4A Arm virtual machine on Google Cloud Platform with the appropriate firewall rules and network configuration. The VM is running SUSE Linux Enterprise Server and is accessible via SSH. Next, you'll install and configure RabbitMQ on this instance. \ No newline at end of file +Next, install RabbitMQ. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case1.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case1.md new file mode 100644 index 0000000000..cd59623083 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case1.md @@ -0,0 +1,212 @@ +--- +title: RabbitMQ Use Case 1 – Event Processing with Python Workers +weight: 10 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## RabbitMQ Use Case – Event Processing with Python Workers +This use case demonstrates how RabbitMQ enables event-driven architectures using topic exchanges, durable queues, and Python-based worker consumers. It focuses on reliable, asynchronous event processing, which is a common production pattern. 
+ +- Topic exchange–based routing +- Durable queues and bindings +- A Python-based worker using the `pika` client +- Message publishing and consumption validation + +The use case models an **event-driven system**, where order-related events are published and processed asynchronously by workers. + +### Use case overview + +**Scenario:** +An application publishes order-related events (`order.created`, `order.updated`, etc.) to RabbitMQ. A background worker consumes these events from a queue and processes them independently. + +The goal of this use case is to showcase how order-related events can be published to RabbitMQ and processed asynchronously by background workers without tightly coupling producers and consumers. + +**Typical events include:** + +- order.created +- order.updated +- order.completed + +This architecture improves scalability, fault tolerance, and system decoupling. + +### Prerequisites + +- RabbitMQ installed and running +- RabbitMQ management plugin enabled +- Python 3 installed +- Network access to RabbitMQ broker + +### Declare a topic exchange +Create a durable topic exchange to route events based on routing keys. + +```console +./rabbitmqadmin declare exchange name=events type=topic durable=true +``` + +- Creates a durable topic exchange named events. +- Routes messages using wildcard-based routing keys (for example, order.*). +- Ensures the exchange survives broker restarts. + +### Declare a durable queue +Create a durable queue to store order-related events. + +```console +./rabbitmqadmin declare queue name=order.events durable=true +``` + +- Create a durable queue for order events. +- Guarantee that messages are persisted until consumed. +- Ensure reliability in case of worker or broker restarts. + +You should see an output similar to: +```output +queue declared +``` + +### Bind queue to exchange +Bind the queue to the exchange using a topic routing pattern. + +```console +./rabbitmqadmin declare binding source=events destination=order.events routing_key="order.*" +``` + +- Connects the queue to the exchange. +- Ensures all order-related routing keys match the queue. +- Enables flexible event expansion without changing consumers. + +You should see an output similar to: +```output +binding declared +``` + +This binding ensures the queue receives all messages with routing keys such as: +- order.created +- order.updated +- order.completed + +### Publish an event message +Publish a sample order event to the exchange. + +```console +./rabbitmqadmin publish exchange=events routing_key="order.created" payload='{"order_id":123}' +``` + +- Publishes an event to the events exchange. +- Uses a routing key that matches the binding filter. +- Payload is structured JSON to simulate real event data. + +You should see an output similar to: +```output +Message published +``` + +### Install Python dependencies +Install pip and the pika RabbitMQ client library. + +```console +sudo zypper install -y python3-pip +pip install pika +``` + +### Create the worker script +Create a Python worker file to process messages from a queue. + +A **Python worker** was created to process messages from a RabbitMQ queue (jobs) using the pika library. The queue is durable, ensuring message persistence. The worker implements fair dispatch (prefetch_count=1) and manual acknowledgments to reliably process each job without loss. Messages were successfully published to the queue using rabbitmqadmin, and the worker consumed them as expected. 
+ +Using your favorite editor (the example uses "edit") create your "worker.py" file: + +```console +edit worker.py +``` + +**worker.py:** + +```python +import pika +import time +import json + +# RabbitMQ broker address +RABBITMQ_IP = "localhost" + +connection = pika.BlockingConnection( + pika.ConnectionParameters(host=RABBITMQ_IP) +) +channel = connection.channel() + +# Ensure queue exists +channel.queue_declare(queue='jobs', durable=True) + +print("Worker started. Waiting for jobs...") + +def process_job(ch, method, properties, body): + job = json.loads(body.decode()) + print(f"[Worker] Received job: {job}") + + # Simulate processing + time.sleep(2) + + # Acknowledge message + ch.basic_ack(delivery_tag=method.delivery_tag) + +# Fair dispatch configuration +channel.basic_qos(prefetch_count=1) + +channel.basic_consume( + queue='jobs', + on_message_callback=process_job +) + +channel.start_consuming() +``` + +### Start the worker +Run the worker process. + +```console +python3 worker.py +``` + +You should see an output similar to: +```output +The worker started. Waiting for jobs... +``` + +### Publish job messages +From another SSH terminal, publish a job message. + +```console +./rabbitmqadmin publish routing_key=jobs payload='{"job":"test1"}' +``` + +**Worker output:** + +```output +Worker started. Waiting for jobs... +[Worker] Received job: {'job': 'test1'} +``` + +Publish another job: + +```console +./rabbitmqadmin publish routing_key=jobs payload='{"job":"hello1"}' +``` + +**Worker output:** + +```output +Worker started. Waiting for jobs... +[Worker] Received job: {'job': 'hello1'} +``` +Press "CTRL-C" to exit the worker application. + +## Use case validation + +- Event routing via topic exchanges functions correctly +- Durable queues and acknowledgments ensure reliable message processing +- Worker-based consumption supports safe and controlled job execution + +This use case demonstrates how RabbitMQ enables reliable, decoupled, and scalable event processing using topic-based routing and Python workers. +The setup provides a strong foundation for production-grade, message-driven architectures on GCP SUSE Arm64 virtual machines. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case2.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case2.md new file mode 100644 index 0000000000..e3067c4779 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/gcp_use_case2.md @@ -0,0 +1,288 @@ +--- +title: RabbitMQ use case 2 - WhatsApp Notification +weight: 11 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + + +## WhatsApp Notification Use Case using RabbitMQ +This document demonstrates a **real-world asynchronous messaging use case** where RabbitMQ is used to process WhatsApp notifications reliably using a worker-based architecture. + +### Use case overview + +In many production systems, sending WhatsApp notifications must be: +- Reliable +- Asynchronous +- Independent of the main application flow + +RabbitMQ is used as a **message broker** to decouple message production from message consumption. + +### Architecture flow + +1. Application publishes a message to RabbitMQ +2. RabbitMQ routes the message to a queue +3. A Python worker consumes the message +4. 
The worker simulates sending a WhatsApp notification + +### Prerequisites + +- GCP SUSE Arm64 virtual machine +- RabbitMQ is installed and running +- RabbitMQ Management Plugin enabled +- Python 3.8+ +- `pika` Python client library installed + +### Install Python dependencies +Installs Python and the RabbitMQ Python client needed to build a consumer. + +```console +sudo zypper install -y python3 python3-pip +pip3 install pika +``` + +### RabbitMQ topology +This use case uses a direct exchange topology for exact-match routing. + +**Exchanges** +- **notifications (direct):** Routes WhatsApp notification messages based on an exact routing key match. + +**Queue** +- **whatsapp.notifications (durable):** Stores WhatsApp messages persistently until they are consumed by a worker. + +**Binding** +- Exchange: **notifications** – Connects the exchange to the WhatsApp notification queue. +- Routing key: **whatsapp** – Ensures only WhatsApp-related messages are routed. +- Queue: **whatsapp.notifications**– Final destination where messages are delivered for processing. + +### Declare RabbitMQ resources +Creates the required exchange, queue, and binding for WhatsApp notifications. + +- `Declare exchange`: Creates a durable direct exchange named notifications to route messages using exact routing keys. +- `Declare queue`: Creates a durable queue whatsapp.notifications to persist WhatsApp notification messages until consumed. +- `Declare binding`: Links the notifications exchange to the whatsapp.notifications queue using the whatsapp routing key. + +```console +./rabbitmqadmin declare exchange \ + name=notifications \ + type=direct \ + durable=true + +./rabbitmqadmin declare queue \ + name=whatsapp.notifications \ + durable=true + +./rabbitmqadmin declare binding \ + source=notifications \ + destination=whatsapp.notifications \ + routing_key=whatsapp +``` +Each command confirms successful creation with messages like **exchange declared, queue declared, and binding declared**. + +**Validate the setup:** + +Validates that RabbitMQ resources exist and are correctly connected. + +```console +./rabbitmqadmin list queues name messages +./rabbitmqadmin list exchanges name type +./rabbitmqadmin list bindings +``` + +- `list queues` displays all queues along with the number of messages currently stored in each queue. +- `list exchanges` lists all exchanges and their types, allowing verification of correct exchange configuration. +- `list bindings` shows how exchanges, queues, and routing keys are connected. + +**Output shows:** + +- notifications exchange of type direct +- whatsapp.notifications durable queue +- Correct routing key binding (whatsapp) +- Zero or more queued messages + +Confirms topology correctness before consuming messages. 
+ +```output +> ./rabbitmqadmin list queues name messages ++------------------------+----------+ +| name | messages | ++------------------------+----------+ +| jobs | 0 | +| order.events | 1 | +| testqueue | 1 | +| whatsapp.notifications | 0 | ++------------------------+----------+ + +> ./rabbitmqadmin list exchanges name type ++--------------------+---------+ +| name | type | ++--------------------+---------+ +| | direct | +| amq.direct | direct | +| amq.fanout | fanout | +| amq.headers | headers | +| amq.match | headers | +| amq.rabbitmq.trace | topic | +| amq.topic | topic | +| events | topic | +| notifications | direct | ++--------------------+---------+ + +> ./rabbitmqadmin list bindings ++---------------+------------------------+------------------------+ +| source | destination | routing_key | ++---------------+------------------------+------------------------+ +| | jobs | jobs | +| | order.events | order.events | +| | testqueue | testqueue | +| | whatsapp.notifications | whatsapp.notifications | +| events | order.events | order.* | +| notifications | whatsapp.notifications | whatsapp | ++---------------+------------------------+------------------------+ +``` + +### WhatsApp worker implementation +The worker attaches as a **blocking consumer** to the `whatsapp.notifications` queue and processes incoming messages. + +Create a `whatsapp_worker.py` file with the content below: + +This Python script implements a **RabbitMQ consumer (worker)** that processes WhatsApp notification messages from a queue in a reliable and controlled manner. + +```python +import pika +import json +import time + +RABBITMQ_HOST = "localhost" +RABBITMQ_VHOST = "/" +RABBITMQ_USER = "guest" +RABBITMQ_PASS = "guest" +QUEUE_NAME = "whatsapp.notifications" + +credentials = pika.PlainCredentials(RABBITMQ_USER, RABBITMQ_PASS) + +parameters = pika.ConnectionParameters( + host=RABBITMQ_HOST, + virtual_host=RABBITMQ_VHOST, + credentials=credentials, + heartbeat=60 +) + +print("[DEBUG] Connecting to RabbitMQ...") +connection = pika.BlockingConnection(parameters) +channel = connection.channel() + +print("[DEBUG] Declaring queue...") +channel.queue_declare(queue=QUEUE_NAME, durable=True) + +print("[DEBUG] Setting QoS...") +channel.basic_qos(prefetch_count=1) + +print("WhatsApp Worker started. Waiting for messages...") + +def send_whatsapp(ch, method, properties, body): + data = json.loads(body.decode()) + print(f"[Worker] Sending WhatsApp message to {data['phone']}") + print(f"[Worker] Message content: {data['message']}") + + # Simulate external WhatsApp API call + time.sleep(1) + + print("[Worker] Message sent successfully") + ch.basic_ack(delivery_tag=method.delivery_tag) + +channel.basic_consume( + queue=QUEUE_NAME, + on_message_callback=send_whatsapp, + auto_ack=False +) + +print("[DEBUG] Starting consumer loop (this should block)...") +channel.start_consuming() +``` + +### Start the worker +Run the worker in a dedicated terminal session: + +```console +python3 whatsapp_worker.py +``` + +The worker is running correctly and waiting for messages without exiting. + +**output:** + +```output +[DEBUG] Connecting to RabbitMQ... +[DEBUG] Declaring queue... +[DEBUG] Setting QoS... +WhatsApp Worker started. Waiting for messages... +[DEBUG] Starting consumer loop (this should block)... +``` + +The process must block without returning to the shell prompt. + +### Publish a test message +From another SSH terminal: Publishes a WhatsApp notification message to RabbitMQ. 
+ +```console +./rabbitmqadmin publish \ + exchange=notifications \ + routing_key=whatsapp \ + payload='{"phone":"+911234567890","message":"Hello from RabbitMQ"}' +``` + +You should see the following output from whatsapp_worker.py that is running in the first SSH terminal: + +```output +[Worker] Sending WhatsApp message to +911234567890 +[Worker] Message content: Hello from RabbitMQ +[Worker] Message sent successfully +``` + +### Message consumption validation +The worker terminal displays logs similar to: + +```output +[DEBUG] Connecting to RabbitMQ... +[DEBUG] Declaring queue... +[DEBUG] Setting QoS... +WhatsApp Worker started. Waiting for messages... +[DEBUG] Starting consumer loop (this should block)... +[Worker] Sending WhatsApp message to +911234567890 +[Worker] Message content: Hello from RabbitMQ +[Worker] Message sent successfully +``` +**What this confirms:** + +- Message routing works correctly +- Queue consumption is successful +- Manual acknowledgments are applied + +This validates the end-to-end message flow. + +### Verify queue state + +```console +./rabbitmqadmin list queues name messages consumers +``` + +Expected output is similar to: + +```output ++------------------------+----------+-----------+ +| name | messages | consumers | ++------------------------+----------+-----------+ +| jobs | 0 | 0 | +| order.events | 2 | 0 | +| testqueue | 1 | 0 | +| whatsapp.notifications | 0 | 1 | ++------------------------+----------+-----------+ +``` + +This confirms that: + +- Messages were consumed successfully +- One active consumer is connected +- No backlog remains in the queue diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/final-vm.png b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/final-vm.png new file mode 100644 index 0000000000..5207abfb41 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/final-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance.png b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance.png new file mode 100644 index 0000000000..285cd764a5 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance1.png b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance1.png new file mode 100644 index 0000000000..b9d22c352d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance1.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance4.png b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance4.png new file mode 100644 index 0000000000..2a0ff1e3b0 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/instance4.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/ubuntu-pro.png b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/ubuntu-pro.png new file mode 100644 index 0000000000..d54bd75ca6 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/images/ubuntu-pro.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case1.md 
b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case1.md deleted file mode 100644 index 922eea6e2e..0000000000 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case1.md +++ /dev/null @@ -1,243 +0,0 @@ ---- -title: "RabbitMQ use case 1: event processing with Python workers" -weight: 7 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- - -## Event processing with topic-based routing - -In this use case, you implement an event-driven workflow using RabbitMQ with a topic exchange, a durable queue, and a Python worker consumer. You publish order events (for example, `order.created`, `order.updated`) and process them asynchronously. - -This pattern is useful when you need flexible, wildcard-based routing (such as `order.*`) where multiple event types route to the same queue and producers and consumers evolve independently. - -### Use case overview - -**Scenario:** -An application publishes order-related events (`order.created`, `order.updated`, etc.) to RabbitMQ. A background worker consumes these events from a queue and processes them independently. - -Order-related events are published to RabbitMQ and processed asynchronously by background workers without tightly coupling producers and consumers. - -**Typical events include:** - -- order.created -- order.updated -- order.completed - -This architecture improves scalability, fault tolerance, and system decoupling. - -**When to use this pattern:** - -Use topic exchanges when you need wildcard routing where `order.*` matches `order.created`, `order.updated`, and `order.completed`. This approach allows multiple related event types to flow to the same consumer and provides flexibility to add new event types without reconfiguring consumers. - -**Comparison:** - -Use Case 1 (Topic Exchange) provides flexible routing with wildcards, ideal for event streams. Use Case 2 (Direct Exchange) provides exact-match routing, ideal for targeted notifications. - -### Declare a topic exchange - -Create a durable topic exchange to route events based on routing keys: - -```console -./rabbitmqadmin declare exchange name=events type=topic durable=true -``` - -This creates a durable topic exchange named `events` that routes messages using wildcard-based routing keys (for example, `order.*`) and survives broker restarts. - -The output is similar to: -```output -exchange declared -``` - -### Declare a durable queue - -Create a durable queue to store order-related events. - -```console -./rabbitmqadmin declare queue name=order.events durable=true -``` - -This creates a durable queue for order events that guarantees messages are persisted until consumed, ensuring reliability in case of worker or broker restarts. - -The output is similar to: -```output -queue declared -``` - -### Bind queue to exchange - -Bind the queue to the exchange using a topic routing pattern: - -```console -./rabbitmqadmin declare binding source=events destination=order.events routing_key="order.*" -``` - -This connects the queue to the exchange so all order-related routing keys match the queue. The wildcard pattern `order.*` matches routing keys such as `order.created`, `order.updated`, and `order.completed`, enabling flexible event expansion without changing consumers. 
- -The output is similar to: -```output -binding declared -``` - -### Validate the setup - -Confirm that the exchange, queue, and binding exist and are correctly connected: - -```console -./rabbitmqadmin list exchanges name type -./rabbitmqadmin list queues name messages -./rabbitmqadmin list bindings -``` - -These commands verify that the `events` exchange exists (type: `topic`), the `order.events` queue exists with zero messages initially, and a binding connects `events` to `order.events` with the `order.*` routing pattern. - -The output is similar to: - -```output -+--------------------+---------+ -| name | type | -+--------------------+---------+ -| | direct | -| amq.direct | direct | -| amq.fanout | fanout | -| amq.headers | headers | -| amq.match | headers | -| amq.rabbitmq.trace | topic | -| amq.topic | topic | -| events | topic | -+--------------------+---------+ -+--------------+----------+ -| name | messages | -+--------------+----------+ -| order.events | 0 | -| testqueue | 1 | -+--------------+----------+ -+--------+--------------+--------------+ -| source | destination | routing_key | -+--------+--------------+--------------+ -| | order.events | order.events | -| | testqueue | testqueue | -| events | order.events | order.* | -+--------+--------------+--------------+ -``` - -### Install Python dependencies - -To create the worker, you need Python 3 with the `pika` library, which provides the RabbitMQ client: - -```console -sudo zypper install -y python3-pip -pip3 install pika -``` - -This installs `pip` (Python package manager) and `pika` (RabbitMQ client library for Python). - -### Create the worker script - -The Python worker consumes order-related events from the `order.events` queue. This worker uses durable queues for message persistence, `prefetch_count=1` for fair dispatch, and manual acknowledgments for reliable processing. - -Using a text editor, create a `worker.py` file with the content below: - -```python -import pika -import time -import json - -RABBITMQ_HOST = "localhost" -QUEUE_NAME = "order.events" - -print("[DEBUG] Connecting to RabbitMQ...") -connection = pika.BlockingConnection( - pika.ConnectionParameters(host=RABBITMQ_HOST) -) -channel = connection.channel() - -print("[DEBUG] Declaring queue...") -channel.queue_declare(queue=QUEUE_NAME, durable=True) - -print("[DEBUG] Setting QoS...") -channel.basic_qos(prefetch_count=1) - -print("Worker started. Waiting for events...") - -def process_event(ch, method, properties, body): - event = json.loads(body.decode()) - print(f"[Worker] Received event: {event}") - print(f"[Worker] Processing event type: {event.get('event', 'unknown')}") - - # Simulate processing time - time.sleep(2) - - print("[Worker] Event processed successfully") - ch.basic_ack(delivery_tag=method.delivery_tag) - -channel.basic_consume( - queue=QUEUE_NAME, - on_message_callback=process_event -) - -print("[DEBUG] Starting consumer loop...") -channel.start_consuming() -``` - -### Start the worker - -Now that you've created the worker script, run it to start consuming messages: - -```console -python3 worker.py -``` - -The worker connects to RabbitMQ and begins listening for events. The output is similar to: -```output -[DEBUG] Connecting to RabbitMQ... -[DEBUG] Declaring queue... -[DEBUG] Setting QoS... -Worker started. Waiting for events... -[DEBUG] Starting consumer loop... 
-``` - -### Publish event messages - -With the worker running, open another SSH terminal and publish an order event: - -```console -./rabbitmqadmin publish exchange=events routing_key="order.created" payload='{"order_id":123,"event":"order.created"}' -``` - -The message routes through the `events` exchange to the `order.events` queue, where the worker consumes it. The worker output shows: - -```output -[Worker] Received event: {'order_id': 123, 'event': 'order.created'} -[Worker] Processing event type: order.created -[Worker] Event processed successfully -``` - -Publish a second event to test the wildcard routing: - -```console -./rabbitmqadmin publish exchange=events routing_key="order.updated" payload='{"order_id":123,"event":"order.updated"}' -``` - -The worker processes this event using the same logic. The output shows: - -```output -[Worker] Received event: {'order_id': 123, 'event': 'order.updated'} -[Worker] Processing event type: order.updated -[Worker] Event processed successfully -``` - -The wildcard binding (`order.*`) allows the worker to process any event with a routing key matching this pattern. You can publish additional events such as `order.completed` or `order.cancelled` and the worker processes them all. - -When you're done testing, press Ctrl+C in the worker terminal to exit the application. - -## What you've accomplished and what's next - -You've implemented an event-driven system using RabbitMQ with topic exchange routing, durable queues, manual acknowledgments, and fair dispatch. - -The Python worker processes order events asynchronously, and the wildcard routing pattern (`order.*`) allows multiple related event types to flow to the same consumer. - -This pattern works well for event streams where you want flexibility to add new event types without reconfiguring consumers. - -Next, you implement a WhatsApp notification pipeline using a direct exchange with exact-match routing, better suited for targeted notifications. diff --git a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case2.md b/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case2.md deleted file mode 100644 index d19c979490..0000000000 --- a/content/learning-paths/servers-and-cloud-computing/rabbitmq-gcp/use-case2.md +++ /dev/null @@ -1,282 +0,0 @@ ---- -title: "RabbitMQ use case 2: WhatsApp notification" -weight: 8 - -### FIXED, DO NOT MODIFY -layout: learningpathall ---- -## WhatsApp notification with direct exchange routing - -In this use case, you implement an asynchronous notification workflow where RabbitMQ routes WhatsApp notification messages using a direct exchange with exact-match routing. A Python worker consumes and processes these messages reliably. - -### Use case overview - -In many production systems, sending WhatsApp notifications must be reliable, asynchronous, and independent of the main application flow. RabbitMQ acts as a message broker to decouple message production from consumption. - -**When to use this pattern:** - -Use direct exchanges when you need exact-match routing where `whatsapp` routes only to the WhatsApp queue. This approach provides simple, predictable routing without wildcards, and each message type routes to a specific, dedicated queue. - -**Comparison:** - -Use Case 1 (Topic Exchange) provides flexible routing with wildcards, ideal for event streams. Use Case 2 (Direct Exchange) provides exact-match routing, ideal for targeted notifications. 
- -### Architecture flow - -The application publishes a WhatsApp notification message to the `notifications` exchange. RabbitMQ routes the message to the `whatsapp.notifications` queue using the exact-match routing key `whatsapp`. The Python worker then consumes the message from the queue and simulates sending the WhatsApp notification. In production, this would call an external WhatsApp API. - -### RabbitMQ topology - -A direct exchange topology is used for exact-match routing. The `notifications` exchange (type: `direct`) routes notification messages based on exact routing key matches. The `whatsapp.notifications` queue is durable, which means it persists messages across broker restarts. The binding connects the exchange to the queue using the `whatsapp` routing key, ensuring only messages published with this exact key are routed to the queue. - -### Declare RabbitMQ resources - -Create the required exchange, queue, and binding for WhatsApp notifications. - -**Declare the exchange:** - -```console -./rabbitmqadmin declare exchange \ - name=notifications \ - type=direct \ - durable=true -``` - -This creates a durable direct exchange named `notifications` that routes messages using exact routing keys. - -The output is similar to: -```output -exchange declared -``` - -**Declare the queue:** - -```console -./rabbitmqadmin declare queue \ - name=whatsapp.notifications \ - durable=true -``` - -This creates a durable queue to persist WhatsApp notification messages until consumed. - -The output is similar to: -```output -queue declared -``` - -**Declare the binding:** - -```console -./rabbitmqadmin declare binding \ - source=notifications \ - destination=whatsapp.notifications \ - routing_key=whatsapp -``` - -This links the exchange to the queue using the `whatsapp` routing key. - -Expected output: -```output -binding declared -``` - -### Validate the setup - -Validate that RabbitMQ resources exist and are correctly connected: - -```console -./rabbitmqadmin list exchanges name type -./rabbitmqadmin list queues name messages -./rabbitmqadmin list bindings source destination routing_key -``` - -These commands verify that the `notifications` exchange exists (type: `direct`), the `whatsapp.notifications` queue exists with zero messages, and the binding connects the exchange to the queue with routing key `whatsapp`. - -The output is similar to: - -```output -+---------------+--------+ -| name | type | -+---------------+--------+ -| notifications | direct | -+---------------+--------+ - -+------------------------+----------+ -| name | messages | -+------------------------+----------+ -| whatsapp.notifications | 0 | -+------------------------+----------+ - -+---------------+------------------------+-------------+ -| source | destination | routing_key | -+---------------+------------------------+-------------+ -| notifications | whatsapp.notifications | whatsapp | -+---------------+------------------------+-------------+ -``` - -### Install Python dependencies - -If you haven't already installed Python dependencies in Use Case 1, install them now: - -```console -sudo zypper install -y python3-pip -pip3 install pika -``` - -This installs `pip` (Python package manager) and `pika` (RabbitMQ client library for Python). - -### WhatsApp worker implementation - -The worker attaches as a blocking consumer to the `whatsapp.notifications` queue and processes incoming messages. 
This worker uses durable queues for message persistence, `prefetch_count=1` for fair dispatch, and manual acknowledgments for reliable processing. - -Using a text editor, create a `whatsapp_worker.py` file with the content below: - -```python -import pika -import json -import time - -RABBITMQ_HOST = "localhost" -QUEUE_NAME = "whatsapp.notifications" - -print("[DEBUG] Connecting to RabbitMQ...") -connection = pika.BlockingConnection( - pika.ConnectionParameters(host=RABBITMQ_HOST) -) -channel = connection.channel() - -print("[DEBUG] Declaring queue...") -channel.queue_declare(queue=QUEUE_NAME, durable=True) - -print("[DEBUG] Setting QoS...") -channel.basic_qos(prefetch_count=1) - -print("WhatsApp Worker started. Waiting for messages...") - -def send_whatsapp(ch, method, properties, body): - data = json.loads(body.decode()) - phone = data.get('phone', 'unknown') - message = data.get('message', '') - - print(f"[Worker] Processing WhatsApp notification") - print(f"[Worker] Recipient: {phone}") - print(f"[Worker] Message: {message}") - - # Simulate external WhatsApp API call - time.sleep(1) - - print("[Worker] WhatsApp notification sent successfully") - ch.basic_ack(delivery_tag=method.delivery_tag) - -channel.basic_consume( - queue=QUEUE_NAME, - on_message_callback=send_whatsapp, - auto_ack=False -) - -print("[DEBUG] Starting consumer loop (this should block)...") -channel.start_consuming() -``` - -### Start the worker - -Now that you've created the worker script, run it in a dedicated terminal session: - -```console -python3 whatsapp_worker.py -``` - -The worker connects to RabbitMQ and begins listening for WhatsApp notifications. The output is similar to: - -```output -[DEBUG] Connecting to RabbitMQ... -[DEBUG] Declaring queue... -[DEBUG] Setting QoS... -WhatsApp Worker started. Waiting for messages... -[DEBUG] Starting consumer loop (this should block)... -``` - -The worker blocks at this point, waiting for messages without returning to the shell prompt. - -### Publish a test message - -With the worker running, open another SSH terminal and publish a WhatsApp notification message: - -```console -./rabbitmqadmin publish \ - exchange=notifications \ - routing_key=whatsapp \ - payload='{"phone":"+911234567890","message":"Hello from RabbitMQ"}' -``` - -The message routes through the `notifications` exchange to the `whatsapp.notifications` queue, where the worker consumes it. In the first SSH terminal where the worker is running, you should see: - -```output -[Worker] Processing WhatsApp notification -[Worker] Recipient: +911234567890 -[Worker] Message: Hello from RabbitMQ -[Worker] WhatsApp notification sent successfully -``` - -### Message consumption validation - -The complete worker output shows the full message flow: - -```output -[DEBUG] Connecting to RabbitMQ... -[DEBUG] Declaring queue... -[DEBUG] Setting QoS... -WhatsApp Worker started. Waiting for messages... -[DEBUG] Starting consumer loop (this should block)... -[Worker] Processing WhatsApp notification -[Worker] Recipient: +911234567890 -[Worker] Message: Hello from RabbitMQ -[Worker] WhatsApp notification sent successfully -``` - -This output confirms that message routing works correctly through the direct exchange, the worker successfully consumes from the queue, manual acknowledgments are applied, and the end-to-end message flow is validated. 
- -### Verify queue state - -To confirm successful message consumption, check the queue status: - -```console -./rabbitmqadmin list queues name messages consumers -``` - -The output is similar to: - -```output -+------------------------+----------+-----------+ -| name | messages | consumers | -+------------------------+----------+-----------+ -| whatsapp.notifications | 0 | 1 | -+------------------------+----------+-----------+ -``` - -The output shows zero messages remaining (all were consumed), one active consumer connected, and no message backlog in the queue. - -When you're done testing, press Ctrl+C in the worker terminal to exit the application. - -## What you've accomplished - -You've implemented an asynchronous notification system using RabbitMQ with direct exchange routing, durable queues, manual acknowledgments, and fair dispatch. The Python worker processes WhatsApp notifications asynchronously, and the exact-match routing ensures messages go only to the intended queue. - -This pattern works well for targeted notifications (email, SMS, WhatsApp, push notifications) where routing needs to be simple and predictable. Each notification type routes to a dedicated queue using an exact-match routing key, providing reliable, guaranteed delivery. - -The key difference from Use Case 1 is the routing approach: Use Case 1 uses topic exchange with wildcard routing (`order.*`) for flexible event streams, while Use Case 2 uses direct exchange with exact routing (`whatsapp`) for targeted notifications. - -## Delete RabbitMQ resources - -When you're finished, stop the RabbitMQ workers and delete the resources. - -```console -./rabbitmqadmin delete queue name=order.events -./rabbitmqadmin delete queue name=whatsapp.notifications -./rabbitmqadmin delete queue name=testqueue -./rabbitmqadmin delete exchange name=events -./rabbitmqadmin delete exchange name=notifications -``` - -When you are done, be sure to delete the Google Cloud VM and the firewall rule. -