This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites.
- Overview
- Prerequisites
- Step 1: Install the Skupper command-line tool
- Step 2: Access your Kubernetes clusters
- Step 3: Install Skupper on your Kubernetes clusters
- Step 4: Create your Kubernetes namespaces
- Step 5: Create your sites
- Step 6: Link your sites
- Step 7: Deploy the Metrics Generators
- Step 8: Deploy the Prometheus Server on the other public cluster
- Step 9: Expose the Metrics Deployments to the Virtual Application Network
- Step 10: Label services as Prometheus dedicated collection points
- Step 11: Access the Prometheus Web UI
- Step 12: Verify Metrics
- Cleaning up
- Next steps
- About this example
This tutorial demonstrates how to deploy metric generators across multiple Kubernetes clusters located in different public and private clouds, and how to deploy the Prometheus monitoring system to gather metrics across those clusters, dynamically discovering the endpoints to be scraped as soon as services are exposed through the Skupper Virtual Application Network.
In this tutorial, you will create a Virtual Application Network that enables communication across the public and private clusters. You will then deploy the metric generators and the Prometheus server to individual clusters, and access the Prometheus web UI to browse targets, query, and graph the collected metrics.
- Access to at least one Kubernetes cluster, from any provider you choose.
- The `kubectl` command-line tool, version 1.15 or later (installation guide).
This example uses the Skupper command-line tool to create Skupper
resources. You need to install the skupper command only once
for each development environment.
On Linux or Mac, you can use the install script (inspect it here) to download and extract the command:

```shell
curl https://skupper.io/v2/install.sh | sh
```

The script installs the command under your home directory. It prompts you to add the command to your path if necessary.
For Windows and other installation options, see Installing Skupper.
Skupper is designed for use with multiple Kubernetes clusters.
The skupper and kubectl commands use your
kubeconfig and current context to select the cluster
and namespace where they operate.
This example uses multiple cluster contexts at once. The
KUBECONFIG environment variable tells skupper and kubectl
which kubeconfig to use.
For each cluster, open a new terminal window. In each terminal,
set the KUBECONFIG environment variable to a different path and
log in to your cluster.
Public1:

```shell
export KUBECONFIG=~/.kube/config-public1
<provider-specific login command>
```

Public2:

```shell
export KUBECONFIG=~/.kube/config-public2
<provider-specific login command>
```

Private1:

```shell
export KUBECONFIG=~/.kube/config-private1
<provider-specific login command>
```

Note: The login procedure varies by provider.
Using Skupper on Kubernetes requires the installation of the Skupper custom resource definitions (CRDs) and the Skupper controller.
For each cluster, use kubectl apply with the Skupper
installation YAML to install the CRDs and controller.
Public1:

```shell
kubectl apply -f https://skupper.io/v2/install.yaml
```

Public2:

```shell
kubectl apply -f https://skupper.io/v2/install.yaml
```

Private1:

```shell
kubectl apply -f https://skupper.io/v2/install.yaml
```

The example application has different components deployed to different Kubernetes namespaces. To set up our example, we need to create the namespaces.
For each cluster, use kubectl create namespace and kubectl config set-context to create the namespace you wish to use and
set the namespace on your current context.
Public1:

```shell
kubectl create namespace public1
kubectl config set-context --current --namespace public1
```

Public2:

```shell
kubectl create namespace public2
kubectl config set-context --current --namespace public2
```

Private1:

```shell
kubectl create namespace private1
kubectl config set-context --current --namespace private1
```

A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace.
Use the `kubectl apply` command to declaratively create sites in the Kubernetes namespaces. This deploys the Skupper router in each namespace. Then use `kubectl get site` to see the outcome.
Note: If you are using Minikube, you need to start minikube tunnel before creating sites.
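The contents of the site files are not reproduced here. As a rough sketch (the exact contents of the `*-crs` files in this example may differ), a minimal Site resource looks something like this:

```yaml
# Hypothetical sketch of a Site resource like ./public1-crs/site.yaml
# (the actual file in this example may differ).
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: public1
  namespace: public1
spec:
  # Expose link access so that other sites can link to this one
  # (required on at least one side of each link).
  linkAccess: default
```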
Public1:

```shell
kubectl apply -f ./public1-crs/site.yaml
kubectl wait --for condition=Ready --timeout=60s site/public1
```

Sample output:

```console
$ kubectl wait --for condition=Ready --timeout=60s site/public1
site.skupper.io/public1 created
site.skupper.io/public1 condition met
```

Public2:

```shell
kubectl apply -f ./public2-crs/site.yaml
kubectl wait --for condition=Ready --timeout=60s site/public2
```

Sample output:

```console
$ kubectl wait --for condition=Ready --timeout=60s site/public2
site.skupper.io/public2 created
site.skupper.io/public2 condition met
```

Private1:

```shell
kubectl apply -f ./private1-crs/site.yaml
kubectl wait --for condition=Ready --timeout=60s site/private1
```

Sample output:

```console
$ kubectl wait --for condition=Ready --timeout=60s site/private1
site.skupper.io/private1 created
site.skupper.io/private1 condition met
```

A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests.
Creating a link requires the use of two skupper commands in
conjunction: skupper token issue and skupper token redeem.
The skupper token issue command generates a secret token that
signifies permission to create a link. The token also carries the
link details. Then, in a remote site, the skupper token redeem command uses the token to create a link to the site
that generated it.
Note: The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it.
First, use skupper token issue in public1 to generate a
token. Then, use skupper token redeem in public2 and private1 to
link the sites. The --redemptions-allowed flag specifies how many
times the token can be redeemed. In this scenario, both public2 and
private1 connect to public1, so the token must allow two redemptions.
Public1:

```shell
skupper token issue ~/public1.token --redemptions-allowed 2
```

Public2:

```shell
skupper token redeem ~/public1.token
skupper token issue ~/public2.token
```

Private1:

```shell
skupper token redeem ~/public1.token
skupper token redeem ~/public2.token
```

If your terminal sessions are on different machines, you may need
to use scp or a similar tool to transfer the token securely. By
default, tokens expire after a single use or 15 minutes after
creation.
After creating the Skupper network, deploy the Metrics Generators on one of the public clusters and the private cluster.
Private1:

```shell
kubectl apply -f ./private1-crs/metrics-deployment-a.yaml
```

Sample output:

```console
$ kubectl apply -f ./private1-crs/metrics-deployment-a.yaml
deployment.apps/metrics-a created
```

Public1:

```shell
kubectl apply -f ./public1-crs/metrics-deployment-b.yaml
```

Sample output:

```console
$ kubectl apply -f ./public1-crs/metrics-deployment-b.yaml
deployment.apps/metrics-b created
```

Deploy the Prometheus server in the public2 cluster.
Public2:

```shell
kubectl apply -f ./public2-crs/prometheus-deployment.yaml
```

Sample output:

```console
$ kubectl apply -f ./public2-crs/prometheus-deployment.yaml
role.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
rolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prometheus-conf created
deployment.apps/prometheus created
```

Create Skupper listeners and connectors to expose the metric generator deployments in each namespace.
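A connector in one site is paired with listeners in other sites through a matching routing key. As an illustrative sketch only (the routing keys, selectors, and ports here are assumptions; the actual files in this example's `*-crs` directories may differ), the private1 resources might look like:

```yaml
# Hypothetical sketch of ./private1-crs/listener.yaml and connector.yaml
# (the actual files may differ). The routingKey ties a connector in one
# site to listeners with the same key in other sites.
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: prometheus
spec:
  routingKey: prometheus   # Matched by the prometheus connector in public2
  host: prometheus         # Service name created in this namespace
  port: 9090
---
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: metric-a
spec:
  routingKey: metrics-a    # Assumed to match the metrics-a listener in public2
  selector: app=metrics-a  # Assumed pod selector for the local deployment
  port: 8080               # Assumed metrics port
```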
Private1:

```shell
kubectl apply -f ./private1-crs/listener.yaml
kubectl apply -f ./private1-crs/connector.yaml
```

Sample output:

```console
$ kubectl apply -f ./private1-crs/connector.yaml
listener.skupper.io/prometheus created
connector.skupper.io/metric-a created
```

Public1:

```shell
kubectl apply -f ./public1-crs/listener.yaml
kubectl apply -f ./public1-crs/connector.yaml
```

Sample output:

```console
$ kubectl apply -f ./public1-crs/connector.yaml
listener.skupper.io/prometheus created
connector.skupper.io/metric-b created
```

Public2:

```shell
kubectl apply -f ./public2-crs/listener.yaml
kubectl apply -f ./public2-crs/connector.yaml
```

Sample output:

```console
$ kubectl apply -f ./public2-crs/connector.yaml
listener.skupper.io/metrics-a created
listener.skupper.io/metrics-b created
connector.skupper.io/prometheus created
```

In Prometheus, a service label with "app=metrics" indicates that the service is specifically designed to expose metrics for monitoring purposes. This label allows Prometheus to easily identify and scrape data from that service to gather performance and health information.
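The label-based discovery is driven by the scrape configuration in the prometheus-conf ConfigMap deployed earlier. The following is an illustrative sketch only, not the actual configuration from this example: it shows the general shape of a Prometheus scrape job that discovers Kubernetes services and keeps only those labeled app=metrics.

```yaml
# Hypothetical excerpt of a prometheus.yml scrape configuration
# (the actual prometheus-conf ConfigMap in this example may differ).
scrape_configs:
  - job_name: kubernetes-services
    kubernetes_sd_configs:
      - role: service        # Discover Service objects in the cluster
    relabel_configs:
      # Keep only services carrying the app=metrics label.
      - source_labels: [__meta_kubernetes_service_label_app]
        regex: metrics
        action: keep
      # Record the service name as a "service" label on scraped metrics.
      - source_labels: [__meta_kubernetes_service_name]
        target_label: service
```

With a configuration of this shape, newly labeled services become scrape targets automatically, which is why labeling the listener-created services is enough to bring them under monitoring.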
Public2:

```shell
kubectl label service/metrics-a app=metrics
kubectl label service/metrics-b app=metrics
```

Sample output:

```console
$ kubectl label service/metrics-b app=metrics
service/metrics-a labeled
service/metrics-b labeled
```

In a browser, access the Prometheus UI at http://{ip}:9090, where {ip} is the output of the following command:
Private1:

```shell
kubectl get service prometheus -o=jsonpath='{.spec.clusterIP}'
```

In the Prometheus UI, navigate to Status -> Target health and verify that the metric endpoints are in the UP state.
In the Prometheus UI, navigate to the Query tab, enter the following expression in the query field, and click Execute:

```
avg(rate(rpc_durations_seconds_count[1m])) by (job, service)
```
Observe the metrics data in either the Table or Graph view provided in the UI.
To remove Skupper and the other resources from this exercise, use the following commands.
Private1:

```shell
skupper site delete --all
kubectl delete -f ./private1-crs/metrics-deployment-a.yaml
```

Public1:

```shell
skupper site delete --all
kubectl delete -f ./public1-crs/metrics-deployment-b.yaml
```

Public2:

```shell
skupper site delete --all
kubectl delete -f ./public2-crs/prometheus-deployment.yaml
```

Check out the other examples on the Skupper website.
This example was produced using Skewer, a library for documenting and testing Skupper examples.
Skewer provides utility functions for generating the README and
running the example steps. Use the ./plano command in the project
root to see what is available.
To quickly stand up the example using Minikube, try the ./plano demo
command.