
Commit e931ad3

Merge pull request #2 from citrix/git_cnc_v2
Git cnc v2
2 parents b0ad32c + 1f85460

23 files changed: +936 −1294 lines

README.md

Lines changed: 25 additions & 33 deletions
@@ -10,27 +10,26 @@
 
 # Citrix k8s node controller
 
-Citrix k8s node controller is deployed as a pod in Kubernetes cluster that provides a network between the Kubernetes cluster and the Ingress Citrix ADC.
-
->**Note:**
->Citrix k8s node controller currently works only with flannel as the Container Network Interface (CNI). The scope of Citrix node controller can be extended to other CNI.
+Citrix k8s node controller is a microservice provided by Citrix that creates a network between the Kubernetes cluster and the ingress device.
 
 ## Contents
 
-+ [Overview](#Overview)
-+ [Architecture](#Architecture)
-+ [How it works](#How-it-works)
-+ [Get started](#Get-started)
-+ [Questions](#Questions)
-+ [Issues](#Issues)
-+ [Code of conduct](#Code-of-conduct)
-+ [License](#License)
++ [Overview](#overview)
++ [Architecture](#architecture)
++ [How it works](#how-it-works)
++ [Get started](#get-started)
++ [Using Citrix k8s node controller as a process](#using-citrix-k8s-node-controller-as-a-process)
++ [Using Citrix k8s node controller as a microservice](#using-citrix-k8s-node-controller-as-a-microservice)
++ [Questions](#questions)
++ [Issues](#issues)
++ [Code of conduct](#code-of-conduct)
++ [License](#license)
 
 ## Overview
 
 In Kubernetes environments, when you expose the services for external access through the Ingress device, to route the traffic into the cluster, you need to appropriately configure the network between the Kubernetes nodes and the Ingress device. Configuring the network is challenging as the pods use private IP addresses based on the CNI framework. Without proper network configuration, the Ingress device cannot access these private IP addresses. Also, manually configuring the network to ensure such reachability is cumbersome in Kubernetes environments.
 
-Citrix k8s node controller is deployed as a pod in Kubernetes cluster that provides a network between the Kubernetes cluster and the Ingress Citrix ADC.
+Citrix provides a microservice called **Citrix k8s node controller** that you can use to create the network between the cluster and the Ingress device.
 
 ## Architecture
 
@@ -48,47 +47,36 @@ These are the main components of the Citrix k8s node controller:
 This **K8s Interface** component interacts with the Kube API server through the K8s Go client. It ensures the availability of the client and maintains a healthy client session.
 </details>
 <details>
-<summary>**Node Watcher**</summary>
-The **Node Watcher** component monitors the node events through K8s interface. It responds to the node events such as node addition, deletion, or modification with its callback functions.
-</details>
-<details>
 <summary>**Input Feeder**</summary>
 The **Input Feeder** component provides inputs to the config decider. Some of the inputs are auto detected and the rest are taken from the Citrix k8s node controller deployment YAML file.
 </details>
 <details>
-<summary>**Config Decider**</summary>
-The **Config Decider** component takes inputs from both the node watcher and the input feeder. Using the inputs it decides the best network automation required between the cluster and Citrix ADC.
-</details>
-<details>
 <summary>**Core**</summary>
 The **Core** component interacts with the node watcher and updates the corresponding config engine. It is responsible for starting the best config engine for the corresponding cluster.
 </details>
 <details>
 <summary>**Config Maps**</summary>
-The **Config Maps** component controls the Citrix k8s node controller. It allows you to define the Citrix k8s node controller to automatically create, apply, and delete routing configuration on Citrix ADC.
+The **Config Maps** component controls the Citrix k8s node controller. It allows you to define Citrix k8s node controller to automatically create, apply, and delete routing configuration on Citrix ADC.
 </details>
 
 ## How it works
 
-Citrix k8s node controller monitors the node events and establishes a route between the cluster nodes and Citrix ADC using VXLAN. Citrix k8s node controller adds a route on the Citrix ADC when a new node joins to the cluster. Similarly when a node leaves the cluster, Citrix k8s node controller removes the associated route from the Citrix ADC. Citrix k8s node controller uses VXLAN overlay between the Kubernetes cluster and Citrix ADC for service routing.
+Citrix k8s node controller monitors the node events and establishes a route between each node and the Citrix ADC using VXLAN. Citrix k8s node controller adds a route on the Citrix ADC when a new node joins the cluster. Similarly, when a node leaves the cluster, Citrix k8s node controller removes the associated route from the Citrix ADC. Citrix k8s node controller uses a VXLAN overlay between the Kubernetes cluster and Citrix ADC for service routing.
 
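The per-node bookkeeping described above can be sketched in Go: given a node's flannel-assigned Pod CIDR and the node's IP, derive the static route entry (destination network, netmask, gateway) that the controller would program on the ADC when that node joins. The `routeEntry` type and `routeForNode` helper below are illustrative names for this sketch, not the controller's actual API.

```go
package main

import (
	"fmt"
	"net"
)

// routeEntry is an illustrative stand-in for the static route the
// controller programs on the Citrix ADC for one cluster node.
type routeEntry struct {
	Network string // destination pod network hosted on the node
	Netmask string // dotted-quad mask for that network
	Gateway string // node IP acting as next hop over the VXLAN overlay
}

// routeForNode derives the route entry from a node's Pod CIDR and node IP.
func routeForNode(podCIDR, nodeIP string) (routeEntry, error) {
	_, ipnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return routeEntry{}, err
	}
	return routeEntry{
		Network: ipnet.IP.String(),
		Netmask: net.IP(ipnet.Mask).String(),
		Gateway: nodeIP,
	}, nil
}

func main() {
	// Example: a node with Pod CIDR 10.244.1.0/24 reachable at 192.168.10.5.
	r, err := routeForNode("10.244.1.0/24", "192.168.10.5")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s %s %s\n", r.Network, r.Netmask, r.Gateway)
}
```

When the node leaves, the controller removes the entry keyed by the same destination network.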
 ## Get started
 
-Citrix k8s node controller can be used in the following two ways:
+You can run Citrix k8s node controller as:
 
-- **Inside the cluster** - In this configuration, the Citrix k8s node controller is run as **pod**.
-- **Outside the cluster** - In this configuration, the Citrix k8s node controller is run as a **process**.
+- A **microservice** inside the Kubernetes cluster.
+- A **process** outside the Kubernetes cluster.
 
 >**Important:**
->Citrix recommends that you use **Inside the cluster** configuration for production. And, use the **Outside the cluster** configuration for development environments.
-
-### Using Citrix k8s node controller as a pod
-
-Refer the [deployment](deploy/README.md) page for running Citrix k8s node controller as a pod inside the Kubernetes cluster.
+>
+>Citrix recommends that you use Citrix k8s node controller as a **microservice** in production environments, and as a **process** for easy development.
 
 ### Using Citrix k8s node controller as a process
 
-Before you deploy the citrix-k8s-node-controller package, ensure that you have installed [Go package](https://golang.org/doc/).
+Before you deploy the `citrix-k8s-node-controller` package, ensure that you have installed the Go binary for running MIC.
 
 Perform the following:
 
@@ -100,7 +88,11 @@ Perform the following:
 
 1. Deploy the config map using the following command:
 
-       kubectl apply -f https://raw.githubusercontent.com/citrix/citrix-k8s-node-controller/master/deploy/config_map.yaml
+       kubectl apply -f https://raw.githubusercontent.com/janraj/citrix-k8s-node-controller/master/deploy/config_map.yaml
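The config map applied above is what drives the controller's route automation. As a rough orientation only, a ConfigMap of this kind has the following shape; the `data` keys shown here are hypothetical, so consult the repository's `deploy/config_map.yaml` for the actual schema:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: citrix-node-controller   # hypothetical name
  namespace: default
data:
  # Hypothetical key: the routing operation the controller should
  # create, apply, or delete on the Citrix ADC.
  operation: "ADD"
```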
+
+### Using Citrix k8s node controller as a microservice
+
+Refer to the [deployment](deploy/README.md) page for running Citrix k8s node controller as a microservice inside the Kubernetes cluster.
 
 ## Questions

build/Dockerfile

Lines changed: 10 additions & 16 deletions
@@ -1,19 +1,13 @@
 FROM golang:alpine AS builder
-#RUN apk update && apk add --no-cache git
-WORKDIR $GOPATH/src/
-COPY cmd/citrix-node-controller/k8sInterface.go .
-COPY cmd/citrix-node-controller/netScalerInterface.go .
-COPY cmd/citrix-node-controller/inputFeeder.go .
-COPY cmd/citrix-node-controller/flannel.go .
-COPY cmd/citrix-node-controller/server.go .
-COPY cmd/citrix-node-controller/main.go .
-COPY version/VERSION .
-COPY vendor .
-#RUN go get -d -v
-RUN go build -o /go/bin/citrix-node-controller
+WORKDIR $GOPATH/src/citrix-node-controller-v2/
+COPY cmd/ cmd
+COPY version/ version
+COPY vendor/ vendor
+RUN go build -o /go/bin/citrix-node-controller ./cmd/citrix-node-controller/
 
-FROM alpine
-COPY --from=builder /go/bin/citrix-node-controller /go/bin/citrix-node-controller
-EXPOSE 8080
-ENTRYPOINT ["/go/bin/citrix-node-controller"]
+FROM quay.io/chorus/chorus-kube-router:1.8.0
+COPY --from=builder /go/bin/citrix-node-controller /go/bin/citrix-node-controller
+COPY build/start.sh /go/bin/start.sh
+RUN ["chmod", "+x", "/go/bin/start.sh"]
+ENTRYPOINT ["sh", "/go/bin/start.sh"]

build/Makefile

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 OWNER=Janrajc
-IMAGE_NAME=citrix-k8s-node-controller
+IMAGE_NAME=citrix-k8s-node-controller-v2
 VERSION_FILE="../version/VERSION"
 version=1.0.0
 error=0.0.0
@@ -28,5 +28,5 @@ publish:
 
 clean:
 	docker rmi -f $$(docker images -q -f dangling=true) || true
-	docker rmi -f $$(docker images | awk '$$1 ~ /citrix-k8s-node-controller/ { print $$3}') || true
+	docker rmi -f $$(docker images | awk '$$1 ~ /citrix-k8s-node-controller-v2/ { print $$3}') || true

build/gitpush.sh

Lines changed: 0 additions & 2 deletions
@@ -53,9 +53,7 @@ git_push() {
 push_image() {
     echo 'publish latest and $(version) to ${DOCKER_REGISTRY}'
     echo "${QUAY_PASSWORD}" | docker login -u "${QUAY_USERNAME}" --password-stdin quay.io
-    docker tag ${IMAGE_NAME}:latest ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest
     docker tag ${IMAGE_NAME}:latest ${DOCKER_REGISTRY}/${IMAGE_NAME}:${version}
-    docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:latest
     docker push ${DOCKER_REGISTRY}/${IMAGE_NAME}:${version}
 }

build/start.sh

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+#!/bin/sh
+
+# Start Kubernetes Route extender
+./go/bin/kube-chorus-router -D &
+status=$?
+if [ $status -ne 0 ]; then
+    echo "Failed to start Route Extender: $status"
+    exit $status
+fi
+
+# Start the citrix node controller
+./go/bin/citrix-node-controller -D &
+status=$?
+if [ $status -ne 0 ]; then
+    echo "Failed to start citrix Node Controller: $status"
+    exit $status
+fi
+
+
+while /bin/true; do
+    ps aux | grep kube-chorus-router | grep -q -v grep
+    PROCESS_1_STATUS=$?
+    ps aux | grep citrix-node-controller | grep -q -v grep
+    PROCESS_2_STATUS=$?
+    # If the greps above find anything, they will exit with 0 status
+    # If they are not both 0, then something is wrong
+    if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
+        echo "One of the processes has already exited."
+        exit 1
+    fi
+    sleep 60
+done
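The watchdog loop in `start.sh` can be exercised in isolation. The sketch below uses `sleep` processes as stand-ins for the two daemons and probes liveness with `kill -0`, which only tests that the process exists and cannot accidentally match unrelated processes the way grepping `ps` output can. Note also that `$?` captured immediately after `cmd &` reflects only whether the shell managed to background the command, not whether the daemon stays up, so a follow-up liveness probe like this is the more reliable check.

```shell
#!/bin/sh
# Stand-ins for kube-chorus-router and citrix-node-controller.
sleep 30 & PID1=$!
sleep 30 & PID2=$!

# kill -0 sends no signal; it only tests that the process still exists.
if kill -0 "$PID1" 2>/dev/null && kill -0 "$PID2" 2>/dev/null; then
    echo "both processes running"
else
    echo "one of the processes has already exited"
    exit 1
fi

# Clean up the stand-in processes.
kill "$PID1" "$PID2" 2>/dev/null
```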
