README.md (14 additions, 14 deletions)
@@ -9,7 +9,7 @@ Just connect against `localhost:9092`. If you are on Mac or Windows and want to

# kafka-stack-docker-compose

-This replicates as well as possible real deployment configurations, where you have your zookeeper servers and kafka servers actually all distinct from each other. This solves all the networking hurdles that comes with Docker and docker-compose, and is compatible cross platform.
+This replicates as well as possible real deployment configurations, where you have your zookeeper servers and kafka servers actually all distinct from each other. This solves all the networking hurdles that comes with Docker and docker compose, and is compatible cross platform.

**UPDATE**: No /etc/hosts file changes are necessary anymore. Explanations at: https://rmoff.net/2018/08/02/kafka-listeners-explained/
@@ -65,10 +65,10 @@ password: `admin`

Run with:
```
-docker-compose -f full-stack.yml up
-docker-compose -f full-stack.yml down
+docker compose -f full-stack.yml up
+docker compose -f full-stack.yml down
```
-** Note: if you find that you can not connect to [localhost:8080](http://localhost:8080/) please run `docker-compose -f full-stack.yml build` to rebuild the port mappings.
+** Note: if you find that you can not connect to [localhost:8080](http://localhost:8080/) please run `docker compose -f full-stack.yml build` to rebuild the port mappings.

## Single Zookeeper / Single Kafka
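A quick way to sanity-check the full stack once it is up is sketched below. This is only a sketch under assumptions: neither the broker service name `kafka1` nor the presence of the `kafka-topics` CLI inside the image is stated in this README.

```
# Sketch only: `kafka1` and the bundled `kafka-topics` tool are assumptions,
# not facts from this README.
docker compose -f full-stack.yml up -d
docker compose -f full-stack.yml ps
docker compose -f full-stack.yml exec kafka1 kafka-topics --bootstrap-server localhost:9092 --list
```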
@@ -80,8 +80,8 @@ This configuration fits most development requirements.

Run with:
```
-docker-compose -f zk-single-kafka-single.yml up
-docker-compose -f zk-single-kafka-single.yml down
+docker compose -f zk-single-kafka-single.yml up
+docker compose -f zk-single-kafka-single.yml down
```

## Single Zookeeper / Multiple Kafka
@@ -94,8 +94,8 @@ If you want to have three brokers and experiment with kafka replication / fault-

Run with:
```
-docker-compose -f zk-single-kafka-multiple.yml up
-docker-compose -f zk-single-kafka-multiple.yml down
+docker compose -f zk-single-kafka-multiple.yml up
+docker compose -f zk-single-kafka-multiple.yml down
```

## Multiple Zookeeper / Single Kafka
@@ -108,8 +108,8 @@ If you want to have three zookeeper nodes and experiment with zookeeper fault-to

Run with:
```
-docker-compose -f zk-multiple-kafka-single.yml up
-docker-compose -f zk-multiple-kafka-single.yml down
+docker compose -f zk-multiple-kafka-single.yml up
+docker compose -f zk-multiple-kafka-single.yml down
```
@@ -122,8 +122,8 @@ If you want to have three zookeeper nodes and three kafka brokers to experiment

Run with:
```
-docker-compose -f zk-multiple-kafka-multiple.yml up
-docker-compose -f zk-multiple-kafka-multiple.yml down
+docker compose -f zk-multiple-kafka-multiple.yml up
+docker compose -f zk-multiple-kafka-multiple.yml down
```

# FAQ
@@ -132,11 +132,11 @@ docker-compose -f zk-multiple-kafka-multiple.yml down

**Q: Kafka's log is too verbose, how can I reduce it?**

-A: Add the following line to your docker-compose environment variables: `KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"`. Full logging control can be accessed here: https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/include/etc/confluent/docker/log4j.properties.template
+A: Add the following line to your docker compose environment variables: `KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"`. Full logging control can be accessed here: https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/include/etc/confluent/docker/log4j.properties.template

**Q: How do I delete data to start fresh?**

-A: Your data is persisted from within the docker compose folder, so if you want for example to reset the data in the full-stack docker compose, do a `docker-compose -f full-stack.yml down`.
+A: Your data is persisted from within the docker compose folder, so if you want for example to reset the data in the full-stack docker compose, do a `docker compose -f full-stack.yml down`.
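For the data-reset question above, a hedged sketch: if the stack declares named volumes, adding `-v` to `down` removes them as well; if instead the data lives in bind-mounted folders inside the repository, those folders would need to be deleted by hand. Neither detail is stated in this diff.

```
# Sketch only: assumes the data sits in named volumes declared by the compose
# file; `-v` removes those volumes together with the containers.
docker compose -f full-stack.yml down -v
```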
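For the logging question above, one way to apply `KAFKA_LOG4J_LOGGERS` without editing the stack file is a small Compose override merged in at start time. A sketch only: the broker service name `kafka1` and the override file name are assumptions, not taken from this README.

```
# Sketch only: `kafka1` and the override file name are assumptions.
cat > kafka-logging.override.yml <<'EOF'
services:
  kafka1:
    environment:
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
EOF

# Compose merges multiple -f files in order, so the override is layered on
# top of full-stack.yml without modifying it.
docker compose -f full-stack.yml -f kafka-logging.override.yml up
```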