Commit 8b5443a

4.4 Reworked chapter
1 parent 9b6a59f commit 8b5443a

1 file changed: +12 −12 lines changed

Part 4 - Concurrency/4. Backpressure.md

Lines changed: 12 additions & 12 deletions
@@ -1,6 +1,6 @@
# Backpressure

-Rx leads events from the end of a pipeline to another other. The actions that take place on each end can be very dissimilar. What happens when the producer and the consumer require different amounts of time to process a value? In a synchronous model this question isn't an issue. Consider the following example:
+Rx leads events from one end of a pipeline to the other. The actions which take place on each end can be very dissimilar. What happens when the producer and the consumer require different amounts of time to process a value? In a synchronous model, this question isn't an issue. Consider the following example:

```java
// Produce
@@ -79,13 +79,13 @@ Output

There are similar operators that can serve the same purpose.
* The [throttle](/Part 3 - Taming the sequence/5. Time-shifted sequences.md#throttling) family of operators also filters on rate, but allows you to specify in a different way which element to let through when stressed.
-* [Debounce](/Part 3 - Taming the sequence/5. Time-shifted sequences.md#debouncing) does not cut the rate to a fixed maximum. Instead, it will completely remove any burst of information and replace it with a single value.
+* [Debounce](/Part 3 - Taming the sequence/5. Time-shifted sequences.md#debouncing) does not cut the rate to a fixed maximum. Instead, it will completely remove every burst of information and replace it with a single value.

#### Collect

-Instead of sampling the data, you can use `buffer` and `window` to collect overflowing data while the consumer is busy. This is useful if processing items in batches is faster. Alternatively, you can decide manually how many and which of the buffered items to process.
+Instead of sampling the data, you can use `buffer` and `window` to collect overflowing data while the consumer is busy. This is useful if processing items in batches is faster. Alternatively, you can inspect the buffer to manually decide how many and which of the buffered items to process.

-The example that we saw previously processes multiple items with the same speed that it processes bulks. Here we slowed down the producer to make the batches fit a line, but the principle remains the same.
+In the example that we saw previously, the consumer processes single items and bulks at practically the same speed. Here we slowed down the producer to make the batches fit a line, but the principle remains the same.

```java
Observable.interval(10, TimeUnit.MILLISECONDS)
@@ -143,7 +143,7 @@ class MySubscriber extends Subscriber<T> {
}
```

-The `request(1)` in `onStart` establishes backpressure and that the observable should only emit the first value. After processing it in `onNext`, we request the next item to be sent, if and when it is available. Calling `request(Long.MAX_VALUE)` would disable backpressure.
+The `request(1)` in `onStart` establishes backpressure and informs the observable that it should only emit the first value. After processing the value in `onNext`, we request the next item to be sent, if and when it is available. Calling `request(Long.MAX_VALUE)` disables backpressure.
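This request protocol can be sketched without Rx at all. Below is a minimal, hypothetical plain-Java model (the `Producer` and `Sink` types here are inventions for illustration, not Rx types): the producer emits a value only when the consumer has asked for one.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the pull-based request protocol (not Rx itself).
public class RequestDemo {

    interface Sink { void onNext(int value); }

    // A producer of the values 0..count-1 that honours request(n).
    static class Producer {
        private final int count;
        private int emitted = 0;
        private long requested = 0;
        private Sink sink;

        Producer(int count) { this.count = count; }

        void subscribe(Sink sink) { this.sink = sink; }

        // Called by the consumer: emit only as many values as were requested.
        void request(long n) {
            requested += n;
            while (requested > 0 && emitted < count) {
                requested--;
                sink.onNext(emitted++);
            }
        }
    }

    // Mirrors request(1) in onStart, then request(1) after processing each value.
    static List<Integer> run(int count) {
        List<Integer> received = new ArrayList<>();
        Producer producer = new Producer(count);
        producer.subscribe(received::add);  // "process" each value by storing it
        producer.request(1);                // the request(1) from onStart
        while (received.size() < count) {
            producer.request(1);            // ask for the next item when ready
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run(3)); // prints [0, 1, 2]
    }
}
```

The producer never runs ahead of the consumer: every emission is preceded by an explicit request, which is exactly what `request(1)` buys us in the Rx version.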
### doOnRequest

@@ -153,7 +153,7 @@ public final Observable<T> doOnRequest(Action1<java.lang.Long> onRequest)
```
The `doOnRequest` meta-event happens when a subscriber requests more items. The value supplied to the action is the number of items requested.

-At this moment, `doOnRequest` is in beta. It is the only beta operator that we will discuss in this book. We're making an exception, because it enables us to peek into stable backpressure functionality that is otherwise hidden. Let's see what happens in the most simple observable
+At this moment, `doOnRequest` is in beta. In this book, we have been avoiding beta operators. We're making an exception here, because it enables us to peek into stable backpressure functionality that is otherwise hidden. Let's see what happens in a simple observable:

```java
Observable.range(0, 3)
@@ -168,7 +168,7 @@ Requested 9223372036854775807
2
```

-We see that `subscribe` requests the maximum number of items from the beginning. That means that `subscribe` doesn't resist values at all. Subscribe will only use backpressure if we provide a subscriber that implements backpressure. Here is a complete example for such an implementation
+We see that `subscribe` requests the maximum number of items from the beginning. This means that `subscribe` doesn't resist values at all. `subscribe` will only use backpressure if we provide a subscriber that implements backpressure. Here is a complete example of such an implementation:

```java
public class ControlledPullSubscriber<T> extends Subscriber<T> {
@@ -245,9 +245,9 @@ Requested 1
2
```

-First we requested no emissions. Then we requested 2 and we got 2 values.
+First we requested no emissions (our `ControlledPullSubscriber` does this in `onStart`). Then we requested 2 and got 2 values; then we requested 1 and got 1.

-Rx operators that use queues and buffers internally should use backpressure to avoid storing an infinite amount of values. Large-scale buffering should be left to operators that explicitly serve this purpose, such as `cache`, `buffer` etc. `zip` is one operator that needs to buffer items: the first observable might emit two or more values before the second observable emits its next value. Such small asymmetries are expected and they shouldn't cause the operator to fail. For that reason, `zip` has a small buffer of 128 items.
+Rx operators that use queues and buffers internally should use backpressure to avoid storing an infinite amount of values. Large-scale buffering should be left to operators that explicitly serve this purpose, such as `cache`, `buffer` etc. An example of an operator that needs to buffer items is `zip`: the first observable might emit two or more values before the second observable emits its next value. Such small asymmetries are expected even when the two sequences are supposed to have the same frequency. Needing to buffer a couple of items shouldn't cause the operator to fail. For that reason, `zip` has a small buffer of 128 items.
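To see why some buffer is unavoidable, here is a hypothetical plain-Java model of zipping (a sketch, not Rx's actual implementation): one source runs ahead of the other, so its surplus values must wait in a bounded buffer until the slower source catches up.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy model of zip's internal buffering (not Rx's implementation).
public class ZipBufferDemo {

    static List<String> zip(List<Integer> left, List<String> right, int capacity) {
        ArrayDeque<Integer> pending = new ArrayDeque<>(); // zip's small buffer
        List<String> out = new ArrayList<>();
        Iterator<Integer> l = left.iterator();
        Iterator<String> r = right.iterator();
        while (r.hasNext()) {
            // Simulate the asymmetry: left emits two items per right item.
            for (int i = 0; i < 2 && l.hasNext(); i++) {
                if (pending.size() == capacity)
                    throw new IllegalStateException("buffer overflow: needs backpressure");
                pending.add(l.next());
            }
            out.add(pending.remove() + ":" + r.next());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> left = List.of(0, 1, 2, 3, 4, 5);
        List<String> right = List.of("a", "b", "c");
        // A capacity of 128, like Rx's zip, easily absorbs this asymmetry.
        System.out.println(zip(left, right, 128)); // prints [0:a, 1:b, 2:c]
    }
}
```

With a bounded buffer the small head start is absorbed; if the asymmetry grew without limit, the buffer would eventually overflow, which is the situation backpressure exists to prevent.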

```java
253253
Observable.range(0, 300)
@@ -266,12 +266,12 @@ Requested 90
Requested 90
```

-The `zip` operator starts by requesting enough items to fill its buffer, and requests more when it has consumed enough. The details of how many items `zip` requests isn't interesting. What the reader should take away is the realisation that some buffering and backpressure exist in Rx whether the developer requests for it or not. This gives an Rx pipeline some flexibility where you might expect none. This might trick you into thinking that your code is solid, by silently saving small tests from failing, but you're not safe until you have explicitly declared behaviour with regard to backpressure.
+The `zip` operator starts by requesting enough items to fill its buffer, and requests more as it consumes them. The details of how many items `zip` requests aren't interesting. What the reader should take away is the realisation that some buffering and backpressure exist in Rx whether the developer requests them or not. This gives an Rx pipeline some flexibility where you might expect none. It might trick you into thinking that your code is solid, by silently saving small tests from failing, but you're not safe until you have explicitly declared behaviour with regard to backpressure.


## Backpressure policies

-Many Rx operators use backpressure internally to avoid overfilling their internal queues. This way, the problem of a slow consumer is propagated backwards in the chain of operators. Backpressure doesn't make the problem go away. It merely moves it where it may be handled better. We still need to decide what to do with the values of an overproducing observable.
+Many Rx operators use backpressure internally to avoid overfilling their internal queues. This way, the problem of a slow consumer is propagated backwards in the chain of operators: if an operator stops accepting values, then the previous operator will fill its buffers until it stops accepting values too, and so on. Backpressure doesn't make the problem go away. It merely moves it to where it may be handled better. We still need to decide what to do with the values of an overproducing observable.
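This backwards propagation can be modelled as a chain of bounded queues (a hypothetical sketch, not Rx code): once the stalled consumer's queue is full, the stage before it fills up too, and eventually the producer itself is refused.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

// Toy model of backpressure propagating backwards through a chain of
// operators, each with a small bounded queue (not Rx's implementation).
public class PropagationDemo {

    static class Stage {
        final ArrayDeque<Integer> queue = new ArrayDeque<>();
        final int capacity;
        Stage downstream;

        Stage(int capacity) { this.capacity = capacity; }

        // Returns false (refuses the value) when this stage's queue is full.
        boolean offer(int value) {
            drain();
            if (queue.size() == capacity) return false;
            queue.add(value);
            return true;
        }

        // Pass queued values on while the downstream still accepts them.
        void drain() {
            while (!queue.isEmpty() && downstream != null
                    && downstream.offer(queue.peek())) {
                queue.remove();
            }
        }
    }

    // The producer emits until the first stage refuses a value.
    static int[] run() {
        Stage a = new Stage(2), b = new Stage(2);
        a.downstream = b; // b is the final, stalled stage: nothing drains it
        int emitted = 0;
        while (a.offer(emitted)) emitted++;
        return new int[] { emitted, a.queue.size(), b.queue.size() };
    }

    public static void main(String[] args) {
        // 4 values fit (2 per queue); the 5th is refused and the producer stops.
        System.out.println(Arrays.toString(run())); // prints [4, 2, 2]
    }
}
```

The producer is stopped not by the stalled consumer directly, but by the chain of full buffers between them — which is how a slow subscriber eventually slows down the source.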

There are Rx operators that declare how you want to deal with situations where a subscriber cannot accept the values that are being emitted.

@@ -348,7 +348,7 @@ Output
...
```

-What we see here is that the first 128 items where consumed normally, but then we jumped forward. The items inbetween were dropped by `onBackPressureDrop`. Even though we did not request it, the first 128 items where still buffered. Rx employs small buffers even when we don't request it.
+What we see here is that the first 128 items were consumed normally, but then we jumped forward. The items in between were dropped by `onBackpressureDrop`. Even though we did not request it, the first 128 items were still buffered, since `observeOn` uses a small buffer when switching threads.


| Previous | Next |
