In this document, we distinguish between the sequencing and proving processes, which the zkEVM carries out separately.
The decoupling of sequencing from proving enhances the system's efficiency.
Central to the zkEVM architecture are the verifier smart contract, the prover, the aggregator, and the sequencer.
## Typical state transition
The figure below depicts a state transition.
The zkEVM's key performance indicators (KPIs) are _Delay_ and _Throughput_.
_Delay_ refers to the time elapsed from when a user sends an L2 transaction until the transaction's execution results are reflected in the L2 state.
This is a major KPI when it comes to positive user experience (UX).
There are three parameters affecting these KPIs: $\mathtt{close\_a\_batch\_time}$, $\mathtt{prove\_a\_batch\_time}$, and $\mathtt{block\_time}$.
Let's explore how these parameters impact the _Delay_ and _Throughput_ of the full system.
### Processing pipeline
Consider an example of a simplified processing pipeline, similar to an assembly line in a factory, as illustrated in the figure below.
Identify two key performance indicators (KPIs) of interest: lead time (or delay) and production rate (or throughput).
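As a toy sketch of these two pipeline KPIs (the stage durations and function names below are invented for illustration, not taken from this page), lead time is the sum of the stage durations an item passes through, while the steady-state production rate is capped by the slowest stage:

```python
# Toy model of a processing pipeline (hypothetical stage times, in seconds).

def lead_time(stage_times):
    """Delay: time for one item to traverse the whole pipeline."""
    return sum(stage_times)

def throughput(stage_times):
    """Throughput: items completed per second once the pipeline is full,
    limited by the slowest stage."""
    return 1 / max(stage_times)

stages = [2.0, 5.0, 3.0]   # hypothetical durations of three stages
print(lead_time(stages))   # 10.0 — seconds of delay per item
print(throughput(stages))  # 0.2 — items per second, set by the 5.0 s stage
```

Speeding up any stage reduces lead time, but only speeding up the slowest stage improves throughput, which is the intuition the rest of this section builds on.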
#### Two scaling methods
The following question arises: How can we improve both delay and throughput metrics?
The objective is to increase the throughput and reduce the delay.
$$
\texttt{throughput} = \dfrac{1}{\mathtt{prove\_a\_batch\_time}}\ [\text{batches per second}]
$$
When computing throughput, we assume that closing, proving, and verifying a batch can be done in parallel with other batches.
Thus, in practice, throughput is determined by the longest part of the process, which is proving the batch.
$$
\begin{aligned}
\texttt{throughput} &= \dfrac{1}{\max(\mathtt{close\_a\_batch\_time},\ \mathtt{prove\_a\_batch\_time},\ \mathtt{block\_time})} \\
&= \dfrac{1}{\mathtt{prove\_a\_batch\_time}}
\end{aligned}
$$
To provide specific numbers:
$$
\begin{aligned}
\ldots
\end{aligned}
$$
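Since the concrete figures are elided in this extract, here is a sketch with purely hypothetical timing values (not the ones used in the zkEVM docs) showing how the baseline throughput formula plays out:

```python
# Hypothetical timing parameters, in seconds (illustrative only).
close_a_batch_time = 5.0     # assumed time to close a batch
prove_a_batch_time = 600.0   # assumed time to prove a batch
block_time = 12.0            # assumed L1 block time

# Proving dominates by far, so it bounds the pipelined throughput.
throughput = 1 / max(close_a_batch_time, prove_a_batch_time, block_time)

print(throughput)         # batches per second
print(throughput * 3600)  # batches per hour
```

With these assumed numbers the system produces only a handful of batches per hour, which motivates the scaling discussion that follows.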
### Improving KPIs with vertical scaling
The aim is to increase throughput and reduce delay.
The main limiting factor in this case is $\mathtt{prove\_a\_batch\_time}$.
Vertical scaling means adding more resources to the existing machines.
It can be achieved by running provers on more powerful machines, optimizing the proving system, or a combination of both.
Although vertical scaling seems like a straightforward solution to speed up proof generation, it has limitations:
- Cost-effectiveness: Upgrading to very powerful machines often results in diminishing returns. The cost increase might not be proportional to the performance gain, especially for high-end hardware.
- Optimization challenges: Optimizing the prover system itself can be complex and time-consuming.
### Improving KPIs with horizontal scaling
Another option is to scale the system horizontally.
Horizontal scaling involves adding more processing units (workers) to distribute the workload across multiple machines and leverage additional hardware resources in parallel.
In the context of a batch processing system, this translates to spinning up multiple provers to work in parallel.
#### Naive horizontal scaling
Consider the figure below, depicting a naive implementation of horizontal scaling, which involves:
1. Parallelized proof generation by spinning up multiple provers.
2. Proof reception, where each prover individually sends the proof it generated to the aggregator.
3. Proof verification, where the aggregator puts all these proofs into an L1 transaction and sends it to the smart contract for batch verification.
Notice that, as depicted in the figure above, the proofs $\pi_a$, $\pi_b$ and $\pi_c$ are each sent to L1 and verified individually.
This means the overall verification cost is proportional to the number of proofs sent to the aggregator.
The disadvantage of the naive approach is its cost: each proof occupies space in the L1 transaction, and verification expenses accumulate with every additional proof.
#### Proof aggregation in horizontal scaling
Another option is to scale the system horizontally with proof aggregation, as shown in the figure below.
Here’s how it works:
1. Parallelized proof generation, by instantiating multiple provers.
2. Proof reception, where each prover individually sends the proof it generated to the aggregator.
3. Proof aggregation, where proofs are aggregated into a single proof.
4. Proof verification, which here means encapsulating only one proof, the aggregated proof, in an L1 transaction and transmitting it to the smart contract for batch verification.
The foundation of this approach rests on the zkEVM's custom cryptographic backend, designed specifically to support proof aggregation.
It allows multiple proofs to be combined into a single verifiable proof.
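To make the cost contrast concrete, here is a small sketch under an assumed cost model: each on-chain proof verification costs a fixed amount of gas (the figure and function names below are invented for illustration), so the naive approach pays per proof while the aggregated approach pays once:

```python
VERIFY_COST = 300_000  # assumed gas per on-chain proof verification (hypothetical)

def naive_l1_cost(num_proofs: int) -> int:
    # Naive horizontal scaling: every prover's proof is verified
    # individually on L1, so cost grows linearly with proof count.
    return num_proofs * VERIFY_COST

def aggregated_l1_cost(num_proofs: int) -> int:
    # Proof aggregation: all proofs are first combined off-chain into
    # a single proof, so only one on-chain verification is needed.
    return VERIFY_COST if num_proofs > 0 else 0

print(naive_l1_cost(8))       # 2400000
print(aggregated_l1_cost(8))  # 300000
```

Under this assumed model, the naive verification cost scales with the number of provers, while the aggregated cost stays flat, which is exactly why the cryptographic backend's support for aggregation matters.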
Observe that proving batches, closing batches, and aggregating proofs can all run in parallel with one another.
Hence the denominator, in the above formula, is the maximum among the values: $\mathtt{prove\_a\_batch\_time}$, $N · \mathtt{close\_a\_batch\_time}$, $\mathtt{block\_time}$, and $\mathtt{aggregation\_time}$.
This means, in the case where the maximum time in the denominator is $\mathtt{prove\_a\_batch\_time}$, the system's throughput increases by a factor of $N$.
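A sketch of this throughput bound, assuming the numerator is the number of provers $N$ (consistent with the factor-$N$ gain described above) and using hypothetical timing values:

```python
# Sketch of horizontally scaled throughput, assumed to be
#   N / max(prove_a_batch_time, N * close_a_batch_time,
#           block_time, aggregation_time)
# consistent with the factor-N gain when proving dominates.

def throughput(n_provers, prove_time, close_time, block_time, agg_time):
    denominator = max(prove_time, n_provers * close_time, block_time, agg_time)
    return n_provers / denominator  # batches per second

# Hypothetical timings in seconds: proving dominates by far.
prove, close, block, agg = 600.0, 5.0, 12.0, 60.0

print(throughput(1, prove, close, block, agg))    # baseline: 1/600
print(throughput(10, prove, close, block, agg))   # 10/600 — a 10x gain
print(throughput(200, prove, close, block, agg))  # 0.2 — closing now dominates
```

Note the last case: once $N \cdot \mathtt{close\_a\_batch\_time}$ exceeds $\mathtt{prove\_a\_batch\_time}$, adding provers yields diminishing returns, because batch closing becomes the bottleneck.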
Delay in this scenario can be computed as follows:
$$
\mathtt{delay} = N · \mathtt{close\_a\_batch\_time} + \mathtt{prove\_a\_batch\_time} + \mathtt{aggregation\_time} + \mathtt{block\_time}
$$

A straightforward aggregation of batches substantially increases delay relative to proving batches individually.
As discussed earlier, delay is a critical factor in user experience.
To retain the throughput gains while reducing the delay, we can adopt a two-step approach for batch processing: first, _order_ (also known as _sequence_) and then _prove_.
This segmentation allows for optimization in each step, potentially reducing the overall delay while maintaining improvements in throughput.
### Reducing delay by ordering then proving
The rationale behind decoupling batch ordering (sequencing) from batch proving is twofold:
- Ensure swift responses to users regarding their L2 transactions, with minimal delay.
- Enable transaction aggregation to maximize system throughput.
Sequencing an L2 batch involves deciding which L2 transactions should be part of the next batch; that is, deciding when to create or close the batch and send it to L1.
As the sequence of batches is written to L1, data availability and immutability are ensured on L1.
Sequenced batches may not be proved immediately, but they are guaranteed to be proved eventually.
This creates a state within the L2 system that reflects the eventual outcome of executing those transactions, even though the proof hasn’t been completed yet.
Such a state is called a *virtual state* because it represents a future state to be consolidated once the proof is processed.
More precisely, the virtual state is the state reached after executing and sequencing batches in L1, before they are validated using proofs.
Notable improvement lies in the ability to close batches more rapidly than they can be proved.
Let’s adopt a revised definition for the delay:
- The duration from the moment a user submits an L2 transaction until that transaction reaches the virtual state.
From the user’s perspective, once the transaction is in the virtual state, it can be regarded as processed.
$$
\mathtt{delay}^{(\mathtt{to\_virtual})} = N · \mathtt{close\_a\_batch\_time} + \mathtt{block\_time}
$$
Note that we have experienced a significant reduction in the delay.
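The virtual-state delay formula above can be evaluated with hypothetical timings to see the effect (the numbers below are illustrative only):

```python
# Sketch of the virtual-state delay formula:
#   delay_to_virtual = N * close_a_batch_time + block_time

def delay_to_virtual(n_batches, close_time, block_time):
    return n_batches * close_time + block_time  # seconds

# Hypothetical timings in seconds.
close, block = 5.0, 12.0
print(delay_to_virtual(10, close, block))  # 62.0 — about a minute to virtual state
# Waiting for the proof instead would add proving (and aggregation) time
# on top of this, which is why sequencing first improves perceived delay.
```

From the user's point of view, this roughly minute-scale delay to the virtual state replaces the much longer wait for proof generation, under the assumed timings.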
Below, we present several advantages of decoupling batch sequencing from batch proving:
0 commit comments