Commit f0bf4bb

EmpieichOh and hsutaiyu authored
Apply suggestions from code review
Co-authored-by: hsutaiyu <51791408+hsutaiyu@users.noreply.github.com>
1 parent d4a25b1 commit f0bf4bb

File tree

1 file changed (+31 −31 lines)


docs/zkEVM/architecture/proving-system/order-and-prove.md

Lines changed: 31 additions & 31 deletions
@@ -4,7 +4,7 @@ In this document, we distinguish between the sequencing and proving processes, w

 The decoupling of sequencing from proving enhances the system's efficiency.

-Central to the zkEVM architecture are the verifier smart contract, prover, aggregator and sequencer.
+Central to the zkEVM architecture are the verifier smart contract, the prover, the aggregator, and the sequencer.

 ## Typical state transition

@@ -26,7 +26,7 @@ The figure below depicts a state transition.

 The zkEVM's key performance indicators (KPIs) are _Delay_ and _Throughput_.

-_Delay_ refers to the delay that occurs from the moment a user sends an L2 transaction until the consequence of the transaction execution is part of the L2 state.
+_Delay_ refers to the time elapsed from when a user sends an L2 transaction until the transaction's execution results are reflected in the L2 state.

 This is a major KPI when it comes to positive user experience (UX).

@@ -52,9 +52,9 @@ There are three parameters affecting these KPIs: $\mathtt{close\_a\_batch\_time}

 Let's explore how these parameters impact the _Delay_ and _Throughput_ of the full system.

-### Example. (Processing pipeline)
+### Processing pipeline

-Consider a scenario of a simplified processing pipeline, like an assembly line in a factory, as depicted in the figure below.
+Consider an example of a simplified processing pipeline, similar to an assembly line in a factory, as illustrated in the figure below.

 Identify two key performance indicators (KPIs) of interest: lead time (or delay) and production rate (or throughput).

@@ -70,7 +70,7 @@ Identify two key performance indicators (KPIs) of interest: lead time (or delay)

 #### Two scaling methods

-The following question arises: How can we improve both these metrics: delay and throughput?
+The following question arises: How can we improve both delay and throughput metrics?

 The objective is to increase the throughput and reduce the delay.

@@ -118,9 +118,9 @@ $$
 \texttt{throughput} = \dfrac{1}{\mathtt{prove\_a\_batch\_time}}\ [\text{batches per second}]
 $$

-When computing the throughput, we assume that the closing, proving, and verifying a batch can be done in parallel with respect to other batches.
+When computing throughput, we assume that closing, proving, and verifying a batch can be done in parallel with other batches.

-And thus, depends on the longest part of the process, which is to prove the batch.
+Thus, in practice, throughput is determined by the longest part of the process, which is proving the batch.

 $$
 \begin{aligned}
@@ -129,7 +129,7 @@ $$
 \end{aligned}
 $$

-In order to have some numbers:
+To provide specific numbers:

 $$
 \begin{aligned}
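As a rough illustration of the formulas above, here is a sketch with assumed parameter values (the concrete numbers used in the document are not visible in this diff). Throughput is $1/\mathtt{prove\_a\_batch\_time}$; the delay accounting below, summing the sequential steps a single batch traverses, is our assumption following the pipeline model.

```python
# Illustrative parameter values in seconds; assumptions, not the
# document's actual numbers.
close_a_batch_time = 10.0
prove_a_batch_time = 300.0
block_time = 12.0

# Closing, proving, and verifying run in parallel across batches,
# so steady-state throughput is limited by the slowest step: proving.
throughput = 1.0 / prove_a_batch_time  # batches per second

# A single batch still passes through every step in sequence
# (assumed accounting: close, then prove, then verify on L1).
delay = close_a_batch_time + prove_a_batch_time + block_time  # seconds

print(throughput, delay)
```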
@@ -139,36 +139,36 @@ $$
 \end{aligned}
 $$

-### Improving KPIs with Vertical Scaling
+### Improving KPIs with vertical scaling

 The aim is to increase throughput and reduce delay.

 The one limiting factor in this case is the $\mathtt{prove\_a\_batch\_time}$.

 Vertical scaling means adding more resources to the existing machines.

-It can be done by running provers in more powerful machines, optimizing the proving system, or a combination of both.
+It can be achieved by running provers on more powerful machines, optimizing the proving system, or a combination of both.

 Although vertical scaling seems like a straightforward solution to speed up proof generation, it has limitations:

 - Cost-effectiveness: Upgrading to very powerful machines often results in diminishing returns. The cost increase might not be proportional to the performance gain, especially for high-end hardware.
-- Optimization challenges: Optimizing the proof system itself can be complex and time-consuming.
+- Optimization challenges: Optimizing the prover system itself can be complex and time-consuming.

-### Improving KPIs with Horizontal Scaling
+### Improving KPIs with horizontal scaling

 Another option is to scale the system horizontally.

-Horizontal scaling involves adding more processing units (workers) to distribute the workload across multiple machines.
+Horizontal scaling involves adding more processing units (workers) to distribute the workload across multiple machines and leverage additional hardware resources in parallel.

 In the context of a batch processing system, this translates to spinning up multiple provers to work in parallel.

-#### Naïve horizontal scaling
+#### Naive horizontal scaling

 Consider the figure below, depicting a naive implementation of horizontal scaling, which involves:

-1. Parallelized proof generation, by spinning up multiple provers.
-2. Proofs reception, where each prover individually sends the proof it generated to the aggregator.
-3. Proofs Verification, which means the aggregator puts all these proofs into an L1 transaction, and sends it to the smart contract for verification of batches.
+1. Parallelized proof generation by spinning up multiple provers.
+2. Proof reception, where each prover individually sends the proof it generated to the aggregator.
+3. Proof verification, where the aggregator puts all these proofs into an L1 transaction and sends it to the smart contract for verification of batches.

 ![Figure: Naive approach - Horizontal scaling](../../../img/zkEVM/od-naive-approach-horizontal.png)

@@ -178,7 +178,7 @@ Notice that, as depicted in the figure above, the proofs $\pi_a$, $\pi_b$ and $\

 This means the overall verification cost is proportional to the number of proofs sent to the aggregator.

-The disadvantage with the naïve approach is the associated costs, seen in terms of the space occupied by each proof, and cumulative verification expenses with every additional proof.
+The disadvantage of the naive approach is the associated cost: the space occupied by each proof, and the cumulative verification expense incurred with every additional proof.

 #### Proof aggregation in horizontal scaling

@@ -187,11 +187,11 @@ Another option is to scale the system horizontally with proof aggregation, as sh
 Here’s how it works:

 1. Parallelized proof generation, by instantiating multiple provers.
-2. Proofs reception, where each prover individually sends the proof it generated to the aggregator.
+2. Proof reception, where each prover individually sends the proof it generated to the aggregator.
 3. Proof aggregation, where proofs are aggregated into a single proof.
 4. Proof verification, which means encapsulating only one proof, the aggregated proof, in an L1 transaction, and transmitting it to the smart contract for batch verification.

-The bedrock of this approach lies in the zkEVM's custom cryptographic backend, which specifically supports proof aggregation.
+The foundation of this approach rests on the zkEVM's custom cryptographic backend, designed specifically to support proof aggregation.

 It allows multiple proofs to be combined into a single verifiable proof.

@@ -217,7 +217,7 @@ Observe that, since proving and closing batches, and aggregating proofs can run

 Hence the denominator in the above formula is the maximum among the values: $\mathtt{prove\_a\_batch\_time}$, $N · \mathtt{close\_a\_batch\_time}$, $\mathtt{block\_time}$, and $\mathtt{aggregation\_time}$.

-This means, in the case where the maximum time in the denominator is $\mathtt{prove\_a\_batch\_time}$, the systems throughput increases by a factor of $N$.
+This means, in the case where the maximum time in the denominator is $\mathtt{prove\_a\_batch\_time}$, the system's throughput increases by a factor of $N$.

 Delay in this scenario can be computed as follows:

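The throughput of $N$ parallel provers with aggregation can be sketched as follows. The parameter values are illustrative assumptions; the denominator is, as stated above, the maximum over the concurrently running pipeline stages.

```python
# Illustrative parameters (seconds); N provers working in parallel.
# These values are assumptions, not the document's actual numbers.
N = 10
close_a_batch_time = 10.0
prove_a_batch_time = 300.0
block_time = 12.0
aggregation_time = 20.0

# Per the formula above, the limiting stage is the maximum of the
# concurrently running pipeline stages.
bottleneck = max(prove_a_batch_time, N * close_a_batch_time,
                 block_time, aggregation_time)
throughput = N / bottleneck  # batches per second

# When proving dominates (as here), throughput improves by a factor of N
# over the single-prover pipeline.
print(throughput)
```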
@@ -229,28 +229,28 @@ A straightforward aggregation of batches substantially increases delay relative

 As discussed earlier, delay is a critical factor to user experience.

-To retain the throughput gains while improving the delay, we can adopt a two-step approach for batch processing: first, _order_ (also known as _sequence_) and then _prove_.
+To retain the throughput gains while reducing the delay, we can adopt a two-step approach for batch processing: first, _order_ (also known as _sequence_) and then _prove_.

-This segmentation allows for optimization in each step, potentially enhancing the overall delay without compromising the achieved throughput improvements.
+This segmentation allows for optimization in each step, potentially reducing the overall delay while maintaining improvements in throughput.

 ### Enhancing delay by order then prove

-The idea behind decoupling batch ordering (sequencing) from batch proving is to:
+The rationale behind decoupling batch ordering (sequencing) from batch proving is twofold:

-- Provide a low delay response to users about their L2 transactions.
-- While being able to aggregate transactions to provide high system throughput.
+- Ensure swift responses to users regarding their L2 transactions with minimal delay.
+- Enable transaction aggregation for maximizing system throughput.

 Sequencing an L2 batch involves deciding which L2 transactions should be part of the next batch. That is, when to create or close the batch, and send it to L1.

-As the sequence of batches is written to L1, data availability and immutability provided on L1.
+As the sequence of batches is written to L1, data availability and immutability are ensured on L1.

 Sequenced batches may not be proved immediately, but they are guaranteed to be proved eventually.

 This creates a state within the L2 system that reflects the eventual outcome of executing those transactions, even though the proof hasn’t been completed yet.

-Such a state is called a virtual state because it represents a future state to be consolidated once the proof is processed.
+Such a state is called a *virtual state* because it represents a future state to be consolidated once the proof is processed.

-More precisely, the virtual state is the state reached after executing and sequencing batches in L1, before they are proved.
+More precisely, the virtual state is the state reached after executing and sequencing batches in L1, before they are validated using proofs.

 ![Figure: Definition of the Virtual state](../../../img/zkEVM/od-defn-virtual-state.png)

@@ -260,7 +260,7 @@ Notable improvement lies in the ability to close batches more rapidly than the b

 Let’s adopt a revised definition for the delay:

-- The duration from the moment a user submits an L2 transaction until that transaction reaches a virtual state.
+- The duration from the moment a user submits an L2 transaction until that transaction reaches the virtual state.

 From the user’s perspective, once the transaction is in the virtual state, it can be regarded as processed.

@@ -274,7 +274,7 @@ $$
 \mathtt{delay}^{(\mathtt{to\_virtual})} = N · \mathtt{close\_a\_batch\_time} + \mathtt{block\_time}
 $$

-Observe that we have experienced an improvement in the delay.
+Note that we have experienced a significant reduction in the delay.

 Below, we present several advantages of decoupling batch sequencing from batch proving:

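A quick numeric sketch of the delay-to-virtual formula above, using the same illustrative parameter values as before. The comparison figure for full consolidation is our own hypothetical accounting, included only to show the order-of-magnitude gap; it is not a formula from the document.

```python
# Illustrative parameters (seconds); assumptions, not the document's numbers.
N = 10
close_a_batch_time = 10.0
block_time = 12.0
prove_a_batch_time = 300.0
aggregation_time = 20.0

# Delay to the virtual state, per the formula above:
# delay_to_virtual = N * close_a_batch_time + block_time
delay_to_virtual = N * close_a_batch_time + block_time

# Hypothetical comparison: waiting for full consolidation would also
# include proving and aggregation (our own accounting, for illustration).
delay_to_consolidated = delay_to_virtual + prove_a_batch_time + aggregation_time

print(delay_to_virtual, delay_to_consolidated)
```

Under these assumed numbers, a user sees their transaction reach the virtual state several times faster than it reaches consolidation, which is the UX improvement the order-then-prove split targets.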