## Summary

!!! warning
    This local migration test doc is a work in progress and is currently unpublished.

These instructions show you how to fully test the fork 7 to fork 8 migration steps locally.

## Build Docker dev geth with fork 7

1. Clone `zkevm-contracts`, checkout branch `v4.0.0-fork.7`, and install.

    ```sh
    git clone https://github.com/0xPolygonHermez/zkevm-contracts
    cd zkevm-contracts/
    git checkout v4.0.0-fork.7
    npm install
    ```

2. Edit the `docker/scripts/v2/deploy_parameters_docker.json` file to change `"minDelayTimelock"` from `3600` to `1`:

    ```json
    "minDelayTimelock": 1,
    ```

3. Edit the `docker/scripts/v2/create_rollup_parameters_docker.json` file and change `"consensusContract": "PolygonZkEVMEtrog",` to the following:

    ```json
    "consensusContract": "PolygonValidiumEtrog",
    ```

    And add the following parameter:

    ```json
    "dataAvailabilityProtocol": "PolygonDataCommittee",
    ```

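After both edits, the relevant part of `create_rollup_parameters_docker.json` should read something like this fragment (other keys omitted):

```json
{
  "consensusContract": "PolygonValidiumEtrog",
  "dataAvailabilityProtocol": "PolygonDataCommittee"
}
```
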
4. Run the following command:

    ```sh
    cp docker/scripts/v2/hardhat.example.paris hardhat.config.ts
    ```

5. Edit the `docker/scripts/v2/deploy-docker.sh` file to add the following line before the `docker build -t hermeznetwork/geth-zkevm-contracts -f docker/Dockerfile .` line:

    ```sh
    sudo chmod -R go+rxw docker/gethData
    ```

6. Uncomment the following lines in `deployment/v2/4_createRollup.ts`:

    ```ts
    // Setup data commitee to 0
    await (await polygonDataCommittee?.setupCommittee(0, [], "0x")).wait();
    ```

7. Build the image:

    ```sh
    npm run docker:contracts
    ```

8. Tag the image:

    ```sh
    docker image tag hermeznetwork/geth-zkevm-contracts hermeznetwork/geth-zkevm-contracts:test-upgrade-7
    ```

## Build the genesis file for the node

1. Clone the `cdk-validium-node` repo, `cd` into it, and checkout the `v0.5.13+cdk.6` branch.

    ```sh
    git clone https://github.com/0xPolygon/cdk-validium-node
    cd cdk-validium-node
    git checkout v0.5.13+cdk.6
    ```

2. Edit the `test/config/test.genesis.config.json` file with values from the output files in the `zkevm-contracts/docker/deploymentOutput` directory.

    - `l1Config.polygonZkEVMAddress` ==> `rollupAddress` @ create_rollup_output.json
    - `l1Config.polygonRollupManagerAddress` ==> `polygonRollupManager` @ deploy_output.json
    - `l1Config.polTokenAddress` ==> `polTokenAddress` @ deploy_output.json
    - `l1Config.polygonZkEVMGlobalExitRootAddress` ==> `polygonZkEVMGlobalExitRootAddress` @ deploy_output.json
    - `rollupCreationBlockNumber` ==> `createRollupBlock` @ create_rollup_output.json
    - `rollupManagerCreationBlockNumber` ==> `deploymentBlockNumber` @ deploy_output.json
    - `root` ==> `root` @ genesis.json
    - `genesis` ==> `genesis` @ genesis.json

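These lookups can be scripted with `jq` (assuming it is installed). A minimal sketch, using an inline stand-in for `deploy_output.json` so the commands are self-contained; swap in the real file paths from `zkevm-contracts/docker/deploymentOutput`:

```sh
# Stand-in for deploy_output.json; the real file has the same keys (values here are fake).
cat > /tmp/deploy_output.json <<'EOF'
{
  "polygonRollupManager": "0x1234",
  "polTokenAddress": "0x5678",
  "polygonZkEVMGlobalExitRootAddress": "0x9abc"
}
EOF

# Pull each value the same way you would from the real output files.
MANAGER_ADDR=$(jq -r .polygonRollupManager /tmp/deploy_output.json)
POL_ADDR=$(jq -r .polTokenAddress /tmp/deploy_output.json)
GER_ADDR=$(jq -r .polygonZkEVMGlobalExitRootAddress /tmp/deploy_output.json)
echo "$MANAGER_ADDR"   # → 0x1234
```

The extracted values can then be pasted (or templated) into `test/config/test.genesis.config.json`.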
## Run the network using fork 7

1. Build the Docker image: `make build-docker`.

2. Edit `test/docker-compose.yml` to use the geth image from the first step. Replace:

    - `hermeznetwork/geth-cdk-validium-contracts:v0.0.4` => `hermeznetwork/geth-zkevm-contracts:test-upgrade-7`

3. Run the whole stack: `cd test` then `make run`.

4. Send some transactions using MetaMask:

    - Import the private key `0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80`, which has funds on L2 since genesis.
    - The chain ID is `1001`.
    - The RPC port is `8123`.

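To sanity-check the RPC before pointing MetaMask at it, note that `eth_chainId` returns the chain ID hex-encoded. A sketch (the endpoint URL is an assumption based on the port above):

```sh
# chainID 1001 comes back from the RPC hex-encoded; compute the expected value.
printf 'expected eth_chainId: 0x%x\n' 1001

# With the stack running, verify the node answers with that value:
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
#   http://localhost:8123
```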
## Stop the sequencer in a clean state (no WIP batch) to avoid reorgs

This is explained in more detail, oriented to a production environment, [here](?).

1. Make sure you are in the `test` directory.

2. Stop the sequencer:

    ```sh
    docker compose stop zkevm-sequencer && docker compose rm -f zkevm-sequencer
    ```

3. Connect to the StateDB using:

    ```sh
    psql -h localhost -p 5432 -U state_user state_db
    ```

    then enter the password `state_password`.

4. Get the WIP batch number:

    ```sql
    SELECT batch_num, wip FROM state.batch WHERE wip IS true;
    ```

!!! danger
    The query returns a batch number, X. Write down the value of X, as it is used in several places below.

5. Edit the node config (`test/config/test.node.config.toml`).

6. Change: `Sequencer.Finalizer.HaltOnBatchNumber = X+1 # wip batch_num+1`

    For a production environment you may want to tweak other values (see the doc linked above; link pending); for the local dev environment the defaults are fine.

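As a concrete illustration, if the WIP query above returned X = 42, the finalizer section of the config would look like this (the value `43` is purely illustrative):

```toml
[Sequencer.Finalizer]
HaltOnBatchNumber = 43 # X+1, where X is the WIP batch_num from the query above
```
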
7. Restart the sequencer:

    ```sh
    docker compose up -d zkevm-sequencer
    ```

8. Check that the sequencer halted when reaching batch X+1:

    ```sh
    docker logs -f zkevm-sequencer
    ```

9. Wait until all pending batches are virtualized and verified (up to X). This can be checked on Etherscan, using a custom RPC endpoint, or with:

    ```sql
    SELECT batch_num FROM state.virtual_batch ORDER BY batch_num DESC LIMIT 1; -- should return X
    SELECT batch_num FROM state.verified_batch ORDER BY batch_num DESC LIMIT 1; -- should return X
    ```

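Rather than re-running the queries by hand, you can poll until they reach X. A small helper sketch; the commented `psql` invocation shows how you would wire in the query above (the flags and `PGPASSWORD` usage are standard `psql`, but the whole usage is an assumption about your setup):

```sh
# Poll a command until its output equals a target value, checking every 5 seconds.
poll_until() {
  target="$1"; shift
  until [ "$("$@")" = "$target" ]; do sleep 5; done
}

# Demo with a trivial command; returns immediately.
poll_until ok echo ok && echo "reached target"

# Against the StateDB, with X the batch number noted earlier
# (export PGPASSWORD=state_password to avoid the prompt):
# poll_until "$X" psql -h localhost -p 5432 -U state_user -tAc \
#   'SELECT batch_num FROM state.verified_batch ORDER BY batch_num DESC LIMIT 1;' state_db
```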
10. Stop all the node components:

    ```sh
    make stop-node && make stop-zkprover
    ```

## L1 interactions to upgrade CDK to fork 8

On `zkevm-contracts`, checkout the `develop` branch:

```sh
git stash && git checkout develop && npm i
```

### Deploy verifier

1. Run:

    ```sh
    cp tools/deployVerifier/deploy_verifier_parameters.example tools/deployVerifier/deploy_verifier_parameters.json
    ```

2. Edit `tools/deployVerifier/deploy_verifier_parameters.json`:

    - `realVerifier` ==> `false`

3. Run:

    ```sh
    cp docker/scripts/v2/hardhat.example.paris hardhat.config.ts
    ```

4. Deploy the verifier:

    ```sh
    npx hardhat run tools/deployVerifier/deployVerifier.ts --network localhost
    ```

5. Write down the deployed address. It only appears in the logs, as something similar to `verifierContract deployed to: 0xa85233C63b9Ee964Add6F2cffe00Fd84eb32338f`.

!!! warning
    On production this step should be skipped, as the fork 8 verifier should already be deployed (it is already being used by Hermez).

### Add rollup type

1. Edit `tools/addRollupType/add_rollup_type.json` using values from the output files at `docker/deploymentOutput`:

    - `consensusContract` ==> `PolygonValidiumEtrog`
    - `polygonRollupManagerAddress` ==> `polygonRollupManager` @ deploy_output.json
    - `polygonZkEVMBridgeAddress` ==> `polygonZkEVMBridgeAddress` @ deploy_output.json
    - `polygonZkEVMGlobalExitRootAddress` ==> `polygonZkEVMGlobalExitRootAddress` @ deploy_output.json
    - `polTokenAddress` ==> `polTokenAddress` @ deploy_output.json
    - `verifierAddress` ==> value output in the logs of the previous step
    - `timelockDelay` ==> `0`

2. Run:

    ```sh
    npx hardhat run tools/addRollupType/addRollupType.ts --network localhost
    ```

    It should output: `Added new Rollup Type deployed`.

3. Write down the rollup type ID. It only appears in the logs, as something similar to `type: 2`. Note that the logged value can be incorrect; a better way to detect the correct type ID is still needed.

!!! warning
    The procedure is not the same when using timelocks!

### Update rollup

1. Run:

    ```sh
    cp tools/updateRollup/updateRollup.json.example tools/updateRollup/updateRollup.json
    ```

2. Edit `tools/updateRollup/updateRollup.json` using values from the output files at `docker/deploymentOutput`:

    - `rollupAddress` ==> `rollupAddress` @ create_rollup_output.json
    - `newRollupTypeID` ==> value output in the logs of the previous step (put `2` if running with docker as per these instructions)
    - `polygonRollupManagerAddress` ==> `polygonRollupManager` @ deploy_output.json
    - `timelockDelay` ==> `minDelayTimelock` @ docker/scripts/v2/deploy_parameters_docker.json
    - (ADD) `timelockAddress` ==> `timelockContractAddress` @ deploy_output.json

#### With timelock (NOT TESTED)

1. Run:

    ```sh
    npx hardhat run tools/updateRollup/updateRollup.ts --network localhost
    ```

2. Create `tools/updateRollup/executeUpdate.ts` and `tools/updateRollup/scheduleUpdate.ts`.

3. Run:

    ```sh
    npx hardhat run tools/updateRollup/scheduleUpdate.ts --network localhost
    ```

4. Wait for the timelock delay to elapse (just one second).

5. Run:

    ```sh
    npx hardhat run tools/updateRollup/executeUpdate.ts --network localhost
    ```

#### Without timelock

1. Create `tools/updateRollup/noTimelock.ts`.

2. Run:

    ```sh
    npx hardhat run tools/updateRollup/noTimelock.ts --network localhost
    ```

3. TODO: implement a check to verify that the transaction went through.

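One way to close that gap would be checking the transaction receipt: a successful receipt carries `"status": "0x1"`. A sketch parsing a sample receipt with `jq` (the inline JSON, the RPC endpoint, and `<TX_HASH>` are all assumptions/placeholders, not values from this setup):

```sh
# Stand-in for an eth_getTransactionReceipt response; a successful tx has status 0x1.
cat > /tmp/receipt.json <<'EOF'
{"result": {"status": "0x1"}}
EOF
STATUS=$(jq -r .result.status /tmp/receipt.json)
[ "$STATUS" = "0x1" ] && echo "tx succeeded"

# Live check once you have the hash from the hardhat logs:
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d '{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["<TX_HASH>"],"id":1}' \
#   http://localhost:8545 | jq -r .result.status
```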
!!! warning
    After the upgrade, the `dataAvailabilityProtocol` of the Validium contract is lost (set to `0x000…0`); it needs to be set up again using the script at the bottom of the doc.

## Upgrade node to fork 8

!!! tip
    It is recommended to keep the log level set to debug until the upgrade is confirmed to be successful.

1. Make sure you are in the root directory of `cdk-validium-node`.

2. Back up the genesis file so you don't need to rewrite it:

    ```sh
    cp test/config/test.genesis.config.json /tmp
    ```

3. Update the node version:

    ```sh
    git stash && git checkout v0.6.2+cdk
    ```

4. Build the Docker image (in the root directory):

    ```sh
    make build-docker
    ```

5. Restore the genesis file:

    ```sh
    mv /tmp/test.genesis.config.json test/config
    ```

6. Run the synchronizer:

    ```sh
    cd test && make run-zkprover && docker compose up -d zkevm-sync
    ```

7. Connect to the StateDB using:

    ```sh
    psql -h localhost -p 5432 -U state_user state_db
    ```

    then enter the password `state_password`.

8. Query the registered fork IDs:

    ```sql
    SELECT * FROM state.fork_id;
    ```

    You should get two rows, one with fork 7 and the other with fork 8 (it may take a while).

9. Start the rest of the node components:

    ```sh
    make run-node
    ```

10. Send some transactions using MetaMask.

11. Wait until the new batches are virtualized and verified (batch numbers greater than X). This can be checked on Etherscan, using a custom RPC endpoint, or with:

    ```sql
    SELECT batch_num FROM state.batch ORDER BY batch_num DESC LIMIT 1; -- should return > X
    SELECT batch_num FROM state.virtual_batch ORDER BY batch_num DESC LIMIT 1; -- should return > X
    SELECT batch_num FROM state.verified_batch ORDER BY batch_num DESC LIMIT 1; -- should return > X
    ```