When setting up a new sentry, validator, or full node server, it is recommended to use snapshots for faster syncing rather than syncing from scratch over the network.
## Community snapshots
### Amoy and beyond

Polygon PoS has transitioned to a community-driven model for snapshots. Active community members now contribute to provide snapshots. Some of these members include:

| Provider | Network support | Notes |
| --- | --- | --- |
| Stakecraft | Mainnet, Amoy, Erigon | Support for Erigon archive snapshot |
|[PublicNode (by Allnodes)](https://publicnode.com/snapshots#polygon)| Mainnet, Amoy | Support for PBSS + PebbleDB enabled snapshot |
| Stakepool | Mainnet, Amoy | - |
| Vaultstaking | Mainnet | - |
| Girnaar Nodes | Amoy | - |

!!! info "Snapshot aggregator"

    Visit [All4nodes.io](https://all4nodes.io/Polygon) for a comprehensive list of community snapshots.

### Legacy snapshots

If you're looking for older snapshots, please visit [Polygon Chains Snapshots](https://snapshot.polygon.technology/).

!!! note

    Bor archive snapshots are no longer supported due to unsustainable data growth.

## Downloading and using client snapshots
To begin, ensure that your node environment meets the **prerequisites** outlined [here](../how-to/full-node/full-node-binaries.md).

Most snapshot providers also document the steps needed to download and use their client snapshots. Navigate to [All4nodes](https://all4nodes.io/Polygon) to find the source for each snapshot.

In case the steps are unavailable or the procedure is unclear, the following tips will come in handy:

- You can use the `wget` command to download and extract the `.tar` snapshot files, as shown in the example after this list.
- Configure your client's `datadir` setting to match the directory where you downloaded and extracted the snapshot data. This ensures the `systemd` services can correctly register the snapshot data when the client is spun up.

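For instance, here is a minimal download-and-extract sketch. It assumes a hypothetical provider URL, an uncompressed `.tar` archive, and Heimdall's default data directory; substitute the link, file name, and target path published by your chosen provider:

```bash
# Hypothetical snapshot URL: replace it with the link published by your provider.
SNAPSHOT_URL="https://snapshots.example.io/heimdall-mainnet.tar"

# Stream the download straight into tar so the archive never has to be stored twice.
wget -qO- "$SNAPSHOT_URL" | tar -xvf - -C /var/lib/heimdall/data

# Or download first and extract afterwards:
# wget -c "$SNAPSHOT_URL"
# tar -xvf heimdall-mainnet.tar -C /var/lib/heimdall/data
```

Many providers ship compressed archives (for example `.tar.zst` or `.tar.lz4`), in which case the extraction step differs; follow the provider's own instructions where they exist.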
!!! tip

    To maintain your client's default configuration settings, consider using symbolic links (symlinks).

## Example
Let's say you have mounted your block device at `~/snapshots` and have downloaded and extracted the chain data into the `heimdall_extract` directory for Heimdall, and into the `bor_extract` directory for Bor. Use the following commands to register the extracted data for Heimdall and Bor `systemd` services:
```bash
# remove any existing datadirs for Heimdall and Bor
sudo rm -rf /var/lib/heimdall/data
sudo rm -rf /var/lib/bor/data/bor

# symlink the extracted snapshot data (example paths; adjust to your mount point and extract directories)
sudo ln -s ~/snapshots/heimdall_extract /var/lib/heimdall/data
sudo ln -s ~/snapshots/bor_extract /var/lib/bor/data/bor

sudo service heimdalld start
sudo service bor start
```

!!! tip "Appropriate user permissions"

    Ensure that the Bor and Heimdall user files have appropriate permissions to access the `datadir`. To set correct permissions for Bor, execute `sudo chown -R bor:nogroup /var/lib/bor/data/bor`. Similarly, for Heimdall, run `sudo chown -R heimdall:nogroup /var/lib/heimdall/data`.
## Archive node snapshots

The Polygon PoS network is deprecating archive node snapshots. Please move to the Erigon client and use Erigon snapshots to sync your nodes.
### Polygon mainnet Erigon archive
Please check the hardware requirements for an Erigon mainnet archive node on the [pre-requisites page for deploying a Polygon node using Erigon](https://erigon.gitbook.io/erigon/basic-usage/getting-started#hardware-requirements).

- Disk IOPS will affect the speed of downloading/extracting snapshots, getting in sync, and performing LevelDB compaction.
- To minimize disk latency, direct-attached storage is ideal.
- In AWS, when using gp3 disk types, we recommend provisioning IOPS of 16,000 and throughput of 1,000 (see the sketch after this list). This minimizes costs while providing significant performance benefits. io2 EBS volumes with matching IOPS and throughput values offer similar performance.
- For GCP, we recommend using performance (SSD) persistent disks (`pd-ssd`) or extreme persistent disks (`pd-extreme`) with similar IOPS and throughput values as mentioned above.
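As an illustrative sketch only, provisioning a gp3 volume with these values through the AWS CLI could look like the following; the size and availability zone are placeholders to adjust for your own deployment:

```bash
# Illustrative only: create a gp3 EBS volume with the recommended IOPS and throughput.
# --size (in GiB) and --availability-zone are placeholders; choose values for your node.
aws ec2 create-volume \
  --volume-type gp3 \
  --iops 16000 \
  --throughput 1000 \
  --size 4096 \
  --availability-zone us-east-1a
```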