- We capture a comprehensive snapshot of full node data, including pruned state data, once per week.
- Make sure your free disk space is more than twice the size of the snapshot.
- Install zstd on your machine.
- Install tmux so that long-running operations survive a disconnected session.
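The prerequisites above can be checked with a short pre-flight script. A minimal sketch, assuming GNU coreutils `df`; `SNAPSHOT_SIZE_GB` is a value you must set yourself to the size of the snapshot you plan to download:

```shell
# Pre-flight check for the prerequisites above.
# SNAPSHOT_SIZE_GB is an assumption -- set it to your chosen snapshot's size.
SNAPSHOT_SIZE_GB=418

for tool in zstd tmux wget; do
    command -v "$tool" > /dev/null || echo "Missing tool: $tool"
done

# Free space (in GiB) on the filesystem of the current directory.
free_gb=$(df -BG --output=avail . | tail -n 1 | tr -dc '0-9')
if [ "$free_gb" -lt $((SNAPSHOT_SIZE_GB * 2)) ]; then
    echo "Insufficient space: need $((SNAPSHOT_SIZE_GB * 2))G, found ${free_gb}G"
else
    echo "Disk space OK: ${free_gb}G free"
fi
```

Run it from the directory where you intend to store the snapshot, since `df .` reports the filesystem that directory lives on.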
Here are the HBSS snapshots for LevelDB and PebbleDB:
- Mainnet LevelDB: chaindata-20241201.tar.zst
  - MD5: c340ee8de978850f6bebf918519a2662
  - Size: 418G
  - Inspect File: inspect-data-20241201.txt
- Mainnet PebbleDB: pebbledb-chaindata-20241201.tar.zst
  - MD5: c5f0f335fc27e417c41b385524c30082
  - Size: 416G
- Testnet LevelDB: leveldb-testnet-chaindata-20241201.tar.zst
  - MD5: 4112223e59e8836146d7bec563f469cf
  - Size: 44G
  - Inspect File: leveldb-testnet-inspect-data-20241201.txt
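After downloading an archive, it is worth checking it against the published MD5 above before uncompressing. A small helper sketch (the `verify_md5` name is our own; `md5sum` is part of GNU coreutils):

```shell
# verify_md5 <expected-md5> <file>: prints "<file>: OK" and returns zero
# when the checksum matches, non-zero otherwise.
verify_md5() {
    echo "$1  $2" | md5sum -c -
}

# Example for the mainnet LevelDB snapshot:
# verify_md5 c340ee8de978850f6bebf918519a2662 chaindata-20241201.tar.zst
```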
Step 1: Preparation
- Make sure your hardware meets the suggested requirements.
- Use a disk with enough free storage: at least twice the size of the snapshot.
Step 2: Download & Uncompress
- Copy the above snapshot URL.
- Download:
  ```shell
  wget -O chaindata.tar.zst "<paste snapshot URL here>"
  ```
  The download takes one to two hours, so run the command inside a tmux session.
- [OPTIONAL] If you need to speed up the download, use aria2c:
  ```shell
  aria2c -o chaindata.tar.zst -s14 -x14 -k100M https://pub-3cca138de6c349f8afe5f6635f9f6f81.r2.dev/data/(unknown)
  ```
  aria2c may fail occasionally, in which case you need to rerun the download command. To make this convenient, save the following script to a file named download.sh, open a new tmux session, and run:
  ```shell
  chmod +x download.sh && ./download.sh "<paste snapshot URL here>" <your dir>
  ```
```shell
#!/bin/bash
# Retry aria2c until the download completes; abort on unrecoverable errors.
if [ $# -eq 1 ]; then
    dir=$(pwd)
elif [ $# -eq 2 ]; then
    dir=$2
else
    echo "Usage: $0 <uri> [dir]"
    exit 1
fi
uri=$1
filename=$(basename "$uri")
status=-1
while (( status != 0 )); do
    # Do not launch a second instance while one is still running.
    while pgrep -x aria2c > /dev/null; do
        sleep 10
    done
    aria2c -d "$dir" -o "$filename" -s14 -x14 -k100M "$uri"
    status=$?
    echo "aria2c exited with status $status."
    case $status in
        3)
            echo "File does not exist."
            exit 3
            ;;
        9)
            echo "No space left on device."
            exit 9
            ;;
    esac
done
echo "Download succeeded."
exit 0
```
- Performance is quite good compared to the wget command:
  ```
  [#daede1 145GiB/145GiB(99%) CN:1 DL:115MiB]
  10/05 10:34:40 [NOTICE] Download complete: /axie/geth.tar.zst

  Download Results:
  gid   |stat|avg speed  |path/URI
  ======+====+===========+=======================================================
  daede1|OK  |   207MiB/s|/axie/geth.tar.zst

  Status Legend:
  (OK):download completed.

  real    12m2.862s
  user    1m57.320s
  sys     2m28.624s
  ```
- Uncompress:
  ```shell
  tar -I zstd -xvf chaindata.tar.zst
  ```
  Uncompressing takes more than 20 minutes, so run the command inside a tmux session.
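If extraction fails partway through, you can check whether the archive itself is intact: `zstd -t` verifies an archive's internal frame checksums without extracting or writing anything. Demonstrated below on a throwaway file (`sample.txt` is made up for illustration); for the snapshot you would run `zstd -t chaindata.tar.zst`:

```shell
# Create a tiny archive, then verify its checksums in test mode (-t).
# Exit status 0 means every zstd frame decompressed cleanly.
echo "payload" > sample.txt
zstd -q -f sample.txt -o sample.txt.zst
zstd -t sample.txt.zst && echo "archive OK"
```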
- You can combine the above steps by running a script:
  ```shell
  wget -O chaindata.tar.zst "<paste snapshot URL here>"
  tar -I zstd -xvf chaindata.tar.zst
  ```
- If you do not need to store the archive for use with other nodes, you may also extract it while downloading to save time and disk space:
  ```shell
  wget -q -O - "<snapshot URL>" | tar -I zstd -xvf -
  ```
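The streaming form above pipes the compressed stream straight into tar, so no intermediate .tar.zst file ever lands on disk. You can reproduce the same pipe pattern on a tiny local archive (the `demo/` and `out/` names here are made up for illustration):

```shell
# Compress a directory to stdout and stream it straight into tar,
# mirroring the `wget -q -O - ... | tar -I zstd -xvf -` pattern.
mkdir -p demo out && echo "hello" > demo/file.txt
tar -cf - demo | zstd -q | tar -I zstd -xf - -C out
cat out/demo/file.txt   # prints "hello"
```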
Step 3: Install the node
- Now you can follow the instructions step by step from here: Install the node.
- This document covers step 6, (Optional) Download the snapshot, in detail.
Storage size: 9.8 TB; we split the archive into 500 GB files.
- Download && Concatenate && Uncompress:
  ```shell
  for i in {000..019}; do wget "https://ss.roninchain.com/archive-mainnet-chaindata-20240909.tar.zst-$i"; done
  cat "archive-mainnet-chaindata-20240909.tar.zst-"* > chaindata.tar.zst
  tar -I zstd -xvf chaindata.tar.zst
  ```
- If you do not need to store the archive for use with other nodes, you may also extract it while joining files to save time and disk space:
  ```shell
  for i in {000..019}; do wget "https://ss.roninchain.com/archive-mainnet-chaindata-20240909.tar.zst-$i"; done
  mkdir -p chaindata
  cat "archive-mainnet-chaindata-20240909.tar.zst-"* | tar -I zstd -xvf - -C chaindata
  ```
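A missing or truncated part will corrupt the concatenated archive, so it is worth confirming that all 20 parts (-000 through -019) arrived before running cat. A small helper sketch (the `check_parts` name is our own):

```shell
# check_parts <prefix> <expected-count>: confirm every split file is present.
check_parts() {
    local count
    count=$(ls "$1"-* 2> /dev/null | wc -l)
    if [ "$count" -eq "$2" ]; then
        echo "All $2 parts present."
    else
        echo "Expected $2 parts, found $count."
    fi
}

# e.g. check_parts archive-mainnet-chaindata-20240909.tar.zst 20
```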