# Fossil Light Client - Technical Documentation
This documentation outlines two deployment approaches for the Fossil Light Client:
- 🐳 Docker-Based Deployment: recommended for most users; handles all dependencies automatically
- 🔧 Manual Compilation: for development and debugging; runs light client binaries from source
- Clone the repository:

  ```bash
  git clone https://github.com/OilerNetwork/fossil-light-client.git
  cd fossil-light-client
  ```
- Initialize the repository:

  ```bash
  git submodule update --init --recursive
  ```
- Install Yarn:

  For macOS:

  ```bash
  # Using Homebrew
  brew install yarn

  # Using npm
  npm install --global yarn
  ```

  For Linux:

  ```bash
  # Using npm
  npm install --global yarn

  # Using Debian/Ubuntu
  curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
  echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
  sudo apt update
  sudo apt install yarn
  ```

  For Windows:

  ```bash
  # Using npm
  npm install --global yarn

  # Using Chocolatey
  choco install yarn

  # Using Scoop
  scoop install yarn
  ```
- Install IPFS:
  - Download and install IPFS Desktop
  - Ensure the IPFS daemon is running before proceeding
- Platform-specific requirements:

  For macOS:

  ```bash
  # Install Python toolchain and gettext
  brew install python
  brew install gettext

  # Add to ~/.zshrc or ~/.bash_profile:
  export PATH="/usr/local/opt/python/libexec/bin:$PATH"
  ```

  For Linux: no additional requirements.
To run the documentation locally:

```bash
cd docs/
yarn
yarn start
```
This will start a local server and open the documentation in your default browser. The documentation will automatically reload when you make changes to the source files.
⚠️ **Note:** The Docker-based deployment is currently under development and not functional. Please use the Manual Compilation and Execution method instead.
- Install Docker Desktop (includes Docker Engine and Docker Compose)
- For Linux only: install Docker Buildx

  ```bash
  mkdir -p ~/.docker/cli-plugins/
  curl -L https://github.com/docker/buildx/releases/download/v0.12.1/buildx-v0.12.1.linux-amd64 -o ~/.docker/cli-plugins/docker-buildx
  chmod +x ~/.docker/cli-plugins/docker-buildx
  ```
- Set up configuration:

  ```bash
  cp config/.env.example .env
  cp config/.env.docker.example .env.docker
  ```
- Build images:

  ```bash
  chmod +x scripts/build-images.sh
  ./scripts/build-images.sh
  ```
- Start core infrastructure:

  ```bash
  docker-compose up -d
  docker-compose logs -f  # Monitor until initialization is complete
  ```
- Deploy services:

  ```bash
  # Initialize the MMR builder
  docker-compose -f docker-compose.services.yml run --rm mmr-builder

  # Deploy the relayer and client
  docker-compose -f docker-compose.services.yml up -d relayer
  docker-compose -f docker-compose.services.yml up -d client
  ```

- Manage the deployment:

  ```bash
  # View containers
  docker ps

  # View logs
  docker-compose logs -f
  docker-compose -f docker-compose.services.yml logs -f

  # Stop everything
  docker-compose down
  docker-compose -f docker-compose.services.yml down
  ```
This setup uses Docker only for networks (Ethereum & StarkNet) and contract deployments, while running light client components directly with Cargo.
- Install Rust:

  ```bash
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  ```
- Install Risc0:

  ```bash
  curl -L https://risczero.com/install | bash && rzup
  ```
- Configure environment:

  ```bash
  cp config/.env.local.example .env.local
  ```
- Start networks and deploy contracts:

  ```bash
  chmod +x scripts/build-network.sh
  ./scripts/build-network.sh
  docker-compose up
  ```

  Wait for the `deploy-starknet` container to finish deploying all StarkNet contracts. The deployment is complete when a log message indicates that the environment variables have been updated (this may take a few minutes).
- Build the project:

  ```bash
  cargo build
  ```
- Build the MMR and generate proofs. This step will:

  - Start from the latest Ethereum finalized block and process blocks backwards in 2 batches of 1024 blocks each (2048 blocks total)
  - Generate a ZK proof of computation for each batch
  - Create and store .db files for each MMR batch and upload them to IPFS
  - Generate and verify Groth16 proofs on StarkNet for batch correctness
  - Extract the batch state from the proof journal and store it in the Fossil Store contract

  ```bash
  cargo run --bin build-mmr -- --num-batches 2 --env-file .env.local
  ```
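The batch layout described above can be sketched as follows. This is an illustration only: the 1024-block batch size comes from the description above, while the helper name and partitioning logic are assumptions, not `build-mmr` internals.

```rust
/// Illustrative sketch: partition blocks into MMR batches of 1024,
/// working backwards from the latest finalized block.
const BATCH_SIZE: u64 = 1024;

/// Returns (start_block, end_block) ranges, newest batch first.
fn batch_ranges(latest_finalized: u64, num_batches: u64) -> Vec<(u64, u64)> {
    let mut ranges = Vec::new();
    let mut end = latest_finalized;
    for _ in 0..num_batches {
        // Each batch covers BATCH_SIZE blocks, clamped at the genesis block.
        let start = end.saturating_sub(BATCH_SIZE - 1);
        ranges.push((start, end));
        if start == 0 {
            break;
        }
        end = start - 1;
    }
    ranges
}

fn main() {
    // With --num-batches 2 and a hypothetical latest finalized block:
    let ranges = batch_ranges(20_000_000, 2);
    assert_eq!(ranges, vec![(19_998_977, 20_000_000), (19_997_953, 19_998_976)]);
    let total: u64 = ranges.iter().map(|(s, e)| e - s + 1).sum();
    assert_eq!(total, 2048); // 2 batches * 1024 blocks
    println!("processed {total} blocks in {} batches", ranges.len());
}
```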
- Start the relayer. This step will:

  - Monitor the latest finalized block on Ethereum
  - Call the L1 contract to relay the finalized block hash to StarkNet
  - Automatically retry on failures and continue monitoring
  - Run as a background service with configurable intervals (default: 3 minutes for local testing)

  ```bash
  chmod +x scripts/run_relayer_local.sh
  ./scripts/run_relayer_local.sh
  ```
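The retry-on-failure behavior described above can be sketched as a small loop. This is a hypothetical illustration, not the relayer's actual code: `relay_once` stands in for the real L1 relay call, and the backoff policy is an assumption.

```rust
use std::{thread, time::Duration};

/// Hypothetical sketch: call `relay_once` until it succeeds or attempts
/// are exhausted, backing off briefly between failures.
fn run_with_retries<F>(mut relay_once: F, max_attempts: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::from("no attempts made");
    for attempt in 1..=max_attempts {
        match relay_once() {
            Ok(()) => return Ok(()),
            Err(e) => {
                last_err = e;
                // Brief linear backoff before retrying (kept short for the demo).
                thread::sleep(Duration::from_millis(10 * u64::from(attempt)));
            }
        }
    }
    Err(last_err)
}

fn main() {
    // Simulate a relay call that fails twice (e.g. RPC timeouts), then succeeds.
    let mut calls = 0;
    let result = run_with_retries(
        || {
            calls += 1;
            if calls < 3 {
                Err("rpc timeout".to_string())
            } else {
                Ok(())
            }
        },
        5,
    );
    assert!(result.is_ok());
    assert_eq!(calls, 3);
    println!("relayed after {calls} attempts");
}
```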
- Start the client. This step will:

  - Monitor the Fossil Store contract on StarkNet for new block hash events
  - Upon receiving a new block hash:
    - Fetch block headers from the latest MMR root up to the new block hash
    - Update the local light client state with the new block headers
    - Verify the cryptographic proofs for each block header
  - Maintain a buffer of recent blocks to handle potential chain reorganizations

  ```bash
  cargo run --bin client -- --env-file .env.local
  ```
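The recent-block buffer mentioned above can be illustrated with a minimal sketch. The names and structure here are assumptions for illustration, not the client's actual implementation: the idea is to keep the last N (number, hash) pairs so a reorg is detected when a known block number reappears with a different hash.

```rust
use std::collections::VecDeque;

/// Illustrative bounded buffer of recent block hashes.
struct RecentBlocks {
    capacity: usize,
    blocks: VecDeque<(u64, String)>, // (block number, block hash)
}

impl RecentBlocks {
    fn new(capacity: usize) -> Self {
        Self { capacity, blocks: VecDeque::new() }
    }

    /// Append a new block, evicting the oldest once the buffer is full.
    fn push(&mut self, number: u64, hash: &str) {
        if self.blocks.len() == self.capacity {
            self.blocks.pop_front();
        }
        self.blocks.push_back((number, hash.to_string()));
    }

    /// A reorg is detected when a tracked block number reappears with a new hash.
    fn is_reorg(&self, number: u64, hash: &str) -> bool {
        self.blocks.iter().any(|(n, h)| *n == number && h != hash)
    }
}

fn main() {
    let mut buf = RecentBlocks::new(2);
    buf.push(100, "0xaaa");
    buf.push(101, "0xbbb");
    assert!(!buf.is_reorg(101, "0xbbb")); // same hash: no reorg
    assert!(buf.is_reorg(101, "0xccc")); // replaced hash: reorg detected
    buf.push(102, "0xddd"); // evicts block 100
    assert!(!buf.is_reorg(100, "0xeee")); // evicted, no longer tracked
    println!("reorg buffer ok");
}
```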
- Test fee proof fetching. In a new terminal, fetch the average fees for a block range from the Fossil Store contract:

  ```bash
  starkli call <fossil_store_contract_address> get_avg_fees_in_range <start_timestamp> <end_timestamp> --rpc http://localhost:5050
  ```

  Note: The queried range should match the blocks that were added to the MMR in the Build MMR step above. You can find these block numbers in the `build-mmr` output logs.
When requesting state proofs for fees, you can query any hour-aligned timestamp or range within the processed blocks. The system aggregates fees hourly and requires timestamps to be multiples of 3600 seconds (1 hour).
For example, if blocks from timestamp 1704067200 (Jan 1, 2024 00:00:00 UTC) to 1704153600 (Jan 2, 2024 00:00:00 UTC) have been processed:
- You can query a single hour: 1704070800 (Jan 1, 2024 01:00:00 UTC)
- Or a range: 1704067200 to 1704153600 (full 24 hours)
- Or any subset of hours within these bounds
Key validation rules:
- All timestamps must be hour-aligned (multiples of 3600 seconds)
- For range queries, the start timestamp must be ≤ the end timestamp
- Queries return weighted average fees based on number of blocks in each hour
Note: While blocks are processed in batches internally, fee queries operate on hour boundaries regardless of batch structure.
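The validation rules above can be expressed as a small check. This is an illustrative, client-side sketch of the rules as stated; the function name is hypothetical and is not the Fossil Store contract's API.

```rust
/// One hour in seconds; fee aggregation operates on hour boundaries.
const HOUR: u64 = 3600;

/// Sketch of the validation rules: both timestamps hour-aligned,
/// and start must not exceed end.
fn validate_fee_range(start: u64, end: u64) -> Result<(), &'static str> {
    if start % HOUR != 0 || end % HOUR != 0 {
        return Err("timestamps must be hour-aligned (multiples of 3600)");
    }
    if start > end {
        return Err("start timestamp must be <= end timestamp");
    }
    Ok(())
}

fn main() {
    // Full 24-hour range from the example: Jan 1 to Jan 2, 2024 (UTC).
    assert!(validate_fee_range(1_704_067_200, 1_704_153_600).is_ok());
    // A single hour is a range with start == end.
    assert!(validate_fee_range(1_704_070_800, 1_704_070_800).is_ok());
    // Rejected: not a multiple of 3600 seconds.
    assert!(validate_fee_range(1_704_067_201, 1_704_153_600).is_err());
    // Rejected: start after end.
    assert!(validate_fee_range(1_704_153_600, 1_704_067_200).is_err());
    println!("validation rules ok");
}
```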
- Reset deployment:

  ```bash
  docker-compose down
  docker-compose -f docker-compose.services.yml down
  docker network rm fossil-network
  ```
- Remove orphaned containers:

  ```bash
  docker-compose up -d --remove-orphans
  ```
- Ensure IPFS daemon is running
- Verify Docker network connectivity
- Check logs:

  ```bash
  docker-compose logs -f
  ```