34 changes: 30 additions & 4 deletions README.md
@@ -11,10 +11,10 @@ Go implementation for client to interact with storage nodes in 0G Storage network

The following packages help applications integrate with the 0g storage network:

- **[core](core)**: provides underlying utilities to build merkle tree for files or iterable data, and defines data padding standard to interact with [Flow contract](contract/contract.go).
- **[core](core)**: provides underlying utilities to build merkle tree for files or iterable data, defines data padding standard to interact with [Flow contract](contract/contract.go), and implements client-side AES-256-CTR encryption for file uploads.
- **[node](node)**: defines RPC client structures to facilitate RPC interactions with 0g storage nodes and 0g key-value (KV) nodes.
- **[kv](kv)**: defines structures to interact with 0g storage kv.
- **[transfer](transfer)** : defines data structures and functions for transferring data between local and 0g storage.
- **[kv](kv)**: defines structures to interact with 0g storage kv, with optional stream data encryption via `UploadOption.EncryptionKey`.
- **[transfer](transfer)**: defines data structures and functions for transferring data between local and 0g storage, including encrypted upload/download support via `UploadOption.EncryptionKey` and `Downloader.WithEncryptionKey`.
- **[indexer](indexer)**: selects storage nodes for uploading data via an indexer that maintains a trusted node list, and allows clients to download files via HTTP GET requests.

## CLI
@@ -53,13 +53,31 @@ To generate a file for test purpose, with a fixed file size or random file size

The client will submit the data segments to the storage nodes, which are determined by the indexer according to their shard configurations.

**Upload with encryption**

Encrypt files client-side using AES-256-CTR before uploading. The encryption key is a hex-encoded 32-byte key with a `0x` prefix:

```
./0g-storage-client upload --url <blockchain_rpc_endpoint> --key <private_key> --indexer <storage_indexer_endpoint> --file <file_path> --encryption-key <0x_hex_encoded_32_byte_key>
```
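The CLI does not generate keys for you; any cryptographically random 32 bytes will do. A minimal sketch of producing a key in the expected flag format (the `generateKey` helper is illustrative, not part of the client):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// generateKey returns a fresh random 32-byte AES-256 key in the
// 0x-prefixed hex form the --encryption-key flag expects
// (66 characters total: "0x" + 64 hex digits).
func generateKey() (string, error) {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		return "", err
	}
	return "0x" + hex.EncodeToString(key), nil
}

func main() {
	key, err := generateKey()
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
}
```

Store the key safely; CTR mode offers no way to recover the data if the key is lost.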

**Download file**
```
./0g-storage-client download --indexer <storage_indexer_endpoint> --root <file_root_hash> --file <output_file_path>
```

To verify the **merkle proof** of each downloaded segment, specify the `--proof` option.

**Download with decryption**

To download and decrypt a file that was uploaded with an encryption key:

```
./0g-storage-client download --indexer <storage_indexer_endpoint> --root <file_root_hash> --file <output_file_path> --encryption-key <0x_hex_encoded_32_byte_key>
```

The encryption key must match the one used during upload.

**Write to KV**

By indexer:
@@ -69,13 +87,21 @@

`--stream-keys` and `--stream-values` are comma-separated string lists and must have the same number of entries.

**Write to KV with encryption**

```
./0g-storage-client kv-write --url <blockchain_rpc_endpoint> --key <private_key> --indexer <storage_indexer_endpoint> --stream-id <stream_id> --stream-keys <stream_keys> --stream-values <stream_values> --encryption-key <0x_hex_encoded_32_byte_key>
```

The entire stream data is encrypted client-side using AES-256-CTR before uploading. The KV node must be configured with the encryption key to decrypt and replay the data.

**Read from KV**

```
./0g-storage-client kv-read --node <kv_node_rpc_endpoint> --stream-id <stream_id> --stream-keys <stream_keys>
```

Please note that `--node` here is the URL of a KV node.
Please note that `--node` here is the URL of a KV node. If the data was written with encryption, the KV node handles decryption during replay; no encryption key is needed for reading.

## Indexer

17 changes: 17 additions & 0 deletions cmd/download.go
@@ -9,6 +9,7 @@ import (
	"github.com/0gfoundation/0g-storage-client/indexer"
	"github.com/0gfoundation/0g-storage-client/node"
	"github.com/0gfoundation/0g-storage-client/transfer"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)
@@ -23,6 +24,8 @@ type downloadArgument struct {
	roots []string
	proof bool

	encryptionKey string

	routines int

	timeout time.Duration
@@ -43,6 +46,8 @@ func bindDownloadFlags(cmd *cobra.Command, args *downloadArgument) {

	cmd.Flags().BoolVar(&args.proof, "proof", false, "Whether to download with merkle proof for validation")

	cmd.Flags().StringVar(&args.encryptionKey, "encryption-key", "", "Hex-encoded 32-byte AES-256 encryption key for file decryption")

	cmd.Flags().IntVar(&args.routines, "routines", runtime.GOMAXPROCS(0), "number of go routines for downloading simultaneously")

	cmd.Flags().DurationVar(&args.timeout, "timeout", 0, "cli task timeout, 0 for no timeout")
@@ -100,6 +105,18 @@ func download(*cobra.Command, []string) {
			logrus.WithError(err).Fatal("Failed to initialize downloader")
		}
		downloaderImpl.WithRoutines(downloadArgs.routines)
		if downloadArgs.encryptionKey != "" {
			keyBytes, err := hexutil.Decode(downloadArgs.encryptionKey)
			if err != nil {
				closer()
				logrus.WithError(err).Fatal("Failed to decode encryption key")
			}
			if len(keyBytes) != 32 {
				closer()
				logrus.Fatal("Encryption key must be exactly 32 bytes (64 hex characters)")
			}
			downloaderImpl.WithEncryptionKey(keyBytes)
		}
		downloader = downloaderImpl
		defer closer()
	}
21 changes: 18 additions & 3 deletions cmd/kv_write.go
@@ -13,6 +13,7 @@ import (
	"github.com/0gfoundation/0g-storage-client/node"
	"github.com/0gfoundation/0g-storage-client/transfer"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/common/hexutil"
	"github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
)
@@ -39,9 +40,10 @@ var (
		fee   float64
		nonce uint

		method      string
		fullTrusted bool
		timeout     time.Duration
		method        string
		fullTrusted   bool
		timeout       time.Duration
		encryptionKey string
	}

	kvWriteCmd = &cobra.Command{
@@ -83,6 +85,7 @@ func init() {
	kvWriteCmd.Flags().UintVar(&kvWriteArgs.nonce, "nonce", 0, "nonce of upload transaction")
	kvWriteCmd.Flags().StringVar(&kvWriteArgs.method, "method", "random", "method for selecting nodes, can be max, min, random, or positive number, if provided a number, will fail if the requirement cannot be met")
	kvWriteCmd.Flags().BoolVar(&kvWriteArgs.fullTrusted, "full-trusted", false, "whether all selected nodes should be from trusted nodes")
	kvWriteCmd.Flags().StringVar(&kvWriteArgs.encryptionKey, "encryption-key", "", "Hex-encoded 32-byte AES-256 encryption key for encrypting the stream data")

	rootCmd.AddCommand(kvWriteCmd)
}
@@ -111,6 +114,17 @@ func kvWrite(*cobra.Command, []string) {
	if kvWriteArgs.finalityRequired {
		finalityRequired = transfer.FileFinalized
	}
	var encryptionKey []byte
	if kvWriteArgs.encryptionKey != "" {
		var err error
		encryptionKey, err = hexutil.Decode(kvWriteArgs.encryptionKey)
		if err != nil {
			logrus.WithError(err).Fatal("Failed to decode encryption key")
		}
		if len(encryptionKey) != 32 {
			logrus.Fatalf("Encryption key must be 32 bytes, got %d", len(encryptionKey))
		}
	}
	opt := transfer.UploadOption{
		FinalityRequired: finalityRequired,
		TaskSize:         kvWriteArgs.taskSize,
@@ -120,6 +134,7 @@
		Nonce:         nonce,
		Method:        kvWriteArgs.method,
		FullTrusted:   kvWriteArgs.fullTrusted,
		EncryptionKey: encryptionKey,
	}

	var clients *transfer.SelectedNodes
17 changes: 17 additions & 0 deletions cmd/upload.go
@@ -64,6 +64,8 @@ type uploadArgument struct {

	timeout time.Duration

	encryptionKey string

	flowAddress   string
	marketAddress string
}
@@ -97,6 +99,8 @@ func bindUploadFlags(cmd *cobra.Command, args *uploadArgument) {

	cmd.Flags().DurationVar(&args.timeout, "timeout", 0, "cli task timeout, 0 for no timeout")

	cmd.Flags().StringVar(&args.encryptionKey, "encryption-key", "", "Hex-encoded 32-byte AES-256 encryption key for file encryption")

	cmd.Flags().StringVar(&args.flowAddress, "flow-address", "", "Flow contract address (skip storage node status call when set)")
	cmd.Flags().StringVar(&args.marketAddress, "market-address", "", "Market contract address (optional, skip flow lookup when set)")
}
@@ -162,6 +166,18 @@ func upload(*cobra.Command, []string) {
	if uploadArgs.maxGasPrice > 0 {
		maxGasPrice = big.NewInt(int64(uploadArgs.maxGasPrice))
	}
	var encryptionKey []byte
	if uploadArgs.encryptionKey != "" {
		var err error
		encryptionKey, err = hexutil.Decode(uploadArgs.encryptionKey)
		if err != nil {
			logrus.WithError(err).Fatal("Failed to decode encryption key")
		}
		if len(encryptionKey) != 32 {
			logrus.Fatal("Encryption key must be exactly 32 bytes (64 hex characters)")
		}
	}

	opt := transfer.UploadOption{
		Submitter: submitter,
		Tags:      hexutil.MustDecode(uploadArgs.tags),
@@ -177,6 +193,7 @@
		Method:        uploadArgs.method,
		FullTrusted:   uploadArgs.fullTrusted,
		FastMode:      uploadArgs.fastMode,
		EncryptionKey: encryptionKey,
	}

	file, err := core.Open(uploadArgs.file)
113 changes: 113 additions & 0 deletions core/encrypted_data.go
@@ -0,0 +1,113 @@
package core

// EncryptedData wraps an IterableData with AES-256-CTR encryption.
// It prepends a 17-byte encryption header (version + nonce) to the data stream
// and encrypts the inner data on-the-fly during reads.
type EncryptedData struct {
	inner         IterableData
	key           [32]byte
	header        *EncryptionHeader
	encryptedSize int64
	paddedSize    uint64
}

var _ IterableData = (*EncryptedData)(nil)

// NewEncryptedData creates an EncryptedData wrapper around the given data source.
// A random nonce is generated for the encryption header.
func NewEncryptedData(inner IterableData, key [32]byte) (*EncryptedData, error) {
	header, err := NewEncryptionHeader()
	if err != nil {
		return nil, err
	}
	encryptedSize := inner.Size() + int64(EncryptionHeaderSize)
	paddedSize := IteratorPaddedSize(encryptedSize, true)

	return &EncryptedData{
		inner:         inner,
		key:           key,
		header:        header,
		encryptedSize: encryptedSize,
		paddedSize:    paddedSize,
	}, nil
}

// Header returns the encryption header containing the version and nonce.
func (ed *EncryptedData) Header() *EncryptionHeader {
	return ed.header
}

func (ed *EncryptedData) NumChunks() uint64 {
	return NumSplits(ed.encryptedSize, DefaultChunkSize)
}

func (ed *EncryptedData) NumSegments() uint64 {
	return NumSplits(ed.encryptedSize, DefaultSegmentSize)
}

func (ed *EncryptedData) Size() int64 {
	return ed.encryptedSize
}

func (ed *EncryptedData) PaddedSize() uint64 {
	return ed.paddedSize
}

func (ed *EncryptedData) Offset() int64 {
	return 0
}

// Read reads encrypted data at the given offset.
// For offsets within the header region (0..16), header bytes are returned.
// For offsets beyond the header, data is read from the inner source and encrypted.
func (ed *EncryptedData) Read(buf []byte, offset int64) (int, error) {
	if offset < 0 || offset >= ed.encryptedSize {
		return 0, nil
	}

	headerSize := int64(EncryptionHeaderSize)
	written := 0

	// If offset falls within the header region
	if offset < headerSize {
		headerBytes := ed.header.ToBytes()
		headerStart := int(offset)
		headerEnd := int(headerSize)
		if headerEnd > headerStart+len(buf) {
			headerEnd = headerStart + len(buf)
		}
		n := headerEnd - headerStart
		copy(buf[:n], headerBytes[headerStart:headerEnd])
		written += n
	}

	// If we still have room in buf and there's data beyond the header
	if written < len(buf) {
		var dataOffset int64
		if offset < headerSize {
			dataOffset = 0
		} else {
			dataOffset = offset - headerSize
		}

		remainingBuf := buf[written:]
		innerRead, err := ed.inner.Read(remainingBuf, dataOffset)
		if err != nil {
			return written, err
		}

		// Encrypt the data we just read
		if innerRead > 0 {
			CryptAt(&ed.key, &ed.header.Nonce, uint64(dataOffset), buf[written:written+innerRead])
		}

		written += innerRead
	}

	return written, nil
}

// Split returns the encrypted data as a single fragment (splitting is not supported for encrypted data).
func (ed *EncryptedData) Split(fragmentSize int64) []IterableData {
	return []IterableData{ed}
}