EIP7594: p2p-interface #6358

Draft: wants to merge 110 commits into base: kzgpeerdas

Changes from 11 commits

Commits (110)
02e5430
init: add req/res domain for peerdas
agnxsh Jun 14, 2024
986a2bd
save work push, build failing
agnxsh Jun 14, 2024
5934400
add: req/res rpc handlers
agnxsh Jun 14, 2024
ebe9b3b
rm: TODO comment, revisiting later
agnxsh Jun 15, 2024
6064ac3
add tests for sanity checking data columns
agnxsh Jun 17, 2024
8e49f88
update URLs
agnxsh Jun 17, 2024
46d07b1
add: data column support in sync_protocol, sync_manager, request_mana…
agnxsh Jun 18, 2024
c0eb4c4
update test report for db test
agnxsh Jun 18, 2024
51f189e
add: getMissingDataColumns, requestManagerDataColumnLoop
agnxsh Jun 18, 2024
b87a6d7
Merge branch 'peerdas-p2p' of https://github.com/status-im/nimbus-eth…
agnxsh Jun 18, 2024
f0cae30
add: pruneDataColumns at the end of slot
agnxsh Jun 19, 2024
e2afc58
fix: reviews, pass1
agnxsh Jun 21, 2024
791d2fb
add: forward and backward syncing for data columns, broadcasting data…
agnxsh Jun 24, 2024
9bdcd5e
fix: sync tests
agnxsh Jun 24, 2024
07d33b3
add dataColumns to db during forward syncing
agnxsh Jun 24, 2024
325bdfd
support for enqueueing whichever is activated blob/data_column
agnxsh Jun 25, 2024
aa390e9
rm: message router logic for data column, need to move it
agnxsh Jun 25, 2024
81b55fa
add: fetch subnetCount for super node when subscribeAllSubnets flag p…
agnxsh Jun 25, 2024
87bc91f
fix: message router
agnxsh Jun 25, 2024
18e3ba2
fix: get_data_column_sidecar
agnxsh Jun 26, 2024
0b4cf10
rm: unused code in data column getter
agnxsh Jun 26, 2024
34a2478
add: blob recovery logic
agnxsh Jun 26, 2024
3db92f8
add: data column reconstruction logic
agnxsh Jun 27, 2024
5bf1e02
initiate data column quarantine
agnxsh Jun 28, 2024
27b0705
verify kzg disable
agnxsh Jun 28, 2024
0e01d2f
experimental disable for inclusion proofs
agnxsh Jun 28, 2024
ca3bd3e
experimental: disable scoring for data columns
agnxsh Jun 28, 2024
7426690
dc quarantine activation, keeping blobs compatible
agnxsh Jun 29, 2024
c8d957a
add: experimental checkpoints on gossip validation to localize failin…
agnxsh Jun 29, 2024
8292341
fix: block_processor test
agnxsh Jun 29, 2024
510d988
disable subnet gossip condition, fixed inclusion proof
agnxsh Jun 30, 2024
26ac587
request man for data columns
agnxsh Jul 1, 2024
8ac4cc9
add: data column grouping conditions for range request
agnxsh Jul 1, 2024
8e28654
exp: build failing, checking if failing on other machines with these …
agnxsh Jul 1, 2024
3b1f5b4
weird fix
agnxsh Jul 1, 2024
9325423
strangely disable this line makes it go away :)
agnxsh Jul 1, 2024
0e02eb4
fix test_sync_manager
agnxsh Jul 1, 2024
a8e2c3e
exp: disable some gossip conditions
agnxsh Jul 1, 2024
9e6cad4
bit more disabling for kurtosis
agnxsh Jul 1, 2024
67fe8ac
disable blob activity (exp), improve gossip validation
agnxsh Jul 2, 2024
2f7a3d0
reenable checkpoints to debug exception
agnxsh Jul 2, 2024
75c3e0b
debug
agnxsh Jul 2, 2024
77cc2ef
debug2
agnxsh Jul 2, 2024
8d2c489
debug3
agnxsh Jul 2, 2024
d8e1bef
debug 4
agnxsh Jul 2, 2024
d0722cd
update constants
agnxsh Jul 2, 2024
0e710da
update timings
agnxsh Jul 2, 2024
8f9f654
intentionally increase custody requirement
agnxsh Jul 2, 2024
ad64b22
shortLog for dc
agnxsh Jul 2, 2024
d292e94
sync queue
agnxsh Jul 2, 2024
152d276
added reconstruction logic
agnxsh Jul 3, 2024
9f42196
exp disable of some gossip conditions
agnxsh Jul 3, 2024
53f7175
minor fix
agnxsh Jul 3, 2024
887a44a
revert gossip val
agnxsh Jul 3, 2024
7063739
fix: get_data_column_sidecars
agnxsh Jul 3, 2024
d49b1a1
fix: ckzg function change
agnxsh Jul 3, 2024
1a85760
fix: cell and proof aggregator
agnxsh Jul 3, 2024
c6662bd
reenable blobs in block proposal
agnxsh Jul 4, 2024
a755dba
inclusion proof depth
agnxsh Jul 4, 2024
3bea574
reduce data column response cost
agnxsh Jul 4, 2024
b927ddd
fix: get_data_column_sidecars
agnxsh Jul 4, 2024
13029d9
fix: get data column
agnxsh Jul 4, 2024
41b35b9
fix: get data column fixes
agnxsh Jul 4, 2024
fe183e7
change timings
agnxsh Jul 4, 2024
24b30a9
test kurtosis
agnxsh Jul 4, 2024
93c3525
increase ops cost
agnxsh Jul 4, 2024
6cdc6bf
debug: verify data column kzg proofs via kurtosis
agnxsh Jul 4, 2024
eb46f4c
fix kzg inclusion proof logic
agnxsh Jul 4, 2024
14afc82
gindex fix
agnxsh Jul 4, 2024
318d656
fix: gindex
agnxsh Jul 5, 2024
fab427d
enable dc in gossip and message router
agnxsh Jul 5, 2024
086d3f1
fix: get dc sidecar
agnxsh Jul 5, 2024
778ea9f
fix build proof in get dc
agnxsh Jul 5, 2024
a92eda5
prevent pulling const values from deneb preset
agnxsh Jul 5, 2024
7101f93
gindex issue fix
agnxsh Jul 5, 2024
adc717c
change return type for get dc
agnxsh Jul 5, 2024
1729bdc
reduce parallel requests
agnxsh Jul 5, 2024
85db9ca
regressive fix
agnxsh Jul 5, 2024
74ee8bb
refactor cells and proofs logic + fix edge cases
agnxsh Jul 6, 2024
7b9c68b
oops
agnxsh Jul 6, 2024
755c24d
fix: blob len 0 case
agnxsh Jul 6, 2024
abf5892
handle empty blobs
agnxsh Jul 6, 2024
e5237d1
cell and proof extraction
agnxsh Jul 6, 2024
c14b592
add: checkpoints for debug support
agnxsh Jul 6, 2024
7a891f1
rework on cell and proof
agnxsh Jul 6, 2024
0bffdd0
change checkpoints
agnxsh Jul 6, 2024
aaba448
convert to List add
agnxsh Jul 7, 2024
5eb854b
checkpoint 2 cleanup
agnxsh Jul 7, 2024
cf40d7f
cleanup for debugs, complete
agnxsh Jul 8, 2024
b33900b
added enr struct
agnxsh Jul 15, 2024
e034f30
add: subscribeAllSubnets feature
agnxsh Jul 15, 2024
fa5b154
add: logic constructing valid set of peers
agnxsh Jul 19, 2024
5265eeb
refactor: sync manager to range request only valid peers if not super…
agnxsh Jul 22, 2024
d2c7208
Eth2Node not needed in sync man
agnxsh Jul 22, 2024
b001499
add: valid custody peer set to RequestManager
agnxsh Jul 23, 2024
c651312
fix reviewed code
agnxsh Jul 23, 2024
7faec9b
nits
agnxsh Jul 23, 2024
8744888
add: hypergeom cdf
agnxsh Jul 25, 2024
2e9750b
add: get_extended_sample_count for lossy sampler and it's unit test
agnxsh Jul 25, 2024
e80bd36
add: verify data column kzg proof during storeBlock, added serializeD…
agnxsh Jul 26, 2024
329fc21
add: condition for being able to selfReconstruct
agnxsh Jul 29, 2024
20e6b18
resovle merge conflicts
agnxsh Jul 29, 2024
b32205d
upstream peerdas alpha3 related spec changes + fix upstream related i…
agnxsh Aug 5, 2024
9be615d
add: data column reconstruction and broadcast (#6481)
agnxsh Aug 8, 2024
1ebba1f
add: metadata-v3 for custody subnet count (#6486)
agnxsh Aug 12, 2024
249eb0e
bump nim-bearssl to 646fa2152b11980c24bf34b3e214b479c9d25f21
agnxsh Aug 13, 2024
722480a
bumped nim-chronos to 1b9d9253e89445d585d0fff39cc0d19254fdfd0d
agnxsh Aug 13, 2024
65a5255
change gcc config to tackle incompatible pointer types
agnxsh Aug 13, 2024
cc21a2a
fix: enr bitfield logic for custody subnet count
agnxsh Aug 14, 2024
f3f61cb
conditionally reconstruct and broadcast only when supernode
agnxsh Aug 15, 2024
5 changes: 3 additions & 2 deletions AllTests-mainnet.md
@@ -55,14 +55,15 @@ OK: 4/4 Fail: 0/4 Skip: 0/4
+ sanity check Deneb states [Preset: mainnet] OK
+ sanity check Deneb states, reusing buffers [Preset: mainnet] OK
+ sanity check blobs [Preset: mainnet] OK
+ sanity check data columns [Preset: mainnet] OK
+ sanity check genesis roundtrip [Preset: mainnet] OK
+ sanity check phase 0 blocks [Preset: mainnet] OK
+ sanity check phase 0 getState rollback [Preset: mainnet] OK
+ sanity check phase 0 states [Preset: mainnet] OK
+ sanity check phase 0 states, reusing buffers [Preset: mainnet] OK
+ sanity check state diff roundtrip [Preset: mainnet] OK
```
OK: 25/25 Fail: 0/25 Skip: 0/25
OK: 26/26 Fail: 0/26 Skip: 0/26
## Beacon state [Preset: mainnet]
```diff
+ Smoke test initialize_beacon_state_from_eth1 [Preset: mainnet] OK
@@ -1154,4 +1155,4 @@ OK: 2/2 Fail: 0/2 Skip: 0/2
OK: 9/9 Fail: 0/9 Skip: 0/9

---TOTAL---
OK: 803/808 Fail: 0/808 Skip: 5/808
OK: 804/809 Fail: 0/809 Skip: 5/809
29 changes: 29 additions & 0 deletions beacon_chain/beacon_chain_db.nim
@@ -254,6 +254,13 @@ func blobkey(root: Eth2Digest, index: BlobIndex) : array[40, byte] =

ret

func columnkey(root: Eth2Digest, index: ColumnIndex) : array[40, byte] =
var ret: array[40, byte]
ret[0..<8] = toBytes(index)
ret[8..<40] = root.data

ret

template expectDb(x: auto): untyped =
# There's no meaningful error handling implemented for a corrupt database or
# full disk - this requires manual intervention, so we'll panic for now
@@ -808,11 +815,22 @@ proc putBlobSidecar*(
let block_root = hash_tree_root(value.signed_block_header.message)
db.blobs.putSZSSZ(blobkey(block_root, value.index), value)

proc putDataColumnSidecar*(
db: BeaconChainDB,
value: DataColumnSidecar) =
let block_root = hash_tree_root(value.signed_block_header.message)
db.blobs.putSZSSZ(columnkey(block_root, value.index), value)

proc delBlobSidecar*(
db: BeaconChainDB,
root: Eth2Digest, index: BlobIndex): bool =
db.blobs.del(blobkey(root, index)).expectDb()

proc delDataColumnSidecar*(
db: BeaconChainDB,
root: Eth2Digest, index: ColumnIndex): bool =
db.blobs.del(columnkey(root, index)).expectDb()

proc updateImmutableValidators*(
db: BeaconChainDB, validators: openArray[Validator]) =
# Must be called before storing a state that references the new validators
@@ -1071,6 +1089,17 @@ proc getBlobSidecar*(db: BeaconChainDB, root: Eth2Digest, index: BlobIndex,
value: var BlobSidecar): bool =
db.blobs.getSZSSZ(blobkey(root, index), value) == GetResult.found

proc getDataColumnSidecarSZ*(db: BeaconChainDB, root: Eth2Digest,
index: ColumnIndex, data: var seq[byte]): bool =
let dataPtr = addr data # Short-lived
func decode(data: openArray[byte]) =
assign(dataPtr[], data)
db.blobs.get(columnkey(root, index), decode).expectDb()

proc getDataColumnSidecar*(db: BeaconChainDB, root: Eth2Digest, index: ColumnIndex,
value: var DataColumnSidecar): bool =
db.blobs.getSZSSZ(columnkey(root, index), value) == GetResult.found

proc getBlockSZ*(
db: BeaconChainDB, key: Eth2Digest, data: var seq[byte],
T: type phase0.TrustedSignedBeaconBlock): bool =
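
Editorial aside: a minimal round-trip sketch of the column-sidecar accessors added above, assuming an already-open BeaconChainDB named `db` and a populated DataColumnSidecar named `sidecar` (both names are illustrative, not part of this diff):

```nim
# Hedged usage sketch; `db` and `sidecar` are assumed to exist already.
let blockRoot = hash_tree_root(sidecar.signed_block_header.message)

# Store the sidecar keyed by (blockRoot, sidecar.index), mirroring blob storage.
db.putDataColumnSidecar(sidecar)

# Read it back; returns false if the key is absent.
var fetched: DataColumnSidecar
doAssert db.getDataColumnSidecar(blockRoot, sidecar.index, fetched)

# Remove it again (the bool result is discarded in this sketch).
discard db.delDataColumnSidecar(blockRoot, sidecar.index)
```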
3 changes: 2 additions & 1 deletion beacon_chain/beacon_node.nim
@@ -22,7 +22,7 @@ import
./el/el_manager,
./consensus_object_pools/[
blockchain_dag, blob_quarantine, block_quarantine, consensus_manager,
attestation_pool, sync_committee_msg_pool, validator_change_pool],
data_column_quarantine, attestation_pool, sync_committee_msg_pool, validator_change_pool],
./spec/datatypes/[base, altair],
./spec/eth2_apis/dynamic_fee_recipients,
./sync/[sync_manager, request_manager],
@@ -71,6 +71,7 @@ type
dag*: ChainDAGRef
quarantine*: ref Quarantine
blobQuarantine*: ref BlobQuarantine
dataColumnQuarantine*: ref DataColumnQuarantine
attestationPool*: ref AttestationPool
syncCommitteeMsgPool*: ref SyncCommitteeMsgPool
lightClientPool*: ref LightClientPool
15 changes: 15 additions & 0 deletions beacon_chain/consensus_object_pools/block_quarantine.nim
@@ -57,6 +57,8 @@ type
## all blobs for this block, we can proceed to resolving the
## block as well. A blobless block inserted into this table must
## have a resolved parent (i.e., it is not an orphan).

columnless: OrderedTable[Eth2Digest, ForkedSignedBeaconBlock]

unviable*: OrderedTable[Eth2Digest, tuple[]]
## Unviable blocks are those that come from a history that does not
@@ -336,3 +338,16 @@ func popBlobless*(
iterator peekBlobless*(quarantine: var Quarantine): ForkedSignedBeaconBlock =
for k, v in quarantine.blobless.mpairs():
yield v

func popColumnless*(
quarantine: var Quarantine,
root: Eth2Digest): Opt[ForkedSignedBeaconBlock] =
var blck: ForkedSignedBeaconBlock
if quarantine.columnless.pop(root, blck):
Opt.some(blck)
else:
Opt.none(ForkedSignedBeaconBlock)

iterator peekColumnless*(quarantine: var Quarantine): ForkedSignedBeaconBlock =
for k,v in quarantine.columnless.mpairs():
yield v
103 changes: 103 additions & 0 deletions beacon_chain/consensus_object_pools/data_column_quarantine.nim
@@ -0,0 +1,103 @@
# beacon_chain
# Copyright (c) 2018-2024 Status Research & Development GmbH
# Licensed and distributed under either of
# * MIT license (license terms in the root directory or at https://opensource.org/licenses/MIT).
# * Apache v2 license (license terms in the root directory or at https://www.apache.org/licenses/LICENSE-2.0).
# at your option. This file may not be copied, modified, or distributed except according to those terms.

{.push raises: [].}

import
std/tables,
../spec/[helpers, eip7594_helpers]

from std/sequtils import mapIt
from std/strutils import join

const
MaxDataColumns = 3 * SLOTS_PER_EPOCH * NUMBER_OF_COLUMNS
## Same limit as `MaxOrphans` in `block_quarantine`
## data columns may arrive before an orphan is tagged `columnless`

type
DataColumnQuarantine* = object
data_columns*:
OrderedTable[(Eth2Digest, ColumnIndex), ref DataColumnSidecar]
onDataColumnSidecarCallback*: OnDataColumnSidecarCallback

DataColumnFetchRecord* = object
block_root*: Eth2Digest
indices*: seq[ColumnIndex]

OnDataColumnSidecarCallback = proc(data: DataColumnSidecar) {.gcsafe, raises: [].}

func shortLog*(x: seq[ColumnIndex]): string =
"<" & x.mapIt($it).join(", ") & ">"

func shortLog*(x: seq[DataColumnFetchRecord]): string =
"[" & x.mapIt(shortLog(it.block_root) & shortLog(it.indices)).join(", ") & "]"

func put*(quarantine: var DataColumnQuarantine, dataColumnSidecar: ref DataColumnSidecar) =
if quarantine.data_columns.lenu64 >= MaxDataColumns:
# FIFO if full. For example, sync manager and request manager can race to
# put blobs in at the same time, so one gets blob insert -> block resolve
# -> blob insert sequence, which leaves garbage blobs.
#
# This also therefore automatically garbage-collects otherwise valid garbage
# blobs which are correctly signed, point to either correct block roots or a
# block root which isn't ever seen, and then are for any reason simply never
# used.
var oldest_column_key: (Eth2Digest, ColumnIndex)
for k in quarantine.data_columns.keys:
oldest_column_key = k
break
quarantine.data_columns.del(oldest_column_key)
let block_root = hash_tree_root(dataColumnSidecar.signed_block_header.message)
discard quarantine.data_columns.hasKeyOrPut(
(block_root, dataColumnSidecar.index), dataColumnSidecar)

func hasDataColumn*(
quarantine: DataColumnQuarantine,
slot: Slot,
proposer_index: uint64,
index: ColumnIndex): bool =
for data_column_sidecar in quarantine.data_columns.values:
template block_header: untyped = data_column_sidecar.signed_block_header.message
if block_header.slot == slot and
block_header.proposer_index == proposer_index and
data_column_sidecar.index == index:
return true
false

func popDataColumns*(
quarantine: var DataColumnQuarantine, digest: Eth2Digest,
blck: deneb.SignedBeaconBlock | electra.SignedBeaconBlock):
seq[ref DataColumnSidecar] =
var r: seq[ref DataColumnSidecar]
for idx in 0..<len(blck.message.body.blob_kzg_commitments):
var c: ref DataColumnSidecar
if quarantine.data_columns.pop((digest, ColumnIndex idx), c):
r.add(c)
true

func hasDataColumns*(quarantine: DataColumnQuarantine,
blck: deneb.SignedBeaconBlock | electra.SignedBeaconBlock): bool =

@tersec (Contributor), Jul 6, 2024:
There's just so much of this temporary workaround for the broken devnet-1 fork schedule that's going to become kludgy jank the moment the first few devnets are over.

for idx in 0..<len(blck.message.body.blob_kzg_commitments):
if (blck.root, ColumnIndex idx) notin quarantine.data_columns:
return false
true

func dataColumnFetchRecord*(quarantine: DataColumnQuarantine,
blck: deneb.SignedBeaconBlock | electra.SignedBeaconBlock): DataColumnFetchRecord =
var indices: seq[ColumnIndex]
for i in 0..<len(blck.message.body.blob_kzg_commitments):
let idx = ColumnIndex(i)
if not quarantine.data_columns.hasKey(
(blck.root, idx)):
indices.add(idx)
DataColumnFetchRecord(block_root: blck.root, indices: indices)

func init*(

Contributor comment:
What is this init function doing -- is it likely in the near future that it will be inadmissible to create a DataColumnQuarantine without arguments? It's fine as a placeholder until those are finalized, but when merging into unstable, it's cleaner to just use Nim's default mechanisms for this than to add an empty shim init function. Just

  var x: DataColumnQuarantine

will also default-initialize this way (any substantive numeric fields are zero, strings empty, refs not allocated, etc.), as will other default-creation approaches.

T: type DataColumnQuarantine, onDataColumnSidecarCallback: OnDataColumnSidecarCallback): T =
T(onDataColumnSidecarCallback: onDataColumnSidecarCallback)
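
Editorial aside, echoing the review point above: with only a callback field to set, plain default initialization already yields a usable quarantine. A minimal sketch, assuming the types from this file (the callback name is illustrative):

```nim
# Do-nothing callback, purely for illustration.
proc onColumn(data: DataColumnSidecar) {.gcsafe, raises: [].} =
  discard

var q1: DataColumnQuarantine                  # default-initialized: empty table, nil callback
let q2 = DataColumnQuarantine.init(onColumn)  # the shim init only wires the callback

# Both start out empty; only the callback field differs.
doAssert q1.data_columns.len == 0
doAssert q2.data_columns.len == 0
```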

8 changes: 5 additions & 3 deletions beacon_chain/gossip_processing/eth2_processor.nim
@@ -15,8 +15,8 @@ import
../spec/datatypes/[altair, phase0, deneb, eip7594],
../consensus_object_pools/[
blob_quarantine, block_clearance, block_quarantine, blockchain_dag,
attestation_pool, light_client_pool, sync_committee_msg_pool,
validator_change_pool],
data_column_quarantine, attestation_pool, light_client_pool,
sync_committee_msg_pool, validator_change_pool],
../validators/validator_pool,
../beacon_clock,
"."/[gossip_validation, block_processor, batch_validation],
@@ -156,6 +156,8 @@ type

blobQuarantine*: ref BlobQuarantine

dataColumnQuarantine*: ref DataColumnQuarantine

# Application-provided current time provider (to facilitate testing)
getCurrentBeaconTime*: GetBeaconTimeFn

@@ -345,7 +347,7 @@ proc processDataColumnSidecar*(
debug "Data column received", delay

let v =
self.dag.validateDataColumnSidecar(self.quarantine, self.blobQuarantine,
self.dag.validateDataColumnSidecar(self.quarantine, self.dataColumnQuarantine,
dataColumnSidecar, wallTime, subnet_id)

if v.isErr():
6 changes: 3 additions & 3 deletions beacon_chain/gossip_processing/gossip_validation.nim
@@ -16,7 +16,7 @@ import
beaconstate, state_transition_block, forks, helpers, network, signatures, eip7594_helpers],
../consensus_object_pools/[
attestation_pool, blockchain_dag, blob_quarantine, block_quarantine,
spec_cache, light_client_pool, sync_committee_msg_pool,
data_column_quarantine, spec_cache, light_client_pool, sync_committee_msg_pool,
validator_change_pool],
".."/[beacon_clock],
./batch_validation
@@ -490,7 +490,7 @@ proc validateBlobSidecar*(
# https://github.com/ethereum/consensus-specs/blob/5f48840f4d768bf0e0a8156a3ed06ec333589007/specs/_features/eip7594/p2p-interface.md#the-gossip-domain-gossipsub
proc validateDataColumnSidecar*(
dag: ChainDAGRef, quarantine: ref Quarantine,
blobQuarantine: ref BlobQuarantine, data_column_sidecar: DataColumnSidecar,
dataColumnQuarantine: ref DataColumnQuarantine, data_column_sidecar: DataColumnSidecar,
wallTime: BeaconTime, subnet_id: uint64): Result[void, ValidationError] =

template block_header: untyped = data_column_sidecar.signed_block_header.message
@@ -538,7 +538,7 @@
let block_root = hash_tree_root(block_header)
if dag.getBlockRef(block_root).isSome():
return errIgnore("DataColumnSidecar: already have block")
if blobQuarantine[].hasBlob(
if dataColumnQuarantine[].hasDataColumn(
block_header.slot, block_header.proposer_index, data_column_sidecar.index):
return errIgnore("DataColumnSidecar: already have valid data column from same proposer")

20 changes: 20 additions & 0 deletions beacon_chain/nimbus_beacon_node.nim
@@ -14,6 +14,7 @@ import
stew/[byteutils, io2],
eth/p2p/discoveryv5/[enr, random2],
./consensus_object_pools/blob_quarantine,
./consensus_object_pools/data_column_quarantine,
./consensus_object_pools/vanity_logs/vanity_logs,
./networking/[topic_params, network_metadata_downloads],
./rpc/[rest_api, state_ttl_cache],
@@ -1410,6 +1411,24 @@ proc pruneBlobs(node: BeaconNode, slot: Slot) =
count = count + 1
debug "pruned blobs", count, blobPruneEpoch

proc pruneDataColumns(node: BeaconNode, slot: Slot) =
let dataColumnPruneEpoch = (slot.epoch -

Contributor comment (a hedged guard sketch follows this file's diff):
This seems to underflow if called too close to genesis? Even if currently no networks have launched with PeerDAS at genesis, presumably at some point there will be an Electra-genesis, PeerDAS network, just as there are now Deneb-genesis, KZG-launched networks.

node.dag.cfg.MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS - 1)
if slot.is_epoch() and dataColumnPruneEpoch >= node.dag.cfg.DENEB_FORK_EPOCH:
var blocks: array[SLOTS_PER_EPOCH.int, BlockId]
var count = 0
let startIndex = node.dag.getBlockRange(
dataColumnPruneEpoch.start_slot, 1, blocks.toopenArray(0, SLOTS_PER_EPOCH - 1))

Contributor suggested change (fix the identifier's casing):
-  dataColumnPruneEpoch.start_slot, 1, blocks.toopenArray(0, SLOTS_PER_EPOCH - 1))
+  dataColumnPruneEpoch.start_slot, 1, blocks.toOpenArray(0, SLOTS_PER_EPOCH - 1))

for i in startIndex..<SLOTS_PER_EPOCH:
let blck = node.dag.getForkedBlock(blocks[int(i)]).valueOr: continue
withBlck(blck):
when typeof(forkyBlck).kind < ConsensusFork.Deneb: continue
else:
for j in 0..len(forkyBlck.message.body.blob_kzg_commitments) - 1:
if node.db.delDataColumnSidecar(blocks[int(i)].root, ColumnIndex(j)):
count = count + 1
debug "pruned data columns", count, dataColumnPruneEpoch

proc onSlotEnd(node: BeaconNode, slot: Slot) {.async.} =
# Things we do when slot processing has ended and we're about to wait for the
# next slot
@@ -1444,6 +1463,7 @@ proc onSlotEnd(node: BeaconNode, slot: Slot) {.async.} =
# the pruning for later
node.dag.pruneHistory()
node.pruneBlobs(slot)
node.pruneDataColumns(slot)

Contributor comment:
And here it will trigger the underflow in a new devnet. This code won't, I think, be hit during syncing, so existing testnets won't have a problem, but new devnets will.


when declared(GC_fullCollect):
# The slots in the beacon node work as frames in a game: we want to make
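
Editorial aside on the underflow concern raised in the two review comments above: `slot.epoch` is unsigned, so computing `slot.epoch - MIN_EPOCHS_FOR_DATA_COLUMN_SIDECARS_REQUESTS - 1` wraps around when called close to genesis. A minimal guard sketch in plain uint64 arithmetic (the constant 4096 below is an example value, not taken from this diff):

```nim
# Returns (shouldPrune, pruneEpoch); skips pruning near genesis instead of
# letting the unsigned subtraction wrap around.
func safeDataColumnPruneEpoch(currentEpoch, minEpochsForRequests: uint64): (bool, uint64) =
  if currentEpoch <= minEpochsForRequests:
    (false, 0'u64)
  else:
    (true, currentEpoch - minEpochsForRequests - 1)

doAssert safeDataColumnPruneEpoch(2, 4096) == (false, 0'u64)      # too early: no pruning
doAssert safeDataColumnPruneEpoch(5000, 4096) == (true, 903'u64)  # 5000 - 4096 - 1
```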
2 changes: 2 additions & 0 deletions beacon_chain/rpc/rest_constants.nim
@@ -213,6 +213,8 @@
"LC finality update unavailable"
LCOptUpdateUnavailable* =
"LC optimistic update unavailable"
DataColumnsOutOfRange* =
"Requested slot is out of data column window"
DeprecatedRemovalBeaconBlocksDebugStateV1* =
"v1/beacon/blocks/{block_id} and v1/debug/beacon/states/{state_id} " &
"endpoints were deprecated and replaced by v2: " &
12 changes: 9 additions & 3 deletions beacon_chain/spec/datatypes/eip7594.nim
@@ -9,6 +9,9 @@

import "."/[base, deneb], kzg4844

from std/sequtils import mapIt
from std/strutils import join

export base

const
@@ -51,8 +54,8 @@ type
DataColumnSidecar* = object
index*: ColumnIndex # Index of column in extended matrix
column*: DataColumn
kzg_commitments*: List[KzgCommitment, Limit(MAX_BLOB_COMMITMENTS_PER_BLOCK)]
kzg_proofs*: List[KzgProof, Limit(MAX_BLOB_COMMITMENTS_PER_BLOCK)]
kzg_commitments*: KzgCommitments
kzg_proofs*: KzgProofs
signed_block_header*: SignedBeaconBlockHeader
kzg_commitments_inclusion_proof*:
array[KZG_COMMITMENTS_INCLUSION_PROOF_DEPTH, Eth2Digest]
@@ -74,4 +77,7 @@ func shortLog*(v: DataColumnSidecar): auto =
kzg_commitments: v.kzg_commitments.len,
kzg_proofs: v.kzg_proofs.len,
block_header: shortLog(v.signed_block_header.message),
)
)

func shortLog*(x: seq[DataColumnIdentifier]): string =
"[" & x.mapIt(shortLog(it.block_root) & "/" & $it.index).join(", ") & "]"
1 change: 1 addition & 0 deletions beacon_chain/spec/eth2_apis/eth2_rest_serialization.nim
@@ -53,6 +53,7 @@ RestJson.useDefaultSerializationFor(
Checkpoint,
Consolidation,
ContributionAndProof,
DataColumnSidecar,
DataEnclosedObject,
DataMetaEnclosedObject,
DataOptimisticAndFinalizedObject,
4 changes: 4 additions & 0 deletions beacon_chain/spec/network.nim
@@ -34,6 +34,10 @@ const
MAX_REQUEST_BLOB_SIDECARS*: uint64 =
MAX_REQUEST_BLOCKS_DENEB * MAX_BLOBS_PER_BLOCK

# https://github.com/ethereum/consensus-specs/blob/v1.5.0-alpha.2/specs/_features/eip7594/p2p-interface.md#configuration
MAX_REQUEST_DATA_COLUMNS*: uint64 =
MAX_REQUEST_BLOCKS_DENEB * NUMBER_OF_COLUMNS

defaultEth2TcpPort* = 9000
defaultEth2TcpPortDesc* = $defaultEth2TcpPort
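
Editorial arithmetic check for the new constant above, assuming the spec values at the referenced version (MAX_REQUEST_BLOCKS_DENEB = 128 and NUMBER_OF_COLUMNS = 128; treat both as assumptions here), which puts MAX_REQUEST_DATA_COLUMNS at 16384:

```nim
const
  maxRequestBlocksDeneb = 128'u64   # assumed spec value
  numberOfColumns = 128'u64         # assumed spec value

doAssert maxRequestBlocksDeneb * numberOfColumns == 16384'u64
```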
