Conversation
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews, or use the checkboxes below for quick actions.

📝 Walkthrough

Adds a new Anchor-based Doom Index NFT program with GlobalConfig and MintReservation accounts, Metaplex Core CPIs for collection and asset creation, instructions for init/reserve/mint/admin flows, utility helpers, devnet scripts, CI/tooling updates, integration tests, and a mint requirements spec.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Admin as Admin
    participant Program as Doom Program
    participant GlobalConfig as GlobalConfig (Account)
    participant User as User
    participant Reservation as MintReservation (Account)
    participant Metaplex as Metaplex Core
    Admin->>Program: initialize_global_config(base_url, upgrade_authority)
    Program->>GlobalConfig: create GlobalConfig (admin, next_token_id=1, base_url)
    Admin->>Program: initialize_collection(collection_pubkey)
    Program->>Metaplex: CreateCollectionV2 CPI (name, uri, update_authority)
    Metaplex-->>Program: collection created
    Program->>GlobalConfig: set collection & collection_update_authority
    User->>Program: reserve_token_id()
    Program->>GlobalConfig: read next_token_id
    Program->>Reservation: create reservation(token_id, reserver)
    Program->>GlobalConfig: increment next_token_id
    User->>Program: mint_doom_index_nft(token_id)
    Program->>Reservation: validate reservation (token_id, reserver, not minted)
    Program->>Metaplex: CreateV2 CPI (asset name, uri) signed by collection authority (PDA)
    Metaplex-->>Program: asset created
    Program->>Reservation: mark minted=true
    Program-->>User: return asset address
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 11
🧹 Nitpick comments (7)
programs/doom-nft-program/src/utils.rs (1)
18-20: Reuse `COLLECTION_NAME` here to avoid name drift.

`build_asset_name` duplicates the literal already defined in `programs/doom-nft-program/src/constants.rs` (line 1). If the collection label changes later, asset names and collection metadata can silently diverge.

Suggested fix:
```diff
-use crate::error::DoomNftProgramError;
+use crate::{constants::COLLECTION_NAME, error::DoomNftProgramError};
 @@
 pub fn build_asset_name(token_id: u64) -> String {
-    format!("DOOM INDEX #{token_id}")
+    format!("{COLLECTION_NAME} #{token_id}")
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@programs/doom-nft-program/src/utils.rs` around lines 18 - 20, The function build_asset_name currently embeds the collection literal; change it to reference the existing COLLECTION_NAME constant instead to prevent drift: import or refer to COLLECTION_NAME (from the module that defines it) inside build_asset_name and build the string with COLLECTION_NAME and the token_id (e.g., format!("{} #{token_id}", COLLECTION_NAME) or equivalent), removing the hard-coded "DOOM INDEX" literal so asset names always derive from COLLECTION_NAME.

scripts/devnet/init.ts (1)
14-17: Mirror the on-chain `BASE_METADATA_URL` validation locally.

The program rejects trailing slashes, and `GlobalConfig.base_metadata_url` cannot exceed 256 bytes. This script only checks presence, so guaranteed-failing values still make it to the RPC layer.

Suggested change:
```diff
 const baseMetadataUrl = process.env.BASE_METADATA_URL;
 if (!baseMetadataUrl) {
   throw new Error("BASE_METADATA_URL is required");
 }
+if (baseMetadataUrl.endsWith("/") || Buffer.byteLength(baseMetadataUrl, "utf8") > 256) {
+  throw new Error("BASE_METADATA_URL must not end with '/' and must be <= 256 bytes");
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/devnet/init.ts` around lines 14 - 17, Validate BASE_METADATA_URL the same way on-chain does: after reading const baseMetadataUrl, check that it does not end with a trailing slash (e.g., baseMetadataUrl.endsWith('/')) and that its byte length is <= 256 bytes (use Buffer.byteLength(baseMetadataUrl, 'utf8')). If either check fails, throw an Error with a clear message. Update the checks around baseMetadataUrl in scripts/devnet/init.ts to enforce both the trailing-slash rule and the GlobalConfig.base_metadata_url 256-byte limit before proceeding.

tests/src/lib.rs (4)
42-51: Unsafe transmute requires careful justification.

The `std::mem::transmute` is used to adapt ProgramTest's processor signature to Anchor's expected lifetime bounds. While the comment explains the rationale, this pattern is inherently fragile. The transmute is sound here because the lifetimes are being narrowed (more restrictive), not extended, and both slice layouts are identical. Consider adding a `// SAFETY:` comment documenting these invariants.

📝 Suggested safety comment:
```diff
 fn doom_nft_program_test_processor<'a, 'b, 'c, 'd>(
     program_id: &'a Pubkey,
     accounts: &'b [AccountInfo<'c>],
     instruction_data: &'d [u8],
 ) -> ProgramResult {
-    // Anchor's generated entrypoint ties the slice lifetime to the inner AccountInfo lifetime.
-    // ProgramTest uses the looser builtin processor signature, so the wrapper narrows it for this call.
+    // SAFETY: Anchor's generated entrypoint requires &'c [AccountInfo<'c>] while ProgramTest
+    // provides &'b [AccountInfo<'c>]. We narrow 'b to 'c which is safe because:
+    // 1. Both types have identical memory layout (slice of AccountInfo)
+    // 2. We're restricting, not extending, the lifetime ('c outlives 'b in this call context)
+    // 3. The transmuted reference is only used for the duration of this function call
     let accounts: &'c [AccountInfo<'c>] = unsafe { std::mem::transmute(accounts) };
     doom_nft_program::entry(program_id, accounts, instruction_data)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/lib.rs` around lines 42 - 51, The use of unsafe std::mem::transmute in doom_nft_program_test_processor to coerce accounts to &'c [AccountInfo<'c>] is fragile and needs an explicit SAFETY justification: add a // SAFETY: comment immediately above the transmute that documents the invariants (that the transmute only narrows lifetimes and does not extend them, the slice memory/layout is identical, and each AccountInfo's lifetime parameter is compatible with the target lifetime), reference the accounts binding and the doom_nft_program::entry call to explain why this makes the cast sound, and ensure the comment states any assumptions callers must uphold.
392-429: Test name promises more than it delivers.

The test `pause_blocks_reserve_and_mint` only verifies that `reserve_token_id` fails when paused. It doesn't test that `mint_doom_index_nft` is blocked when paused. Consider either:

- Renaming to `pause_blocks_reserve`, or
- Extending the test to also verify minting is blocked (requires setting up a reservation before pausing)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/lib.rs` around lines 392 - 429, The test pause_blocks_reserve_and_mint currently only asserts reserve_token_id_ix fails when paused; either rename the test to pause_blocks_reserve to match behavior or extend the test to also assert mint_doom_index_nft is blocked: create a reservation (call reserve_token_id_ix and assert success), then pause via the SetMintPaused instruction (as done), then attempt mint_doom_index_nft and assert it returns an error (similar to the reserve assertion). Update the test name or add the mint assertion using the existing helpers (initialize_global_config_ix, reserve_token_id_ix, mint_doom_index_nft, and SetMintPaused) so the test behavior and name remain consistent.
354-390: Consider adding tests for additional error paths.

The current negative test only covers "mint without reservation." Consider adding tests for:
- ReservationOwnerMismatch: User A reserves, User B attempts to mint
- ReservationAlreadyMinted: Same user attempts to mint twice with the same reservation
These would strengthen confidence in the security-critical validation logic.
Would you like me to draft these additional test cases?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/lib.rs` around lines 354 - 390, Add two negative tests following the style of mint_without_reservation_fails: (1) ReservationOwnerMismatch — create a reservation under one Keypair and then call mint_doom_index_nft_ix using a different payer/minter Keypair and assert the instruction fails (expect_err and match the returned BanksClientError::TransactionError or the program-specific ReservationOwnerMismatch error); (2) ReservationAlreadyMinted — create a reservation, successfully mint once with mint_doom_index_nft_ix, then attempt to mint again with the same reservation and assert it fails with the ReservationAlreadyMinted/transaction error. Reuse helper functions/process_instruction, initialize_global_config_ix, initialize_collection_ix and mint_doom_index_nft_ix from the existing tests for setup and signing to keep the new tests consistent with existing fixtures.
181-184: Remove redundant variable assignment.

Line 184 creates an unnecessary alias `config` for `global_config`, which is immediately used once at line 189.

♻️ Simplify by removing the redundant variable:
```diff
 fn mint_doom_index_nft_ix(
     user: Pubkey,
     token_id: u64,
     asset: Pubkey,
     collection: Pubkey,
 ) -> Instruction {
     let (global_config, _) = global_config_pda();
     let (reservation, _) = reservation_pda(token_id);
     let (collection_update_authority, _) = collection_authority_pda(global_config);
-    let config = global_config;
     Instruction {
         program_id: doom_nft_program::id(),
         accounts: MintDoomIndexNft {
-            global_config: config,
+            global_config,
             reservation,
             user,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/lib.rs` around lines 181 - 184, Remove the redundant alias by deleting the `let config = global_config;` assignment and update any subsequent use of `config` (e.g., in calls like `collection_authority_pda(config)`) to use `global_config` directly; ensure the variables from `global_config_pda()`, `reservation_pda(token_id)`, and `collection_authority_pda(global_config)` remain unchanged and referenced by their original names.

programs/doom-nft-program/src/instructions/mint_doom_index_nft.rs (1)
43-48: Consider a more specific error for collection address mismatch.

The `CollectionNotInitialized` error is used here to validate that the passed `collection` matches `global_config.collection`. If someone passes a different (but valid) collection address, the error message "CollectionNotInitialized" may be confusing. Consider using a dedicated error like `CollectionMismatch` for the address constraint, reserving `CollectionNotInitialized` for the `Pubkey::default()` check at line 61.
Verify each finding against the current code and only fix it if needed. In `@programs/doom-nft-program/src/instructions/mint_doom_index_nft.rs` around lines 43 - 48, The address constraint on the collection account currently raises DoomNftProgramError::CollectionNotInitialized which is misleading; add a new error variant (e.g., CollectionMismatch) to the DoomNftProgramError enum and change the #[account(address = global_config.collection @ DoomNftProgramError::CollectionNotInitialized)] attribute on the collection field to use the new variant (e.g., @ DoomNftProgramError::CollectionMismatch), and keep using CollectionNotInitialized only for the Pubkey::default() existence check later in the function (the check around global_config.collection at line 61).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@package.json`:
- Around line 17-21: Update the repository check flow to run TypeScript
typechecking: add a new npm script (e.g., "typecheck") that invokes the
TypeScript compiler in noEmit mode against the project (so the new scripts under
"devnet:init", "devnet:reserve", "devnet:mint" get typechecked), and include
that script in the existing "check" script so "check" runs "format:check",
"lint", "test" and the new "typecheck" step; ensure the "typecheck" script
references your tsconfig or appropriate file globs so the devnet/*.ts files are
covered.
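The wiring described above might look like the following `package.json` fragment. This is a sketch: the `typecheck` command and the tsconfig reference are assumptions, and only the two affected scripts are shown.

```json
{
  "scripts": {
    "typecheck": "tsc --noEmit -p tsconfig.json",
    "check": "npm run format:check && npm run lint && npm run typecheck && npm run test"
  }
}
```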
In `@programs/doom-nft-program/src/instructions/set_upgrade_authority.rs`:
- Around line 8-35: The SetUpgradeAuthority instruction
(process_set_upgrade_authority) updates GlobalConfig.upgrade_authority but that
field is never used to gate any privileged flows; either wire it into the
intended privilege checks (e.g., replace or supplement existing admin checks in
instructions that should be governed by upgrade_authority by adding has_one =
upgrade_authority or explicit signer requirement checking
ctx.accounts.upgrade_authority) or remove the field and the SetUpgradeAuthority
instruction and UpgradeAuthorityUpdated event to avoid exposing a misleading
API; locate GlobalConfig, SetUpgradeAuthority, process_set_upgrade_authority,
and any instructions that currently gate on admin and decide whether to switch
those has_one checks to reference upgrade_authority or delete the unused field
and related instruction/event.
In `@programs/doom-nft-program/src/instructions/transfer_admin.rs`:
- Around line 5-20: The transfer currently accepts an arbitrary Pubkey; require
the recipient to actually sign by adding a new_admin: Signer<'info> to the
TransferAdmin accounts struct (alongside the existing admin signer) and stop
accepting a raw Pubkey parameter in process_transfer_admin; instead set
ctx.accounts.global_config.admin = *ctx.accounts.new_admin.key() inside
process_transfer_admin so the new admin proof is a real signature and not an
arbitrary address.
In `@programs/doom-nft-program/src/lib.rs`:
- Around line 1-2: The crate currently has a blanket suppression via
#![allow(deprecated)]—remove that top-level attribute and either (a) delete it
entirely if no deprecated APIs are actually used, or (b) replace it by adding
localized #[allow(deprecated)] attributes only on the specific functions/types
that rely on deprecated items with an inline comment explaining why that
deprecated API is required; search for usages in this crate (e.g., in lib.rs
exports, any functions referencing anchor-lang or mpl-core types) and apply the
targeted attribute to those exact symbols rather than keeping the broad
crate-level allow.
In `@programs/doom-nft-program/src/utils.rs`:
- Around line 5-12: The current validate_base_metadata_url only checks emptiness
and trailing slash and allows whitespace or non-URL schemes; update
validate_base_metadata_url to trim the input, reject trimmed-empty strings, and
validate that the value is a well-formed URL with an HTTPS scheme (for example
by parsing with url::Url::parse and asserting url.scheme() == "https") before
returning Ok(()); on any failure return
DoomNftProgramError::BaseMetadataUrlInvalid so malformed, non-HTTPS, or
whitespace-only inputs are rejected.
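The validation rules described above can be sketched as follows. This is written in TypeScript for illustration (the actual fix is in Rust using `url::Url::parse`), and the helper name is hypothetical:

```typescript
// Hypothetical helper mirroring the suggested checks: trim the input, reject
// empty or trailing-slash values, and require a parseable https URL.
function isValidBaseMetadataUrl(raw: string): boolean {
  const value = raw.trim();
  if (value.length === 0 || value.endsWith("/")) return false;
  try {
    // WHATWG URL parsing; non-URLs throw, and protocol carries the scheme.
    return new URL(value).protocol === "https:";
  } catch {
    return false;
  }
}
```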
In `@scripts/build-test-sbf.sh`:
- Around line 10-12: The current check uses -f on "$OUT_DIR/mpl_core_program.so"
which treats zero-byte files as present and allows truncated dumps to be reused;
change the existence test to ensure the dumped program is non-empty (e.g., use
-s or an equivalent size check) so that if "$OUT_DIR/mpl_core_program.so" is
zero bytes the script will re-run solana program dump (refer to the OUT_DIR
variable and the mpl_core_program.so output name in the dump step).
In `@scripts/devnet/common.ts`:
- Around line 214-235: The reserve flow must retry when next_token_id advances:
wrap the fetchGlobalConfig -> compute tokenId -> compute PDAs (globalConfigPda,
reservationPda) -> createInstruction("reserve_token_id", ...) ->
sendInstructions sequence in a small retry loop that re-fetches the latest
global config and recomputes tokenId/reservation on contention; detect the PDA
seed/init failure from sendInstructions (or catch the transaction error), and on
that error refetch with a short backoff and retry up to a max attempts (e.g.,
3-5) before bubbling the error; ensure you continue to return the final
signature, tokenId, reservation and the refreshed globalConfig (use
fetchGlobalConfig, globalConfigPda, reservationPda, createInstruction,
sendInstructions and nextTokenId names to locate the code).
- Around line 68-77: The loadOrCreateKeypair function has a TOCTOU and
permission issue: replace the existsSync + writeFileSync sequence with an atomic
exclusive file creation using fs.openSync or fs.writeFile with flag 'wx' and
mode 0o600 when writing the generated secret key; keep mkdirSync beforehand to
ensure the directory exists, and if the write fails with EEXIST (another process
created the file concurrently) catch that error and call
loadKeypair(absolutePath) as a fallback; ensure the secret is serialized the
same way (JSON Array.from(keypair.secretKey)) and return the generated Keypair
on success.
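The retry shape suggested for the reserve flow above can be sketched generically. The wrapper and its parameters are illustrative, not code from the repo; the caller's `attempt` closure is expected to re-fetch the global config and recompute the PDA on each call:

```typescript
// Illustrative retry helper: re-run the attempt with linear backoff up to
// maxAttempts, rethrowing the last error when every attempt fails.
async function withRetry<T>(
  attempt: () => Promise<T>,
  maxAttempts = 3,
  backoffMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < maxAttempts; i++) {
    try {
      // Each call should refetch the latest global config and recompute the
      // tokenId / reservation PDA so contention is resolved with fresh state.
      return await attempt();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffMs * (i + 1)));
    }
  }
  throw lastError;
}
```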
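The atomic-create fix for `loadOrCreateKeypair` can be sketched like this. The helper name is illustrative, and the `loadKeypair` fallback from the comment is represented here by re-reading the file:

```typescript
import { mkdirSync, writeFileSync, readFileSync } from "node:fs";
import { dirname } from "node:path";

// Write the secret key with the exclusive "wx" flag and 0o600 permissions.
// If another process created the file first (EEXIST), read back what it
// wrote instead of overwriting it.
function writeSecretKeyAtomically(absolutePath: string, secretKey: Uint8Array): Uint8Array {
  mkdirSync(dirname(absolutePath), { recursive: true });
  try {
    writeFileSync(absolutePath, JSON.stringify(Array.from(secretKey)), {
      flag: "wx",
      mode: 0o600,
    });
    return secretKey;
  } catch (err) {
    if ((err as { code?: string }).code === "EEXIST") {
      return Uint8Array.from(JSON.parse(readFileSync(absolutePath, "utf8")));
    }
    throw err;
  }
}
```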
In `@scripts/devnet/mint.ts`:
- Around line 20-45: After minting, don't re-use the locally reconstructed URI;
instead read and decode the minted asset's on-chain metadata and validate the
URI actually written there. After sendInstructions returns, fetch the
token/metadata account for asset.publicKey (the same account used by
mintDoomIndexNftInstruction), parse the on-chain metadata blob to extract the
stored URI field, and use that extracted URI for the HTTP fetch/validation
(replace usage of buildMetadataUri/metadataUri with the on-chain value); ensure
you still handle fetch errors (metadataResponse.ok) and throw the same formatted
error if the on-chain URI fetch fails.
- Around line 54-64: The current HEAD checks for metadata.image and
metadata.animation_url treat non-OK HEAD responses as definitive failures;
update the logic so that after the initial fetch(..., { method: "HEAD" }) for
metadata.image and metadata.animation_url you treat 405/403 (and other
non-supporting statuses) as inconclusive and retry with a fetch(..., { method:
"GET" }) before throwing. Concretely: keep the existing
imageResponse/animationResponse HEAD requests, if response.ok continue, if
response.status is 405 or 403 (or if HEAD fails to return ok but not a network
error) then perform a GET and check its .ok; only throw an Error using the
response.status/statusText when both HEAD and the fallback GET are not ok; apply
this change to the image and animation_url checks (imageResponse,
animationResponse, metadata.image, metadata.animation_url).
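The HEAD-then-GET fallback described above can be isolated into a small helper. This is a sketch: `fetchLike` stands in for the global `fetch` so the logic can be exercised without a network, and the function name is illustrative:

```typescript
type FetchLike = (
  url: string,
  init: { method: string },
) => Promise<{ ok: boolean; status: number }>;

// Try HEAD first; treat 405/403 as "server rejects HEAD" and fall back to
// GET before declaring the URL unreachable.
async function urlIsReachable(url: string, fetchLike: FetchLike): Promise<boolean> {
  const head = await fetchLike(url, { method: "HEAD" });
  if (head.ok) return true;
  if (head.status === 405 || head.status === 403) {
    const get = await fetchLike(url, { method: "GET" });
    return get.ok;
  }
  return false;
}
```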
In `@scripts/test-contract-v1.sh`:
- Around line 4-7: The cargo test invocation currently runs from the caller's
CWD and not ROOT_DIR, causing workspace resolution failures; update the script
so the test runs from ROOT_DIR (e.g. change the cargo invocation that uses
BPF_OUT_DIR and cargo test -p tests --lib -- --test-threads=1 to run with cwd
set to "$ROOT_DIR" — either by cd "$ROOT_DIR" && ... or wrapping the command in
a subshell like (cd "$ROOT_DIR" && BPF_OUT_DIR=... cargo test ...)) so tests
always resolve the workspace correctly; reference ROOT_DIR and the existing
cargo test line when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3a953534-0933-4539-a5e8-ee59084f4934
⛔ Files ignored due to path filters (1)

- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (29)

- .gitignore
- docs/DOOM_INDEX_NFT_MINT_REQUIREMENTS.md
- package.json
- programs/doom-nft-program/Cargo.toml
- programs/doom-nft-program/src/constants.rs
- programs/doom-nft-program/src/error.rs
- programs/doom-nft-program/src/events.rs
- programs/doom-nft-program/src/instructions/initialize_collection.rs
- programs/doom-nft-program/src/instructions/initialize_global_config.rs
- programs/doom-nft-program/src/instructions/mint_doom_index_nft.rs
- programs/doom-nft-program/src/instructions/mod.rs
- programs/doom-nft-program/src/instructions/reserve_token_id.rs
- programs/doom-nft-program/src/instructions/set_mint_paused.rs
- programs/doom-nft-program/src/instructions/set_upgrade_authority.rs
- programs/doom-nft-program/src/instructions/transfer_admin.rs
- programs/doom-nft-program/src/instructions/update_base_metadata_url.rs
- programs/doom-nft-program/src/lib.rs
- programs/doom-nft-program/src/state/global_config.rs
- programs/doom-nft-program/src/state/mint_reservation.rs
- programs/doom-nft-program/src/state/mod.rs
- programs/doom-nft-program/src/utils.rs
- scripts/build-test-sbf.sh
- scripts/devnet/common.ts
- scripts/devnet/init.ts
- scripts/devnet/mint.ts
- scripts/devnet/reserve.ts
- scripts/test-contract-v1.sh
- tests/Cargo.toml
- tests/src/lib.rs
```rust
#[derive(Accounts)]
pub struct SetUpgradeAuthority<'info> {
    #[account(
        mut,
        seeds = [GLOBAL_CONFIG_SEED],
        bump = global_config.bump,
        has_one = upgrade_authority @ DoomNftProgramError::Unauthorized
    )]
    pub global_config: Account<'info, GlobalConfig>,

    pub upgrade_authority: Signer<'info>,
}

pub fn process_set_upgrade_authority(
    ctx: Context<SetUpgradeAuthority>,
    new_upgrade_authority: Pubkey,
) -> Result<()> {
    let global_config = &mut ctx.accounts.global_config;
    let old_upgrade_authority = global_config.upgrade_authority;
    global_config.upgrade_authority = new_upgrade_authority;

    emit!(UpgradeAuthorityUpdated {
        old_upgrade_authority,
        new_upgrade_authority,
    });

    Ok(())
}
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Expectation: `upgrade_authority` is only used here, while other privileged
# instructions still gate on `admin`.
rg -n -C2 'has_one = (admin|upgrade_authority)|global_config\.upgrade_authority' \
  programs/doom-nft-program/src/instructions \
  programs/doom-nft-program/src/state
```

Repository: doom-protocol/doom-nft-program
Length of output: 4067
🏁 Script executed:

```bash
# List all instruction files to ensure we didn't miss any
git ls-files programs/doom-nft-program/src/instructions/ | head -20
```

Repository: doom-protocol/doom-nft-program
Length of output: 653
🏁 Script executed:

```bash
# Broader search for upgrade_authority usage across entire codebase
rg -n 'upgrade_authority' programs/doom-nft-program/src/
```

Repository: doom-protocol/doom-nft-program
Length of output: 2166
🏁 Script executed:

```bash
# Check the GlobalConfig struct definition to understand the field
cat -n programs/doom-nft-program/src/state/global_config.rs
```

Repository: doom-protocol/doom-nft-program
Length of output: 527
🏁 Script executed:

```bash
# Check mint_doom_index_nft.rs for any authorization checks
cat -n programs/doom-nft-program/src/instructions/mint_doom_index_nft.rs
```

Repository: doom-protocol/doom-nft-program
Length of output: 4498
🏁 Script executed:

```bash
# Check reserve_token_id.rs for any authorization checks
cat -n programs/doom-nft-program/src/instructions/reserve_token_id.rs
```

Repository: doom-protocol/doom-nft-program
Length of output: 2047
`upgrade_authority` is not driving any privileged path.

In the reviewed instruction set, state-changing privileged flows all gate on `admin`, and this instruction only rewrites the stored field. `upgrade_authority` is never referenced to authorize any operation; it is defined in `GlobalConfig`, initialized once, and can be rotated, but no instruction checks it to grant permissions. Either wire this role into the intended governance paths or remove it so the API does not imply authority it does not provide.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@programs/doom-nft-program/src/instructions/set_upgrade_authority.rs` around
lines 8 - 35, The SetUpgradeAuthority instruction
(process_set_upgrade_authority) updates GlobalConfig.upgrade_authority but that
field is never used to gate any privileged flows; either wire it into the
intended privilege checks (e.g., replace or supplement existing admin checks in
instructions that should be governed by upgrade_authority by adding has_one =
upgrade_authority or explicit signer requirement checking
ctx.accounts.upgrade_authority) or remove the field and the SetUpgradeAuthority
instruction and UpgradeAuthorityUpdated event to avoid exposing a misleading
API; locate GlobalConfig, SetUpgradeAuthority, process_set_upgrade_authority,
and any instructions that currently gate on admin and decide whether to switch
those has_one checks to reference upgrade_authority or delete the unused field
and related instruction/event.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.github/workflows/ci.yml`:
- Around line 27-30: Replace the current "Install Solana CLI" step that runs the
curl installer and appends to GITHUB_PATH with the official
solana-developers/github-actions/setup-all action (used in place of the curl
command), pin the action to a commit SHA, and pass an explicit pinned Solana
version (not "stable") to ensure reproducible installs and enable the action's
caching/toolchain options to avoid reinstalling on every run.
- Add instruction tests (init, mint, reserve, admin)
- Add test context and instruction helpers
- Update DOOM_INDEX_NFT_MINT_REQUIREMENTS, PRODUCT, README
- Adjust Anchor.toml, lib.rs, devnet common.ts
- Update .agents/memory/todo.md

Made-with: Cursor
Actionable comments posted: 3
🧹 Nitpick comments (6)
tests/src/instructions/set_mint_paused.rs (1)
30-36: Consider adding an unpause verification.

The test validates that pausing blocks reservations, but doesn't verify that unpausing re-enables them. This would provide complete round-trip coverage of the pause functionality.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/instructions/set_mint_paused.rs` around lines 30 - 36, The test currently asserts that reserve_token_id_ix (via process_instruction) fails while paused; add a round-trip by unpausing the mint and verifying reservations succeed: invoke the instruction that unpauses the mint (e.g., the set_mint_paused/unpause instruction used in your program) via process_instruction, assert it returns Ok, then call reserve_token_id_ix again and assert it succeeds (no error) to confirm unpause re-enables reservations; reference reserve_token_id_ix and process_instruction and the set_mint_paused/unpause instruction when you add these assertions.

docs/DOOM_INDEX_NFT_MINT_REQUIREMENTS.md (1)
90-99: Make the state schema sections normative.

Because this document is the V1 contract requirements spec, labeling these account layouts as "Recommended fields" makes the wire/state interface sound optional. If these fields are part of the committed V1 shape, "Required fields" would be clearer.
Also applies to: 109-114
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/DOOM_INDEX_NFT_MINT_REQUIREMENTS.md` around lines 90 - 99, Change the state schema headings from non-normative to normative by replacing the label "Recommended fields" with "Required fields" for the listed account layouts (the block containing `admin: Pubkey`, `upgrade_authority: Pubkey`, `next_token_id: u64`, `mint_paused: bool`, `base_metadata_url: String`, `collection: Pubkey`, `collection_update_authority: Pubkey`, `bump: u8`) and repeat the same replacement for the second occurrence referenced in the comment (the other block around lines 109-114) so the V1 contract requirements explicitly assert these fields as required in the committed state shape.

tests/src/instructions/initialize_global_config.rs (1)
24-29: `sets_defaults` should cover the zeroed collection fields too. The initializer also sets `collection` and `collection_update_authority` to `Pubkey::default()`. Right now this test can still pass if either of those defaults regresses, even though they are part of the initialized state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/src/instructions/initialize_global_config.rs` around lines 24 - 29, Add assertions to the existing test that verifies initialized defaults (the one calling fetch_global_config(&mut context).await) to also check that config.collection and config.collection_update_authority equal Pubkey::default() (or Default::default()); update the assertions in the test (the "sets_defaults"/initialize_global_config test) alongside the existing checks for admin, upgrade_authority, next_token_id, mint_paused, and base_metadata_url so the zeroed collection fields are explicitly verified.
tests/src/instructions/transfer_admin.rs (1)
34-37: Also assert that `upgrade_authority` stays unchanged. This is the other half of the authority-separation contract. Without that extra assertion, `transfer_admin` could accidentally overwrite `upgrade_authority` and this test would still pass.
Verify each finding against the current code and only fix it if needed. In `@tests/src/instructions/transfer_admin.rs` around lines 34 - 37, Before calling transfer_admin, capture the current upgrade authority from fetch_global_config (e.g., let prev_upgrade = fetch_global_config(&mut context).await.upgrade_authority), then after the transfer assert that fetch_global_config(&mut context).await.upgrade_authority == prev_upgrade so the test verifies upgrade_authority remains unchanged; reference the existing fetch_global_config, transfer_admin, next_admin.pubkey(), and upgrade_authority identifiers when adding this check.
tests/src/instructions/mint_doom_index_nft.rs (1)
84-92: Assert the no-reservation failure more tightly.
`BanksClientError::TransactionError(_)` will also pass if minting breaks for an unrelated reason, so this test currently only proves “something failed.” Pinning the concrete error for the missing-reservation case would make the negative coverage much more trustworthy.
Verify each finding against the current code and only fix it if needed. In `@tests/src/instructions/mint_doom_index_nft.rs` around lines 84 - 92, The test currently only asserts a generic failure from process_instruction/mint_doom_index_nft_ix by matching BanksClientError::TransactionError(_); instead, assert the specific program error for "missing reservation" by unwrapping the TransactionError to its InstructionError and checking for InstructionError::Custom(code) equals the program's missing-reservation error code (the enum/constant your program exposes, e.g., MissingReservation or NO_RESERVATION as u32). Use pattern matching on BanksClientError::TransactionError(TransactionError::InstructionError(_, InstructionError::Custom(code))) or extract the inner error and assert_eq!(code, expected_code) so the test fails only for unrelated errors.
tests/src/test_context.rs (1)
182-197: `reserve_token_id_ix` should not take an arbitrary `token_id`. The on-chain instruction derives the reservation PDA from `global_config.next_token_id`, not from instruction data. This helper makes every caller predict chain state up front, so a stale hard-coded ID produces a failure unrelated to the behavior the test is trying to exercise.
Verify each finding against the current code and only fix it if needed. In `@tests/src/test_context.rs` around lines 182 - 197, The helper reserve_token_id_ix currently accepts an arbitrary token_id which lets tests pass a stale ID; change it to not take token_id and instead derive the reservation PDA from the on-chain global config's next_token_id: call global_config_pda(), load/deserialize the GlobalConfig account to read its next_token_id (using the project's GlobalConfig deserializer/loader), then call reservation_pda(next_token_id) to build the reservation Pubkey; keep the rest of the Instruction construction the same and update any callers to stop passing token_id (or pass the reservation Pubkey if you prefer that signature change).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Anchor.toml`:
- Line 2: The Anchor.toml currently sets package_manager = "npm" which conflicts
with package.json's "packageManager": "bun@1.3.10"; update the package_manager
value in Anchor.toml to "bun" so the Anchor CLI uses Bun for dependency
management and script execution (verify package_manager = "bun" in Anchor.toml
and keep package.json's packageManager unchanged).
In `@README.md`:
- Line 3: Update the README text that currently reads "Metaplex Core based" to
use a hyphenated compound adjective "Metaplex Core-based" so the sentence
becomes "...targets a Metaplex Core-based mint flow with deterministic metadata
URIs."; edit the string in README.md accordingly.
In `@tests/src/test_context.rs`:
- Around line 24-37: The tests are forcing the native processor by calling
prefer_bpf(false) in program_test, so update the program_test setup to use the
compiled SBF artifact: remove or change the prefer_bpf(false) call to
prefer_bpf(true) (or simply delete the call to let ProgramTest prefer BPF by
default), ensure BPF_OUT_DIR is still set as done with manifest_dir/bpf_out_dir,
and keep the existing test.add_program("doom_nft_program",
doom_nft_program::id(), processor!(doom_nft_program_test_processor)) and
add_upgradeable_program_to_genesis calls unchanged so the ProgramTest will load
the SBF binary instead of the native processor.
---
Nitpick comments:
In `@docs/DOOM_INDEX_NFT_MINT_REQUIREMENTS.md`:
- Around line 90-99: Change the state schema headings from non-normative to
normative by replacing the label "Recommended fields" with "Required fields" for
the listed account layouts (the block containing `admin: Pubkey`,
`upgrade_authority: Pubkey`, `next_token_id: u64`, `mint_paused: bool`,
`base_metadata_url: String`, `collection: Pubkey`, `collection_update_authority:
Pubkey`, `bump: u8`) and repeat the same replacement for the second occurrence
referenced in the comment (the other block around lines 109-114) so the V1
contract requirements explicitly assert these fields as required in the
committed state shape.
In `@tests/src/instructions/initialize_global_config.rs`:
- Around line 24-29: Add assertions to the existing test that verifies
initialized defaults (the one calling fetch_global_config(&mut context).await)
to also check that config.collection and config.collection_update_authority
equal Pubkey::default() (or Default::default()); update the assertions in the
test (the "sets_defaults"/initialize_global_config test) alongside the existing
checks for admin, upgrade_authority, next_token_id, mint_paused, and
base_metadata_url so the zeroed collection fields are explicitly verified.
In `@tests/src/instructions/mint_doom_index_nft.rs`:
- Around line 84-92: The test currently only asserts a generic failure from
process_instruction/mint_doom_index_nft_ix by matching
BanksClientError::TransactionError(_); instead, assert the specific program
error for "missing reservation" by unwrapping the TransactionError to its
InstructionError and checking for InstructionError::Custom(code) equals the
program's missing-reservation error code (the enum/constant your program
exposes, e.g., MissingReservation or NO_RESERVATION as u32). Use pattern
matching on
BanksClientError::TransactionError(TransactionError::InstructionError(_,
InstructionError::Custom(code))) or extract the inner error and assert_eq!(code,
expected_code) so the test fails only for unrelated errors.
In `@tests/src/instructions/set_mint_paused.rs`:
- Around line 30-36: The test currently asserts that reserve_token_id_ix (via
process_instruction) fails while paused; add a round-trip by unpausing the mint
and verifying reservations succeed: invoke the instruction that unpauses the
mint (e.g., the set_mint_paused/unpause instruction used in your program) via
process_instruction, assert it returns Ok, then call reserve_token_id_ix again
and assert it succeeds (no error) to confirm unpause re-enables reservations;
reference reserve_token_id_ix and process_instruction and the
set_mint_paused/unpause instruction when you add these assertions.
In `@tests/src/instructions/transfer_admin.rs`:
- Around line 34-37: Before calling transfer_admin, capture the current upgrade
authority from fetch_global_config (e.g., let prev_upgrade =
fetch_global_config(&mut context).await.upgrade_authority), then after the
transfer assert that fetch_global_config(&mut context).await.upgrade_authority
== prev_upgrade so the test verifies upgrade_authority remains unchanged;
reference the existing fetch_global_config, transfer_admin, next_admin.pubkey(),
and upgrade_authority identifiers when adding this check.
In `@tests/src/test_context.rs`:
- Around line 182-197: The helper reserve_token_id_ix currently accepts an
arbitrary token_id which lets tests pass a stale ID; change it to not take
token_id and instead derive the reservation PDA from the on-chain global
config's next_token_id: call global_config_pda(), load/deserialize the
GlobalConfig account to read its next_token_id (using the project's GlobalConfig
deserializer/loader), then call reservation_pda(next_token_id) to build the
reservation Pubkey; keep the rest of the Instruction construction the same and
update any callers to stop passing token_id (or pass the reservation Pubkey if
you prefer that signature change).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 33923577-f6e8-4186-a607-9c53461c0880
📒 Files selected for processing (19)
- .agents/memory/todo.md
- .github/workflows/ci.yml
- Anchor.toml
- README.md
- docs/DOOM_INDEX_NFT_MINT_REQUIREMENTS.md
- docs/PRODUCT.md
- programs/doom-nft-program/src/lib.rs
- scripts/devnet/common.ts
- tests/src/instructions/initialize_collection.rs
- tests/src/instructions/initialize_global_config.rs
- tests/src/instructions/mint_doom_index_nft.rs
- tests/src/instructions/mod.rs
- tests/src/instructions/reserve_token_id.rs
- tests/src/instructions/set_mint_paused.rs
- tests/src/instructions/set_upgrade_authority.rs
- tests/src/instructions/transfer_admin.rs
- tests/src/instructions/update_base_metadata_url.rs
- tests/src/lib.rs
- tests/src/test_context.rs
✅ Files skipped from review due to trivial changes (1)
- docs/PRODUCT.md
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci.yml
```diff
@@ -1,19 +1,23 @@
 [toolchain]
-package_manager = "bun"
+package_manager = "npm"
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if package.json scripts use bun
grep -n "bun" package.json | head -20
# Check README for package manager references
grep -n "bun\|npm" README.md | head -20
```

Repository: doom-protocol/doom-nft-program
Length of output: 798
Change package_manager to "bun" to match project configuration.
Anchor.toml sets package_manager = "npm", but package.json declares "packageManager": "bun@1.3.10" and all scripts and README instructions use Bun exclusively. This mismatch will cause Anchor CLI to invoke npm instead of bun for dependency management and script execution.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Anchor.toml` at line 2, The Anchor.toml currently sets package_manager =
"npm" which conflicts with package.json's "packageManager": "bun@1.3.10"; update
the package_manager value in Anchor.toml to "bun" so the Anchor CLI uses Bun for
dependency management and script execution (verify package_manager = "bun" in
Anchor.toml and keep package.json's packageManager unchanged).
```diff
 # Doom NFT Program

-This is an NFT (Non-Fungible Token) program that runs on the Solana blockchain, implemented using the Anchor framework.
+This repository contains the Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor and currently targets a Metaplex Core based mint flow with deterministic metadata URIs.
```
Minor grammar fix: use a hyphen in "Core-based".
📝 Proposed fix

```diff
-This repository contains the Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor and currently targets a Metaplex Core based mint flow with deterministic metadata URIs.
+This repository contains the Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor and currently targets a Metaplex Core-based mint flow with deterministic metadata URIs.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
This repository contains the Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor and currently targets a Metaplex Core-based mint flow with deterministic metadata URIs.
```
🧰 Tools
🪛 LanguageTool
[grammar] ~3-~3: Use a hyphen to join words.
Context: ...or and currently targets a Metaplex Core based mint flow with deterministic metad...
(QB_NEW_EN_HYPHEN)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 3, Update the README text that currently reads "Metaplex
Core based" to use a hyphenated compound adjective "Metaplex Core-based" so the
sentence becomes "...targets a Metaplex Core-based mint flow with deterministic
metadata URIs."; edit the string in README.md accordingly.
```rust
pub fn program_test() -> ProgramTest {
    let manifest_dir = std::path::PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    let bpf_out_dir = manifest_dir.join("..").join("target").join("test-sbf");
    std::env::set_var("BPF_OUT_DIR", &bpf_out_dir);

    let mut test = ProgramTest::default();
    test.prefer_bpf(false);
    test.add_program(
        "doom_nft_program",
        doom_nft_program::id(),
        processor!(doom_nft_program_test_processor),
    );
    test.add_upgradeable_program_to_genesis("mpl_core_program", &mpl_core::ID);
    test
}
```
🧩 Analysis chain
🌐 Web query:
In solana-program-test 2.2.x, when `ProgramTest::add_program` is given both a processor and a compiled program in `BPF_OUT_DIR`, does `prefer_bpf(false)` force execution through the native processor instead of the SBF binary?
💡 Result:
Yes. In solana-program-test 2.2.x, ProgramTest::add_program dispatches on (self.prefer_bpf, program_file, builtin_function). When you call prefer_bpf(false) and you also pass Some(processor) as builtin_function, it will select the native/builtin processor arm (false, _, Some(builtin_function)) even if a compiled .so is found via BPF_OUT_DIR/SBF_OUT_DIR (or other search paths). Only when prefer_bpf is true does it prefer the .so ((true, Some(file), _)). [1]
Source: solana-program-test 2.2.6 add_program implementation (same behavior across 2.2.x). [1]
[1] (docs.rs)
Tests are bypassing the compiled SBF program.
Line 30 sets prefer_bpf(false), which forces ProgramTest to use the native processor even though a compiled binary exists in BPF_OUT_DIR. This means the suite doesn't exercise the deployable SBF artifact, allowing SBF-specific regressions to slip through undetected.
Fix

```diff
-    test.prefer_bpf(false);
+    test.prefer_bpf(true);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn program_test() -> ProgramTest {
    let manifest_dir = std::path::PathBuf::from(env!("CARGO_MANIFEST_DIR"));
    let bpf_out_dir = manifest_dir.join("..").join("target").join("test-sbf");
    std::env::set_var("BPF_OUT_DIR", &bpf_out_dir);

    let mut test = ProgramTest::default();
    test.prefer_bpf(true);
    test.add_program(
        "doom_nft_program",
        doom_nft_program::id(),
        processor!(doom_nft_program_test_processor),
    );
    test.add_upgradeable_program_to_genesis("mpl_core_program", &mpl_core::ID);
    test
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/src/test_context.rs` around lines 24 - 37, The tests are forcing the
native processor by calling prefer_bpf(false) in program_test, so update the
program_test setup to use the compiled SBF artifact: remove or change the
prefer_bpf(false) call to prefer_bpf(true) (or simply delete the call to let
ProgramTest prefer BPF by default), ensure BPF_OUT_DIR is still set as done with
manifest_dir/bpf_out_dir, and keep the existing
test.add_program("doom_nft_program", doom_nft_program::id(),
processor!(doom_nft_program_test_processor)) and
add_upgradeable_program_to_genesis calls unchanged so the ProgramTest will load
the SBF binary instead of the native processor.
- Add scripts/devnet/common.test.ts
- Add tests/fixtures (mpl_core_program.so, README)
- Update Anchor.toml, README, build-test-sbf.sh, common.ts
- Update .agents/memory/todo.md

Made-with: Cursor
Actionable comments posted: 1
♻️ Duplicate comments (3)
README.md (1)
3-3: ⚠️ Potential issue | 🟡 Minor
Hyphenate “Core-based” in the overview sentence.
Line 3 still uses “Metaplex Core based” as an unhyphenated compound modifier. Please change it to “Metaplex Core-based”.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 3, Update the overview sentence that reads "This repository contains the Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor and currently targets a Metaplex Core based mint flow with deterministic metadata URIs." by hyphenating the compound modifier: change "Metaplex Core based" to "Metaplex Core-based" so the sentence reads "...targets a Metaplex Core-based mint flow...".
scripts/devnet/common.ts (2)
72-82: ⚠️ Potential issue | 🟠 Major
TOCTOU race condition and insecure file permissions remain unaddressed.
This function still has the race condition between `existsSync` and `writeFileSync`, and writes the secret key without explicit `0600` permissions. Use atomic exclusive creation with the `wx` flag and `mode: 0o600`, catching `EEXIST` to fall back to `loadKeypair`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/devnet/common.ts` around lines 72 - 82, The loadOrCreateKeypair function has a TOCTOU issue and doesn't set secure file permissions; change it to attempt atomic exclusive file creation when writing the new keypair by opening/writing the file with flags "wx" and mode 0o600 (or equivalent) so creation fails if the file already exists, and catch the EEXIST error to call loadKeypair(absolutePath) instead of overwriting; ensure you still create parent dirs beforehand (mkdirSync(dirname(absolutePath), { recursive: true })) and return the generated Keypair only after a successful exclusive write.
208-240: ⚠️ Potential issue | 🟠 Major
Reservation still lacks retry logic for concurrent token ID contention.
If another reservation lands after fetching `nextTokenId` but before the transaction confirms, this helper fails with a PDA seed mismatch. Wrap the fetch/derive/send flow in a retry loop that refetches `globalConfig` on contention errors.
Verify each finding against the current code and only fix it if needed. In `@scripts/devnet/common.ts` around lines 208 - 240, The reserveTokenId helper (function reserveTokenId) can fail with a PDA seed mismatch when another reservation increments nextTokenId between fetching globalConfig (fetchGlobalConfig) and sending the transaction (createInstruction/reservationPda/globalConfigPda); wrap the fetch-derive-send flow in a retry loop: attempt N times (e.g., 3-5), on detection of a PDA seed mismatch or transaction failure indicating “seed”/“PDA”/“account already exists” contention refetch fetchGlobalConfig, recompute tokenId and reservationPda, rebuild the instruction (createInstruction) and resend (sendInstructions); include a short backoff between retries and throw the last error if retries exhausted.
🧹 Nitpick comments (1)
scripts/devnet/common.ts (1)
144-146: Consider dynamic buffer allocation for args encoding. The fixed 1024-byte buffer works for current instructions but could cause issues if future instructions have larger arguments. Using `argsLayout.getSpan(args)` (if available) or a dynamically-sized approach would be safer.
Verify each finding against the current code and only fix it if needed. In `@scripts/devnet/common.ts` around lines 144 - 146, The code uses a fixed 1024-byte buffer (argsBuffer) for argsLayout.encode which can overflow for larger argument sets; replace the fixed Buffer.alloc(1024) with a dynamically sized buffer by first computing the required span (use argsLayout.getSpan(args) if available, or call argsLayout.encode to get the required length then allocate), then allocate a Buffer of that exact size and call argsLayout.encode into it, and finally build data with Buffer.concat([discriminator, argsBuffer]) (or subarray of the correctly sized buffer) so encoding never truncates or wastes space.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/devnet/common.ts`:
- Around line 308-315: In resolveConfiguredPath, don't replace a leading tilde
with an empty string when process.env.HOME is undefined because that produces
malformed paths; instead detect when filePath begins with "~" (or
"^~(?=$|\/|\\)") and if process.env.HOME is undefined throw a clear error (e.g.,
"HOME environment variable not set, cannot expand ~ in path") before computing
expandedPath, otherwise proceed to replace and resolve as before; update the
expandedPath logic and error message near the resolveConfiguredPath function to
ensure callers get a clear failure instead of an invalid path.
---
Duplicate comments:
In `@README.md`:
- Line 3: Update the overview sentence that reads "This repository contains the
Solana program for minting DOOM INDEX artworks as NFTs. It is built with Anchor
and currently targets a Metaplex Core based mint flow with deterministic
metadata URIs." by hyphenating the compound modifier: change "Metaplex Core
based" to "Metaplex Core-based" so the sentence reads "...targets a Metaplex
Core-based mint flow...".
In `@scripts/devnet/common.ts`:
- Around line 72-82: The loadOrCreateKeypair function has a TOCTOU issue and
doesn't set secure file permissions; change it to attempt atomic exclusive file
creation when writing the new keypair by opening/writing the file with flags
"wx" and mode 0o600 (or equivalent) so creation fails if the file already
exists, and catch the EEXIST error to call loadKeypair(absolutePath) instead of
overwriting; ensure you still create parent dirs beforehand
(mkdirSync(dirname(absolutePath), { recursive: true })) and return the generated
Keypair only after a successful exclusive write.
- Around line 208-240: The reserveTokenId helper (function reserveTokenId) can
fail with a PDA seed mismatch when another reservation increments nextTokenId
between fetching globalConfig (fetchGlobalConfig) and sending the transaction
(createInstruction/reservationPda/globalConfigPda); wrap the fetch-derive-send
flow in a retry loop: attempt N times (e.g., 3-5), on detection of a PDA seed
mismatch or transaction failure indicating “seed”/“PDA”/“account already exists”
contention refetch fetchGlobalConfig, recompute tokenId and reservationPda,
rebuild the instruction (createInstruction) and resend (sendInstructions);
include a short backoff between retries and throw the last error if retries
exhausted.
---
Nitpick comments:
In `@scripts/devnet/common.ts`:
- Around line 144-146: The code uses a fixed 1024-byte buffer (argsBuffer) for
argsLayout.encode which can overflow for larger argument sets; replace the fixed
Buffer.alloc(1024) with a dynamically sized buffer by first computing the
required span (use argsLayout.getSpan(args) if available, or call
argsLayout.encode to get the required length then allocate), then allocate a
Buffer of that exact size and call argsLayout.encode into it, and finally build
data with Buffer.concat([discriminator, argsBuffer]) (or subarray of the
correctly sized buffer) so encoding never truncates or wastes space.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 27ff7034-18a9-40c4-8b27-0043e5fcc04d
⛔ Files ignored due to path filters (1)
`tests/fixtures/mpl_core_program.so` is excluded by `!**/*.so`
📒 Files selected for processing (8)
- .agents/memory/todo.md
- Anchor.toml
- README.md
- scripts/build-test-sbf.sh
- scripts/devnet/common.test.ts
- scripts/devnet/common.ts
- tests/fixtures/README.md
- tests/fixtures/mpl_core_program.so.sha256
✅ Files skipped from review due to trivial changes (1)
- tests/fixtures/mpl_core_program.so.sha256
🚧 Files skipped from review as they are similar to previous changes (2)
- Anchor.toml
- scripts/build-test-sbf.sh
```typescript
function resolveConfiguredPath(filePath: string, anchorTomlPath: string): string {
  const expandedPath = filePath.replace(/^~(?=$|\/|\\)/, process.env.HOME ?? "");
  if (isAbsolute(expandedPath)) {
    return expandedPath;
  }

  return resolve(dirname(anchorTomlPath), expandedPath);
}
```
Fallback to empty string when HOME is undefined produces malformed paths.
If HOME is undefined, the tilde is replaced with an empty string, resulting in paths like /.config/solana/id.json instead of the intended user home directory path. Consider throwing an error when HOME is required but undefined.
🛡️ Proposed fix
```diff
 function resolveConfiguredPath(filePath: string, anchorTomlPath: string): string {
-  const expandedPath = filePath.replace(/^~(?=$|\/|\\)/, process.env.HOME ?? "");
+  const expandedPath = filePath.replace(/^~(?=$|\/|\\)/, () => {
+    if (!process.env.HOME) {
+      throw new Error("HOME environment variable is not set");
+    }
+    return process.env.HOME;
+  });
   if (isAbsolute(expandedPath)) {
     return expandedPath;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/devnet/common.ts` around lines 308 - 315, In resolveConfiguredPath,
don't replace a leading tilde with an empty string when process.env.HOME is
undefined because that produces malformed paths; instead detect when filePath
begins with "~" (or "^~(?=$|\/|\\)") and if process.env.HOME is undefined throw
a clear error (e.g., "HOME environment variable not set, cannot expand ~ in
path") before computing expandedPath, otherwise proceed to replace and resolve
as before; update the expandedPath logic and error message near the
resolveConfiguredPath function to ensure callers get a clear failure instead of
an invalid path.
- Program: error codes, mint_doom_index_nft, transfer_admin, utils, lib
- Devnet: common/init/mint scripts, common.test.ts, mint.test.ts
- Tests: mint_doom_index_nft, set_mint_paused, transfer_admin, test_context
- Config: Cargo.lock, package.json, Cargo.toml, tsconfig, test-contract-v1.sh
- Update .agents/memory/todo.md

Made-with: Cursor
Summary
Verification
- `cargo clippy -p doom-nft-program --all-features --all-targets -- -D warnings`
- `./scripts/test-contract-v1.sh`
- `bun x tsc --noEmit`
- `bun run test`
New Features
Documentation
Tests
Chores