65 changes: 30 additions & 35 deletions rpc-integration-test/run.sh
@@ -1,51 +1,46 @@
 #!/bin/sh
 
-TESTDIR=/tmp/rust_bitcoincore_rpc_test
+set -e
 
-rm -rf ${TESTDIR}
-mkdir -p ${TESTDIR}/1 ${TESTDIR}/2
+TESTDIR=/tmp/rust_dashcore_rpc_test
 
-# To kill any remaining open bitcoind.
-killall -9 bitcoind
+rm -rf "${TESTDIR}"
+mkdir -p "${TESTDIR}/dash"
 
-bitcoind -regtest \
-    -datadir=${TESTDIR}/1 \
-    -port=12348 \
-    -server=0 \
-    -printtoconsole=0 &
-PID1=$!
+# Kill any remaining dashd to avoid port conflicts
+if command -v killall >/dev/null 2>&1; then
+    killall -9 dashd 2>/dev/null || true
+fi
 
-# Make sure it's listening on its p2p port.
-sleep 3
+# Start Dash Core on regtest using standard Dash RPC port 19898
+dashd -regtest \
+    -datadir="${TESTDIR}/dash" \
+    -rpcport=19898 \
+    -server=1 \
+    -txindex=1 \
+    -printtoconsole=0 &
+PID=$!
 
-BLOCKFILTERARG=""
-if bitcoind -version | grep -q "v0\.\(19\|2\)"; then
-    BLOCKFILTERARG="-blockfilterindex=1"
-fi
+# Allow time for startup
+sleep 5
 
-FALLBACKFEEARG=""
-if bitcoind -version | grep -q "v0\.2"; then
-    FALLBACKFEEARG="-fallbackfee=0.00001000"
-fi
+# Pre-create faucet wallet "main" so the test can fund addresses
+dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 -named createwallet wallet_name=main descriptors=false >/dev/null 2>&1 || true
 
-bitcoind -regtest $BLOCKFILTERARG $FALLBACKFEEARG \
-    -datadir=${TESTDIR}/2 \
-    -connect=127.0.0.1:12348 \
-    -rpcport=12349 \
-    -server=1 \
-    -txindex=1 \
-    -printtoconsole=0 &
-PID2=$!
+# Fund the faucet wallet with mature coins
+FAUCET_ADDR=$(dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 -rpcwallet=main getnewaddress)
+dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 generatetoaddress 110 "$FAUCET_ADDR" >/dev/null
 
-# Let it connect to the other node.
-sleep 5
+# Export per-node env vars expected by the test (both point to same node)
+export WALLET_NODE_RPC_URL="http://127.0.0.1:19898"
+export EVO_NODE_RPC_URL="http://127.0.0.1:19898"
+export WALLET_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"
+export EVO_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"
 
-RPC_URL=http://localhost:12349 \
-RPC_COOKIE=${TESTDIR}/2/regtest/.cookie \
-cargo run
+cargo run
 
 RESULT=$?
 
-kill -9 $PID1 $PID2
+kill -9 $PID 2>/dev/null || true
 
 exit $RESULT
Comment on lines +3 to 46
🛠️ Refactor suggestion

⚠️ Potential issue

Make the harness deterministic and safe: trap-based cleanup, readiness wait, and configurable RPC port.

As written, set -e will abort before cleanup if cargo run fails, leaving dashd running and causing flaky port conflicts. Also, sleep 5 is racy on CI, and hard-coding port 19898 risks collisions. Apply this consolidated patch:

-#!/bin/sh
+#!/bin/sh

-set -e
+set -eu

 TESTDIR=/tmp/rust_dashcore_rpc_test
+
+# Allow overriding the RPC port; default to Dash regtest typical port
+RPC_PORT="${RPC_PORT:-19898}"
+
+cleanup() {
+  if [ -n "${PID:-}" ] && kill -0 "$PID" 2>/dev/null; then
+    # Prefer graceful stop; fall back to TERM
+    dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport="${RPC_PORT}" stop >/dev/null 2>&1 \
+      || kill -TERM "$PID" 2>/dev/null || true
+    wait "$PID" 2>/dev/null || true
+  fi
+}
+trap 'cleanup' EXIT INT TERM

 rm -rf "${TESTDIR}"
 mkdir -p "${TESTDIR}/dash"

-# Kill any remaining dashd to avoid port conflicts
-if command -v killall >/dev/null 2>&1; then
-  killall -9 dashd 2>/dev/null || true
-fi
+# Remove any stray dashd that uses our test datadir (avoid nuking user daemons)
+if command -v pgrep >/dev/null 2>&1; then
+  PIDS="$(pgrep -f "dashd .* -datadir=${TESTDIR}/dash" || true)"
+  if [ -n "$PIDS" ]; then kill -9 $PIDS 2>/dev/null || true; fi
+fi

-# Start Dash Core on regtest using standard Dash RPC port 19898
+# Start Dash Core on regtest
 dashd -regtest \
   -datadir="${TESTDIR}/dash" \
-  -rpcport=19898 \
+  -rpcport="${RPC_PORT}" \
   -server=1 \
   -txindex=1 \
   -printtoconsole=0 &
 PID=$!

-# Allow time for startup
-sleep 5
+# Wait for RPC to come up
+rpc() { dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport="${RPC_PORT}" "$@"; }
+i=0
+until rpc getblockchaininfo >/dev/null 2>&1; do
+  i=$((i+1))
+  if [ "$i" -ge 120 ]; then
+    echo "dashd not responding on 127.0.0.1:${RPC_PORT} after 60s" >&2
+    exit 1
+  fi
+  sleep 0.5
+done

 # Pre-create faucet wallet "main" so the test can fund addresses
-dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 -named createwallet wallet_name=main descriptors=false >/dev/null 2>&1 || true
+rpc -named createwallet wallet_name=main descriptors=false >/dev/null 2>&1 || true

 # Fund the faucet wallet with mature coins
-FAUCET_ADDR=$(dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 -rpcwallet=main getnewaddress)
-dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport=19898 generatetoaddress 110 "$FAUCET_ADDR" >/dev/null
+FAUCET_ADDR="$(rpc -rpcwallet=main getnewaddress)"
+rpc generatetoaddress 110 "$FAUCET_ADDR" >/dev/null

 # Export per-node env vars expected by the test (both point to same node)
-export WALLET_NODE_RPC_URL="http://127.0.0.1:19898"
-export EVO_NODE_RPC_URL="http://127.0.0.1:19898"
+export WALLET_NODE_RPC_URL="http://127.0.0.1:${RPC_PORT}"
+export EVO_NODE_RPC_URL="http://127.0.0.1:${RPC_PORT}"
 export WALLET_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"
 export EVO_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"

-cargo run
+set +e
+cargo run
+RESULT=$?
+set -e
-
-RESULT=$?
-
-kill -9 $PID 2>/dev/null || true
-
-exit $RESULT
+exit $RESULT
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
#!/bin/sh
set -eu
TESTDIR=/tmp/rust_dashcore_rpc_test
# Allow overriding the RPC port; default to Dash regtest typical port
RPC_PORT="${RPC_PORT:-19898}"
cleanup() {
if [ -n "${PID:-}" ] && kill -0 "$PID" 2>/dev/null; then
# Prefer graceful stop; fall back to TERM
dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport="${RPC_PORT}" stop >/dev/null 2>&1 \
|| kill -TERM "$PID" 2>/dev/null || true
wait "$PID" 2>/dev/null || true
fi
}
trap 'cleanup' EXIT INT TERM
rm -rf "${TESTDIR}"
mkdir -p "${TESTDIR}/dash"
# Remove any stray dashd that uses our test datadir (avoid nuking user daemons)
if command -v pgrep >/dev/null 2>&1; then
PIDS="$(pgrep -f "dashd .* -datadir=${TESTDIR}/dash" || true)"
if [ -n "$PIDS" ]; then kill -9 $PIDS 2>/dev/null || true; fi
fi
# Start Dash Core on regtest
dashd -regtest \
-datadir="${TESTDIR}/dash" \
-rpcport="${RPC_PORT}" \
-server=1 \
-txindex=1 \
-printtoconsole=0 &
PID=$!
# Wait for RPC to come up
rpc() { dash-cli -regtest -datadir="${TESTDIR}/dash" -rpcport="${RPC_PORT}" "$@"; }
i=0
until rpc getblockchaininfo >/dev/null 2>&1; do
i=$((i+1))
if [ "$i" -ge 120 ]; then
echo "dashd not responding on 127.0.0.1:${RPC_PORT} after 60s" >&2
exit 1
fi
sleep 0.5
done
# Pre-create faucet wallet "main" so the test can fund addresses
rpc -named createwallet wallet_name=main descriptors=false >/dev/null 2>&1 || true
# Fund the faucet wallet with mature coins
FAUCET_ADDR="$(rpc -rpcwallet=main getnewaddress)"
rpc generatetoaddress 110 "$FAUCET_ADDR" >/dev/null
# Export per-node env vars expected by the test (both point to same node)
export WALLET_NODE_RPC_URL="http://127.0.0.1:${RPC_PORT}"
export EVO_NODE_RPC_URL="http://127.0.0.1:${RPC_PORT}"
export WALLET_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"
export EVO_NODE_RPC_COOKIE="${TESTDIR}/dash/regtest/.cookie"
set +e
cargo run
RESULT=$?
set -e
exit $RESULT
🤖 Prompt for AI Agents
In rpc-integration-test/run.sh around lines 3 to 46, the test harness currently
can leave dashd running on failures, uses a racy fixed sleep to wait for
readiness, and hard-codes RPC port 19898; change it to (1) set a TRAP-based
cleanup that kills the dashd PID on EXIT (and on INT/TERM) so dashd is always
cleaned up even on failures or set -e aborts, (2) replace the fixed sleep with a
readiness loop that polls dash-cli (or tests TCP socket) until RPC responds or a
short timeout elapses, failing if not ready, and (3) make the RPC port
configurable via an environment variable like RPC_PORT with a default (e.g.,
19898) and use that variable everywhere (dashd, dash-cli URLs, cookie paths) to
avoid hard-coded collisions; ensure kill commands tolerate missing PID and that
the script exits with the cargo run exit code after cleanup.

114 changes: 101 additions & 13 deletions rpc-integration-test/src/main.rs
@@ -40,8 +40,9 @@ use dashcore_rpc::json::QuorumType::LlmqTest;

const FAUCET_WALLET_NAME: &str = "main";
const TEST_WALLET_NAME: &str = "testwallet";
const DEFAULT_WALLET_NODE_RPC_URL: &str = "http://127.0.0.1:20002";
const DEFAULT_EVO_NODE_RPC_URL: &str = "http://127.0.0.1:20302";
// Dash regtest default RPC port is 19898. For mainnet/testnet use 9998/19998.
const DEFAULT_WALLET_NODE_RPC_URL: &str = "http://127.0.0.1:19898";
const DEFAULT_EVO_NODE_RPC_URL: &str = "http://127.0.0.1:19898";

lazy_static! {
static ref SECP: secp256k1::Secp256k1<secp256k1::All> = secp256k1::Secp256k1::new();
@@ -120,26 +121,46 @@ fn sbtc<F: Into<f64>>(btc: F) -> SignedAmount {
}

fn get_rpc_urls() -> (Option<String>, Option<String>) {
let wallet_node_url = std::env::var("WALLET_NODE_RPC_URL").ok().filter(|s| !s.is_empty());

let evo_node_rpc_url = std::env::var("EVO_NODE_RPC_URL").ok().filter(|s| !s.is_empty());
// Prefer explicit per-node URLs; fall back to a generic RPC_URL for both
let generic_url = std::env::var("RPC_URL").ok().filter(|s| !s.is_empty());
let wallet_node_url = std::env::var("WALLET_NODE_RPC_URL")
.ok()
.filter(|s| !s.is_empty())
.or_else(|| generic_url.clone());
let evo_node_rpc_url = std::env::var("EVO_NODE_RPC_URL")
.ok()
.filter(|s| !s.is_empty())
.or_else(|| generic_url.clone());

(wallet_node_url, evo_node_rpc_url)
}

fn get_auth() -> (Auth, Auth) {
// Prefer explicit per-node cookie; fall back to generic RPC_COOKIE
let wallet_node_auth = std::env::var("WALLET_NODE_RPC_COOKIE")
.ok()
.filter(|s| !s.is_empty())
.map(|cookie| Auth::CookieFile(cookie.into()))
.unwrap_or_else(|| {
// Prefer per-node user/pass; fall back to generic RPC_USER/PASS
std::env::var("WALLET_NODE_RPC_USER")
.or_else(|_| std::env::var("RPC_USER"))
.ok()
.filter(|s| !s.is_empty())
.map(|user| {
Auth::UserPass(user, std::env::var("WALLET_NODE_RPC_PASS").unwrap_or_default())
let pass = std::env::var("WALLET_NODE_RPC_PASS")
.or_else(|_| std::env::var("RPC_PASS"))
.unwrap_or_default();
Auth::UserPass(user, pass)
})
.unwrap_or_else(|| {
// Generic cookie as last resort
std::env::var("RPC_COOKIE")
.ok()
.filter(|s| !s.is_empty())
.map(|cookie| Auth::CookieFile(cookie.into()))
.unwrap_or(Auth::None)
})
.unwrap_or(Auth::None)
});

let evo_node_auth = std::env::var("EVO_NODE_RPC_COOKIE")
@@ -148,12 +169,22 @@ fn get_auth() -> (Auth, Auth) {
.map(|cookie| Auth::CookieFile(cookie.into()))
.unwrap_or_else(|| {
std::env::var("EVO_NODE_RPC_USER")
.or_else(|_| std::env::var("RPC_USER"))
.ok()
.filter(|s| !s.is_empty())
.map(|user| {
Auth::UserPass(user, std::env::var("EVO_NODE_RPC_PASS").unwrap_or_default())
let pass = std::env::var("EVO_NODE_RPC_PASS")
.or_else(|_| std::env::var("RPC_PASS"))
.unwrap_or_default();
Auth::UserPass(user, pass)
})
.unwrap_or_else(|| {
std::env::var("RPC_COOKIE")
.ok()
.filter(|s| !s.is_empty())
.map(|cookie| Auth::CookieFile(cookie.into()))
.unwrap_or(Auth::None)
})
.unwrap_or(Auth::None)
});

(wallet_node_auth, evo_node_auth)
@@ -189,7 +220,8 @@ fn main() {

let faucet_rpc_url = format!("{}/wallet/{}", wallet_node_rpc_url, FAUCET_WALLET_NAME);
let wallet_rpc_url = format!("{}/wallet/{}", wallet_node_rpc_url, TEST_WALLET_NAME);
let evo_rpc_url = format!("{}/wallet/{}", evo_node_rpc_url, TEST_WALLET_NAME);
// Evo/masternode RPCs are non-wallet; use base RPC URL
let evo_rpc_url = evo_node_rpc_url.clone();

let faucet_client = Client::new(&faucet_rpc_url, wallet_node_auth.clone().clone()).unwrap();
let wallet_client = Client::new(&wallet_rpc_url, wallet_node_auth).unwrap();
@@ -247,7 +279,17 @@ fn main() {
trace!(target: "integration_test", "Funded wallet \"{}\". Total balance: {}", TEST_WALLET_NAME, balance);
faucet_client.generate_to_address(8, &test_wallet_address).unwrap();
test_wallet_node_endpoints(&wallet_client);
test_evo_node_endpoints(&evo_client, &wallet_client);

// Gate evo/masternode tests behind env, as they require a proper evo-enabled setup.
let run_evo = std::env::var("RUN_EVO_TESTS")
.ok()
.map(|v| v.eq_ignore_ascii_case("1") || v.eq_ignore_ascii_case("true"))
.unwrap_or(false);
if run_evo {
test_evo_node_endpoints(&evo_client, &wallet_client);
} else {
trace!(target: "integration_test", "Skipping evo/masternode RPC tests (set RUN_EVO_TESTS=true to enable)");
}

// //TODO import_multi(
// //TODO verify_message(
@@ -272,7 +314,14 @@ fn test_wallet_node_endpoints(wallet_client: &Client) {
// test_get_balance_generate_to_address(wallet_client);
test_get_balances_generate_to_address(wallet_client);
test_get_best_block_hash(wallet_client);
test_get_best_chain_lock(wallet_client);
// ChainLocks depend on LLMQ; run only when evo tests are enabled
let run_evo = std::env::var("RUN_EVO_TESTS")
.ok()
.map(|v| v.eq_ignore_ascii_case("1") || v.eq_ignore_ascii_case("true"))
.unwrap_or(false);
if run_evo {
test_get_best_chain_lock(wallet_client);
}
test_get_block_count(wallet_client);
test_get_block_hash(wallet_client);
// TODO(dashcore): - fails to parse block
@@ -735,7 +784,46 @@ fn test_sign_raw_transaction_with_send_raw_transaction(cl: &Client) {
minimum_amount: Some(btc(2)),
..Default::default()
};
let unspent = cl.list_unspent(Some(6), None, None, None, Some(options)).unwrap();
// Ensure we have a confirmed, sufficiently large UTXO owned by this wallet.
// 1) Create a fresh funding output to a new wallet address, with fallbacks if balance is tight.
let fund_addr = cl.get_new_address(None).unwrap().require_network(*NET).unwrap();
let mut funded_amount_btc: Option<f64> = None;
for amt in [3.0_f64, 1.0_f64, 0.5_f64] {
match cl.send_to_address(
&fund_addr,
btc(amt),
None,
None,
None,
None,
None,
None,
None,
None,
) {
Ok(_) => {
funded_amount_btc = Some(amt);
break;
}
Err(dashcore_rpc::Error::JsonRpc(dashcore_rpc::jsonrpc::error::Error::Rpc(e)))
if e.code == -6 && e.message.contains("Insufficient funds") =>
{
continue;
}
Err(e) => panic!("Unexpected error funding test UTXO: {:?}", e),
}
}
let funded_amount_btc =
funded_amount_btc.expect("wallet has insufficient balance even for 0.5 DASH");
// 2) Mine 6 blocks to confirm all pending transactions (not coinbases).
let mine_addr = cl.get_new_address(None).unwrap().require_network(*NET).unwrap();
let _ = cl.generate_to_address(6, &mine_addr).unwrap();
// 3) Select a confirmed UTXO with at least the funded amount (the vout to fund_addr equals the send amount).
let options = json::ListUnspentQueryOptions {
minimum_amount: Some(btc(funded_amount_btc)),
..Default::default()
};
let unspent = cl.list_unspent(Some(6), None, Some(&[&fund_addr]), None, Some(options)).unwrap();
let unspent = unspent.into_iter().next().unwrap();

let tx = Transaction {
2 changes: 1 addition & 1 deletion rpc-json/Cargo.toml
@@ -26,6 +26,6 @@ serde_repr = "0.1"
hex = { version="0.4", features=["serde"]}

key-wallet = { path = "../key-wallet", features=["serde"] }
dashcore = { path = "../dash", features=["std", "secp-recovery", "rand-std", "signer", "serde"], default-features = false }
dashcore = { path = "../dash", features=["std", "secp-recovery", "rand-std", "signer", "serde", "core-block-hash-use-x11"], default-features = false }

bincode = { version = "=2.0.0-rc.3", features = ["serde"] }
15 changes: 5 additions & 10 deletions rpc-json/src/lib.rs
@@ -83,16 +83,11 @@ pub struct GetNetworkInfoResult {
#[serde(rename = "networkactive")]
pub network_active: bool,
pub connections: usize,
#[serde(rename = "inboundconnections")]
pub inbound_connections: usize,
#[serde(rename = "outboundconnections")]
pub outbound_connections: usize,
#[serde(rename = "mnconnections")]
pub mn_connections: usize,
#[serde(rename = "inboundmnconnections")]
pub inbound_mn_connections: usize,
#[serde(rename = "outboundmnconnections")]
pub outbound_mn_connections: usize,
pub connections_in: usize,
pub connections_out: usize,
pub connections_mn: usize,
pub connections_mn_in: usize,
pub connections_mn_out: usize,
Comment on lines +86 to +90
🛠️ Refactor suggestion

Preserve back-compat in GetNetworkInfoResult: add serde aliases + defaults to avoid deserialization breakage across Core versions

Nodes that still return the legacy keys (e.g., inboundconnections/outboundconnections/mnconnections/inboundmnconnections/outboundmnconnections) will fail to deserialize since these fields are required and not optional. Add aliases and defaults so both old and new schemas work.

Apply this diff:

     pub connections: usize,
-    pub connections_in: usize,
-    pub connections_out: usize,
-    pub connections_mn: usize,
-    pub connections_mn_in: usize,
-    pub connections_mn_out: usize,
+    #[serde(default, alias = "inboundconnections")]
+    pub connections_in: usize,
+    #[serde(default, alias = "outboundconnections")]
+    pub connections_out: usize,
+    #[serde(default, alias = "mnconnections")]
+    pub connections_mn: usize,
+    #[serde(default, alias = "inboundmnconnections")]
+    pub connections_mn_in: usize,
+    #[serde(default, alias = "outboundmnconnections")]
+    pub connections_mn_out: usize,

Optionally, I can add a small serde test to verify both old and new JSON shapes deserialize correctly.
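A minimal sketch of such a test follows. It is only illustrative: it uses a stripped-down stand-in struct named ConnectionCounts rather than the full GetNetworkInfoResult, assumes serde and serde_json are available as dependencies, and mirrors the new field names from this PR plus the legacy aliases proposed above.

use serde::Deserialize;

// Stand-in for the connection-count portion of GetNetworkInfoResult,
// carrying the proposed `alias` + `default` attributes.
#[derive(Debug, Deserialize)]
struct ConnectionCounts {
    connections: usize,
    #[serde(default, alias = "inboundconnections")]
    connections_in: usize,
    #[serde(default, alias = "outboundconnections")]
    connections_out: usize,
    #[serde(default, alias = "mnconnections")]
    connections_mn: usize,
    #[serde(default, alias = "inboundmnconnections")]
    connections_mn_in: usize,
    #[serde(default, alias = "outboundmnconnections")]
    connections_mn_out: usize,
}

#[test]
fn connection_counts_accept_old_and_new_keys() {
    // New shape: the snake_case names used by this PR's struct fields.
    let new_json = r#"{"connections":8,"connections_in":3,"connections_out":5,
        "connections_mn":2,"connections_mn_in":1,"connections_mn_out":1}"#;
    // Legacy shape: old key names; any key missing entirely falls back to 0 via `default`.
    let old_json = r#"{"connections":8,"inboundconnections":3,"outboundconnections":5,
        "mnconnections":2,"inboundmnconnections":1,"outboundmnconnections":1}"#;

    let new: ConnectionCounts = serde_json::from_str(new_json).unwrap();
    let old: ConnectionCounts = serde_json::from_str(old_json).unwrap();

    assert_eq!(new.connections_in, old.connections_in);
    assert_eq!(new.connections_mn_out, old.connections_mn_out);
}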

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
pub connections: usize,
#[serde(default, alias = "inboundconnections")]
pub connections_in: usize,
#[serde(default, alias = "outboundconnections")]
pub connections_out: usize,
#[serde(default, alias = "mnconnections")]
pub connections_mn: usize,
#[serde(default, alias = "inboundmnconnections")]
pub connections_mn_in: usize,
#[serde(default, alias = "outboundmnconnections")]
pub connections_mn_out: usize,
🤖 Prompt for AI Agents
In rpc-json/src/lib.rs around lines 86 to 90, the GetNetworkInfoResult struct
currently requires new field names and will fail to deserialize JSON from older
Core versions that use legacy keys; add serde aliases for each field mapping the
legacy keys to the new names and add serde(default) so missing legacy or new
keys default to 0. Concretely, annotate connections_in with #[serde(alias =
"inboundconnections", default)], connections_out with #[serde(alias =
"outboundconnections", default)], connections_mn with #[serde(alias =
"mnconnections", default)], connections_mn_in with #[serde(alias =
"inboundmnconnections", default)], and connections_mn_out with #[serde(alias =
"outboundmnconnections", default)] so both old and new JSON shapes deserialize
successfully.

#[serde(rename = "socketevents")]
pub socket_events: String,
pub networks: Vec<GetNetworkInfoResultNetwork>,