
feat: add unit tests, CHANGELOG, examples, README overhaul #3

Merged
zaferdace merged 7 commits into main from feat/tests-docs-readme on Apr 2, 2026

Conversation

@zaferdace zaferdace (Owner) commented Apr 1, 2026

Summary

  • 116 unit tests (vitest) covering core tools and shared utilities
  • README rewritten with badges, tool tables, improved quick start
  • CHANGELOG.md for v0.1.0 release
  • 3 usage example documents

Test plan

  • npx vitest run — 116 passed, 0 failed
  • All 6 test files compile cleanly

🤖 Generated with Claude Code

Summary by Sourcery

Add comprehensive test coverage, documentation, and examples around core MCP tools and shipcheck workflows, and wire up Vitest in the build tooling.

Build:

  • Add Vitest as a dev dependency and npm scripts for running tests in CI and watch mode, and format package metadata for consistency.

Documentation:

  • Rewrite README with updated positioning, tool category tables, usage prompts, contributing guidelines, and testing status.
  • Add a CHANGELOG for the 0.1.0 release detailing available tools, audits, and known issues.
  • Add example guides for pre-publish audits, accessibility checks, and release comparison workflows.

Tests:

  • Introduce Vitest configuration and multiple unit test suites covering shared utilities, core search and property tools, and shipcheck audits (accessibility, prepublish, release diff).
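As a rough sketch, a vitest.config.ts matching that description might look like the following; only the node environment and globals settings come from the PR, and the include pattern is an assumption based on the test files listed later in this page:

```typescript
// vitest.config.ts: a minimal sketch, not the file from this PR.
// Only environment and globals are taken from the PR description;
// the include pattern is assumed from the listed src/__tests__ files.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "node", // run tests in Node, no DOM emulation
    globals: true,       // describe/it/expect available without imports
    include: ["src/__tests__/**/*.test.ts"],
  },
});
```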

Unit tests (116 tests, all passing):
- vitest setup with StudioBridgeClient mocking
- Tests for prepublish_audit, accessibility_audit, search_project,
  get_instance_properties, release_diff, and shared utilities
- 61 tests for pure utility functions alone

Documentation:
- README rewritten with badges, tool category tables, quick start examples
- CHANGELOG.md for v0.1.0 (43 tools, 7 bug fixes, known issues)
- 3 usage examples: prepublish audit, accessibility check, release comparison
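As an illustration of the StudioBridgeClient mocking mentioned above, a test file might stub the client roughly like this; the module path and exact method names are hypothetical, inferred from the mock names (mockGetDataModel, mockGetProperties, ping) that appear elsewhere in this review:

```typescript
import { vi } from "vitest";

// Hypothetical module path and client surface; adjust both to the real ones.
vi.mock("../studio-bridge-client", () => ({
  StudioBridgeClient: vi.fn().mockImplementation(() => ({
    ping: vi.fn().mockResolvedValue(true), // pretend Studio is reachable
    getDataModel: vi.fn(),                 // stubbed per test with mockResolvedValue
    getProperties: vi.fn(),                // likewise for property queries
  })),
}));
```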
@sourcery-ai sourcery-ai Bot commented Apr 1, 2026

Reviewer's Guide

Adds a Vitest-based unit test suite for core shared utilities and shipcheck tools, introduces a CHANGELOG and three example usage docs, configures Vitest in the build tooling, and significantly expands the README with tool catalog, usage guidance, and testing details.

Sequence diagram for running a prepublish audit via an AI MCP client

```mermaid
sequenceDiagram
  actor Developer
  participant MCPClient
  participant MCPServer
  participant BridgeServer
  participant StudioPlugin
  participant RobloxStudio

  Developer->>MCPClient: Ask AI
  activate MCPClient
  MCPClient->>MCPServer: tools.call rbx_prepublish_audit
  activate MCPServer

  MCPServer->>BridgeServer: HTTP request\nrbx_prepublish_audit
  activate BridgeServer
  BridgeServer->>StudioPlugin: Long-poll message\nstart prepublish audit
  activate StudioPlugin
  StudioPlugin->>RobloxStudio: Inspect DataModel, GUIs, scripts

  RobloxStudio-->>StudioPlugin: Structural and metadata findings
  StudioPlugin-->>BridgeServer: Aggregated audit results
  deactivate StudioPlugin

  BridgeServer-->>MCPServer: JSON results\n(category scores, issues, recommendations)
  deactivate BridgeServer
  MCPServer-->>MCPClient: Tool response payload
  deactivate MCPServer

  MCPClient-->>Developer: Render categorized report\nwith overall_score and guidance
  deactivate MCPClient
```
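For readers wiring this up themselves, here is a hedged sketch of the MCPClient leg using the official TypeScript MCP SDK; the server launch command is a placeholder, and only the tool name rbx_prepublish_audit comes from this PR:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command: substitute however this MCP server is actually started.
const transport = new StdioClientTransport({ command: "npx", args: ["<this-package>"] });
const client = new Client({ name: "example-client", version: "0.0.1" });
await client.connect(transport);

// tools.call -> rbx_prepublish_audit, matching the diagram above.
const result = await client.callTool({ name: "rbx_prepublish_audit", arguments: {} });
console.log(result); // categorized report with overall_score and guidance
```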

Flow diagram for release comparison and targeted audits workflow

```mermaid
flowchart TD
  A["Start"] --> B["Save baseline with rbx_release_diff\nmode baseline_saved"]
  B --> C["Make changes in Roblox Studio"]
  C --> D["Run rbx_release_diff with baseline_path\nmode diff"]
  D --> E["Review summary and changes\n(risk_score, risk_level, scripts_changed)"]

  E --> F{"run_targeted_audits enabled?"}
  F -- Yes --> G["Read recommended_audits list"]
  G --> H["Run each recommended tool\n(e.g. rbx_remote_contract_audit,\nrbx_marketplace_compliance_audit,\nrbx_validate_mobile_ui)"]
  F -- No --> I["Manually choose audits to run"]

  H --> J["Fix issues in Studio based on findings"]
  I --> J

  J --> K["Run rbx_release_readiness_gate\n(score-based SHIP / HOLD)"]
  K --> L{Verdict SHIP?}
  L -- Yes --> M["Publish experience (optionally via rbx_publish_place)"]
  L -- No --> N["Address blocking issues and rerun gate"]
  M --> O["End"]
  N --> J
```
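Continuing the client sketch above, the baseline/diff steps might look roughly like this; the exact mode strings and baseline_path handling are assumptions based on the diagram labels:

```typescript
// Step 1: save a baseline before making changes.
// The mode string is assumed; the diagram shows "baseline_saved" as the resulting state.
await client.callTool({
  name: "rbx_release_diff",
  arguments: { mode: "baseline" },
});

// ...make changes in Roblox Studio...

// Step 2: diff against the saved baseline; the path is illustrative only.
const diff = await client.callTool({
  name: "rbx_release_diff",
  arguments: { mode: "diff", baseline_path: "./baseline.json" },
});
// Per the diagram, the summary includes risk_score, risk_level, and scripts_changed,
// plus a recommended_audits list when run_targeted_audits is enabled.
console.log(diff);
```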

File-Level Changes

Introduce Vitest unit test suite for core utilities and shipcheck tools.

  • Add unit tests for shared utilities such as scoring, path handling, traversal, GUI bounds, and patch previews.
  • Add tests for release diff behavior including baseline save/diff modes, risk scoring, and recommended audits.
  • Add tests for accessibility audit scoring and rules around contrast, touch targets, text scaling, and navigation affordance.
  • Add tests for search project functionality including name/class/script search, scoping, and validation.
  • Add tests for get-instance-properties behavior, argument validation, and error propagation.
  • Add tests for prepublish audit orchestration, category scoring, and Open Cloud integration handling.
  • Add Vitest configuration file with node environment and globals enabled.

Files: src/__tests__/shared.test.ts, src/__tests__/release-diff.test.ts, src/__tests__/accessibility-audit.test.ts, src/__tests__/search-project.test.ts, src/__tests__/get-instance-properties.test.ts, src/__tests__/prepublish-audit.test.ts, vitest.config.ts

Update npm metadata and tooling to support running tests.

  • Add Vitest dependency to devDependencies.
  • Add npm scripts for test and test:watch using Vitest.
  • Reformat package.json fields (files, engines, bugs) to multi-line JSON style for readability.

Files: package.json, package-lock.json

Expand and restructure README with tool catalog, usage, and testing details.

  • Replace static version badges with dynamic npm, license, Node.js, GitHub stars, and MCP badges.
  • Rewrite introduction to emphasize MCP tools, live Studio connection, and high-level capabilities.
  • Add a "Why" section explaining key questions shipcheck helps answer and clarifying limitations.
  • Add rich example prompts and quick start refinements for connecting and running shipcheck.
  • Introduce detailed tool category tables covering core, shipcheck, automation, building, cloud, playtester, and shooter genre tools.
  • Clarify what shipcheck checks, how findings are structured, and provide a sample report section.
  • Update Studio-tested matrix counts and explanation, and add explicit limitations and non-goals sections.
  • Add a Contributing section with workflow and code style expectations.

Files: README.md

Add versioned changelog documenting initial 0.1.0 feature set and known issues.

  • Create CHANGELOG with categorized lists of added tools, fixes, and known limitations for v0.1.0.

Files: CHANGELOG.md

Add example workflow documents for key shipcheck use cases.

  • Add example document walking through release comparison and diff-based recommended audits.
  • Add example document detailing accessibility audit usage, output interpretation, and remediation workflow.
  • Add example document describing prepublish audit categories, expected findings, and suggested run order.

Files: examples/release-comparison.md, examples/accessibility-check.md, examples/prepublish-audit.md


@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 3 issues and left some high-level feedback:

  • There are several duplicated test helpers like makeNode/makeTree across different __tests__ files; consider extracting these into a shared test utility module to reduce repetition and keep fixtures consistent.
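A later commit on this branch does extract these helpers into src/__tests__/test-helpers.ts. A sketch of what that shared module might contain, with the node shape inferred from the makeNode calls in the test excerpts rather than shown verbatim in the PR:

```typescript
// src/__tests__/test-helpers.ts: a sketch of the shared fixture module.
// The TreeNode shape is an assumption based on makeNode(name, className, children?) usage.
export interface TreeNode {
  name: string;
  className: string;
  children: TreeNode[];
}

export function makeNode(name: string, className: string, children: TreeNode[] = []): TreeNode {
  return { name, className, children };
}

// A small DataModel-rooted tree shared across suites.
export function makeTree(): TreeNode {
  return makeNode("game", "DataModel", [
    makeNode("Workspace", "Workspace", [makeNode("Part", "Part")]),
  ]);
}
```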
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- There are several duplicated test helpers like `makeNode`/`makeTree` across different `__tests__` files; consider extracting these into a shared test utility module to reduce repetition and keep fixtures consistent.

## Individual Comments

### Comment 1
<location path="src/__tests__/shared.test.ts" line_range="521" />
<code_context>
+describe("buildPatchOperationsPreview", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Consider tests for patch previews when target paths do not exist

In addition to the existing happy-path coverage, please add cases where `target_path` or `new_parent_path` do not exist in the tree (e.g. delete/update/reparent of a missing node) to confirm the preview output and error handling are well-defined and don’t produce misleading `before`/`after` data.

```suggestion
  it("returns an empty preview when deleting a missing node", () => {
    const ops: PatchOperation[] = [
      {
        type: "delete",
        target_path: "game.Workspace.MissingPart",
      },
    ];

    const preview = buildPatchOperationsPreview(root, ops);

    expect(preview).toEqual([]);
  });

  it("returns an empty preview when updating a missing node", () => {
    const ops: PatchOperation[] = [
      {
        type: "update",
        target_path: "game.Workspace.MissingPart",
        properties: {
          Anchored: false,
        },
      },
    ];

    const preview = buildPatchOperationsPreview(root, ops);

    expect(preview).toEqual([]);
  });

  it("returns an empty preview when reparenting a missing node", () => {
    const ops: PatchOperation[] = [
      {
        type: "reparent",
        target_path: "game.Workspace.MissingPart",
        new_parent_path: "game.Workspace",
      },
    ];

    const preview = buildPatchOperationsPreview(root, ops);

    expect(preview).toEqual([]);
  });

  it("returns an empty preview when reparent target's new parent does not exist", () => {
    const ops: PatchOperation[] = [
      {
        type: "reparent",
        target_path: "game.Workspace.Part",
        new_parent_path: "game.Workspace.MissingFolder",
      },
    ];

    const preview = buildPatchOperationsPreview(root, ops);

    expect(preview).toEqual([]);
  });

  it("previews a create operation", () => {
```
</issue_to_address>

### Comment 2
<location path="src/__tests__/search-project.test.ts" line_range="57-66" />
<code_context>
+describe("rbx_search_project", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add validation tests for studio_port and response metadata in search-project

Two edge cases look untested:
- Invalid `studio_port` values (e.g. 0 or negative) should be rejected by the schema, as with the other tools.
- The response envelope metadata (`schema_version`, `freshness`, `source.studio_port`) is not asserted; a focused test for these fields would keep behavior consistent and catch regressions in envelope creation.

Suggested implementation:

```typescript
  const workspace = makeNode("Workspace", "Workspace", [
    makeNode("Map", "Model", [makeNode("BasePart", "Part")]),
  ]);
  const starterGui = makeNode("StarterGui", "StarterGui", [button]);
  const serverStorage = makeNode("ServerStorage", "ServerStorage", [script]);
  return makeNode("game", "DataModel", [workspace, starterGui, serverStorage]);
}

// ── tests ─────────────────────────────────────────────────────────────────────

describe("rbx_search_project", () => {
  it("returns a well-formed response envelope", async () => {
    mockGetDataModel.mockResolvedValue(makeSampleTree());
    const result = (await executeTool("rbx_search_project", {
      query: "Workspace",
      search_type: "name",
    })) as Record<string, unknown>;
    expect(result).toHaveProperty("data");
    const data = result["data"] as Record<string, unknown>;
    expect(typeof data["total_matches"]).toBe("number");
    expect(Array.isArray(data["matches"])).toBe(true);
  });

  it("rejects invalid studio_port values", async () => {
    mockGetDataModel.mockResolvedValue(makeSampleTree());

    await expect(
      executeTool("rbx_search_project", {
        query: "Workspace",
        search_type: "name",
        studio_port: 0,
      }),
    ).rejects.toThrow();

    await expect(
      executeTool("rbx_search_project", {
        query: "Workspace",
        search_type: "name",
        studio_port: -1,
      }),
    ).rejects.toThrow();
  });

  it("includes response envelope metadata with source.studio_port", async () => {
    mockGetDataModel.mockResolvedValue(makeSampleTree());

    const studioPort = 6500;
    const result = (await executeTool("rbx_search_project", {
      query: "Workspace",
      search_type: "name",
      studio_port: studioPort,
    })) as Record<string, unknown>;

    expect(typeof result["schema_version"]).toBe("string");
    expect(result["schema_version"]).not.toHaveLength(0);

    expect(typeof result["freshness"]).toBe("string");
    // If freshness is constrained to specific values elsewhere, narrow this as needed:
    // expect(["fresh", "stale"]).toContain(result["freshness"]);

    const source = result["source"] as Record<string, unknown>;
    expect(source).toBeDefined();
    expect(source).toEqual(
      expect.objectContaining({
        studio_port: studioPort,
      }),
    );
  });

```

1. If your schema validation for `studio_port` uses a more specific error type or message (e.g. Zod error with a particular `code` or `path`), tighten the `.rejects.toThrow()` expectations to assert on that structure (for example, `.rejects.toMatchObject(...)` or `.rejects.toThrow(/studio_port/)`), consistent with other schema tests in this file.
2. If `freshness` is an enum with known values (e.g. `"fresh"` / `"stale"`), replace the generic string assertion with an explicit membership check, as hinted in the comment.
3. If the envelope shape is slightly different (e.g. metadata nested under `result.meta` instead of at the top level), adjust the property access in the metadata test accordingly to match the existing envelope structure for other tools.
</issue_to_address>

### Comment 3
<location path="src/__tests__/get-instance-properties.test.ts" line_range="38-47" />
<code_context>
+describe("rbx_get_instance_properties", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for studio_port validation and ping behavior in get-instance-properties

The current tests cover validation and error propagation well. To fully capture expected behavior, consider also adding tests that:
- Reject invalid `studio_port` values (0 or negative), matching behavior in other tools.
- Verify `ping` is invoked before `getProperties`, so connection errors surface consistently and early.
These should be small additions but will help ensure consistent behavior across the suite.
</issue_to_address>
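A hedged sketch of the suggested ping-ordering test, assuming the suite exposes a mockPing alongside the existing mockGetProperties; vitest mocks record a global invocationCallOrder that makes the ordering assertion direct:

```typescript
it("pings Studio before fetching properties", async () => {
  // mockPing is assumed to exist next to mockGetProperties in this suite.
  mockPing.mockResolvedValue(true);
  mockGetProperties.mockResolvedValue(makeMockProperties());

  await executeTool("rbx_get_instance_properties", {
    path: "game.Workspace.MyPart",
  });

  // Both mocks were called, and ping came first.
  expect(mockPing).toHaveBeenCalled();
  expect(mockPing.mock.invocationCallOrder[0]).toBeLessThan(
    mockGetProperties.mock.invocationCallOrder[0],
  );
});
```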



zafermg and others added 6 commits on April 2, 2026 at 09:35
…udio_port validation

- Extract duplicated makeNode/makeTree into shared test-helpers.ts
- Add 4 missing-path tests for buildPatchOperationsPreview
- Add studio_port validation tests for search-project and get-instance-properties
- Add response envelope metadata assertions
- 124 tests, all passing
…e HUD, teams, anticheat

New tools (9 shooter tools total):
- rbx_shooter_weapon_config_sanity — audit weapon values (damage, fire rate, range, DPS)
- rbx_shooter_hitbox_audit — raycast patterns, server-side validation, deprecated API detection
- rbx_shooter_scope_ui_check — scope overlay, crosshair, FOV zoom, ADS patterns
- rbx_shooter_mobile_hud — ammo, health, minimap, kill feed, fire/reload buttons, touch detection
- rbx_shooter_team_infrastructure — Teams service, spawn assignment, team balance
- rbx_shooter_anticheat_surface — speed/raycast/teleport/damage validation, security remotes

All tools reviewed by Codex — heuristics tuned:
- FFA-friendly (teams optional), severity levels adjusted
- Configuration folders + IntValue support for weapon scanning
- Frame elements included in crosshair detection
- Humanoid.Health + TakeDamage for damage validation
- Informational notes instead of false-positive issues
@zaferdace zaferdace merged commit ed30db4 into main Apr 2, 2026
3 checks passed
@zaferdace zaferdace deleted the feat/tests-docs-readme branch April 2, 2026 09:42
