feat: add unit tests, CHANGELOG, examples, README overhaul #3
Unit tests (116 tests, all passing):
- Vitest setup with StudioBridgeClient mocking
- Tests for prepublish_audit, accessibility_audit, search_project, get_instance_properties, release_diff, and shared utilities
- 61 tests for pure utility functions alone

Documentation:
- README rewritten with badges, tool category tables, and quick start examples
- CHANGELOG.md for v0.1.0 (43 tools, 7 bug fixes, known issues)
- 3 usage examples: prepublish audit, accessibility check, release comparison
Reviewer's Guide

Adds a Vitest-based unit test suite for core shared utilities and shipcheck tools, introduces a CHANGELOG and three example usage docs, configures Vitest in the build tooling, and significantly expands the README with a tool catalog, usage guidance, and testing details.

Sequence diagram for running a prepublish audit via an AI MCP client:

```mermaid
sequenceDiagram
    actor Developer
    participant MCPClient
    participant MCPServer
    participant BridgeServer
    participant StudioPlugin
    participant RobloxStudio
    Developer->>MCPClient: Ask AI
    activate MCPClient
    MCPClient->>MCPServer: tools.call rbx_prepublish_audit
    activate MCPServer
    MCPServer->>BridgeServer: HTTP request\nrbx_prepublish_audit
    activate BridgeServer
    BridgeServer->>StudioPlugin: Long-poll message\nstart prepublish audit
    activate StudioPlugin
    StudioPlugin->>RobloxStudio: Inspect DataModel, GUIs, scripts
    RobloxStudio-->>StudioPlugin: Structural and metadata findings
    StudioPlugin-->>BridgeServer: Aggregated audit results
    deactivate StudioPlugin
    BridgeServer-->>MCPServer: JSON results\n(category scores, issues, recommendations)
    deactivate BridgeServer
    MCPServer-->>MCPClient: Tool response payload
    deactivate MCPServer
    MCPClient-->>Developer: Render categorized report\nwith overall_score and guidance
    deactivate MCPClient
```
Flow diagram for the release comparison and targeted audits workflow:

```mermaid
flowchart TD
    A["Start"] --> B["Save baseline with rbx_release_diff\nmode baseline_saved"]
    B --> C["Make changes in Roblox Studio"]
    C --> D["Run rbx_release_diff with baseline_path\nmode diff"]
    D --> E["Review summary and changes\n(risk_score, risk_level, scripts_changed)"]
    E --> F{"run_targeted_audits enabled?"}
    F -- Yes --> G["Read recommended_audits list"]
    G --> H["Run each recommended tool\n(e.g. rbx_remote_contract_audit,\nrbx_marketplace_compliance_audit,\nrbx_validate_mobile_ui)"]
    F -- No --> I["Manually choose audits to run"]
    H --> J["Fix issues in Studio based on findings"]
    I --> J
    J --> K["Run rbx_release_readiness_gate\n(score-based SHIP / HOLD)"]
    K --> L{Verdict SHIP?}
    L -- Yes --> M["Publish experience (optionally via rbx_publish_place)"]
    L -- No --> N["Address blocking issues and rerun gate"]
    M --> O["End"]
    N --> J
```
Hey - I've found 3 issues, and left some high-level feedback:
- There are several duplicated test helpers like `makeNode`/`makeTree` across different `__tests__` files; consider extracting these into a shared test utility module to reduce repetition and keep fixtures consistent.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- There are several duplicated test helpers like `makeNode`/`makeTree` across different `__tests__` files; consider extracting these into a shared test utility module to reduce repetition and keep fixtures consistent.
## Individual Comments
### Comment 1
<location path="src/__tests__/shared.test.ts" line_range="521" />
<code_context>
+describe("buildPatchOperationsPreview", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Consider tests for patch previews when target paths do not exist
In addition to the existing happy-path coverage, please add cases where `target_path` or `new_parent_path` do not exist in the tree (e.g. delete/update/reparent of a missing node) to confirm the preview output and error handling are well-defined and don’t produce misleading `before`/`after` data.
```suggestion
it("returns an empty preview when deleting a missing node", () => {
const ops: PatchOperation[] = [
{
type: "delete",
target_path: "game.Workspace.MissingPart",
},
];
const preview = buildPatchOperationsPreview(root, ops);
expect(preview).toEqual([]);
});
it("returns an empty preview when updating a missing node", () => {
const ops: PatchOperation[] = [
{
type: "update",
target_path: "game.Workspace.MissingPart",
properties: {
Anchored: false,
},
},
];
const preview = buildPatchOperationsPreview(root, ops);
expect(preview).toEqual([]);
});
it("returns an empty preview when reparenting a missing node", () => {
const ops: PatchOperation[] = [
{
type: "reparent",
target_path: "game.Workspace.MissingPart",
new_parent_path: "game.Workspace",
},
];
const preview = buildPatchOperationsPreview(root, ops);
expect(preview).toEqual([]);
});
it("returns an empty preview when reparent target's new parent does not exist", () => {
const ops: PatchOperation[] = [
{
type: "reparent",
target_path: "game.Workspace.Part",
new_parent_path: "game.Workspace.MissingFolder",
},
];
const preview = buildPatchOperationsPreview(root, ops);
expect(preview).toEqual([]);
});
it("previews a create operation", () => {
```
</issue_to_address>
### Comment 2
<location path="src/__tests__/search-project.test.ts" line_range="57-66" />
<code_context>
+describe("rbx_search_project", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add validation tests for studio_port and response metadata in search-project
Two edge cases look untested:
- Invalid `studio_port` values (e.g. 0 or negative) should be rejected by the schema, as with the other tools.
- The response envelope metadata (`schema_version`, `freshness`, `source.studio_port`) is not asserted; a focused test for these fields would keep behavior consistent and catch regressions in envelope creation.
Suggested implementation:
```typescript
const workspace = makeNode("Workspace", "Workspace", [
makeNode("Map", "Model", [makeNode("BasePart", "Part")]),
]);
const starterGui = makeNode("StarterGui", "StarterGui", [button]);
const serverStorage = makeNode("ServerStorage", "ServerStorage", [script]);
return makeNode("game", "DataModel", [workspace, starterGui, serverStorage]);
}
// ── tests ─────────────────────────────────────────────────────────────────────
describe("rbx_search_project", () => {
it("returns a well-formed response envelope", async () => {
mockGetDataModel.mockResolvedValue(makeSampleTree());
const result = (await executeTool("rbx_search_project", {
query: "Workspace",
search_type: "name",
})) as Record<string, unknown>;
expect(result).toHaveProperty("data");
const data = result["data"] as Record<string, unknown>;
expect(typeof data["total_matches"]).toBe("number");
expect(Array.isArray(data["matches"])).toBe(true);
});
it("rejects invalid studio_port values", async () => {
mockGetDataModel.mockResolvedValue(makeSampleTree());
await expect(
executeTool("rbx_search_project", {
query: "Workspace",
search_type: "name",
studio_port: 0,
}),
).rejects.toThrow();
await expect(
executeTool("rbx_search_project", {
query: "Workspace",
search_type: "name",
studio_port: -1,
}),
).rejects.toThrow();
});
it("includes response envelope metadata with source.studio_port", async () => {
mockGetDataModel.mockResolvedValue(makeSampleTree());
const studioPort = 6500;
const result = (await executeTool("rbx_search_project", {
query: "Workspace",
search_type: "name",
studio_port: studioPort,
})) as Record<string, unknown>;
expect(typeof result["schema_version"]).toBe("string");
expect(result["schema_version"]).not.toHaveLength(0);
expect(typeof result["freshness"]).toBe("string");
// If freshness is constrained to specific values elsewhere, narrow this as needed:
// expect(["fresh", "stale"]).toContain(result["freshness"]);
const source = result["source"] as Record<string, unknown>;
expect(source).toBeDefined();
expect(source).toEqual(
expect.objectContaining({
studio_port: studioPort,
}),
);
});
```
1. If your schema validation for `studio_port` uses a more specific error type or message (e.g. Zod error with a particular `code` or `path`), tighten the `.rejects.toThrow()` expectations to assert on that structure (for example, `.rejects.toMatchObject(...)` or `.rejects.toThrow(/studio_port/)`), consistent with other schema tests in this file.
2. If `freshness` is an enum with known values (e.g. `"fresh"` / `"stale"`), replace the generic string assertion with an explicit membership check, as hinted in the comment.
3. If the envelope shape is slightly different (e.g. metadata nested under `result.meta` instead of at the top level), adjust the property access in the metadata test accordingly to match the existing envelope structure for other tools.
</issue_to_address>
### Comment 3
<location path="src/__tests__/get-instance-properties.test.ts" line_range="38-47" />
<code_context>
+describe("rbx_get_instance_properties", () => {
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for studio_port validation and ping behavior in get-instance-properties
The current tests cover validation and error propagation well. To fully capture expected behavior, consider also adding tests that:
- Reject invalid `studio_port` values (0 or negative), matching behavior in other tools.
- Verify `ping` is invoked before `getProperties`, so connection errors surface consistently and early.
These should be small additions but will help ensure consistent behavior across the suite.
</issue_to_address>
…udio_port validation

- Extract duplicated makeNode/makeTree into shared test-helpers.ts
- Add 4 missing-path tests for buildPatchOperationsPreview
- Add studio_port validation tests for search-project and get-instance-properties
- Add response envelope metadata assertions
- 124 tests, all passing
…e HUD, teams, anticheat

New tools (9 shooter tools total):
- rbx_shooter_weapon_config_sanity — audit weapon values (damage, fire rate, range, DPS)
- rbx_shooter_hitbox_audit — raycast patterns, server-side validation, deprecated API detection
- rbx_shooter_scope_ui_check — scope overlay, crosshair, FOV zoom, ADS patterns
- rbx_shooter_mobile_hud — ammo, health, minimap, kill feed, fire/reload buttons, touch detection
- rbx_shooter_team_infrastructure — Teams service, spawn assignment, team balance
- rbx_shooter_anticheat_surface — speed/raycast/teleport/damage validation, security remotes

All tools reviewed by Codex — heuristics tuned:
- FFA-friendly (teams optional), severity levels adjusted
- Configuration folders + IntValue support for weapon scanning
- Frame elements included in crosshair detection
- Humanoid.Health + TakeDamage for damage validation
- Informational notes instead of false-positive issues
Summary
Test plan
`npx vitest run` — 116 passed, 0 failed

🤖 Generated with Claude Code
Summary by Sourcery
Add comprehensive test coverage, documentation, and examples around core MCP tools and shipcheck workflows, and wire up Vitest in the build tooling.
Build:
Documentation:
Tests: