
Conversation

neofight78 (Contributor) commented May 10, 2025

Just some cleanup for the engine tests.

  • Tests are no longer skipped, but as they are a little slow I put them behind a build tag
  • Instead of expecting the engine binary to be inside the project folder, the tests just check whether it is on the PATH
  • Tests run against both Stockfish and Lc0
  • Each engine is checked for availability first, and its tests are skipped if it isn't installed (so the suite won't fail when not all engines are present); a sketch of this follows the list
  • Removed the log test, as it was outdated and brittle (engine version and supported options change over time)
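
A minimal sketch of that structure, assuming the engine binaries are named stockfish and lc0 on the PATH; the test and helper names are illustrative, not the exact contents of uci/engine_test.go:

//go:build engine

package uci_test

import (
    "os/exec"
    "testing"
)

// engines lists the UCI engine binaries the tests try to exercise.
var engines = []string{"stockfish", "lc0"}

// isEngineAvailable reports whether the named binary can be found on the PATH.
func isEngineAvailable(name string) bool {
    _, err := exec.LookPath(name)
    return err == nil
}

func TestEngines(t *testing.T) {
    for _, name := range engines {
        t.Run(name, func(t *testing.T) {
            if !isEngineAvailable(name) {
                t.Skipf("%s not found in PATH, skipping", name)
            }
            // ... start the engine via the uci package and run the real assertions ...
        })
    }
}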

Summary by CodeRabbit

  • Refactor

    • Updated test suite to support multiple chess engines and dynamically skip tests if an engine is unavailable.
    • Streamlined test structure and cleaned up unused code for improved clarity.
  • Bug Fixes

    • Adjusted test logic to ensure tests only skip when engines are unavailable.

coderabbitai bot commented May 10, 2025

"""

Walkthrough

The test suite for UCI chess engines was refactored to support multiple engines by dynamically checking for their availability and running tests as subtests per engine. Test cases were updated for consistency, expected move changes, and multipv settings. The logging test and related code were removed, and unused imports were cleaned up.

Changes

File(s): uci/engine_test.go
Change Summary: Refactored tests to support multiple engines, added dynamic engine availability checks, updated test cases with new FEN and expected moves, adjusted multipv settings, removed logging test and related code, cleaned up unused imports and variables.

Poem

In the warren of code, the engines now race,
Stockfish and Lc0 both take their place.
Tests leap and skip if engines can't play,
Old logs are gone—cleared out of the way.
Multipv lines trimmed, moves fresh and new,
The rabbits rejoice—clean code, engines two!
🐰♟️
"""

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
uci/engine_test.go (2)

132-132: Consider using a more descriptive variable name.

Using err2 doesn't provide clear context about what this error represents. Consider using a more descriptive name like runErr or cmdErr.


130-131: Consider adding a test timeout.

The test is running chess games until completion, which could potentially run for a very long time or even forever in some edge cases. Consider adding a test timeout or a maximum number of moves to prevent long-running tests.
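
One hedged way to add such a bound is a wall-clock deadline around the game loop shown later in this review; the two-minute limit below is illustrative, not taken from the PR:

deadline := time.Now().Add(2 * time.Minute) // illustrative safety limit
for game.Outcome() == chess.NoOutcome {
    if time.Now().After(deadline) {
        t.Fatal("game did not finish before the safety deadline")
    }
    // ... run the engine and push its move exactly as in the existing loop ...
}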

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b06df2 and 0e7b3f7.

📒 Files selected for processing (1)
  • uci/engine_test.go (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: vulncheck
🔇 Additional comments (6)
uci/engine_test.go (6)

1-2: Good use of build tags for conditional testing.

Using build tags instead of skipping tests is a more efficient approach, as tagged tests are conditionally compiled rather than compiled and then skipped at runtime.


7-7: Appropriate import addition for engine availability check.

Adding the os/exec package is necessary for the LookPath functionality used to check engine availability.


15-15: Improved test flexibility with multiple engines.

Supporting multiple engines makes the tests more comprehensive and robust.


17-20: Good implementation of engine availability check.

The isEngineAvailable function elegantly handles the engine availability verification using exec.LookPath.


22-57: Test refactoring improves maintainability and coverage.

The test has been well-refactored to support multiple engines with proper subtest structure and appropriate skipping when engines aren't available.


59-105: Consistent test structure with appropriate assertions.

This test follows the same good pattern as the previous test, with appropriate expectations for multipv results.

coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
uci/engine_test.go (1)

110-144: Test logic fixed to be outside the skip condition.

The test code has been moved outside the skip condition, addressing the issue raised in the previous review. The test now correctly skips when engines aren't available and runs the test logic only when they are.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0e7b3f7 and 1037f5f.

📒 Files selected for processing (1)
  • uci/engine_test.go (1 hunks)
🔇 Additional comments (10)
uci/engine_test.go (10)

1-2: Good approach using build tags for slow engine tests.

Using the //go:build engine tag allows developers to choose when to run these integration tests, which is great for CI performance. This is a good improvement from the previous approach.
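
For reference, with this tag the suite is opt-in: a local run would look something like go test -tags engine ./uci/... (package path assumed from the file under review), while a plain go test excludes these files at compile time.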


7-7: Import change appropriate for new functionality.

Adding the os/exec import is necessary for the new exec.LookPath functionality used to check engine availability.


15-15: Good approach for supporting multiple engines.

Using a slice of engine names makes it easy to add or remove engines in the future without modifying the test logic.


17-20: Well-implemented engine availability check.

The isEngineAvailable helper function correctly uses exec.LookPath to check if the engine is available in the system PATH, making the tests robust against missing dependencies.


23-29: Tests now properly run as subtests for each engine.

The test now uses Go's subtest functionality to run for each configured engine, with appropriate skipping when an engine is not available. This is a good improvement for test organization and clarity.


24-24: Test position updated for better engine compatibility.

The FEN position used for testing has been updated, which is likely more appropriate for testing with both Stockfish and Lc0 engines.


42-47: Updated MultiPV setting for better test stability.

The MultiPV option is now set to 2 (previously 3), which is adequate for the test cases and likely more stable across different engines.


49-54: Expected move has been updated to match the new position.

The test now expects "Ne5" as the best move which corresponds to the updated FEN position. This makes the test more accurate for the multiple engine setup.


88-90: Test expectation updated to match multipv setting.

The test now correctly expects 2 MultiPV lines to match the multipv value of 2 set earlier, ensuring logical consistency in the test.


98-102: Secondary move expectation added for multipv testing.

Testing the second best move "e5" ensures the MultiPV functionality works correctly across different engines.
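
Putting those pieces together, a hedged sketch of how MultiPV might be set and asserted inside one of these subtests; uci.CmdSetOption follows the upstream uci package, while the MultiPV field on the search results is an assumption about this fork's API:

// Ask the engine for two principal variations, then search the test position.
setOpt := uci.CmdSetOption{Name: "MultiPV", Value: "2"}
cmdPos := uci.CmdPosition{Position: game.Position()}
cmdGo := uci.CmdGo{MoveTime: time.Second / 100}
if err := eng.Run(setOpt, cmdPos, cmdGo); err != nil {
    t.Fatal("failed to run command", err)
}

results := eng.SearchResults()
// Assumption: the results expose one entry per principal variation.
if len(results.MultiPV) != 2 {
    t.Fatalf("expected 2 MultiPV lines, got %d", len(results.MultiPV))
}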

Comment on lines +129 to +144
for game.Outcome() == chess.NoOutcome {
    cmdPos := uci.CmdPosition{Position: game.Position()}
    cmdGo := uci.CmdGo{MoveTime: time.Second / 100}
    if err2 := eng.Run(cmdPos, cmdGo); err2 != nil {
        t.Fatal("failed to run command", err2)
    }

    move := eng.SearchResults().BestMove
    pos := game.Position()
    san := notation.Encode(pos, move)

    err = game.PushMove(san, nil)
    if err != nil {
        t.Fatal(fmt.Sprintf("failed to push move %s - %s - %v. Pos: %s", san, move.String(), move.HasTag(chess.Capture), pos.String()), err)
    }
}

💡 Verification agent

❓ Verification inconclusive

Verify game termination in the test loop.

The test loop runs until the game has an outcome, but it might be good to add a safety measure like a maximum number of moves to prevent potential infinite loops if an engine behaves unexpectedly.


🏁 Script executed:

#!/bin/bash
# Verify if there's a risk of infinite loops in this test

# Look for similar patterns in the code that use a game outcome loop with a safety limit
rg "for .* Outcome\(\) == chess.NoOutcome" -A 5 --glob "*.go" | grep -E "count|limit|max"

Length of output: 91


Add a maximum move limit in the test loop to prevent infinite execution

The loop in uci/engine_test.go (around lines 129–144) currently relies solely on game.Outcome() to terminate, which could hang indefinitely if the engine misbehaves. Introduce a simple counter guard:

• File: uci/engine_test.go (lines 129–144)
• Add at the top of the test:

const maxMoves = 200
moves := 0

• Wrap the loop with a move-count check:

 for game.Outcome() == chess.NoOutcome {
+    if moves++; moves > maxMoves {
+        t.Fatalf("test loop exceeded max moves (%d)", maxMoves)
+    }
     cmdPos := uci.CmdPosition{Position: game.Position()}
     cmdGo  := uci.CmdGo{MoveTime: time.Second / 100}
     …  
 }

This ensures the test fails fast if it ever exceeds a reasonable move limit.

CorentinGS merged commit 1ea7088 into CorentinGS:main on May 12, 2025 (7 checks passed).