
Migrated src/screens/SubTags from Jest to Vitest #2739

Conversation

@PratapRathi (Contributor) commented Dec 23, 2024

What kind of change does this PR introduce?

This PR migrates the test cases in src/screens/SubTags from Jest to Vitest, ensuring compatibility with Vitest and maintaining 100% test coverage.

Issue Number:
Fixes #2570

Did you add tests for your changes?

Yes

Snapshots/Videos:

[Screenshot: 2024-12-23 at 23:45:36]

If relevant, did you update the documentation?

N/A

Summary

Does this PR introduce a breaking change?

No

Other information

N/A

Have you read the contributing guide?

Yes

Summary by CodeRabbit

  • Tests

    • Transitioned testing framework from Jest to Vitest for improved compatibility.
    • Updated test cases to follow BDD-style syntax.
    • Enhanced mocking for external libraries used in tests.
  • Chores

    • Improved comments and formatting in the SubTags component to enhance clarity for coverage reporting.

coderabbitai bot (Contributor) commented Dec 23, 2024

Walkthrough

This pull request focuses on migrating the test suite for the SubTags component from Jest to Vitest. The changes involve updating import statements, mocking methods, and test syntax to align with Vitest's framework. The core functionality of the tests remains unchanged, ensuring that the SubTags component continues to be thoroughly tested with the new testing framework.

Changes

  • src/screens/SubTags/SubTags.spec.tsx: replaced Jest imports with Vitest imports; updated mocking methods for react-toastify and react-router-dom; converted test() calls to it() calls; added the Vitest-specific import for jest-dom matchers.
  • src/screens/SubTags/SubTags.tsx: updated comments with the @preserve directive for code coverage.
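
As a hedged illustration, a minimal sketch of what this kind of migration typically looks like in a spec file (the mock shapes and the test body are illustrative assumptions, not the exact diff from this PR):

```tsx
// Sketch only: illustrative Jest -> Vitest migration of a spec file.
import { vi, it, expect } from 'vitest';
import '@testing-library/jest-dom/vitest'; // replaces the Jest-specific jest-dom setup

// Jest's `jest.mock('react-toastify')` becomes:
vi.mock('react-toastify', () => ({
  toast: { success: vi.fn(), error: vi.fn() },
}));

// Jest's `test(...)` blocks become `it(...)`:
it('renders the SubTags screen', () => {
  // ...same test body as before; only the framework APIs changed
  expect(true).toBe(true);
});
```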

Assessment against linked issues

  • Replace Jest-specific functions and mocks with Vitest equivalents [#2570]
  • Ensure all tests pass using npm run test:vitest [#2570] (requires a full test suite run to confirm)
  • Maintain 100% test coverage [#2570] (requires coverage report verification)

Suggested reviewers

  • palisadoes

Poem

🐰 Hopping through the code with glee,
Vitest replaces Jest, can't you see?
SubTags tests now shine so bright,
Migration done with rabbit might!
Testing frameworks dance and sway 🧪


📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7b9d0e1 and 00f2067.

📒 Files selected for processing (2)
  • src/screens/SubTags/SubTags.spec.tsx (14 hunks)
  • src/screens/SubTags/SubTags.tsx (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/screens/SubTags/SubTags.tsx
  • src/screens/SubTags/SubTags.spec.tsx



Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your test coverage must exceed the repository's target threshold.
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers, contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrators of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. person who verifies whether matching issues filed by others should be closed
  4. person who gives general guidance in fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Do not contact or @-mention the person who assigned the reviewers unless they ask for your input.
  2. Read the CONTRIBUTING.md file.

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
src/screens/SubTags/SubTags.spec.tsx (2)

118-118: Clear additional mocks if needed.

vi.clearAllMocks() is a solid approach, but if you rely on timers or user-event stubs, consider also clearing timers or restoring modules for thorough cleanup.
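
For instance, a fuller cleanup hook might look like the sketch below (whether fake timers or spies are actually in play depends on the suite, so this is an assumption):

```tsx
import { vi, afterEach } from 'vitest';

afterEach(() => {
  vi.clearAllMocks();    // clear recorded calls and results on every mock
  vi.restoreAllMocks();  // restore original implementations behind any spies
  vi.useRealTimers();    // only relevant if a test switched to fake timers
});
```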


122-122: Test naming consistency.

Using “it” blocks is consistent with Vitest style. Ensure test descriptions are consistently phrased to maintain clarity in test reports.
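
As an illustration, consistently phrased descriptions might read like this (hypothetical descriptions, not the actual ones in the spec):

```tsx
import { it } from 'vitest';

// behaviour-oriented phrasing kept uniform across the suite
it('renders an error message when the sub tags query fails', () => {
  /* ... */
});
it('navigates to the manage tag screen when the manage button is clicked', () => {
  /* ... */
});
```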

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86d59c2 and 7b9d0e1.

📒 Files selected for processing (2)
  • src/screens/SubTags/SubTags.spec.tsx (14 hunks)
  • src/screens/SubTags/SubTags.tsx (5 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/screens/SubTags/SubTags.tsx
🔇 Additional comments (14)
src/screens/SubTags/SubTags.spec.tsx (14)

24-25: Nice Migration to Vitest Imports.

You've correctly replaced Jest imports with Vitest equivalents. This makes the test suite consistent with Vitest APIs.


132-132: Good coverage of error rendering scenario.

No issues found. This test ensures correct fallback when the query fails.


142-142: Modal open/close test is robust.

This test thoroughly verifies modal behavior through clicks and waiting for elements to unmount. Nicely done.


164-164: Manage Tag navigation is validated.

The flow verifying navigation to “manage tag” is straightforward. Good approach using screen.getByTestId.


179-179: Intra-tag navigation tested properly.

This ensures consistent user interactions for exploring sub tags.


194-194: Breadcrumb click behavior validated.

The test ensures the correct redirection after breadcrumb clicks. Very thorough coverage.


209-209: Navigation back to all tags is confirmed.

This meets user expectation for returning to the main tags screen.


224-224: “Manage Current Tag” button is effectively tested.

Ensures that the user can truly navigate to manage the currently selected tag.


239-239: Search functionality is well tested.

Verifying the count of results ensures accurate data flow for the search feature.


260-260: Sorting logic coverage is strong.

This test checks both ascending and descending states of tag ordering, ensuring robust coverage.


346-346: Add sub tag test ensures success toast.

The toast.success call is properly validated. Confirm that no unexpected error toast appears on success.


48-51: Check the Toastify mock usage.

Using vi.mock for 'react-toastify' is appropriate. Ensure you verify in your tests that toast messages are invoked in the correct sequences (e.g., success before error).
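
A hedged sketch of how such an assertion could look (the mock shape and the rendering setup are assumptions, not the actual spec code):

```tsx
import { vi, it, expect } from 'vitest';
import { toast } from 'react-toastify';

vi.mock('react-toastify', () => ({
  toast: { success: vi.fn(), error: vi.fn() },
}));

it('shows a success toast and no error toast after adding a sub tag', async () => {
  // ...render the component and trigger the add-sub-tag flow here (omitted)...
  expect(toast.success).toHaveBeenCalled();
  expect(toast.error).not.toHaveBeenCalled();
});
```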

✅ Verification successful

Let me search for the test file and its related implementation to verify the toast message sequences.


Toast messages are correctly sequenced and tested

The implementation shows that toast messages are used appropriately and mutually exclusive - success is called only after successful tag creation, while error is called in the error handling block. The test file includes a specific test case that verifies the success toast message is called with the correct translation string after adding a new sub tag.

🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# First, let's look at the test file content
cat src/screens/SubTags/SubTags.spec.tsx

# Then search for toast usage in the implementation
rg "toast\." src/screens/SubTags/SubTags.tsx -A 2

Length of output: 10615


111-112: Confirm partial mocking approach for react-router-dom.

Using vi.importActual inside vi.mock is an advanced approach for partial mocking. Make sure you confirm no essential functionality is inadvertently overridden.
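
A minimal sketch of that partial-mock pattern (the overridden export, useParams, is assumed purely for illustration):

```tsx
import { vi } from 'vitest';

vi.mock('react-router-dom', async () => {
  // keep every real export so routing still behaves normally
  const actual = await vi.importActual('react-router-dom');
  return {
    ...actual,
    // override only what the test needs; useParams is an illustrative example
    useParams: () => ({ orgId: '1', parentTagId: '1' }),
  };
});
```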

✅ Verification successful

Let me gather more information about the mocking approach used in the codebase.


Let me try again with corrected file type flags.


Let me try one more search with different flags to find more context.


Based on the analysis of the test file and other mocking patterns in the codebase, I can now provide a final response:

The partial mocking approach for react-router-dom is safe and consistent with codebase patterns.

The partial mocking implementation is being used correctly and safely because:

  • It's used in a beforeEach block, ensuring clean setup for each test
  • It's properly cleaned up in afterEach with vi.clearAllMocks()
  • The pattern is consistently used across multiple test files in the codebase (e.g., in UserPortal, OrganizationDashboard, etc.)
  • The mock preserves the original functionality while allowing for specific overrides when needed
  • No essential functionality is being overridden as it's importing the actual implementation with vi.importActual
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Search for similar partial mocking patterns with vi.mock and vi.importActual
rg -U "vi\.mock.*\(\s*'react-router-dom'.*\{[\s\S]*?vi\.importActual.*\}" --type ts --type tsx

# Look for other test files that might be using react-router-dom mocks
rg "vi\.mock.*'react-router-dom'" --type ts --type tsx

Length of output: 221


Script:

#!/bin/bash
# Search for similar partial mocking patterns with vi.mock and vi.importActual
rg -U "vi\.mock.*\(\s*'react-router-dom'.*\{[\s\S]*?vi\.importActual.*\}" -t typescript -t tsx

# Look for other test files that might be using react-router-dom mocks
rg "vi\.mock.*'react-router-dom'" -t typescript -t tsx

# Look for the specific test file and its context
rg -A 5 "vi\.mock.*'react-router-dom'" src/screens/SubTags/SubTags.spec.tsx

Length of output: 455


Script:

#!/bin/bash
# Look for the complete test file content to understand the mocking context
cat src/screens/SubTags/SubTags.spec.tsx

# Search for any other vi.mock usage in spec files
find . -name "*.spec.tsx" -o -name "*.test.tsx" | xargs rg "vi\.mock"

Length of output: 21100


317-317: Infinite scroll test replicates user scrolling behavior.

Simulating the scroll event is a good approach. Confirm the appended tags appear correctly after scrolling.
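
A rough sketch of such a check, using the subTagsScrollableDiv and manageTagBtn test IDs mentioned in the verification notes below (the surrounding render setup is omitted and assumed):

```tsx
import { it, expect } from 'vitest';
import { fireEvent, screen, waitFor } from '@testing-library/react';

it('fetches more sub tags with infinite scroll', async () => {
  // ...render <SubTags /> with mocked paginated queries first (omitted here)...

  const scrollableDiv = screen.getByTestId('subTagsScrollableDiv');
  const initialCount = screen.getAllByTestId('manageTagBtn').length;

  // simulate the user scrolling to the bottom of the list
  fireEvent.scroll(scrollableDiv, {
    target: { scrollY: scrollableDiv.scrollHeight },
  });

  // appended tags should increase the rendered count
  await waitFor(() => {
    expect(screen.getAllByTestId('manageTagBtn').length).toBeGreaterThan(initialCount);
  });
});
```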

✅ Verification successful

Let me check the test implementation to verify the infinite scroll behavior.


Let me try again with a more generic file type.


Let me get more context to see the complete test implementation.


Let me get the final part of the test to see the assertions after scrolling.


Let me try one more time with escaped special characters.


Test implementation correctly verifies infinite scroll behavior

The test implementation properly verifies the infinite scroll functionality by:

  1. Getting initial count of tags using manageTagBtn test ID
  2. Simulating scroll to bottom using fireEvent.scroll
  3. Verifying more tags are loaded by checking the final count is greater than initial count
  4. Ensuring UI stability by verifying "Add Child Tag" button remains present
🏁 Scripts executed

The following scripts were executed for the analysis:

Error: No shell scripts found


Script:

#!/bin/bash
# Let's examine the test implementation
rg -A 10 "Fetches more sub tags with infinite scroll" --type tsx

Length of output: 94


Script:

#!/bin/bash
# Let's examine the test implementation
rg -A 10 "Fetches more sub tags with infinite scroll" --type ts

Length of output: 788


Script:

#!/bin/bash
# Let's examine the complete test implementation with more context
rg -B 5 -A 20 "Fetches more sub tags with infinite scroll" --type ts

Length of output: 1716


Script:

#!/bin/bash
# Let's examine the complete test implementation including the final assertions
rg -B 20 -A 5 "target: { scrollY: subTagsScrollableDiv.scrollHeight }" --type ts

Length of output: 229


Script:

#!/bin/bash
# Let's examine the complete test implementation including the final assertions
rg -B 20 -A 10 "scrollHeight" --type ts

Length of output: 16714

coderabbitai bot previously approved these changes Dec 23, 2024
codecov bot commented Dec 23, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 87.06%. Comparing base (df2dd67) to head (00f2067).
Report is 1 commit behind head on develop-postgres.

Additional details and impacted files
@@                  Coverage Diff                  @@
##           develop-postgres    #2739       +/-   ##
=====================================================
+ Coverage             75.01%   87.06%   +12.05%     
=====================================================
  Files                   296      313       +17     
  Lines                  7359     8210      +851     
  Branches               1608     1852      +244     
=====================================================
+ Hits                   5520     7148     +1628     
+ Misses                 1572      881      -691     
+ Partials                267      181       -86     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

palisadoes (Contributor) left a comment

See comment

@@ -22,6 +21,8 @@ import i18n from 'utils/i18nForTest';
import SubTags from './SubTags';
import { MOCKS, MOCKS_ERROR_SUB_TAGS } from './SubTagsMocks';
import { InMemoryCache, type ApolloLink } from '@apollo/client';
import { vi, beforeEach, afterEach, expect, it } from 'vitest';
import '@testing-library/jest-dom/vitest';
Contributor commented:

Why was this imported if we are migrating from jest and it doesn't appear to be used?

Contributor (PR author) replied:

Sure, I am removing this and will push very soon.
Thanks for guiding.

@PratapRathi force-pushed the Migrate-src/screens/SubTags-from-Jest-to-Vitest branch from abdba2a to 00f2067 on December 23, 2024 at 19:36
@palisadoes merged commit cd0b041 into PalisadoesFoundation:develop-postgres on Dec 23, 2024 (13 checks passed).
@PratapRathi deleted the Migrate-src/screens/SubTags-from-Jest-to-Vitest branch on December 23, 2024 at 20:12.