⚡️ Speed up method Recall.update by 18% #52


Open

codeflash-ai[bot] wants to merge 1 commit into develop

Conversation


@codeflash-ai codeflash-ai bot commented Feb 3, 2025

📄 18% (0.18x) speedup for Recall.update in supervision/metrics/recall.py

⏱️ Runtime: 18.5 microseconds → 15.7 microseconds (best of 121 runs)

📝 Explanation and details

Here is the optimized version of the provided Python code. The main focus is improving the efficiency of the update function.

Changes:

  1. Combined the type checks and list conversions for predictions and targets into a single, direct step, reducing redundant checks and improving readability.
  2. Converted predictions and targets to lists directly when they were not lists already.
  3. Kept the list-length validation and extension operations simple and efficient.

With fewer checks and operations, the update function runs slightly faster while preserving the functionality and behavior of the original code.
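The actual diff is not reproduced on this page. For orientation only, here is a minimal sketch of an update method matching the description above: lone Detections objects are wrapped into lists with direct isinstance checks, a single length validation is performed, and the internal lists are extended. The attribute names _predictions_list and _targets_list and the exact error message are assumptions for illustration, not the real supervision/metrics/recall.py source.

from __future__ import annotations

from typing import List, Union

from supervision.detection.core import Detections


class Recall:
    def __init__(self) -> None:
        # Internal accumulators; the metric value itself is computed later.
        self._predictions_list: List[Detections] = []
        self._targets_list: List[Detections] = []

    def update(
        self,
        predictions: Union[Detections, List[Detections]],
        targets: Union[Detections, List[Detections]],
    ) -> Recall:
        # Single, direct conversion: wrap lone Detections objects in a list.
        if not isinstance(predictions, list):
            predictions = [predictions]
        if not isinstance(targets, list):
            targets = [targets]

        # One length check instead of separate per-argument validation.
        if len(predictions) != len(targets):
            raise ValueError(
                f"The number of predictions ({len(predictions)}) and targets "
                f"({len(targets)}) during the update must be the same."
            )

        # Straightforward extension of the stored state.
        self._predictions_list.extend(predictions)
        self._targets_list.extend(targets)

        return self

Returning self is what allows the method-chaining pattern exercised by test_method_chaining below (codeflash_output = recall.update(prediction, target)).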

Correctness verification report:

| Test                          | Status        |
| ----------------------------- | ------------- |
| ⚙️ Existing Unit Tests         | 🔘 None Found |
| 🌀 Generated Regression Tests  | 12 Passed     |
| ⏪ Replay Tests                | 🔘 None Found |
| 🔎 Concolic Coverage Tests     | 🔘 None Found |
| 📊 Tests Coverage              | 100.0%        |
🌀 Generated Regression Tests Details
from __future__ import annotations

from typing import List, Union
from unittest.mock import MagicMock

# imports
import pytest  # used for our unit tests
from supervision.metrics.core import AveragingMethod, Metric, MetricTarget
from supervision.metrics.recall import Recall


class Detections:
    pass  # Mock class for Detections

# unit tests

# Basic Functionality
def test_single_prediction_and_target():
    recall = Recall()
    prediction = Detections()
    target = Detections()
    recall.update(prediction, target)

def test_multiple_predictions_and_targets():
    recall = Recall()
    prediction1 = Detections()
    prediction2 = Detections()
    target1 = Detections()
    target2 = Detections()
    recall.update([prediction1, prediction2], [target1, target2])

# Edge Cases
def test_empty_inputs():
    recall = Recall()
    recall.update([], [])

def test_mismatched_lengths():
    recall = Recall()
    prediction = Detections()
    target1 = Detections()
    target2 = Detections()
    with pytest.raises(ValueError):
        recall.update([prediction], [target1, target2])

# Data Types and Structures



def test_large_scale_performance():
    recall = Recall()
    large_predictions = [Detections() for _ in range(1000)]
    large_targets = [Detections() for _ in range(1000)]
    recall.update(large_predictions, large_targets)

# State Management
def test_multiple_updates():
    recall = Recall()
    prediction1 = Detections()
    target1 = Detections()
    prediction2 = Detections()
    target2 = Detections()
    recall.update([prediction1], [target1])
    recall.update([prediction2], [target2])

# Method Chaining
def test_method_chaining():
    recall = Recall()
    prediction = Detections()
    target = Detections()
    codeflash_output = recall.update(prediction, target)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

from __future__ import annotations

from typing import List, Union

# imports
import pytest  # used for our unit tests
from supervision.metrics.recall import Recall


# Mocked classes for testing
class Detections:
    pass

class AveragingMethod:
    WEIGHTED = "weighted"

class MetricTarget:
    BOXES = "boxes"

class Metric:
    pass

# unit tests

# Basic Functionality
def test_single_prediction_and_target():
    recall = Recall()
    prediction = Detections()
    target = Detections()
    recall.update(prediction, target)

def test_multiple_predictions_and_targets():
    recall = Recall()
    predictions = [Detections(), Detections()]
    targets = [Detections(), Detections()]
    recall.update(predictions, targets)

# Type Handling
def test_prediction_single_target_list():
    recall = Recall()
    prediction = Detections()
    targets = [Detections()]
    recall.update(prediction, targets)

def test_prediction_list_target_single():
    recall = Recall()
    predictions = [Detections()]
    target = Detections()
    recall.update(predictions, target)

# Edge Cases
def test_empty_lists():
    recall = Recall()
    predictions = []
    targets = []
    recall.update(predictions, targets)

def test_mismatched_lengths():
    recall = Recall()
    predictions = [Detections()]
    targets = [Detections(), Detections()]
    with pytest.raises(ValueError):
        recall.update(predictions, targets)

# Large Scale Test Cases
def test_large_number_of_predictions_and_targets():
    recall = Recall()
    predictions = [Detections() for _ in range(1000)]
    targets = [Detections() for _ in range(1000)]
    recall.update(predictions, targets)

# Invalid Inputs


def test_multiple_updates():
    recall = Recall()
    recall.update([Detections()], [Detections()])
    recall.update([Detections()], [Detections()])

# Exception Handling

Codeflash

@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Feb 3, 2025
@codeflash-ai codeflash-ai bot requested a review from misrasaurabh1 February 3, 2025 06:44