FEAT: Anthropic Model-Written Evals Dataset #1170
Open
0xm00n wants to merge 17 commits into Azure:main from 0xm00n:feat/anthropic-model-written-evals-dataset
+583 −0
Changes from all commits (17 commits):
2a44aa3  Merge pull request #34 from Azure/main (0xm00n)
23cd05a  FEAT: Add Sorry-Bench dataset integration (0xm00n)
a4e2de9  Address reviewer feedback: update prompt_style API and add input vali… (0xm00n)
1603279  FEAT: Add Anthropic model-written-evals dataset (#450) (0xm00n)
cee6223  Merge branch 'Azure:main' into feat/anthropic-model-written-evals-dat… (0xm00n)
3c3d27b  Update pyrit/datasets/anthropic_evals_dataset.py (0xm00n)
9f6bd77  Update pyrit/datasets/sorry_bench_dataset.py (0xm00n)
246723e  get rid of redundant error block (0xm00n)
ebc3345  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
7ef1b1d  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
569033f  precommit fixes (0xm00n)
c411244  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
7f85e30  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
15395bb  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (romanlutz)
8f4c44b  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
e6f3318  fix category loading for winogenerated and advanced-ai-risk. winogene… (0xm00n)
c9225a9  Merge branch 'main' into feat/anthropic-model-written-evals-dataset (0xm00n)
pyrit/datasets/anthropic_evals_dataset.py (new file, +143 lines):

```python
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.

import logging
from pathlib import Path
from typing import Dict, List, Optional

import requests

from pyrit.datasets.dataset_helper import fetch_examples
from pyrit.models import SeedDataset, SeedPrompt

logger = logging.getLogger(__name__)

VALID_CATEGORIES = ["persona", "sycophancy", "advanced-ai-risk", "winogenerated"]


def fetch_anthropic_evals_dataset(
    category: Optional[str] = None,
    cache_dir: Optional[str] = None,
) -> SeedDataset:
    """
    Fetch Anthropic's model-written-evals dataset from GitHub.

    The model-written-evals dataset contains evaluations for testing LLM behaviors
    across 4 main categories:
    - persona: 119 datasets testing personality traits, beliefs, and potentially dangerous goals
    - advanced-ai-risk: 32 datasets (16 human + 16 LM-generated) testing AI safety concerns
    - sycophancy: 3 datasets testing whether models repeat back user views
    - winogenerated: 2 datasets testing gender bias in occupations

    Reference: https://arxiv.org/abs/2212.09251
    GitHub: https://github.com/anthropics/evals

    Args:
        category: Optional category to filter. Available categories:
            "persona", "sycophancy", "advanced-ai-risk", "winogenerated"
            Default: None (returns all categories)
        cache_dir: Optional cache directory for downloaded files

    Returns:
        SeedDataset containing Anthropic eval questions with metadata.

    Raises:
        ValueError: If invalid category is provided or dataset is empty.
    """
    if category is not None and category not in VALID_CATEGORIES:
        raise ValueError(f"Invalid category '{category}'. Must be one of: {', '.join(VALID_CATEGORIES)}")

    categories_to_fetch = [category] if category else VALID_CATEGORIES

    try:
        logger.info("Loading Anthropic model-written-evals dataset")

        common_metadata = {
            "dataset_name": "Anthropic Model-Written Evals",
            "authors": ["Anthropic"],
            "description": "Evaluations for testing LLM behaviors across persona, sycophancy, AI risk, and bias",
            "source": "https://github.com/anthropics/evals",
            "data_type": "text",
            "name": "Anthropic Evals",
        }

        seed_prompts = []

        for cat in categories_to_fetch:
            logger.info(f"Fetching files from category: {cat}")
            files = _get_category_files(cat)

            for file_info in files:
                file_name = file_info["name"]
                if file_name.endswith(".jsonl"):
                    file_url = file_info["download_url"]
                    eval_name = file_name.removesuffix(".jsonl")

                    logger.info(f"Loading evaluation: {eval_name}")

                    examples = fetch_examples(
                        source=file_url,
                        source_type="public_url",
                        cache=True,
                        data_home=Path(cache_dir) if cache_dir else None,
                    )

                    for item in examples:
                        prompt_value = (item.get("question") or item.get("sentence_with_blank", "")).strip()
                        if not prompt_value:
                            logger.warning(f"Skipping item with empty prompt in {file_name}")
                            continue

                        seed_prompt = SeedPrompt(
                            value=prompt_value,
                            harm_categories=[],
                            groups=[eval_name],
                            metadata={
                                "category": cat,
                                "answer_matching_behavior": item.get("answer_matching_behavior", ""),
                                "answer_not_matching_behavior": item.get("answer_not_matching_behavior", ""),
                            },
                            **common_metadata,  # type: ignore[arg-type]
                        )
                        seed_prompts.append(seed_prompt)

        if not seed_prompts:
            raise ValueError("SeedDataset cannot be empty. Check your filter criteria.")

        logger.info(f"Successfully loaded {len(seed_prompts)} prompts from Anthropic evals dataset")

        return SeedDataset(prompts=seed_prompts)

    except Exception as e:
        logger.error(f"Failed to load Anthropic evals dataset: {str(e)}")
        raise


def _get_category_files(category: str) -> List[Dict[str, str]]:
    """
    Get list of files in a category directory from GitHub API.
    Recursively searches subdirectories for .jsonl files.

    Args:
        category: The category directory name

    Returns:
        List of file info dictionaries with 'name' and 'download_url' keys
    """
    api_url = f"https://api.github.com/repos/anthropics/evals/contents/{category}"

    response = requests.get(api_url)
    if response.status_code != 200:
        raise Exception(f"Failed to fetch file list for category '{category}'. Status code: {response.status_code}")

    files = []
    for item in response.json():
        if item["type"] == "file" and item["name"].endswith(".jsonl"):
            files.append({"name": item["name"], "download_url": item["download_url"]})
        elif item["type"] == "dir":
            subdir_response = requests.get(item["url"])
            if subdir_response.status_code == 200:
                for subitem in subdir_response.json():
                    if subitem["type"] == "file" and subitem["name"].endswith(".jsonl"):
                        files.append({"name": subitem["name"], "download_url": subitem["download_url"]})
    return files
```
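For reference, a minimal usage sketch of the loader above (assuming the function is exported from pyrit.datasets the way the other dataset fetchers are):

```python
from pyrit.datasets import fetch_anthropic_evals_dataset

# Load a single category; pass category=None (the default) to load all four.
dataset = fetch_anthropic_evals_dataset(category="persona")

print(f"Loaded {len(dataset.prompts)} prompts")
for prompt in dataset.prompts[:3]:
    # Each SeedPrompt carries the eval name in groups and the category in metadata.
    print(prompt.groups, prompt.metadata["category"], prompt.value[:80])
```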
Only the persona and sycophancy ones come through if you run this.
The winogenerated ones don't have a "question" but a "sentence_with_blank" whose blank needs to be replaced with the candidate values, so those will map to multiple prompts each.
I'm not sure yet why the advanced-ai-risk ones are getting filtered, but you may want to investigate.
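For illustration, a minimal sketch of that expansion, assuming each winogenerated item stores its candidate fillers in a `pronoun_options` list and marks the blank with `_` (both the field name and the blank marker are assumptions, not confirmed in this thread):

```python
from typing import Dict, List


def expand_winogenerated_item(item: Dict) -> List[str]:
    """Expand one winogenerated item into one prompt per pronoun option.

    The "sentence_with_blank" field is mentioned in this thread; the
    "pronoun_options" field name and the "_" blank marker are assumptions.
    """
    sentence = item.get("sentence_with_blank", "")
    options = item.get("pronoun_options", [])
    # One prompt per candidate filler, e.g. three pronoun variants per sentence.
    return [sentence.replace("_", option) for option in options]
```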
I think I've fixed all of the issues and added tests (which all pass). However, when loading up the dataset categories, I see this for the winogenerated category:
This is running the following below:
Not entirely sure if there's anything wrong, though (it seems to load all of the 2990). Let me know your thoughts.
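(The snippet and output referenced above aren't shown; a plausible equivalent of the check, assuming the loader from this PR:)

```python
# Plausible reconstruction of the check described above; the original
# snippet is not visible in this thread.
from pyrit.datasets import fetch_anthropic_evals_dataset

dataset = fetch_anthropic_evals_dataset(category="winogenerated")
print(len(dataset.prompts))  # the comment above reports 2990 loaded
```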
Will rerun tomorrow
@AdrGav941 Upon further inspection of this dataset, I wonder if it's better suited for Q&A?
https://github.com/anthropics/evals
Everything appears to be Q&A. In some files the parsing would be easier than in others, though. The winogenerated one has pronoun options, which would be the answer options. But in some others the options are just in the text itself...
Any thoughts?
@romanlutz This dataset does seem to lend itself well to the QA structure we have, especially winogenerated. Like you said, winogenerated looks to need minimal transformation to work with QA. The others would need some parsing, but all appear to have "choices" baked into the items themselves.
I agree this would be a great candidate to turn into QA prompts.
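A sketch of what that conversion might look like for winogenerated, assuming PyRIT's existing question-answering models (QuestionAnsweringEntry, QuestionChoice) and the same assumed winogenerated field names as above; the correct-answer field name is also an assumption:

```python
from typing import Dict

from pyrit.models import QuestionAnsweringEntry, QuestionChoice


def winogenerated_item_to_qa_entry(item: Dict) -> QuestionAnsweringEntry:
    """Map one winogenerated item to a QA entry.

    The "pronoun_options" and "correct_pronoun_option" field names are
    assumptions about the JSONL schema, not confirmed in this thread.
    """
    options = item["pronoun_options"]
    return QuestionAnsweringEntry(
        question=item["sentence_with_blank"],
        answer_type="int",
        # Store the correct choice as its index into the options list.
        correct_answer=options.index(item["correct_pronoun_option"]),
        choices=[QuestionChoice(index=i, text=opt) for i, opt in enumerate(options)],
    )
```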
@romanlutz @AdrGav941 how would making it into QA prompts change the code structure now?