…-standards-data into fine_tuning # Conflicts: # pyproject.toml
add cpu support if mps is not available
default installation includes optional packages.
jslane-h reviewed Nov 21, 2025
 evaluation_model = OllamaModel(
-    model="llama3:8b",
+    model="llama3.1:8b", base_url="http://host.docker.internal:11434"
Collaborator
Can this be set as an env variable in the dev container? You should be able to set OLLAMA_HOST=http://host.docker.internal:11434 and deepeval will pick it up.
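If we go that route, a minimal sketch of what the Python side could look like (assuming OLLAMA_HOST is exported in the dev container; the fallback URL here is an assumption, not something confirmed in this PR):

```python
import os

# Read the Ollama endpoint from the dev container environment;
# fall back to the conventional local port if it is not set.
base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

# The model could then be constructed without hard-coding the Docker host URL, e.g.:
# evaluation_model = OllamaModel(model="llama3.1:8b", base_url=base_url)
```

This keeps the Docker-specific `host.docker.internal` address out of the code and in the container config where it belongs.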
Collaborator
"I tried to configure the Mac to run mps, but the blocker is Xcode, which needs an Apple ID (I am not 100% sure at this point), so currently Mac users will have to use cpu to run this code." This makes sense given that this runs in a container. Maybe we'd want the model we're evaluating to be hosted on the host operating system, but that seems overly complicated; I think it's easier to just run everything on a Mac with a venv.
Collaborator
Looks good! Just one comment.
The updates are primarily to let dev containers run fine-tuning.
The only remaining requirement to run the evaluation is an installation of Ollama on the local machine (not inside the dev container).
I tried to configure the Mac to run mps, but the blocker is Xcode, which needs an Apple ID (I am not 100% sure at this point), so currently Mac users will have to use cpu to run this code.
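For the cpu fallback, one way to sketch the device selection (this helper is illustrative, not part of the PR; it assumes PyTorch's standard availability checks and treats a missing torch install as cpu-only):

```python
def pick_device() -> str:
    """Prefer mps on Apple Silicon, then cuda, falling back to cpu."""
    try:
        import torch

        # mps requires a working Metal toolchain (the Xcode blocker above);
        # is_available() returns False when it cannot be used.
        if torch.backends.mps.is_available():
            return "mps"
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        # torch not installed (e.g. a minimal container) -> cpu
        pass
    return "cpu"
```

With this, Mac users whose mps setup is blocked silently land on cpu instead of erroring out.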
For the Ollama connection, I updated the dev container so the code inside the container calls the local Ollama instance through its HTTP API.