
[prototype] language model v2 spec #2678

Draft · wants to merge 11 commits into base: main
Conversation

lgrammel (Collaborator) commented Aug 15, 2024

Why

  • provider-specific message and message-part extensions
  • provider-specific response/finish metadata
  • support for various data parts, both as input and output
  • eliminate the differences between the regular, object-json, and object-tool modes
  • return IDs & the response model for telemetry
  • return raw response information to aid debugging
  • support models that provide sources (for RAG)

Tasks

  • draft language model v2 spec
  • prototypes
    • Anthropic: prompt caching
    • OpenAI: custom role names
    • Google Generative AI: PDF support
    • Google Vertex: sources
    • OpenAI: log response model & ID
  • regression check
    • streamObject (tool mode)
    • streamObject (json mode)
    • generateObject (tool mode)
    • generateObject (json mode)
  • implementation
    • AI SDK
      • generateText
      • streamText
      • generateObject
      • streamObject
      • streamUI
    • Providers

@lgrammel lgrammel self-assigned this Aug 15, 2024
@lgrammel lgrammel changed the title draft v2 spec [wip] draft v2 spec Aug 15, 2024
@lgrammel lgrammel changed the title [wip] draft v2 spec [prototype] language model v2 spec Aug 15, 2024
```ts
 * functionality that needs to be applied per message,
 * e.g. the OpenAI name parameter.
 */
providerMetadata: Record<string, JSONValue> | undefined;
```
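To make the excerpt concrete, here is a minimal sketch of how such a per-message `providerMetadata` field could look in context. The surrounding type names (`LanguageModelV2Message`, `JSONValue`) and the `openai.name` payload are assumptions for illustration, not the actual spec.

```typescript
// Minimal JSON value type, as typically defined in the AI SDK provider spec.
type JSONValue =
  | null
  | string
  | number
  | boolean
  | { [key: string]: JSONValue }
  | JSONValue[];

// Hypothetical message shape carrying provider-specific metadata.
interface LanguageModelV2Message {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  /**
   * Provider-specific metadata for functionality that needs to be
   * applied per message, e.g. the OpenAI name parameter.
   */
  providerMetadata: Record<string, JSONValue> | undefined;
}

// Metadata is namespaced by provider, so each provider only reads its own key.
const message: LanguageModelV2Message = {
  role: "user",
  content: "Hello!",
  providerMetadata: { openai: { name: "alice" } },
};
```

Namespacing by provider name keeps the core types provider-agnostic: a provider that does not recognize its key simply ignores the metadata.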

I am probably missing something, but couldn't you just add this field to the LanguageModelV1 types in a backwards-compatible way, since the property is optional?

lgrammel (Collaborator, Author) replied:

Good point. My preference is to move to a cleaner language model, but given the need to get prompt caching ready sooner rather than later, I'll see if that's an option.
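The backwards-compatible option under discussion can be sketched as follows: because the property is optional, existing V1 call sites that omit it continue to type-check unchanged. The type names and the Anthropic cache-control payload below are illustrative assumptions, not the shipped API.

```typescript
// Existing V1 message shape (simplified assumption for illustration).
interface LanguageModelV1MessageBase {
  role: string;
  content: string;
}

// Backwards-compatible extension: the new field is optional ("?"),
// so code written against the old shape still compiles.
interface LanguageModelV1Message extends LanguageModelV1MessageBase {
  providerMetadata?: Record<string, unknown>;
}

// Old call sites keep working without changes:
const legacy: LanguageModelV1Message = { role: "user", content: "hi" };

// New call sites can opt in, e.g. to prompt caching
// (hypothetical Anthropic-style payload):
const withMeta: LanguageModelV1Message = {
  role: "user",
  content: "hi",
  providerMetadata: { anthropic: { cacheControl: { type: "ephemeral" } } },
};
```

The trade-off named in the reply is that this patches the V1 types rather than cleaning up the model, but it could unblock prompt caching before the V2 spec lands.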

2 participants