
[Fix] Unified LLM Whisperer adapters #144

Open · wants to merge 19 commits into base: main

Conversation

jagadeeswaran-zipstack
Contributor

What

  • Remove the standalone LLMWhisperer v1 adapter and expose v1 as an option within the v2 adapter. Versions v1 and v2 should be selectable from a dropdown, with v2 as the default.
    ...
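One way to surface the version dropdown is an enum with a default in the adapter's settings JSON schema. A minimal sketch; the field name and titles here are illustrative, not the adapter's actual schema:

```json
{
  "version": {
    "type": "string",
    "title": "LLMWhisperer Version",
    "enum": ["v1", "v2"],
    "default": "v2"
  }
}
```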

Why

  • To provide a better UI/UX experience
    ...

How

...

Relevant Docs

Related Issues or PRs

Dependencies Versions / Env Variables

Notes on Testing

...

Screenshots

image

...

Checklist

I have read and understood the Contribution Guidelines.

@jagadeeswaran-zipstack jagadeeswaran-zipstack changed the title [Fix] Unifyied LLM Whispeper adapters [Fix] Unified LLM Whispeper adapters Jan 9, 2025
Contributor

@chandrasekharan-zipstack chandrasekharan-zipstack left a comment


@jagadeeswaran-zipstack

  1. Please consult with @gaya3-zipstack and update the version in __init__.py
  2. Please address the ask of separating the env variables
  3. What about removing the other v2 sub-package?

Contributor

@chandrasekharan-zipstack chandrasekharan-zipstack left a comment


LGTM. Left a nit comment about the documentation.

Contributor

@muhammad-ali-e muhammad-ali-e left a comment


Please incorporate the latest changes to the llm_whisperer (v1 and v2) adapters into this PR.

@muhammad-ali-e
Contributor

@jagadeeswaran-zipstack
NIT: To ensure a clean codebase with clear separation between versions—and to make it easy to deprecate and remove older versions without affecting newer ones—we can follow this approach:

  1. Define a Common Base Class
    Introduce a BaseLLMWhisperer class to encapsulate shared functionality (e.g., retrieving a JSON schema). If no common logic is needed, we can skip this base class to keep versions fully independent.

  2. Implement Versioned Classes (LLMWhispererV1 and LLMWhispererV2)
    Each version will inherit from X2TextAdapter and override get_id() and get_name(), ensuring minimal dependencies between versions. This separation allows V1 to be removed easily without impacting V2.

  3. Create a Proxy Class (LLMWhispererProxy)
    This factory will handle version selection dynamically, ensuring that the appropriate implementation is used based on the requested version, further reinforcing modularity.

from typing import Any, Dict


class LLMWhispererProxy:
    _versions = {
        "v1": LLMWhispererV1,
        "v2": LLMWhispererV2,
    }

    def __init__(self, settings: Dict[str, Any]):
        self.settings = settings
        self.version = settings.get("version", "v1")
        if self.version not in self._versions:
            raise ValueError(f"Unsupported LLMWhisperer version: {self.version}")
        self.adapter = self._versions[self.version](settings)

    def get_id(self) -> str:
        return self.adapter.get_id()

    def get_name(self) -> str:
        return self.adapter.get_name()

    # ... delegate the remaining adapter methods similarly


metadata = {
    "name": "LLMWhisperer",
    "version": "...",
    "adapter": LLMWhispererProxy,
    "description": "LLMWhisperer X2Text adapter supporting multiple versions",
    "is_active": True,
}
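A self-contained sketch of how the proxy would behave. The V1/V2 classes below are hypothetical stand-ins for the real adapters (which inherit from X2TextAdapter), and the v2 default reflects the PR's goal rather than the snippet above:

```python
from typing import Any, Dict


# Hypothetical stand-ins for the real versioned adapters.
class LLMWhispererV1:
    def __init__(self, settings: Dict[str, Any]):
        self.settings = settings

    def get_name(self) -> str:
        return "LLMWhisperer v1"


class LLMWhispererV2:
    def __init__(self, settings: Dict[str, Any]):
        self.settings = settings

    def get_name(self) -> str:
        return "LLMWhisperer v2"


class LLMWhispererProxy:
    _versions = {"v1": LLMWhispererV1, "v2": LLMWhispererV2}

    def __init__(self, settings: Dict[str, Any]):
        # Default to v2, per the PR's intent.
        self.version = settings.get("version", "v2")
        if self.version not in self._versions:
            raise ValueError(f"Unsupported LLMWhisperer version: {self.version}")
        self.adapter = self._versions[self.version](settings)

    def get_name(self) -> str:
        return self.adapter.get_name()


proxy = LLMWhispererProxy({"version": "v1"})
print(proxy.get_name())  # LLMWhisperer v1
```

Callers only ever construct the proxy; the version key in the settings dict picks the concrete adapter, so dropping v1 later only touches the `_versions` map.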

@jagadeeswaran-zipstack jagadeeswaran-zipstack changed the title [Fix] Unified LLM Whispeper adapters [Fix] Unified LLM Whisperer adapters Feb 12, 2025
@jagadeeswaran-zipstack
Contributor Author

> (Quoting @muhammad-ali-e's suggestion above on the base class, versioned classes, and proxy approach.)

Will make these improvements iteratively.

Contributor

@muhammad-ali-e muhammad-ali-e left a comment


@jagadeeswaran-zipstack , please ensure we provide a timeout or wait_timeout for the whisperer-client. This might be handled through the JSON schema, but I think we should set some upper and lower (default) caps on our side. We can implement this later if needed.
cc: @johnyrahul @hari-kuriakose

@jagadeeswaran-zipstack
Contributor Author

> @jagadeeswaran-zipstack, please ensure we provide a timeout or wait_timeout for the whisperer-client …

@muhammad-ali-e We do have a wait timeout with a default value of 300s; refer to:

URL_IN_POST = False
TAG = "default"
TEXT_ONLY = False
WAIT_TIMEOUT = 300
Contributor


Do we allow the user to set this from the screen?

Contributor


Or can we at least take this from an env variable?
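A possible sketch of an env-driven timeout with the upper/lower caps discussed above. The env variable name LLMWHISPERER_WAIT_TIMEOUT and the cap values are assumptions for illustration, not existing config:

```python
import os

# Assumed bounds; only the 300s default matches the adapter's current constant.
MIN_WAIT_TIMEOUT = 30       # lower cap, seconds
MAX_WAIT_TIMEOUT = 900      # upper cap, seconds
DEFAULT_WAIT_TIMEOUT = 300  # current adapter default


def resolve_wait_timeout() -> int:
    """Read the wait timeout from the environment and clamp it to sane bounds."""
    raw = os.environ.get("LLMWHISPERER_WAIT_TIMEOUT", str(DEFAULT_WAIT_TIMEOUT))
    try:
        value = int(raw)
    except ValueError:
        # Fall back to the default on malformed input.
        value = DEFAULT_WAIT_TIMEOUT
    return max(MIN_WAIT_TIMEOUT, min(MAX_WAIT_TIMEOUT, value))
```

This keeps the screen/schema free of the setting while still letting deployments tune it, and guarantees the client never waits an unbounded or nonsensical amount of time.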

5 participants