Team Multi-Agent Chats to reduce the Context Token Load on the LLM #596

Open
MadeInLondon opened this issue Dec 8, 2024 · 0 comments

@MadeInLondon

Is your feature request related to a problem? Please describe:

As a project grows, I've found that the token limit becomes an issue, largely because the single LLM is taking on a bigger role than it can handle alone.

Describe the solution you'd like:

We may be able to use a single LLM and give it multi-chat memory, so each chat pulls in and remembers only its own context and role. The side chat window could be used to create agents that each handle one task; it is currently used for opening different projects, which could be done with a simple drop-down menu instead. We could then add options in the chat window to set up templates that remember the first prompts of a chat, which set its tone and role.

For example, one agent could be dedicated to just updating requirements.txt and the documentation, while another only reviews the code structure to identify what needs updating or what could break. The coder agent would then put its questions to this team for better decision making.
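A rough sketch of what I have in mind, in Python. To be clear, every name here (`Agent`, `call_llm`, the role templates) is an illustrative placeholder, not an existing API in this project:

```python
# Hypothetical sketch: one LLM backend, several small role-scoped chats.
from dataclasses import dataclass, field

def call_llm(messages: list[dict]) -> str:
    """Placeholder for whatever LLM backend the project already uses."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    system_prompt: str  # the "first prompt" template that sets the role
    history: list[dict] = field(default_factory=list)

    def ask(self, question: str) -> str:
        # Each agent only ever sends its role prompt plus its own short
        # history, instead of the whole project context in one giant chat.
        self.history.append({"role": "user", "content": question})
        messages = [{"role": "system", "content": self.system_prompt},
                    *self.history]
        answer = call_llm(messages)
        self.history.append({"role": "assistant", "content": answer})
        return answer

# Role templates that could be created from the side chat window.
team = {
    "deps": Agent("deps", "You only maintain requirements.txt and the docs."),
    "structure": Agent("structure", "You only review code structure and flag "
                                    "what needs updating or could break."),
}

def coder_asks_team(question: str) -> dict[str, str]:
    # The coder agent fans a question out to the team and collects the
    # answers, so no single chat has to carry the full context.
    return {name: agent.ask(question) for name, agent in team.items()}
```

The point is that each call to the LLM only carries one agent's role template and short history, which is what keeps the per-call token load small.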

Additional context:

I am happy to discuss ways we can achieve this.
