Description
Is your feature request related to a problem? Please describe.
Yes. The program crashes often, because of literals and other reasons. When restarting, it reloads the cached conversation, which then gets trimmed, so you basically have to start over with your questioning. This is, let's say, costly, especially with GPT-4, which for me is the only model that does it right. I believe MemGPT might be a solution if we could integrate the same principles.
Describe the solution you'd like
Look up MemGPT (or search for it on YouTube) to see what I mean.
I recently discovered MemGPT, a fascinating technology that efficiently enables long-term memory for GPT models. It struck me as a valuable addition to the open-interpreter framework, with the potential to address some significant challenges, especially costly repetitive requests. While open-interpreter already caches conversations, there is still the issue of reaching the maximum context length after a crash, and that limit is especially easy to reach with open-interpreter. This often means repeating questions and providing context again, which can be expensive, particularly when using GPT-4, a model that, in my opinion, performs better than the others I have tested. Most of the others gave poor results, but even those models should improve with long-term memory like MemGPT provides.
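To illustrate the idea (not MemGPT's actual API, and not code from open-interpreter): instead of trimming old messages away when the context limit is reached, older turns could be folded into a compact long-term summary that survives restarts. The `ChatMemory` class and its field names below are hypothetical, just a minimal sketch of that principle.

```python
# Hypothetical sketch of a MemGPT-style rolling memory for a chat loop.
# Names (ChatMemory, summary, recent) are illustrative, not real APIs.

class ChatMemory:
    def __init__(self, max_recent=6):
        self.summary = ""      # long-term memory: compressed older history
        self.recent = []       # short-term memory: verbatim recent messages
        self.max_recent = max_recent

    def add(self, role, content):
        self.recent.append({"role": role, "content": content})
        # When the verbatim window overflows, fold the oldest message
        # into the summary instead of dropping it outright.
        while len(self.recent) > self.max_recent:
            oldest = self.recent.pop(0)
            self.summary += f"\n{oldest['role']}: {oldest['content'][:80]}"

    def context(self):
        # Prompt = compressed long-term summary + recent verbatim turns,
        # so a restart after a crash can resume without re-asking everything.
        messages = []
        if self.summary:
            messages.append({"role": "system",
                             "content": "Conversation so far:" + self.summary})
        return messages + self.recent
```

In a real integration the truncation step would be replaced by an LLM-generated summary and persisted to disk, but the structure, a bounded recent window plus a durable summary, is the core of what I mean.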
Describe alternatives you've considered
No response
Additional context
Improved User Experience: As the AI learns from its memory, it can provide more personalized and context-aware responses. This means that interactions with the AI become more efficient and relevant to users' needs.
Reduced Repetition: With long-term memory, the AI can recall previous parts of the conversation, reducing the need for users to repeat information or context after a crash or interruption.
Enhanced Learning: The AI can accumulate knowledge and insights from past interactions, leading to continuous improvement in its ability to understand and assist users.
Cost Efficiency: Long-term memory can help lower costs by reducing the computational overhead associated with restarting conversations and re-establishing context.
Adaptability: Over time, the AI can adapt to users' preferences and communication styles, making it a more versatile tool for a wide range of tasks.
Greater Reliability: Long-term memory can enhance the robustness and reliability of open-interpreter, making it more dependable for critical applications.