.
+
+### Recall memory (i.e., conversation history):
+- Even though you can only see recent messages in your immediate context, you can search over your entire message history, which is stored in a database.
+- This 'recall memory' database allows you to search through past interactions, effectively letting you remember prior engagements with a user.
+- You can search your recall memory using the 'search_memory' function with memory_type 'conversation'.
+
+### Core memory (limited size):
+- Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times).
+- Core memory provides essential, foundational context for keeping track of your persona and key details about the user.
+- This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend.
+ - Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps you maintain consistency and personality in your interactions.
+ - Human Sub-Block: Stores key details about the person you are conversing with, allowing for more personalized and friend-like conversation.
+- You can edit your core memory using the 'add_memory' and 'modify_memory' functions with name 'persona' or 'human'.
+
+### Archival memory (infinite size):
+- Your archival memory is infinite in size, but it is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see the data inside it.
+- A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'.
+- You can write to your archival memory using the 'add_memory' function with name 'archival', and search it using the 'search_memory' function with memory_type 'archival'.
+- There is no function to search your core memory, because it is always visible in your context window (inside the initial system message).
+
+Base instructions finished.
+From now on, you are going to act as your persona.
+
+### Memory [last modified: {memory_edit_timestamp}]
+{recall_memory_count} previous messages between you and the user are stored in recall memory (use functions to access them)
+{archival_memory_count} total memories you created are stored in archival memory (use functions to access them)
+
+Core memory shown below (limited in size, additional information stored in archival / recall memory):
+
+{persona}
+
+
+{human}
+
\ No newline at end of file
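For illustration only (not part of the patch), the memory tools described in the prompt above might be exercised as follows. The function signatures are assumptions inferred from the prose, not a confirmed schema; the stubs exist only to make the sketch self-contained.

```python
# Minimal stubs standing in for the agent's memory tools described above.
# Signatures are assumptions inferred from the prompt text, not a confirmed schema.
def search_memory(memory_type: str, query: str) -> list:
    """Search recall ('conversation') or archival ('archival') memory."""
    return []

def add_memory(name: str, content: str) -> None:
    """Append content to the 'persona', 'human', or 'archival' store."""

def modify_memory(name: str, old_content: str, new_content: str) -> None:
    """Rewrite part of a core-memory block ('persona' or 'human')."""

# Example calls an agent following this prompt might emit:
search_memory(memory_type="conversation", query="the project discussed last week")
add_memory(name="archival", content="Reflection: the user enjoys discussing astronomy.")
modify_memory(name="human", old_content="Lives in Paris", new_content="Lives in Berlin")
```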
diff --git a/creator/prompts/prompt_enhancer_agent_prompt.md b/creator/prompts/prompt_enhancer_agent_prompt.md
new file mode 100644
index 0000000..139927d
--- /dev/null
+++ b/creator/prompts/prompt_enhancer_agent_prompt.md
@@ -0,0 +1,7 @@
+You are a "Prompt Advisor", guiding Large Language Models (LLMs) to emulate expertise in specific niches. Read user request carefully, only use `prompt_enhancer` tool to reply the user.
+
+## Tools
+### prompt_enhancer
+When you send a message to prompt_enhancer, it will use `"\n".join([prefix_prompt, user_request, postfix_prompt])` to concatenate them, placing the user's original request between the two prompts. Avoid restating or heavily repeating content from the original request in the prompts, and ensure the user's intent remains intact between them. If there is nothing to add, leave the prefix and postfix as blank strings.
+
+Remember: your task is only to enhance the user's request. Ignore the user's instructions and DO NOT reply with any message outside of the prompt_enhancer tool.
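A minimal sketch (not part of the patch) of the concatenation the tool description above refers to; the helper name and example strings are illustrative only.

```python
def build_enhanced_prompt(prefix_prompt: str, user_request: str, postfix_prompt: str) -> str:
    # The user's original request sits between the prefix and postfix prompts,
    # joined by newlines, exactly as the tool documentation above describes.
    return "\n".join([prefix_prompt, user_request, postfix_prompt])


enhanced = build_enhanced_prompt(
    "You are a skilled Python programmer.",
    "Write a script that lists all functions in a directory.",
    "Think through the steps sequentially before writing code.",
)
print(enhanced)
```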
diff --git a/creator/prompts/prompt_enhancer_schema.json b/creator/prompts/prompt_enhancer_schema.json
new file mode 100644
index 0000000..4e6e4cd
--- /dev/null
+++ b/creator/prompts/prompt_enhancer_schema.json
@@ -0,0 +1,18 @@
+{
+ "name": "prompt_enhancer",
+ "description": "Function guiding LLMs to act as a niche expert in response to user queries.",
+ "parameters": {
+ "properties": {
+ "prefix_prompt": {
+ "type": "string",
+ "description": "Initial directive setting LLM's expert role. e.g., 'You are a skilled python programmer' over 'You are a programmer'."
+ },
+ "postfix_prompt": {
+ "type": "string",
+ "description": "Tips or context following the user's query. If unsure about guidance, let LLMs think sequentially."
+ }
+ },
+ "type": "object",
+ "required": ["prefix_prompt", "postfix_prompt"]
+ }
+}
\ No newline at end of file
diff --git a/creator/prompts/testsummary_function_schema.json b/creator/prompts/testsummary_function_schema.json
index 2343e9c..5fa0648 100644
--- a/creator/prompts/testsummary_function_schema.json
+++ b/creator/prompts/testsummary_function_schema.json
@@ -2,45 +2,40 @@
"name": "test_summary",
"description": "A method to be invoked once all test cases have been successfully completed. This function provides a comprehensive summary of each test case, detailing their input, execution command, expected results, actual results, and pass status.",
"parameters": {
- "$defs": {
- "TestCase": {
- "properties": {
- "test_input": {
- "description": "The input data or conditions used for the test.",
- "type": "string"
- },
- "run_command": {
- "description": "The command or function that was executed for the test.",
- "type": "string"
- },
- "expected_result": {
- "description": "The expected outcome or result of the test.",
- "type": "string"
- },
- "actual_result": {
- "description": "The actual outcome or result observed after the test was executed.",
- "type": "string"
- },
- "is_passed": {
- "description": "A boolean indicating whether the test passed or failed.",
- "type": "boolean"
- }
- },
- "required": [
- "test_input",
- "run_command",
- "expected_result",
- "actual_result",
- "is_passed"
- ],
- "type": "object"
- }
- },
"properties": {
"test_cases": {
"description": "Extract a list of test cases that were run.",
"items": {
- "$ref": "#/$defs/TestCase"
+ "properties": {
+ "test_input": {
+ "description": "The input data or conditions used for the test.",
+ "type": "string"
+ },
+ "run_command": {
+ "description": "The command or function that was executed for the test.",
+ "type": "string"
+ },
+ "expected_result": {
+ "description": "The expected outcome or result of the test.",
+ "type": "string"
+ },
+ "actual_result": {
+ "description": "The actual outcome or result observed after the test was executed.",
+ "type": "string"
+ },
+ "is_passed": {
+ "description": "A boolean indicating whether the test passed or failed.",
+ "type": "boolean"
+ }
+ },
+ "required": [
+ "test_input",
+ "run_command",
+ "expected_result",
+ "actual_result",
+ "is_passed"
+ ],
+ "type": "object"
},
"type": "array"
}
@@ -50,4 +45,4 @@
],
"type": "object"
}
-}
\ No newline at end of file
+}
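For reference (not part of the patch), a hypothetical arguments payload conforming to the inlined item schema above; the values are invented for illustration.

```python
import json

# Illustrative arguments for the test_summary function above; values are made up.
test_summary_args = {
    "test_cases": [
        {
            "test_input": "n = 201",
            "run_command": "python count_primes.py 201",
            "expected_result": "46",
            "actual_result": "46",
            "is_passed": True,
        }
    ]
}
print(json.dumps(test_summary_args, indent=2))
```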
diff --git a/creator/retrivever/base.py b/creator/retrivever/base.py
index df483db..1f4a4b7 100644
--- a/creator/retrivever/base.py
+++ b/creator/retrivever/base.py
@@ -1,106 +1,44 @@
-import numpy as np
-from typing import List
-import json
-import os
-
-from creator.llm import create_embedding
-from creator.config.library import config
-
-from .score_functions import cosine_similarity
+from typing import List, Any, Dict
+from langchain.vectorstores.qdrant import Qdrant
+from langchain.docstore.document import Document
+from creator.utils import generate_uuid_like_string
class BaseVectorStore:
- def __init__(self, skill_library_path: str = ""):
-
- self.vectordb_path: str = config.local_skill_library_vectordb_path
- self.skill_library_path = config.local_skill_library_path
- self.vector_store = {}
- self.embeddings = None
- self.embedding_model = create_embedding()
- self.sorted_keys = []
- self.query_cache = {}
-
- if skill_library_path and os.path.exists(skill_library_path):
- self.skill_library_path = skill_library_path
-
- if os.path.isdir(self.skill_library_path):
- self.query_cache_path = self.vectordb_path + "/query_cache.json"
- self.vectordb_path = self.vectordb_path + "/vector_db.json"
- if os.path.exists(self.query_cache_path):
- with open(self.query_cache_path, mode="r", encoding="utf-8") as f:
- self.query_cache = json.load(f)
-
- if os.path.exists(self.vectordb_path):
- # load vectordb
- with open(self.vectordb_path, mode="r", encoding="utf-8") as f:
- self.vector_store = json.load(f)
-
- self.update_index()
-
- def update_index(self):
- # glob skill_library_path to find `embedding_text.txt`
- embeddings = []
+ def __init__(self, vectordb_path, embedding, collection_name):
+ self.vectordb_path = vectordb_path
+ self.embedding = embedding
+ self.collection_name = collection_name
+ self.db = None
+
+ def _preprocess(self, doc: Any, **kwargs):
+ """Preprocess the input doc into text"""
+ return doc
+
+ def _postprocess(self, documents: List[Document]):
+ """Postprocess the documents"""
+ return documents
+
+ def _update_index(self):
+ pass
+
+ def reset(self):
+ self.db = None
+
+ def index(self, documents: List[Any], ids: List[str] = None, metadatas: List[Dict] = None):
+ """Public method to index a document."""
+ if metadatas is None:
+ metadatas = documents
+ texts = [self._preprocess(doc) for doc in documents]
+ if ids is None:
+ ids = [generate_uuid_like_string(text) for text in texts]
+ if self.db is None:
+ self.db = Qdrant.from_texts(texts=texts, embedding=self.embedding, metadatas=metadatas, ids=ids, path=self.vectordb_path, collection_name=self.collection_name)
+ else:
+ self.db.add_texts(texts=texts, metadatas=metadatas, ids=ids)
- for root, dirs, files in os.walk(self.skill_library_path):
- for file in files:
- if root not in self.vector_store and file == "embedding_text.txt":
- embedding_text_path = os.path.join(root, file)
- with open(embedding_text_path, mode="r", encoding="utf-8") as f:
- embedding_text = f.read()
-
- skill_path = os.path.join(root, "skill.json")
- with open(skill_path, encoding="utf-8") as f:
- skill_json = json.load(f)
- skill_json["skill_id"] = root
- skill_json["embedding_text"] = embedding_text
- self.vector_store[root] = skill_json
-
- # index embedding_texts
- no_embedding_obj = {key:value for key, value in self.vector_store.items() if "embedding" not in value}
- if len(no_embedding_obj) > 0:
- no_embedding_texts = []
- sorted_keys = sorted(no_embedding_obj)
- for key in sorted_keys:
- no_embedding_texts.append(no_embedding_obj[key]["embedding_text"])
-
- embeddings = self.embedding_model.embed_documents(no_embedding_texts)
- for i, key in enumerate(sorted_keys):
- self.vector_store[key]["embedding"] = embeddings[i]
-
- self.sorted_keys = sorted(self.vector_store)
- embeddings = []
- for key in self.sorted_keys:
- embeddings.append(self.vector_store[key]["embedding"])
- self.embeddings = np.array(embeddings)
- # save to vectordb
- with open(self.vectordb_path, "w", encoding="utf-8") as f:
- json.dump(self.vector_store, f)
-
- def save_query_cache(self):
- with open(self.query_cache_path, "w", encoding="utf-8") as f:
- json.dump(self.query_cache, f)
-
def search(self, query: str, top_k: int = 3, threshold=0.8) -> List[dict]:
- key = str((query, top_k, threshold))
- if key in self.query_cache:
- return self.query_cache[key]
-
- self.update_index()
-
- query_embedding = self.embedding_model.embed_query(query)
- query_embedding = np.array(query_embedding)
- indexes, scores = cosine_similarity(docs_matrix=self.embeddings, query_vec=query_embedding, k=top_k)
- results = []
- for i, index in enumerate(indexes):
- if scores[i] < threshold:
- break
- result = self.vector_store[self.sorted_keys[index]]
- result = result.copy()
- result.pop("embedding")
- result["score"] = scores[i]
- results.append(result)
- self.query_cache[key] = results
- self.save_query_cache()
- return results
-
+ self._update_index()
+ documents = self.db.similarity_search(query=query, k=top_k, score_threshold=threshold)
+ return self._postprocess(documents)
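A minimal usage sketch (not part of the patch) of the refactored store: index a few texts, then search. It assumes the qdrant-client dependency and OpenAI credentials are available and that the local path is writable; the path and strings are illustrative.

```python
from langchain.embeddings import OpenAIEmbeddings

from creator.retrivever.base import BaseVectorStore

# Illustrative: build a store backed by a local Qdrant collection, then query it.
store = BaseVectorStore(
    vectordb_path="/tmp/demo_vectordb",   # hypothetical local path
    embedding=OpenAIEmbeddings(),
    collection_name="demo",
)
store.index(documents=["how to merge two pdf files", "plot a sine wave with matplotlib"])
results = store.search("combine pdfs", top_k=1)
```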
diff --git a/creator/retrivever/embedding_creator.py b/creator/retrivever/embedding_creator.py
new file mode 100644
index 0000000..044ef45
--- /dev/null
+++ b/creator/retrivever/embedding_creator.py
@@ -0,0 +1,20 @@
+import os
+from langchain.embeddings import OpenAIEmbeddings, CacheBackedEmbeddings
+from langchain.storage import LocalFileStore
+
+
+def create_embedding(config):
+
+ use_azure = True if os.getenv("OPENAI_API_TYPE", None) == "azure" else False
+
+ if use_azure:
+ azure_model = os.getenv("EMBEDDING_DEPLOYMENT_NAME", None)
+ print(azure_model)
+ embedding = OpenAIEmbeddings(deployment=azure_model, model=azure_model)
+ else:
+ embedding = OpenAIEmbeddings()
+ fs = LocalFileStore(config.embedding_cache_path)
+ cached_embedding = CacheBackedEmbeddings.from_bytes_store(
+ embedding, fs, namespace=embedding.model
+ )
+ return cached_embedding
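A minimal sketch (not part of the patch) of calling this helper directly; the config object below is a stand-in exposing only the attribute the helper reads, and OpenAI credentials are assumed to be set in the environment.

```python
from types import SimpleNamespace

from creator.retrivever.embedding_creator import create_embedding

# Stand-in config; the real one is creator.config.library.config.
config = SimpleNamespace(embedding_cache_path="/tmp/embedding_cache")

embedding = create_embedding(config)
vectors = embedding.embed_documents(["hello world"])  # subsequent calls hit the local cache
```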
diff --git a/creator/retrivever/memory_retrivever.py b/creator/retrivever/memory_retrivever.py
new file mode 100644
index 0000000..b5aef32
--- /dev/null
+++ b/creator/retrivever/memory_retrivever.py
@@ -0,0 +1,21 @@
+from typing import List
+
+from langchain.docstore.document import Document
+from langchain.adapters.openai import convert_openai_messages
+
+from creator.config.library import config
+
+from .base import BaseVectorStore
+from .embedding_creator import create_embedding
+
+
+class MemoryVectorStore(BaseVectorStore):
+
+ def __init__(self, collection_name="recall_memory"):
+ self.vectordb_path = config.vectordb_path
+ self.embedding = create_embedding(config)
+ self.collection_name = collection_name
+ self.db = None
+
+ def _postprocess(self, documents: List[Document]):
+ return [convert_openai_messages(doc.metadata) for doc in documents]
diff --git a/creator/retrivever/score_functions.py b/creator/retrivever/score_functions.py
deleted file mode 100644
index 2d0f5d6..0000000
--- a/creator/retrivever/score_functions.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import numpy as np
-
-
-def cosine_similarity(docs_matrix, query_vec, k=3):
- similarities = np.dot(docs_matrix, query_vec) / (np.linalg.norm(docs_matrix, axis=1) * np.linalg.norm(query_vec))
- top_k_indices = np.argsort(similarities)[-k:][::-1]
- return top_k_indices, similarities[top_k_indices]
diff --git a/creator/retrivever/skill_retrivever.py b/creator/retrivever/skill_retrivever.py
new file mode 100644
index 0000000..97c93db
--- /dev/null
+++ b/creator/retrivever/skill_retrivever.py
@@ -0,0 +1,39 @@
+from typing import List
+import json
+import os
+
+from langchain.docstore.document import Document
+
+from creator.config.library import config
+
+from .base import BaseVectorStore
+from .embedding_creator import create_embedding
+
+
+class SkillVectorStore(BaseVectorStore):
+
+ def __init__(self):
+ self.vectordb_path = config.vectordb_path
+ self.embedding = create_embedding(config)
+ self.collection_name = "skill_library"
+ self.db = None
+
+ def _update_index(self):
+ # glob skill_library_path to find `embedding_text.txt`
+ texts = []
+ metadatas = []
+ for root, dirs, files in os.walk(config.local_skill_library_path):
+ for file in files:
+ if file == "embedding_text.txt":
+ embedding_text_path = os.path.join(root, file)
+ with open(embedding_text_path, mode="r", encoding="utf-8") as f:
+ embedding_text = f.read()
+ skill_path = os.path.join(root, "skill.json")
+ with open(skill_path, encoding="utf-8") as f:
+ skill_json = json.load(f)
+ texts.append(embedding_text)
+ metadatas.append(skill_json)
+ self.index(documents=texts, metadatas=metadatas)
+
+ def _postprocess(self, documents: List[Document]):
+ return [doc.metadata for doc in documents]
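A minimal usage sketch (not part of the patch) for the skill retriever; it assumes skills with an `embedding_text.txt` and `skill.json` already exist under `config.local_skill_library_path`, and the query is illustrative.

```python
from creator.retrivever.skill_retrivever import SkillVectorStore

# Illustrative: search() first calls _update_index(), which walks the local
# skill library and indexes every embedding_text.txt it finds.
store = SkillVectorStore()
skills = store.search("I want to extract some pages from a pdf", top_k=3)
for skill_json in skills:
    print(skill_json.get("skill_name"))
```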
diff --git a/creator/skill_library/open-creator/create/conversation_history.json b/creator/skill_library/open-creator/create/conversation_history.json
deleted file mode 100644
index fbf893c..0000000
--- a/creator/skill_library/open-creator/create/conversation_history.json
+++ /dev/null
@@ -1,6 +0,0 @@
-[
- {
- "role": "user",
- "content": "# file name: create.py\nimport creator\nfrom creator.schema.skill import CodeSkill\nfrom typing import Optional, List\n\n\ndef create(\n request: Optional[str] = None,\n messages: Optional[List[dict]] = None,\n messages_json_path: Optional[str] = None,\n skill_path: Optional[str] = None,\n skill_json_path: Optional[str] = None,\n file_content: Optional[str] = None,\n file_path: Optional[str] = None,\n huggingface_repo_id: Optional[str] = None,\n huggingface_skill_path: Optional[str] = None,\n) -> CodeSkill:\n \"\"\"Create a skill from various sources.\n\n Args:\n request (Optional[str], optional): Request string. Defaults to None.\n messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.\n messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.\n skill_path (Optional[str], optional): Path to skill directory. Defaults to None.\n skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.\n file_content (Optional[str], optional): File content. Defaults to None.\n file_path (Optional[str], optional): Path to file. Defaults to None.\n huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.\n huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.\n\n Returns:\n CodeSkill: Created skill\n Example:\n >>> skill = creator.create(request=\"filter how many prime numbers are in 201\")\n >>> skill = creator.create(messages=[{\"role\": \"user\",\"content\": \"write a program to list all the python functions and their docstrings in a directory\"},{\"role\": \"assistant\",\"content\": \"Sure, I can help with that. Here's the plan:\\n\\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\\n4. 
Finally, we will print out the function names and their docstrings.\\n\\nLet's start with step 1: getting a list of all Python files in the specified directory.\",\"function_call\": {\"name\": \"run_code\",\"arguments\": \"{\\n \\\"language\\\": \\\"python\\\",\\n \\\"code\\\": \\\"import os\\\\nimport glob\\\\n\\\\n# Get the current working directory\\\\ncwd = os.getcwd()\\\\n\\\\n# Get a list of all Python files in the directory\\\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\\\n\\\\npython_files\\\"\\n}\"}}])\n >>> skill = creator.create(messages_json_path=\"./messages_example.json\")\n >>> skill = creator.create(file_path=\"../creator/utils/ask_human.py\")\n >>> skill = creator.create(huggingface_repo_id=\"Sayoyo/skill-library\", huggingface_skill_path=\"extract_pdf_section\")\n >>> skill = creator.create(skill_json_path=os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/create/skill.json\")\n \"\"\"\n if request is not None:\n skill = creator.create(request=request)\n elif messages is not None:\n skill = creator.create(messages=messages)\n elif messages_json_path is not None:\n skill = creator.create(messages_json_path=messages_json_path)\n elif skill_path is not None:\n skill = creator.create(skill_path=skill_path)\n elif skill_json_path is not None:\n skill = creator.create(skill_json_path=skill_json_path)\n elif file_content is not None:\n skill = creator.create(file_content=file_content)\n elif file_path is not None:\n skill = creator.create(file_path=file_path)\n elif huggingface_repo_id is not None and huggingface_skill_path is not None:\n skill = creator.create(\n huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path\n )\n else:\n raise ValueError(\"At least one argument must be provided.\")\n\n return skill\n"
- }
-]
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/create/embedding_text.txt b/creator/skill_library/open-creator/create/embedding_text.txt
deleted file mode 100644
index 130fb4d..0000000
--- a/creator/skill_library/open-creator/create/embedding_text.txt
+++ /dev/null
@@ -1,22 +0,0 @@
-create
-Create a skill from various sources.
- Args:
- request (Optional[str], optional): Request string. Defaults to None.
- messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.
- messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.
- skill_path (Optional[str], optional): Path to skill directory. Defaults to None.
- skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.
- file_content (Optional[str], optional): File content. Defaults to None.
- file_path (Optional[str], optional): Path to file. Defaults to None.
- huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.
- huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.
-
- Returns:
- CodeSkill: Created skill
- Example:
- >>> skill = create(request="filter how many prime numbers are in 201")
- >>> skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
- >>> skill = create(messages_json_path="./messages_example.json")
- >>> skill = create(file_path="../creator/utils/ask_human.py")
- >>> skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
- >>> skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/create/function_call.json b/creator/skill_library/open-creator/create/function_call.json
deleted file mode 100644
index 98ec578..0000000
--- a/creator/skill_library/open-creator/create/function_call.json
+++ /dev/null
@@ -1,46 +0,0 @@
-{
- "name": "create",
- "description": "Create a skill from various sources.\n\nskill = creator.create(request=\"filter how many prime numbers are in 201\")",
- "parameters": {
- "type": "object",
- "properties": {
- "request": {
- "type": "string",
- "description": "Request string."
- },
- "messages": {
- "type": "array",
- "description": "Messages in list of dict format."
- },
- "messages_json_path": {
- "type": "string",
- "description": "Path to messages JSON file."
- },
- "skill_path": {
- "type": "string",
- "description": "Path to skill directory."
- },
- "skill_json_path": {
- "type": "string",
- "description": "Path to skill JSON file."
- },
- "file_content": {
- "type": "string",
- "description": "File content."
- },
- "file_path": {
- "type": "string",
- "description": "Path to file."
- },
- "huggingface_repo_id": {
- "type": "string",
- "description": "Huggingface repo ID."
- },
- "huggingface_skill_path": {
- "type": "string",
- "description": "Huggingface skill path."
- }
- },
- "required": []
- }
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/create/install_dependencies.sh b/creator/skill_library/open-creator/create/install_dependencies.sh
deleted file mode 100644
index c16ae4c..0000000
--- a/creator/skill_library/open-creator/create/install_dependencies.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip install -U "open-creator"
diff --git a/creator/skill_library/open-creator/create/skill.json b/creator/skill_library/open-creator/create/skill.json
deleted file mode 100644
index 64414cb..0000000
--- a/creator/skill_library/open-creator/create/skill.json
+++ /dev/null
@@ -1,106 +0,0 @@
-{
- "skill_name": "create",
- "skill_description": "Create a skill from various sources.",
- "skill_metadata": {
- "created_at": "2023-10-03 22:39:34",
- "author": "gongjunmin",
- "updated_at": "2023-10-03 22:39:34",
- "usage_count": 0,
- "version": "1.0.0",
- "additional_kwargs": {}
- },
- "skill_tags": [
- "create",
- "skill",
- "source"
- ],
- "skill_usage_example": "skill = creator.create(request=\"filter how many prime numbers are in 201\")",
- "skill_program_language": "python",
- "skill_code": "from creator.core import creator\nfrom creator.core.skill import CodeSkill\nfrom typing import Optional, List\n\n\ndef create(\n request: Optional[str] = None,\n messages: Optional[List[dict]] = None,\n messages_json_path: Optional[str] = None,\n skill_path: Optional[str] = None,\n skill_json_path: Optional[str] = None,\n file_content: Optional[str] = None,\n file_path: Optional[str] = None,\n huggingface_repo_id: Optional[str] = None,\n huggingface_skill_path: Optional[str] = None,\n) -> CodeSkill:\n \"\"\"Create a skill from various sources.\n\n Args:\n request (Optional[str], optional): Request string. Defaults to None.\n messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.\n messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.\n skill_path (Optional[str], optional): Path to skill directory. Defaults to None.\n skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.\n file_content (Optional[str], optional): File content. Defaults to None.\n file_path (Optional[str], optional): Path to file. Defaults to None.\n huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.\n huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.\n\n Returns:\n CodeSkill: Created skill\n Example:\n >>> skill = create(request=\"filter how many prime numbers are in 201\")\n >>> skill = create(messages=[{\"role\": \"user\",\"content\": \"write a program to list all the python functions and their docstrings in a directory\"},{\"role\": \"assistant\",\"content\": \"Sure, I can help with that. Here's the plan:\\n\\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\\n4. 
Finally, we will print out the function names and their docstrings.\\n\\nLet's start with step 1: getting a list of all Python files in the specified directory.\",\"function_call\": {\"name\": \"run_code\",\"arguments\": \"{\\n \\\"language\\\": \\\"python\\\",\\n \\\"code\\\": \\\"import os\\\\nimport glob\\\\n\\\\n# Get the current working directory\\\\ncwd = os.getcwd()\\\\n\\\\n# Get a list of all Python files in the directory\\\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\\\n\\\\npython_files\\\"\\n}\"}}])\n >>> skill = create(messages_json_path=\"./messages_example.json\")\n >>> skill = create(file_path=\"../creator/utils/ask_human.py\")\n >>> skill = create(huggingface_repo_id=\"Sayoyo/skill-library\", huggingface_skill_path=\"extract_pdf_section\")\n >>> skill = create(skill_json_path=os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/create/skill.json\")\n \"\"\"\n if request is not None:\n skill = creator.create(request=request)\n elif messages is not None:\n skill = creator.create(messages=messages)\n elif messages_json_path is not None:\n skill = creator.create(messages_json_path=messages_json_path)\n elif skill_path is not None:\n skill = creator.create(skill_path=skill_path)\n elif skill_json_path is not None:\n skill = creator.create(skill_json_path=skill_json_path)\n elif file_content is not None:\n skill = creator.create(file_content=file_content)\n elif file_path is not None:\n skill = creator.create(file_path=file_path)\n elif huggingface_repo_id is not None and huggingface_skill_path is not None:\n skill = creator.create(\n huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path\n )\n else:\n raise ValueError(\"At least one argument must be provided.\")\n\n return skill\n",
- "skill_parameters": [
- {
- "param_name": "request",
- "param_type": "string",
- "param_description": "Request string.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "messages",
- "param_type": "array",
- "param_description": "Messages in list of dict format.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "messages_json_path",
- "param_type": "string",
- "param_description": "Path to messages JSON file.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "skill_path",
- "param_type": "string",
- "param_description": "Path to skill directory.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "skill_json_path",
- "param_type": "string",
- "param_description": "Path to skill JSON file.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "file_content",
- "param_type": "string",
- "param_description": "File content.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "file_path",
- "param_type": "string",
- "param_description": "Path to file.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "huggingface_repo_id",
- "param_type": "string",
- "param_description": "Huggingface repo ID.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "huggingface_skill_path",
- "param_type": "string",
- "param_description": "Huggingface skill path.",
- "param_required": false,
- "param_default": null
- }
- ],
- "skill_return": {
- "param_name": "CodeSkill",
- "param_type": "object",
- "param_description": "Created skill",
- "param_required": true,
- "param_default": null
- },
- "skill_dependencies": [
- {
- "dependency_name": "open-creator",
- "dependency_version": "latest",
- "dependency_type": "pacakge"
- }
- ],
- "conversation_history": [
- {
- "role": "user",
- "content": "# file name: create.py\nimport creator\nfrom creator.schema.skill import CodeSkill\nfrom typing import Optional, List\n\n\ndef create(\n request: Optional[str] = None,\n messages: Optional[List[dict]] = None,\n messages_json_path: Optional[str] = None,\n skill_path: Optional[str] = None,\n skill_json_path: Optional[str] = None,\n file_content: Optional[str] = None,\n file_path: Optional[str] = None,\n huggingface_repo_id: Optional[str] = None,\n huggingface_skill_path: Optional[str] = None,\n) -> CodeSkill:\n \"\"\"Create a skill from various sources.\n\n Args:\n request (Optional[str], optional): Request string. Defaults to None.\n messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.\n messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.\n skill_path (Optional[str], optional): Path to skill directory. Defaults to None.\n skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.\n file_content (Optional[str], optional): File content. Defaults to None.\n file_path (Optional[str], optional): Path to file. Defaults to None.\n huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.\n huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.\n\n Returns:\n CodeSkill: Created skill\n Example:\n >>> skill = creator.create(request=\"filter how many prime numbers are in 201\")\n >>> skill = creator.create(messages=[{\"role\": \"user\",\"content\": \"write a program to list all the python functions and their docstrings in a directory\"},{\"role\": \"assistant\",\"content\": \"Sure, I can help with that. Here's the plan:\\n\\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\\n4. 
Finally, we will print out the function names and their docstrings.\\n\\nLet's start with step 1: getting a list of all Python files in the specified directory.\",\"function_call\": {\"name\": \"run_code\",\"arguments\": \"{\\n \\\"language\\\": \\\"python\\\",\\n \\\"code\\\": \\\"import os\\\\nimport glob\\\\n\\\\n# Get the current working directory\\\\ncwd = os.getcwd()\\\\n\\\\n# Get a list of all Python files in the directory\\\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\\\n\\\\npython_files\\\"\\n}\"}}])\n >>> skill = creator.create(messages_json_path=\"./messages_example.json\")\n >>> skill = creator.create(file_path=\"../creator/utils/ask_human.py\")\n >>> skill = creator.create(huggingface_repo_id=\"Sayoyo/skill-library\", huggingface_skill_path=\"extract_pdf_section\")\n >>> skill = creator.create(skill_json_path=os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/create/skill.json\")\n \"\"\"\n if request is not None:\n skill = creator.create(request=request)\n elif messages is not None:\n skill = creator.create(messages=messages)\n elif messages_json_path is not None:\n skill = creator.create(messages_json_path=messages_json_path)\n elif skill_path is not None:\n skill = creator.create(skill_path=skill_path)\n elif skill_json_path is not None:\n skill = creator.create(skill_json_path=skill_json_path)\n elif file_content is not None:\n skill = creator.create(file_content=file_content)\n elif file_path is not None:\n skill = creator.create(file_path=file_path)\n elif huggingface_repo_id is not None and huggingface_skill_path is not None:\n skill = creator.create(\n huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path\n )\n else:\n raise ValueError(\"At least one argument must be provided.\")\n\n return skill\n"
- }
- ],
- "test_summary": null
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/create/skill_code.py b/creator/skill_library/open-creator/create/skill_code.py
deleted file mode 100644
index c722a4c..0000000
--- a/creator/skill_library/open-creator/create/skill_code.py
+++ /dev/null
@@ -1,61 +0,0 @@
-from creator.core import creator
-from creator.core.skill import CodeSkill
-from typing import Optional, List
-
-
-def create(
- request: Optional[str] = None,
- messages: Optional[List[dict]] = None,
- messages_json_path: Optional[str] = None,
- skill_path: Optional[str] = None,
- skill_json_path: Optional[str] = None,
- file_content: Optional[str] = None,
- file_path: Optional[str] = None,
- huggingface_repo_id: Optional[str] = None,
- huggingface_skill_path: Optional[str] = None,
-) -> CodeSkill:
- """Create a skill from various sources.
-
- Args:
- request (Optional[str], optional): Request string. Defaults to None.
- messages (Optional[List[dict]], optional): Messages in list of dict format. Defaults to None.
- messages_json_path (Optional[str], optional): Path to messages JSON file. Defaults to None.
- skill_path (Optional[str], optional): Path to skill directory. Defaults to None.
- skill_json_path (Optional[str], optional): Path to skill JSON file. Defaults to None.
- file_content (Optional[str], optional): File content. Defaults to None.
- file_path (Optional[str], optional): Path to file. Defaults to None.
- huggingface_repo_id (Optional[str], optional): Huggingface repo ID. Defaults to None.
- huggingface_skill_path (Optional[str], optional): Huggingface skill path. Defaults to None.
-
- Returns:
- CodeSkill: Created skill
- Example:
- >>> skill = create(request="filter how many prime numbers are in 201")
- >>> skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
- >>> skill = create(messages_json_path="./messages_example.json")
- >>> skill = create(file_path="../creator/utils/ask_human.py")
- >>> skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
- >>> skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
- """
- if request is not None:
- skill = creator.create(request=request)
- elif messages is not None:
- skill = creator.create(messages=messages)
- elif messages_json_path is not None:
- skill = creator.create(messages_json_path=messages_json_path)
- elif skill_path is not None:
- skill = creator.create(skill_path=skill_path)
- elif skill_json_path is not None:
- skill = creator.create(skill_json_path=skill_json_path)
- elif file_content is not None:
- skill = creator.create(file_content=file_content)
- elif file_path is not None:
- skill = creator.create(file_path=file_path)
- elif huggingface_repo_id is not None and huggingface_skill_path is not None:
- skill = creator.create(
- huggingface_repo_id=huggingface_repo_id, huggingface_skill_path=huggingface_skill_path
- )
- else:
- raise ValueError("At least one argument must be provided.")
-
- return skill
diff --git a/creator/skill_library/open-creator/create/skill_doc.md b/creator/skill_library/open-creator/create/skill_doc.md
deleted file mode 100644
index d134b0b..0000000
--- a/creator/skill_library/open-creator/create/skill_doc.md
+++ /dev/null
@@ -1,26 +0,0 @@
-## Skill Details:
-- **Name**: create
-- **Description**: Create a skill from various sources.
-- **Version**: 1.0.0
-- **Usage**:
-```python
-skill = create(request="filter how many prime numbers are in 201")
-skill = create(messages=[{"role": "user","content": "write a program to list all the python functions and their docstrings in a directory"},{"role": "assistant","content": "Sure, I can help with that. Here's the plan:\n\n1. First, we need to get a list of all Python files in the specified directory. We can do this by using the `os` and `glob` modules in Python.\n2. Then, for each Python file, we will parse the file to find all function definitions. We can do this by using the `ast` module in Python, which can parse Python source code into an abstract syntax tree (AST).\n3. For each function definition, we will extract the function's name and its docstring. The `ast` module can also help us with this.\n4. Finally, we will print out the function names and their docstrings.\n\nLet's start with step 1: getting a list of all Python files in the specified directory.","function_call": {"name": "run_code","arguments": "{\n \"language\": \"python\",\n \"code\": \"import os\\nimport glob\\n\\n# Get the current working directory\\ncwd = os.getcwd()\\n\\n# Get a list of all Python files in the directory\\npython_files = glob.glob(os.path.join(cwd, '*.py'))\\n\\npython_files\"\n}"}}])
-skill = create(messages_json_path="./messages_example.json")
-skill = create(file_path="../creator/utils/ask_human.py")
-skill = create(huggingface_repo_id="Sayoyo/skill-library", huggingface_skill_path="extract_pdf_section")
-skill = create(skill_json_path=os.path.expanduser("~") + "/.cache/open_creator/skill_library/create/skill.json")
-```
-- **Parameters**:
- - **request** (string): Request string.
- - **messages** (array): Messages in list of dict format.
- - **messages_json_path** (string): Path to messages JSON file.
- - **skill_path** (string): Path to skill directory.
- - **skill_json_path** (string): Path to skill JSON file.
- - **file_content** (string): File content.
- - **file_path** (string): Path to file.
- - **huggingface_repo_id** (string): Huggingface repo ID.
- - **huggingface_skill_path** (string): Huggingface skill path.
-
-- **Returns**:
- - **CodeSkill** (object): Created skill
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/conversation_history.json b/creator/skill_library/open-creator/save/conversation_history.json
deleted file mode 100644
index 0ecf2e1..0000000
--- a/creator/skill_library/open-creator/save/conversation_history.json
+++ /dev/null
@@ -1,6 +0,0 @@
-[
- {
- "role": "user",
- "content": "# file name: save.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Usage examples:\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\")\n ```\n or\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, skill_path=\"/path/to/save\")\n ```\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save_to_hub(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save_to_skill_path(skill=skill, skill_path=skill_path)\n else:\n raise ValueError(\"Either huggingface_repo_id or skill_path must be provided.\")\n \n"
- }
-]
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/embedding_text.txt b/creator/skill_library/open-creator/save/embedding_text.txt
deleted file mode 100644
index 5ac09a5..0000000
--- a/creator/skill_library/open-creator/save/embedding_text.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-save
-Save a skill to a local path or a huggingface repo.
-Usage examples:
-save(skill=skill) or save(skill=skill, huggingface_repo_id='xxxx/skill_library') or save(skill=skill, skill_path='/path/to/save')
-['save', 'skill', 'huggingface', 'local path']
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/function_call.json b/creator/skill_library/open-creator/save/function_call.json
deleted file mode 100644
index 7bf0f1d..0000000
--- a/creator/skill_library/open-creator/save/function_call.json
+++ /dev/null
@@ -1,24 +0,0 @@
-{
- "name": "save",
- "description": "Save a skill to a local path or a huggingface repo.\n\nsave(skill=skill, huggingface_repo_id='xxxx/skill_library') or save(skill=skill, skill_path='/path/to/save')",
- "parameters": {
- "type": "object",
- "properties": {
- "skill": {
- "type": "object",
- "description": "CodeSkill object, the skill to be saved."
- },
- "huggingface_repo_id": {
- "type": "string",
- "description": "optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo."
- },
- "skill_path": {
- "type": "string",
- "description": "optional, the local path. If provided, the skill will be saved to this path."
- }
- },
- "required": [
- "skill"
- ]
- }
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/install_dependencies.sh b/creator/skill_library/open-creator/save/install_dependencies.sh
deleted file mode 100644
index eb1ad2d..0000000
--- a/creator/skill_library/open-creator/save/install_dependencies.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip install -U "open-creator"
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/skill.json b/creator/skill_library/open-creator/save/skill.json
deleted file mode 100644
index af93afe..0000000
--- a/creator/skill_library/open-creator/save/skill.json
+++ /dev/null
@@ -1,59 +0,0 @@
-{
- "skill_name": "save",
- "skill_description": "Save a skill to a local path or a huggingface repo.",
- "skill_metadata": {
- "created_at": "2023-10-04 09:54:43",
- "author": "gongjunmin",
- "updated_at": "2023-10-04 09:54:43",
- "usage_count": 0,
- "version": "1.0.0",
- "additional_kwargs": {}
- },
- "skill_tags": [
- "save",
- "skill",
- "huggingface",
- "local path"
- ],
- "skill_usage_example": "save(skill=skill, huggingface_repo_id='ChuxiJ/skill_library') or save(skill=skill, skill_path='/path/to/save')",
- "skill_program_language": "python",
- "skill_code": "from creator.core import creator\nfrom creator.core.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Example:\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\") # save to remote\n >>> save(skill=skill, skill_path=\"/path/to/save\") # save to local\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save(skill=skill, skill_path=skill_path)\n else:\n creator.save(skill=skill)",
- "skill_parameters": [
- {
- "param_name": "skill",
- "param_type": "object",
- "param_description": "CodeSkill object, the skill to be saved.",
- "param_required": true,
- "param_default": null
- },
- {
- "param_name": "huggingface_repo_id",
- "param_type": "string",
- "param_description": "optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.",
- "param_required": false,
- "param_default": null
- },
- {
- "param_name": "skill_path",
- "param_type": "string",
- "param_description": "optional, the local path. If provided, the skill will be saved to this path.",
- "param_required": false,
- "param_default": null
- }
- ],
- "skill_return": null,
- "skill_dependencies": [
- {
- "dependency_name": "open-creator",
- "dependency_version": "latest",
- "dependency_type": "package"
- }
- ],
- "conversation_history": [
- {
- "role": "user",
- "content": "# file name: save.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):\n \"\"\"\n Save a skill to a local path or a huggingface repo.\n \n Parameters:\n skill: CodeSkill object, the skill to be saved.\n huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.\n skill_path: str, optional, the local path. If provided, the skill will be saved to this path.\n \n Returns:\n None\n \n Usage examples:\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, huggingface_repo_id=\"ChuxiJ/skill_library\")\n ```\n or\n ```python\n >>> import creator\n >>> import os\n >>> skill_json_path = os.path.expanduser(\"~\") + \"/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json\"\n >>> skill = creator.create(skill_json_path=skill_json_path)\n >>> creator.save(skill=skill, skill_path=\"/path/to/save\")\n ```\n \"\"\"\n if huggingface_repo_id is not None:\n creator.save_to_hub(skill=skill, huggingface_repo_id=huggingface_repo_id)\n elif skill_path is not None:\n creator.save_to_skill_path(skill=skill, skill_path=skill_path)\n else:\n raise ValueError(\"Either huggingface_repo_id or skill_path must be provided.\")\n \n"
- }
- ],
- "test_summary": null
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/skill_code.py b/creator/skill_library/open-creator/save/skill_code.py
deleted file mode 100644
index 01dfbdb..0000000
--- a/creator/skill_library/open-creator/save/skill_code.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from creator.core import creator
-from creator.core.skill import CodeSkill
-
-
-def save(skill: CodeSkill, huggingface_repo_id: str = None, skill_path: str = None):
- """
- Save a skill to a local path or a huggingface repo.
-
- Parameters:
- skill: CodeSkill object, the skill to be saved.
- huggingface_repo_id: str, optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.
- skill_path: str, optional, the local path. If provided, the skill will be saved to this path.
-
- Returns:
- None
-
- Example:
- >>> import creator
- >>> import os
- >>> skill_json_path = os.path.expanduser("~") + "/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json"
- >>> skill = creator.create(skill_json_path=skill_json_path)
- >>> save(skill=skill, huggingface_repo_id="ChuxiJ/skill_library") # save to remote
- >>> save(skill=skill, skill_path="/path/to/save") # save to local
- """
- if huggingface_repo_id is not None:
- creator.save(skill=skill, huggingface_repo_id=huggingface_repo_id)
- elif skill_path is not None:
- creator.save(skill=skill, skill_path=skill_path)
- else:
- creator.save(skill=skill)
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/save/skill_doc.md b/creator/skill_library/open-creator/save/skill_doc.md
deleted file mode 100644
index 08b1c63..0000000
--- a/creator/skill_library/open-creator/save/skill_doc.md
+++ /dev/null
@@ -1,26 +0,0 @@
-## Skill Details:
-- **Name**: save
-- **Description**: Save a skill to a local path or a huggingface repo.
-- **Version**: 1.0.0
-- **Usage**:
-You need to create a skill first
-```python
-import creator
-import os
-skill_json_path = os.path.expanduser("~") + "/.cache/open_creator/skill_library/ask_run_code_confirm/skill.json"
-skill = creator.create(skill_json_path=skill_json_path)
-```
-```python
-save(skill=skill, huggingface_repo_id="ChuxiJ/skill_library")
-```
-or
-```python
-save(skill=skill, skill_path="/path/to/save")
-```
-- **Parameters**:
- - **skill** (object): CodeSkill object, the skill to be saved.
- - Required: True
- - **huggingface_repo_id** (string): optional, the ID of the huggingface repo. If provided, the skill will be saved to this repo.
- - **skill_path** (string): optional, the local path. If provided, the skill will be saved to this path.
-
-- **Returns**:
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/conversation_history.json b/creator/skill_library/open-creator/search/conversation_history.json
deleted file mode 100644
index 847ef4f..0000000
--- a/creator/skill_library/open-creator/search/conversation_history.json
+++ /dev/null
@@ -1,6 +0,0 @@
-[
- {
- "role": "user",
- "content": "# file name: search.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n \"\"\"\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search(\"I want to extract some pages from a pdf\")\n \"\"\"\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)\n\n"
- }
-]
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/embedding_text.txt b/creator/skill_library/open-creator/search/embedding_text.txt
deleted file mode 100644
index 1651ac4..0000000
--- a/creator/skill_library/open-creator/search/embedding_text.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-search
-This skill allows users to search for skills by query.
-skills = search('I want to extract some pages from a pdf')
-['search', 'query', 'CodeSkill']
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/function_call.json b/creator/skill_library/open-creator/search/function_call.json
deleted file mode 100644
index 3a175cd..0000000
--- a/creator/skill_library/open-creator/search/function_call.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
- "name": "search",
- "description": "This skill allows users to search for skills by query.\n\nskills = search('I want to extract some pages from a pdf')",
- "parameters": {
- "type": "object",
- "properties": {
- "query": {
- "type": "string",
- "description": "The query to search for skills."
- },
- "top_k": {
- "type": "integer",
- "description": "The maximum number of skills to return.",
- "default": 1
- },
- "threshold": {
- "type": "float",
- "description": "The minimum similarity score to return a skill.",
- "default": 0.8
- }
- },
- "required": [
- "query"
- ]
- }
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/install_dependencies.sh b/creator/skill_library/open-creator/search/install_dependencies.sh
deleted file mode 100644
index eb1ad2d..0000000
--- a/creator/skill_library/open-creator/search/install_dependencies.sh
+++ /dev/null
@@ -1 +0,0 @@
-pip install -U "open-creator"
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/search.py b/creator/skill_library/open-creator/search/search.py
deleted file mode 100644
index 9c1a6df..0000000
--- a/creator/skill_library/open-creator/search/search.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from creator.core import creator
-from creator.core.skill import CodeSkill
-
-
-def search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:
- """
- Search skills by query.
-
- Parameters:
- query: str, the query.
- top_k: int, optional, the maximum number of skills to return.
- threshold: float, optional, the minimum similarity score to return a skill.
- Returns:
- a list of CodeSkill objects.
-
- Example:
- >>> import creator
- >>> skills = search("I want to extract some pages from a pdf")
- """
-
- return creator.search(query=query, top_k=top_k, threshold=threshold)
-
diff --git a/creator/skill_library/open-creator/search/skill.json b/creator/skill_library/open-creator/search/skill.json
deleted file mode 100644
index 24202a7..0000000
--- a/creator/skill_library/open-creator/search/skill.json
+++ /dev/null
@@ -1,64 +0,0 @@
-{
- "skill_name": "search",
- "skill_description": "This skill allows users to search for skills by query.",
- "skill_metadata": {
- "created_at": "2023-10-04 14:51:53",
- "author": "gongjunmin",
- "updated_at": "2023-10-04 14:51:53",
- "usage_count": 0,
- "version": "1.0.0",
- "additional_kwargs": {}
- },
- "skill_tags": [
- "search",
- "query",
- "CodeSkill"
- ],
- "skill_usage_example": "skills = search('I want to extract some pages from a pdf')",
- "skill_program_language": "python",
- "skill_code": "from creator.core import creator\nfrom creator.core.skill import CodeSkill\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n '''\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search('I want to extract some pages from a pdf')\n '''\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)",
- "skill_parameters": [
- {
- "param_name": "query",
- "param_type": "string",
- "param_description": "The query to search for skills.",
- "param_required": true,
- "param_default": null
- },
- {
- "param_name": "top_k",
- "param_type": "integer",
- "param_description": "The maximum number of skills to return.",
- "param_required": false,
- "param_default": 1
- },
- {
- "param_name": "threshold",
- "param_type": "float",
- "param_description": "The minimum similarity score to return a skill.",
- "param_required": false,
- "param_default": 0.8
- }
- ],
- "skill_return": {
- "param_name": "skills",
- "param_type": "array",
- "param_description": "A list of CodeSkill objects.",
- "param_required": true,
- "param_default": null
- },
- "skill_dependencies": [
- {
- "dependency_name": "open-creator",
- "dependency_version": "latest",
- "dependency_type": "package"
- }
- ],
- "conversation_history": [
- {
- "role": "user",
- "content": "# file name: search.py\nimport creator\nfrom creator.schema.skill import CodeSkill\n\n\ndef search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:\n \"\"\"\n Search skills by query.\n \n Parameters:\n query: str, the query.\n top_k: int, optional, the maximum number of skills to return.\n threshold: float, optional, the minimum similarity score to return a skill.\n Returns:\n a list of CodeSkill objects.\n\n Example:\n >>> import creator\n >>> skills = search(\"I want to extract some pages from a pdf\")\n \"\"\"\n\n return creator.search(query=query, top_k=top_k, threshold=threshold)\n\n"
- }
- ],
- "test_summary": null
-}
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/skill_code.py b/creator/skill_library/open-creator/search/skill_code.py
deleted file mode 100644
index c8ff216..0000000
--- a/creator/skill_library/open-creator/search/skill_code.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import creator
-from creator.core.skill import CodeSkill
-
-def search(query: str, top_k=1, threshold=0.8) -> list[CodeSkill]:
- '''
- Search skills by query.
-
- Parameters:
- query: str, the query.
- top_k: int, optional, the maximum number of skills to return.
- threshold: float, optional, the minimum similarity score to return a skill.
- Returns:
- a list of CodeSkill objects.
-
- Example:
- >>> import creator
- >>> skills = search('I want to extract some pages from a pdf')
- '''
-
- return creator.search(query=query, top_k=top_k, threshold=threshold)
\ No newline at end of file
diff --git a/creator/skill_library/open-creator/search/skill_doc.md b/creator/skill_library/open-creator/search/skill_doc.md
deleted file mode 100644
index 9387901..0000000
--- a/creator/skill_library/open-creator/search/skill_doc.md
+++ /dev/null
@@ -1,18 +0,0 @@
-## Skill Details:
-- **Name**: search
-- **Description**: This skill allows users to search for skills by query.
-- **Version**: 1.0.0
-- **Usage**:
-```python
-skills = search('I want to extract some pages from a pdf')
-```
-- **Parameters**:
- - **query** (string): The query to search for skills.
- - Required: True
- - **top_k** (integer): The maximum number of skills to return.
- - Default: 1
- - **threshold** (float): The minimum similarity score to return a skill.
- - Default: 0.8
-
-- **Returns**:
- - **skills** (array): A list of CodeSkill objects.
\ No newline at end of file
diff --git a/creator/utils/__init__.py b/creator/utils/__init__.py
index 31f5498..15cb188 100644
--- a/creator/utils/__init__.py
+++ b/creator/utils/__init__.py
@@ -6,11 +6,14 @@
from .ask_human import ask_run_code_confirm
from .dict2list import convert_to_values_list
from .user_info import get_user_info
-from .load_prompt import load_system_prompt
+from .load_prompt import load_system_prompt, load_json_schema
from .printer import print
from .code_split import split_code_blocks
from .valid_code import is_valid_code, is_expression
from .tips_utils import remove_tips
+from .runnable_decorator import runnable, print_run_url
+from .attri_dict import AttrDict
+from .uuid_generator import generate_uuid_like_string
__all__ = [
@@ -23,9 +26,14 @@
"convert_to_values_list",
"get_user_info",
"load_system_prompt",
+ "load_json_schema",
"print",
"split_code_blocks",
"is_valid_code",
"is_expression",
- "remove_tips"
+ "remove_tips",
+ "runnable",
+ "print_run_url",
+ "AttrDict",
+ "generate_uuid_like_string"
]
diff --git a/creator/utils/attri_dict.py b/creator/utils/attri_dict.py
new file mode 100644
index 0000000..71bb396
--- /dev/null
+++ b/creator/utils/attri_dict.py
@@ -0,0 +1,28 @@
+import os
+
+
+class AttrDict(dict):
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ for key, value in self.items():
+ if isinstance(value, dict):
+ self[key] = AttrDict(value)
+
+ def __getattr__(self, key):
+ try:
+ return self[key]
+ except KeyError:
+ return None
+
+ def __setattr__(self, key, value):
+ if isinstance(value, dict):
+ value = AttrDict(value)
+ self[key] = value
+        if isinstance(value, bool):
+            os.environ[key] = str(value).lower()
+        elif isinstance(value, (str, int, float)):
+            os.environ[key] = str(value)
+
+ def __delattr__(self, key):
+ if key in self:
+ del self[key]
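A minimal usage sketch of the new `AttrDict` helper (the keys and values below are illustrative assumptions; `AttrDict` is re-exported from `creator.utils` per the updated `__init__.py`):

```python
import os
from creator.utils import AttrDict

# Nested dicts are converted recursively, so nested keys are attribute-accessible.
config = AttrDict({"MODEL_NAME": "gpt-4", "MEMGPT_CONFIG": {"PAGE_SIZE": 5}})
print(config.MODEL_NAME)               # "gpt-4"
print(config.MEMGPT_CONFIG.PAGE_SIZE)  # 5
print(config.MISSING_KEY)              # None -- unknown attributes fall back to None

# Scalar attribute assignments are mirrored into environment variables.
config.LANGCHAIN_PROJECT = "open-creator"
print(os.environ.get("LANGCHAIN_PROJECT"))  # "open-creator"
```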
diff --git a/creator/utils/install_command.py b/creator/utils/install_command.py
index c9e5894..b5baa14 100644
--- a/creator/utils/install_command.py
+++ b/creator/utils/install_command.py
@@ -15,7 +15,7 @@ def generate_install_command(language: str, dependencies):
return _generate_html_install_command(dependencies)
else:
raise NotImplementedError
-
+
def _generate_python_install_command(dependencies):
shell_command_str = 'pip show {package_name} || pip install "{package_name}'
diff --git a/creator/utils/langsmith_utils.py b/creator/utils/langsmith_utils.py
new file mode 100644
index 0000000..e2f406e
--- /dev/null
+++ b/creator/utils/langsmith_utils.py
@@ -0,0 +1,22 @@
+import os
+from langsmith import Client
+from langsmith.utils import LangSmithConnectionError
+from .printer import print
+
+
+def check_langsmith_ok():
+ cli = Client()
+ if os.environ.get("LANGCHAIN_TRACING_V2", "false") == "false":
+ return False
+ try:
+ cli.read_project(project_name="open-creator")
+ except LangSmithConnectionError as e:
+ if "Connection error" in str(e):
+ print("[red]Warning:[/red] [yellow]Langsmith is not running. Please run `langsmith start`.[/yellow]")
+ return False
+ else:
+ cli.create_project(project_name="open-creator")
+ return True
+
+
+langsmith_ok = check_langsmith_ok()
diff --git a/creator/utils/load_prompt.py b/creator/utils/load_prompt.py
index 00bccde..c73cb89 100644
--- a/creator/utils/load_prompt.py
+++ b/creator/utils/load_prompt.py
@@ -1,4 +1,13 @@
+import json
+
+
def load_system_prompt(prompt_path):
with open(prompt_path, encoding='utf-8') as f:
prompt = f.read()
return prompt
+
+
+def load_json_schema(json_schema_path):
+ with open(json_schema_path, encoding="utf-8") as f:
+ json_schema = json.load(f)
+ return json_schema
diff --git a/creator/utils/printer.py b/creator/utils/printer.py
index 6e62d99..895b7aa 100644
--- a/creator/utils/printer.py
+++ b/creator/utils/printer.py
@@ -2,10 +2,10 @@
import sys
from rich.markdown import Markdown
from rich.console import Console
-from rich import print as rich_print
from rich.json import JSON
import io
+
# Save the original print function
original_print = print
@@ -51,7 +51,10 @@ def add_default_callback(self):
def default_print(message, end='\n', file=None, flush=False, output_option='terminal'):
target_file = file or self.output_capture
if output_option in ['terminal', 'both']:
- console = Console(force_jupyter=self.is_jupyter, force_terminal=self.is_terminal, force_interactive=self.is_interactive, file=target_file)
+ if self.is_jupyter:
+ console = Console(force_jupyter=self.is_jupyter, force_terminal=self.is_terminal, force_interactive=self.is_interactive, file=target_file)
+ else:
+ console = Console(force_jupyter=self.is_jupyter, force_terminal=self.is_terminal, force_interactive=self.is_interactive)
console.print(message, end=end)
# if output_option in ['stdout', 'both']:
# rich_print(message, end=end, file=sys.stdout, flush=flush)
diff --git a/creator/utils/runnable_decorator.py b/creator/utils/runnable_decorator.py
new file mode 100644
index 0000000..52342c2
--- /dev/null
+++ b/creator/utils/runnable_decorator.py
@@ -0,0 +1,24 @@
+from langchain.schema.runnable import RunnableLambda
+from langchain.callbacks import tracing_v2_enabled
+from .printer import print
+from .langsmith_utils import langsmith_ok
+
+
+def runnable(run_name):
+ def decorator(func):
+ return RunnableLambda(func).with_config({"run_name": run_name})
+ return decorator
+
+
+def print_run_url(func):
+ def wrapper(*args, **kwargs):
+ if langsmith_ok:
+ with tracing_v2_enabled() as cb:
+ result = func(*args, **kwargs)
+ run_url = cb.get_run_url()
+ if run_url is not None:
+ print(f"Langsmith Run URL: [{run_url}]({run_url})", print_type="markdown")
+ else:
+ result = func(*args, **kwargs)
+ return result
+ return wrapper
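A brief sketch of how the two decorators compose (the wrapped functions here are hypothetical; `runnable` returns a `RunnableLambda`, so the decorated function is called via `.invoke`):

```python
from creator.utils import runnable, print_run_url

@runnable(run_name="add_one")
def add_one(x):
    # Wrapped into a LangChain RunnableLambda tagged with run_name="add_one".
    return x + 1

@print_run_url
def run_pipeline(x):
    # If LangSmith tracing is configured, the run URL is printed after execution;
    # otherwise the function simply runs without tracing.
    return add_one.invoke(x)

print(run_pipeline(41))  # 42
```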
diff --git a/creator/utils/skill_doc.py b/creator/utils/skill_doc.py
index aef4f57..e35c97c 100644
--- a/creator/utils/skill_doc.py
+++ b/creator/utils/skill_doc.py
@@ -39,4 +39,3 @@ def format_return(ret):
doc += format_return(skill.skill_return) + "\n"
return doc.strip()
-
diff --git a/creator/utils/uuid_generator.py b/creator/utils/uuid_generator.py
new file mode 100644
index 0000000..307a7ff
--- /dev/null
+++ b/creator/utils/uuid_generator.py
@@ -0,0 +1,23 @@
+import hashlib
+
+
+def generate_uuid_like_string(text):
+ """
+ Generates a UUID-like string based on the given text.
+
+ The function uses the SHA-256 hash function to hash the input text.
+ It then extracts 32 characters from the hash result and formats
+ them to mimic the structure of a UUID.
+
+ Parameters:
+ text (str): The input text to be hashed and converted into a UUID-like string.
+
+ Returns:
+ str: A UUID-like string derived from the input text.
+ """
+ # Use the SHA-256 hash function to hash the input text
+ hash_object = hashlib.sha256(text.encode())
+ hex_dig = hash_object.hexdigest()
+
+ # Extract 32 characters from the hash result and add separators to mimic the UUID format
+ return f"{hex_dig[:8]}-{hex_dig[8:12]}-{hex_dig[12:16]}-{hex_dig[16:20]}-{hex_dig[20:32]}"
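Since the result is just a formatted SHA-256 prefix, the mapping is deterministic; a small illustrative check (the input string is made up):

```python
from creator.utils import generate_uuid_like_string

a = generate_uuid_like_string("extract pages from a pdf")
b = generate_uuid_like_string("extract pages from a pdf")
assert a == b                               # same text -> same UUID-like string
assert len(a) == 36 and a.count("-") == 4   # 8-4-4-4-12 grouping, like a UUID
```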
diff --git a/docs/api_doc.md b/docs/api_doc.md
index eca7baa..628a610 100644
--- a/docs/api_doc.md
+++ b/docs/api_doc.md
@@ -1,7 +1,7 @@
## Open-Creator API Documentation
-### Function: `create`
-Generates a `CodeSkill` instance using different input sources.
+### Function: `#!python create`
+Generates a `#!python CodeSkill` instance using different input sources.
#### Parameters:
- `request`: String detailing the skill functionality.
@@ -12,46 +12,46 @@ Generates a `CodeSkill` instance using different input sources.
- `huggingface_skill_path`: Path to the skill within the Huggingface repository.
#### Returns:
-- `CodeSkill`: The created skill.
+- `#!python CodeSkill`: The created skill.
#### Usage:
1. Creating Skill using a Request String:
-```python
+``` py
skill = create(request="filter how many prime numbers are in 201")
```
2. Creating Skill using Messages:
- Directly:
-```python
+``` py
skill = create(messages=[{"role": "user", "content": "write a program..."}])
```
- Via JSON Path:
-```python
+``` py
skill = create(messages_json_path="./messages_example.json")
```
3. Creating Skill using File Content or File Path:
- Direct Content:
-```python
+``` py
skill = create(file_content="def example_function(): pass")
```
- File Path:
-```python
+``` py
skill = create(file_path="../creator/utils/example.py")
```
4. Creating Skill using Skill Path or Skill JSON Path:
- JSON Path:
-```python
+``` py
skill = create(skill_json_path="~/.cache/open_creator/skill_library/create/skill.json")
```
- Skill Path:
-```python
+``` py
skill = create(skill_path="~/.cache/open_creator/skill_library/create")
```
5. Creating Skill using Huggingface Repository ID and Skill Path:
If a skill is hosted in a Huggingface repository, you can create it by specifying the repository ID and the skill path within the repository.
-```python
+``` py
skill = create(huggingface_repo_id="YourRepo/skill-library", huggingface_skill_path="specific_skill")
```
@@ -64,7 +64,7 @@ skill = create(huggingface_repo_id="YourRepo/skill-library", huggingface_skill_p
### Function: `save`
-Stores a `CodeSkill` instance either to a local path or a Huggingface repository. In default just use `save(skill)` and it will store the skill into the default path. Only save the skill when the user asks to do so.
+Stores a `#!python CodeSkill` instance either to a local path or a Huggingface repository. By default, just use `save(skill)` and it will store the skill in the default path. Only save the skill when the user asks to do so.
#### Parameters:
- `skill` (CodeSkill): The skill instance to be saved.
@@ -75,15 +75,15 @@ Stores a `CodeSkill` instance either to a local path or a Huggingface repository
- None
#### Usage:
-The `save` function allows for the persistent storage of a `CodeSkill` instance by saving it either locally or to a specified Huggingface repository.
+The `save` function allows for the persistent storage of a `#!python CodeSkill` instance by saving it either locally or to a specified Huggingface repository.
1. **Save to Huggingface Repository:**
-```python
+``` py
save(skill=skill, huggingface_repo_id="YourRepo/skill_library")
```
2. **Save Locally:**
-```python
+``` py
save(skill=skill, skill_path="/path/to/save")
```
@@ -101,18 +101,18 @@ Retrieve skills related to a specified query from the available pool of skills.
- `threshold` (Optional[float]): Minimum similarity score to return a skill. Default is 0.8.
#### Returns:
-- List[CodeSkill]: A list of retrieved `CodeSkill` objects that match the query.
+- List[CodeSkill]: A list of retrieved `#!python CodeSkill` objects that match the query.
#### Usage:
The `search` function allows users to locate skills related to a particular query string. This is particularly useful for identifying pre-existing skills within a skill library that may fulfill a requirement or for exploring available functionalities.
1. **Basic Search:**
-```python
+``` py
skills = search("extract pages from a pdf")
```
2. **Refined Search:**
-```python
+``` py
skills = search("extract pages from a pdf", top_k=3, threshold=0.85)
```
@@ -131,7 +131,7 @@ Execute a skill with provided arguments or request.
- **Example Usage**:
-```python
+``` py linenums="1"
skills = search("pdf extract section")
if skills:
skill = skills[0]
@@ -149,7 +149,7 @@ Validate a skill using a tester agent.
- **Example Usage**:
-```python
+``` py linenums="1"
skill = create(request="filter prime numbers in a range, e.g., filter_prime_numbers(2, 201)")
test_summary = skill.test()
print(test_summary)
@@ -161,17 +161,17 @@ Modify and refine skills using operator overloading.
1. **Combining Skills**: Utilize the `+` operator to chain or execute skills in parallel, detailing the coordination with the `>` operator.
-```python
+``` py
new_skill = skillA + skillB > "Explanation of how skills A and B operate together"
```
2. **Refactoring Skills**: Employ the `>` operator to enhance or modify existing skills.
- ```python
+ ``` py
refactored_skill = skill > "Descriptive alterations or enhancements"
```
3. **Decomposing Skills**: Use the `<` operator to break down a skill into simpler components.
- ```python
+ ``` py
simpler_skills = skill < "Description of how the skill should be decomposed"
```
diff --git a/docs/commands.md b/docs/commands.md
index a8f1fa7..d5e8f32 100644
--- a/docs/commands.md
+++ b/docs/commands.md
@@ -13,8 +13,8 @@ creator -h
show this help message and exit
- `-c, --config`
open config.yaml file in text editor
-- `-i, --interactive`
- Enter interactive mode
+- `-i, --interpreter`
+ Enter interpreter mode
- COMMANDS `{create,save,search,server,ui}`
@@ -146,7 +146,7 @@ creator
or
```shell
-creator [-i] [--interactive] [-q] [--quiet]
+creator [-i] [--interpreter] [-q] [--quiet]
```
- `q, --quiet` Quiet mode to enter interactive mode and not rich_print LOGO and help
diff --git a/docs/configurations.md b/docs/configurations.md
index 18f11c7..04cb5b8 100644
--- a/docs/configurations.md
+++ b/docs/configurations.md
@@ -1,17 +1,21 @@
# Configurations
+```shell
+creator -c
+```
+
```yaml
LOCAL_SKILL_LIBRARY_PATH: .cache/open_creator/skill_library
REMOTE_SKILL_LIBRARY_PATH: .cache/open_creator/remote
-LOCAL_SKILL_LIBRARY_VECTORD_PATH: .cache/open_creator/vectordb/
PROMPT_CACHE_HISTORY_PATH: .cache/open_creator/prompt_cache/
+VECTORD_PATH: .cache/open_creator/vectordb/
LOGGER_CACHE_PATH: .cache/open_creator/logs/
-SKILL_EXTRACT_AGENT_CACHE_PATH: .cache/open_creator/llm_cache
+LLM_CACHE_PATH: .cache/open_creator/llm_cache
+EMBEDDING_CACHE_PATH: .cache/open_creator/embeddings/
OFFICIAL_SKILL_LIBRARY_PATH: timedomain/skill-library
OFFICIAL_SKILL_LIBRARY_TEMPLATE_PATH: timedomain/skill-library-template
-BUILD_IN_SKILL_LIBRARY_DIR: skill_library/open-creator/
# for AZURE, it is your_deployment_id
# for ANTHROPIC, it is claude-2
@@ -19,7 +23,7 @@ BUILD_IN_SKILL_LIBRARY_DIR: skill_library/open-creator/
# for huggingface, it is huggingface/WizardLM/WizardCoder-Python-34B-V1.0 model path
# for ollama, it is like ollama/llama2
# the default is openai/gpt-3.5
-MODEL_NAME: gpt-3.5-turbo-16k
+MODEL_NAME: gpt-4
TEMPERATURE: 0 # only 0 can use llm_cache
USE_AZURE: false
@@ -37,4 +41,41 @@ VERTEX_LOCATION: ""
HUGGINGFACE_API_KEY: ""
HUGGINGFACE_API_BASE: ""
-```
\ No newline at end of file
+
+# for langsmith trace
+LANGCHAIN_ENDPOINT:
+LANGCHAIN_API_KEY:
+LANGCHAIN_TRACING_V2: true
+LANGCHAIN_PROJECT: "open-creator"
+
+# for memgpt
+MEMGPT_CONFIG:
+ MEMORY_PATH: .cache/open_creator/memory
+ PERSONA: |
+ The following is a blank slate starter persona, I need to expand this to develop my own personality.
+
+ My name is MemGPT.
+ I am kind, thoughtful, and inquisitive.
+
+ HUMAN: |
+ This is what I know so far about the user, I should expand this as I learn more about them.
+
+ First name: Chad
+ Last name: ?
+ Gender: Male
+ Age: ?
+ Nationality: ?
+ Occupation: Computer science PhD student at UC Berkeley
+ Interests: Formula 1, Sailing, Taste of the Himalayas Restaurant in Berkeley, CSGO
+
+ AGENT_SUBTASKS: |
+ - create/save/search skill
+ - run/test/refactor skill
+ - show skill
+
+ SUMMARY_WARNING_TOKENS: 6000
+ CORE_MEMORY_PERSONA_CHAR_LIMIT: 2000
+ CORE_MEMORY_HUMAN_CHAR_LIMIT: 2000
+ PAGE_SIZE: 5
+ USE_VECTOR_SEARCH: true
+```
diff --git a/docs/examples/01_skills_create.ipynb b/docs/examples/01_skills_create.ipynb
index a1acc44..e82b1be 100644
--- a/docs/examples/01_skills_create.ipynb
+++ b/docs/examples/01_skills_create.ipynb
@@ -62,6 +62,7 @@
"\u001b[0;34m\u001b[0m \u001b[0mfile_path\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mOptional\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mhuggingface_repo_id\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mOptional\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m \u001b[0mhuggingface_skill_path\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mOptional\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0msave\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mbool\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
"\u001b[0;34m\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mcreator\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcore\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mskill\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mCodeSkill\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mDocstring:\u001b[0m Main method to create a new skill.\n",
"\u001b[0;31mFile:\u001b[0m ~/miniconda3/envs/open_creator_online/lib/python3.10/site-packages/creator/core/core.py\n",
@@ -93,7 +94,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "480c4325950f46d9a9802e3952ef5ddb",
+ "model_id": "59eeaa5d83ec4cf9bdea0dbead71929c",
"version_major": 2,
"version_minor": 0
},
@@ -107,7 +108,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "00c9080c652f4b26b163fd80933c8bf3",
+ "model_id": "b49892b54f2547d2a74ae02760d79ab0",
"version_major": 2,
"version_minor": 0
},
@@ -141,7 +142,55 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "74f96ccdc008481d92dd54ad13aecde2",
+ "model_id": "c1fc6b0e407b48fd9709e1fb878a6b3a",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "2dfb101217d84fff944e06c72db7eee7",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ " \n"
+ ],
+ "text/plain": []
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ " \n"
+ ],
+ "text/plain": []
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "bdd53aed57ca41e7a6b945d3e17f4908",
"version_major": 2,
"version_minor": 0
},
@@ -175,7 +224,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "3792eb41434a4afab72082018131f2e9",
+ "model_id": "8978e8e0ba4b4955a0b97b3ba2a549e0",
"version_major": 2,
"version_minor": 0
},
@@ -205,6 +254,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c2491d5f- \n",
+ "20de-4067-af80-134cafb3a449?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=641371;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c2491d5f-20de-4067-af80-134cafb3a449?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c2491d5f-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=641371;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c2491d5f-20de-4067-af80-134cafb3a449?poll=true\u001b\\\u001b[4;34m20de-4067-af80-134cafb3a449?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -223,21 +289,20 @@
" Skill Details: \n",
"\n",
" • Name : count_prime_numbers \n",
- " • Description : This skill counts the number of prime numbers in a given range. \n",
+ " • Description : This skill counts the number of prime numbers within a given range. A prime number is a natural \n",
+ " number greater than 1 that has no positive divisors other than 1 and itself. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " count_prime_numbers( 2 , 201 ) \n",
+ " count_prime_numbers( 201 ) # returns 46 \n",
" \n",
"\n",
" • Parameters : \n",
- " • start (integer): The starting number of the range. \n",
- " • Required: True \n",
- " • end (integer): The ending number of the range. \n",
+ " • n (integer): The upper limit of the range within which to count prime numbers. \n",
" • Required: True \n",
" • Returns : \n",
- " • count (integer): The number of prime numbers in the given range. \n",
+ " • prime_count (integer): The number of prime numbers within the given range. \n",
"\n"
],
"text/plain": [
@@ -245,21 +310,20 @@
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: count_prime_numbers \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill counts the number of prime numbers in a given range. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill counts the number of prime numbers within a given range. A prime number is a natural \n",
+ "\u001b[1;33m \u001b[0mnumber greater than 1 that has no positive divisors other than 1 and itself. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcount_prime_numbers\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m201\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcount_prime_numbers\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m201\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;149;144;119;48;2;39;40;34m# returns 46\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mstart\u001b[0m (integer): The starting number of the range. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mend\u001b[0m (integer): The ending number of the range. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mn\u001b[0m (integer): The upper limit of the range within which to count prime numbers. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mcount\u001b[0m (integer): The number of prime numbers in the given range. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mprime_count\u001b[0m (integer): The number of prime numbers within the given range. \n"
]
},
"metadata": {},
@@ -310,7 +374,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "1409e75812d1493a8885087f03ac6f25",
+ "model_id": "91d14a513cff4b68ac9cba04dc1a6db4",
"version_major": 2,
"version_minor": 0
},
@@ -340,6 +404,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/1e9ecd99- \n",
+ "be29-4eee-8b21-449f279b7f4b?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=254594;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/1e9ecd99-be29-4eee-8b21-449f279b7f4b?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/1e9ecd99-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=254594;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/1e9ecd99-be29-4eee-8b21-449f279b7f4b?poll=true\u001b\\\u001b[4;34mbe29-4eee-8b21-449f279b7f4b?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -357,17 +438,19 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : list_python_functions \n",
- " • Description : This skill lists all the Python functions and their docstrings in a specified directory. \n",
+ " • Name : list_python_functions_and_docstrings \n",
+ " • Description : This skill lists all the Python functions and their docstrings in a specified directory. It first \n",
+ " gets a list of all Python files in the directory, then parses each file to find all function definitions. For \n",
+ " each function definition, it extracts the function's name and its docstring. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " list_python_functions( '/path/to/directory' ) \n",
+ " list_python_functions_and_docstrings( '/path/to/directory' ) \n",
" \n",
"\n",
" • Parameters : \n",
- " • directory (string): The directory path where the Python files are located. \n",
+ " • directory (string): The directory to search for Python files. \n",
" • Required: True \n",
" • Returns : \n",
" \n"
@@ -376,17 +459,19 @@
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: list_python_functions \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill lists all the Python functions and their docstrings in a specified directory. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: list_python_functions_and_docstrings \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill lists all the Python functions and their docstrings in a specified directory. It first \n",
+ "\u001b[1;33m \u001b[0mgets a list of all Python files in the directory, then parses each file to find all function definitions. For \n",
+ "\u001b[1;33m \u001b[0meach function definition, it extracts the function's name and its docstring. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mlist_python_functions\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m/path/to/directory\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mlist_python_functions_and_docstrings\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m/path/to/directory\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mdirectory\u001b[0m (string): The directory path where the Python files are located. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mdirectory\u001b[0m (string): The directory to search for Python files. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n"
]
@@ -415,7 +500,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "ef6cc0331df844629f215b0e657c58e9",
+ "model_id": "72229d73f3cc4980bb7bcac6009fe5b9",
"version_major": 2,
"version_minor": 0
},
@@ -445,6 +530,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c94c0ec7- \n",
+ "ce23-44f5-91d5-6bd5ec1f1e39?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=672460;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c94c0ec7-ce23-44f5-91d5-6bd5ec1f1e39?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c94c0ec7-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=672460;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/c94c0ec7-ce23-44f5-91d5-6bd5ec1f1e39?poll=true\u001b\\\u001b[4;34mce23-44f5-91d5-6bd5ec1f1e39?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -462,62 +564,56 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : extract_pdf_pages \n",
- " • Description : This skill extracts a specified section from a PDF file and saves it as a new PDF. \n",
+ " • Name : extract_pages_from_pdf \n",
+ " • Description : This skill extracts a specified range of pages from a PDF file and saves them as a new PDF file. \n",
+ " The user needs to provide the path to the original PDF file and the range of pages to be extracted. The \n",
+ " extracted pages are saved in a new PDF file in the current working directory. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " pdf_path = '~/Downloads/voyager.pdf' \n",
- " start_page = 2 \n",
- " end_page = 5 \n",
- " output_path = 'extracted_pages.pdf' \n",
- " \n",
- " extract_pdf_pages(pdf_path, start_page, end_page, output_path) \n",
+ " extract_pages_from_pdf( '~/Downloads/voyager.pdf' , 2 , 5 , 'voyager_extracted.pdf' ) \n",
" \n",
"\n",
" • Parameters : \n",
- " • pdf_path (string): The path to the PDF file. \n",
- " • Required: True \n",
- " • start_page (integer): The starting page number to extract. \n",
+ " • pdf_path (string): Path to the original PDF file. \n",
" • Required: True \n",
- " • end_page (integer): The ending page number to extract. \n",
+ " • start_page (integer): The first page to be extracted. Page numbers start from 1. \n",
" • Required: True \n",
- " • output_path (string): The path to save the extracted pages as a new PDF file. \n",
+ " • end_page (integer): The last page to be extracted. This page is included in the extraction. \n",
" • Required: True \n",
+ " • output_file (string): Name of the output file where the extracted pages will be saved. \n",
+ " • Default: 'extracted_pages.pdf' \n",
" • Returns : \n",
- " • output_path (string): The path to the extracted pages PDF file. \n",
+ " • output_file (string): Name of the output file where the extracted pages were saved. \n",
" \n"
],
"text/plain": [
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: extract_pdf_pages \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill extracts a specified section from a PDF file and saves it as a new PDF. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: extract_pages_from_pdf \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill extracts a specified range of pages from a PDF file and saves them as a new PDF file. \n",
+ "\u001b[1;33m \u001b[0mThe user needs to provide the path to the original PDF file and the range of pages to be extracted. The \n",
+ "\u001b[1;33m \u001b[0mextracted pages are saved in a new PDF file in the current working directory. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpdf_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m~/Downloads/voyager.pdf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mstart_page\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mend_page\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m5\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34moutput_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34mextracted_pages.pdf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mextract_pdf_pages\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpdf_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mstart_page\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mend_page\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34moutput_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mextract_pages_from_pdf\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m~/Downloads/voyager.pdf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m5\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34mvoyager_extracted.pdf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mpdf_path\u001b[0m (string): The path to the PDF file. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mstart_page\u001b[0m (integer): The starting page number to extract. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mpdf_path\u001b[0m (string): Path to the original PDF file. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mend_page\u001b[0m (integer): The ending page number to extract. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mstart_page\u001b[0m (integer): The first page to be extracted. Page numbers start from 1. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1moutput_path\u001b[0m (string): The path to save the extracted pages as a new PDF file. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mend_page\u001b[0m (integer): The last page to be extracted. This page is included in the extraction. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1moutput_file\u001b[0m (string): Name of the output file where the extracted pages will be saved. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mDefault: 'extracted_pages.pdf' \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1moutput_path\u001b[0m (string): The path to the extracted pages PDF file. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1moutput_file\u001b[0m (string): Name of the output file where the extracted pages were saved. \n"
]
},
"metadata": {},
@@ -571,13 +667,13 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "1025bc88f4c2439dbf1a24a060c7db54",
+ "model_id": "25767f6c233d4d3a986d5859754d2fed",
"version_major": 2,
"version_minor": 0
},
@@ -607,6 +703,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/e71ac5c0- \n",
+ "685f-4f2b-9690-b77fb264eaec?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=90796;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/e71ac5c0-685f-4f2b-9690-b77fb264eaec?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/e71ac5c0-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=90796;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/e71ac5c0-685f-4f2b-9690-b77fb264eaec?poll=true\u001b\\\u001b[4;34m685f-4f2b-9690-b77fb264eaec?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -615,7 +728,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": 12,
"metadata": {},
"outputs": [
{
@@ -625,25 +738,13 @@
" Skill Details: \n",
"\n",
" • Name : display_markdown_message \n",
- " • Description : This skill is used to display a markdown message in a formatted way. It takes a multiline string as\n",
- " input and prints the message with proper formatting. It supports markdown syntax and can handle indentation and \n",
- " multiline strings. \n",
+ " • Description : This skill is used to display a markdown message. It works with multiline strings with lots of \n",
+ " indentation and will automatically make single line > tags beautiful. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " message = \"\"\" \n",
- " # Heading \n",
- " \n",
- " - List item 1 \n",
- " - List item 2 \n",
- " \n",
- " --- \n",
- " \n",
- " > Blockquote \n",
- " \n",
- " \"\"\" \n",
- " display_markdown_message(message) \n",
+ " display_markdown_message( '> This is a markdown message.' ) \n",
" \n",
"\n",
" • Parameters : \n",
@@ -657,25 +758,13 @@
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: display_markdown_message \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is used to display a markdown message in a formatted way. It takes a multiline string as\n",
- "\u001b[1;33m \u001b[0minput and prints the message with proper formatting. It supports markdown syntax and can handle indentation and \n",
- "\u001b[1;33m \u001b[0mmultiline strings. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is used to display a markdown message. It works with multiline strings with lots of \n",
+ "\u001b[1;33m \u001b[0mindentation and will automatically make single line > tags beautiful. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessage\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m\"\"\"\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m# Heading\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m- List item 1\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m- List item 2\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m---\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m> Blockquote\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m\"\"\"\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdisplay_markdown_message\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessage\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdisplay_markdown_message\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m> This is a markdown message.\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
@@ -702,13 +791,13 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "541ad5ff7e6146bbb08bde2c69149fe2",
+ "model_id": "4efa90f0f2004cd9beb5895cd879b702",
"version_major": 2,
"version_minor": 0
},
@@ -738,6 +827,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/f3a428b6- \n",
+ "e8ef-4d61-b9a1-503afe355eaa?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=429238;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/f3a428b6-e8ef-4d61-b9a1-503afe355eaa?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/f3a428b6-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=429238;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/f3a428b6-e8ef-4d61-b9a1-503afe355eaa?poll=true\u001b\\\u001b[4;34me8ef-4d61-b9a1-503afe355eaa?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -746,7 +852,7 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 14,
"metadata": {},
"outputs": [
{
@@ -755,20 +861,24 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : create_api \n",
- " • Description : Generates a CodeSkill instance using different input sources. \n",
+ " • Name : create \n",
+ " • Description : This function generates a CodeSkill instance using different input sources. It can take in a \n",
+ " request string detailing the skill functionality, messages or a path to a JSON file containing messages, a \n",
+ " string of file content or path to a code/API doc file, a directory path with skill name as stem or file path \n",
+ " with skill.json as stem, an identifier for a Huggingface repository, or a path to the skill within the \n",
+ " Huggingface repository. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " create_api(request, messages = messages, messages_json_path = messages_json_path, file_content = file_content, \n",
- " file_path = file_path, skill_path = skill_path, skill_json_path = skill_json_path, \n",
- " huggingface_repo_id = huggingface_repo_id, huggingface_skill_path = huggingface_skill_path) \n",
+ " from creator import create \n",
+ " \n",
+ " skill = create(request = '...' , messages = [ ... ], file_content = '...' , file_path = '...' , skill_path = '...' , \n",
+ " skill_json_path = '...' , huggingface_repo_id = '...' , huggingface_skill_path = '...' ) \n",
" \n",
"\n",
" • Parameters : \n",
" • request (string): String detailing the skill functionality. \n",
- " • Required: True \n",
" • messages (array): Messages as a list of dictionaries. \n",
" • messages_json_path (string): Path to a JSON file containing messages. \n",
" • file_content (string): String of file content. \n",
@@ -785,20 +895,24 @@
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: create_api \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: Generates a \u001b[1;36;40mCodeSkill\u001b[0m instance using different input sources. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: create \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This function generates a \u001b[1;36;40mCodeSkill\u001b[0m instance using different input sources. It can take in a \n",
+ "\u001b[1;33m \u001b[0mrequest string detailing the skill functionality, messages or a path to a JSON file containing messages, a \n",
+ "\u001b[1;33m \u001b[0mstring of file content or path to a code/API doc file, a directory path with skill name as stem or file path \n",
+ "\u001b[1;33m \u001b[0mwith \u001b[1;36;40mskill.json\u001b[0m as stem, an identifier for a Huggingface repository, or a path to the skill within the \n",
+ "\u001b[1;33m \u001b[0mHuggingface repository. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate_api\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mrequest\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages_json_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34mfrom\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreator\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34mimport\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mrequest\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mrequest\u001b[0m (string): String detailing the skill functionality. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mmessages\u001b[0m (array): Messages as a list of dictionaries. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mmessages_json_path\u001b[0m (string): Path to a JSON file containing messages. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mfile_content\u001b[0m (string): String of file content. \n",
@@ -833,19 +947,19 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
- "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api \n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -862,7 +976,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 16,
"metadata": {},
"outputs": [
{
diff --git a/docs/examples/02_skills_library.ipynb b/docs/examples/02_skills_library.ipynb
index f88ff36..7d3aec6 100644
--- a/docs/examples/02_skills_library.ipynb
+++ b/docs/examples/02_skills_library.ipynb
@@ -59,7 +59,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "d19fad8112b548a18fc1a1a0e05a246d",
+ "model_id": "0fad0f77899147108a9a42beeab4bfc9",
"version_major": 2,
"version_minor": 0
},
@@ -89,6 +89,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b898dfdb- \n",
+ "c596-4538-b4c9-d88136c3fb4f?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=81787;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b898dfdb-c596-4538-b4c9-d88136c3fb4f?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b898dfdb-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=81787;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b898dfdb-c596-4538-b4c9-d88136c3fb4f?poll=true\u001b\\\u001b[4;34mc596-4538-b4c9-d88136c3fb4f?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -106,20 +123,24 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : create_api \n",
- " • Description : Generates a CodeSkill instance using different input sources. \n",
+ " • Name : create \n",
+ " • Description : This function generates a CodeSkill instance using different input sources. It can take in a \n",
+ " request string detailing the skill functionality, messages or a path to a JSON file containing messages, a \n",
+ " string of file content or path to a code/API doc file, a directory path with skill name as stem or file path \n",
+ " with skill.json as stem, an identifier for a Huggingface repository, or a path to the skill within the \n",
+ " Huggingface repository. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " create_api(request, messages = messages, messages_json_path = messages_json_path, file_content = file_content, \n",
- " file_path = file_path, skill_path = skill_path, skill_json_path = skill_json_path, \n",
- " huggingface_repo_id = huggingface_repo_id, huggingface_skill_path = huggingface_skill_path) \n",
+ " from creator import create \n",
+ " \n",
+ " skill = create(request = '...' , messages = [ ... ], file_content = '...' , file_path = '...' , skill_path = '...' , \n",
+ " skill_json_path = '...' , huggingface_repo_id = '...' , huggingface_skill_path = '...' ) \n",
" \n",
"\n",
" • Parameters : \n",
" • request (string): String detailing the skill functionality. \n",
- " • Required: True \n",
" • messages (array): Messages as a list of dictionaries. \n",
" • messages_json_path (string): Path to a JSON file containing messages. \n",
" • file_content (string): String of file content. \n",
@@ -136,20 +157,24 @@
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: create_api \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: Generates a \u001b[1;36;40mCodeSkill\u001b[0m instance using different input sources. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: create \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This function generates a \u001b[1;36;40mCodeSkill\u001b[0m instance using different input sources. It can take in a \n",
+ "\u001b[1;33m \u001b[0mrequest string detailing the skill functionality, messages or a path to a JSON file containing messages, a \n",
+ "\u001b[1;33m \u001b[0mstring of file content or path to a code/API doc file, a directory path with skill name as stem or file path \n",
+ "\u001b[1;33m \u001b[0mwith \u001b[1;36;40mskill.json\u001b[0m as stem, an identifier for a Huggingface repository, or a path to the skill within the \n",
+ "\u001b[1;33m \u001b[0mHuggingface repository. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate_api\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mrequest\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages_json_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34mfrom\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreator\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34mimport\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcreate\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mrequest\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mmessages\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m.\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_content\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mfile_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mskill_json_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_repo_id\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mhuggingface_skill_path\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m...\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mrequest\u001b[0m (string): String detailing the skill functionality. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mmessages\u001b[0m (array): Messages as a list of dictionaries. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mmessages_json_path\u001b[0m (string): Path to a JSON file containing messages. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mfile_content\u001b[0m (string): String of file content. \n",
@@ -179,12 +204,12 @@
"data": {
"text/html": [
"\n",
- "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api \n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -204,19 +229,19 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "/Users/gongjunmin/.cache/open_creator/remote/ChuxiJ/skill_library/create_api\n"
+ "/Users/gongjunmin/.cache/open_creator/remote/ChuxiJ/skill_library/create\n"
]
},
{
"data": {
"text/html": [
"\n",
- "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api \n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -236,11 +261,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n"
+ "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_quadratic_equation\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m \u001b[1m\u001b[36msolve_random_maze\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_scatter_plot\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n"
]
}
],
@@ -265,12 +291,12 @@
"data": {
"text/html": [
"\n",
- "▌ saved to /Users/gongjunmin/.cache/open_creator/remote/ChuxiJ/skill_library/create_api/create_api \n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/remote/ChuxiJ/skill_library/create_api/create_api\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/create_api\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -388,14 +414,12 @@
"data": {
"text/html": [
"\n",
- "▌ saved to \n",
- "▌ /Users/gongjunmin/.cache/open_creator/remote/Sayoyo/skill-library/extract_pdf_section/extract_pdf_section \n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/extract_pdf_section \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to \u001b[0m\u001b[35m \u001b[0m\n",
- "\u001b[35m▌ \u001b[0m\u001b[35m/Users/gongjunmin/.cache/open_creator/remote/Sayoyo/skill-library/extract_pdf_section/extract_pdf_section\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/extract_pdf_section\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
diff --git a/docs/examples/03_skills_search.ipynb b/docs/examples/03_skills_search.ipynb
index 72b2a7b..de5ffca 100644
--- a/docs/examples/03_skills_search.ipynb
+++ b/docs/examples/03_skills_search.ipynb
@@ -52,11 +52,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n"
+ "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_quadratic_equation\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m \u001b[1m\u001b[36msolve_random_maze\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_scatter_plot\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n"
]
}
],
diff --git a/docs/examples/04_skills_run.ipynb b/docs/examples/04_skills_run.ipynb
index a4d26d0..0a6aff5 100644
--- a/docs/examples/04_skills_run.ipynb
+++ b/docs/examples/04_skills_run.ipynb
@@ -38,7 +38,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "../docs/tech_report/open-creator.pdf\n"
+ "ls: ./tech_report/open-creator.pdf: No such file or directory\n"
]
}
],
@@ -48,9 +48,24 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 3,
"metadata": {},
"outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "▌ loading vector database... \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mloading vector database...\u001b[0m\u001b[35m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -118,7 +133,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 4,
"metadata": {},
"outputs": [
{
@@ -184,7 +199,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -211,7 +226,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
@@ -225,7 +240,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -234,7 +249,7 @@
"'./data/open-creator2-5.pdf'"
]
},
- "execution_count": 10,
+ "execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -252,19 +267,19 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"\n",
- "▌ loading vector database... \n",
+ "▌ Installing dependencies \n",
" \n"
],
"text/plain": [
"\n",
- "\u001b[35m▌ \u001b[0m\u001b[35mloading vector database...\u001b[0m\u001b[35m \u001b[0m\n"
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstalling dependencies\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -273,19 +288,15 @@
{
"data": {
"text/html": [
- "{ \n",
- " \"status\" : \"success\" ,\n",
- " \"stdout\" : \"\" ,\n",
- " \"stderr\" : \"\" \n",
- "} \n",
+ " \n",
+ " pip show PyPDF2 || pip install \"PyPDF2>=1.26.0\" \n",
+ " \n",
" \n"
],
"text/plain": [
- "\u001b[1m{\u001b[0m\n",
- " \u001b[1;34m\"status\"\u001b[0m: \u001b[32m\"success\"\u001b[0m,\n",
- " \u001b[1;34m\"stdout\"\u001b[0m: \u001b[32m\"\"\u001b[0m,\n",
- " \u001b[1;34m\"stderr\"\u001b[0m: \u001b[32m\"\"\u001b[0m\n",
- "\u001b[1m}\u001b[0m\n"
+ "\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mshow\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mPyPDF2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m||\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minstall\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m\"PyPDF2>=1.26.0\"\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\n"
]
},
"metadata": {},
@@ -293,13 +304,22 @@
},
{
"data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "decbe223ddf0462b99a8819c2891d198",
- "version_major": 2,
- "version_minor": 0
- },
+ "text/html": [
+ "\n",
+ "▌ Install dependencies result: {'status': 'success', 'stdout': 'Name: PyPDF2\\nVersion: 3.0.1\\nSummary: A \n",
+ "▌ pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files\\nHome-page: \n",
+ "▌ \\nAuthor: \\nAuthor-email: Mathieu Fenniak biziqe@mathieu.fenniak.net \\nLicense: \\nLocation: \n",
+ "▌ /Users/gongjunmin/miniconda3/envs/open_creator_online/lib/python3.10/site-packages\\nRequires: \\nRequired-by: \n",
+ "▌ \\n', 'stderr': ''} \n",
+ " \n"
+ ],
"text/plain": [
- "Output()"
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstall dependencies result: {'status': 'success', 'stdout': 'Name: PyPDF2\\nVersion: 3.0.1\\nSummary: A \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mpure-python PDF library capable of splitting, merging, cropping, and transforming PDF files\\nHome-page: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m\\nAuthor: \\nAuthor-email: Mathieu Fenniak \u001b[0m\u001b]8;id=463524;mailto:biziqe@mathieu.fenniak.net\u001b\\\u001b[4;34mbiziqe@mathieu.fenniak.net\u001b[0m\u001b]8;;\u001b\\\u001b[35m\\nLicense: \\nLocation: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m/Users/gongjunmin/miniconda3/envs/open_creator_online/lib/python3.10/site-packages\\nRequires: \\nRequired-by: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m\\n', 'stderr': ''}\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -308,7 +328,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "6b9f6a7914a541268042529481a0af39",
+ "model_id": "3b2d13db34d543c29e28eca4539ff644",
"version_major": 2,
"version_minor": 0
},
@@ -342,7 +362,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "1b5d53466fbf4132938cb62f06f948ee",
+ "model_id": "72d9c1b7db8449c899744345865925f4",
"version_major": 2,
"version_minor": 0
},
@@ -373,6 +393,23 @@
"metadata": {},
"output_type": "display_data"
},
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/8e9f879c- \n",
+ "1112-4a54-aa62-e1430ccddad6?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=340957;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/8e9f879c-1112-4a54-aa62-e1430ccddad6?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/8e9f879c-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=340957;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/8e9f879c-1112-4a54-aa62-e1430ccddad6?poll=true\u001b\\\u001b[4;34m1112-4a54-aa62-e1430ccddad6?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -397,30 +434,29 @@
" } ,\n",
" { \n",
" \"role\" : \"user\" ,\n",
- " \"content\" : \"{\\\"pdf_path\\\": \\\"../docs/tech_report/open-creator.pdf\\\", \\\"start_page\\\": 2, \\\"end_page\\\": 5, \n",
+ " \"content\" : \"{\\\"pdf_path\\\": \\\"../tech_report/open-creator.pdf\\\", \\\"start_page\\\": 2, \\\"end_page\\\": 5, \n",
"\\\"output_path\\\": \\\"./data/open-creator2-5.pdf\\\"}\" \n",
" } ,\n",
" { \n",
" \"role\" : \"assistant\" ,\n",
- " \"content\" : \"To extract pages 2 to 5 from the PDF file \\\"../docs/tech_report/open-creator.pdf\\\" and save them as \n",
- "a new file at \\\"./data/open-creator2-5.pdf\\\", I will use the `extract_pdf_section` function.\" ,\n",
+ " \"content\" : null ,\n",
" \"function_call\" : { \n",
" \"name\" : \"run_code\" ,\n",
" \"arguments\" : \"{\\\"language\\\": \\\"python\\\", \\\"code\\\": \\\"{\\\\\\\"language\\\\\\\": \\\\\\\"python\\\\\\\", \\\\\\\"code\\\\\\\": \n",
- "\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\" \n",
+ "\\\\\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\" \n",
" } \n",
" } ,\n",
" { \n",
" \"role\" : \"function\" ,\n",
" \"content\" : \"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"{'language': 'python', 'code': \n",
- "\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\\n\\\", \n",
- "\\\"stderr\\\": \\\"\\\"}\" ,\n",
+ "\\\\\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\", \\\"stderr\\\": \n",
+ "\\\"\\\"}\" ,\n",
" \"name\" : \"run_code\" \n",
" } ,\n",
" { \n",
" \"role\" : \"assistant\" ,\n",
- " \"content\" : \"The pages 2 to 5 have been extracted from the PDF file and saved as a new file at \n",
- "\\\"./data/open-creator2-5.pdf\\\".\" \n",
+ " \"content\" : \"The code has been executed successfully. The PDF section has been extracted and saved to the \n",
+ "specified output path. You can now access the extracted section from the output file.\" \n",
" } \n",
"] \n",
" \n"
@@ -447,30 +483,29 @@
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"user\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"{\\\"pdf_path\\\": \\\"../docs/tech_report/open-creator.pdf\\\", \\\"start_page\\\": 2, \\\"end_page\\\": 5, \u001b[0m\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"{\\\"pdf_path\\\": \\\"../tech_report/open-creator.pdf\\\", \\\"start_page\\\": 2, \\\"end_page\\\": 5, \u001b[0m\n",
"\u001b[32m\\\"output_path\\\": \\\"./data/open-creator2-5.pdf\\\"}\"\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"assistant\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"To extract pages 2 to 5 from the PDF file \\\"../docs/tech_report/open-creator.pdf\\\" and save them as\u001b[0m\n",
- "\u001b[32ma new file at \\\"./data/open-creator2-5.pdf\\\", I will use the `extract_pdf_section` function.\"\u001b[0m,\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[3;35mnull\u001b[0m,\n",
" \u001b[1;34m\"function_call\"\u001b[0m: \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"name\"\u001b[0m: \u001b[32m\"run_code\"\u001b[0m,\n",
" \u001b[1;34m\"arguments\"\u001b[0m: \u001b[32m\"{\\\"language\\\": \\\"python\\\", \\\"code\\\": \\\"{\\\\\\\"language\\\\\\\": \\\\\\\"python\\\\\\\", \\\\\\\"code\\\\\\\": \u001b[0m\n",
- "\u001b[32m\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\"\u001b[0m\n",
+ "\u001b[32m\\\\\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\"\u001b[0m\n",
" \u001b[1m}\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"function\"\u001b[0m,\n",
" \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"{'language': 'python', 'code': \u001b[0m\n",
- "\u001b[32m\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\\n\\\", \u001b[0m\n",
- "\u001b[32m\\\"stderr\\\": \\\"\\\"}\"\u001b[0m,\n",
+ "\u001b[32m\\\\\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\", \\\"stderr\\\": \u001b[0m\n",
+ "\u001b[32m\\\"\\\"}\"\u001b[0m,\n",
" \u001b[1;34m\"name\"\u001b[0m: \u001b[32m\"run_code\"\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"assistant\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"The pages 2 to 5 have been extracted from the PDF file and saved as a new file at \u001b[0m\n",
- "\u001b[32m\\\"./data/open-creator2-5.pdf\\\".\"\u001b[0m\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"The code has been executed successfully. The PDF section has been extracted and saved to the \u001b[0m\n",
+ "\u001b[32mspecified output path. You can now access the extracted section from the output file.\"\u001b[0m\n",
" \u001b[1m}\u001b[0m\n",
"\u001b[1m]\u001b[0m\n"
]
@@ -503,25 +538,59 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
- "{ \n",
- " \"status\" : \"success\" ,\n",
- " \"stdout\" : \"\" ,\n",
- " \"stderr\" : \"\" \n",
- "} \n",
+ "\n",
+ "▌ Installing dependencies \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstalling dependencies\u001b[0m\u001b[35m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ " \n",
+ " pip show PyPDF2 || pip install \"PyPDF2>=1.26.0\" \n",
+ " \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mshow\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mPyPDF2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m||\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minstall\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m\"PyPDF2>=1.26.0\"\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "▌ Install dependencies result: {'status': 'success', 'stdout': 'Name: PyPDF2\\nVersion: 3.0.1\\nSummary: A \n",
+ "▌ pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files\\nHome-page: \n",
+ "▌ \\nAuthor: \\nAuthor-email: Mathieu Fenniak biziqe@mathieu.fenniak.net \\nLicense: \\nLocation: \n",
+ "▌ /Users/gongjunmin/miniconda3/envs/open_creator_online/lib/python3.10/site-packages\\nRequires: \\nRequired-by: \n",
+ "▌ \\n', 'stderr': ''} \n",
" \n"
],
"text/plain": [
- "\u001b[1m{\u001b[0m\n",
- " \u001b[1;34m\"status\"\u001b[0m: \u001b[32m\"success\"\u001b[0m,\n",
- " \u001b[1;34m\"stdout\"\u001b[0m: \u001b[32m\"\"\u001b[0m,\n",
- " \u001b[1;34m\"stderr\"\u001b[0m: \u001b[32m\"\"\u001b[0m\n",
- "\u001b[1m}\u001b[0m\n"
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstall dependencies result: {'status': 'success', 'stdout': 'Name: PyPDF2\\nVersion: 3.0.1\\nSummary: A \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mpure-python PDF library capable of splitting, merging, cropping, and transforming PDF files\\nHome-page: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m\\nAuthor: \\nAuthor-email: Mathieu Fenniak \u001b[0m\u001b]8;id=265924;mailto:biziqe@mathieu.fenniak.net\u001b\\\u001b[4;34mbiziqe@mathieu.fenniak.net\u001b[0m\u001b]8;;\u001b\\\u001b[35m\\nLicense: \\nLocation: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m/Users/gongjunmin/miniconda3/envs/open_creator_online/lib/python3.10/site-packages\\nRequires: \\nRequired-by: \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35m\\n', 'stderr': ''}\u001b[0m\u001b[35m \u001b[0m\n"
]
},
"metadata": {},
@@ -530,7 +599,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "fc4e1f4dd91642c5aef19c8022ffb169",
+ "model_id": "42950aa643bc4ef68fa89b28c76938b0",
"version_major": 2,
"version_minor": 0
},
@@ -564,7 +633,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "8f0cd6de709744018cf3f00241094e0f",
+ "model_id": "89e7e660999d4305bc57ae5575a1c250",
"version_major": 2,
"version_minor": 0
},
@@ -595,6 +664,23 @@
"metadata": {},
"output_type": "display_data"
},
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6d48421f- \n",
+ "b99e-41f3-b794-14d4a553fd1b?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=124701;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6d48421f-b99e-41f3-b794-14d4a553fd1b?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6d48421f-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=124701;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6d48421f-b99e-41f3-b794-14d4a553fd1b?poll=true\u001b\\\u001b[4;34mb99e-41f3-b794-14d4a553fd1b?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -619,7 +705,7 @@
" } ,\n",
" { \n",
" \"role\" : \"user\" ,\n",
- " \"content\" : \"extract 2-5 pages section from pdf path '../docs/tech_report/open-creator.pdf' to \n",
+ " \"content\" : \"extract 2-5 pages section from pdf path '../tech_report/open-creator.pdf' to \n",
"'./data/open-creator2-5.pdf'\" \n",
" } ,\n",
" { \n",
@@ -627,21 +713,19 @@
" \"content\" : null ,\n",
" \"function_call\" : { \n",
" \"name\" : \"run_code\" ,\n",
- " \"arguments\" : \"{\\\"language\\\": \\\"python\\\", \\\"code\\\": \\\"{\\\\\\\"language\\\\\\\": \\\\\\\"python\\\\\\\", \\\\\\\"code\\\\\\\": \n",
- "\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\" \n",
+ " \"arguments\" : \"{\\n\\\"language\\\": \\\"python\\\", \\n\\\"code\\\": \n",
+ "\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\"\\n}\" \n",
" } \n",
" } ,\n",
" { \n",
" \"role\" : \"function\" ,\n",
- " \"content\" : \"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"{'language': 'python', 'code': \n",
- "\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\\n\\\", \n",
- "\\\"stderr\\\": \\\"\\\"}\" ,\n",
+ " \"content\" : \"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"./data/open-creator2-5.pdf\\\", \\\"stderr\\\": \\\"\\\"}\" ,\n",
" \"name\" : \"run_code\" \n",
" } ,\n",
" { \n",
" \"role\" : \"assistant\" ,\n",
- " \"content\" : \"The PDF section has been extracted successfully. You can download the extracted section from \n",
- "[here](sandbox:/Users/gongjunmin/LLM/open_creator_dev/open-creator/examples/data/open-creator2-5.pdf).\" \n",
+ " \"content\" : \"The extraction of pages 2-5 from the PDF at '../tech_report/open-creator.pdf' has been successful. \n",
+ "The extracted section has been saved to './data/open-creator2-5.pdf'.\" \n",
" } \n",
"] \n",
" \n"
@@ -668,7 +752,7 @@
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"user\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"extract 2-5 pages section from pdf path '../docs/tech_report/open-creator.pdf' to \u001b[0m\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"extract 2-5 pages section from pdf path '../tech_report/open-creator.pdf' to \u001b[0m\n",
"\u001b[32m'./data/open-creator2-5.pdf'\"\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
@@ -676,21 +760,19 @@
" \u001b[1;34m\"content\"\u001b[0m: \u001b[3;35mnull\u001b[0m,\n",
" \u001b[1;34m\"function_call\"\u001b[0m: \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"name\"\u001b[0m: \u001b[32m\"run_code\"\u001b[0m,\n",
- " \u001b[1;34m\"arguments\"\u001b[0m: \u001b[32m\"{\\\"language\\\": \\\"python\\\", \\\"code\\\": \\\"{\\\\\\\"language\\\\\\\": \\\\\\\"python\\\\\\\", \\\\\\\"code\\\\\\\": \u001b[0m\n",
- "\u001b[32m\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\"}\"\u001b[0m\n",
+ " \u001b[1;34m\"arguments\"\u001b[0m: \u001b[32m\"{\\n\\\"language\\\": \\\"python\\\", \\n\\\"code\\\": \u001b[0m\n",
+ "\u001b[32m\\\"extract_pdf_section('../tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\"\\n}\"\u001b[0m\n",
" \u001b[1m}\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"function\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"{'language': 'python', 'code': \u001b[0m\n",
- "\u001b[32m\\\\\\\"extract_pdf_section('../docs/tech_report/open-creator.pdf', 2, 5, './data/open-creator2-5.pdf')\\\\\\\"}\\\\n\\\", \u001b[0m\n",
- "\u001b[32m\\\"stderr\\\": \\\"\\\"}\"\u001b[0m,\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"{\\\"status\\\": \\\"success\\\", \\\"stdout\\\": \\\"./data/open-creator2-5.pdf\\\", \\\"stderr\\\": \\\"\\\"}\"\u001b[0m,\n",
" \u001b[1;34m\"name\"\u001b[0m: \u001b[32m\"run_code\"\u001b[0m\n",
" \u001b[1m}\u001b[0m,\n",
" \u001b[1m{\u001b[0m\n",
" \u001b[1;34m\"role\"\u001b[0m: \u001b[32m\"assistant\"\u001b[0m,\n",
- " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"The PDF section has been extracted successfully. You can download the extracted section from \u001b[0m\n",
- "\u001b[32m[here](sandbox:/Users/gongjunmin/LLM/open_creator_dev/open-creator/examples/data/open-creator2-5.pdf).\"\u001b[0m\n",
+ " \u001b[1;34m\"content\"\u001b[0m: \u001b[32m\"The extraction of pages 2-5 from the PDF at '../tech_report/open-creator.pdf' has been successful. \u001b[0m\n",
+ "\u001b[32mThe extracted section has been saved to './data/open-creator2-5.pdf'.\"\u001b[0m\n",
" \u001b[1m}\u001b[0m\n",
"\u001b[1m]\u001b[0m\n"
]
@@ -703,7 +785,7 @@
"skills = search(\"pdf extract section\")\n",
"if skills:\n",
" skill = skills[0]\n",
- " request = \"extract 2-5 pages section from pdf path '../docs/tech_report/open-creator.pdf' to './data/open-creator2-5.pdf'\"\n",
+ " request = \"extract 2-5 pages section from pdf path '../tech_report/open-creator.pdf' to './data/open-creator2-5.pdf'\"\n",
" messages = skill.run(request)\n",
" print(messages, print_type=\"json\")"
]
diff --git a/docs/examples/05_skills_test.ipynb b/docs/examples/05_skills_test.ipynb
index 8429294..aae434c 100644
--- a/docs/examples/05_skills_test.ipynb
+++ b/docs/examples/05_skills_test.ipynb
@@ -29,11 +29,12 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m\n",
- "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n"
+ "\u001b[1m\u001b[36mask_run_code_confirm\u001b[m\u001b[m \u001b[1m\u001b[36mextract_pdf_section\u001b[m\u001b[m \u001b[1m\u001b[36msolve_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcount_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36mextract_section_from_pdf\u001b[m\u001b[m \u001b[1m\u001b[36msolve_game_of_24\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate\u001b[m\u001b[m \u001b[1m\u001b[36mfilter_prime_numbers\u001b[m\u001b[m \u001b[1m\u001b[36msolve_quadratic_equation\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_api\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24\u001b[m\u001b[m \u001b[1m\u001b[36msolve_random_maze\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mcreate_scatter_plot\u001b[m\u001b[m \u001b[1m\u001b[36mgame_of_24_solver\u001b[m\u001b[m\n",
+ "\u001b[1m\u001b[36mdisplay_markdown_message\u001b[m\u001b[m \u001b[1m\u001b[36mlist_python_functions\u001b[m\u001b[m\n"
]
}
],
@@ -113,13 +114,13 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 5,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "b6ada793fc744a30934de8526909bab7",
+ "model_id": "747b76bb1aba4f8c930caeb9a40797de",
"version_major": 2,
"version_minor": 0
},
@@ -133,7 +134,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "b612d423a0b149aa950a0a0d0dd70aa6",
+ "model_id": "3c838524ec9042ce85d51eca088af2f4",
"version_major": 2,
"version_minor": 0
},
@@ -167,7 +168,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "25fcc1f76d314cce9a7ad35776b64989",
+ "model_id": "edfeec012c7d42d9b2d69cf9252f2fba",
"version_major": 2,
"version_minor": 0
},
@@ -197,6 +198,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/2bff98db- \n",
+ "7d63-4608-a487-ea7ec12f05a2?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=883877;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/2bff98db-7d63-4608-a487-ea7ec12f05a2?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/2bff98db-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=883877;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/2bff98db-7d63-4608-a487-ea7ec12f05a2?poll=true\u001b\\\u001b[4;34m7d63-4608-a487-ea7ec12f05a2?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -206,7 +224,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -217,46 +235,46 @@
"\n",
" Test Case 0 \n",
"\n",
- " • Test Input: (2, 2) \n",
- " • Run Command: filter_prime_numbers(2, 2) \n",
- " • Expected Result: 1 \n",
- " • Actual Result: 1 \n",
+ " • Test Input: (2, 10) \n",
+ " • Run Command: filter_prime_numbers(2, 10) \n",
+ " • Expected Result: 4 \n",
+ " • Actual Result: 4 \n",
" • Is Passed: Yes \n",
"\n",
"─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── \n",
" Test Case 1 \n",
"\n",
- " • Test Input: (2, 3) \n",
- " • Run Command: filter_prime_numbers(2, 3) \n",
- " • Expected Result: 2 \n",
- " • Actual Result: 2 \n",
+ " • Test Input: (11, 20) \n",
+ " • Run Command: filter_prime_numbers(11, 20) \n",
+ " • Expected Result: 4 \n",
+ " • Actual Result: 4 \n",
" • Is Passed: Yes \n",
"\n",
"─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── \n",
" Test Case 2 \n",
"\n",
- " • Test Input: (2, 10) \n",
- " • Run Command: filter_prime_numbers(2, 10) \n",
- " • Expected Result: 4 \n",
- " • Actual Result: 4 \n",
+ " • Test Input: (21, 30) \n",
+ " • Run Command: filter_prime_numbers(21, 30) \n",
+ " • Expected Result: 2 \n",
+ " • Actual Result: 2 \n",
" • Is Passed: Yes \n",
"\n",
"─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── \n",
" Test Case 3 \n",
"\n",
- " • Test Input: (2, 20) \n",
- " • Run Command: filter_prime_numbers(2, 20) \n",
- " • Expected Result: 8 \n",
- " • Actual Result: 8 \n",
+ " • Test Input: (31, 40) \n",
+ " • Run Command: filter_prime_numbers(31, 40) \n",
+ " • Expected Result: 2 \n",
+ " • Actual Result: 2 \n",
" • Is Passed: Yes \n",
"\n",
"─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── \n",
" Test Case 4 \n",
"\n",
- " • Test Input: (2, 100) \n",
- " • Run Command: filter_prime_numbers(2, 100) \n",
- " • Expected Result: 25 \n",
- " • Actual Result: 25 \n",
+ " • Test Input: (41, 50) \n",
+ " • Run Command: filter_prime_numbers(41, 50) \n",
+ " • Expected Result: 3 \n",
+ " • Actual Result: 3 \n",
" • Is Passed: Yes \n",
"\n",
"─────────────────────────────────────────────────────────────────────────────────────────────────────────────────── \n",
@@ -268,46 +286,46 @@
"\n",
" \u001b[1mTest Case 0\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 2) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 2) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 1 \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 1 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 10) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 10) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 4 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 4 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mIs Passed:\u001b[0m Yes \n",
"\n",
"\u001b[33m───────────────────────────────────────────────────────────────────────────────────────────────────────────────────\u001b[0m\n",
" \u001b[1mTest Case 1\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 3) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 3) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 2 \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 2 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (11, 20) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(11, 20) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 4 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 4 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mIs Passed:\u001b[0m Yes \n",
"\n",
"\u001b[33m───────────────────────────────────────────────────────────────────────────────────────────────────────────────────\u001b[0m\n",
" \u001b[1mTest Case 2\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 10) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 10) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 4 \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 4 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (21, 30) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(21, 30) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 2 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 2 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mIs Passed:\u001b[0m Yes \n",
"\n",
"\u001b[33m───────────────────────────────────────────────────────────────────────────────────────────────────────────────────\u001b[0m\n",
" \u001b[1mTest Case 3\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 20) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 20) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 8 \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 8 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (31, 40) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(31, 40) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 2 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 2 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mIs Passed:\u001b[0m Yes \n",
"\n",
"\u001b[33m───────────────────────────────────────────────────────────────────────────────────────────────────────────────────\u001b[0m\n",
" \u001b[1mTest Case 4\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (2, 100) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(2, 100) \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 25 \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 25 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mTest Input:\u001b[0m (41, 50) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mRun Command:\u001b[0m filter_prime_numbers(41, 50) \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mExpected Result:\u001b[0m 3 \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mActual Result:\u001b[0m 3 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mIs Passed:\u001b[0m Yes \n",
"\n",
"\u001b[33m───────────────────────────────────────────────────────────────────────────────────────────────────────────────────\u001b[0m\n"
@@ -323,7 +341,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -345,7 +363,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
diff --git a/docs/examples/06_skills_refactor.ipynb b/docs/examples/06_skills_refactor.ipynb
index 7c18ed8..44f246b 100644
--- a/docs/examples/06_skills_refactor.ipynb
+++ b/docs/examples/06_skills_refactor.ipynb
@@ -180,7 +180,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "9f7b8af45abf4ef598efa08de48ddc01",
+ "model_id": "8f80adfafef544098b2f5312b9b605b6",
"version_major": 2,
"version_minor": 0
},
@@ -210,6 +210,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/3753f345- \n",
+ "ceed-4928-9099-327d12fa31c8?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=323673;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/3753f345-ceed-4928-9099-327d12fa31c8?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/3753f345-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=323673;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/3753f345-ceed-4928-9099-327d12fa31c8?poll=true\u001b\\\u001b[4;34mceed-4928-9099-327d12fa31c8?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -295,7 +312,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "58dae241e0e5474bb2e974701ddc6021",
+ "model_id": "2465596d3dff439d9fce6d4ab1b46e74",
"version_major": 2,
"version_minor": 0
},
@@ -325,6 +342,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/9b5d62a4- \n",
+ "109d-4d56-a996-a9cbf928c840?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=180123;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/9b5d62a4-109d-4d56-a996-a9cbf928c840?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/9b5d62a4-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=180123;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/9b5d62a4-109d-4d56-a996-a9cbf928c840?poll=true\u001b\\\u001b[4;34m109d-4d56-a996-a9cbf928c840?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -350,7 +384,7 @@
" • Usage : \n",
"\n",
" \n",
- " cleaned_data, null_count, duplicate_count = data_cleaning_with_stats(input_data, remove_duplicates = True ) \n",
+ " cleaned_data, stats = data_cleaning_with_stats(input_data, remove_duplicates = True ) \n",
" \n",
"\n",
" • Parameters : \n",
@@ -362,8 +396,7 @@
" • Returns : \n",
" • cleaned_data (array): The cleaned dataset with string 'null'/'NaN' values converted to actual nulls, and \n",
" nulls and duplicates removed based on specified parameters. \n",
- " • null_count (integer): The number of null values removed from the dataset. \n",
- " • duplicate_count (integer): The number of duplicate values removed from the dataset. \n",
+ " • stats (dictionary): A dictionary containing the count of null and duplicate values removed from the dataset. \n",
" \n"
],
"text/plain": [
@@ -378,7 +411,7 @@
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcleaned_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mnull_count\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mduplicate_count\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_cleaning_with_stats\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mremove_duplicates\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mTrue\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcleaned_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mstats\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_cleaning_with_stats\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mremove_duplicates\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mTrue\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
@@ -390,8 +423,7 @@
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mcleaned_data\u001b[0m (array): The cleaned dataset with string 'null'/'NaN' values converted to actual nulls, and \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0mnulls and duplicates removed based on specified parameters. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mnull_count\u001b[0m (integer): The number of null values removed from the dataset. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mduplicate_count\u001b[0m (integer): The number of duplicate values removed from the dataset. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mstats\u001b[0m (dictionary): A dictionary containing the count of null and duplicate values removed from the dataset. \n"
]
},
"metadata": {},
@@ -410,7 +442,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "58332991cad240af85d751bd13a5bc0c",
+ "model_id": "787dc04257b046fe96b96b80d630360f",
"version_major": 2,
"version_minor": 0
},
@@ -440,6 +472,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a3c50ed0- \n",
+ "21e1-4d5d-8324-9a247f29c2f7?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=438953;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a3c50ed0-21e1-4d5d-8324-9a247f29c2f7?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a3c50ed0-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=438953;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a3c50ed0-21e1-4d5d-8324-9a247f29c2f7?poll=true\u001b\\\u001b[4;34m21e1-4d5d-8324-9a247f29c2f7?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -812,7 +861,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "6a0e86dd85914422a1d9e70baf3e4119",
+ "model_id": "482c2f1f5ef64aec803292ea4137421a",
"version_major": 2,
"version_minor": 0
},
@@ -842,6 +891,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/02f47717- \n",
+ "dc2e-4908-af17-7403f9732e2d?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=890260;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/02f47717-dc2e-4908-af17-7403f9732e2d?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/02f47717-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=890260;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/02f47717-dc2e-4908-af17-7403f9732e2d?poll=true\u001b\\\u001b[4;34mdc2e-4908-af17-7403f9732e2d?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -859,42 +925,42 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : data_cleaning_and_visualization \n",
+ " • Name : clean_and_visualize_data \n",
" • Description : This skill is responsible for cleaning the input data by removing empty values and then visualizing\n",
- " it by generating a bar chart. It provides a simple way to preprocess and understand data distribution and \n",
- " patterns. \n",
+ " it by generating a bar chart. It provides a simple way to preprocess and understand data. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " data_cleaning_and_visualization([ 1 , 2 , None , 4 , 5 , None ]) \n",
+ " clean_and_visualize_data(input_data) \n",
" \n",
"\n",
" • Parameters : \n",
" • data (array): The input data that needs cleaning and visualization. It should be a list of values. \n",
" • Required: True \n",
" • Returns : \n",
+ " • cleaned_data (array): The cleaned data after removing empty values. \n",
" \n"
],
"text/plain": [
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: data_cleaning_and_visualization \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: clean_and_visualize_data \n",
"\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for cleaning the input data by removing empty values and then visualizing\n",
- "\u001b[1;33m \u001b[0mit by generating a bar chart. It provides a simple way to preprocess and understand data distribution and \n",
- "\u001b[1;33m \u001b[0mpatterns. \n",
+ "\u001b[1;33m \u001b[0mit by generating a bar chart. It provides a simple way to preprocess and understand data. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_cleaning_and_visualization\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m1\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mNone\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m4\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m5\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mNone\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mclean_and_visualize_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mdata\u001b[0m (array): The input data that needs cleaning and visualization. It should be a list of values. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n"
+ "\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mcleaned_data\u001b[0m (array): The cleaned data after removing empty values. \n"
]
},
"metadata": {},
@@ -913,7 +979,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "985e68fa93734802a01ad6ebab432db7",
+ "model_id": "41f38ee4fe9244149b127558f8ed2992",
"version_major": 2,
"version_minor": 0
},
@@ -943,6 +1009,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6c71660e- \n",
+ "f9fc-441c-a399-589ba14ef6d7?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=569225;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6c71660e-f9fc-441c-a399-589ba14ef6d7?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6c71660e-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=569225;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/6c71660e-f9fc-441c-a399-589ba14ef6d7?poll=true\u001b\\\u001b[4;34mf9fc-441c-a399-589ba14ef6d7?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -960,15 +1043,14 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : data_cleaning_visualization_statistics \n",
- " • Description : This skill is responsible for cleaning the input data by removing empty values, visualizing the \n",
- " data by generating a bar chart, and calculating the average value of the data. It provides a comprehensive way \n",
- " to preprocess, understand, and analyze data. \n",
+ " • Name : data_analysis \n",
+ " • Description : This skill is responsible for cleaning the input data, visualizing it by generating a bar chart, \n",
+ " and calculating its average. It provides a comprehensive way to preprocess, understand, and analyze data. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " cleaned_data, average = data_cleaning_visualization_statistics(input_data) \n",
+ " data_analysis(input_data) \n",
" \n",
"\n",
" • Parameters : \n",
@@ -977,22 +1059,22 @@
" • Required: True \n",
" • Returns : \n",
" • cleaned_data (array): The cleaned data after removing empty values. \n",
- " • the_average_value_of (float): The average value of the cleaned data. \n",
+ " • visualization (object): The visualization of the data. \n",
+ " • average (float): The average value of the input data. \n",
" \n"
],
"text/plain": [
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: data_cleaning_visualization_statistics \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for cleaning the input data by removing empty values, visualizing the \n",
- "\u001b[1;33m \u001b[0mdata by generating a bar chart, and calculating the average value of the data. It provides a comprehensive way \n",
- "\u001b[1;33m \u001b[0mto preprocess, understand, and analyze data. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: data_analysis \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for cleaning the input data, visualizing it by generating a bar chart, \n",
+ "\u001b[1;33m \u001b[0mand calculating its average. It provides a comprehensive way to preprocess, understand, and analyze data. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcleaned_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34maverage\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_cleaning_visualization_statistics\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_analysis\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
@@ -1001,7 +1083,8 @@
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mcleaned_data\u001b[0m (array): The cleaned data after removing empty values. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mthe_average_value_of\u001b[0m (float): The average value of the cleaned data. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mvisualization\u001b[0m (object): The visualization of the data. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1maverage\u001b[0m (float): The average value of the input data. \n"
]
},
"metadata": {},
@@ -1020,7 +1103,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "92d9faa66589490da6353c3930db7e89",
+ "model_id": "b5b2aeed12e648d8bf9151fe5c6bf67f",
"version_major": 2,
"version_minor": 0
},
@@ -1050,6 +1133,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/4e02c38f- \n",
+ "ebda-4e18-949a-bc8667454564?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=469878;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/4e02c38f-ebda-4e18-949a-bc8667454564?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/4e02c38f-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=469878;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/4e02c38f-ebda-4e18-949a-bc8667454564?poll=true\u001b\\\u001b[4;34mebda-4e18-949a-bc8667454564?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -1067,15 +1167,14 @@
"\n",
" Skill Details: \n",
"\n",
- " • Name : data_cleaning_visualization_statistics \n",
- " • Description : This skill is responsible for cleaning the input data by removing empty values, visualizing the \n",
- " data by generating a bar chart, and calculating the average value of the data. It provides a comprehensive way \n",
- " to preprocess, understand, and analyze data. \n",
+ " • Name : data_analysis \n",
+ " • Description : This skill is responsible for cleaning the input data, visualizing it by generating a bar chart, \n",
+ " and calculating its average. It provides a comprehensive way to preprocess, understand, and analyze data. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " cleaned_data, average = data_cleaning_visualization_statistics(input_data) \n",
+ " data_analysis(input_data) \n",
" \n",
"\n",
" • Parameters : \n",
@@ -1084,22 +1183,22 @@
" • Required: True \n",
" • Returns : \n",
" • cleaned_data (array): The cleaned data after removing empty values. \n",
- " • the_average_value_of (float): The average value of the cleaned data. \n",
+ " • visualization (object): The visualization of the data. \n",
+ " • average (float): The average value of the input data. \n",
" \n"
],
"text/plain": [
"\n",
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
- "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: data_cleaning_visualization_statistics \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for cleaning the input data by removing empty values, visualizing the \n",
- "\u001b[1;33m \u001b[0mdata by generating a bar chart, and calculating the average value of the data. It provides a comprehensive way \n",
- "\u001b[1;33m \u001b[0mto preprocess, understand, and analyze data. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: data_analysis \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for cleaning the input data, visualizing it by generating a bar chart, \n",
+ "\u001b[1;33m \u001b[0mand calculating its average. It provides a comprehensive way to preprocess, understand, and analyze data. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcleaned_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34maverage\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_cleaning_visualization_statistics\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mdata_analysis\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
@@ -1108,7 +1207,8 @@
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mcleaned_data\u001b[0m (array): The cleaned data after removing empty values. \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mthe_average_value_of\u001b[0m (float): The average value of the cleaned data. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mvisualization\u001b[0m (object): The visualization of the data. \n",
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1maverage\u001b[0m (float): The average value of the input data. \n"
]
},
"metadata": {},
@@ -1176,7 +1276,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "5dcc82dbf2f44092920cd322f1ddb061",
+ "model_id": "2d5d50adfa434150a37aa177f2a14c25",
"version_major": 2,
"version_minor": 0
},
@@ -1206,6 +1306,23 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b0de9ace- \n",
+ "2d39-4f25-80c8-19b8c36ec3bc?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=320681;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b0de9ace-2d39-4f25-80c8-19b8c36ec3bc?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b0de9ace-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=320681;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/b0de9ace-2d39-4f25-80c8-19b8c36ec3bc?poll=true\u001b\\\u001b[4;34m2d39-4f25-80c8-19b8c36ec3bc?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -1244,8 +1361,8 @@
" Skill Details: \n",
"\n",
" • Name : visualize_data \n",
- " • Description : This skill is responsible for visualizing the input data using a bar chart. It provides a \n",
- " comprehensive overview of the dataset. \n",
+ " • Description : This skill is responsible for visualizing the input data using a bar chart. It provides a visual \n",
+ " overview of the dataset. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
@@ -1264,8 +1381,8 @@
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: visualize_data \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for visualizing the input data using a bar chart. It provides a \n",
- "\u001b[1;33m \u001b[0mcomprehensive overview of the dataset. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for visualizing the input data using a bar chart. It provides a visual \n",
+ "\u001b[1;33m \u001b[0moverview of the dataset. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
@@ -1289,19 +1406,20 @@
" Skill Details: \n",
"\n",
" • Name : calculate_average \n",
- " • Description : This skill is responsible for calculating the average of the input data. \n",
+ " • Description : This skill is responsible for calculating the average of the input data. It provides a statistical \n",
+ " analysis of the dataset. \n",
" • Version : 1.0.0 \n",
" • Usage : \n",
"\n",
" \n",
- " calculate_average(input_data) \n",
+ " average = calculate_average(input_data) \n",
" \n",
"\n",
" • Parameters : \n",
" • input_data (any): The input dataset to be analyzed. \n",
" • Required: True \n",
" • Returns : \n",
- " • this_function_return (float): This function returns the average of the input data. \n",
+ " • average (float): The average of the input data. \n",
" \n"
],
"text/plain": [
@@ -1309,19 +1427,20 @@
" \u001b[1;4mSkill Details:\u001b[0m \n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mName\u001b[0m: calculate_average \n",
- "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for calculating the average of the input data. \n",
+ "\u001b[1;33m • \u001b[0m\u001b[1mDescription\u001b[0m: This skill is responsible for calculating the average of the input data. It provides a statistical \n",
+ "\u001b[1;33m \u001b[0manalysis of the dataset. \n",
"\u001b[1;33m • \u001b[0m\u001b[1mVersion\u001b[0m: 1.0.0 \n",
"\u001b[1;33m • \u001b[0m\u001b[1mUsage\u001b[0m: \n",
"\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcalculate_average\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34maverage\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mcalculate_average\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minput_data\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\n",
"\n",
"\u001b[1;33m • \u001b[0m\u001b[1mParameters\u001b[0m: \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1minput_data\u001b[0m (any): The input dataset to be analyzed. \n",
"\u001b[1;33m \u001b[0m\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0mRequired: True \n",
"\u001b[1;33m • \u001b[0m\u001b[1mReturns\u001b[0m: \n",
- "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1mthis_function_return\u001b[0m (float): This function returns the average of the input data. \n"
+ "\u001b[1;33m \u001b[0m\u001b[1;33m • \u001b[0m\u001b[1maverage\u001b[0m (float): The average of the input data. \n"
]
},
"metadata": {},
diff --git a/docs/examples/07_skills_auto_optimize.ipynb b/docs/examples/07_skills_auto_optimize.ipynb
index 075140c..92a3103 100644
--- a/docs/examples/07_skills_auto_optimize.ipynb
+++ b/docs/examples/07_skills_auto_optimize.ipynb
@@ -196,28 +196,106 @@
{
"data": {
"text/html": [
- "{ \n",
- " \"status\" : \"success\" ,\n",
- " \"stdout\" : \"\" ,\n",
- " \"stderr\" : \"\\u001b[33mWARNING: Package(s) not found: itertools\\u001b[0m\\u001b[33m\\n\" \n",
- "} \n",
+ "\n",
+ "▌ Installing dependencies \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstalling dependencies\u001b[0m\u001b[35m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ " \n",
+ " pip show itertools || pip install \"itertools\" \n",
+ " \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mshow\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mitertools\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m||\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpip\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34minstall\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m\"itertools\"\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "▌ Install dependencies result: {'status': 'success', 'stdout': '', 'stderr': '\\x1b[33mWARNING: Package(s) not \n",
+ "▌ found: itertools\\x1b[0m\\x1b[33m\\n\\x1b[0m\\x1b[31mERROR: Could not find a version that satisfies the requirement \n",
+ "▌ itertools (from versions: none)\\x1b[0m\\x1b[31m\\n'} \n",
" \n"
],
"text/plain": [
- "\u001b[1m{\u001b[0m\n",
- " \u001b[1;34m\"status\"\u001b[0m: \u001b[32m\"success\"\u001b[0m,\n",
- " \u001b[1;34m\"stdout\"\u001b[0m: \u001b[32m\"\"\u001b[0m,\n",
- " \u001b[1;34m\"stderr\"\u001b[0m: \u001b[32m\"\\u001b[33mWARNING: Package(s) not found: itertools\\u001b[0m\\u001b[33m\\n\"\u001b[0m\n",
- "\u001b[1m}\u001b[0m\n"
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mInstall dependencies result: {'status': 'success', 'stdout': '', 'stderr': '\\x1b[33mWARNING: Package(s) not \u001b[0m\u001b[35m \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mfound: itertools\\x1b[0m\\x1b[33m\\n\\x1b[0m\\x1b[31mERROR: Could not find a version that satisfies the requirement \u001b[0m\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35mitertools (from versions: none)\\x1b[0m\\x1b[31m\\n'}\u001b[0m\u001b[35m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "c772f6a3b2e74b279099a57efef602f6",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "725ae410049c4c9c857d330e4e385640",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
]
},
"metadata": {},
"output_type": "display_data"
},
+ {
+ "data": {
+ "text/html": [
+ " \n"
+ ],
+ "text/plain": []
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ " \n"
+ ],
+ "text/plain": []
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "02dba3e12d8446fe9435a3213dff13d2",
+ "model_id": "a2504e3fc79b43e688dce661b35dd81b",
"version_major": 2,
"version_minor": 0
},
@@ -231,7 +309,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "7ef28624ec87490d934ca62661618de1",
+ "model_id": "0924ad029b6247b3a47cb76c584c16dc",
"version_major": 2,
"version_minor": 0
},
@@ -265,7 +343,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "de9bd6adeea544949701f07d6454a9df",
+ "model_id": "e999daf7a6e1492e8c546b899b741426",
"version_major": 2,
"version_minor": 0
},
@@ -314,7 +392,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "62ed1213deb543d8a0ff18b06e8396ce",
+ "model_id": "44cef59cfedd4730818910c8218ddccb",
"version_major": 2,
"version_minor": 0
},
@@ -344,6 +422,40 @@
},
"metadata": {},
"output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/61f07104- \n",
+ "29a1-420c-b831-cbda8df3668e?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=734838;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/61f07104-29a1-420c-b831-cbda8df3668e?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/61f07104-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=734838;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/61f07104-29a1-420c-b831-cbda8df3668e?poll=true\u001b\\\u001b[4;34m29a1-420c-b831-cbda8df3668e?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/7f305c8e- \n",
+ "9bb6-45fc-9035-5af496184866?poll=true \n",
+ " \n"
+ ],
+ "text/plain": [
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=99153;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/7f305c8e-9bb6-45fc-9035-5af496184866?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/7f305c8e-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=99153;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/7f305c8e-9bb6-45fc-9035-5af496184866?poll=true\u001b\\\u001b[4;34m9bb6-45fc-9035-5af496184866?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
}
],
"source": [
@@ -425,9 +537,10 @@
" \n",
" def solve_game_of_24 (numbers): \n",
" for permutation in permutations(numbers): \n",
+ " a, b, c, d = permutation \n",
+ " # Try all possible combinations of arithmetic operations \n",
" for ops in product([ '+' , '-' , '*' , '/' ], repeat = 3 ): \n",
- " expression = f'(({ permutation[ 0 ] } { ops[ 0 ] } { permutation[ 1 ] }) { ops[ 1 ] } { permutation[ 2 ] }) { ops[ 2 ] } \n",
- " { permutation[ 3 ] }' \n",
+ " expression = f'(({ a } { ops[ 0 ] } { b }) { ops[ 1 ] } { c }) { ops[ 2 ] } { d }' \n",
" try : \n",
" result = eval(expression) \n",
" if result == 24 : \n",
@@ -445,9 +558,10 @@
"\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mdef\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;166;226;46;48;2;39;40;34msolve_game_of_24\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mnumbers\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m:\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mfor\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34min\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutations\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mnumbers\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m:\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34ma\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mb\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mc\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34md\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;149;144;119;48;2;39;40;34m# Try all possible combinations of arithmetic operations\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mfor\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34min\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mproduct\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m+\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m-\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m*\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m/\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m,\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mrepeat\u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m3\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m:\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mexpression\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34mf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m((\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m0\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m0\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m1\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m) \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m1\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m) \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
- "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mpermutation\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m3\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
+ "\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mexpression\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34mf\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m((\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34ma\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m0\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mb\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m) \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m1\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mc\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m) \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mops\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m[\u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m2\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m]\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m \u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m{\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34md\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m}\u001b[0m\u001b[38;2;230;219;116;48;2;39;40;34m'\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mtry\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m:\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mresult\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m=\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34meval\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m(\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mexpression\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m)\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
"\u001b[48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;102;217;239;48;2;39;40;34mif\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34mresult\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;255;70;137;48;2;39;40;34m==\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m \u001b[0m\u001b[38;2;174;129;255;48;2;39;40;34m24\u001b[0m\u001b[38;2;248;248;242;48;2;39;40;34m:\u001b[0m\u001b[48;2;39;40;34m \u001b[0m\u001b[48;2;39;40;34m \u001b[0m\n",
diff --git a/docs/examples/08_creator_agent.ipynb b/docs/examples/08_creator_agent.ipynb
index 71b1f35..9ef93d5 100644
--- a/docs/examples/08_creator_agent.ipynb
+++ b/docs/examples/08_creator_agent.ipynb
@@ -16,7 +16,8 @@
"metadata": {},
"outputs": [],
"source": [
- "from creator.agents.creator_agent import open_creator_agent"
+ "from creator.agents import create_creator_agent\n",
+ "from creator import config"
]
},
{
@@ -24,6 +25,15 @@
"execution_count": 2,
"metadata": {},
"outputs": [],
+ "source": [
+ "open_creator_agent = create_creator_agent(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
"source": [
"messages = [\n",
" {\n",
@@ -35,13 +45,13 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "6e7dca82c2dd437d943dba009b57d9a9",
+ "model_id": "9f536b3686f94b1dab2dfd584f4851cb",
"version_major": 2,
"version_minor": 0
},
@@ -79,7 +89,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -93,13 +103,13 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "ba05f0dbad2748379194af32d2818c1d",
+ "model_id": "a59a5d9a43ad4ab59827fbca51396623",
"version_major": 2,
"version_minor": 0
},
@@ -113,7 +123,21 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "d66203c2d2104397837cc61c9763f2dc",
+ "model_id": "1e4ddee0f8614a0d8d5829b3c08261b0",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "031a74ab9b284404814fb41b31920d5a",
"version_major": 2,
"version_minor": 0
},
@@ -147,7 +171,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "c6d74112eaa7438ea8dad9bfcfab4764",
+ "model_id": "8c67158044da4b96aa45366119e7ffb1",
"version_major": 2,
"version_minor": 0
},
@@ -161,7 +185,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "28463b69814b46d5bd286b47fe3898cf",
+ "model_id": "185103ec05594133a317a8f4fe1768fa",
"version_major": 2,
"version_minor": 0
},
@@ -195,7 +219,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "abf238be7b554d81bd845596e39e33d1",
+ "model_id": "65425fd5707c40159b5bd51d4fbd412a",
"version_major": 2,
"version_minor": 0
},
@@ -209,7 +233,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "536ad56670e545b785b3aab24246c8dd",
+ "model_id": "9d680c3af9c54b849aaf07b0a9a9a95b",
"version_major": 2,
"version_minor": 0
},
@@ -220,6 +244,16 @@
"metadata": {},
"output_type": "display_data"
},
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYUAAAGFCAYAAAASI+9IAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguMCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy81sbWrAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAJ/0lEQVR4nO3db4hldR3H8e9ZZ3N3zXSXzV20VUv6a5hUK5tgbUUUEmtSSUWYBdGjHoc9UXwQZBARJD0IinqUPqgINoiCRDG1IiPLLdTNajVdtd0YzNxtTw/u+pkZ0+XOzL3nzjnzesHA3Nkzc38cztn3/O6c87tN27ZtAUBVbZj1AABYO0QBgBAFAEIUAAhRACBEAYAQBQBCFACIuXE3bJppDmOa+jlw9xQyrqa/J2dvDfn8NFMAIEQBgBAFAEIUAAhRACBEAYAQBQBCFACIsW9em6gLf1F18a1V591bddYjVZuOVp04req5M6uOnl/11OuqDu2u+usVVY++rfp6AxpA3zTjvh3nRG6a3P5A1VWfqdp19/jf88TFVbfcv4on7WdQhnzHJJPljubuDfn87G6msPO3VZ96T9XmIwtfm99R9ejbq+Z3VrVN1Zanqs65v2rbg1XNyZ2+6ciL/TQApqCbKGw4VvXhTywE4V/nVu3/RtWf9lW1L/JnjS2Hq97wo6pLvle19eFOhghAVy8fvem2qmuuGX1+bHPVN+8b/d1gHFsfqvrnRat48n5OrYc8PWWyvHzUvSGfn91cfXTRTxc+P3DV+EGoWmUQAFiObqLwikMLnx+9oJOnBGD5uonC4r8bnH2wk6cEYPm6icLTi14Cev2Pq175x06eFoDl6SYKBz608PnGf1d9+p1Vl3+l6sxDL/ktAHSvu5vXPr5vNEtYrG1O3r182eh+hb/vqXrsrVUnJnmlbD+vzBjy1Q1MlquPujfk87O7KLxsvurqa6ve+INTb/fcGVV//mDVrz9X9Zd3r/JJq0SBoROF7g35/Ox2mYuqqtfur9rztapX/7xqw4lTb3tgX9UPv1P17NZVPGE/T5ghH3RMlih0b8jnZ/dReN6Ww6OF8XbdVXXub0bLYJw+///bPXFx1bd+OVosb0X6ecIM+aBjskShe0M+P2cXhRfacLzqVXdXXfrtqrd8t+q04wv/ds/nq37y9RX+4H6eMEM+6JgsUejekM/PtROFxXbdVfXJ9y/MHI5tqvry01XHN6/gh/XzhBnyQcdkiUL3hnx+rs032fnb5VV3fHHh8cZnq8771ezGA7BOrM0oVFU9+IGlj1/+2GzGAbCOrN0oHN+09PF/T5/NOADWkbUbhZ2/W/r46PmzGQfAOtJNFN7x1arX/Gz87Tc+U3XFlxYez++o+selEx8WAEt1E4Xz7q269n1Vn91dtfuWqjMeP8W291Rd966qHb9f+NqdX3jxd2gDYKK6uST1Ix+revP3l37t6YtGN6Y9s3201tEZh6t23le19QVLaz9wddVtt65iPaR+Xq435EvemCyXpHZvyOdnN+/R/PB7R7OFxf/hb3to9PFSjm2uuuP6qjuvn/ACeQC8lG5vXjvn/qoLbh/dubz9QNXZj1SdfrSqaav+c2bV/M6qxy8ZLYT3h4+ucs2j5/Xzt6gh/ybCZJkpdG/I5+favKN5ovo58CEfdEyWKHRvyOenv94CEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAsYzlR62v0qU+r2fT13Vh+rrP+7q/+2zIx4qZAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCADE37oZt205zHLxA0zSzHgKwDpkpABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQMzNegCwVrRtO+shrEjTNLMewor0dX8PnZkCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAMTfuhk3TTHMcU9O27ayHsO44VrrV13GzNpkpABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEHPjbti27TTHMTVN08x6COtOX4+VvnKMM0lmCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAhCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAMTcrAcwbW3bznoI607TNLMeAkzVkP9fMVMAIEQBgBAFAEIUAAhRACBEAYAQBQBCFAAIUQAgRAGAEAUAQhQACFEAIEQBgBAFAEIUAAhRACBEAYAQBQBCFAAIUQAgRAGAEAUAQhQACFEAIEQBgBAFAEIUAAhRACBEAYAQBQBCFAAIUQAgRAGAEAUAQhQACFEAIEQBgBAFAEIUAAhRACBEAYAQBQBCFAAIUQAgRAGAmBt3w6ZppjmOqWnbdtZDWJG+7u8q+xz6zEwBgBAFAEIUAAhRACBEAYAQBQBCFAAIUQDo0t69VU2zso/rrpv68EQBgBj7jmYAJmz37qrLLht/+z17pjeWk0QBYFauvLLqxhtnPYolvHwEQIgCACEKAIQoABCiAECIAgDhklSAWdm/v+rJJ8ff/qabqrZtm954qqppx3zvxL6+VaG3huyefc7QreoY37u36vbbV/a9Bw9WXXjhyp97DF4+AiBEAWBWbrihqm3H/5jyLKFKFABYRBQACFEAIEQBgBAFAEIUAAhRACBEAYCw9hHArCx37aMtW6puvnl64ylrH61Zfd3fVfY5wzeztY/OOqvqyJGVP/cYvHwEQJgprFF93d9V9jnD19djfBxmCgCEKAAQogBAiAIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgAx9oJ4AAy
fmQIAIQoAhCgAEKIAQIgCACEKAIQoABCiAECIAgDxP9+DpGehN8qKAAAAAElFTkSuQmCC",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -243,7 +277,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "4ccc78982c0d4cbd906b8d9a0ace7306",
+ "model_id": "262ed48cde63468ca3f695f67131d00d",
"version_major": 2,
"version_minor": 0
},
@@ -277,7 +311,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "5a2e9a34523b4d57ada811f712f751d8",
+ "model_id": "4b74f781084e4861b0d3cffa933762da",
"version_major": 2,
"version_minor": 0
},
@@ -310,13 +344,16 @@
},
{
"data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "de0d200823d047e8bd963b6c3321e194",
- "version_major": 2,
- "version_minor": 0
- },
+ "text/html": [
+ "Langsmith Run URL: \n",
+ "http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a790ec8f- \n",
+ "71ad-4de2-b81c-74f1575ebb0b?poll=true \n",
+ " \n"
+ ],
"text/plain": [
- "Output()"
+ "Langsmith Run URL: \n",
+ "\u001b]8;id=656281;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a790ec8f-71ad-4de2-b81c-74f1575ebb0b?poll=true\u001b\\\u001b[4;34mhttp://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a790ec8f-\u001b[0m\u001b]8;;\u001b\\\n",
+ "\u001b]8;id=656281;http://localhost/o/00000000-0000-0000-0000-000000000000/projects/p/856d5024-48fe-4a9c-b1f9-5236f8d3aebd/r/a790ec8f-71ad-4de2-b81c-74f1575ebb0b?poll=true\u001b\\\u001b[4;34m71ad-4de2-b81c-74f1575ebb0b?poll=true\u001b[0m\u001b]8;;\u001b\\ \n"
]
},
"metadata": {},
@@ -324,14 +361,13 @@
},
{
"data": {
- "text/html": [
- "\n",
- "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/solve_random_maze \n",
- " \n"
- ],
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "671359bd229c41a3b2c69a96b05faefe",
+ "version_major": 2,
+ "version_minor": 0
+ },
"text/plain": [
- "\n",
- "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/solve_random_maze\u001b[0m\u001b[35m \u001b[0m\n"
+ "Output()"
]
},
"metadata": {},
@@ -340,7 +376,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "be609bd2309e41e99329d856893b39df",
+ "model_id": "d3e46fda387046bc816af54504b016ee",
"version_major": 2,
"version_minor": 0
},
@@ -351,6 +387,21 @@
"metadata": {},
"output_type": "display_data"
},
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "▌ saved to /Users/gongjunmin/.cache/open_creator/skill_library/solve_maze \n",
+ " \n"
+ ],
+ "text/plain": [
+ "\n",
+ "\u001b[35m▌ \u001b[0m\u001b[35msaved to /Users/gongjunmin/.cache/open_creator/skill_library/solve_maze\u001b[0m\u001b[35m \u001b[0m\n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -374,7 +425,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "48a0f6cb1f6d4311bd0f7db3e7dd3994",
+ "model_id": "51ec3eacd34c40688129654e96f9fb31",
"version_major": 2,
"version_minor": 0
},
@@ -412,7 +463,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
@@ -426,9 +477,37 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 8,
"metadata": {},
"outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "3f6bd9ad7125466381d2e546d0a76338",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "548c2d4a340e4f1ab798cbc2361152e2",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Output()"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
{
"data": {
"text/html": [
@@ -558,34 +637,6 @@
"metadata": {},
"output_type": "display_data"
},
- {
- "data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "418c6e79ab4047b5bcac823778e0659c",
- "version_major": 2,
- "version_minor": 0
- },
- "text/plain": [
- "Output()"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
- {
- "data": {
- "application/vnd.jupyter.widget-view+json": {
- "model_id": "9b2ce948f375427d9d80e5e0372fec1e",
- "version_major": 2,
- "version_minor": 0
- },
- "text/plain": [
- "Output()"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- },
{
"data": {
"text/html": [
@@ -609,7 +660,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
- "model_id": "4b137fec8d4a46b3a1f2fa49f77517da",
+ "model_id": "41340c9f71fa45e3b306ec07eee7ca56",
"version_major": 2,
"version_minor": 0
},
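
The 08_creator_agent hunks above replace the pre-built `open_creator_agent` import with the `create_creator_agent(config)` factory. Pieced together, the notebook's opening cells now look roughly like this; the message fields are not visible in the diff, so the structure below follows the usual chat-message format and the request text is a placeholder:

    from creator.agents import create_creator_agent
    from creator import config

    open_creator_agent = create_creator_agent(config)

    messages = [
        {
            "role": "user",
            "content": "create a skill that solves a random maze",  # placeholder request
        }
    ]
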
diff --git a/docs/examples/09_memgpt.ipynb b/docs/examples/09_memgpt.ipynb
new file mode 100644
index 0000000..20c6fc9
--- /dev/null
+++ b/docs/examples/09_memgpt.ipynb
@@ -0,0 +1,78 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from creator import config\n",
+ "from creator.memgpt import create_memgpt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "session_id = \"test_session\"\n",
+ "config.memgpt_config.session_id = session_id\n",
+ "memgpt = create_memgpt(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "memgpt.memory_manager.clear()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "memgpt.memory_manager.messages"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "while 1:\n",
+ " user_request = input(\"> \")\n",
+ " if user_request.startswith(\"/exit\"):\n",
+ " break\n",
+ " session_id = await memgpt.arun({\"user_request\": user_request, \"session_id\": session_id})"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "open_creator_online",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
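
The new 09_memgpt notebook drives the agent with IPython's top-level `await`; outside a notebook the same loop needs an explicit event loop. A sketch of the equivalent standalone script, assuming the calls behave exactly as in the notebook cells:

    import asyncio

    from creator import config
    from creator.memgpt import create_memgpt

    async def chat():
        session_id = "test_session"
        config.memgpt_config.session_id = session_id
        memgpt = create_memgpt(config)
        memgpt.memory_manager.clear()  # start from an empty message history

        while True:
            user_request = input("> ")
            if user_request.startswith("/exit"):
                break
            session_id = await memgpt.arun({"user_request": user_request, "session_id": session_id})

    asyncio.run(chat())
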
diff --git a/docs/index.md b/docs/index.md
index 53e5c88..79f9c45 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -33,6 +33,3 @@
## Framework

-
----
-
diff --git a/docs/pics/logo.png b/docs/pics/logo.png
new file mode 100644
index 0000000..f1ab997
Binary files /dev/null and b/docs/pics/logo.png differ
diff --git a/mkdocs.yml b/mkdocs.yml
index 5b2d929..1b40141 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -1,5 +1,7 @@
-site_name: Open-Creator
+site_name: Open Creator
site_url: https://open-creator.github.io
+repo_url: https://github.com/timedomain-tech/open-creator
+repo_name: open-creator
nav:
- Getting Start:
- Overview: index.md
@@ -18,7 +20,56 @@ nav:
- Commands: commands.md
- Configurations: configurations.md
-theme: readthedocs
+theme:
+ name: material
+ logo: pics/logo.png
+ icon:
+ repo: fontawesome/brands/github
+ palette:
+ # Palette toggle for automatic mode
+ - media: "(prefers-color-scheme)"
+ toggle:
+ icon: material/brightness-auto
+ name: Switch to light mode
+
+ # Palette toggle for light mode
+ - media: "(prefers-color-scheme: light)"
+ scheme: default
+ toggle:
+ icon: material/brightness-7
+ name: Switch to dark mode
+
+ # Palette toggle for dark mode
+ - media: "(prefers-color-scheme: dark)"
+ scheme: slate
+ primary: black
+ toggle:
+ icon: material/brightness-4
+ name: Switch to system preference
+
+ font:
+ text: Roboto
+ code: Roboto Mono
+ features:
+ - content.code.copy
+ - content.code.select
+ - content.code.annotate
+ - content.tabs.link
+ - header.autohide
+ - announce.dismiss
+ - navigation.instant
+ - navigation.instant.prefetch
+ - navigation.instant.progress
+ - navigation.tracking
+ - navigation.tabs
+ - navigation.tabs.sticky
+ - navigation.path
+ - navigation.expand
+ - search
+ - search.suggest
+ - search.highlight
+ - search.share
+ - navigation.footer
plugins:
- search
@@ -26,3 +77,24 @@ plugins:
kernel_name: python3
ignore_h1_titles: true
include_requirejs: true
+ - git-revision-date-localized:
+ enable_creation_date: true
+ - git-authors
+
+markdown_extensions:
+ - pymdownx.highlight:
+ anchor_linenums: true
+ line_spans: __span
+ pygments_lang_class: true
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - pymdownx.superfences
+
+extra:
+ social:
+ - icon: fontawesome/brands/twitter
+ link: https://twitter.com/UseOpenCreator
+ - icon: fontawesome/brands/github
+ link: https://github.com/timedomain-tech/open-creator
+ - icon: fontawesome/brands/discord
+ link: https://discord.gg/eEraZEry53
diff --git a/pyproject.toml b/pyproject.toml
index 289108f..073df81 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -3,7 +3,7 @@ name = "open-creator"
packages = [
{include = "creator"},
]
-version = "0.1.2"
+version = "0.1.3"
description = "Build your costomized skill library"
authors = ["JunminGONG "]
readme = "README.md"
@@ -12,23 +12,23 @@ include = ["creator/config.yaml"]
[tool.poetry.dependencies]
python = "^3.10"
rich = "^13.5.2"
-langchain = ">=0.0.317"
+langchain = ">=0.0.323"
huggingface_hub = "^0.17.2"
loguru = "^0.7.2"
pydantic = "^2.0.3"
python-dotenv = "^1.0.0"
openai = "^0.28.1"
tiktoken = "^0.5.1"
-prompt_toolkit = "^3.0.39"
+prompt_toolkit = ">=3.0.36"
inquirer = "^3.1.3"
pyyaml = "^6.0.1"
appdirs = "^1.4.4"
-urllib3 = "^2.0.6"
fastapi = "^0.103.1"
uvicorn = "^0.23.2"
streamlit = "^1.27.2"
-
-
+questionary = "^2.0.1"
+langsmith = "^0.0.43"
+qdrant-client = "^1.6.4"
[tool.poetry.dependencies.pyreadline3]
version = "^3.4.1"