Conversation
Merge branch 'main' of https://github.com/stride-research/flowgentic into feat/abstractions_redesign
…Going for asyncflow integration
Remove all results/ files from the git index (`git rm --cached`) so they are no longer tracked. Broaden the gitignore rule from `**/experiments/**` to `tests/benchmark/results/` to cover the full output directory. Made-with: Cursor
📝 Walkthrough

This pull request reimplements Flowgentic's architecture, replacing the previous AsyncFlow-centric execution model with a new decorator-based HPC task/block system. It introduces backend engines (AsyncFlow, Parsl), orchestrators for LangGraph and AutoGen, core agent abstractions, and comprehensive documentation updates. Major legacy features (memory, fault tolerance, mutable graphs) and old example patterns are removed.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly refactors the core architecture of Flowgentic, moving towards a more decoupled and modular design. The primary goal is to enhance the clarity and usability of the API for integrating agent orchestration frameworks with High-Performance Computing (HPC) backends. This refactor introduces a new event model for better observability and streamlines the process of offloading tasks and blocks to HPC resources, making the system more robust and easier to extend.
Code Review
This pull request introduces a significant and welcome refactoring to a more modular, decorator-based API, which greatly improves clarity and extensibility. The new architecture with distinct agent_orchestration_frameworks and backend_engines is a solid improvement.
However, there are two major concerns. Firstly, several significant features appear to have been removed without explanation, including memory management, fault tolerance, mutable graphs, and detailed introspection reports. This is a critical regression if unintentional. Secondly, the Makefile has been stripped of essential developer commands like lint, format, and tests, which negatively impacts the local development workflow. These issues should be addressed before merging.
Makefile (42-62)
The format, lint, docs, and tests targets have been removed from the Makefile, although they are still listed in the .PHONY declaration. These are essential commands for maintaining code quality and running tests locally. Was their removal intentional? If so, the new process for linting, formatting, and testing should be documented. If not, please consider re-adding them.
examples/langgraph_asyncflow.py (83)
This call to set_entry_point is redundant as it was already set on line 78. Please remove this line.
examples/langgraph_parsl.py (88)
This call to set_entry_point is redundant as it was already set on line 83. Please remove this line.
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/workflows/continuous_integration.yml (1)
10-16: ⚠️ Potential issue | 🟠 Major

Restore the actual lint pass.

The `lint_and_format` job only runs `ruff format --check` now; `ruff check` is commented out. The project has extensive lint rules configured in `pyproject.toml` (N, ANN, TID, E, C4, G, FAST*, ASYNC100, S*, isort) that won't be caught without the linting step.

Proposed fix
```diff
-      # - name: Check linting
-      #   uses: astral-sh/ruff-action@v1
-      #   with:
-      #     args: "check"
+      - name: Check linting
+        uses: astral-sh/ruff-action@v1
+        with:
+          args: "check"

-      - uses: astral-sh/ruff-action@v1
+      - name: Check formatting
+        uses: astral-sh/ruff-action@v1
         with:
           args: "format --check"
```
pyproject.toml (1)
55-70: ⚠️ Potential issue | 🟠 Major

Duplicate `dev` dependencies with conflicting versions create reproducibility risk.

Two separate `dev` dependency definitions exist:

- `[project.optional-dependencies].dev` (lines 55-70): pytest>=7.0.0, pytest-asyncio>=0.21.0
- `[dependency-groups].dev` (lines 115-119): pytest>=8.4.2, pytest-asyncio>=1.2.0

Installing via `pip install .[dev]` and `uv sync --group dev` will resolve different toolchains, making local and CI failures difficult to reproduce. Consolidate these definitions or ensure consistent version constraints across both.

Also, line 123 pins `radical-asyncflow` to `branch = "main"`, which is a moving target. Consider pinning to a specific release or commit hash for reproducibility.
🟠 Major comments (22)
src/flowgentic/__init__.py-4-6 (1)
4-6: ⚠️ Potential issue | 🟠 Major

Hardcoded debug log level is not production-ready.

Setting `logger_level="debug"` unconditionally means every deployment will emit verbose debug logs. This can cause excessive log volume, potential exposure of sensitive data, and performance degradation in production environments. It also removes configurability that previously existed via `APP_SETTINGS`.

Consider falling back to an environment variable or a sensible default:
🔧 Proposed fix using environment variable with fallback
```diff
+import os
 from .utils import Logger

 logger_instance = Logger(
-    colorful_output=True, logger_level="debug"
+    colorful_output=True,
+    logger_level=os.getenv("FLOWGENTIC_LOG_LEVEL", "warning"),
 )  # Initiating logger
```

Alternatively, if `APP_SETTINGS` was intentionally removed as part of the refactor, consider documenting the new approach for configuring log levels.
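As a standalone sketch of the env-driven level lookup (the variable name `FLOWGENTIC_LOG_LEVEL` comes from the diff above; the mapping to stdlib constants is an assumption about how `Logger` might consume it):

```python
import logging
import os

def resolve_log_level(default: str = "warning") -> int:
    """Map a level name from the environment to a stdlib logging constant,
    falling back to the default for unset or unrecognized values."""
    name = os.getenv("FLOWGENTIC_LOG_LEVEL", default).upper()
    return getattr(logging, name, getattr(logging, default.upper()))
```

With `FLOWGENTIC_LOG_LEVEL=debug` this resolves to `logging.DEBUG`; unset or unrecognized values fall back to `logging.WARNING`.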
src/flowgentic/core/memory/memory_manager.py-8-12 (1)
8-12: ⚠️ Potential issue | 🟠 Major

`set_context()` is missing a parameter to accept context, and `record_state()` has inconsistent signatures.

- `set_context()` has no parameter to receive the context value to store.
- `record_state()` in the base class takes no arguments, but `NullMemoryManager.record_state(self, state)` expects a `state` parameter. This signature mismatch will cause issues when using the base class interface polymorphically.

🔧 Proposed fix with type hints and ABC
```diff
+from abc import ABC, abstractmethod
+from typing import Any, Optional
+
+
-class MemoryManger:
+class MemoryManager(ABC):
+    """Base class for memory management in agents."""
+
     def __init__(self) -> None:
         pass

-    def get_context(self):
+    @abstractmethod
+    def get_context(self) -> Optional[str]:
         pass

-    def set_context(self):
+    @abstractmethod
+    def set_context(self, context: Any) -> None:
         pass

-    def record_state(self):
+    @abstractmethod
+    def record_state(self, state: Any) -> str:
         pass


-class NullMemoryManager(MemoryManger):
+class NullMemoryManager(MemoryManager):
     """A no-op memory manager for stateless agents."""

     def __init__(self) -> None:
         pass

-    def get_context(self):
-        pass
+    def get_context(self) -> Optional[str]:
+        return None

-    def set_context(self):
+    def set_context(self, context: Any) -> None:
         pass

     def record_state(self, state):
         return ""  # Returns empty context
```
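To see why the ABC version catches the mismatch early, here is a self-contained sketch using the same names as the diff (an illustration, not the project's actual module):

```python
from abc import ABC, abstractmethod
from typing import Any, Optional

class MemoryManager(ABC):
    """Abstract contract: subclasses must implement all three methods."""

    @abstractmethod
    def get_context(self) -> Optional[str]: ...

    @abstractmethod
    def set_context(self, context: Any) -> None: ...

    @abstractmethod
    def record_state(self, state: Any) -> str: ...

class NullMemoryManager(MemoryManager):
    """No-op implementation for stateless agents."""

    def get_context(self) -> Optional[str]:
        return None

    def set_context(self, context: Any) -> None:
        pass

    def record_state(self, state: Any) -> str:
        return ""  # empty context
```

`MemoryManager()` now raises `TypeError` instead of silently accepting an incomplete interface, and `NullMemoryManager` must match the declared signatures.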
src/flowgentic/backend_engines/radical_asyncflow.py-32-33 (1)
32-33: ⚠️ Potential issue | 🟠 Major

Don't assume `task_kwargs` are hashable.

`tuple(sorted(task_kwargs.items()))` blows up as soon as one value is a `list`, `dict`, or other unhashable object, which is common for scheduler/resource configs. That turns caching into a hard runtime error before the task is even submitted.
src/flowgentic/agent_orchestration_frameworks/langgraph.py-15-18 (1)
15-18: 🛠️ Refactor suggestion | 🟠 Major

Rename the public orchestrator type before release.

`LanGraphOrchestrator` is a misspelled public symbol in the `langgraph` integration. If this ships as-is, either the docs/examples import a different name or you'll need to preserve the typo indefinitely for compatibility.

Proposed fix
```diff
-        "langchain-core is required for LanGraphOrchestrator. "
+        "langchain-core is required for LangGraphOrchestrator. "
         'Install it with: pip install "flowgentic[langgraph]"'
     ) from exc
     return _tool(func)


-class LanGraphOrchestrator(AgentOrchestrator):
+class LangGraphOrchestrator(AgentOrchestrator):
     def __init__(self, engine: BaseEngine) -> None:
         self.engine = engine
+
+
+# Backwards-compatible alias if needed.
+LanGraphOrchestrator = LangGraphOrchestrator
```

Also applies to: 22-24
src/flowgentic/agent_orchestration_frameworks/langgraph.py-57-75 (1)
57-75: ⚠️ Potential issue | 🟠 Major

Emit `tool_invoke_end` from a `finally` block.

If `engine.execute_tool()` raises, the end event is skipped and the lifecycle trace stays half-open. That breaks the event model this PR is introducing.

Proposed fix
```diff
-        result = await self.engine.execute_tool(
-            f,
-            *args,
-            task_kwargs=task_kwargs,
-            invocation_id=invocation_id,
-            **kwargs,
-        )
-
-        # Ts_collect_end: Result returned to LangGraph
-        self.engine.emit(
-            {
-                "event": "tool_invoke_end",
-                "ts": time.perf_counter(),
-                "tool_name": task_name,
-                "invocation_id": invocation_id,
-            }
-        )
-
-        return result
+        try:
+            return await self.engine.execute_tool(
+                f,
+                *args,
+                task_kwargs=task_kwargs,
+                invocation_id=invocation_id,
+                **kwargs,
+            )
+        finally:
+            self.engine.emit(
+                {
+                    "event": "tool_invoke_end",
+                    "ts": time.perf_counter(),
+                    "tool_name": task_name,
+                    "invocation_id": invocation_id,
+                }
+            )
```
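The behavior is easy to verify in isolation; `emit` and the tool below are stand-ins for the engine's real hooks:

```python
import asyncio

events = []

def emit(event):
    events.append(event)

async def invoke_tool(func, *args):
    emit({"event": "tool_invoke_start"})
    try:
        return await func(*args)
    finally:
        # Runs on success *and* on exception, so the trace is always closed.
        emit({"event": "tool_invoke_end"})

async def failing_tool():
    raise RuntimeError("boom")

try:
    asyncio.run(invoke_tool(failing_tool))
except RuntimeError:
    pass
```

Even though the tool raises, the recorded trace still ends with `tool_invoke_end`, so the lifecycle stays balanced.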
src/flowgentic/backend_engines/radical_asyncflow.py-39-52 (1)
39-52: ⚠️ Potential issue | 🟠 Major

`tool_resolve_end` is emitted before resolution on cache misses.

On a cold path, the event fires before `self.flow.function_task(...)` runs, so the reported resolve timestamp excludes the work it is supposed to measure and can misorder the lifecycle timeline.

Proposed fix
```diff
-        # Ts_resolve_end: Task descriptor resolved, about to enter AsyncFlow
-        self.emit(
-            {
-                "event": "tool_resolve_end",
-                "ts": time.perf_counter(),
-                "tool_name": task_name,
-                "invocation_id": invocation_id,
-                "cache_hit": cache_hit,
-            }
-        )
-
         if not cache_hit:
             self._task_registry[key] = self.flow.function_task(func, **task_kwargs)
+
+        self.emit(
+            {
+                "event": "tool_resolve_end",
+                "ts": time.perf_counter(),
+                "tool_name": task_name,
+                "invocation_id": invocation_id,
+                "cache_hit": cache_hit,
+            }
+        )
```
src/flowgentic/backend_engines/base.py-30-31 (1)
30-31: ⚠️ Potential issue | 🟠 Major

Make the `wrap_node` contract synchronous.

All implementations in the codebase (`AsyncFlowEngine`, `ParslEngine`) define `wrap_node` as a regular `def`, and every caller invokes it without `await`. The abstract method declaring it as `async` creates an interface mismatch that risks future implementations incorrectly following the declared contract and leaking coroutines.

Proposed fix
```diff
     @abstractmethod
-    async def wrap_node(self, node_func: Callable):
+    def wrap_node(self, node_func: Callable) -> Callable:
         pass
```
src/flowgentic/backend_engines/base.py-14-17 (1)
14-17: ⚠️ Potential issue | 🟠 Major

Keep observer failures from aborting tool execution.

`emit()` currently lets any exception from the profiling/benchmark callback bubble through the tool/block path. A metrics bug should not fail the underlying execution.

Proposed fix
```diff
+import logging
 from abc import ABC, abstractmethod
 from typing import Any, Callable, Dict, List, Optional, Tuple

+logger = logging.getLogger(__name__)
+

 class BaseEngine(ABC):
     ...
     def emit(self, event: Dict[str, Any]) -> None:
         """Emit an event to the observer if one is registered."""
         if self._observer is not None:
-            self._observer(event)
+            try:
+                self._observer(event)
+            except Exception:
+                logger.exception("BaseEngine observer failed")
```
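A minimal stand-in engine shows the guarded `emit` behavior (class and logger names are illustrative, not the project's real module):

```python
import logging

logger = logging.getLogger("flowgentic.sketch")

class Engine:
    def __init__(self, observer=None):
        self._observer = observer

    def emit(self, event):
        """Deliver an event; observer bugs are logged, never re-raised."""
        if self._observer is not None:
            try:
                self._observer(event)
            except Exception:
                logger.exception("observer failed")

def broken_observer(event):
    raise ValueError("metrics bug")

engine = Engine(broken_observer)
engine.emit({"event": "tool_invoke_start"})  # logs the failure, does not raise
```

A healthy observer still receives every event; a buggy one only produces a log entry.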
pyproject.toml-121-123 (1)
121-123: ⚠️ Potential issue | 🟠 Major

Pin `radical-asyncflow` to an immutable ref.

Using `branch = "main"` means every fresh install resolves to potentially different commits, making CI and benchmark results non-reproducible. Replace `branch = "main"` with a commit hash using `rev = "<commit-sha>"` or a release tag.

Note: `academy-py` (line 122) has the same issue: it lacks an explicit branch/tag/rev specification and defaults to the repository's main branch.
src/flowgentic/backend_engines/parsl.py-36-38 (1)
36-38: ⚠️ Potential issue | 🟠 Major

`task_kwargs` is accepted but never applied.

Upstream callers can pass backend/task options here, but this implementation drops them and always submits `task_app(*args, **kwargs)`. Any executor or resource selection requested through `task_kwargs` is silently ignored.
task_app(*args, **kwargs). Any executor or resource selection requested throughtask_kwargsis silently ignored.Also applies to: 60-61
src/flowgentic/backend_engines/parsl.py-40-47 (1)
40-47: ⚠️ Potential issue | 🟠 Major

Cache the app by function identity, not just `__name__`.

Two different callables can share the same `__name__`. In that case the second tool reuses the first cached Parsl app and executes the wrong function.

Suggested fix
```diff
-        task_name = getattr(func, "__name__", str(func))
+        task_name = getattr(func, "__name__", str(func))
+        cache_key = func

-        cache_hit = task_name in self._task_registry
+        cache_hit = cache_key in self._task_registry
         if not cache_hit:
-            self._task_registry[task_name] = self._make_parsl_app(func)
+            self._task_registry[cache_key] = self._make_parsl_app(func)

-        task_app = self._task_registry[task_name]
+        task_app = self._task_registry[cache_key]
```
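The collision is reproducible with two closures; the registries below are illustrative dicts, not the engine's actual `_task_registry`:

```python
def make_tool(offset):
    def tool(x):
        return x + offset
    return tool  # every closure returned here shares __name__ == "tool"

tool_a = make_tool(1)
tool_b = make_tool(100)

# Keyed by __name__: the second registration is shadowed by the first.
by_name = {}
for fn in (tool_a, tool_b):
    by_name.setdefault(fn.__name__, fn)

# Keyed by the function object: each callable gets its own entry.
by_identity = {}
for fn in (tool_a, tool_b):
    by_identity.setdefault(fn, fn)
```

With the name-keyed dict, looking up `tool_b`'s name silently executes `tool_a`; the identity-keyed dict keeps both apps distinct.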
src/flowgentic/core/models/implementations/dummyProvider.py-30-38 (1)
30-38: ⚠️ Potential issue | 🟠 Major

This never calls "all the functions".

`aprompt()` always selects `self.tool_names[0]` and repeats it `n_of_tool_calls` times. That ignores every other configured tool, and `tool_names=[]` crashes with `IndexError`.

Suggested fix
```diff
-        name = self.tool_names[0]  # Always caling the first function
-        tools_to_use = [
-            ChatCompletionMessageFunctionToolCall(
-                id=f"tool_{name}_{i}",
-                function=Function(name=name, arguments="{}"),
-                type="function",
-                index=i,
-            )
-            for i in range(self.n_of_tool_calls)
-        ]
+        if not self.tool_names:
+            return ProvidersResponse(
+                message="No tools available",
+                reasoning="",
+                tools_to_use=[],
+            )
+
+        selected_names = self.tool_names[: self.n_of_tool_calls]
+        tools_to_use = [
+            ChatCompletionMessageFunctionToolCall(
+                id=f"tool_{name}_{i}",
+                function=Function(name=name, arguments="{}"),
+                type="function",
+                index=i,
+            )
+            for i, name in enumerate(selected_names)
+        ]
```
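A round-robin variant can be sketched standalone (the function name and returned `(id, name)` pairs are illustrative, not the provider's real types):

```python
def select_tool_calls(tool_names, n_of_tool_calls):
    """Pick one tool name per call, cycling through all configured tools,
    and fail gracefully when none are configured."""
    if not tool_names:
        return []
    return [
        (f"tool_{tool_names[i % len(tool_names)]}_{i}",
         tool_names[i % len(tool_names)])
        for i in range(n_of_tool_calls)
    ]
```

Every configured tool gets exercised in turn, and an empty tool list returns an empty call list instead of raising `IndexError`.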
src/flowgentic/backend_engines/parsl.py-18-19 (1)
18-19: ⚠️ Potential issue | 🟠 Major

Guard repeated `parsl.load()` calls; add `parsl.clear()` before loading or use a context manager.

`parsl.load(config)` initializes Parsl process-wide, maintaining a single active `DataFlowKernel`. Calling it without first clearing the previous kernel (via `parsl.clear()`) will fail. Creating a second `ParslEngine` in the same interpreter crashes unless the first is cleaned up, breaking multi-engine setups and test isolation.

Use either an explicit `parsl.clear()` before `parsl.load()`, or wrap usage in a context manager as Parsl's documentation recommends for lifecycle management.
docs/index.md-89-92 (1)
89-92: ⚠️ Potential issue | 🟠 Major

Undefined `llm` variable in minimal example.

The `agent_node` function references `llm` on line 91, but this variable is never defined in the example. This will confuse readers trying to run the quickstart.

📝 Suggested fix
Add the LLM initialization before the decorator, for example:
```python
# After imports, add:
from langchain_openai import ChatOpenAI

# Before @orchestrator.hpc_block, add:
llm = ChatOpenAI(model="gpt-4o-mini")
llm = llm.bind_tools(tools)
```
examples/autogen_parsl.py-1-42 (1)
1-42: 🛠️ Refactor suggestion | 🟠 Major

Remove duplicate and unused imports.

This file has significant import issues:

- `asyncio` imported twice (lines 2, 20)
- `logging` imported twice (lines 3, 35)
- `time` imported twice (lines 4, 36)
- `ThreadPoolExecutor` imported from two different modules (lines 8, 25, 39)
- `ConcurrentExecutionBackend`, `WorkflowEngine` imported twice (lines 7, 24)
- `AsyncFlowEngine` imported (lines 13, 29) but never used; this file uses `ParslEngine`
- LangGraph imports (`StateGraph`, `add_messages`, `ToolNode`, `BaseModel`) are unused

♻️ Suggested cleaned imports
```diff
 import os
 import asyncio
 import logging
 import time
 from dotenv import load_dotenv

-from radical.asyncflow import ConcurrentExecutionBackend, WorkflowEngine
-from concurrent.futures import ThreadPoolExecutor
-
 import autogen
 from autogen import AssistantAgent, UserProxyAgent

-from flowgentic.backend_engines.radical_asyncflow import AsyncFlowEngine
 from flowgentic.agent_orchestration_frameworks.autogen import AutoGenOrchestrator
 from flowgentic.core.models.implementations.dummy.autogen import (
     create_assistant_with_dummy_model,
+    DummyAutoGenClient,
 )
-from flowgentic.core.models.implementations.dummy.autogen import DummyAutoGenClient
-
-import asyncio
-from typing import Annotated
-from langgraph.graph import StateGraph, add_messages
-from pydantic import BaseModel
-from radical.asyncflow import ConcurrentExecutionBackend, WorkflowEngine
-from concurrent.futures import ThreadPoolExecutor
-
-from flowgentic.agent_orchestration_frameworks.langgraph import LanGraphOrchestrator
 from flowgentic.backend_engines.parsl import ParslEngine
-from flowgentic.backend_engines.radical_asyncflow import AsyncFlowEngine
-
-from flowgentic.core.models.implementations.dummy.langgraph import (
-    DummyLanggraphModelProvider,
-)
-from flowgentic.old.utils.llm_providers import ChatLLMProvider
-import logging
-import time

 from parsl.config import Config
 from parsl.executors import ThreadPoolExecutor
-
-from langgraph.prebuilt import ToolNode
```
src/flowgentic/core/models/implementations/openRouter.py-35-37 (1)
35-37: ⚠️ Potential issue | 🟠 Major

Handle empty `choices` array to prevent IndexError.

If the API returns an empty `choices` array (e.g., due to content filtering or an error), accessing `response.choices[0]` will raise an `IndexError`.

🛡️ Suggested defensive check
```diff
     logger.debug(f"Chat completion from OpenRouter is: {response}")
+    if not response.choices:
+        raise ValueError("OpenRouter returned no choices")
     msg = response.choices[0].message.content
```
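The guard is straightforward to exercise with a stubbed response in place of the real SDK types (names below are stand-ins):

```python
from dataclasses import dataclass, field
from types import SimpleNamespace

@dataclass
class FakeResponse:
    # Stand-in for the OpenAI-style response object; only `choices` matters here.
    choices: list = field(default_factory=list)

def first_message_content(response) -> str:
    """Return the first choice's message content, raising a clear error
    instead of an IndexError when the API returns no choices."""
    if not response.choices:
        raise ValueError("OpenRouter returned no choices")
    return response.choices[0].message.content

choice = SimpleNamespace(message=SimpleNamespace(content="hello"))
```

The failure mode becomes a descriptive `ValueError` rather than a bare `IndexError` deep in the provider.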
src/flowgentic/core/models/implementations/dummy/autogen.py-1-179 (1)
1-179: ⚠️ Potential issue | 🟠 Major

This module ships no executable implementation.

Every line in this file is commented out, so the module exports none of the dummy AutoGen symbols its path implies. Any consumer importing those symbols from `flowgentic.core.models.implementations.dummy.autogen` will fail, and if this work is intentionally deferred the file should stay out of `src/` until it is runnable.

If you'd like, I can help turn the commented prototype into a runnable module or outline a smaller follow-up issue to land it safely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/models/implementations/dummy/autogen.py` around lines 1 - 179, The file currently has the entire prototype commented out so nothing is exported; restore a runnable module by uncommenting and validating the primary symbols: DummyAutoGenClient, DummyResponse, DummyChoice, and create_assistant_with_dummy_model, ensuring their signatures match the usages in the codebase (create(params), message_retrieval(response), cost(response), get_usage(response)); add module-level exports (or __all__) for those names and run lint/tests; if the implementation is intentionally deferred instead remove the file from src or replace it with a minimal placeholder that raises NotImplementedError to avoid import failures.

src/flowgentic/core/tool/tool.py-16-20 (1)
16-20: ⚠️ Potential issue | 🟠 Major

`Tool.execute()` breaks on synchronous callables.

When `Tool.__init__` accepts `func: Callable` without async type constraints, wrapping a regular `def` function and calling `execute()` will raise `TypeError` on invocation—`await` only works with awaitable objects. Either narrow the type hint to async callables only or branch on `inspect.isawaitable(result)` to support both sync and async functions.

🔧 Suggested fix

```diff
 async def execute(self, **kwargs):
-    return await self.func(**kwargs)
+    result = self.func(**kwargs)
+    if inspect.isawaitable(result):
+        return await result
+    return result
```

Also applies to: 59-60
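Under the assumption that `Tool` keeps a plain `Callable`, the suggested `isawaitable` branch can be exercised as a standalone sketch; the class shape below is illustrative, not Flowgentic's actual `Tool` API:

```python
import asyncio
import inspect


class Tool:
    """Illustrative wrapper that accepts both sync and async callables."""

    def __init__(self, func):
        self.func = func

    async def execute(self, **kwargs):
        result = self.func(**kwargs)
        # Await only when the wrapped callable returned an awaitable
        # (i.e., it was defined with `async def`).
        if inspect.isawaitable(result):
            return await result
        return result


def sync_add(a, b):
    return a + b


async def async_add(a, b):
    return a + b


async def main():
    print(await Tool(sync_add).execute(a=1, b=2))   # 3
    print(await Tool(async_add).execute(a=1, b=2))  # 3


asyncio.run(main())
```

Both invocations succeed; without the `isawaitable` branch, the `sync_add` case would raise `TypeError: object int can't be used in 'await' expression`.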
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/tool/tool.py` around lines 16 - 20, Tool.__init__ currently accepts any Callable but Tool.execute assumes the callable is async, causing a TypeError for sync functions; update Tool.execute to handle both sync and async callables by invoking self.func(...) and then checking the result with inspect.isawaitable(result) (or asyncio.iscoroutine) and awaiting only when appropriate, or alternatively restrict the type in Tool.__init__ to an async-only callable; modify the execute method (reference: Tool.execute and Tool.__init__) to call the function, branch on inspect.isawaitable(result), and await only when necessary so synchronous functions work without errors.

examples/langgraph_asyncflow.py-31-35 (1)
31-35: ⚠️ Potential issue | 🟠 Major

Wrap the entire function in try/finally to ensure proper cleanup.

`start_app()` creates `flow` via `WorkflowEngine.create()` but never tears it down. The RADICAL AsyncFlow backend requires explicit shutdown to properly release executor resources. Wrap all code after `flow` creation in `try/finally` and call `await flow.shutdown()` in the finally block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/langgraph_asyncflow.py` around lines 31 - 35, The start_app() coroutine creates a WorkflowEngine instance (flow = await WorkflowEngine.create(...)) but never shuts it down; wrap the code that runs after flow creation in a try/finally block and call await flow.shutdown() in the finally to ensure the executor and RADICAL AsyncFlow backend are cleaned up (i.e., create flow, then try: ... finally: await flow.shutdown()).

docs/design_patterns/chatbot.md-120-126 (1)
120-126: ⚠️ Potential issue | 🟠 Major

Remove `set_finish_point("agent")`—it conflicts with the conditional routing.

The `should_continue` function already routes to `"__end__"` when termination is needed. Adding `set_finish_point("agent")` creates a static unconditional edge alongside the conditional edges from the same node, which LangGraph's own guidance warns against. This pattern causes undefined behavior—use conditional routing for all termination logic, not both static and conditional paths.

Lines affected

```python
workflow.add_conditional_edges(
    "agent", should_continue, {"tools": "tools", "end": "__end__"}
)
workflow.add_edge("tools", "agent")
workflow.set_finish_point("agent")  # ← Remove this line
```

Also applies to: 211-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/design_patterns/chatbot.md` around lines 120 - 126, The finish point call is conflicting with the conditional routing: remove the call to workflow.set_finish_point("agent") so termination is handled solely via workflow.add_conditional_edges("agent", should_continue, {"tools":"tools","end":"__end__"}); ensure the conditional path from "agent" to "__end__" remains and keep the existing workflow.add_edge("tools", "agent") loop intact (do this for the other occurrence around lines 211-217 as well).

examples/langgraph_asyncflow.py-78-84 (1)
78-84: ⚠️ Potential issue | 🟠 Major

Remove the duplicate `set_entry_point("agent")` and `set_finish_point("agent")`.

Line 78 already sets "agent" as the entry point, so line 83's duplicate call is unnecessary. More critically, `set_finish_point("agent")` adds a direct edge from "agent" to `END`, which conflicts with the conditional routing logic on lines 79–81. In LangGraph, when conditional edges are the primary routing mechanism, additional edges to `END` from the same node should not be added separately; termination should flow through the conditional edge that returns `"end"` (mapped to `"__end__"`). Keeping both can cause the workflow to exit prematurely before the `tools` branch executes in the `agent -> tools -> agent` loop.

✂️ Fix

```diff
 workflow.set_entry_point("agent")
 workflow.add_conditional_edges(
     "agent", should_continue, {"tools": "tools", "end": "__end__"}
 )
 workflow.add_edge("tools", "agent")  # Loop back to agent after tools
-workflow.set_entry_point("agent")
-workflow.set_finish_point("agent")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/langgraph_asyncflow.py` around lines 78 - 84, Remove the redundant and conflicting calls that prematurely terminate the flow: delete the second set_entry_point("agent") and the set_finish_point("agent") so routing relies solely on add_conditional_edges("agent", should_continue, {"tools": "tools", "end": "__end__"}) and the loop edge add_edge("tools", "agent"); keep set_entry_point("agent") only once, retain add_edge("tools", "agent"), and ensure set_finish_point is not called for "agent" so the "end" path is handled exclusively by the conditional mapping to "__end__".

examples/langgraph_parsl.py-83-89 (1)
83-89: ⚠️ Potential issue | 🟠 Major

Remove the duplicate `set_entry_point()` and `set_finish_point("agent")`.

The conditional edge from `"agent"` already routes to `"__end__"` via `{"tools": "tools", "end": "__end__"}`. In LangGraph, `set_finish_point()` adds an unconditional edge to the terminal node; combining this with conditional routing causes the agent node to exit immediately instead of allowing the tools branch to execute. Additionally, `set_entry_point("agent")` is called twice (lines 83 and 88), which is redundant.

✂️ Minimal fix

```diff
 workflow.set_entry_point("agent")
 workflow.add_conditional_edges(
     "agent", should_continue, {"tools": "tools", "end": "__end__"}
 )
 workflow.add_edge("tools", "agent")  # Loop back to agent after tools
-workflow.set_entry_point("agent")
-workflow.set_finish_point("agent")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/langgraph_parsl.py` around lines 83 - 89, Remove the duplicate workflow.set_entry_point("agent") call and delete the workflow.set_finish_point("agent") invocation: keep workflow.set_entry_point("agent") once, rely on workflow.add_conditional_edges("agent", should_continue, {"tools":"tools","end":"__end__"}) plus workflow.add_edge("tools","agent") to handle branching and looping, and do not call workflow.set_finish_point("agent") since it creates an unconditional edge to the terminal node that bypasses the "tools" branch.
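The point these findings share—let the conditional edge own all termination so the `agent -> tools -> agent` loop can complete—can be sketched without LangGraph as a toy state machine. Node names mirror the examples above, but the routing loop itself is purely illustrative, not LangGraph's execution model:

```python
def agent(state):
    """Toy agent node: requests a tool on the first pass, finishes afterwards."""
    state["steps"].append("agent")
    state["pending"] = not state.get("ran_tools", False)
    return state


def tools(state):
    """Toy tools node: records that tools ran, then loops back to the agent."""
    state["steps"].append("tools")
    state["ran_tools"] = True
    return state


def should_continue(state):
    # Single source of truth for termination — no static finish edge needed.
    return "tools" if state["pending"] else "end"


state = {"steps": []}
node = "agent"
while node != "end":
    if node == "agent":
        state = agent(state)
        # Conditional edge: {"tools": "tools", "end": "end"}
        node = {"tools": "tools", "end": "end"}[should_continue(state)]
    elif node == "tools":
        state = tools(state)
        node = "agent"  # static loop edge tools -> agent

print(state["steps"])  # ['agent', 'tools', 'agent']
```

If a static finish edge also fired from `agent`, the run could end after the first `agent` step and the `tools` branch would never execute—which is exactly the premature-exit behavior the review flags.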
🟡 Minor comments (12)
src/flowgentic/core/__init__.py-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Typo in docstring: "flowngetic" should be "flowgentic".
The module docstring has a misspelling of the project name.
📝 Proposed fix
-"""Core abstractions from flowngetic's agents""" +"""Core abstractions from flowgentic's agents"""🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/__init__.py` at line 1, Module docstring contains a typo: change the string literal in src/flowgentic/core/__init__.py from "flowngetic's agents" to "flowgentic's agents" so the project name is spelled correctly; update the top-level docstring in the module (the triple-quoted string) to replace "flowngetic" with "flowgentic".

src/flowgentic/core/memory/memory_manager.py-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Typo in class name: `MemoryManger` should be `MemoryManager`.

The base class name is missing an 'a' in "Manager".
📝 Proposed fix
```diff
-class MemoryManger:
+class MemoryManager:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/memory/memory_manager.py` at line 1, Rename the misspelled class MemoryManger to MemoryManager and update all references to it (constructor definitions, inheritance, instantiations, type hints, and imports) so they match the corrected name; ensure the class definition line uses "class MemoryManager:" and search for occurrences of "MemoryManger" in the module and other modules to replace them with "MemoryManager" to avoid import/NameError issues.

docs/features/overview.md-32-36 (1)
32-36: ⚠️ Potential issue | 🟡 Minor

Include the resolve/collect events in this lifecycle summary.

The current event model also exposes `tool_resolve_end` and `tool_collect_start`. Listing only wrap/invoke events undersells what the observer actually emits and what the benchmarking tests now validate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/overview.md` around lines 32 - 36, The lifecycle summary omits two observer events that are actually emitted and validated: add `tool_resolve_end` and `tool_collect_start` to the Observability — Lifecycle events paragraph alongside the existing `tool_wrap_*`, `block_wrap_*`, and `tool_invoke_*` entries; update the sentence to state that decorators emit registration, resolve/collect, and per-invocation timing events through the engine observer (reference the decorator emissions and event names `tool_resolve_end` and `tool_collect_start` and keep the link to the Architecture — Event Model).

Makefile-34-42 (1)
34-42: ⚠️ Potential issue | 🟡 Minor

Fix the example descriptions shown by `make help`.

`examples-lg-parsl` is labeled “AutoGen + Parsl” even though it runs `examples.langgraph_parsl`, and `Asyncfow` is misspelled in two asyncflow targets. The commands work, but the new help output is misleading.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 34 - 42, Update the Makefile target descriptions so the help output matches the actual commands: fix the typos "Asyncfow" → "Asyncflow" for targets examples-ca-asyncflow and examples-ag-asyncflow, change the description for examples-lg-parsl from "AutoGen + Parsl" to "Langgraph + Parsl" to match that it runs examples.langgraph_parsl, and ensure examples-ag-parsl has the correct "AutoGen + Parsl" description; edit the comment strings following the target names (examples-lg-asyncflow, examples-ca-asyncflow, examples-ag-asyncflow, examples-lg-parsl, examples-ag-parsl) accordingly.

src/flowgentic/core/agent.py-4-4 (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Typo in class name: `MemoryManger` should be `MemoryManager`.

This appears to be a typo in the source module as well. Consider fixing it there to maintain consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/agent.py` at line 4, The import references a misspelled class name MemoryManger; rename it to MemoryManager in this file and across the codebase. Update the class definition inside the memory manager module (e.g., class MemoryManger -> class MemoryManager in memory_manager.py) and adjust all imports/usages (including NullMemoryManager references) to use MemoryManager so names are consistent; run tests or search for "MemoryManger" to ensure no leftover occurrences remain.

src/flowgentic/core/models/implementations/openRouter.py-34-34 (1)
34-34: ⚠️ Potential issue | 🟡 Minor

Fix typo in log message.

"OpenRotuer" should be "OpenRouter".
```diff
- logger.debug(f"Chat completition from OpenRotuer is: {response}")
+ logger.debug(f"Chat completion from OpenRouter is: {response}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/models/implementations/openRouter.py` at line 34, Fix the typo in the debug message inside the logger.debug call in openRouter.py: change the string "Chat completition from OpenRotuer is: {response}" to "Chat completion from OpenRouter is: {response}" (the logger.debug invocation and the response variable remain the same).

src/flowgentic/core/agent.py-35-40 (1)
35-40: ⚠️ Potential issue | 🟡 Minor

Fix typos and comment numbering.
- Line 35: "retreival" → "retrieval"
- Line 40: Comment says "3. Reasoning" but there's no step 2 (step 1 is memory retrieval)
```diff
-    # 1. Memory retreival
+    # 1. Memory retrieval
     state = AgentState()
     state.add_user_input(prompt_input)
     context = self.memory.get_context()
-    # 3. Reasoning (the brain of the agent)
+    # 2. Reasoning (the brain of the agent)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/agent.py` around lines 35 - 40, Fix the comment typos and numbering around the memory setup in Agent.run (where AgentState(), state.add_user_input(prompt_input) and self.memory.get_context() are invoked): correct "retreival" to "retrieval", and insert or renumber the missing step 2 so the comments read consistently (e.g., "1. Memory retrieval", "2. Update state / retrieve context", "3. Reasoning") adjacent to the AgentState, add_user_input, and memory.get_context calls to clearly reflect the intended sequence.

src/flowgentic/core/models/model_provider.py-1-4 (1)
1-4: ⚠️ Potential issue | 🟡 Minor

Remove duplicate import.

`PromptInput` is imported twice (lines 1 and 3).

```diff
-from flowgentic.core.schemas.prompt_input import PromptInput
 from abc import ABC, abstractmethod
 from flowgentic.core.schemas.prompt_input import PromptInput
 from flowgentic.core.schemas.providers_response import ProvidersResponse
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/models/model_provider.py` around lines 1 - 4, The file imports PromptInput twice; remove the duplicate import so only a single import of PromptInput remains (keep the line "from flowgentic.core.schemas.prompt_input import PromptInput") and ensure ProvidersResponse (from flowgentic.core.schemas.providers_response) stays imported; update the import block around PromptInput and ProvidersResponse to contain each symbol only once.

docs/index.md-9-9 (1)
9-9: ⚠️ Potential issue | 🟡 Minor

Heading level should increment by one level at a time.

The `###` heading at line 9 follows the `#` heading at line 5, skipping `##`. This violates Markdown best practices and accessibility guidelines.

📝 Suggested fix
```diff
-### What can I use this for?
+## What can I use this for?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/index.md` at line 9, The document uses a level-3 heading "### What can I use this for?" immediately after the top-level "#" title; change that heading to level-2 ("## What can I use this for?") so heading levels increment by one at a time and follow accessibility/Markdown best practices—update the heading text in the docs/index.md content where the "### What can I use this for?" line appears.

src/flowgentic/core/agent.py-41-41 (1)
41-41: ⚠️ Potential issue | 🟡 Minor

Variable shadows imported module name.

The variable `providers_response` on line 41 shadows the imported module `providers_response` from line 6. This can cause confusion and potential bugs.

```diff
-    providers_response = await self.reasoner.plan(prompt_input, context)
-    reasoning = providers_response.reasoning
-    message = providers_response.message
-    tools_to_use = providers_response.tools_to_use
+    plan_response = await self.reasoner.plan(prompt_input, context)
+    reasoning = plan_response.reasoning
+    message = plan_response.message
+    tools_to_use = plan_response.tools_to_use
```

Alternatively, remove the unused import on line 6.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/agent.py` at line 41, The local variable providers_response returned from self.reasoner.plan in the method where that call exists is shadowing the imported module providers_response (imported at top of file), so rename the local variable (e.g., plan_response or reasoner_response) wherever it’s assigned and used (the assignment from self.reasoner.plan and subsequent references) or, if the imported providers_response module is unused, remove that import; update all references to the new local variable name to avoid the naming collision.

docs/design_patterns/chatbot.md-136-145 (1)
136-145: ⚠️ Potential issue | 🟡 Minor

Wrap `app.ainvoke()` in `try/finally` to ensure safe shutdown.

The antipatterns section correctly demonstrates the safe pattern, but both code examples above still skip cleanup if `app.ainvoke()` raises. Both occurrences should wrap the invocation in `try/finally`:

Example fix:

```python
try:
    result = await app.ainvoke(
        {"messages": [("user", "What's the weather and traffic in SFO?")]}
    )
    print(result)
finally:
    await flow.shutdown()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/design_patterns/chatbot.md` around lines 136 - 145, Wrap the app.ainvoke() calls inside a try/finally so flow.shutdown() always runs even if ainvoke raises: call app.ainvoke(...) and print the result in the try block, and move await flow.shutdown() into the finally block (preserve awaiting shutdown); update both occurrences that currently call app.ainvoke() directly (e.g., inside main) to use this try/finally pattern around the invocation.

docs/design_patterns/chatbot.md-272-276 (1)
272-276: ⚠️ Potential issue | 🟡 Minor

The `invocation_id` troubleshooting note is backwards.

`**kwargs` accepts arbitrary extra keyword arguments, so it prevents—not causes—`got unexpected keyword argument 'invocation_id'`. A tool function lacking `**kwargs` will encounter this error when the engine injects the parameter. This row should point users to either add `**kwargs` to their tool signature or check for engine/version mismatches.

📝 Suggested row text

```diff
-| `TypeError: got unexpected keyword argument 'invocation_id'` | Your tool function has a `**kwargs` catch-all; remove it or check the engine version |
+| `TypeError: got unexpected keyword argument 'invocation_id'` | Your tool function does not accept the injected keyword arguments (add `**kwargs` to its signature) or the engine/tool call signatures are mismatched |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/design_patterns/chatbot.md` around lines 272 - 276, The troubleshooting row that claims "`TypeError: got unexpected keyword argument 'invocation_id'`" is backwards: change the guidance so it explains that the error occurs when a tool function does NOT accept extra keyword args (i.e., is missing **kwargs), and recommend either adding **kwargs to the tool function signature or ensuring engine/version compatibility; update the table row that mentions `invocation_id` and `**kwargs` (the symptom/likely cause pair) to reflect this corrected advice.
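The corrected advice can be demonstrated with plain Python functions; `get_weather` and the injected `invocation_id` below are illustrative stand-ins for a real tool and the engine's bookkeeping argument:

```python
def get_weather(city, **kwargs):
    """Tool with a **kwargs catch-all: engine-injected arguments such as
    invocation_id are absorbed silently instead of raising TypeError."""
    return f"sunny in {city}"


def strict_tool(city):
    """Tool without **kwargs: any injected keyword argument is an error."""
    return f"sunny in {city}"


# The tolerant tool accepts the extra keyword argument:
print(get_weather("SFO", invocation_id="abc-123"))  # sunny in SFO

# The strict tool raises the very error the troubleshooting row describes:
try:
    strict_tool("SFO", invocation_id="abc-123")
except TypeError as exc:
    print(f"TypeError: {exc}")
```

So `**kwargs` is the fix, not the cause: removing it from a tool signature is what makes the engine's injected `invocation_id` blow up.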
🧹 Nitpick comments (7)
.gitignore (1)
46-51: Duplicate `.venv/` entry in gitignore.

`.venv/` is already listed on line 20 under "Virtual environments". The duplicate on line 48 is redundant and can be removed for cleaner organization.

🧹 Proposed cleanup

```diff
 uv.lock
 # Ignore vscode AI rules
 .github/instructions/codacy.instructions.md
-.venv/
-
 tests/benchmark/results/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.gitignore around lines 46 - 51, Remove the duplicate ".venv/" entry from .gitignore (the second occurrence shown in the diff) so only the existing ".venv/" under the "Virtual environments" section remains; simply delete the redundant line containing ".venv/" to keep the file clean and avoid duplication.

src/flowgentic/core/schemas/agent_response.py (2)
5-8: Consider adding type parameters to `Dict` for better type safety.

Using `Dict[str, Any]` instead of bare `Dict` provides clearer documentation of the expected structure and better static analysis support.

📝 Proposed improvement

```diff
+from typing import Any, Dict, Optional
+
+
 class AgentResponse(BaseModel):
     reasoning: Optional[str] = None
-    tools_results: Optional[Dict] = None
+    tools_results: Optional[Dict[str, Any]] = None
     message: Optional[str] = None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/schemas/agent_response.py` around lines 5 - 8, The tools_results field in the AgentResponse model uses a bare Dict which reduces type safety; update the AgentResponse class to annotate tools_results as Dict[str, Any] (i.e., change tools_results: Optional[Dict] = None to tools_results: Optional[Dict[str, Any]] = None) and add Any to the typing imports so static checkers and readers know the expected key/value types.
1-2: Unused import: `List` is not used in this file.

The `List` type is imported but not referenced anywhere in the module.

🧹 Proposed fix

```diff
-from typing import Dict, List, Optional
+from typing import Any, Dict, Optional
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/schemas/agent_response.py` around lines 1 - 2, Remove the unused List import from the top of the module: update the import line that currently reads "from typing import Dict, List, Optional" to only import the types actually used (Dict and Optional) so the module no longer imports the unused symbol List referenced in agent_response.py; keep the pydantic BaseModel import as-is.

src/flowgentic/core/schemas/prompt_input.py (1)
5-11: Consider stronger typing for `conversation_history`.

The inline comment documents the expected structure `[{"role": "user", "content": "..."}]`. Consider defining a typed model for conversation messages to catch structural errors at validation time.

📝 Proposed improvement

```diff
-from typing import Optional, List
-from pydantic import BaseModel
+from typing import Any, Dict, List, Literal, Optional
+from pydantic import BaseModel
+
+
+class ConversationMessage(BaseModel):
+    role: Literal["user", "assistant", "system"]
+    content: str


 class PromptInput(BaseModel):
     user_input: str
     system_input: str
-    conversation_history: Optional[List[dict]] = (
-        None  # [{"role": "user", "content": "..."}, ...]
-    )
-    metadata: Optional[str] = None
+    conversation_history: Optional[List[ConversationMessage]] = None
+    metadata: Optional[Dict[str, Any]] = None
```

If you prefer to keep `dict` for flexibility with various message formats, at minimum use `Dict[str, Any]` for explicit typing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/schemas/prompt_input.py` around lines 5 - 11, The conversation_history field in PromptInput is too loosely typed; define a typed model (e.g., Message or ConversationMessage) with required fields like role: str and content: str and then change PromptInput.conversation_history to Optional[List[Message]] to enable Pydantic validation (or if you must stay flexible, change the type to Optional[List[Dict[str, Any]]] to be explicit). Update imports for typing (Dict, Any) or reference the new Message model in the same module so validation catches structural errors at runtime.

src/flowgentic/core/memory/memory_manager.py (1)
15-28: `NullMemoryManager` correctly implements the no-op pattern.

The docstring clearly describes the purpose. However, `get_context()` implicitly returns `None` via `pass` - consider an explicit `return None` for clarity.

📝 Explicit return for clarity

```diff
 def get_context(self):
-    pass
+    return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/memory/memory_manager.py` around lines 15 - 28, The get_context method in NullMemoryManager currently uses pass and implicitly returns None; update MemoryManger subclass NullMemoryManager by changing get_context to explicitly return None (and optionally make set_context explicitly return None as well) so the behavior is clear; locate the class NullMemoryManager and modify the get_context (and optionally set_context) methods to use an explicit return None for clarity.

src/flowgentic/core/models/implementations/openRouter.py (1)
17-19: Consider validating API key presence.

If `api_key` is not provided in `config`, the client will be initialized with `api_key=None`, which will likely fail later at runtime with a less clear error message.

🛡️ Suggested validation

```diff
 def __init__(self, model_id: str, **config):
     super().__init__(model_id, **config)
+    api_key = config.get("api_key")
+    if not api_key:
+        raise ValueError("api_key is required for OpenRouterModelProvider")
     # Initialize client once
     self.aclient = openai.AsyncOpenAI(
-        base_url="https://openrouter.ai/api/v1", api_key=config.get("api_key")
+        base_url="https://openrouter.ai/api/v1", api_key=api_key
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/models/implementations/openRouter.py` around lines 17 - 19, The constructor currently passes config.get("api_key") directly to openai.AsyncOpenAI which allows api_key=None; before creating self.aclient (where openai.AsyncOpenAI is called) validate that config contains a non-empty "api_key" (e.g., check config.get("api_key") truthiness) and if missing raise a clear exception (ValueError or custom) or log and raise so initialization of self.aclient fails fast with a descriptive message; update the code around the openai.AsyncOpenAI call (the self.aclient assignment) to perform this check and only construct openai.AsyncOpenAI when the key is present.

src/flowgentic/core/reasoner.py (1)
19-21: Add type hint for `memory_context` parameter.

The `memory_context` parameter lacks a type annotation, which reduces code clarity and IDE support.

```diff
 async def plan(
-    self, prompt_input: PromptInput, memory_context
+    self, prompt_input: PromptInput, memory_context: str
 ) -> ProvidersResponse:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/flowgentic/core/reasoner.py` around lines 19 - 21, Annotate the missing memory_context parameter on the async def plan method to improve type clarity: change "memory_context" to a concrete type such as "memory_context: Optional[MemoryContext]" (or "memory_context: Mapping[str, Any]" / "memory_context: Any" if MemoryContext isn't defined), and add the corresponding import (e.g., from typing import Optional, Mapping, Any or import the MemoryContext type used by your codebase); keep the rest of the signature returning ProvidersResponse and referencing PromptInput as-is.
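The `try/finally` cleanup pattern recommended in several findings above can be sketched with a stand-in engine. `FakeEngine` and its `create()`/`shutdown()` methods below are hypothetical, mimicking only the shape of `WorkflowEngine`, not Flowgentic's real backend:

```python
import asyncio


class FakeEngine:
    """Hypothetical stand-in for WorkflowEngine, for illustration only."""

    def __init__(self):
        self.shut_down = False

    @classmethod
    async def create(cls):
        return cls()

    async def shutdown(self):
        self.shut_down = True


async def start_app():
    flow = await FakeEngine.create()
    try:
        # ... build the graph and await app.ainvoke(...) here ...
        raise RuntimeError("workflow failed mid-run")  # simulate a failure
    finally:
        await flow.shutdown()  # cleanup runs even when the body raises
        print("shutdown ran:", flow.shut_down)


try:
    asyncio.run(start_app())
except RuntimeError as exc:
    print("caught:", exc)
```

The exception still propagates to the caller, but the backend is shut down first, which is the resource-release guarantee the AsyncFlow findings ask for.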
@mturilli any updates?
Objective: Added the new API for the flowgentic wrappers

Includes:

Excluded:

- `test/benchmark` content

Summary by CodeRabbit

Release Notes

New Features

- `@hpc_task` and `@hpc_block` decorators for simplified HPC task offloading in LangGraph workflows

Refactoring

Documentation