diff --git a/.env.example b/.env.example index 78a3b72c0..4e2a83405 100644 --- a/.env.example +++ b/.env.example @@ -1,16 +1,16 @@ -# LLM API配置(支持 OpenAI SDK 格式的任意 LLM API) -# 推荐使用阿里百炼平台qwen-plus模型:https://bailian.console.aliyun.com/ -# 注意消耗较大,可先进行小于40轮的模拟尝试 +# LLM API configuration (supports any LLM API compatible with the OpenAI SDK format) +# Recommended: use the qwen-plus model on Alibaba Bailian: https://bailian.console.aliyun.com/ +# Note: usage can be expensive, so try simulations with fewer than 40 rounds first LLM_API_KEY=your_api_key_here -LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1 -LLM_MODEL_NAME=qwen-plus +LLM_BASE_URL=https://api.openai.com/v1 +LLM_MODEL_NAME=gpt-4o -# ===== ZEP记忆图谱配置 ===== -# 每月免费额度即可支撑简单使用:https://app.getzep.com/ +# ===== ZEP memory graph configuration ===== +# The free monthly quota is enough for basic usage: https://app.getzep.com/ ZEP_API_KEY=your_zep_api_key_here -# ===== 加速 LLM 配置(可选)===== -# 注意如果不使用加速配置,env文件中就不要出现下面的配置项 +# ===== Accelerated LLM configuration (optional) ===== +# If you are not using accelerated configuration, do not include the fields below in your env file LLM_BOOST_API_KEY=your_api_key_here LLM_BOOST_BASE_URL=your_base_url_here -LLM_BOOST_MODEL_NAME=your_model_name_here \ No newline at end of file +LLM_BOOST_MODEL_NAME=your_model_name_here diff --git a/Dockerfile b/Dockerfile index e65646860..6a73d0a4b 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,29 +1,29 @@ FROM python:3.11 -# 安装 Node.js (满足 >=18)及必要工具 +# Install Node.js (version 18 or later) and required tools RUN apt-get update \ && apt-get install -y --no-install-recommends nodejs npm \ && rm -rf /var/lib/apt/lists/* -# 从 uv 官方镜像复制 uv +# Copy `uv` from the official uv image COPY --from=ghcr.io/astral-sh/uv:0.9.26 /uv /uvx /bin/ WORKDIR /app -# 先复制依赖描述文件以利用缓存 +# Copy dependency manifests first to take advantage of layer caching COPY package.json package-lock.json ./ COPY frontend/package.json frontend/package-lock.json ./frontend/ 
COPY backend/pyproject.toml backend/uv.lock ./backend/ -# 安装依赖(Node + Python) +# Install dependencies (Node + Python) RUN npm ci \ && npm ci --prefix frontend \ && cd backend && uv sync --frozen -# 复制项目源码 +# Copy the project source COPY . . EXPOSE 3000 5001 -# 同时启动前后端(开发模式) -CMD ["npm", "run", "dev"] \ No newline at end of file +# Start both frontend and backend services (development mode) +CMD ["npm", "run", "dev"] diff --git a/README-EN.md b/README-EN.md index 4b003a63f..023f304a9 100644 --- a/README-EN.md +++ b/README-EN.md @@ -4,7 +4,7 @@ 666ghj%2FMiroFish | Trendshift -简洁通用的群体智能引擎,预测万物 +A simple, universal swarm intelligence engine for predicting anything
A Simple and Universal Swarm Intelligence Engine, Predicting Anything @@ -20,7 +20,7 @@ [![X](https://img.shields.io/badge/X-Follow-000000?style=flat-square&logo=x&logoColor=white)](https://x.com/mirofish_ai) [![Instagram](https://img.shields.io/badge/Instagram-Follow-E4405F?style=flat-square&logo=instagram&logoColor=white)](https://www.instagram.com/mirofish_ai/) -[English](./README-EN.md) | [中文文档](./README.md) +[中文文档](./README.md) | [English](./README-EN.md) @@ -49,16 +49,16 @@ Welcome to visit our online demo environment and experience a prediction simulat
- - + + - - + + - - + +
Screenshot 1Screenshot 2Screenshot 1Screenshot 2
Screenshot 3Screenshot 4Screenshot 3Screenshot 4
Screenshot 5Screenshot 6Screenshot 5Screenshot 6
@@ -68,7 +68,7 @@ Welcome to visit our online demo environment and experience a prediction simulat ### 1. Wuhan University Public Opinion Simulation + MiroFish Project Introduction
-MiroFish Demo Video +MiroFish Demo Video Click the image to watch the complete demo video for prediction using BettaFish-generated "Wuhan University Public Opinion Report"
@@ -76,7 +76,7 @@ Click the image to watch the complete demo video for prediction using BettaFish- ### 2. Dream of the Red Chamber Lost Ending Simulation
-MiroFish Demo Video +MiroFish Demo Video Click the image to watch MiroFish's deep prediction of the lost ending based on hundreds of thousands of words from the first 80 chapters of "Dream of the Red Chamber"
@@ -179,7 +179,7 @@ Reads `.env` from root directory by default, maps ports `3000 (frontend) / 5001 ## 📬 Join the Conversation
-QQ Group +QQ Group
  @@ -200,4 +200,4 @@ MiroFish's simulation engine is powered by **[OASIS (Open Agent Social Interacti Star History Chart - \ No newline at end of file + diff --git a/README.md b/README.md index 4f5cffe74..0b1763b75 100644 --- a/README.md +++ b/README.md @@ -4,7 +4,7 @@ 666ghj%2FMiroFish | Trendshift -简洁通用的群体智能引擎,预测万物 +A simple, universal swarm intelligence engine for predicting anything
A Simple and Universal Swarm Intelligence Engine, Predicting Anything @@ -20,179 +20,179 @@ [![X](https://img.shields.io/badge/X-Follow-000000?style=flat-square&logo=x&logoColor=white)](https://x.com/mirofish_ai) [![Instagram](https://img.shields.io/badge/Instagram-Follow-E4405F?style=flat-square&logo=instagram&logoColor=white)](https://www.instagram.com/mirofish_ai/) -[English](./README-EN.md) | [中文文档](./README.md) +[中文文档](./README.md) | [English](./README-EN.md) -## ⚡ 项目概述 +## ⚡ Overview -**MiroFish** 是一款基于多智能体技术的新一代 AI 预测引擎。通过提取现实世界的种子信息(如突发新闻、政策草案、金融信号),自动构建出高保真的平行数字世界。在此空间内,成千上万个具备独立人格、长期记忆与行为逻辑的智能体进行自由交互与社会演化。你可透过「上帝视角」动态注入变量,精准推演未来走向——**让未来在数字沙盘中预演,助决策在百战模拟后胜出**。 +**MiroFish** is a next-generation AI prediction engine powered by multi-agent technology. By extracting seed information from the real world (such as breaking news, policy drafts, or financial signals), it automatically constructs a high-fidelity parallel digital world. Within this space, thousands of intelligent agents with independent personalities, long-term memory, and behavioral logic freely interact and undergo social evolution. You can inject variables dynamically from a "God's-eye view" to precisely deduce future trajectories — **rehearse the future in a digital sandbox, and win decisions after countless simulations**. -> 你只需:上传种子材料(数据分析报告或者有趣的小说故事),并用自然语言描述预测需求
-> MiroFish 将返回:一份详尽的预测报告,以及一个可深度交互的高保真数字世界 +> You only need to upload seed materials (data analysis reports or interesting novel stories) and describe your prediction requirements in natural language
+> MiroFish will return: a detailed prediction report and a deeply interactive high-fidelity digital world -### 我们的愿景 +### Our Vision -MiroFish 致力于打造映射现实的群体智能镜像,通过捕捉个体互动引发的群体涌现,突破传统预测的局限: +MiroFish is dedicated to creating a swarm intelligence mirror that maps reality. By capturing the collective emergence triggered by individual interactions, we break through the limitations of traditional prediction: -- **于宏观**:我们是决策者的预演实验室,让政策与公关在零风险中试错 -- **于微观**:我们是个人用户的创意沙盘,无论是推演小说结局还是探索脑洞,皆可有趣、好玩、触手可及 +- **At the Macro Level**: We are a rehearsal laboratory for decision-makers, allowing policies and public relations to be tested at zero risk +- **At the Micro Level**: We are a creative sandbox for individual users, whether deducing novel endings or exploring imaginative scenarios, everything can be fun, playful, and accessible -从严肃预测到趣味仿真,我们让每一个如果都能看见结果,让预测万物成为可能。 +From serious predictions to playful simulations, we let every "what if" see its outcome, making it possible to predict anything. -## 🌐 在线体验 +## 🌐 Live Demo -欢迎访问在线 Demo 演示环境,体验我们为你准备的一次关于热点舆情事件的推演预测:[mirofish-live-demo](https://666ghj.github.io/mirofish-demo/) +Visit our online demo environment and experience a prediction simulation around a trending public-opinion event: [mirofish-live-demo](https://666ghj.github.io/mirofish-demo/) -## 📸 系统截图 +## 📸 Screenshots
- - + + - - + + - - + +
截图1截图2Screenshot 1Screenshot 2
截图3截图4Screenshot 3Screenshot 4
截图5截图6Screenshot 5Screenshot 6
-## 🎬 演示视频 +## 🎬 Demo Videos -### 1. 武汉大学舆情推演预测 + MiroFish项目讲解 +### 1. Wuhan University Public Opinion Simulation + MiroFish Project Introduction
-MiroFish Demo Video +MiroFish Demo Video -点击图片查看使用微舆BettaFish生成的《武大舆情报告》进行预测的完整演示视频 +Click the image to watch the complete demo video of a prediction run based on the "Wuhan University Public Opinion Report" generated by BettaFish.
-### 2. 《红楼梦》失传结局推演预测 +### 2. Dream of the Red Chamber Lost Ending Simulation
-MiroFish Demo Video +MiroFish Demo Video -点击图片查看基于《红楼梦》前80回数十万字,MiroFish深度预测失传结局 +Click the image to watch MiroFish predict the lost ending based on the first 80 chapters of *Dream of the Red Chamber*.
-> **金融方向推演预测**、**时政要闻推演预测**等示例陆续更新中... +> **Financial prediction**, **current-events forecasting**, and more examples are coming soon. -## 🔄 工作流程 +## 🔄 Workflow -1. **图谱构建**:现实种子提取 & 个体与群体记忆注入 & GraphRAG构建 -2. **环境搭建**:实体关系抽取 & 人设生成 & 环境配置Agent注入仿真参数 -3. **开始模拟**:双平台并行模拟 & 自动解析预测需求 & 动态更新时序记忆 -4. **报告生成**:ReportAgent拥有丰富的工具集与模拟后环境进行深度交互 -5. **深度互动**:与模拟世界中的任意一位进行对话 & 与ReportAgent进行对话 +1. **Graph Building**: Seed extraction, individual and collective memory injection, and GraphRAG construction +2. **Environment Setup**: Entity relationship extraction, persona generation, and agent configuration injection +3. **Simulation**: Dual-platform parallel simulation, automatic prediction-requirement parsing, and dynamic temporal memory updates +4. **Report Generation**: ReportAgent uses a rich toolset to interact deeply with the post-simulation environment +5. **Deep Interaction**: Chat with any agent in the simulated world and continue the conversation with ReportAgent -## 🚀 快速开始 +## 🚀 Quick Start -### 一、源码部署(推荐) +### Option 1: Source Deployment (Recommended) -#### 前置要求 +#### Prerequisites -| 工具 | 版本要求 | 说明 | 安装检查 | -|------|---------|------|---------| -| **Node.js** | 18+ | 前端运行环境,包含 npm | `node -v` | -| **Python** | ≥3.11, ≤3.12 | 后端运行环境 | `python --version` | -| **uv** | 最新版 | Python 包管理器 | `uv --version` | +| Tool | Version | Description | Check Installation | +|------|---------|-------------|-------------------| +| **Node.js** | 18+ | Frontend runtime, includes npm | `node -v` | +| **Python** | ≥3.11, ≤3.12 | Backend runtime | `python --version` | +| **uv** | Latest | Python package manager | `uv --version` | -#### 1. 配置环境变量 +#### 1. 
Configure Environment Variables ```bash -# 复制示例配置文件 +# Copy the example configuration file cp .env.example .env -# 编辑 .env 文件,填入必要的 API 密钥 +# Edit the .env file and fill in the required API keys ``` -**必需的环境变量:** +**Required Environment Variables:** ```env -# LLM API配置(支持 OpenAI SDK 格式的任意 LLM API) -# 推荐使用阿里百炼平台qwen-plus模型:https://bailian.console.aliyun.com/ -# 注意消耗较大,可先进行小于40轮的模拟尝试 +# LLM API configuration (supports any LLM API compatible with the OpenAI SDK format) +# Recommended: use the qwen-plus model on Alibaba Bailian: https://bailian.console.aliyun.com/ +# Note: usage can be expensive, so try simulations with fewer than 40 rounds first LLM_API_KEY=your_api_key LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1 LLM_MODEL_NAME=qwen-plus -# Zep Cloud 配置 -# 每月免费额度即可支撑简单使用:https://app.getzep.com/ +# ZEP memory graph configuration +# The free monthly quota is enough for basic usage: https://app.getzep.com/ ZEP_API_KEY=your_zep_api_key ``` -#### 2. 安装依赖 +#### 2. Install Dependencies ```bash -# 一键安装所有依赖(根目录 + 前端 + 后端) +# One-click installation of all dependencies (root + frontend + backend) npm run setup:all ``` -或者分步安装: +Or install them step by step: ```bash -# 安装 Node 依赖(根目录 + 前端) +# Install Node dependencies (root + frontend) npm run setup -# 安装 Python 依赖(后端,自动创建虚拟环境) +# Install Python dependencies (backend, auto-creates virtual environment) npm run setup:backend ``` -#### 3. 启动服务 +#### 3. Start Services ```bash -# 同时启动前后端(在项目根目录执行) +# Start both frontend and backend (run from the project root) npm run dev ``` -**服务地址:** -- 前端:`http://localhost:3000` -- 后端 API:`http://localhost:5001` +**Service URLs:** +- Frontend: `http://localhost:3000` +- Backend API: `http://localhost:5001` -**单独启动:** +**Start Individually:** ```bash -npm run backend # 仅启动后端 -npm run frontend # 仅启动前端 +npm run backend # Start the backend only +npm run frontend # Start the frontend only ``` -### 二、Docker 部署 +### Option 2: Docker Deployment ```bash -# 1. 配置环境变量(同源码部署) +# 1. 
Configure environment variables (same as source deployment) cp .env.example .env -# 2. 拉取镜像并启动 +# 2. Pull the image and start docker compose up -d ``` -默认会读取根目录下的 `.env`,并映射端口 `3000(前端)/5001(后端)` +Docker reads `.env` from the project root by default and maps ports `3000 (frontend) / 5001 (backend)`. -> 在 `docker-compose.yml` 中已通过注释提供加速镜像地址,可按需替换 +> An accelerated mirror registry address is provided in a comment in `docker-compose.yml`; swap it in if you need a faster pull source. -## 📬 更多交流 +## 📬 Join the Conversation <div align="center">
-QQ交流群 +QQ Group
  -MiroFish团队长期招募全职/实习,如果你对多Agent应用感兴趣,欢迎投递简历至:**mirofish@shanda.com** +The MiroFish team is recruiting for full-time and internship roles. If you are interested in multi-agent simulation and LLM applications, send your resume to: **mirofish@shanda.com** -## 📄 致谢 +## 📄 Acknowledgments -**MiroFish 得到了盛大集团的战略支持和孵化!** +**MiroFish has received strategic support and incubation from Shanda Group.** -MiroFish 的仿真引擎由 **[OASIS](https://github.com/camel-ai/oasis)** 驱动,我们衷心感谢 CAMEL-AI 团队的开源贡献! +MiroFish's simulation engine is powered by **[OASIS](https://github.com/camel-ai/oasis)**, and we sincerely thank the CAMEL-AI team for their open-source contributions. -## 📈 项目统计 +## 📈 Project Statistics diff --git a/backend/app/api/graph.py b/backend/app/api/graph.py index 12ff1ba2d..053fbbb60 100644 --- a/backend/app/api/graph.py +++ b/backend/app/api/graph.py @@ -42,7 +42,7 @@ def get_project(project_id: str): if not project: return jsonify({ "success": False, - "error": f"项目不存在: {project_id}" + "error": f"Project not found: {project_id}" }), 404 return jsonify({ @@ -76,12 +76,12 @@ def delete_project(project_id: str): if not success: return jsonify({ "success": False, - "error": f"项目不存在或删除失败: {project_id}" + "error": f"Project not found or could not be deleted: {project_id}" }), 404 return jsonify({ "success": True, - "message": f"项目已删除: {project_id}" + "message": f"Project deleted: {project_id}" }) @@ -95,7 +95,7 @@ def reset_project(project_id: str): if not project: return jsonify({ "success": False, - "error": f"项目不存在: {project_id}" + "error": f"Project not found: {project_id}" }), 404 # 重置到本体已生成状态 @@ -111,7 +111,7 @@ def reset_project(project_id: str): return jsonify({ "success": True, - "message": f"项目已重置: {project_id}", + "message": f"Project reset: {project_id}", "data": project.to_dict() }) @@ -147,20 +147,20 @@ def generate_ontology(): } """ try: - logger.info("=== 开始生成本体定义 ===") + logger.info("=== Starting ontology generation ===") # 获取参数 simulation_requirement = 
request.form.get('simulation_requirement', '') project_name = request.form.get('project_name', 'Unnamed Project') additional_context = request.form.get('additional_context', '') - logger.debug(f"项目名称: {project_name}") - logger.debug(f"模拟需求: {simulation_requirement[:100]}...") + logger.debug(f"Project name: {project_name}") + logger.debug(f"Simulation requirement: {simulation_requirement[:100]}...") if not simulation_requirement: return jsonify({ "success": False, - "error": "请提供模拟需求描述 (simulation_requirement)" + "error": "Please provide a simulation requirement description (simulation_requirement)." }), 400 # 获取上传的文件 @@ -168,13 +168,13 @@ def generate_ontology(): if not uploaded_files or all(not f.filename for f in uploaded_files): return jsonify({ "success": False, - "error": "请至少上传一个文档文件" + "error": "Please upload at least one document file." }), 400 # 创建项目 project = ProjectManager.create_project(name=project_name) project.simulation_requirement = simulation_requirement - logger.info(f"创建项目: {project.project_id}") + logger.info(f"Created project: {project.project_id}") # 保存文件并提取文本 document_texts = [] @@ -203,16 +203,16 @@ def generate_ontology(): ProjectManager.delete_project(project.project_id) return jsonify({ "success": False, - "error": "没有成功处理任何文档,请检查文件格式" + "error": "No documents were processed successfully. Please check the file format." 
}), 400 # 保存提取的文本 project.total_text_length = len(all_text) ProjectManager.save_extracted_text(project.project_id, all_text) - logger.info(f"文本提取完成,共 {len(all_text)} 字符") + logger.info(f"Text extraction completed: {len(all_text)} characters") # 生成本体 - logger.info("调用 LLM 生成本体定义...") + logger.info("Calling the LLM to generate the ontology...") generator = OntologyGenerator() ontology = generator.generate( document_texts=document_texts, @@ -223,7 +223,7 @@ def generate_ontology(): # 保存本体到项目 entity_count = len(ontology.get("entity_types", [])) edge_count = len(ontology.get("edge_types", [])) - logger.info(f"本体生成完成: {entity_count} 个实体类型, {edge_count} 个关系类型") + logger.info(f"Ontology generation completed: {entity_count} entity types, {edge_count} edge types") project.ontology = { "entity_types": ontology.get("entity_types", []), @@ -232,7 +232,7 @@ def generate_ontology(): project.analysis_summary = ontology.get("analysis_summary", "") project.status = ProjectStatus.ONTOLOGY_GENERATED ProjectManager.save_project(project) - logger.info(f"=== 本体生成完成 === 项目ID: {project.project_id}") + logger.info(f"=== Ontology generation completed === project_id: {project.project_id}") return jsonify({ "success": True, @@ -275,33 +275,33 @@ def build_graph(): "data": { "project_id": "proj_xxxx", "task_id": "task_xxxx", - "message": "图谱构建任务已启动" + "message": "Graph build task started" } } """ try: - logger.info("=== 开始构建图谱 ===") + logger.info("=== Starting graph build ===") # 检查配置 errors = [] if not Config.ZEP_API_KEY: - errors.append("ZEP_API_KEY未配置") + errors.append("ZEP_API_KEY is not configured") if errors: - logger.error(f"配置错误: {errors}") + logger.error(f"Configuration error: {errors}") return jsonify({ "success": False, - "error": "配置错误: " + "; ".join(errors) + "error": "Configuration error: " + "; ".join(errors) }), 500 # 解析请求 data = request.get_json() or {} project_id = data.get('project_id') - logger.debug(f"请求参数: project_id={project_id}") + logger.debug(f"Request parameters: 
project_id={project_id}") if not project_id: return jsonify({ "success": False, - "error": "请提供 project_id" + "error": "Please provide project_id." }), 400 # 获取项目 @@ -309,7 +309,7 @@ def build_graph(): if not project: return jsonify({ "success": False, - "error": f"项目不存在: {project_id}" + "error": f"Project not found: {project_id}" }), 404 # 检查项目状态 @@ -318,13 +318,13 @@ def build_graph(): if project.status == ProjectStatus.CREATED: return jsonify({ "success": False, - "error": "项目尚未生成本体,请先调用 /ontology/generate" + "error": "The project does not have an ontology yet. Call /ontology/generate first." }), 400 if project.status == ProjectStatus.GRAPH_BUILDING and not force: return jsonify({ "success": False, - "error": "图谱正在构建中,请勿重复提交。如需强制重建,请添加 force: true", + "error": "A graph build is already in progress. To force a rebuild, set force: true.", "task_id": project.graph_build_task_id }), 400 @@ -349,7 +349,7 @@ def build_graph(): if not text: return jsonify({ "success": False, - "error": "未找到提取的文本内容" + "error": "Extracted text content was not found." }), 400 # 获取本体 @@ -357,13 +357,13 @@ def build_graph(): if not ontology: return jsonify({ "success": False, - "error": "未找到本体定义" + "error": "Ontology definition was not found." }), 400 # 创建异步任务 task_manager = TaskManager() - task_id = task_manager.create_task(f"构建图谱: {graph_name}") - logger.info(f"创建图谱构建任务: task_id={task_id}, project_id={project_id}") + task_id = task_manager.create_task(f"Build graph: {graph_name}") + logger.info(f"Created graph build task: task_id={task_id}, project_id={project_id}") # 更新项目状态 project.status = ProjectStatus.GRAPH_BUILDING @@ -374,11 +374,11 @@ def build_graph(): def build_task(): build_logger = get_logger('mirofish.build') try: - build_logger.info(f"[{task_id}] 开始构建图谱...") + build_logger.info(f"[{task_id}] Starting graph build...") task_manager.update_task( task_id, status=TaskStatus.PROCESSING, - message="初始化图谱构建服务..." + message="Initializing the graph build service..." 
) # 创建图谱构建服务 @@ -387,7 +387,7 @@ def build_task(): # 分块 task_manager.update_task( task_id, - message="文本分块中...", + message="Chunking text...", progress=5 ) chunks = TextProcessor.split_text( @@ -400,7 +400,7 @@ def build_task(): # 创建图谱 task_manager.update_task( task_id, - message="创建Zep图谱...", + message="Creating the Zep graph...", progress=10 ) graph_id = builder.create_graph(name=graph_name) @@ -412,7 +412,7 @@ def build_task(): # 设置本体 task_manager.update_task( task_id, - message="设置本体定义...", + message="Applying the ontology definition...", progress=15 ) builder.set_ontology(graph_id, ontology) @@ -428,7 +428,7 @@ def add_progress_callback(msg, progress_ratio): task_manager.update_task( task_id, - message=f"开始添加 {total_chunks} 个文本块...", + message=f"Adding {total_chunks} text chunks...", progress=15 ) @@ -442,7 +442,7 @@ def add_progress_callback(msg, progress_ratio): # 等待Zep处理完成(查询每个episode的processed状态) task_manager.update_task( task_id, - message="等待Zep处理数据...", + message="Waiting for Zep to process the data...", progress=55 ) @@ -459,7 +459,7 @@ def wait_progress_callback(msg, progress_ratio): # 获取图谱数据 task_manager.update_task( task_id, - message="获取图谱数据...", + message="Fetching graph data...", progress=95 ) graph_data = builder.get_graph_data(graph_id) @@ -470,13 +470,13 @@ def wait_progress_callback(msg, progress_ratio): node_count = graph_data.get("node_count", 0) edge_count = graph_data.get("edge_count", 0) - build_logger.info(f"[{task_id}] 图谱构建完成: graph_id={graph_id}, 节点={node_count}, 边={edge_count}") + build_logger.info(f"[{task_id}] Graph build completed: graph_id={graph_id}, nodes={node_count}, edges={edge_count}") # 完成 task_manager.update_task( task_id, status=TaskStatus.COMPLETED, - message="图谱构建完成", + message="Graph build completed", progress=100, result={ "project_id": project_id, @@ -489,7 +489,7 @@ def wait_progress_callback(msg, progress_ratio): except Exception as e: # 更新项目状态为失败 - build_logger.error(f"[{task_id}] 图谱构建失败: {str(e)}") + 
build_logger.error(f"[{task_id}] Graph build failed: {str(e)}") build_logger.debug(traceback.format_exc()) project.status = ProjectStatus.FAILED @@ -499,7 +499,7 @@ def wait_progress_callback(msg, progress_ratio): task_manager.update_task( task_id, status=TaskStatus.FAILED, - message=f"构建失败: {str(e)}", + message=f"Build failed: {str(e)}", error=traceback.format_exc() ) @@ -512,7 +512,7 @@ def wait_progress_callback(msg, progress_ratio): "data": { "project_id": project_id, "task_id": task_id, - "message": "图谱构建任务已启动,请通过 /task/{task_id} 查询进度" + "message": "Graph build task started. Check progress via /task/{task_id}." } }) @@ -536,7 +536,7 @@ def get_task(task_id: str): if not task: return jsonify({ "success": False, - "error": f"任务不存在: {task_id}" + "error": f"Task not found: {task_id}" }), 404 return jsonify({ @@ -570,7 +570,7 @@ def get_graph_data(graph_id: str): if not Config.ZEP_API_KEY: return jsonify({ "success": False, - "error": "ZEP_API_KEY未配置" + "error": "ZEP_API_KEY is not configured" }), 500 builder = GraphBuilderService(api_key=Config.ZEP_API_KEY) @@ -598,7 +598,7 @@ def delete_graph(graph_id: str): if not Config.ZEP_API_KEY: return jsonify({ "success": False, - "error": "ZEP_API_KEY未配置" + "error": "ZEP_API_KEY is not configured" }), 500 builder = GraphBuilderService(api_key=Config.ZEP_API_KEY) @@ -606,7 +606,7 @@ def delete_graph(graph_id: str): return jsonify({ "success": True, - "message": f"图谱已删除: {graph_id}" + "message": f"Graph deleted: {graph_id}" }) except Exception as e: diff --git a/backend/app/models/task.py b/backend/app/models/task.py index e15f35fbd..f1fabd586 100644 --- a/backend/app/models/task.py +++ b/backend/app/models/task.py @@ -148,7 +148,7 @@ def complete_task(self, task_id: str, result: Dict): task_id, status=TaskStatus.COMPLETED, progress=100, - message="任务完成", + message="Task completed", result=result ) @@ -157,7 +157,7 @@ def fail_task(self, task_id: str, error: str): self.update_task( task_id, status=TaskStatus.FAILED, - 
message="任务失败", + message="Task failed", error=error ) @@ -181,4 +181,3 @@ def cleanup_old_tasks(self, max_age_hours: int = 24): ] for tid in old_ids: del self._tasks[tid] - diff --git a/backend/app/services/graph_builder.py b/backend/app/services/graph_builder.py index 0e0444bf3..e75f9700d 100644 --- a/backend/app/services/graph_builder.py +++ b/backend/app/services/graph_builder.py @@ -7,6 +7,7 @@ import uuid import time import threading +import logging from typing import Dict, Any, List, Optional, Callable from dataclasses import dataclass @@ -16,9 +17,13 @@ from ..config import Config from ..models.task import TaskManager, TaskStatus from ..utils.zep_paging import fetch_all_nodes, fetch_all_edges +from ..utils.ontology_normalizer import normalize_ontology_for_zep from .text_processor import TextProcessor +logger = logging.getLogger(__name__) + + @dataclass class GraphInfo: """图谱信息""" @@ -206,6 +211,15 @@ def set_ontology(self, graph_id: str, ontology: Dict[str, Any]): # 抑制 Pydantic v2 关于 Field(default=None) 的警告 # 这是 Zep SDK 要求的用法,警告来自动态类创建,可以安全忽略 warnings.filterwarnings('ignore', category=UserWarning, module='pydantic') + + ontology, entity_name_mapping = normalize_ontology_for_zep(ontology) + renamed_entities = { + original: normalized + for original, normalized in entity_name_mapping.items() + if original != normalized + } + if renamed_entities: + logger.info("Normalized ontology entity names for Zep compatibility: %s", renamed_entities) # Zep 保留名称,不能作为属性名 RESERVED_NAMES = {'uuid', 'name', 'group_id', 'name_embedding', 'summary', 'created_at'} @@ -497,4 +511,3 @@ def get_graph_data(self, graph_id: str) -> Dict[str, Any]: def delete_graph(self, graph_id: str): """删除图谱""" self.client.graph.delete(graph_id=graph_id) - diff --git a/backend/app/services/ontology_generator.py b/backend/app/services/ontology_generator.py index 2d3e39bd8..0e8d79369 100644 --- a/backend/app/services/ontology_generator.py +++ b/backend/app/services/ontology_generator.py @@ -6,6 +6,7 @@ 
import json from typing import Dict, Any, List, Optional from ..utils.llm_client import LLMClient +from ..utils.ontology_normalizer import normalize_ontology_for_zep # 本体生成的系统提示词 @@ -61,7 +62,7 @@ "name": "关系类型名称(英文,UPPER_SNAKE_CASE)", "description": "简短描述(英文,不超过100字符)", "source_targets": [ - {"source": "源实体类型", "target": "目标实体类型"} + {"source": "源实体类型(必须与实体类型名称完全一致)", "target": "目标实体类型(必须与实体类型名称完全一致)"} ], "attributes": [] } @@ -250,6 +251,7 @@ def _build_user_message( 3. 前8个是根据文本内容设计的具体类型 4. 所有实体类型必须是现实中可以发声的主体,不能是抽象概念 5. 属性名不能使用 name、uuid、group_id 等保留字,用 full_name、org_name 等替代 +6. 实体类型名称必须只包含字母和数字,不能包含下划线、空格、连字符,例如 `StudentLeader` 是合法的,`Student_Leader` 不合法 """ return message @@ -341,8 +343,9 @@ def _validate_and_process(self, result: Dict[str, Any]) -> Dict[str, Any]: if len(result["edge_types"]) > MAX_EDGE_TYPES: result["edge_types"] = result["edge_types"][:MAX_EDGE_TYPES] - - return result + + normalized_result, _ = normalize_ontology_for_zep(result) + return normalized_result def generate_python_code(self, ontology: Dict[str, Any]) -> str: """ @@ -450,4 +453,3 @@ def generate_python_code(self, ontology: Dict[str, Any]) -> str: code_lines.append('}') return '\n'.join(code_lines) - diff --git a/backend/app/services/simulation_manager.py b/backend/app/services/simulation_manager.py index 96c496fd4..15f4d7877 100644 --- a/backend/app/services/simulation_manager.py +++ b/backend/app/services/simulation_manager.py @@ -260,7 +260,7 @@ def prepare_simulation( """ state = self._load_simulation_state(simulation_id) if not state: - raise ValueError(f"模拟不存在: {simulation_id}") + raise ValueError(f"Simulation not found: {simulation_id}") try: state.status = SimulationStatus.PREPARING @@ -270,12 +270,12 @@ def prepare_simulation( # ========== 阶段1: 读取并过滤实体 ========== if progress_callback: - progress_callback("reading", 0, "正在连接Zep图谱...") + progress_callback("reading", 0, "Connecting to the Zep graph...") reader = ZepEntityReader() if progress_callback: - 
progress_callback("reading", 30, "正在读取节点数据...") + progress_callback("reading", 30, "Reading node data...") filtered = reader.filter_defined_entities( graph_id=state.graph_id, @@ -289,14 +289,14 @@ def prepare_simulation( if progress_callback: progress_callback( "reading", 100, - f"完成,共 {filtered.filtered_count} 个实体", + f"Completed. {filtered.filtered_count} entities found.", current=filtered.filtered_count, total=filtered.filtered_count ) if filtered.filtered_count == 0: state.status = SimulationStatus.FAILED - state.error = "没有找到符合条件的实体,请检查图谱是否正确构建" + state.error = "No matching entities were found. Please verify that the graph was built correctly." self._save_simulation_state(state) return state @@ -306,7 +306,7 @@ def prepare_simulation( if progress_callback: progress_callback( "generating_profiles", 0, - "开始生成...", + "Starting profile generation...", current=0, total=total_entities ) @@ -352,7 +352,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_profiles", 95, - "保存Profile文件...", + "Saving profile files...", current=total_entities, total=total_entities ) @@ -375,7 +375,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_profiles", 100, - f"完成,共 {len(profiles)} 个Profile", + f"Completed. 
{len(profiles)} profiles generated.", current=len(profiles), total=len(profiles) ) @@ -384,7 +384,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_config", 0, - "正在分析模拟需求...", + "Analyzing the simulation requirement...", current=0, total=3 ) @@ -394,7 +394,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_config", 30, - "正在调用LLM生成配置...", + "Generating configuration with the LLM...", current=1, total=3 ) @@ -413,7 +413,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_config", 70, - "正在保存配置文件...", + "Saving the configuration file...", current=2, total=3 ) @@ -429,7 +429,7 @@ def profile_progress(current, total, msg): if progress_callback: progress_callback( "generating_config", 100, - "配置生成完成", + "Configuration generation completed.", current=3, total=3 ) @@ -481,7 +481,7 @@ def get_profiles(self, simulation_id: str, platform: str = "reddit") -> List[Dic """获取模拟的Agent Profile""" state = self._load_simulation_state(simulation_id) if not state: - raise ValueError(f"模拟不存在: {simulation_id}") + raise ValueError(f"Simulation not found: {simulation_id}") sim_dir = self._get_simulation_dir(simulation_id) profile_path = os.path.join(sim_dir, f"{platform}_profiles.json") @@ -519,10 +519,10 @@ def get_run_instructions(self, simulation_id: str) -> Dict[str, str]: "parallel": f"python {scripts_dir}/run_parallel_simulation.py --config {config_path}", }, "instructions": ( - f"1. 激活conda环境: conda activate MiroFish\n" - f"2. 运行模拟 (脚本位于 {scripts_dir}):\n" - f" - 单独运行Twitter: python {scripts_dir}/run_twitter_simulation.py --config {config_path}\n" - f" - 单独运行Reddit: python {scripts_dir}/run_reddit_simulation.py --config {config_path}\n" - f" - 并行运行双平台: python {scripts_dir}/run_parallel_simulation.py --config {config_path}" + f"1. Activate the conda environment: conda activate MiroFish\n" + f"2. 
Run the simulation (scripts are in {scripts_dir}):\n" + f" - Twitter only: python {scripts_dir}/run_twitter_simulation.py --config {config_path}\n" + f" - Reddit only: python {scripts_dir}/run_reddit_simulation.py --config {config_path}\n" + f" - Run both platforms in parallel: python {scripts_dir}/run_parallel_simulation.py --config {config_path}" ) } diff --git a/backend/app/services/zep_tools.py b/backend/app/services/zep_tools.py index 384cf540f..144a22f15 100644 --- a/backend/app/services/zep_tools.py +++ b/backend/app/services/zep_tools.py @@ -43,10 +43,10 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: """转换为文本格式,供LLM理解""" - text_parts = [f"搜索查询: {self.query}", f"找到 {self.total_count} 条相关信息"] + text_parts = [f"Search Query: {self.query}", f"Found {self.total_count} relevant items"] if self.facts: - text_parts.append("\n### 相关事实:") + text_parts.append("\n### Related Facts:") for i, fact in enumerate(self.facts, 1): text_parts.append(f"{i}. {fact}") @@ -73,8 +73,8 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: """转换为文本格式""" - entity_type = next((l for l in self.labels if l not in ["Entity", "Node"]), "未知类型") - return f"实体: {self.name} (类型: {entity_type})\n摘要: {self.summary}" + entity_type = next((l for l in self.labels if l not in ["Entity", "Node"]), "Unknown Type") + return f"Entity: {self.name} (Type: {entity_type})\nSummary: {self.summary}" @dataclass @@ -112,14 +112,14 @@ def to_text(self, include_temporal: bool = False) -> str: """转换为文本格式""" source = self.source_node_name or self.source_node_uuid[:8] target = self.target_node_name or self.target_node_uuid[:8] - base_text = f"关系: {source} --[{self.name}]--> {target}\n事实: {self.fact}" + base_text = f"Relation: {source} --[{self.name}]--> {target}\nFact: {self.fact}" if include_temporal: - valid_at = self.valid_at or "未知" - invalid_at = self.invalid_at or "至今" - base_text += f"\n时效: {valid_at} - {invalid_at}" + valid_at = self.valid_at or "Unknown" + invalid_at = 
self.invalid_at or "Present" + base_text += f"\nValidity: {valid_at} - {invalid_at}" if self.expired_at: - base_text += f" (已过期: {self.expired_at})" + base_text += f" (Expired: {self.expired_at})" return base_text @@ -170,40 +170,40 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: """转换为详细的文本格式,供LLM理解""" text_parts = [ - f"## 未来预测深度分析", - f"分析问题: {self.query}", - f"预测场景: {self.simulation_requirement}", - f"\n### 预测数据统计", - f"- 相关预测事实: {self.total_facts}条", - f"- 涉及实体: {self.total_entities}个", - f"- 关系链: {self.total_relationships}条" + "## Deep Forecast Analysis", + f"Analysis Question: {self.query}", + f"Prediction Scenario: {self.simulation_requirement}", + "\n### Forecast Statistics", + f"- Relevant Forecast Facts: {self.total_facts}", + f"- Entities Involved: {self.total_entities}", + f"- Relationship Chains: {self.total_relationships}" ] # 子问题 if self.sub_queries: - text_parts.append(f"\n### 分析的子问题") + text_parts.append("\n### Analysis Sub-questions") for i, sq in enumerate(self.sub_queries, 1): text_parts.append(f"{i}. {sq}") # 语义搜索结果 if self.semantic_facts: - text_parts.append(f"\n### 【关键事实】(请在报告中引用这些原文)") + text_parts.append("\n### Key Facts") for i, fact in enumerate(self.semantic_facts, 1): text_parts.append(f"{i}. 
\"{fact}\"") # 实体洞察 if self.entity_insights: - text_parts.append(f"\n### 【核心实体】") + text_parts.append("\n### Core Entities") for entity in self.entity_insights: - text_parts.append(f"- **{entity.get('name', '未知')}** ({entity.get('type', '实体')})") + text_parts.append(f"- **{entity.get('name', 'Unknown')}** ({entity.get('type', 'Entity')})") if entity.get('summary'): - text_parts.append(f" 摘要: \"{entity.get('summary')}\"") + text_parts.append(f" Summary: \"{entity.get('summary')}\"") if entity.get('related_facts'): - text_parts.append(f" 相关事实: {len(entity.get('related_facts', []))}条") + text_parts.append(f" Related Facts: {len(entity.get('related_facts', []))}") # 关系链 if self.relationship_chains: - text_parts.append(f"\n### 【关系链】") + text_parts.append("\n### Relationship Chains") for chain in self.relationship_chains: text_parts.append(f"- {chain}") @@ -249,32 +249,32 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: """转换为文本格式(完整版本,不截断)""" text_parts = [ - f"## 广度搜索结果(未来全景视图)", - f"查询: {self.query}", - f"\n### 统计信息", - f"- 总节点数: {self.total_nodes}", - f"- 总边数: {self.total_edges}", - f"- 当前有效事实: {self.active_count}条", - f"- 历史/过期事实: {self.historical_count}条" + "## Panorama Search Results", + f"Query: {self.query}", + "\n### Statistics", + f"- Total Nodes: {self.total_nodes}", + f"- Total Edges: {self.total_edges}", + f"- Active Facts: {self.active_count}", + f"- Historical / Expired Facts: {self.historical_count}" ] # 当前有效的事实(完整输出,不截断) if self.active_facts: - text_parts.append(f"\n### 【当前有效事实】(模拟结果原文)") + text_parts.append("\n### Active Facts") for i, fact in enumerate(self.active_facts, 1): text_parts.append(f"{i}. \"{fact}\"") # 历史/过期事实(完整输出,不截断) if self.historical_facts: - text_parts.append(f"\n### 【历史/过期事实】(演变过程记录)") + text_parts.append("\n### Historical / Expired Facts") for i, fact in enumerate(self.historical_facts, 1): text_parts.append(f"{i}. 
\"{fact}\"") # 关键实体(完整输出,不截断) if self.all_nodes: - text_parts.append(f"\n### 【涉及实体】") + text_parts.append("\n### Entities Involved") for node in self.all_nodes: - entity_type = next((l for l in node.labels if l not in ["Entity", "Node"]), "实体") + entity_type = next((l for l in node.labels if l not in ["Entity", "Node"]), "Entity") text_parts.append(f"- **{node.name}** ({entity_type})") return "\n".join(text_parts) @@ -303,11 +303,11 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: text = f"**{self.agent_name}** ({self.agent_role})\n" # 显示完整的agent_bio,不截断 - text += f"_简介: {self.agent_bio}_\n\n" + text += f"_Bio: {self.agent_bio}_\n\n" text += f"**Q:** {self.question}\n\n" text += f"**A:** {self.response}\n" if self.key_quotes: - text += "\n**关键引言:**\n" + text += "\n**Key Quotes:**\n" for quote in self.key_quotes: # 清理各种引号 clean_quote = quote.replace('\u201c', '').replace('\u201d', '').replace('"', '') @@ -319,7 +319,7 @@ def to_text(self) -> str: # 过滤包含问题编号的垃圾内容(问题1-9) skip = False for d in '123456789': - if f'\u95ee\u9898{d}' in clean_quote: + if f'\u95ee\u9898{d}' in clean_quote or f'Question {d}' in clean_quote: skip = True break if skip: @@ -374,25 +374,25 @@ def to_dict(self) -> Dict[str, Any]: def to_text(self) -> str: """转换为详细的文本格式,供LLM理解和报告引用""" text_parts = [ - "## 深度采访报告", - f"**采访主题:** {self.interview_topic}", - f"**采访人数:** {self.interviewed_count} / {self.total_agents} 位模拟Agent", - "\n### 采访对象选择理由", - self.selection_reasoning or "(自动选择)", + "## In-Depth Interview Report", + f"**Interview Topic:** {self.interview_topic}", + f"**Interview Count:** {self.interviewed_count} / {self.total_agents} simulated agents", + "\n### Why These Agents Were Selected", + self.selection_reasoning or "(Selected automatically)", "\n---", - "\n### 采访实录", + "\n### Interview Transcript", ] if self.interviews: for i, interview in enumerate(self.interviews, 1): - text_parts.append(f"\n#### 采访 #{i}: {interview.agent_name}") + text_parts.append(f"\n#### Interview 
#{i}: {interview.agent_name}") text_parts.append(interview.to_text()) text_parts.append("\n---") else: - text_parts.append("(无采访记录)\n\n---") + text_parts.append("(No interview records)\n\n---") - text_parts.append("\n### 采访摘要与核心观点") - text_parts.append(self.summary or "(无摘要)") + text_parts.append("\n### Interview Summary & Key Takeaways") + text_parts.append(self.summary or "(No summary)") return "\n".join(text_parts) @@ -424,12 +424,12 @@ class ZepToolsService: def __init__(self, api_key: Optional[str] = None, llm_client: Optional[LLMClient] = None): self.api_key = api_key or Config.ZEP_API_KEY if not self.api_key: - raise ValueError("ZEP_API_KEY 未配置") + raise ValueError("ZEP_API_KEY is not configured") self.client = Zep(api_key=self.api_key) # LLM客户端用于InsightForge生成子问题 self._llm_client = llm_client - logger.info("ZepToolsService 初始化完成") + logger.info("ZepToolsService initialized") @property def llm(self) -> LLMClient: diff --git a/backend/app/utils/ontology_normalizer.py b/backend/app/utils/ontology_normalizer.py new file mode 100644 index 000000000..eae0c8b20 --- /dev/null +++ b/backend/app/utils/ontology_normalizer.py @@ -0,0 +1,119 @@ +""" +Utilities for normalizing ontology names before sending them to Zep. +""" + +from __future__ import annotations + +import copy +import re +from typing import Any, Dict, Tuple + + +PASCAL_CASE_PATTERN = re.compile(r"^[A-Z][A-Za-z0-9]*$") + + +def _split_name_parts(raw_name: str) -> list[str]: + text = str(raw_name or "").strip() + if not text: + return [] + + text = re.sub(r"[^A-Za-z0-9]+", " ", text) + text = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", text) + text = re.sub(r"(?<=[A-Z])(?=[A-Z][a-z])", " ", text) + text = re.sub(r"(?<=[A-Za-z])(?=[0-9])", " ", text) + text = re.sub(r"(?<=[0-9])(?=[A-Za-z])", " ", text) + return [part for part in text.split() if part] + + +def normalize_pascal_case_name(raw_name: str, default_prefix: str = "Entity") -> str: + """ + Convert an arbitrary label into Zep-safe PascalCase. 
+ """ + text = str(raw_name or "").strip() + if text and PASCAL_CASE_PATTERN.match(text): + return text + + parts = _split_name_parts(text) + if not parts: + return default_prefix + + normalized_parts = [] + for part in parts: + if part.isdigit(): + normalized_parts.append(part) + elif part.isupper() and len(part) > 1: + normalized_parts.append(part) + else: + normalized_parts.append(part[0].upper() + part[1:].lower()) + + normalized = "".join(normalized_parts) + + if not normalized: + normalized = default_prefix + elif not normalized[0].isalpha(): + normalized = f"{default_prefix}{normalized}" + + return normalized + + +def _ensure_unique_name(base_name: str, used_names: set[str]) -> str: + candidate = base_name + suffix = 2 + + while candidate in used_names: + candidate = f"{base_name}{suffix}" + suffix += 1 + + used_names.add(candidate) + return candidate + + +def normalize_ontology_for_zep(ontology: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, str]]: + """ + Normalize ontology entity names and source/target references for Zep validation. 
+ + Returns: + A tuple of (normalized_ontology, entity_name_mapping) + """ + normalized = copy.deepcopy(ontology or {}) + entity_types = normalized.setdefault("entity_types", []) + edge_types = normalized.setdefault("edge_types", []) + + used_entity_names: set[str] = set() + entity_name_mapping: Dict[str, str] = {} + + for entity in entity_types: + raw_name = str(entity.get("name", "")).strip() + safe_name = normalize_pascal_case_name(raw_name, default_prefix="Entity") + safe_name = _ensure_unique_name(safe_name, used_entity_names) + + entity["name"] = safe_name + + if raw_name: + entity_name_mapping[raw_name] = safe_name + entity_name_mapping[raw_name.strip()] = safe_name + entity_name_mapping[safe_name] = safe_name + + for edge in edge_types: + source_targets = edge.setdefault("source_targets", []) + for source_target in source_targets: + raw_source = str(source_target.get("source", "")).strip() + raw_target = str(source_target.get("target", "")).strip() + + if raw_source: + source_target["source"] = entity_name_mapping.get( + raw_source, + normalize_pascal_case_name(raw_source, default_prefix="Entity"), + ) + else: + source_target["source"] = "Entity" + + if raw_target: + source_target["target"] = entity_name_mapping.get( + raw_target, + normalize_pascal_case_name(raw_target, default_prefix="Entity"), + ) + else: + source_target["target"] = "Entity" + + return normalized, entity_name_mapping diff --git a/backend/pyproject.toml b/backend/pyproject.toml index 4f5361d53..ec46a6ab1 100644 --- a/backend/pyproject.toml +++ b/backend/pyproject.toml @@ -1,7 +1,7 @@ [project] name = "mirofish-backend" version = "0.1.0" -description = "MiroFish - 简洁通用的群体智能引擎,预测万物" +description = "MiroFish - A simple, universal swarm intelligence engine for predicting anything" requires-python = ">=3.11" license = { text = "AGPL-3.0" } authors = [ @@ -9,27 +9,27 @@ authors = [ ] dependencies = [ - # 核心框架 + # Core framework "flask>=3.0.0", "flask-cors>=6.0.0", - # LLM 相关 + # LLM support 
"openai>=1.0.0", # Zep Cloud "zep-cloud==3.13.0", - # OASIS 社交媒体模拟 + # OASIS social media simulation "camel-oasis==0.2.5", "camel-ai==0.2.78", - # 文件处理 + # File processing "PyMuPDF>=1.24.0", - # 编码检测(支持非UTF-8编码的文本文件) + # Encoding detection (supports text files that are not UTF-8) "charset-normalizer>=3.0.0", "chardet>=5.0.0", - # 工具库 + # Utilities "python-dotenv>=1.0.0", "pydantic>=2.0.0", ] diff --git a/backend/requirements.txt b/backend/requirements.txt index 4f146296b..93a3b5bf7 100644 --- a/backend/requirements.txt +++ b/backend/requirements.txt @@ -5,31 +5,31 @@ # Install: pip install -r requirements.txt # =========================================== -# ============= 核心框架 ============= +# ============= Core Framework ============= flask>=3.0.0 flask-cors>=6.0.0 -# ============= LLM 相关 ============= -# OpenAI SDK(统一使用 OpenAI 格式调用 LLM) +# ============= LLM Support ============= +# OpenAI SDK (all LLM calls use the OpenAI-compatible format) openai>=1.0.0 # ============= Zep Cloud ============= zep-cloud==3.13.0 -# ============= OASIS 社交媒体模拟 ============= -# OASIS 社交模拟框架 +# ============= OASIS Social Media Simulation ============= +# OASIS social simulation framework camel-oasis==0.2.5 camel-ai==0.2.78 -# ============= 文件处理 ============= +# ============= File Processing ============= PyMuPDF>=1.24.0 -# 编码检测(支持非UTF-8编码的文本文件) +# Encoding detection (supports text files that are not UTF-8) charset-normalizer>=3.0.0 chardet>=5.0.0 -# ============= 工具库 ============= -# 环境变量加载 +# ============= Utilities ============= +# Environment variable loading python-dotenv>=1.0.0 -# 数据验证 +# Data validation pydantic>=2.0.0 diff --git a/backend/tests/test_ontology_normalizer.py b/backend/tests/test_ontology_normalizer.py new file mode 100644 index 000000000..6e6402e74 --- /dev/null +++ b/backend/tests/test_ontology_normalizer.py @@ -0,0 +1,38 @@ +from app.utils.ontology_normalizer import normalize_ontology_for_zep + + +def test_normalize_ontology_entity_names_and_source_targets(): + 
ontology = { + "entity_types": [ + { + "name": "IH_Team", + "description": "Escalation team", + "attributes": [], + }, + { + "name": "billing department", + "description": "Billing org", + "attributes": [], + }, + ], + "edge_types": [ + { + "name": "LEADS", + "description": "Leadership relation", + "source_targets": [ + {"source": "IH_Team", "target": "billing department"}, + ], + "attributes": [], + } + ], + } + + normalized, entity_name_mapping = normalize_ontology_for_zep(ontology) + + assert entity_name_mapping["IH_Team"] == "IHTeam" + assert entity_name_mapping["billing department"] == "BillingDepartment" + assert normalized["entity_types"][0]["name"] == "IHTeam" + assert normalized["entity_types"][1]["name"] == "BillingDepartment" + assert normalized["edge_types"][0]["source_targets"] == [ + {"source": "IHTeam", "target": "BillingDepartment"} + ] diff --git a/docker-compose.yml b/docker-compose.yml index 637f1dfae..b7ea34507 100644 --- a/docker-compose.yml +++ b/docker-compose.yml @@ -1,7 +1,10 @@ services: mirofish: - image: ghcr.io/666ghj/mirofish:latest - # 加速镜像(如拉取缓慢可替换上方地址) + image: mirofish-local + build: + context: . + platform: linux/arm64 + # Mirror image for faster pulls if needed # image: ghcr.nju.edu.cn/666ghj/mirofish:latest container_name: mirofish env_file: @@ -11,4 +14,4 @@ services: - "5001:5001" restart: unless-stopped volumes: - - ./backend/uploads:/app/backend/uploads \ No newline at end of file + - ./backend/uploads:/app/backend/uploads diff --git a/frontend/index.html b/frontend/index.html index 009c924a4..72f28baec 100644 --- a/frontend/index.html +++ b/frontend/index.html @@ -1,5 +1,5 @@ - + @@ -7,8 +7,8 @@ - - MiroFish - 预测万物 + + MiroFish - Predict Anything
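The new `backend/app/utils/ontology_normalizer.py` module and its unit test above show the mapping end-to-end; the helper can also be exercised on its own. The sketch below inlines `normalize_pascal_case_name` (adapted from the patch, behavior unchanged) to illustrate how arbitrary labels are coerced into Zep-safe PascalCase:

```python
import re

# Labels already in PascalCase pass through untouched.
PASCAL_CASE_PATTERN = re.compile(r"^[A-Z][A-Za-z0-9]*$")


def _split_name_parts(raw_name: str) -> list[str]:
    # Break a label on separators, camelCase boundaries, and digit boundaries.
    text = str(raw_name or "").strip()
    if not text:
        return []
    text = re.sub(r"[^A-Za-z0-9]+", " ", text)
    text = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", " ", text)
    text = re.sub(r"(?<=[A-Z])(?=[A-Z][a-z])", " ", text)
    text = re.sub(r"(?<=[A-Za-z])(?=[0-9])", " ", text)
    text = re.sub(r"(?<=[0-9])(?=[A-Za-z])", " ", text)
    return [part for part in text.split() if part]


def normalize_pascal_case_name(raw_name: str, default_prefix: str = "Entity") -> str:
    text = str(raw_name or "").strip()
    if text and PASCAL_CASE_PATTERN.match(text):
        return text

    parts = _split_name_parts(text)
    if not parts:
        return default_prefix

    normalized_parts = []
    for part in parts:
        if part.isdigit() or (part.isupper() and len(part) > 1):
            # Keep digit runs and acronyms (e.g. "IH") as-is.
            normalized_parts.append(part)
        else:
            normalized_parts.append(part[0].upper() + part[1:].lower())

    normalized = "".join(normalized_parts)
    if not normalized:
        normalized = default_prefix
    elif not normalized[0].isalpha():
        # Zep names must start with a letter; prepend the default prefix.
        normalized = f"{default_prefix}{normalized}"
    return normalized


print(normalize_pascal_case_name("IH_Team"))             # IHTeam
print(normalize_pascal_case_name("billing department"))  # BillingDepartment
print(normalize_pascal_case_name(""))                    # Entity
```

`normalize_ontology_for_zep` then applies this per entity type, deduplicates with a numeric suffix, and rewrites the `source`/`target` references in `edge_types` through the same mapping, so edges never point at a pre-normalization name.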
diff --git a/frontend/package-lock.json b/frontend/package-lock.json index 8c4fa710d..fee02cad8 100644 --- a/frontend/package-lock.json +++ b/frontend/package-lock.json @@ -1331,7 +1331,6 @@ "resolved": "https://registry.npmjs.org/d3-selection/-/d3-selection-3.0.0.tgz", "integrity": "sha512-fmTRWbNMmsmWq6xJV8D19U/gw/bwrHfNXxrIN+HfZgnzqTHp9jOmKMhsTUjXOJnZOdZY9Q28y4yebKzqDKlxlQ==", "license": "ISC", - "peer": true, "engines": { "node": ">=12" } @@ -1809,7 +1808,6 @@ "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", "dev": true, "license": "MIT", - "peer": true, "engines": { "node": ">=12" }, @@ -1943,7 +1941,6 @@ "integrity": "sha512-ITcnkFeR3+fI8P1wMgItjGrR10170d8auB4EpMLPqmx6uxElH3a/hHGQabSHKdqd4FXWO1nFIp9rRn7JQ34ACQ==", "dev": true, "license": "MIT", - "peer": true, "dependencies": { "esbuild": "^0.25.0", "fdir": "^6.5.0", @@ -2018,7 +2015,6 @@ "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.25.tgz", "integrity": "sha512-YLVdgv2K13WJ6n+kD5owehKtEXwdwXuj2TTyJMsO7pSeKw2bfRNZGjhB7YzrpbMYj5b5QsUebHpOqR3R3ziy/g==", "license": "MIT", - "peer": true, "dependencies": { "@vue/compiler-dom": "3.5.25", "@vue/compiler-sfc": "3.5.25", diff --git a/frontend/src/App.vue b/frontend/src/App.vue index b7cd71ca6..a76fb0f97 100644 --- a/frontend/src/App.vue +++ b/frontend/src/App.vue @@ -3,11 +3,11 @@ \ No newline at end of file + diff --git a/frontend/src/components/Step4Report.vue b/frontend/src/components/Step4Report.vue index 22f2bdcfd..a50cd286e 100644 --- a/frontend/src/components/Step4Report.vue +++ b/frontend/src/components/Step4Report.vue @@ -58,7 +58,7 @@ - 正在生成{{ section.title }}... + Generating {{ section.title }}... @@ -129,7 +129,7 @@
@@ -141,7 +141,7 @@ - 发送问卷调查到世界中 + Send A Survey Into The World @@ -155,7 +155,7 @@
R
Report Agent - Chat
-
报告生成智能体的快速对话版本,可调用 4 种专业工具,拥有MiroFish的完整记忆
+
A fast conversational version of Report Agent with access to four specialized tools and the full MiroFish memory context.
+ | - +
- 问卷问题 + Survey Question
@@ -369,15 +369,15 @@ @click="submitSurvey" > - 发送问卷 + Send Survey
- 调查结果 - {{ surveyResults.length }} 条回复 + Survey Results + {{ surveyResults.length }} responses
{{ (result.agent_name || 'A')[0] }}
{{ result.agent_name }} - {{ result.profession || '未知职业' }} + {{ result.profession || 'Unknown profession' }}
@@ -535,7 +535,7 @@ const selectAgent = (agent, idx) => { // 恢复该 Agent 的对话记录 chatHistory.value = chatHistoryCache.value[`agent_${idx}`] || [] - addLog(`选择对话对象: ${agent.username}`) + addLog(`Selected conversation target: ${agent.username}`) } const formatTime = (timestamp) => { @@ -662,10 +662,10 @@ const sendMessage = async () => { await sendToAgent(message) } } catch (err) { - addLog(`发送失败: ${err.message}`) + addLog(`Send failed: ${err.message}`) chatHistory.value.push({ role: 'assistant', - content: `抱歉,发生了错误: ${err.message}`, + content: `Sorry, something went wrong: ${err.message}`, timestamp: new Date().toISOString() }) } finally { @@ -677,7 +677,7 @@ const sendMessage = async () => { } const sendToReportAgent = async (message) => { - addLog(`向 Report Agent 发送: ${message.substring(0, 50)}...`) + addLog(`Sending to Report Agent: ${message.substring(0, 50)}...`) // Build chat history for API const historyForApi = chatHistory.value @@ -697,21 +697,21 @@ const sendToReportAgent = async (message) => { if (res.success && res.data) { chatHistory.value.push({ role: 'assistant', - content: res.data.response || res.data.answer || '无响应', + content: res.data.response || res.data.answer || 'No response', timestamp: new Date().toISOString() }) - addLog('Report Agent 已回复') + addLog('Report Agent replied') } else { - throw new Error(res.error || '请求失败') + throw new Error(res.error || 'Request failed') } } const sendToAgent = async (message) => { if (!selectedAgent.value || selectedAgentIndex.value === null) { - throw new Error('请先选择一个模拟个体') + throw new Error('Please choose a simulated individual first') } - addLog(`向 ${selectedAgent.value.username} 发送: ${message.substring(0, 50)}...`) + addLog(`Sending to ${selectedAgent.value.username}: ${message.substring(0, 50)}...`) // Build prompt with chat history let prompt = message @@ -719,9 +719,9 @@ const sendToAgent = async (message) => { const historyContext = chatHistory.value .filter(msg => msg.content !== message) .slice(-6) - 
.map(msg => `${msg.role === 'user' ? '提问者' : '你'}:${msg.content}`) + .map(msg => `${msg.role === 'user' ? 'Interviewer' : 'You'}: ${msg.content}`) .join('\n') - prompt = `以下是我们之前的对话:\n${historyContext}\n\n现在我的新问题是:${message}` + prompt = `Here is our previous conversation:\n${historyContext}\n\nMy new question is: ${message}` } const res = await interviewAgents({ @@ -761,12 +761,12 @@ const sendToAgent = async (message) => { content: responseContent, timestamp: new Date().toISOString() }) - addLog(`${selectedAgent.value.username} 已回复`) + addLog(`${selectedAgent.value.username} replied`) } else { - throw new Error('无响应数据') + throw new Error('No response data') } } else { - throw new Error(res.error || '请求失败') + throw new Error(res.error || 'Request failed') } } @@ -803,7 +803,7 @@ const submitSurvey = async () => { if (selectedAgents.value.size === 0 || !surveyQuestion.value.trim()) return isSurveying.value = true - addLog(`发送问卷给 ${selectedAgents.value.size} 个对象...`) + addLog(`Sending a survey to ${selectedAgents.value.size} targets...`) try { const interviews = Array.from(selectedAgents.value).map(idx => ({ @@ -830,20 +830,20 @@ const submitSurvey = async () => { const agent = profiles.value[agentIdx] // 优先使用 reddit 平台回复,其次 twitter - let responseContent = '无响应' + let responseContent = 'No response' if (typeof resultsDict === 'object' && !Array.isArray(resultsDict)) { const redditKey = `reddit_${agentIdx}` const twitterKey = `twitter_${agentIdx}` const agentResult = resultsDict[redditKey] || resultsDict[twitterKey] if (agentResult) { - responseContent = agentResult.response || agentResult.answer || '无响应' + responseContent = agentResult.response || agentResult.answer || 'No response' } } else if (Array.isArray(resultsDict)) { // 兼容数组格式 const matchedResult = resultsDict.find(r => r.agent_id === agentIdx) if (matchedResult) { - responseContent = matchedResult.response || matchedResult.answer || '无响应' + responseContent = matchedResult.response || matchedResult.answer || 
'No response' } } @@ -857,12 +857,12 @@ const submitSurvey = async () => { } surveyResults.value = surveyResultsList - addLog(`收到 ${surveyResults.value.length} 条回复`) + addLog(`Received ${surveyResults.value.length} responses`) } else { - throw new Error(res.error || '请求失败') + throw new Error(res.error || 'Request failed') } } catch (err) { - addLog(`问卷发送失败: ${err.message}`) + addLog(`Survey failed: ${err.message}`) } finally { isSurveying.value = false } @@ -873,7 +873,7 @@ const loadReportData = async () => { if (!props.reportId) return try { - addLog(`加载报告数据: ${props.reportId}`) + addLog(`Loading report data: ${props.reportId}`) // Get report info const reportRes = await getReport(props.reportId) @@ -882,7 +882,7 @@ const loadReportData = async () => { await loadAgentLogs() } } catch (err) { - addLog(`加载报告失败: ${err.message}`) + addLog(`Failed to load report: ${err.message}`) } } @@ -904,10 +904,10 @@ const loadAgentLogs = async () => { } }) - addLog('报告数据加载完成') + addLog('Report data loaded') } } catch (err) { - addLog(`加载报告日志失败: ${err.message}`) + addLog(`Failed to load report logs: ${err.message}`) } } @@ -918,10 +918,10 @@ const loadProfiles = async () => { const res = await getSimulationProfilesRealtime(props.simulationId, 'reddit') if (res.success && res.data) { profiles.value = res.data.profiles || [] - addLog(`加载了 ${profiles.value.length} 个模拟个体`) + addLog(`Loaded ${profiles.value.length} simulated individuals`) } } catch (err) { - addLog(`加载模拟个体失败: ${err.message}`) + addLog(`Failed to load simulated individuals: ${err.message}`) } } @@ -935,7 +935,7 @@ const handleClickOutside = (e) => { // Lifecycle onMounted(() => { - addLog('Step5 深度互动初始化') + addLog('Step5 deep interaction initialized') loadReportData() loadProfiles() document.addEventListener('click', handleClickOutside) diff --git a/frontend/src/store/pendingUpload.js b/frontend/src/store/pendingUpload.js index 958c3d0a6..bdac77cad 100644 --- a/frontend/src/store/pendingUpload.js +++ 
b/frontend/src/store/pendingUpload.js @@ -1,6 +1,6 @@ /** - * 临时存储待上传的文件和需求 - * 用于首页点击启动引擎后立即跳转,在Process页面再进行API调用 + * Temporarily store files and requirements before upload + * Used when the home page immediately navigates to Process and performs the API call there */ import { reactive } from 'vue' diff --git a/frontend/src/views/Home.vue b/frontend/src/views/Home.vue index afe01a0c4..5fa1cc94e 100644 --- a/frontend/src/views/Home.vue +++ b/frontend/src/views/Home.vue @@ -5,7 +5,7 @@ @@ -15,21 +15,21 @@
- 简洁通用的群体智能引擎 - / v0.1-预览版 + A simple, universal swarm intelligence engine + / v0.1-preview

- 上传任意报告
- 即刻推演未来 + Upload Any Report
+ Simulate the Future Instantly

- 即使只有一段文字,MiroFish 也能基于其中的现实种子,全自动生成与之对应的至多百万级Agent构成的平行世界。通过上帝视角注入变量,在复杂的群体交互中寻找动态环境下的“局部最优解” + Even a single paragraph is enough for MiroFish to extract real-world seeds and automatically build a parallel world with up to millions of agents. Inject variables from a god's-eye view and search for "local optima" in a dynamic environment through complex collective interactions.

- 让未来在 Agent 群中预演,让决策在百战后胜出_ + Let the future play out across agents, and let decisions win after many trials_

@@ -53,65 +53,65 @@
- 系统状态 + System Status
-

准备就绪

+

Ready

- 预测引擎待命中,可上传多份非结构化数据以初始化模拟序列 + The prediction engine is standing by. Upload multiple unstructured files to initialize a simulation run.

-
低成本
-
常规模拟平均5$/次
+
Low Cost
+
Typical runs average about $5 each
-
高可用
-
最多百万级Agent模拟
+
High Scale
+
Simulate up to millions of agents
- 工作流序列 + Workflow Sequence
01
-
图谱构建
-
现实种子提取 & 个体与群体记忆注入 & GraphRAG构建
+
Graph Build
+
Real-world seed extraction, individual and collective memory injection, and GraphRAG construction
02
-
环境搭建
-
实体关系抽取 & 人设生成 & 环境配置Agent注入仿真参数
+
Environment Setup
+
Entity and relationship extraction, persona generation, and runtime parameter injection
03
-
开始模拟
-
双平台并行模拟 & 自动解析预测需求 & 动态更新时序记忆
+
Run Simulation
+
Parallel dual-platform simulation, automatic parsing of prediction requirements, and dynamic temporal memory updates
04
-
报告生成
-
ReportAgent拥有丰富的工具集与模拟后环境进行深度交互
+
Report Generation
+
Report Agent uses a rich toolset to investigate the post-simulation world
05
-
深度互动
-
与模拟世界中的任意一位进行对话 & 与ReportAgent进行对话
+
Deep Interaction
+
Talk with any simulated individual or continue with Report Agent
@@ -124,8 +124,8 @@
- 01 / 现实种子 - 支持格式: PDF, MD, TXT + 01 / Seed Data + Supported formats: PDF, MD, TXT
-
拖拽文件上传
-
或点击浏览文件系统
+
Drag And Drop Files
+
or click to browse your files
@@ -164,23 +164,23 @@
- 输入参数 + Input Parameters
- >_ 02 / 模拟提示词 + >_ 02 / Simulation Prompt
-
引擎: MiroFish-V1.0
+
Engine: MiroFish-V1.0
@@ -191,8 +191,8 @@ @click="startSimulation" :disabled="!canSubmit || loading" > - 启动引擎 - 初始化中... + Start Engine + Initializing...
diff --git a/frontend/src/views/InteractionView.vue b/frontend/src/views/InteractionView.vue index b153590d7..c9761c0e0 100644 --- a/frontend/src/views/InteractionView.vue +++ b/frontend/src/views/InteractionView.vue @@ -15,7 +15,7 @@ :class="{ active: viewMode === mode }" @click="viewMode = mode" > - {{ { graph: '图谱', split: '双栏', workbench: '工作台' }[mode] }} + {{ { graph: 'Graph', split: 'Split', workbench: 'Workbench' }[mode] }}
@@ -23,7 +23,7 @@
Step 5/5 - 深度互动 + Deep Interaction
@@ -47,7 +47,7 @@ />
- +
{ // --- Data Logic --- const loadReportData = async () => { try { - addLog(`加载报告数据: ${currentReportId.value}`) + addLog(`Loading report data: ${currentReportId.value}`) - // 获取 report 信息以获取 simulation_id + // Fetch report info to get the simulation_id const reportRes = await getReport(currentReportId.value) if (reportRes.success && reportRes.data) { const reportData = reportRes.data simulationId.value = reportData.simulation_id if (simulationId.value) { - // 获取 simulation 信息 + // Fetch simulation info const simRes = await getSimulation(simulationId.value) if (simRes.success && simRes.data) { const simData = simRes.data - // 获取 project 信息 + // Fetch project info if (simData.project_id) { const projRes = await getProject(simData.project_id) if (projRes.success && projRes.data) { projectData.value = projRes.data - addLog(`项目加载成功: ${projRes.data.project_id}`) + addLog(`Project loaded successfully: ${projRes.data.project_id}`) - // 获取 graph 数据 + // Fetch graph data if (projRes.data.graph_id) { await loadGraph(projRes.data.graph_id) } @@ -170,10 +170,10 @@ const loadReportData = async () => { } } } else { - addLog(`获取报告信息失败: ${reportRes.error || '未知错误'}`) + addLog(`Failed to fetch report info: ${reportRes.error || 'Unknown error'}`) } } catch (err) { - addLog(`加载异常: ${err.message}`) + addLog(`Load error: ${err.message}`) } } @@ -184,10 +184,10 @@ const loadGraph = async (graphId) => { const res = await getGraphData(graphId) if (res.success) { graphData.value = res.data - addLog('图谱数据加载成功') + addLog('Graph data loaded successfully') } } catch (err) { - addLog(`图谱加载失败: ${err.message}`) + addLog(`Failed to load graph data: ${err.message}`) } finally { graphLoading.value = false } @@ -208,7 +208,7 @@ watch(() => route.params.reportId, (newId) => { }, { immediate: true }) onMounted(() => { - addLog('InteractionView 初始化') + addLog('InteractionView initialized') loadReportData() }) diff --git a/frontend/src/views/MainView.vue b/frontend/src/views/MainView.vue index 
6ff299112..ff515621e 100644 --- a/frontend/src/views/MainView.vue +++ b/frontend/src/views/MainView.vue @@ -15,7 +15,7 @@ :class="{ active: viewMode === mode }" @click="viewMode = mode" > - {{ { graph: '图谱', split: '双栏', workbench: '工作台' }[mode] }} + {{ { graph: 'Graph', split: 'Split', workbench: 'Workbench' }[mode] }}
@@ -48,7 +48,7 @@
- + - + { const handleNextStep = (params = {}) => { if (currentStep.value < 5) { currentStep.value++ - addLog(`进入 Step ${currentStep.value}: ${stepNames[currentStep.value - 1]}`) + addLog(`Entering Step ${currentStep.value}: ${stepNames[currentStep.value - 1]}`) - // 如果是从 Step 2 进入 Step 3,记录模拟轮数配置 + // When moving from Step 2 to Step 3, log the round configuration if (currentStep.value === 3 && params.maxRounds) { - addLog(`自定义模拟轮数: ${params.maxRounds} 轮`) + addLog(`Custom simulation rounds: ${params.maxRounds}`) } } } @@ -171,7 +171,7 @@ const handleNextStep = (params = {}) => { const handleGoBack = () => { if (currentStep.value > 1) { currentStep.value-- - addLog(`返回 Step ${currentStep.value}: ${stepNames[currentStep.value - 1]}`) + addLog(`Returning to Step ${currentStep.value}: ${stepNames[currentStep.value - 1]}`) } } diff --git a/frontend/src/views/Process.vue b/frontend/src/views/Process.vue index 2d2d3cc1a..74eeff9cb 100644 --- a/frontend/src/views/Process.vue +++ b/frontend/src/views/Process.vue @@ -7,7 +7,7 @@ -

等待本体生成

-

生成完成后将自动开始构建图谱

+

Waiting for ontology generation

+

Graph building will start automatically once ontology generation finishes

@@ -200,8 +200,8 @@
-

图谱构建中

-

数据即将显示...

+

Building graph

+

Data will appear shortly...

@@ -225,7 +225,7 @@
- 构建流程 + Build Workflow
@@ -234,7 +234,7 @@
01
-
本体生成
+
Ontology Generation
/api/graph/ontology/generate
@@ -244,15 +244,15 @@
-
接口说明
+
Endpoint
- 上传文档后,LLM分析文档内容,自动生成适合舆论模拟的本体结构(实体类型 + 关系类型) + After the documents are uploaded, the LLM analyzes the content and generates an ontology tailored for public-opinion simulation with entity types and relation types.
-
生成进度
+
Generation Progress
{{ ontologyProgress.message }} @@ -261,7 +261,7 @@
-
生成的实体类型 ({{ projectData.ontology.entity_types?.length || 0 }})
+
Generated Entity Types ({{ projectData.ontology.entity_types?.length || 0 }})
-
生成的关系类型 ({{ projectData.ontology.relation_types?.length || 0 }})
+
Generated Relation Types ({{ projectData.ontology.relation_types?.length || 0 }})
{{ rel.target_type }}
- +{{ projectData.ontology.relation_types.length - 5 }} 更多关系... + +{{ projectData.ontology.relation_types.length - 5 }} more relations...
-
等待本体生成...
+
Waiting for ontology generation...
@@ -305,7 +305,7 @@
02
-
图谱构建
+
Graph Build
/api/graph/build
@@ -315,20 +315,20 @@
-
接口说明
+
Endpoint
- 基于生成的本体,将文档分块后调用 Zep API 构建知识图谱,提取实体和关系 + Using the generated ontology, the documents are chunked and sent to the Zep API to build the knowledge graph and extract entities and relationships.
-
等待本体生成完成...
+
Waiting for ontology generation to finish...
-
构建进度
+
Build Progress
@@ -339,19 +339,19 @@
-
构建结果
+
Build Results
{{ graphData.node_count }} - 实体节点 + Entity Nodes
{{ graphData.edge_count }} - 关系边 + Relationship Edges
{{ entityTypes.length }} - 实体类型 + Entity Types
@@ -363,8 +363,8 @@
03
-
构建完成
-
准备进入下一步骤
+
Build Complete
+
Ready for the next step
{{ getPhaseStatusText(2) }} @@ -375,7 +375,7 @@
@@ -385,23 +385,23 @@
- 项目信息 + Project Info
- 项目名称 + Project Name {{ projectData.name }}
- 项目ID + Project ID {{ projectData.project_id }}
- 图谱ID + Graph ID {{ projectData.graph_id }}
- 模拟需求 + Simulation Requirement {{ projectData.simulation_requirement || '-' }}
@@ -451,11 +451,11 @@ const statusClass = computed(() => { }) const statusText = computed(() => { - if (error.value) return '构建失败' - if (currentPhase.value >= 2) return '构建完成' - if (currentPhase.value === 1) return '图谱构建中' - if (currentPhase.value === 0) return '本体生成中' - return '初始化中' + if (error.value) return 'Build failed' + if (currentPhase.value >= 2) return 'Build complete' + if (currentPhase.value === 1) return 'Building graph' + if (currentPhase.value === 0) return 'Generating ontology' + return 'Initializing' }) const entityTypes = computed(() => { @@ -482,7 +482,7 @@ const goHome = () => { const goToNextStep = () => { // TODO: 进入环境搭建步骤 - alert('环境搭建功能开发中...') + alert('Environment setup is still in development...') } const toggleFullScreen = () => { @@ -503,7 +503,7 @@ const formatDate = (dateStr) => { if (!dateStr) return '-' try { const date = new Date(dateStr) - return date.toLocaleString('zh-CN', { + return date.toLocaleString('en-US', { year: 'numeric', month: 'short', day: 'numeric', @@ -540,14 +540,14 @@ const getPhaseStatusClass = (phase) => { } const getPhaseStatusText = (phase) => { - if (currentPhase.value > phase) return '已完成' + if (currentPhase.value > phase) return 'Completed' if (currentPhase.value === phase) { if (phase === 1 && buildProgress.value) { return `${buildProgress.value.progress}%` } - return '进行中' + return 'In Progress' } - return '等待中' + return 'Pending' } // 初始化 - 处理新建项目或加载已有项目 @@ -569,7 +569,7 @@ const handleNewProject = async () => { const pending = getPendingUpload() if (!pending.isPending || pending.files.length === 0) { - error.value = '没有待上传的文件,请返回首页重新操作' + error.value = 'No pending files were found. Please return to the home page and try again.' loading.value = false return } @@ -577,7 +577,7 @@ const handleNewProject = async () => { try { loading.value = true currentPhase.value = 0 // 本体生成阶段 - ontologyProgress.value = { message: '正在上传文件并分析文档...' 
} + ontologyProgress.value = { message: 'Uploading files and analyzing documents...' } // 构建 FormData const formDataObj = new FormData() @@ -608,11 +608,11 @@ const handleNewProject = async () => { // 自动开始图谱构建 await startBuildGraph() } else { - error.value = response.error || '本体生成失败' + error.value = response.error || 'Failed to generate ontology' } } catch (err) { console.error('Handle new project error:', err) - error.value = '项目初始化失败: ' + (err.message || '未知错误') + error.value = 'Project initialization failed: ' + (err.message || 'Unknown error') } finally { loading.value = false } @@ -645,11 +645,11 @@ const loadProject = async () => { await loadGraph(response.data.graph_id) } } else { - error.value = response.error || '加载项目失败' + error.value = response.error || 'Failed to load project' } } catch (err) { console.error('Load project error:', err) - error.value = '加载项目失败: ' + (err.message || '未知错误') + error.value = 'Failed to load project: ' + (err.message || 'Unknown error') } finally { loading.value = false } @@ -668,7 +668,7 @@ const updatePhaseByStatus = (status) => { currentPhase.value = 2 break case 'failed': - error.value = projectData.value?.error || '处理失败' + error.value = projectData.value?.error || 'Processing failed' break } } @@ -680,13 +680,13 @@ const startBuildGraph = async () => { // 设置初始进度 buildProgress.value = { progress: 0, - message: '正在启动图谱构建...' + message: 'Starting graph build...' } const response = await buildGraph({ project_id: currentProjectId.value }) if (response.success) { - buildProgress.value.message = '图谱构建任务已启动...' + buildProgress.value.message = 'Graph build task started...' 
// 保存 task_id 用于轮询 const taskId = response.data.task_id @@ -697,12 +697,12 @@ const startBuildGraph = async () => { // 启动任务状态轮询 startPollingTask(taskId) } else { - error.value = response.error || '启动图谱构建失败' + error.value = response.error || 'Failed to start graph build' buildProgress.value = null } } catch (err) { console.error('Build graph error:', err) - error.value = '启动图谱构建失败: ' + (err.message || '未知错误') + error.value = 'Failed to start graph build: ' + (err.message || 'Unknown error') buildProgress.value = null } } @@ -791,13 +791,13 @@ const pollTaskStatus = async (taskId) => { // 更新进度显示 buildProgress.value = { progress: task.progress || 0, - message: task.message || '处理中...' + message: task.message || 'Processing...' } console.log('Task status:', task.status, 'Progress:', task.progress) if (task.status === 'completed') { - console.log('✅ 图谱构建完成,正在加载完整数据...') + console.log('✅ Graph build complete, loading full data...') stopPolling() stopGraphPolling() @@ -806,7 +806,7 @@ const pollTaskStatus = async (taskId) => { // 更新进度显示为完成状态 buildProgress.value = { progress: 100, - message: '构建完成,正在加载图谱...' + message: 'Build complete, loading graph...' 
} // 重新加载项目数据获取 graph_id @@ -816,9 +816,9 @@ const pollTaskStatus = async (taskId) => { // 最终加载完整图谱数据 if (projectResponse.data.graph_id) { - console.log('📊 加载完整图谱:', projectResponse.data.graph_id) + console.log('📊 Loading full graph:', projectResponse.data.graph_id) await loadGraph(projectResponse.data.graph_id) - console.log('✅ 图谱加载完成') + console.log('✅ Graph loading complete') } } @@ -827,7 +827,7 @@ const pollTaskStatus = async (taskId) => { } else if (task.status === 'failed') { stopPolling() stopGraphPolling() - error.value = '图谱构建失败: ' + (task.error || '未知错误') + error.value = 'Graph build failed: ' + (task.error || 'Unknown error') buildProgress.value = null } } @@ -905,7 +905,7 @@ const renderGraph = () => { .attr('y', height / 2) .attr('text-anchor', 'middle') .attr('fill', '#999') - .text('等待图谱数据...') + .text('Waiting for graph data...') return } @@ -917,7 +917,7 @@ const renderGraph = () => { const nodes = nodesData.map(n => ({ id: n.uuid, - name: n.name || '未命名', + name: n.name || 'Untitled', type: n.labels?.find(l => l !== 'Entity' && l !== 'Node') || 'Entity', rawData: n // 保存原始数据 })) @@ -933,8 +933,8 @@ const renderGraph = () => { type: e.fact_type || e.name || 'RELATED_TO', rawData: { ...e, - source_name: nodeMap[e.source_node_uuid]?.name || '未知', - target_name: nodeMap[e.target_node_uuid]?.name || '未知' + source_name: nodeMap[e.source_node_uuid]?.name || 'Unknown', + target_name: nodeMap[e.target_node_uuid]?.name || 'Unknown' } })) @@ -2065,4 +2065,4 @@ onUnmounted(() => { display: none; } } - \ No newline at end of file + diff --git a/frontend/src/views/ReportView.vue b/frontend/src/views/ReportView.vue index 84a3e2a3f..576702a62 100644 --- a/frontend/src/views/ReportView.vue +++ b/frontend/src/views/ReportView.vue @@ -15,7 +15,7 @@ :class="{ active: viewMode === mode }" @click="viewMode = mode" > - {{ { graph: '图谱', split: '双栏', workbench: '工作台' }[mode] }} + {{ { graph: 'Graph', split: 'Split', workbench: 'Workbench' }[mode] }}
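The `pollTaskStatus` changes above translate the messages around a three-way branch on `task.status`. A minimal, framework-free sketch of that branch (the function name and return shape are illustrative only, not part of this diff):

```javascript
// Hypothetical sketch of the status branch inside pollTaskStatus.
// Maps a polled task snapshot to the next UI action.
function nextPollAction(task) {
  if (task.status === 'completed') {
    // Stop polling and switch the progress display to its final state.
    return {
      action: 'finish',
      progress: { progress: 100, message: 'Build complete, loading graph...' }
    }
  }
  if (task.status === 'failed') {
    // Stop polling and surface the error.
    return {
      action: 'fail',
      error: 'Graph build failed: ' + (task.error || 'Unknown error')
    }
  }
  // Still running: keep polling and show interim progress.
  return {
    action: 'continue',
    progress: {
      progress: task.progress || 0,
      message: task.message || 'Processing...'
    }
  }
}
```

Separating this decision from the polling timer makes the translated progress strings trivial to verify in isolation.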
@@ -23,7 +23,7 @@
Step 4/5 - 报告生成 + Report Generation
@@ -47,7 +47,7 @@ />
- +
{ // --- Data Logic --- const loadReportData = async () => { try { - addLog(`加载报告数据: ${currentReportId.value}`) + addLog(`Loading report data: ${currentReportId.value}`) - // 获取 report 信息以获取 simulation_id + // Fetch report info to get the simulation_id const reportRes = await getReport(currentReportId.value) if (reportRes.success && reportRes.data) { const reportData = reportRes.data simulationId.value = reportData.simulation_id if (simulationId.value) { - // 获取 simulation 信息 + // Fetch simulation info const simRes = await getSimulation(simulationId.value) if (simRes.success && simRes.data) { const simData = simRes.data - // 获取 project 信息 + // Fetch project info if (simData.project_id) { const projRes = await getProject(simData.project_id) if (projRes.success && projRes.data) { projectData.value = projRes.data - addLog(`项目加载成功: ${projRes.data.project_id}`) + addLog(`Project loaded successfully: ${projRes.data.project_id}`) - // 获取 graph 数据 + // Fetch graph data if (projRes.data.graph_id) { await loadGraph(projRes.data.graph_id) } @@ -169,10 +169,10 @@ const loadReportData = async () => { } } } else { - addLog(`获取报告信息失败: ${reportRes.error || '未知错误'}`) + addLog(`Failed to fetch report info: ${reportRes.error || 'Unknown error'}`) } } catch (err) { - addLog(`加载异常: ${err.message}`) + addLog(`Load error: ${err.message}`) } } @@ -183,10 +183,10 @@ const loadGraph = async (graphId) => { const res = await getGraphData(graphId) if (res.success) { graphData.value = res.data - addLog('图谱数据加载成功') + addLog('Graph data loaded successfully') } } catch (err) { - addLog(`图谱加载失败: ${err.message}`) + addLog(`Failed to load graph data: ${err.message}`) } finally { graphLoading.value = false } @@ -207,7 +207,7 @@ watch(() => route.params.reportId, (newId) => { }, { immediate: true }) onMounted(() => { - addLog('ReportView 初始化') + addLog('ReportView initialized') loadReportData() }) diff --git a/frontend/src/views/SimulationRunView.vue b/frontend/src/views/SimulationRunView.vue index 
14ebc5f9d..b67538a50 100644 --- a/frontend/src/views/SimulationRunView.vue +++ b/frontend/src/views/SimulationRunView.vue @@ -15,7 +15,7 @@ :class="{ active: viewMode === mode }" @click="viewMode = mode" > - {{ { graph: '图谱', split: '双栏', workbench: '工作台' }[mode] }} + {{ { graph: 'Graph', split: 'Split', workbench: 'Workbench' }[mode] }}
@@ -23,7 +23,7 @@
Step 3/5 - 开始模拟 + Start Simulation
@@ -47,7 +47,7 @@ />
- +
{ } const handleGoBack = async () => { - // 在返回 Step 2 之前,先关闭正在运行的模拟 - addLog('准备返回 Step 2,正在关闭模拟...') + // Close any running simulation before returning to Step 2 + addLog('Preparing to return to Step 2, shutting down the simulation...') - // 停止轮询 + // Stop polling stopGraphRefresh() try { - // 先尝试优雅关闭模拟环境 + // Try graceful environment shutdown first const envStatusRes = await getEnvStatus({ simulation_id: currentSimulationId.value }) if (envStatusRes.success && envStatusRes.data?.env_alive) { - addLog('正在关闭模拟环境...') + addLog('Closing the simulation environment...') try { await closeSimulationEnv({ simulation_id: currentSimulationId.value, timeout: 10 }) - addLog('✓ 模拟环境已关闭') + addLog('✓ Simulation environment closed') } catch (closeErr) { - addLog(`关闭模拟环境失败,尝试强制停止...`) + addLog('Failed to close the simulation environment, attempting a force-stop...') try { await stopSimulation({ simulation_id: currentSimulationId.value }) - addLog('✓ 模拟已强制停止') + addLog('✓ Simulation force-stopped') } catch (stopErr) { - addLog(`强制停止失败: ${stopErr.message}`) + addLog(`Force-stop failed: ${stopErr.message}`) } } } else { - // 环境未运行,检查是否需要停止进程 + // The environment is not running, but the process may still need to be stopped if (isSimulating.value) { - addLog('正在停止模拟进程...') + addLog('Stopping the simulation process...') try { await stopSimulation({ simulation_id: currentSimulationId.value }) - addLog('✓ 模拟已停止') + addLog('✓ Simulation stopped') } catch (err) { - addLog(`停止模拟失败: ${err.message}`) + addLog(`Failed to stop the simulation: ${err.message}`) } } } } catch (err) { - addLog(`检查模拟状态失败: ${err.message}`) + addLog(`Failed to check simulation status: ${err.message}`) } - // 返回到 Step 2 (环境搭建) + // Return to Step 2 (Environment Setup) router.push({ name: 'Simulation', params: { simulationId: currentSimulationId.value } }) } const handleNextStep = () => { - // Step3Simulation 组件会直接处理报告生成和路由跳转 - // 这个方法仅作为备用 - addLog('进入 Step 4: 报告生成') + // Step3Simulation handles report generation and 
routing directly + // This method only serves as a fallback + addLog('Entering Step 4: Report Generation') } // --- Data Logic --- const loadSimulationData = async () => { try { - addLog(`加载模拟数据: ${currentSimulationId.value}`) + addLog(`Loading simulation data: ${currentSimulationId.value}`) - // 获取 simulation 信息 + // Fetch simulation info const simRes = await getSimulation(currentSimulationId.value) if (simRes.success && simRes.data) { const simData = simRes.data - // 获取 simulation config 以获取 minutes_per_round + // Fetch the simulation config to get minutes_per_round try { const configRes = await getSimulationConfig(currentSimulationId.value) if (configRes.success && configRes.data?.time_config?.minutes_per_round) { minutesPerRound.value = configRes.data.time_config.minutes_per_round - addLog(`时间配置: 每轮 ${minutesPerRound.value} 分钟`) + addLog(`Time configuration: ${minutesPerRound.value} minutes per round`) } } catch (configErr) { - addLog(`获取时间配置失败,使用默认值: ${minutesPerRound.value}分钟/轮`) + addLog(`Failed to fetch time configuration, using default: ${minutesPerRound.value} minutes/round`) } - // 获取 project 信息 + // Fetch project info if (simData.project_id) { const projRes = await getProject(simData.project_id) if (projRes.success && projRes.data) { projectData.value = projRes.data - addLog(`项目加载成功: ${projRes.data.project_id}`) + addLog(`Project loaded successfully: ${projRes.data.project_id}`) - // 获取 graph 数据 + // Fetch graph data if (projRes.data.graph_id) { await loadGraph(projRes.data.graph_id) } } } } else { - addLog(`加载模拟数据失败: ${simRes.error || '未知错误'}`) + addLog(`Failed to load simulation data: ${simRes.error || 'Unknown error'}`) } } catch (err) { - addLog(`加载异常: ${err.message}`) + addLog(`Load error: ${err.message}`) } } const loadGraph = async (graphId) => { - // 当正在模拟时,自动刷新不显示全屏 loading,以免闪烁 - // 手动刷新或初始加载时显示 loading + // Avoid showing a fullscreen loader during auto-refresh while simulating + // Show loading only for manual refreshes or the initial load if 
(!isSimulating.value) { graphLoading.value = true } @@ -252,11 +252,11 @@ const loadGraph = async (graphId) => { if (res.success) { graphData.value = res.data if (!isSimulating.value) { - addLog('图谱数据加载成功') + addLog('Graph data loaded successfully') } } } catch (err) { - addLog(`图谱加载失败: ${err.message}`) + addLog(`Failed to load graph data: ${err.message}`) } finally { graphLoading.value = false } @@ -273,8 +273,8 @@ let graphRefreshTimer = null const startGraphRefresh = () => { if (graphRefreshTimer) return - addLog('开启图谱实时刷新 (30s)') - // 立即刷新一次,然后每30秒刷新 + addLog('Starting live graph refresh (30s)') + // Refresh immediately, then every 30 seconds graphRefreshTimer = setInterval(refreshGraph, 30000) } @@ -282,7 +282,7 @@ const stopGraphRefresh = () => { if (graphRefreshTimer) { clearInterval(graphRefreshTimer) graphRefreshTimer = null - addLog('停止图谱实时刷新') + addLog('Stopping live graph refresh') } } @@ -295,11 +295,11 @@ watch(isSimulating, (newValue) => { }, { immediate: true }) onMounted(() => { - addLog('SimulationRunView 初始化') + addLog('SimulationRunView initialized') - // 记录 maxRounds 配置(值已在初始化时从 query 参数获取) + // Log the maxRounds configuration retrieved from the query string if (maxRounds.value) { - addLog(`自定义模拟轮数: ${maxRounds.value}`) + addLog(`Custom simulation rounds: ${maxRounds.value}`) } loadSimulationData() @@ -444,4 +444,3 @@ onUnmounted(() => { border-right: 1px solid #EAEAEA; } - diff --git a/frontend/src/views/SimulationView.vue b/frontend/src/views/SimulationView.vue index 4b44b3972..c3692884d 100644 --- a/frontend/src/views/SimulationView.vue +++ b/frontend/src/views/SimulationView.vue @@ -15,7 +15,7 @@ :class="{ active: viewMode === mode }" @click="viewMode = mode" > - {{ { graph: '图谱', split: '双栏', workbench: '工作台' }[mode] }} + {{ { graph: 'Graph', split: 'Split', workbench: 'Workbench' }[mode] }}
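The `handleGoBack` and `checkAndStopRunningSimulation` logs translated above describe a two-stage shutdown: a graceful environment close first, with a force-stop as the fallback, plus a force-stop when only the process (not the environment) is still alive. A condensed, framework-free sketch of that decision (the function and its inputs are hypothetical stand-ins for `env_alive`, the simulation `status`, and the outcome of `closeSimulationEnv`):

```javascript
// Hypothetical sketch of the fallback decision behind handleGoBack and
// checkAndStopRunningSimulation: pick a shutdown path from the current state.
function pickShutdownPath(envAlive, simStatus, gracefulCloseSucceeded) {
  if (envAlive) {
    // Environment alive: try the graceful close; fall back to force-stop.
    return gracefulCloseSucceeded ? 'graceful-close' : 'force-stop'
  }
  if (simStatus === 'running') {
    // Environment gone, but the simulation process may still be running.
    return 'force-stop'
  }
  // Nothing to shut down.
  return 'none'
}
```

Keeping the decision pure like this mirrors the translated log messages one-to-one, which makes it easy to check that each branch emits the right English string.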
@@ -23,7 +23,7 @@
Step 2/5 - 环境搭建 + Environment Setup
@@ -46,7 +46,7 @@ />
- +
{ } const handleGoBack = () => { - // 返回到 process 页面 + // Return to the process page if (projectData.value?.project_id) { router.push({ name: 'Process', params: { projectId: projectData.value.project_id } }) } else { @@ -146,122 +146,122 @@ const handleGoBack = () => { } const handleNextStep = (params = {}) => { - addLog('进入 Step 3: 开始模拟') + addLog('Entering Step 3: Start Simulation') - // 记录模拟轮数配置 + // Log the simulation round configuration if (params.maxRounds) { - addLog(`自定义模拟轮数: ${params.maxRounds} 轮`) + addLog(`Custom simulation rounds: ${params.maxRounds}`) } else { - addLog('使用自动配置的模拟轮数') + addLog('Using the automatically configured number of rounds') } - // 构建路由参数 + // Build route parameters const routeParams = { name: 'SimulationRun', params: { simulationId: currentSimulationId.value } } - // 如果有自定义轮数,通过 query 参数传递 + // Pass custom rounds through the query string when present if (params.maxRounds) { routeParams.query = { maxRounds: params.maxRounds } } - // 跳转到 Step 3 页面 + // Navigate to Step 3 router.push(routeParams) } // --- Data Logic --- /** - * 检查并关闭正在运行的模拟 - * 当用户从 Step 3 返回到 Step 2 时,默认用户要退出模拟 + * Check for a running simulation and stop it + * When the user returns from Step 3 to Step 2, assume they want to exit the simulation */ const checkAndStopRunningSimulation = async () => { if (!currentSimulationId.value) return try { - // 先检查模拟环境是否存活 + // Check whether the simulation environment is still alive first const envStatusRes = await getEnvStatus({ simulation_id: currentSimulationId.value }) if (envStatusRes.success && envStatusRes.data?.env_alive) { - addLog('检测到模拟环境正在运行,正在关闭...') + addLog('Detected a running simulation environment, closing it...') - // 尝试优雅关闭模拟环境 + // Try a graceful environment shutdown first try { const closeRes = await closeSimulationEnv({ simulation_id: currentSimulationId.value, - timeout: 10 // 10秒超时 + timeout: 10 // 10-second timeout }) if (closeRes.success) { - addLog('✓ 模拟环境已关闭') + addLog('✓ Simulation environment 
closed') } else { - addLog(`关闭模拟环境失败: ${closeRes.error || '未知错误'}`) - // 如果优雅关闭失败,尝试强制停止 + addLog(`Failed to close the simulation environment: ${closeRes.error || 'Unknown error'}`) + // Fall back to a forced stop if graceful shutdown fails await forceStopSimulation() } } catch (closeErr) { - addLog(`关闭模拟环境异常: ${closeErr.message}`) - // 如果优雅关闭异常,尝试强制停止 + addLog(`Simulation environment shutdown error: ${closeErr.message}`) + // Fall back to a forced stop if graceful shutdown errors out await forceStopSimulation() } } else { - // 环境未运行,但可能进程还在,检查模拟状态 + // The environment is not running, but the process may still exist const simRes = await getSimulation(currentSimulationId.value) if (simRes.success && simRes.data?.status === 'running') { - addLog('检测到模拟状态为运行中,正在停止...') + addLog('Simulation status is still running, stopping it...') await forceStopSimulation() } } } catch (err) { - // 检查环境状态失败不影响后续流程 - console.warn('检查模拟状态失败:', err) + // Failure to read environment status should not block the rest of the flow + console.warn('Failed to check simulation status:', err) } } /** - * 强制停止模拟 + * Force-stop the simulation */ const forceStopSimulation = async () => { try { const stopRes = await stopSimulation({ simulation_id: currentSimulationId.value }) if (stopRes.success) { - addLog('✓ 模拟已强制停止') + addLog('✓ Simulation force-stopped') } else { - addLog(`强制停止模拟失败: ${stopRes.error || '未知错误'}`) + addLog(`Failed to force-stop the simulation: ${stopRes.error || 'Unknown error'}`) } } catch (err) { - addLog(`强制停止模拟异常: ${err.message}`) + addLog(`Force-stop error: ${err.message}`) } } const loadSimulationData = async () => { try { - addLog(`加载模拟数据: ${currentSimulationId.value}`) + addLog(`Loading simulation data: ${currentSimulationId.value}`) - // 获取 simulation 信息 + // Fetch simulation info const simRes = await getSimulation(currentSimulationId.value) if (simRes.success && simRes.data) { const simData = simRes.data - // 获取 project 信息 + // Fetch project info if (simData.project_id) { 
const projRes = await getProject(simData.project_id) if (projRes.success && projRes.data) { projectData.value = projRes.data - addLog(`项目加载成功: ${projRes.data.project_id}`) + addLog(`Project loaded successfully: ${projRes.data.project_id}`) - // 获取 graph 数据 + // Fetch graph data if (projRes.data.graph_id) { await loadGraph(projRes.data.graph_id) } } } } else { - addLog(`加载模拟数据失败: ${simRes.error || '未知错误'}`) + addLog(`Failed to load simulation data: ${simRes.error || 'Unknown error'}`) } } catch (err) { - addLog(`加载异常: ${err.message}`) + addLog(`Load error: ${err.message}`) } } @@ -271,10 +271,10 @@ const loadGraph = async (graphId) => { const res = await getGraphData(graphId) if (res.success) { graphData.value = res.data - addLog('图谱数据加载成功') + addLog('Graph data loaded successfully') } } catch (err) { - addLog(`图谱加载失败: ${err.message}`) + addLog(`Failed to load graph data: ${err.message}`) } finally { graphLoading.value = false } @@ -287,12 +287,12 @@ const refreshGraph = () => { } onMounted(async () => { - addLog('SimulationView 初始化') + addLog('SimulationView initialized') - // 检查并关闭正在运行的模拟(用户从 Step 3 返回时) + // Check and stop any running simulation when returning from Step 3 await checkAndStopRunningSimulation() - // 加载模拟数据 + // Load simulation data loadSimulationData() }) @@ -431,4 +431,3 @@ onMounted(async () => { border-right: 1px solid #EAEAEA; } - diff --git a/package.json b/package.json index 63ace21a9..a22f2300c 100644 --- a/package.json +++ b/package.json @@ -1,7 +1,7 @@ { "name": "mirofish", "version": "0.1.0", - "description": "MiroFish - 简洁通用的群体智能引擎,预测万物", + "description": "MiroFish - A simple, universal swarm intelligence engine for predicting anything", "scripts": { "setup": "npm install && cd frontend && npm install", "setup:backend": "cd backend && uv sync", diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2761.png" b/static/image/Screenshot/screenshot-1.png similarity index 100% rename from 
"static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2761.png" rename to static/image/Screenshot/screenshot-1.png diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2762.png" b/static/image/Screenshot/screenshot-2.png similarity index 100% rename from "static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2762.png" rename to static/image/Screenshot/screenshot-2.png diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2763.png" b/static/image/Screenshot/screenshot-3.png similarity index 100% rename from "static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2763.png" rename to static/image/Screenshot/screenshot-3.png diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2764.png" b/static/image/Screenshot/screenshot-4.png similarity index 100% rename from "static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2764.png" rename to static/image/Screenshot/screenshot-4.png diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2765.png" b/static/image/Screenshot/screenshot-5.png similarity index 100% rename from "static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2765.png" rename to static/image/Screenshot/screenshot-5.png diff --git "a/static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2766.png" b/static/image/Screenshot/screenshot-6.png similarity index 100% rename from "static/image/Screenshot/\350\277\220\350\241\214\346\210\252\345\233\2766.png" rename to static/image/Screenshot/screenshot-6.png diff --git "a/static/image/\347\272\242\346\245\274\346\242\246\346\250\241\346\213\237\346\216\250\346\274\224\345\260\201\351\235\242.jpg" b/static/image/dream-of-red-chamber-cover.jpg similarity index 100% rename from "static/image/\347\272\242\346\245\274\346\242\246\346\250\241\346\213\237\346\216\250\346\274\224\345\260\201\351\235\242.jpg" rename to 
static/image/dream-of-red-chamber-cover.jpg diff --git "a/static/image/QQ\347\276\244.png" b/static/image/qq-group.png similarity index 100% rename from "static/image/QQ\347\276\244.png" rename to static/image/qq-group.png diff --git "a/static/image/\346\255\246\345\244\247\346\250\241\346\213\237\346\274\224\347\244\272\345\260\201\351\235\242.png" b/static/image/wuhan-demo-cover.png similarity index 100% rename from "static/image/\346\255\246\345\244\247\346\250\241\346\213\237\346\274\224\347\244\272\345\260\201\351\235\242.png" rename to static/image/wuhan-demo-cover.png