Compare commits


6 Commits

Author SHA1 Message Date
0b5319b31c Add GPU support and improve Docker deployment
- Add GPU deployment support with NVIDIA runtime
  - Update Dockerfile.allinone with GPU environment variables
  - Add comprehensive GPU_DEPLOYMENT.md guide

- Make port 11434 (Ollama) optional for security
  - Update DEPLOYMENT.md with CPU and GPU deployment options
  - Simplify default docker run commands
  - Update healthcheck to only check web application

- Add memory requirements documentation
  - Create MEMORY_REQUIREMENTS.md with model comparison
  - Add build-8b.sh script for lower memory usage
  - Document OOM troubleshooting steps

- Improve Docker build process
  - Add BUILD_TROUBLESHOOTING.md for common issues
  - Add DISTRIBUTION.md for image distribution methods
  - Update .gitignore to exclude large binary files
  - Improve docker-entrypoint.sh with better diagnostics

- Update .dockerignore to include ollama-linux-amd64.tgz
- Add backup file exclusions to .gitignore
2025-12-08 17:08:45 +08:00
6426b73a5e fix: export only required models instead of entire Ollama directory
- Changed export-ollama-models.sh to selectively copy only qwen3:14b and qwen3-embedding:4b
- Parses manifest files to identify required blob files
- Significantly reduces Docker image size by excluding unrelated models
- Added summary showing which models were skipped

This prevents accidentally including other models (like deepseek-r1, bge-m3, etc.)
that may exist in the user's Ollama directory but are not needed for the project.
2025-12-08 12:00:11 +08:00
26f8e0c648 feat: add Docker support for offline deployment with qwen3:14b
Major additions:
- All-in-One Docker image with Ollama + models bundled
- Separate deployment option for existing Ollama installations
- Changed default model from qwen3:8b to qwen3:14b
- Comprehensive deployment documentation

Files added:
- Dockerfile: Basic app-only image
- Dockerfile.allinone: Complete image with Ollama + models
- docker-compose.yml: Easy deployment configuration
- docker-entrypoint.sh: Startup script for all-in-one image
- requirements.txt: Python dependencies
- .dockerignore: Exclude unnecessary files from image

Scripts:
- export-ollama-models.sh: Export models from local Ollama
- build-allinone.sh: Build complete offline-deployable image
- build-and-export.sh: Build and export basic image

Documentation:
- DEPLOYMENT.md: Comprehensive deployment guide
- QUICK_START.md: Quick reference for common tasks

Configuration:
- Updated config.py: DEFAULT_CHAT_MODEL = qwen3:14b
- Updated frontend/opro.html: Page title to 系统提示词优化 (System Prompt Optimization)
2025-12-08 10:10:38 +08:00
65cdcf29dc refactor: replace OPRO with simple iterative refinement
Major changes:
- Remove fake OPRO evaluation (no more fake 0.5 scores)
- Add simple refinement based on user selection
- New endpoint: POST /opro/refine (selected + rejected instructions)
- Update prompt generation to focus on comprehensive coverage instead of style variety
- All generated instructions now start with role definition (你是一个...)
- Update README to reflect new approach and API endpoints

Technical details:
- Added refine_based_on_selection() in prompt_utils.py
- Added refine_instruction_candidates() in user_prompt_optimizer.py
- Added OPRORefineReq model and /opro/refine endpoint in api.py
- Updated frontend handleContinueOptimize() to use new refinement flow
- Changed prompt requirements from 'different styles' to 'comprehensive coverage'
- Added role definition requirement as first item in all prompt templates
2025-12-08 09:43:20 +08:00
602875b08c refactor: remove execute instruction button to simplify UX
- Removed '执行此指令' (Execute This Instruction) button from candidate cards
- Prevents confusion between execution interactions and new task input
- Cleaner workflow: input box for new tasks, 继续优化 (Continue Optimizing) for iteration, 复制 (Copy) for copying
- Each candidate now only has two actions: continue optimizing or copy
2025-12-06 22:41:05 +08:00
da30a0999c feat: implement session-based architecture for OPRO
- Add session layer above runs to group related optimization tasks
- Sessions use first task description as name instead of 'Session 1'
- Simplified sidebar: show sessions without expansion
- Add '+ 新建任务' (New Task) button in header to create runs within session
- Fix: reload sessions after creating new run
- Add debugging logs for candidate generation
- Backend: auto-update session name with first task description
2025-12-06 21:26:24 +08:00
21 changed files with 2087 additions and 135 deletions

28
.dockerignore Normal file

@@ -0,0 +1,28 @@
__pycache__
*.pyc
*.pyo
*.pyd
.Python
*.so
*.egg
*.egg-info
dist
build
.git
.gitignore
.vscode
.idea
*.md
!README.md
# Include pre-downloaded Ollama binary for offline build
!ollama-linux-amd64.tgz
local_docs
examples
outputs
.DS_Store
*.log
.env
.venv
venv
env

10
.gitignore vendored

@@ -149,6 +149,16 @@ outputs/
*.log
local_docs/
# Docker build artifacts (DO NOT commit these - they are huge!)
ollama-models/
*.tar
ollama-linux-amd64.tgz
system-prompt-optimizer-*.tar
*.tar.gz
# Backup files from scripts
*.bak
# Node modules (if any frontend dependencies)
node_modules/
package-lock.json

461
DEPLOYMENT.md Normal file

@@ -0,0 +1,461 @@
# Docker 部署指南
本文档说明如何在无外网访问的服务器上部署系统提示词优化工具。
## 部署方案
本项目提供两种部署方案:
### 方案 A: All-in-One 镜像(推荐,适用于无外网服务器)
**优点**
- 包含所有依赖:应用代码 + Ollama + LLM 模型
- 一个镜像文件,部署简单
- 无需在目标服务器上安装任何额外软件(除了 Docker)
**缺点**:
- 镜像文件很大(10-20GB)
- 传输时间较长
### 方案 B: 分离部署(适用于已有 Ollama 的服务器)
**优点**:
- 镜像文件较小(~500MB)
- 可以复用现有的 Ollama 服务
**缺点**
- 需要在目标服务器上单独安装和配置 Ollama
- 需要手动下载模型
---
## 方案 A: All-in-One 部署(推荐)
### 前置要求
#### 在开发机器上(有外网访问)
1. **Docker** 已安装
2. **Ollama** 已安装并运行
3. **磁盘空间**:至少 30GB 可用空间
4. 已下载所需的 Ollama 模型:
- `qwen3:14b` (主模型,~8GB)
- `qwen3-embedding:4b` (嵌入模型,~2GB)
#### 在目标服务器上(无外网访问)
1. **Docker** 已安装
2. **磁盘空间**:至少 25GB 可用空间
### 部署步骤
#### 步骤 1: 下载所需的 Ollama 模型
在开发机器上,确保已下载所需模型:
```bash
# 下载主模型(约 8GB)
ollama pull qwen3:14b
# 下载嵌入模型(约 2GB)
ollama pull qwen3-embedding:4b
# 验证模型已下载
ollama list
```
#### 步骤 2: 导出 Ollama 模型
```bash
# 运行导出脚本
./export-ollama-models.sh
```
这将创建 `ollama-models/` 目录,包含所有模型文件。
#### 步骤 3: 构建 All-in-One Docker 镜像
```bash
# 运行构建脚本(推荐)
./build-allinone.sh
# 或手动构建
docker build -f Dockerfile.allinone -t system-prompt-optimizer:allinone .
```
**注意**:构建过程可能需要 10-30 分钟,取决于机器性能。
#### 步骤 4: 导出 Docker 镜像
如果使用 `build-allinone.sh`,镜像已自动导出。否则手动导出:
```bash
# 导出镜像(约 10-20GB)
docker save -o system-prompt-optimizer-allinone.tar system-prompt-optimizer:allinone
# 验证文件大小
ls -lh system-prompt-optimizer-allinone.tar
```
#### 步骤 5: 传输到目标服务器
使用 scp、U盘或其他方式传输镜像文件
```bash
# 使用 scp如果网络可达
scp system-prompt-optimizer-allinone.tar user@server:/path/
# 或使用 rsync支持断点续传
rsync -avP --progress system-prompt-optimizer-allinone.tar user@server:/path/
# 或使用 U盘/移动硬盘物理传输
```
#### 步骤 6: 在目标服务器上加载镜像
```bash
# 加载镜像(需要几分钟)
docker load -i system-prompt-optimizer-allinone.tar
# 如果遇到权限错误,使用 sudo
# sudo docker load -i system-prompt-optimizer-allinone.tar
# 验证镜像已加载
docker images | grep system-prompt-optimizer
```
#### 步骤 7: 启动服务
**CPU 模式(默认):**
```bash
# 启动容器(推荐:仅暴露 Web 端口)
docker run -d \
--name system-prompt-optimizer \
-p 8010:8010 \
--restart unless-stopped \
system-prompt-optimizer:allinone
# 查看启动日志
docker logs -f system-prompt-optimizer
```
**GPU 模式(推荐,如果有 NVIDIA GPU):**
```bash
# 使用所有可用 GPU(推荐)
docker run -d \
--name system-prompt-optimizer \
--gpus all \
-p 8010:8010 \
--restart unless-stopped \
system-prompt-optimizer:allinone
# 或指定特定 GPU
docker run -d \
--name system-prompt-optimizer \
--gpus '"device=0"' \
-p 8010:8010 \
--restart unless-stopped \
system-prompt-optimizer:allinone
# 查看启动日志
docker logs -f system-prompt-optimizer
```
**GPU 部署前提条件**:
- 已安装 NVIDIA 驱动 (`nvidia-smi` 可用)
- 已安装 NVIDIA Container Toolkit
- GPU 显存 ≥ 10GB (14b 模型) 或 ≥ 6GB (8b 模型)
**详细 GPU 部署指南**: 参见 [GPU_DEPLOYMENT.md](GPU_DEPLOYMENT.md)
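如果尚未安装 NVIDIA Container Toolkit,下面给出一个安装示例(假设 Ubuntu/Debian 环境且 NVIDIA 软件源已配置,具体步骤以 GPU_DEPLOYMENT.md 和 NVIDIA 官方文档为准):
```bash
# 确认宿主机驱动可用
nvidia-smi
# 安装 NVIDIA Container Toolkit(Ubuntu/Debian 示例,假设软件源已配置)
sudo apt-get install -y nvidia-container-toolkit
# 将 NVIDIA 运行时注册到 Docker 并重启
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
# 验证容器内可以看到 GPU(CUDA 镜像标签仅作示例)
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```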
**重要**
- 首次启动需要等待 30-60 秒CPU或 10-20 秒GPUOllama 服务需要初始化
- GPU 模式下推理速度提升 5-10 倍
- 端口 11434 (Ollama) 是可选的,仅在需要外部访问 Ollama 时暴露
- 不暴露 11434 更安全,因为 Ollama API 没有身份验证
#### 步骤 8: 验证部署
```bash
# 等待服务启动(约 30-60 秒)
sleep 60
# 健康检查
curl http://localhost:8010/health
# 应该返回:
# {"status":"ok","version":"0.1.0"}
# 检查 Ollama 服务
curl http://localhost:11434/api/tags
# 检查可用模型
curl http://localhost:8010/models
# 访问 Web 界面
# 浏览器打开: http://<服务器IP>:8010/ui/opro.html
```
---
## 方案 B: 分离部署
### 前置要求
#### 在目标服务器上
1. **Docker** 已安装
2. **Ollama** 服务已安装并运行
3. 已拉取所需的 Ollama 模型:
- `qwen3:14b` (主模型)
- `qwen3-embedding:4b` (嵌入模型)
### 部署步骤
#### 步骤 1: 构建应用镜像
```bash
# 在开发机器上构建
docker build -t system-prompt-optimizer:latest .
# 导出镜像
docker save -o system-prompt-optimizer.tar system-prompt-optimizer:latest
```
#### 步骤 2: 传输并加载
```bash
# 传输到目标服务器
scp system-prompt-optimizer.tar user@server:/path/
# 在目标服务器上加载
docker load -i system-prompt-optimizer.tar
```
#### 步骤 3: 启动服务
```bash
# 使用 Docker Compose
docker-compose up -d
# 或使用 Docker 命令
docker run -d \
--name system-prompt-optimizer \
-p 8010:8010 \
-e OLLAMA_HOST=http://host.docker.internal:11434 \
-v $(pwd)/outputs:/app/outputs \
--add-host host.docker.internal:host-gateway \
--restart unless-stopped \
system-prompt-optimizer:latest
```
## 配置说明
### 环境变量
`docker-compose.yml``docker run` 命令中可以配置以下环境变量:
- `OLLAMA_HOST`: Ollama 服务地址(默认: `http://host.docker.internal:11434`)
- `PYTHONUNBUFFERED`: Python 输出缓冲(默认: `1`)
### 端口映射
- **8010**: Web 界面和 API 端口(必需)
- **11434**: Ollama API 端口(可选,仅用于调试或外部访问 Ollama)
### 数据持久化
- `./outputs`: 用户反馈日志存储目录(映射到容器内 `/app/outputs`)
## 故障排查
### 0. Docker 守护进程连接错误
**问题**: 运行 `docker` 命令时提示 "Cannot connect to the Docker daemon"
**症状**:
```
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```
**解决方案**:
**方法 1: 检查 Docker 服务状态**
```bash
# 检查 Docker 是否运行
sudo systemctl status docker
# 如果未运行,启动它
sudo systemctl start docker
# 设置开机自启
sudo systemctl enable docker
```
**方法 2: 添加用户到 docker 组(推荐)**
```bash
# 将当前用户添加到 docker 组
sudo usermod -aG docker $USER
# 应用组变更(需要重新登录或使用 newgrp)
newgrp docker
# 或者直接注销并重新登录
# 验证
docker info
```
**方法 3: 修复 Docker socket 权限**
```bash
# 检查 socket 权限
ls -l /var/run/docker.sock
# 修复权限
sudo chown root:docker /var/run/docker.sock
sudo chmod 660 /var/run/docker.sock
```
**方法 4: 临时使用 sudo**
```bash
# 如果上述方法不可行,使用 sudo 运行 Docker 命令
sudo docker load -i system-prompt-optimizer-allinone.tar
sudo docker run -d --name system-prompt-optimizer ...
```
**验证修复**:
```bash
# 应该能正常显示 Docker 信息
docker info
# 应该能看到当前用户在 docker 组中
groups | grep docker
```
---
### 1. 无法连接 Ollama 服务
**问题**: 容器内无法访问宿主机的 Ollama 服务
**解决方案**:
```bash
# 确保使用了 --add-host 参数
--add-host host.docker.internal:host-gateway
# 或者直接使用宿主机 IP
-e OLLAMA_HOST=http://192.168.1.100:11434
```
### 2. 模型不可用All-in-One 部署)
**问题**: 容器内模型未正确加载
**解决方案**:
```bash
# 进入容器检查
docker exec -it system-prompt-optimizer bash
# 在容器内检查模型
ollama list
# 如果模型不存在,检查模型目录
ls -la /root/.ollama/models/
# 退出容器
exit
```
如果模型确实丢失,可能需要重新构建镜像。
### 3. 模型不可用(分离部署)
**问题**: Ollama 模型未安装
**解决方案**:
```bash
# 在宿主机上拉取模型
ollama pull qwen3:14b
ollama pull qwen3-embedding:4b
# 验证模型已安装
ollama list
```
### 4. 容器启动失败
**问题**: 端口被占用或权限问题
**解决方案**:
```bash
# 检查端口占用
netstat -tulpn | grep 8010
netstat -tulpn | grep 11434
# 更换端口All-in-One 需要两个端口)
docker run -p 8011:8010 -p 11435:11434 ...
# 查看容器日志
docker logs system-prompt-optimizer
```
### 5. 性能问题
**问题**: 生成速度慢
**解决方案**:
- 确保 Ollama 使用 GPU 加速
- 使用更小的模型(如 `qwen3:4b`)
- 调整 `config.py` 中的 `GENERATION_POOL_SIZE`
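例如,可以在 `config.py` 中调小候选池相关参数(以下数值仅为示例,并非推荐配置):
```python
# config.py 片段:减小候选池以换取更快的生成速度(示例值)
GENERATION_POOL_SIZE = 6   # 每轮生成的候选数量(默认 10)
TOP_K = 3                  # 聚类筛选后返回给用户的候选数量(默认 5)
```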
## 更新部署
```bash
# 1. 在开发机器上重新构建镜像
docker build -t system-prompt-optimizer:latest .
# 2. 导出新镜像
docker save -o system-prompt-optimizer-new.tar system-prompt-optimizer:latest
# 3. 传输到服务器并加载
docker load -i system-prompt-optimizer-new.tar
# 4. 重启服务
docker-compose down
docker-compose up -d
# 或使用 docker 命令
docker stop system-prompt-optimizer
docker rm system-prompt-optimizer
docker run -d ... # 使用相同的启动命令
```
## 安全建议
1. **网络隔离**: 如果不需要外部访问,只绑定到 localhost
```bash
-p 127.0.0.1:8010:8010
```
2. **防火墙**: 配置防火墙规则限制访问
```bash
# 只允许特定 IP 访问
iptables -A INPUT -p tcp --dport 8010 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 8010 -j DROP
```
3. **日志管理**: 定期清理日志文件
```bash
# 限制 Docker 日志大小
docker run --log-opt max-size=10m --log-opt max-file=3 ...
```
## 联系支持
如有问题,请查看:
- 应用日志: `docker logs system-prompt-optimizer`
- Ollama 日志: `journalctl -u ollama -f`
- API 文档: http://localhost:8010/docs

38
Dockerfile Normal file

@@ -0,0 +1,38 @@
FROM python:3.10-slim
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
&& rm -rf /var/lib/apt/lists/*
# Copy requirements file
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY _qwen_xinference_demo/ ./_qwen_xinference_demo/
COPY frontend/ ./frontend/
COPY config.py .
# Create outputs directory
RUN mkdir -p outputs
# Expose port
EXPOSE 8010
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV OLLAMA_HOST=http://host.docker.internal:11434
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8010/health || exit 1
# Run the application
CMD ["uvicorn", "_qwen_xinference_demo.api:app", "--host", "0.0.0.0", "--port", "8010"]

58
Dockerfile.allinone Normal file

@@ -0,0 +1,58 @@
FROM --platform=linux/amd64 python:3.10-slim
# Set working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
curl \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Install Ollama manually for amd64
# Copy pre-downloaded Ollama binary to avoid slow downloads during build
# Using v0.13.1 (latest stable as of Dec 2024)
COPY ollama-linux-amd64.tgz /tmp/ollama-linux-amd64.tgz
RUN tar -C /usr -xzf /tmp/ollama-linux-amd64.tgz \
&& rm /tmp/ollama-linux-amd64.tgz
# Copy requirements file
COPY requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY _qwen_xinference_demo/ ./_qwen_xinference_demo/
COPY frontend/ ./frontend/
COPY config.py .
# Create necessary directories
RUN mkdir -p outputs /root/.ollama
# Copy pre-downloaded Ollama models
# This includes qwen3:14b and qwen3-embedding:4b
COPY ollama-models/ /root/.ollama/
# Expose ports
EXPOSE 8010 11434
# Set environment variables
ENV PYTHONUNBUFFERED=1
ENV OLLAMA_HOST=http://localhost:11434
# Enable GPU support for Ollama (will auto-detect NVIDIA GPU if available)
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
# Copy startup script
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
# Health check
# Only check the web application, not Ollama (internal service)
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
CMD curl -f http://localhost:8010/health || exit 1
# Run the startup script
ENTRYPOINT ["/docker-entrypoint.sh"]

117
QUICK_START.md Normal file

@@ -0,0 +1,117 @@
# 快速开始指南
## 离线部署All-in-One 方案)
### 在开发机器上(有外网)
```bash
# 1. 下载模型
ollama pull qwen3:14b
ollama pull qwen3-embedding:4b
# 2. 导出模型
./export-ollama-models.sh
# 3. 构建并导出 Docker 镜像
./build-allinone.sh
# 4. 传输到目标服务器
# 文件: system-prompt-optimizer-allinone.tar (约 10-20GB)
scp system-prompt-optimizer-allinone.tar user@server:/path/
```
### 在目标服务器上(无外网)
```bash
# 1. 加载镜像
docker load -i system-prompt-optimizer-allinone.tar
# 2. 启动服务
docker run -d \
--name system-prompt-optimizer \
-p 8010:8010 \
-p 11434:11434 \
-v $(pwd)/outputs:/app/outputs \
--restart unless-stopped \
system-prompt-optimizer:allinone
# 3. 等待启动(约 60 秒)
sleep 60
# 4. 验证
curl http://localhost:8010/health
curl http://localhost:11434/api/tags
# 5. 访问界面
# http://<服务器IP>:8010/ui/opro.html
```
## 常用命令
```bash
# 查看日志
docker logs -f system-prompt-optimizer
# 重启服务
docker restart system-prompt-optimizer
# 停止服务
docker stop system-prompt-optimizer
# 删除容器
docker rm -f system-prompt-optimizer
# 进入容器
docker exec -it system-prompt-optimizer bash
# 检查模型
docker exec -it system-prompt-optimizer ollama list
```
## 端口说明
- **8010**: Web 界面和 API
- **11434**: Ollama 服务(仅 All-in-One 方案需要暴露)
## 文件说明
- `system-prompt-optimizer-allinone.tar`: 完整镜像(10-20GB)
- `outputs/`: 用户反馈日志目录
## 故障排查
### 服务无法启动
```bash
# 查看日志
docker logs system-prompt-optimizer
# 检查端口占用
netstat -tulpn | grep 8010
netstat -tulpn | grep 11434
```
### 模型不可用
```bash
# 进入容器检查
docker exec -it system-prompt-optimizer ollama list
# 应该看到:
# qwen3:14b
# qwen3-embedding:4b
```
### 性能慢
- 确保服务器有足够的 RAM(建议 16GB+)
- 如果有 GPU使用支持 GPU 的 Docker 运行时
- 调整 `config.py` 中的 `GENERATION_POOL_SIZE`
## 更多信息
详细文档请参考:
- `DEPLOYMENT.md`: 完整部署指南
- `README.md`: 项目说明
- http://localhost:8010/docs: API 文档

146
README.md

@@ -1,58 +1,65 @@
# OPRO Prompt Optimizer # System Prompt Generator
## 功能概述 ## 功能概述
OPRO (Optimization by PROmpting) 是一个基于大语言模型的提示词优化系统。本项目实现了真正的 OPRO 算法通过迭代优化系统指令System Instructions来提升 LLM 在特定任务上的性能 是一个基于大语言模型的系统提示词System Prompt生成和迭代优化工具。通过简单的任务描述自动生成高质量的系统指令并支持基于用户选择的迭代改进
### 核心功能 ### 核心功能
- **系统指令优化**:使用 LLM 作为优化器,基于历史性能轨迹生成更优的系统指令 - **智能指令生成**:根据任务描述自动生成多个高质量的系统指令候选
- **多轮迭代优化**:支持多轮优化,每轮基于前一轮的性能反馈生成新的候选指令 - **迭代式改进**:基于用户选择的指令生成改进版本,避免被拒绝的方向
- **角色定义格式**:所有生成的指令都以角色定义开头(如"你是一个..."),符合最佳实践
- **智能候选选择**:通过语义聚类和多样性选择,从大量候选中筛选出最具代表性的指令 - **智能候选选择**:通过语义聚类和多样性选择,从大量候选中筛选出最具代表性的指令
- **性能评估**:支持自定义测试用例对系统指令进行自动评估 - **会话管理**:支持多个任务的并行管理和历史记录
- **会话管理**:支持多个优化任务的并行管理和历史记录 - **全面覆盖要求**:生成的指令全面覆盖任务的所有要求和细节,而非仅追求风格多样性
### 用户界面 ### 用户界面
- **现代化聊天界面**:类似 Google Gemini 的简洁设计 - **现代化聊天界面**:类似 Google Gemini 的简洁设计
- **侧边栏会话管理**:可折叠的侧边栏,支持多会话切换 - **侧边栏会话管理**:可折叠的侧边栏,支持多会话切换
- **实时优化反馈**:每轮优化生成 3-5 个候选指令,用户可选择继续优化或执行 - **实时生成反馈**:每轮生成 5 个候选指令,用户可选择继续优化或复制使用
- **模型选择**:支持在界面中选择不同的 LLM 模型 - **模型选择**:支持在界面中选择不同的 LLM 模型
## 主要优化改进 ## 核心特性
### 1. 真正的 OPRO 实现 ### 1. 简单直观的工作流程
原始代码实现的是查询重写Query Rewriting而非真正的 OPRO。我们添加了完整的 OPRO 功能 不同于复杂的 OPRO 算法(需要测试用例和自动评估),本工具采用简单直观的迭代改进方式
- **系统指令生成**`generate_system_instruction_candidates()` - 生成多样化的系统指令候选 - **初始生成**输入任务描述 → 生成 5 个全面的系统指令候选
- **性能评估**`evaluate_system_instruction()` - 基于测试用例评估指令性能 - **迭代改进**:选择喜欢的指令 → 生成基于该指令的改进版本,同时避免被拒绝的方向
- **轨迹优化**:基于历史 (instruction, score) 轨迹生成更优指令 - **无需评分**:不需要测试用例或性能评分,完全基于用户偏好进行改进
- **元提示工程**:专门设计的元提示用于指导 LLM 生成和优化系统指令
### 2. 性能优化 ### 2. 高质量指令生成
- **候选池大小优化**:从 20 个候选减少到 10 个,速度提升约 2 倍 - **角色定义格式**:所有指令以"你是一个..."开头,符合系统提示词最佳实践
- **智能聚类选择**:使用 AgglomerativeClustering 从候选池中选择最具多样性的 Top-K - **全面覆盖要求**:生成的指令全面覆盖任务的所有要求和细节
- **清晰可执行**:指令清晰、具体、可执行,包含必要的行为规范和输出格式
- **简体中文**:所有生成的指令使用简体中文
### 3. 性能优化
- **候选池大小优化**:生成 10 个候选,通过聚类选择 5 个最具多样性的
- **智能聚类选择**:使用 AgglomerativeClustering 从候选池中选择最具代表性的指令
- **嵌入服务回退**Xinference → Ollama 自动回退机制,确保服务可用性 - **嵌入服务回退**Xinference → Ollama 自动回退机制,确保服务可用性
### 3. API 架构改进 ### 4. API 架构
- **新增 OPRO 端点** - **核心端点**
- `POST /opro/create` - 创建 OPRO 优化任务 - `POST /opro/create` - 创建任务
- `POST /opro/generate_and_evaluate` - 生成并自动评估候选 - `POST /opro/generate_and_evaluate` - 生成初始候选
- `POST /opro/execute` - 执行系统指令 - `POST /opro/refine` - 基于用户选择进行迭代改进
- `GET /opro/runs` - 获取所有优化任务 - `GET /opro/sessions` - 获取所有会话
- `GET /opro/run/{run_id}` - 获取特定任务详情 - `GET /opro/runs` - 获取所有任务
- **会话状态管理**完整的 OPRO 运行状态跟踪(轨迹、测试用例、迭代次数) - **会话管理**支持多会话、多任务的并行管理
- **向后兼容**:保留原有查询重写功能,标记为 `opro-legacy` - **向后兼容**:保留原有查询重写功能,标记为 `opro-legacy`
### 4. 前端界面重构 ### 5. 前端界面
- **Gemini 风格设计**:简洁的白色/灰色配色,圆角设计,微妙的阴影效果 - **Gemini 风格设计**:简洁的白色/灰色配色,圆角设计,微妙的阴影效果
- **可折叠侧边栏**:默认折叠,支持会话列表管理 - **可折叠侧边栏**:默认折叠,支持会话列表管理
- **多行输入框**:支持多行文本输入,底部工具栏包含模型选择器 - **多行输入框**:支持多行文本输入,底部工具栏包含模型选择器
- **候选指令卡片**:每个候选显示编号内容、分数,提供"继续优化""复制"、"执行"按钮 - **候选指令卡片**:每个候选显示编号内容,提供"继续优化""复制"按钮
- **简体中文界面**:所有 UI 文本和生成的指令均使用简体中文 - **简体中文界面**:所有 UI 文本和生成的指令均使用简体中文
## 快速开始 ## 快速开始
@@ -97,19 +104,18 @@ uvicorn _qwen_xinference_demo.api:app --host 0.0.0.0 --port 8010
### 访问界面 ### 访问界面
- **OPRO 优化界面**http://127.0.0.1:8010/ui/opro.html - **系统指令生成器**http://127.0.0.1:8010/ui/opro.html
- **传统三栏界面**http://127.0.0.1:8010/ui/ - **传统三栏界面**http://127.0.0.1:8010/ui/
- **API 文档**http://127.0.0.1:8010/docs - **API 文档**http://127.0.0.1:8010/docs
- **OpenAPI JSON**http://127.0.0.1:8010/openapi.json - **OpenAPI JSON**http://127.0.0.1:8010/openapi.json
### 使用示例 ### 使用示例
1. **创建新会话**:在 OPRO 界面点击"新建会话"或侧边栏的 + 按钮 1. **创建新会话**:在界面点击"新建会话"或侧边栏的 + 按钮
2. **输入任务描述**:例如"将中文翻译成英文" 2. **输入任务描述**:例如"帮我写一个专业的营销文案生成助手"
3. **查看候选指令**:系统生成 3-5 个优化的系统指令 3. **查看候选指令**:系统生成 5 个全面的系统指令,每个都以角色定义开头
4. **继续优化**:点击"继续优化"进行下一轮迭代 4. **选择并改进**:点击喜欢的指令上的"继续优化"按钮,生成基于该指令的改进版本
5. **执行指令**:点击"执行此指令"测试指令效果 5. **复制使用**:点击"复制"按钮将指令复制到剪贴板,用于你的应用中
6. **复制指令**:点击"复制"按钮将指令复制到剪贴板
## 配置说明 ## 配置说明
@@ -123,8 +129,8 @@ OLLAMA_HOST = "http://127.0.0.1:11434"
DEFAULT_CHAT_MODEL = "qwen3:8b" DEFAULT_CHAT_MODEL = "qwen3:8b"
DEFAULT_EMBED_MODEL = "qwen3-embedding:4b" DEFAULT_EMBED_MODEL = "qwen3-embedding:4b"
# OPRO 优化参数 # 生成参数
GENERATION_POOL_SIZE = 10 # 生成候选池大小 GENERATION_POOL_SIZE = 10 # 生成候选池大小(生成10个,聚类选择5个)
TOP_K = 5 # 返回给用户的候选数量 TOP_K = 5 # 返回给用户的候选数量
CLUSTER_DISTANCE_THRESHOLD = 0.15 # 聚类距离阈值 CLUSTER_DISTANCE_THRESHOLD = 0.15 # 聚类距离阈值
@@ -157,11 +163,30 @@ XINFERENCE_EMBED_URL = "http://127.0.0.1:9997/models/bge-base-zh/embed"
## API 端点 ## API 端点
### OPRO 相关(推荐使用) ### 会话管理
- `POST /opro/session/create` - 创建新会话
- `GET /opro/sessions` - 获取所有会话
- `GET /opro/session/{session_id}` - 获取会话详情
### 任务管理
- `POST /opro/create` - 在会话中创建新任务
- 请求体:`{"session_id": "xxx", "task_description": "任务描述", "model_name": "qwen3:8b"}`
- 返回:`{"run_id": "xxx", "task_description": "...", "iteration": 0}`
### 指令生成
- `POST /opro/generate_and_evaluate` - 生成初始候选指令
- 请求体:`{"run_id": "xxx", "top_k": 5, "pool_size": 10}`
- 返回:`{"candidates": [{"instruction": "...", "score": null}, ...]}`
- `POST /opro/refine` - 基于用户选择进行迭代改进
- 请求体:`{"run_id": "xxx", "selected_instruction": "用户选择的指令", "rejected_instructions": ["被拒绝的指令1", "被拒绝的指令2"]}`
- 返回:`{"candidates": [{"instruction": "...", "score": null}, ...], "iteration": 1}`
### 任务查询
- `POST /opro/create` - 创建优化任务
- `POST /opro/generate_and_evaluate` - 生成并评估候选
- `POST /opro/execute` - 执行系统指令
- `GET /opro/runs` - 获取所有任务 - `GET /opro/runs` - 获取所有任务
- `GET /opro/run/{run_id}` - 获取任务详情 - `GET /opro/run/{run_id}` - 获取任务详情
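以下是一个按上述请求体串联调用的示例(仅作演示,`<session_id>`、`<run_id>` 等为占位符,需替换为实际返回值):
```bash
# 1. 创建会话
curl -X POST 'http://127.0.0.1:8010/opro/session/create'
# 2. 在会话中创建任务
curl -X POST http://127.0.0.1:8010/opro/create \
  -H 'Content-Type: application/json' \
  -d '{"session_id": "<session_id>", "task_description": "帮我写一个专业的营销文案生成助手", "model_name": "qwen3:14b"}'
# 3. 生成初始候选指令
curl -X POST http://127.0.0.1:8010/opro/generate_and_evaluate \
  -H 'Content-Type: application/json' \
  -d '{"run_id": "<run_id>", "top_k": 5, "pool_size": 10}'
# 4. 基于用户选择进行迭代改进
curl -X POST http://127.0.0.1:8010/opro/refine \
  -H 'Content-Type: application/json' \
  -d '{"run_id": "<run_id>", "selected_instruction": "<选中的指令>", "rejected_instructions": ["<未选中的指令>"]}'
```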
@@ -181,6 +206,37 @@ XINFERENCE_EMBED_URL = "http://127.0.0.1:9997/models/bge-base-zh/embed"
详细 API 文档请访问http://127.0.0.1:8010/docs 详细 API 文档请访问http://127.0.0.1:8010/docs
## 工作原理
### 初始生成流程
1. 用户输入任务描述(如"帮我写一个专业的营销文案生成助手")
2. 系统使用 LLM 生成 10 个候选指令
3. 通过语义嵌入和聚类算法选择 5 个最具多样性的候选
4. 所有候选都以角色定义开头,全面覆盖任务要求
### 迭代改进流程
1. 用户选择喜欢的指令(如候选 #3)
2. 系统记录被拒绝的指令(候选 #1, #2, #4, #5)
3. 向 LLM 发送改进请求:"基于选中的指令生成改进版本,避免被拒绝指令的方向"
4. 生成新的 10 个候选,聚类选择 5 个返回
5. 用户可以继续迭代或复制使用
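下面用一段示意代码说明"嵌入 + 聚类 + 取代表"的筛选步骤(仅为示意,项目的实际实现是 `user_prompt_optimizer.py` 中的 `cluster_and_select()`,其中 `embed()` 代表对 Ollama/Xinference 嵌入服务的调用):
```python
# 示意:对候选指令做语义嵌入和层次聚类,每个簇取一个代表(非项目实际实现)
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def select_diverse(candidates, embed, top_k=5, distance_threshold=0.15):
    """distance_threshold 对应 config.py 中的 CLUSTER_DISTANCE_THRESHOLD。"""
    vectors = np.array([embed(c) for c in candidates])
    labels = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=distance_threshold,
        metric="cosine",
        linkage="average",
    ).fit_predict(vectors)
    picked, seen = [], set()
    for cand, label in zip(candidates, labels):
        if label not in seen:   # 每个簇只保留第一个候选作为代表
            seen.add(label)
            picked.append(cand)
    return picked[:top_k]
```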
### 与 OPRO 的区别
**OPRO原始算法**
- 需要测试用例(如数学题的正确答案)
- 自动评分(如准确率 0.73, 0.81)
- 基于性能轨迹优化
- 适用于有明确评估标准的任务
**本工具(简单迭代改进)**
- 不需要测试用例
- 不需要自动评分
- 基于用户偏好改进
- 适用于任意通用任务
## 常见问题 ## 常见问题
### 1. 无法连接 Ollama 服务 ### 1. 无法连接 Ollama 服务
@@ -198,11 +254,17 @@ ollama serve
### 3. 生成速度慢 ### 3. 生成速度慢
- 调整 `GENERATION_POOL_SIZE` 减少候选数量 - 调整 `GENERATION_POOL_SIZE` 减少候选数量(如改为 6返回 3 个)
- 使用更小的模型(如 `qwen3:4b` - 使用更小的模型(如 `qwen3:4b`
- 确保 Ollama 使用 GPU 加速 - 确保 Ollama 使用 GPU 加速
### 4. 界面显示异常 ### 4. 生成的指令质量不高
- 提供更详细的任务描述
- 多次迭代改进,选择最好的继续优化
- 尝试不同的模型
### 5. 界面显示异常
硬刷新浏览器缓存: 硬刷新浏览器缓存:
- **Mac**: `Cmd + Shift + R` - **Mac**: `Cmd + Shift + R`

_qwen_xinference_demo/api.py

@@ -14,6 +14,7 @@ from .opro.session_state import USER_FEEDBACK_LOG
# True OPRO session management # True OPRO session management
from .opro.session_state import ( from .opro.session_state import (
create_opro_session, get_opro_session, list_opro_sessions,
create_opro_run, get_opro_run, update_opro_iteration, create_opro_run, get_opro_run, update_opro_iteration,
add_opro_evaluation, get_opro_trajectory, set_opro_test_cases, add_opro_evaluation, get_opro_trajectory, set_opro_test_cases,
complete_opro_run, list_opro_runs complete_opro_run, list_opro_runs
@@ -23,7 +24,8 @@ from .opro.session_state import (
from .opro.user_prompt_optimizer import generate_candidates from .opro.user_prompt_optimizer import generate_candidates
from .opro.user_prompt_optimizer import ( from .opro.user_prompt_optimizer import (
generate_system_instruction_candidates, generate_system_instruction_candidates,
evaluate_system_instruction evaluate_system_instruction,
refine_instruction_candidates
) )
from .opro.ollama_client import call_qwen from .opro.ollama_client import call_qwen
@@ -122,6 +124,7 @@ class CreateOPRORunReq(BaseModel):
task_description: str task_description: str
test_cases: Optional[List[TestCase]] = None test_cases: Optional[List[TestCase]] = None
model_name: Optional[str] = None model_name: Optional[str] = None
session_id: Optional[str] = None # Optional session to associate with
class OPROIterateReq(BaseModel): class OPROIterateReq(BaseModel):
@@ -157,6 +160,15 @@ class OPROExecuteReq(BaseModel):
model_name: Optional[str] = None model_name: Optional[str] = None
class OPRORefineReq(BaseModel):
"""Request to refine based on selected instruction (simple iterative refinement, NOT OPRO)."""
run_id: str
selected_instruction: str
rejected_instructions: List[str]
top_k: Optional[int] = None
pool_size: Optional[int] = None
# ============================================================================ # ============================================================================
# LEGACY ENDPOINTS (Query Rewriting - NOT true OPRO) # LEGACY ENDPOINTS (Query Rewriting - NOT true OPRO)
# ============================================================================ # ============================================================================
@@ -360,12 +372,62 @@ def set_model(req: SetModelReq):
# TRUE OPRO ENDPOINTS (System Instruction Optimization) # TRUE OPRO ENDPOINTS (System Instruction Optimization)
# ============================================================================ # ============================================================================
# Session Management
@app.post("/opro/session/create", tags=["opro-true"])
def opro_create_session(session_name: str = None):
"""
Create a new OPRO session that can contain multiple runs.
"""
session_id = create_opro_session(session_name=session_name)
session = get_opro_session(session_id)
return ok({
"session_id": session_id,
"session_name": session["session_name"],
"num_runs": len(session["run_ids"])
})
@app.get("/opro/sessions", tags=["opro-true"])
def opro_list_sessions():
"""
List all OPRO sessions.
"""
sessions = list_opro_sessions()
return ok({"sessions": sessions})
@app.get("/opro/session/{session_id}", tags=["opro-true"])
def opro_get_session(session_id: str):
"""
Get detailed information about an OPRO session.
"""
session = get_opro_session(session_id)
if not session:
raise AppException(404, "Session not found", "SESSION_NOT_FOUND")
# Get all runs in this session
runs = list_opro_runs(session_id=session_id)
return ok({
"session_id": session_id,
"session_name": session["session_name"],
"created_at": session["created_at"],
"num_runs": len(session["run_ids"]),
"runs": runs
})
# Run Management
@app.post("/opro/create", tags=["opro-true"]) @app.post("/opro/create", tags=["opro-true"])
def opro_create_run(req: CreateOPRORunReq): def opro_create_run(req: CreateOPRORunReq):
""" """
Create a new OPRO optimization run. Create a new OPRO optimization run.
This starts a new system instruction optimization process for a given task. This starts a new system instruction optimization process for a given task.
Optionally can be associated with a session.
""" """
# Convert test cases from Pydantic models to tuples # Convert test cases from Pydantic models to tuples
test_cases = None test_cases = None
@@ -375,7 +437,8 @@ def opro_create_run(req: CreateOPRORunReq):
run_id = create_opro_run( run_id = create_opro_run(
task_description=req.task_description, task_description=req.task_description,
test_cases=test_cases, test_cases=test_cases,
model_name=req.model_name model_name=req.model_name,
session_id=req.session_id
) )
run = get_opro_run(run_id) run = get_opro_run(run_id)
@@ -385,7 +448,8 @@ def opro_create_run(req: CreateOPRORunReq):
"task_description": run["task_description"], "task_description": run["task_description"],
"num_test_cases": len(run["test_cases"]), "num_test_cases": len(run["test_cases"]),
"iteration": run["iteration"], "iteration": run["iteration"],
"status": run["status"] "status": run["status"],
"session_id": run.get("session_id")
}) })
@@ -433,15 +497,14 @@ def opro_evaluate(req: OPROEvaluateReq):
Evaluate a system instruction on the test cases. Evaluate a system instruction on the test cases.
This scores the instruction and updates the performance trajectory. This scores the instruction and updates the performance trajectory.
If no test cases are defined, uses a default score of 0.5 to indicate user selection.
""" """
run = get_opro_run(req.run_id) run = get_opro_run(req.run_id)
if not run: if not run:
raise AppException(404, "OPRO run not found", "RUN_NOT_FOUND") raise AppException(404, "OPRO run not found", "RUN_NOT_FOUND")
if not run["test_cases"]: # Evaluate the instruction if test cases exist
raise AppException(400, "No test cases defined for this run", "NO_TEST_CASES") if run["test_cases"] and len(run["test_cases"]) > 0:
# Evaluate the instruction
try: try:
score = evaluate_system_instruction( score = evaluate_system_instruction(
system_instruction=req.instruction, system_instruction=req.instruction,
@@ -450,6 +513,10 @@ def opro_evaluate(req: OPROEvaluateReq):
) )
except Exception as e: except Exception as e:
raise AppException(500, f"Evaluation failed: {e}", "EVALUATION_ERROR") raise AppException(500, f"Evaluation failed: {e}", "EVALUATION_ERROR")
else:
# No test cases - use default score to indicate user selection
# This allows the trajectory to track which instructions the user preferred
score = 0.5
# Add to trajectory # Add to trajectory
add_opro_evaluation(req.run_id, req.instruction, score) add_opro_evaluation(req.run_id, req.instruction, score)
@@ -462,7 +529,8 @@ def opro_evaluate(req: OPROEvaluateReq):
"instruction": req.instruction, "instruction": req.instruction,
"score": score, "score": score,
"best_score": run["best_score"], "best_score": run["best_score"],
"is_new_best": score == run["best_score"] and score > 0 "is_new_best": score == run["best_score"] and score > 0,
"has_test_cases": len(run["test_cases"]) > 0
}) })
@@ -638,3 +706,44 @@ def opro_execute(req: OPROExecuteReq):
}) })
except Exception as e: except Exception as e:
raise AppException(500, f"Execution failed: {e}", "EXECUTION_ERROR") raise AppException(500, f"Execution failed: {e}", "EXECUTION_ERROR")
@app.post("/opro/refine", tags=["opro-true"])
def opro_refine(req: OPRORefineReq):
"""
Simple iterative refinement based on user selection (NOT OPRO).
This generates new candidates based on the selected instruction while avoiding rejected ones.
No scoring, no trajectory - just straightforward refinement based on user preference.
"""
run = get_opro_run(req.run_id)
if not run:
raise AppException(404, "OPRO run not found", "RUN_NOT_FOUND")
top_k = req.top_k or config.TOP_K
pool_size = req.pool_size or config.GENERATION_POOL_SIZE
try:
candidates = refine_instruction_candidates(
task_description=run["task_description"],
selected_instruction=req.selected_instruction,
rejected_instructions=req.rejected_instructions,
top_k=top_k,
pool_size=pool_size,
model_name=run["model_name"]
)
# Update iteration counter
update_opro_iteration(req.run_id, candidates)
# Get updated run info
run = get_opro_run(req.run_id)
return ok({
"run_id": req.run_id,
"iteration": run["iteration"],
"candidates": [{"instruction": c, "score": None} for c in candidates],
"task_description": run["task_description"]
})
except Exception as e:
raise AppException(500, f"Refinement failed: {e}", "REFINEMENT_ERROR")

_qwen_xinference_demo/opro/prompt_utils.py

@@ -56,14 +56,15 @@ def generate_initial_system_instruction_candidates(task_description: str, pool_s
目标任务描述: 目标任务描述:
{task_description} {task_description}
请根据以上任务,生成 {pool_size} 条高质量、风格各异"System Instruction"候选指令。 请根据以上任务,生成 {pool_size} 条高质量、全面"System Instruction"候选指令。
要求: 要求:
1. 每条指令必须有明显不同的风格和侧重点 1. 每条指令必须以角色定义开头(例如:"你是一个...""你是..."等)
2. 覆盖不同的实现策略(例如:简洁型、详细型、示例型、角色扮演型、步骤型等) 2. 每条指令必须全面覆盖任务的所有要求和细节
3. 这些指令应指导LLM的行为和输出格式以最大化任务性能 3. 指令应清晰、具体、可执行能够有效指导LLM完成任务
4. 每条指令单独成行,不包含编号或额外说明 4. 确保指令包含必要的行为规范、输出格式、注意事项等
5. 所有生成的指令必须使用简体中文 5. 每条指令单独成行,不包含编号或额外说明
6. 所有生成的指令必须使用简体中文
生成 {pool_size} 条指令: 生成 {pool_size} 条指令:
""" """
@@ -120,11 +121,68 @@ def generate_optimized_system_instruction(
然后,生成 {pool_size} 条新的、有潜力超越 {highest_score:.4f} 分的System Instruction。 然后,生成 {pool_size} 条新的、有潜力超越 {highest_score:.4f} 分的System Instruction。
要求: 要求:
1. 每条指令必须有明显不同的改进策略 1. 每条指令必须以角色定义开头(例如:"你是一个...""你是..."等)
2. 结合高分指令的优点,避免低分指令的缺陷 2. 每条指令必须全面覆盖任务的所有要求和细节
3. 探索新的优化方向和表达方式 3. 结合高分指令的优点,避免低分指令的缺陷
4. 每条指令单独成行,不包含编号或额外说明 4. 指令应清晰、具体、可执行能够有效指导LLM完成任务
5. 所有生成的指令必须使用简体中文 5. 每条指令单独成行,不包含编号或额外说明
6. 所有生成的指令必须使用简体中文
生成 {pool_size} 条优化后的指令: 生成 {pool_size} 条优化后的指令:
""" """
def refine_based_on_selection(
task_description: str,
selected_instruction: str,
rejected_instructions: List[str],
pool_size: int = None
) -> str:
"""
Simple refinement: Generate variations based on selected instruction while avoiding rejected ones.
This is NOT OPRO - it's straightforward iterative refinement based on user preference.
No scoring, no trajectory, just: "I like this one, give me more like it (but not like those)."
Args:
task_description: Description of the task
selected_instruction: The instruction the user selected
rejected_instructions: The instructions the user didn't select
pool_size: Number of new candidates to generate
Returns:
Prompt for generating refined candidates
"""
import config
pool_size = pool_size or config.GENERATION_POOL_SIZE
rejected_text = ""
if rejected_instructions:
rejected_formatted = "\n".join(f"- {inst}" for inst in rejected_instructions)
rejected_text = f"""
**用户未选择的指令(避免这些方向):**
{rejected_formatted}
"""
return f"""
你是一个"System Prompt 改进助手"
目标任务描述:
{task_description}
**用户选择的指令(基于此改进):**
{selected_instruction}
{rejected_text}
请基于用户选择的指令,生成 {pool_size} 条改进版本。
要求:
1. 每条指令必须以角色定义开头(例如:"你是一个...""你是..."等)
2. 保留用户选择指令的核心优点
3. 每条指令必须全面覆盖任务的所有要求和细节
4. 指令应清晰、具体、可执行能够有效指导LLM完成任务
5. 避免与未选择指令相似的方向
6. 每条指令单独成行,不包含编号或额外说明
7. 所有生成的指令必须使用简体中文
生成 {pool_size} 条改进后的指令:
"""

_qwen_xinference_demo/opro/session_state.py

@@ -66,10 +66,57 @@ def set_session_model(sid: str, model_name: str | None):
# TRUE OPRO SESSION MANAGEMENT # TRUE OPRO SESSION MANAGEMENT
# ============================================================================ # ============================================================================
# Session storage (contains multiple runs)
OPRO_SESSIONS = {}
def create_opro_session(session_name: str = None) -> str:
"""
Create a new OPRO session that can contain multiple runs.
Args:
session_name: Optional name for the session
Returns:
session_id: Unique identifier for this session
"""
session_id = uuid.uuid4().hex
OPRO_SESSIONS[session_id] = {
"session_name": session_name or "新会话", # Will be updated with first task description
"created_at": uuid.uuid1().time,
"run_ids": [], # List of run IDs in this session
"chat_history": [] # Cross-run chat history
}
return session_id
def get_opro_session(session_id: str) -> Dict[str, Any]:
"""Get OPRO session by ID."""
return OPRO_SESSIONS.get(session_id)
def list_opro_sessions() -> List[Dict[str, Any]]:
"""
List all OPRO sessions with summary information.
Returns:
List of session summaries
"""
return [
{
"session_id": session_id,
"session_name": session["session_name"],
"num_runs": len(session["run_ids"]),
"created_at": session["created_at"]
}
for session_id, session in OPRO_SESSIONS.items()
]
def create_opro_run( def create_opro_run(
task_description: str, task_description: str,
test_cases: List[Tuple[str, str]] = None, test_cases: List[Tuple[str, str]] = None,
model_name: str = None model_name: str = None,
session_id: str = None
) -> str: ) -> str:
""" """
Create a new OPRO optimization run. Create a new OPRO optimization run.
@@ -78,6 +125,7 @@ def create_opro_run(
task_description: Description of the task to optimize for task_description: Description of the task to optimize for
test_cases: List of (input, expected_output) tuples for evaluation test_cases: List of (input, expected_output) tuples for evaluation
model_name: Optional model name to use model_name: Optional model name to use
session_id: Optional session ID to associate this run with
Returns: Returns:
run_id: Unique identifier for this OPRO run run_id: Unique identifier for this OPRO run
@@ -87,6 +135,7 @@ def create_opro_run(
"task_description": task_description, "task_description": task_description,
"test_cases": test_cases or [], "test_cases": test_cases or [],
"model_name": model_name, "model_name": model_name,
"session_id": session_id, # Link to parent session
"iteration": 0, "iteration": 0,
"trajectory": [], # List of (instruction, score) tuples "trajectory": [], # List of (instruction, score) tuples
"best_instruction": None, "best_instruction": None,
@@ -95,6 +144,14 @@ def create_opro_run(
"created_at": uuid.uuid1().time, "created_at": uuid.uuid1().time,
"status": "active" # active, completed, failed "status": "active" # active, completed, failed
} }
# Add run to session if session_id provided
if session_id and session_id in OPRO_SESSIONS:
OPRO_SESSIONS[session_id]["run_ids"].append(run_id)
# Update session name with first task description if it's still default
if OPRO_SESSIONS[session_id]["session_name"] == "新会话" and len(OPRO_SESSIONS[session_id]["run_ids"]) == 1:
OPRO_SESSIONS[session_id]["session_name"] = task_description
return run_id return run_id
@@ -206,13 +263,22 @@ def complete_opro_run(run_id: str):
run["status"] = "completed" run["status"] = "completed"
def list_opro_runs() -> List[Dict[str, Any]]: def list_opro_runs(session_id: str = None) -> List[Dict[str, Any]]:
""" """
List all OPRO runs with summary information. List all OPRO runs with summary information.
Args:
session_id: Optional session ID to filter runs by session
Returns: Returns:
List of run summaries List of run summaries
""" """
runs_to_list = OPRO_RUNS.items()
# Filter by session if provided
if session_id:
runs_to_list = [(rid, r) for rid, r in runs_to_list if r.get("session_id") == session_id]
return [ return [
{ {
"run_id": run_id, "run_id": run_id,
@@ -220,7 +286,8 @@ def list_opro_runs() -> List[Dict[str, Any]]:
"iteration": run["iteration"], "iteration": run["iteration"],
"best_score": run["best_score"], "best_score": run["best_score"],
"num_test_cases": len(run["test_cases"]), "num_test_cases": len(run["test_cases"]),
"status": run["status"] "status": run["status"],
"session_id": run.get("session_id")
} }
for run_id, run in OPRO_RUNS.items() for run_id, run in runs_to_list
] ]

_qwen_xinference_demo/opro/user_prompt_optimizer.py

@@ -11,7 +11,8 @@ from .prompt_utils import (
refine_instruction, refine_instruction,
refine_instruction_with_history, refine_instruction_with_history,
generate_initial_system_instruction_candidates, generate_initial_system_instruction_candidates,
generate_optimized_system_instruction generate_optimized_system_instruction,
refine_based_on_selection
) )
def parse_candidates(raw: str) -> list: def parse_candidates(raw: str) -> list:
@@ -147,3 +148,46 @@ def evaluate_system_instruction(
correct += 1 correct += 1
return correct / total return correct / total
def refine_instruction_candidates(
task_description: str,
selected_instruction: str,
rejected_instructions: List[str],
top_k: int = config.TOP_K,
pool_size: int = None,
model_name: str = None
) -> List[str]:
"""
Simple refinement: Generate new candidates based on user's selection.
This is NOT OPRO - just straightforward iterative refinement.
User picks a favorite, we generate variations of it while avoiding rejected ones.
Args:
task_description: Description of the task
selected_instruction: The instruction the user selected
rejected_instructions: The instructions the user didn't select
top_k: Number of diverse candidates to return
pool_size: Number of candidates to generate before clustering
model_name: Optional model name to use
Returns:
List of refined instruction candidates
"""
pool_size = pool_size or config.GENERATION_POOL_SIZE
# Generate the refinement prompt
meta_prompt = refine_based_on_selection(
task_description,
selected_instruction,
rejected_instructions,
pool_size
)
# Use LLM to generate refined candidates
raw = call_qwen(meta_prompt, temperature=0.9, max_tokens=1024, model_name=model_name)
# Parse and cluster
all_candidates = parse_candidates(raw)
return cluster_and_select(all_candidates, top_k=top_k)

141
build-8b.sh Executable file

@@ -0,0 +1,141 @@
#!/bin/bash
# Quick build script for qwen3:8b (lower memory usage)
# Use this if your server has less than 12GB RAM
set -e
echo "=========================================="
echo "Building with qwen3:8b (Lower Memory)"
echo "=========================================="
echo ""
echo "Memory requirements:"
echo " - qwen3:8b: ~5GB RAM"
echo " - qwen3:14b: ~10GB RAM"
echo ""
# Check if 8b model is available
if ! ollama list | grep -q "qwen3:8b"; then
echo "ERROR: qwen3:8b model not found!"
echo ""
echo "Please download it first:"
echo " ollama pull qwen3:8b"
echo ""
exit 1
fi
# Clean up
echo "Cleaning up previous builds..."
rm -rf ollama-models/
docker rmi system-prompt-optimizer:allinone 2>/dev/null || true
# Export 8b model
echo ""
echo "Exporting qwen3:8b model..."
mkdir -p ollama-models/models/{manifests/registry.ollama.ai/library,blobs}
# Function to get blob hashes from manifest
get_blobs_from_manifest() {
local manifest_file=$1
grep -o 'sha256:[a-f0-9]\{64\}' "$manifest_file" | sed 's/sha256://' | sort -u
}
# Function to copy model files
copy_model() {
local model_name=$1
local model_tag=$2
local manifest_dir="$HOME/.ollama/models/manifests/registry.ollama.ai/library/$model_name"
if [ ! -d "$manifest_dir" ]; then
echo "ERROR: Model manifest not found: $manifest_dir"
return 1
fi
echo " Copying $model_name:$model_tag manifest..."
mkdir -p "ollama-models/models/manifests/registry.ollama.ai/library/$model_name"
if [ -f "$manifest_dir/$model_tag" ]; then
cp "$manifest_dir/$model_tag" "ollama-models/models/manifests/registry.ollama.ai/library/$model_name/"
echo " Finding blob files for $model_name:$model_tag..."
local blob_hashes=$(get_blobs_from_manifest "$manifest_dir/$model_tag")
local blob_count=0
for blob_hash in $blob_hashes; do
local blob_file="$HOME/.ollama/models/blobs/sha256-$blob_hash"
if [ -f "$blob_file" ]; then
cp "$blob_file" "ollama-models/models/blobs/" 2>/dev/null
blob_count=$((blob_count + 1))
fi
done
echo "$model_name:$model_tag copied ($blob_count blobs)"
else
echo "ERROR: Manifest file not found: $manifest_dir/$model_tag"
return 1
fi
}
# Copy models
copy_model "qwen3" "8b" || exit 1
copy_model "qwen3-embedding" "4b" || exit 1
echo ""
echo "✓ Models exported successfully"
echo ""
# Update config.py to use 8b
echo "Updating config.py to use qwen3:8b..."
sed -i.bak 's/DEFAULT_CHAT_MODEL = "qwen3:14b"/DEFAULT_CHAT_MODEL = "qwen3:8b"/' config.py
# Update docker-entrypoint.sh to check for 8b
echo "Updating docker-entrypoint.sh to check for qwen3:8b..."
sed -i.bak 's/qwen3:14b/qwen3:8b/g' docker-entrypoint.sh
# Build image
echo ""
echo "Building Docker image..."
docker build --platform linux/amd64 \
-f Dockerfile.allinone \
-t system-prompt-optimizer:allinone .
if [ $? -ne 0 ]; then
echo ""
echo "Build failed!"
# Restore backups
mv config.py.bak config.py
mv docker-entrypoint.sh.bak docker-entrypoint.sh
exit 1
fi
# Export image
echo ""
echo "Exporting Docker image..."
docker save -o system-prompt-optimizer-allinone.tar system-prompt-optimizer:allinone
# Restore original files
mv config.py.bak config.py
mv docker-entrypoint.sh.bak docker-entrypoint.sh
echo ""
echo "=========================================="
echo "Build Complete!"
echo "=========================================="
ls -lh system-prompt-optimizer-allinone.tar
echo ""
echo "This image uses qwen3:8b (~5GB RAM required)"
echo ""
echo "Transfer to server and run:"
echo ""
echo " CPU mode:"
echo " docker load -i system-prompt-optimizer-allinone.tar"
echo " docker run -d -p 8010:8010 --restart unless-stopped system-prompt-optimizer:allinone"
echo ""
echo " GPU mode (recommended):"
echo " docker load -i system-prompt-optimizer-allinone.tar"
echo " docker run -d --gpus all -p 8010:8010 --restart unless-stopped system-prompt-optimizer:allinone"
echo ""
echo "Note: GPU mode provides 5-10x faster inference."
echo " See GPU_DEPLOYMENT.md for GPU setup instructions."
echo ""

133
build-allinone.sh Executable file

@@ -0,0 +1,133 @@
#!/bin/bash
# Build all-in-one Docker image with Ollama and models
# This creates a complete offline-deployable image
set -e
IMAGE_NAME="system-prompt-optimizer"
IMAGE_TAG="allinone"
EXPORT_FILE="${IMAGE_NAME}-${IMAGE_TAG}.tar"
echo "=========================================="
echo "Building All-in-One Docker Image"
echo "=========================================="
echo ""
echo "This will create a Docker image containing:"
echo " - Python application"
echo " - Ollama service (v0.13.1)"
echo " - qwen3:14b model"
echo " - qwen3-embedding:4b model"
echo ""
echo "Target platform: linux/amd64 (x86_64)"
echo ""
echo "WARNING: The final image will be 10-20GB in size!"
echo ""
echo "NOTE: If you're building on Apple Silicon (M1/M2/M3),"
echo " Docker will use emulation which may be slower."
echo " The image will still work on x86_64 servers."
echo ""
# Check if ollama-models directory exists
if [ ! -d "ollama-models" ]; then
echo "ERROR: ollama-models directory not found!"
echo ""
echo "Please run ./export-ollama-models.sh first to export the models."
exit 1
fi
echo "✓ Found ollama-models directory"
echo ""
# Check if Ollama binary exists
if [ ! -f "ollama-linux-amd64.tgz" ]; then
echo "ERROR: ollama-linux-amd64.tgz not found!"
echo ""
echo "Please download it first:"
echo " curl -L -o ollama-linux-amd64.tgz https://github.com/ollama/ollama/releases/download/v0.13.1/ollama-linux-amd64.tgz"
echo ""
exit 1
fi
echo "✓ Found ollama-linux-amd64.tgz"
echo ""
# Check disk space
AVAILABLE_SPACE=$(df -h . | awk 'NR==2 {print $4}')
echo "Available disk space: $AVAILABLE_SPACE"
echo "Required: ~20GB for build process"
echo ""
read -p "Continue with build? (y/n) " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Build cancelled."
exit 1
fi
echo ""
echo "=========================================="
echo "Building Docker image..."
echo "=========================================="
echo "Platform: linux/amd64 (x86_64)"
echo "This may take 20-40 minutes depending on your machine..."
echo ""
# Build for amd64 platform explicitly
docker build --platform linux/amd64 -f Dockerfile.allinone -t ${IMAGE_NAME}:${IMAGE_TAG} .
echo ""
echo "=========================================="
echo "Build complete!"
echo "=========================================="
docker images | grep ${IMAGE_NAME}
echo ""
echo "=========================================="
echo "Exporting image to ${EXPORT_FILE}..."
echo "=========================================="
echo "This will take several minutes..."
docker save -o ${EXPORT_FILE} ${IMAGE_NAME}:${IMAGE_TAG}
echo ""
echo "=========================================="
echo "Export complete!"
echo "=========================================="
ls -lh ${EXPORT_FILE}
echo ""
echo "=========================================="
echo "Deployment Instructions"
echo "=========================================="
echo ""
echo "1. Transfer ${EXPORT_FILE} to target server:"
echo " scp ${EXPORT_FILE} user@server:/path/"
echo ""
echo "2. On target server, load the image:"
echo " docker load -i ${EXPORT_FILE}"
echo ""
echo "3. Run the container:"
echo ""
echo " CPU mode:"
echo " docker run -d \\"
echo " --name system-prompt-optimizer \\"
echo " -p 8010:8010 \\"
echo " --restart unless-stopped \\"
echo " ${IMAGE_NAME}:${IMAGE_TAG}"
echo ""
echo " GPU mode (recommended if NVIDIA GPU available):"
echo " docker run -d \\"
echo " --name system-prompt-optimizer \\"
echo " --gpus all \\"
echo " -p 8010:8010 \\"
echo " --restart unless-stopped \\"
echo " ${IMAGE_NAME}:${IMAGE_TAG}"
echo ""
echo " Note: Port 11434 (Ollama) is optional and only needed for debugging."
echo " GPU mode provides 5-10x faster inference. See GPU_DEPLOYMENT.md for details."
echo ""
echo "4. Access the application:"
echo " http://<server-ip>:8010/ui/opro.html"
echo ""
echo "See DEPLOYMENT.md for more details."

37
build-and-export.sh Executable file

@@ -0,0 +1,37 @@
#!/bin/bash
# Build and export Docker image for offline deployment
# Usage: ./build-and-export.sh
set -e
IMAGE_NAME="system-prompt-optimizer"
IMAGE_TAG="latest"
EXPORT_FILE="${IMAGE_NAME}.tar"
echo "=========================================="
echo "Building Docker image..."
echo "=========================================="
docker build -t ${IMAGE_NAME}:${IMAGE_TAG} .
echo ""
echo "=========================================="
echo "Exporting Docker image to ${EXPORT_FILE}..."
echo "=========================================="
docker save -o ${EXPORT_FILE} ${IMAGE_NAME}:${IMAGE_TAG}
echo ""
echo "=========================================="
echo "Export complete!"
echo "=========================================="
ls -lh ${EXPORT_FILE}
echo ""
echo "Next steps:"
echo "1. Transfer ${EXPORT_FILE} to target server"
echo "2. Transfer docker-compose.yml to target server (optional)"
echo "3. On target server, run: docker load -i ${EXPORT_FILE}"
echo "4. On target server, run: docker-compose up -d"
echo ""
echo "See DEPLOYMENT.md for detailed instructions."

config.py

@@ -7,7 +7,7 @@ APP_CONTACT = {"name": "OPRO Team", "url": "http://127.0.0.1:8010/ui/"}
OLLAMA_HOST = "http://127.0.0.1:11434" OLLAMA_HOST = "http://127.0.0.1:11434"
OLLAMA_GENERATE_URL = f"{OLLAMA_HOST}/api/generate" OLLAMA_GENERATE_URL = f"{OLLAMA_HOST}/api/generate"
OLLAMA_TAGS_URL = f"{OLLAMA_HOST}/api/tags" OLLAMA_TAGS_URL = f"{OLLAMA_HOST}/api/tags"
DEFAULT_CHAT_MODEL = "qwen3:8b" DEFAULT_CHAT_MODEL = "qwen3:14b"
DEFAULT_EMBED_MODEL = "qwen3-embedding:4b" DEFAULT_EMBED_MODEL = "qwen3-embedding:4b"
# Xinference # Xinference

23
docker-compose.yml Normal file

@@ -0,0 +1,23 @@
version: '3.8'
services:
app:
build: .
container_name: system-prompt-optimizer
ports:
- "8010:8010"
environment:
- OLLAMA_HOST=http://host.docker.internal:11434
- PYTHONUNBUFFERED=1
volumes:
- ./outputs:/app/outputs
restart: unless-stopped
extra_hosts:
- "host.docker.internal:host-gateway"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8010/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 5s

103
docker-entrypoint.sh Normal file

@@ -0,0 +1,103 @@
#!/bin/bash
set -e
echo "=========================================="
echo "System Prompt Optimizer - Starting Up"
echo "=========================================="
echo ""
# Check if Ollama binary exists
if ! command -v ollama &> /dev/null; then
echo "ERROR: Ollama binary not found!"
echo "Expected location: /usr/bin/ollama or /usr/local/bin/ollama"
ls -la /usr/bin/ollama* 2>/dev/null || echo "No ollama in /usr/bin/"
ls -la /usr/local/bin/ollama* 2>/dev/null || echo "No ollama in /usr/local/bin/"
exit 1
fi
echo "✓ Ollama binary found: $(which ollama)"
echo ""
# Check if model files exist
echo "Checking model files..."
if [ ! -d "/root/.ollama/models" ]; then
echo "ERROR: /root/.ollama/models directory not found!"
exit 1
fi
MANIFEST_COUNT=$(find /root/.ollama/models/manifests -type f 2>/dev/null | wc -l)
BLOB_COUNT=$(find /root/.ollama/models/blobs -type f 2>/dev/null | wc -l)
echo "✓ Found $MANIFEST_COUNT manifest files"
echo "✓ Found $BLOB_COUNT blob files"
if [ "$BLOB_COUNT" -lt 10 ]; then
echo "WARNING: Very few blob files found. Models may not be complete."
fi
echo ""
echo "Starting Ollama service..."
ollama serve > /tmp/ollama.log 2>&1 &
OLLAMA_PID=$!
# Wait for Ollama to be ready
echo "Waiting for Ollama to start..."
OLLAMA_READY=false
for i in {1..60}; do
if curl -s http://localhost:11434/api/tags > /dev/null 2>&1; then
echo "Ollama is ready!"
OLLAMA_READY=true
break
fi
echo "Waiting for Ollama... ($i/60)"
sleep 3
done
if [ "$OLLAMA_READY" = false ]; then
echo ""
echo "ERROR: Ollama failed to start within 3 minutes!"
echo ""
echo "Ollama logs:"
cat /tmp/ollama.log
echo ""
echo "Check full logs with: docker logs system-prompt-optimizer"
exit 1
fi
# Check if models exist, if not, show warning
echo ""
echo "Checking for models..."
ollama list
echo ""
if ! ollama list | grep -q "qwen3:14b"; then
echo "ERROR: qwen3:14b model not found!"
echo "The application requires qwen3:14b to function properly."
echo ""
echo "Available models:"
ollama list
echo ""
exit 1
fi
if ! ollama list | grep -q "qwen3-embedding"; then
echo "WARNING: qwen3-embedding model not found!"
echo "The application requires qwen3-embedding:4b for embeddings."
echo "Continuing anyway, but embeddings may not work."
fi
echo ""
echo "✓ All required models are available"
echo ""
echo "=========================================="
echo "Starting FastAPI application..."
echo "=========================================="
echo "Application will be available at:"
echo " - Web UI: http://localhost:8010/ui/opro.html"
echo " - API Docs: http://localhost:8010/docs"
echo " - Ollama: http://localhost:11434"
echo ""
exec uvicorn _qwen_xinference_demo.api:app --host 0.0.0.0 --port 8010

168
export-ollama-models.sh Executable file

@@ -0,0 +1,168 @@
#!/bin/bash
# Export Ollama models for offline deployment
# This script copies Ollama models from your local machine
# so they can be bundled into the Docker image
#
# Required models:
# - qwen3:14b (main chat model)
# - qwen3-embedding:4b (embedding model)
set -e
MODELS_DIR="ollama-models"
OLLAMA_MODELS_PATH="$HOME/.ollama"
echo "=========================================="
echo "Exporting Ollama models for offline deployment"
echo "=========================================="
# Check if Ollama is installed
if ! command -v ollama &> /dev/null; then
echo "ERROR: Ollama is not installed or not in PATH"
exit 1
fi
# Check if required models are available
echo ""
echo "Checking for required models..."
MISSING_MODELS=0
if ! ollama list | grep -q "qwen3:14b"; then
echo "ERROR: qwen3:14b model not found!"
echo "Please run: ollama pull qwen3:14b"
MISSING_MODELS=1
fi
if ! ollama list | grep -q "qwen3-embedding:4b"; then
echo "ERROR: qwen3-embedding:4b model not found!"
echo "Please run: ollama pull qwen3-embedding:4b"
MISSING_MODELS=1
fi
if [ $MISSING_MODELS -eq 1 ]; then
echo ""
echo "Please download the required models first:"
echo " ollama pull qwen3:14b"
echo " ollama pull qwen3-embedding:4b"
exit 1
fi
echo "✓ All required models found"
# Check if Ollama directory exists
if [ ! -d "$OLLAMA_MODELS_PATH" ]; then
echo "ERROR: Ollama directory not found at $OLLAMA_MODELS_PATH"
exit 1
fi
# Create export directory structure
echo ""
echo "Creating export directory: $MODELS_DIR"
rm -rf "$MODELS_DIR"
mkdir -p "$MODELS_DIR/models/manifests/registry.ollama.ai/library"
mkdir -p "$MODELS_DIR/models/blobs"
echo ""
echo "Copying only required models (qwen3:14b and qwen3-embedding:4b)..."
echo "This may take several minutes (models are large)..."
# Function to get blob hashes from manifest
get_blobs_from_manifest() {
local manifest_file=$1
# Extract all sha256 hashes from the manifest JSON
grep -oE 'sha256:[a-f0-9]{64}' "$manifest_file" 2>/dev/null | sed 's/sha256://' | sort -u
}
# Function to copy model files
copy_model() {
local model_name=$1
local model_tag=$2
local manifest_dir="$OLLAMA_MODELS_PATH/models/manifests/registry.ollama.ai/library/$model_name"
if [ ! -d "$manifest_dir" ]; then
echo "ERROR: Model manifest not found: $manifest_dir"
return 1
fi
echo " Copying $model_name:$model_tag manifest..."
mkdir -p "$MODELS_DIR/models/manifests/registry.ollama.ai/library/$model_name"
# Copy the specific tag manifest
if [ -f "$manifest_dir/$model_tag" ]; then
cp "$manifest_dir/$model_tag" "$MODELS_DIR/models/manifests/registry.ollama.ai/library/$model_name/"
# Get all blob hashes referenced in this manifest
echo " Finding blob files for $model_name:$model_tag..."
local blob_hashes=$(get_blobs_from_manifest "$manifest_dir/$model_tag")
local blob_count=0
for blob_hash in $blob_hashes; do
local blob_file="$OLLAMA_MODELS_PATH/models/blobs/sha256-$blob_hash"
if [ -f "$blob_file" ]; then
cp "$blob_file" "$MODELS_DIR/models/blobs/" 2>/dev/null
blob_count=$((blob_count + 1))
fi
done
echo "$model_name:$model_tag copied ($blob_count blobs)"
else
echo "ERROR: Manifest file not found: $manifest_dir/$model_tag"
return 1
fi
}
# Copy required models with specific tags
copy_model "qwen3" "14b" || exit 1
copy_model "qwen3-embedding" "4b" || exit 1
echo ""
echo "=========================================="
echo "Models exported successfully!"
echo "=========================================="
echo ""
echo "Total size:"
du -sh "$MODELS_DIR"
echo ""
echo "Models included:"
if [ -d "$MODELS_DIR/models/manifests/registry.ollama.ai/library" ]; then
ls -lh "$MODELS_DIR/models/manifests/registry.ollama.ai/library/"
fi
echo ""
echo "Blob files:"
if [ -d "$MODELS_DIR/models/blobs" ]; then
echo " Total blobs: $(ls -1 "$MODELS_DIR/models/blobs" | wc -l)"
du -sh "$MODELS_DIR/models/blobs"
fi
echo ""
echo "=========================================="
echo "Summary"
echo "=========================================="
echo "✓ Only qwen3:14b and qwen3-embedding:4b were exported"
echo ""
echo "Models in your Ollama that were NOT copied:"
ollama list | tail -n +2 | grep -v "qwen3:14b" | grep -v "qwen3-embedding:4b" | grep . || echo "  (none)"
echo ""
echo "This keeps the Docker image size minimal!"
echo ""
echo "=========================================="
echo "Next steps:"
echo "=========================================="
echo "1. Build the all-in-one Docker image:"
echo " ./build-allinone.sh"
echo ""
echo "2. Or manually:"
echo " docker build -f Dockerfile.allinone -t system-prompt-optimizer:allinone ."
echo ""
echo "3. Export the image:"
echo " docker save -o system-prompt-optimizer-allinone.tar system-prompt-optimizer:allinone"
echo ""
echo "4. Transfer to target server:"
echo " scp system-prompt-optimizer-allinone.tar user@server:/path/"
echo ""
echo "Note: The final Docker image will be very large (10-20GB) due to the models."

frontend/opro.html

@@ -6,7 +6,7 @@
   <meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
   <meta http-equiv="Pragma" content="no-cache">
   <meta http-equiv="Expires" content="0">
-  <title>OPRO - System Instruction Optimizer</title>
+  <title>系统提示词优化</title>
   <script crossorigin src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
   <script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
   <script src="https://cdn.tailwindcss.com"></script>
@@ -50,18 +50,22 @@
 // Main App Component
 function App() {
   const [sidebarOpen, setSidebarOpen] = useState(false);
-  const [runs, setRuns] = useState([]);
+  const [sessions, setSessions] = useState([]);
+  const [currentSessionId, setCurrentSessionId] = useState(null);
+  const [currentSessionRuns, setCurrentSessionRuns] = useState([]);
   const [currentRunId, setCurrentRunId] = useState(null);
   const [messages, setMessages] = useState([]);
+  const [sessionMessages, setSessionMessages] = useState({}); // Store messages per session
+  const [sessionLastRunId, setSessionLastRunId] = useState({}); // Store last run ID per session
   const [inputValue, setInputValue] = useState('');
   const [loading, setLoading] = useState(false);
   const [models, setModels] = useState([]);
   const [selectedModel, setSelectedModel] = useState('');
   const chatEndRef = useRef(null);

-  // Load runs and models on mount
+  // Load sessions and models on mount
   useEffect(() => {
-    loadRuns();
+    loadSessions();
     loadModels();
   }, []);
@@ -85,29 +89,78 @@
     chatEndRef.current?.scrollIntoView({ behavior: 'smooth' });
   }, [messages]);

-  async function loadRuns() {
+  async function loadSessions() {
     try {
-      const res = await fetch(`${API_BASE}/opro/runs`);
+      const res = await fetch(`${API_BASE}/opro/sessions`);
       const data = await res.json();
       if (data.success) {
-        setRuns(data.data.runs || []);
+        setSessions(data.data.sessions || []);
       }
     } catch (err) {
-      console.error('Failed to load runs:', err);
+      console.error('Failed to load sessions:', err);
+    }
+  }
+
+  async function loadSessionRuns(sessionId) {
+    try {
+      const res = await fetch(`${API_BASE}/opro/session/${sessionId}`);
+      const data = await res.json();
+      if (data.success) {
+        setCurrentSessionRuns(data.data.runs || []);
+      }
+    } catch (err) {
+      console.error('Failed to load session runs:', err);
+    }
+  }
+
+  async function createNewSession() {
+    try {
+      const res = await fetch(`${API_BASE}/opro/session/create`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' }
+      });
+      const data = await res.json();
+      if (!data.success) {
+        throw new Error(data.error || 'Failed to create session');
+      }
+      const sessionId = data.data.session_id;
+      setCurrentSessionId(sessionId);
+      setCurrentSessionRuns([]);
+      setCurrentRunId(null);
+      setMessages([]);
+      setSessionMessages(prev => ({ ...prev, [sessionId]: [] })); // Initialize empty messages for new session
+      // Reload sessions list
+      await loadSessions();
+      return sessionId;
+    } catch (err) {
+      alert('创建会话失败: ' + err.message);
+      return null;
     }
   }

   async function createNewRun(taskDescription) {
     setLoading(true);
     try {
-      // Create run
+      // Ensure we have a session
+      let sessionId = currentSessionId;
+      if (!sessionId) {
+        sessionId = await createNewSession();
+        if (!sessionId) return;
+      }
+      // Create run within session
       const res = await fetch(`${API_BASE}/opro/create`, {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
         body: JSON.stringify({
           task_description: taskDescription,
           test_cases: [],
-          model_name: selectedModel || undefined
+          model_name: selectedModel || undefined,
+          session_id: sessionId
         })
       });
       const data = await res.json();
@@ -119,16 +172,33 @@
       const runId = data.data.run_id;
       setCurrentRunId(runId);

-      // Add user message
-      setMessages([{ role: 'user', content: taskDescription }]);
+      // Save this as the last run for this session
+      setSessionLastRunId(prev => ({
+        ...prev,
+        [sessionId]: runId
+      }));
+      // Add user message to existing messages (keep chat history)
+      const newUserMessage = { role: 'user', content: taskDescription };
+      setMessages(prev => {
+        const updated = [...prev, newUserMessage];
+        // Save to session messages
+        setSessionMessages(prevSessions => ({
+          ...prevSessions,
+          [sessionId]: updated
+        }));
+        return updated;
+      });

       // Generate and evaluate candidates
       await generateCandidates(runId);

-      // Reload runs list
-      await loadRuns();
+      // Reload sessions and session runs
+      await loadSessions();
+      await loadSessionRuns(sessionId);
     } catch (err) {
       alert('创建任务失败: ' + err.message);
+      console.error('Error creating run:', err);
     } finally {
       setLoading(false);
     }
@@ -137,6 +207,7 @@
   async function generateCandidates(runId) {
     setLoading(true);
     try {
+      console.log('Generating candidates for run:', runId);
       const res = await fetch(`${API_BASE}/opro/generate_and_evaluate`, {
         method: 'POST',
         headers: { 'Content-Type': 'application/json' },
@@ -148,19 +219,33 @@
       });
       const data = await res.json();
+      console.log('Generate candidates response:', data);
       if (!data.success) {
         throw new Error(data.error || 'Failed to generate candidates');
       }

       // Add assistant message with candidates
-      setMessages(prev => [...prev, {
+      const newAssistantMessage = {
         role: 'assistant',
         type: 'candidates',
         candidates: data.data.candidates,
         iteration: data.data.iteration
-      }]);
+      };
+      setMessages(prev => {
+        const updated = [...prev, newAssistantMessage];
+        // Save to session messages
+        if (currentSessionId) {
+          setSessionMessages(prevSessions => ({
+            ...prevSessions,
+            [currentSessionId]: updated
+          }));
+        }
+        return updated;
+      });
     } catch (err) {
       alert('生成候选指令失败: ' + err.message);
+      console.error('Error generating candidates:', err);
     } finally {
       setLoading(false);
     }
@@ -185,12 +270,23 @@
       }

       // Add execution result
-      setMessages(prev => [...prev, {
+      const newExecutionMessage = {
         role: 'assistant',
         type: 'execution',
         instruction: instruction,
         response: data.data.response
-      }]);
+      };
+      setMessages(prev => {
+        const updated = [...prev, newExecutionMessage];
+        // Save to session messages
+        if (currentSessionId) {
+          setSessionMessages(prevSessions => ({
+            ...prevSessions,
+            [currentSessionId]: updated
+          }));
+        }
+        return updated;
+      });
     } catch (err) {
       alert('执行失败: ' + err.message);
     } finally {
@@ -204,25 +300,64 @@
     setInputValue('');

-    if (!currentRunId) {
-      // Create new run with task description
+    // Always create a new run with the message as task description
     createNewRun(msg);
-    } else {
-      // Continue optimization or execute
-      // For now, just show message
-      setMessages(prev => [...prev, { role: 'user', content: msg }]);
-    }
   }

-  function handleContinueOptimize() {
-    if (!currentRunId || loading) return;
-    generateCandidates(currentRunId);
+  async function handleContinueOptimize(selectedInstruction, allCandidates) {
+    if (!currentRunId || loading || !selectedInstruction) return;
+    setLoading(true);
+    try {
+      // Get rejected instructions (all except the selected one)
+      const rejectedInstructions = allCandidates
+        .map(c => c.instruction)
+        .filter(inst => inst !== selectedInstruction);
+      // Call the refinement endpoint
+      const res = await fetch(`${API_BASE}/opro/refine`, {
+        method: 'POST',
+        headers: { 'Content-Type': 'application/json' },
+        body: JSON.stringify({
+          run_id: currentRunId,
+          selected_instruction: selectedInstruction,
+          rejected_instructions: rejectedInstructions
+        })
+      });
+      const data = await res.json();
+      if (!data.success) {
+        throw new Error(data.error || 'Failed to refine instruction');
+      }
+      // Add refined candidates to messages
+      const newMessage = {
+        role: 'assistant',
+        type: 'candidates',
+        iteration: data.data.iteration,
+        candidates: data.data.candidates
+      };
+      setMessages(prev => {
+        const updated = [...prev, newMessage];
+        // Save to session messages
+        setSessionMessages(prevSessions => ({
+          ...prevSessions,
+          [currentSessionId]: updated
+        }));
+        return updated;
+      });
+    } catch (err) {
+      alert('优化失败: ' + err.message);
+      console.error('Error refining instruction:', err);
+    } finally {
+      setLoading(false);
+    }
   }

   function handleExecute(instruction) {
     if (loading) return;
-    const userInput = prompt('请输入要处理的内容(可选):');
-    executeInstruction(instruction, userInput);
+    executeInstruction(instruction, '');
   }

   function handleCopyInstruction(instruction) {
@@ -235,11 +370,33 @@
   }

   function handleNewTask() {
+    // Create new run within current session
     setCurrentRunId(null);
     setMessages([]);
     setInputValue('');
   }

+  async function handleNewSession() {
+    // Create completely new session
+    const sessionId = await createNewSession();
+    if (sessionId) {
+      setCurrentSessionId(sessionId);
+      setCurrentSessionRuns([]);
+      setCurrentRunId(null);
+      setMessages([]);
+      setInputValue('');
+    }
+  }
+
+  async function handleSelectSession(sessionId) {
+    setCurrentSessionId(sessionId);
+    // Restore the last run ID for this session
+    setCurrentRunId(sessionLastRunId[sessionId] || null);
+    // Load messages from session storage
+    setMessages(sessionMessages[sessionId] || []);
+    await loadSessionRuns(sessionId);
+  }
+
   async function loadRun(runId) {
     setLoading(true);
     try {
@@ -301,22 +458,22 @@
         // Content area
         React.createElement('div', { className: 'flex-1 overflow-y-auto scrollbar-hide p-2 flex flex-col' },
           sidebarOpen ? React.createElement(React.Fragment, null,
-            // New task button (expanded)
+            // New session button (expanded)
             React.createElement('button', {
-              onClick: handleNewTask,
+              onClick: handleNewSession,
               className: 'mb-3 px-4 py-2.5 bg-white border border-gray-300 hover:bg-gray-50 rounded-lg transition-colors flex items-center justify-center gap-2 text-gray-700 font-medium'
             },
               React.createElement('span', { className: 'text-lg' }, '+'),
              React.createElement('span', null, '新建会话')
            ),
            // Sessions list
-            runs.length > 0 && React.createElement('div', { className: 'text-xs text-gray-500 mb-2 px-2' }, '会话列表'),
-            runs.map(run =>
+            sessions.length > 0 && React.createElement('div', { className: 'text-xs text-gray-500 mb-2 px-2' }, '会话列表'),
+            sessions.map(session =>
              React.createElement('div', {
-                key: run.run_id,
-                onClick: () => loadRun(run.run_id),
+                key: session.session_id,
+                onClick: () => handleSelectSession(session.session_id),
                className: `p-3 mb-1 rounded-lg cursor-pointer transition-colors flex items-center gap-2 ${
-                  currentRunId === run.run_id ? 'bg-gray-100' : 'hover:bg-gray-50'
+                  currentSessionId === session.session_id ? 'bg-gray-100' : 'hover:bg-gray-50'
                }`
              },
                React.createElement('svg', {
@@ -331,12 +488,12 @@
                  React.createElement('path', { d: 'M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z' })
                ),
                React.createElement('div', { className: 'text-sm text-gray-800 truncate flex-1' },
-                  run.task_description
+                  session.session_name
                )
              )
            )
          ) : React.createElement('button', {
-            onClick: handleNewTask,
+            onClick: handleNewSession,
            className: 'p-2 text-gray-600 hover:bg-gray-100 rounded-lg transition-colors flex items-center justify-center',
            title: '新建会话'
          },
@@ -352,7 +509,10 @@
       // Header
       React.createElement('div', { className: 'px-4 py-3 border-b border-gray-200 bg-white flex items-center gap-3' },
         React.createElement('h1', { className: 'text-lg font-normal text-gray-800' },
-          'OPRO'
+          '系统提示词优化'
+        ),
+        currentSessionId && React.createElement('div', { className: 'text-sm text-gray-500' },
+          sessions.find(s => s.session_id === currentSessionId)?.session_name || '当前会话'
         )
       ),
@@ -391,7 +551,7 @@
           ),
           React.createElement('div', { className: 'flex gap-2' },
             React.createElement('button', {
-              onClick: handleContinueOptimize,
+              onClick: () => handleContinueOptimize(cand.instruction, msg.candidates),
               disabled: loading,
               className: 'px-4 py-2 bg-white border border-gray-300 text-gray-700 rounded-lg hover:bg-gray-50 disabled:bg-gray-100 disabled:text-gray-400 disabled:cursor-not-allowed transition-colors text-sm font-medium'
             }, '继续优化'),
@@ -404,12 +564,7 @@
               React.createElement('path', { d: 'M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1' })
             ),
             '复制'
-          ),
-          React.createElement('button', {
-            onClick: () => handleExecute(cand.instruction),
-            disabled: loading,
-            className: 'px-4 py-2 bg-gray-900 text-white rounded-lg hover:bg-gray-800 disabled:bg-gray-300 disabled:cursor-not-allowed transition-colors text-sm font-medium'
-          }, '执行此指令')
+          )
         )
       )
     )
@@ -452,7 +607,7 @@
                 handleSendMessage();
               }
             },
-            placeholder: currentRunId ? '输入消息...' : '在此输入提示词',
+            placeholder: '输入任务描述,创建新的优化任务...',
             disabled: loading,
             rows: 3,
             className: 'w-full px-5 pt-4 pb-2 bg-transparent focus:outline-none disabled:bg-transparent text-gray-800 placeholder-gray-500 resize-none'
@@ -489,8 +644,10 @@
           )
         )
       ),
-      !currentRunId && React.createElement('div', { className: 'text-xs text-gray-500 mt-3 px-4' },
-        '输入任务描述后,AI 将为你生成优化的系统指令'
+      React.createElement('div', { className: 'text-xs text-gray-500 mt-3 px-4' },
+        currentSessionId
+          ? '输入任务描述,AI 将为你生成优化的系统指令'
+          : '点击左侧"新建会话"开始,或直接输入任务描述自动创建会话'
       )
     )
   )
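
The reworked handleContinueOptimize() above is the whole client side of the new refinement flow: it posts the kept instruction plus every discarded one to /opro/refine and appends the returned candidates as a new iteration. A rough Python equivalent of that round-trip (illustrative only; BASE_URL and the timeout are assumptions, the field names mirror the fetch() body above):

import requests

BASE_URL = "http://127.0.0.1:8010"

def refine(run_id, selected_instruction, all_candidates):
    """Send the kept instruction and the rejected ones to /opro/refine."""
    payload = {
        "run_id": run_id,
        "selected_instruction": selected_instruction,
        "rejected_instructions": [c for c in all_candidates if c != selected_instruction],
    }
    resp = requests.post(f"{BASE_URL}/opro/refine", json=payload, timeout=300)
    data = resp.json()
    if not data.get("success"):
        raise RuntimeError(data.get("error", "Failed to refine instruction"))
    # Same shape the UI renders: a fresh iteration of candidate instructions
    return data["data"]["iteration"], data["data"]["candidates"]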

7
requirements.txt Normal file

@@ -0,0 +1,7 @@
fastapi==0.109.0
uvicorn==0.27.0
requests==2.31.0
numpy==1.26.3
scikit-learn==1.4.0
pydantic==2.5.3

131
test_session_api.py Normal file

@@ -0,0 +1,131 @@
#!/usr/bin/env python3
"""
Test script for OPRO session-based API
"""
import requests
import json

BASE_URL = "http://127.0.0.1:8010"


def print_section(title):
    print(f"\n{'='*60}")
    print(f" {title}")
    print(f"{'='*60}\n")


def test_session_workflow():
    """Test the complete session-based workflow."""

    print_section("1. Create Session")

    # Create a new session
    response = requests.post(f"{BASE_URL}/opro/session/create")
    result = response.json()
    if not result.get("success"):
        print(f"❌ Failed to create session: {result}")
        return
    session_id = result["data"]["session_id"]
    print(f"✅ Session created: {session_id}")
    print(f" Session name: {result['data']['session_name']}")

    print_section("2. Create First Run in Session")

    # Create first run
    create_req = {
        "task_description": "将中文翻译成英文",
        "test_cases": [
            {"input": "你好", "expected_output": "Hello"},
            {"input": "谢谢", "expected_output": "Thank you"}
        ],
        "session_id": session_id
    }
    response = requests.post(f"{BASE_URL}/opro/create", json=create_req)
    result = response.json()
    if not result.get("success"):
        print(f"❌ Failed to create run: {result}")
        return
    run1_id = result["data"]["run_id"]
    print(f"✅ Run 1 created: {run1_id}")
    print(f" Task: {result['data']['task_description']}")

    print_section("3. Create Second Run in Same Session")

    # Create second run in same session
    create_req2 = {
        "task_description": "将英文翻译成中文",
        "test_cases": [
            {"input": "Hello", "expected_output": "你好"},
            {"input": "Thank you", "expected_output": "谢谢"}
        ],
        "session_id": session_id
    }
    response = requests.post(f"{BASE_URL}/opro/create", json=create_req2)
    result = response.json()
    if not result.get("success"):
        print(f"❌ Failed to create run 2: {result}")
        return
    run2_id = result["data"]["run_id"]
    print(f"✅ Run 2 created: {run2_id}")
    print(f" Task: {result['data']['task_description']}")

    print_section("4. Get Session Details")

    response = requests.get(f"{BASE_URL}/opro/session/{session_id}")
    result = response.json()
    if not result.get("success"):
        print(f"❌ Failed to get session: {result}")
        return
    print(f"✅ Session details:")
    print(f" Session ID: {result['data']['session_id']}")
    print(f" Session name: {result['data']['session_name']}")
    print(f" Number of runs: {result['data']['num_runs']}")
    print(f" Runs:")
    for run in result['data']['runs']:
        print(f" - {run['run_id'][:8]}... : {run['task_description']}")

    print_section("5. List All Sessions")

    response = requests.get(f"{BASE_URL}/opro/sessions")
    result = response.json()
    if not result.get("success"):
        print(f"❌ Failed to list sessions: {result}")
        return
    print(f"✅ Total sessions: {len(result['data']['sessions'])}")
    for session in result['data']['sessions']:
        print(f" - {session['session_name']}: {session['num_runs']} runs")

    print_section("✅ All Tests Passed!")


if __name__ == "__main__":
    try:
        # Check if server is running
        response = requests.get(f"{BASE_URL}/health")
        if response.status_code != 200:
            print("❌ Server is not running. Please start it with:")
            print(" uvicorn _qwen_xinference_demo.api:app --host 127.0.0.1 --port 8010")
            exit(1)

        test_session_workflow()
    except requests.exceptions.ConnectionError:
        print("❌ Cannot connect to server. Please start it with:")
        print(" uvicorn _qwen_xinference_demo.api:app --host 127.0.0.1 --port 8010")
        exit(1)
    except Exception as e:
        print(f"❌ Error: {e}")
        import traceback
        traceback.print_exc()
        exit(1)
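
The test above only covers session creation, run creation, and the listing endpoints. A possible extension (not part of the committed file) would also exercise the refinement step added in this change; it assumes candidates are dicts with an "instruction" key, as the frontend's cand.instruction usage suggests, and that the run already has generated candidates:

def test_refine(run_id, candidates):
    """Keep the first candidate and ask /opro/refine for a new iteration."""
    selected = candidates[0]["instruction"]
    rejected = [c["instruction"] for c in candidates[1:]]
    response = requests.post(f"{BASE_URL}/opro/refine", json={
        "run_id": run_id,
        "selected_instruction": selected,
        "rejected_instructions": rejected,
    })
    result = response.json()
    assert result.get("success"), result
    print(f"✅ Refine returned {len(result['data']['candidates'])} new candidates "
          f"(iteration {result['data']['iteration']})")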