feat: add Docker support for offline deployment with qwen3:14b
Major additions:
- All-in-One Docker image with Ollama + models bundled
- Separate deployment option for existing Ollama installations
- Changed default model from qwen3:8b to qwen3:14b
- Comprehensive deployment documentation

Files added:
- Dockerfile: Basic app-only image
- Dockerfile.allinone: Complete image with Ollama + models
- docker-compose.yml: Easy deployment configuration
- docker-entrypoint.sh: Startup script for the all-in-one image
- requirements.txt: Python dependencies
- .dockerignore: Exclude unnecessary files from the image

Scripts:
- export-ollama-models.sh: Export models from a local Ollama installation
- build-allinone.sh: Build the complete offline-deployable image
- build-and-export.sh: Build and export the basic image

Documentation:
- DEPLOYMENT.md: Comprehensive deployment guide
- QUICK_START.md: Quick reference for common tasks

Configuration:
- Updated config.py: DEFAULT_CHAT_MODEL = qwen3:14b
- Updated frontend/opro.html: Page title changed to 系统提示词优化 ("System Prompt Optimization")
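The all-in-one image runs Ollama and the app in a single container, so docker-entrypoint.sh has to bring up the Ollama server first and only start the app once it responds. The entrypoint itself is a shell script and its contents are not shown in this commit view; the snippet below is a minimal Python sketch of that readiness check, where the wait_for_ollama name, the timeout, and the polling interval are illustrative assumptions rather than values taken from the repository.

import time
import urllib.error
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # same value as in config.py

def wait_for_ollama(timeout_s: float = 120.0, interval_s: float = 2.0) -> bool:
    """Poll the local Ollama server until it answers or the timeout expires."""
    # Ollama's root endpoint returns a short 200 "Ollama is running" body
    # once the server is up, so a plain GET works as a readiness probe.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(OLLAMA_HOST, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not reachable yet; retry after a short pause
        time.sleep(interval_s)
    return False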
config.py
@@ -7,7 +7,7 @@ APP_CONTACT = {"name": "OPRO Team", "url": "http://127.0.0.1:8010/ui/"}
 OLLAMA_HOST = "http://127.0.0.1:11434"
 OLLAMA_GENERATE_URL = f"{OLLAMA_HOST}/api/generate"
 OLLAMA_TAGS_URL = f"{OLLAMA_HOST}/api/tags"
-DEFAULT_CHAT_MODEL = "qwen3:8b"
+DEFAULT_CHAT_MODEL = "qwen3:14b"
 DEFAULT_EMBED_MODEL = "qwen3-embedding:4b"

 # Xinference
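The hunk above only bumps the default chat model from qwen3:8b to qwen3:14b. Since the larger model is bundled into the all-in-one image, a quick sanity check is to ask the Ollama tags endpoint that config.py already defines whether the new default is present. The check below is an illustrative sketch rather than code from this repository; it relies on Ollama's GET /api/tags returning a JSON object with a "models" list whose entries carry the model name.

import json
import urllib.request

OLLAMA_TAGS_URL = "http://127.0.0.1:11434/api/tags"  # as defined in config.py
DEFAULT_CHAT_MODEL = "qwen3:14b"                     # new default from this commit

def default_model_available() -> bool:
    """Return True if the default chat model is listed by the local Ollama."""
    with urllib.request.urlopen(OLLAMA_TAGS_URL, timeout=10) as resp:
        tags = json.load(resp)
    # /api/tags responds like {"models": [{"name": "qwen3:14b", ...}, ...]}
    names = {m.get("name", "") for m in tags.get("models", [])}
    return DEFAULT_CHAT_MODEL in names

if __name__ == "__main__":
    print("default model available:", default_model_available())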