# LLM Orchestrator (llm-orchestrator)
Local LLM context window orchestrator with automatic handoff.
## Overview
Orchestrates multiple local LLMs by managing context windows and automatically handing off conversations between models. Routes queries to the optimal model based on complexity, context size, and latency requirements.
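As a sketch of the routing idea only (the model names, context sizes, and latencies below are illustrative assumptions, not the project's actual configuration or API):

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    context_window: int    # max tokens the model accepts
    avg_latency_ms: float  # typical response latency

# Hypothetical local model pool; numbers are made up for illustration.
MODELS = [
    Model("small-fast", context_window=4096, avg_latency_ms=120),
    Model("mid", context_window=16384, avg_latency_ms=450),
    Model("large", context_window=32768, avg_latency_ms=1800),
]

def route(prompt_tokens: int, complexity: float, latency_budget_ms: float) -> Model:
    """Pick the fastest model that fits the context and latency budget.

    complexity in [0, 1] raises the bar: harder queries skip smaller models.
    """
    candidates = [
        m for m in MODELS
        if prompt_tokens <= m.context_window and m.avg_latency_ms <= latency_budget_ms
    ]
    if not candidates:
        # Nothing meets the budget: hand off to the largest context window.
        return max(MODELS, key=lambda m: m.context_window)
    # Require a context window proportional to query complexity.
    threshold = complexity * max(m.context_window for m in MODELS)
    capable = [m for m in candidates if m.context_window >= threshold]
    pool = capable or candidates
    return min(pool, key=lambda m: m.avg_latency_ms)

# Short, simple prompt with a generous budget routes to the fastest model.
print(route(prompt_tokens=800, complexity=0.1, latency_budget_ms=2000).name)
```

A growing conversation eventually exceeds the current model's window or the query's complexity rises, and the same selection logic then hands the context off to a larger model.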
## Stack
- Python (primary)
- Shell
- Dockerfile
## Quick Start

```shell
# Clone
git clone ssh://git@192.168.183.110:2222/pook/llm-orchestrator.git
cd llm-orchestrator

# Install dependencies (the repo ships a requirements.txt)
pip install -r requirements.txt
```
## Status
Active
## License
Private — All rights reserved