Setup Instructions
For Ollama:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama3.1:8b
# Start the server (listens on localhost:11434 by default)
ollama serve
# Your endpoint: http://localhost:11434/api/chat
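A quick way to confirm the server is up (the model name assumes the llama3.1:8b pull above; "stream": false returns a single JSON object instead of a token stream):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "messages": [{"role": "user", "content": "Hello"}],
  "stream": false
}'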
For LM Studio:
1. Download LM Studio from https://lmstudio.ai/
2. Load a model (e.g., a Llama or Mistral variant)
3. Start the local server
4. Default endpoint: http://localhost:1234/v1/chat/completions
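To test it, send an OpenAI-style chat request. A minimal sketch; LM Studio typically answers with whichever model is loaded, so the "model" value may be ignored or may need to match the identifier shown in the app:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'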
For Text Generation WebUI:
# Clone and set up
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
./start_linux.sh --api   # use start_windows.bat or start_macos.sh on other platforms
# Default endpoint: http://localhost:5000/v1/chat/completions
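The request format matches the LM Studio example above, and standard OpenAI-style sampling parameters can be included. A sketch, assuming the OpenAI-compatible API was enabled via --api; the currently loaded model is used, so no "model" field is required:

curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7,
    "max_tokens": 128
  }'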
Common Endpoints:
Ollama: http://localhost:11434/api/chat
LM Studio: http://localhost:1234/v1/chat/completions
KoboldAI: http://localhost:5001/api/v1/generate
Text-Gen WebUI: http://localhost:5000/v1/chat/completions
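Note that the KoboldAI endpoint uses a plain prompt-completion format rather than chat messages. A minimal sketch, assuming a KoboldCpp-style server on port 5001 (field names can vary between KoboldAI forks):

curl http://localhost:5001/api/v1/generate \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, my name is", "max_length": 80}'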