REAL AI HIVE-MIND INTEGRATION
Connect and coordinate multiple local AI models

Setup Instructions

For Ollama:

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.1:8b

# Start the server (usually runs on localhost:11434)
ollama serve

# Your endpoint: http://localhost:11434/api/chat
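Once the server is running, any HTTP client can talk to it. A minimal sketch in Python (standard library only) of a single chat turn against Ollama's /api/chat endpoint; the model name and prompt are placeholders:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama chat endpoint

def build_ollama_payload(model: str, prompt: str) -> dict:
    """Build the JSON body Ollama's /api/chat expects for one user turn."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON object instead of a stream of chunks
    }

def ask_ollama(model: str, prompt: str, url: str = OLLAMA_URL,
               timeout: int = 60) -> str:
    """POST a chat request to a local Ollama server and return the reply text."""
    body = json.dumps(build_ollama_payload(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = json.load(resp)
    # Non-streaming replies carry the text under message.content
    return data["message"]["content"]

# Usage (requires a running Ollama server):
#   ask_ollama("llama3.1:8b", "Summarize the hive-mind idea in one sentence.")
```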

For LM Studio:

1. Download LM Studio from https://lmstudio.ai/
2. Load a model (e.g., Llama, Mistral, etc.)
3. Start the local server
4. Default endpoint: http://localhost:1234/v1/chat/completions
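LM Studio's local server speaks the OpenAI-compatible chat completions format. A sketch of one request (standard library only; LM Studio serves whatever model you loaded, so treat the model field as a placeholder):

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # default LM Studio endpoint

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_chat_reply(data: dict) -> str:
    """Pull the assistant text out of an OpenAI-style response."""
    return data["choices"][0]["message"]["content"]

def ask_openai_compatible(model: str, prompt: str, url: str = LMSTUDIO_URL,
                          timeout: int = 60) -> str:
    """POST one chat turn to any OpenAI-compatible local server."""
    body = json.dumps(build_chat_payload(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return extract_chat_reply(json.load(resp))
```

Because Text Generation WebUI exposes the same OpenAI-compatible API, this function also works for it with only the URL changed.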

For Text Generation WebUI:

# Clone and set up
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
./start_linux.sh --api

# Default endpoint: http://localhost:5000/v1/chat/completions

Common Endpoints:

Ollama: http://localhost:11434/api/chat
LM Studio: http://localhost:1234/v1/chat/completions
KoboldAI: http://localhost:5001/api/v1/generate
Text-Gen WebUI: http://localhost:5000/v1/chat/completions
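The "collective intelligence" step amounts to fanning one prompt out to every configured endpoint and gathering the replies. A sketch of that loop, assuming the chat-style request/response shapes of the endpoints above (KoboldAI's /api/v1/generate uses a different, non-chat schema and is omitted; model names are placeholders):

```python
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Endpoint table: name -> (url, model). Adjust to match your local setup.
ENDPOINTS = {
    "ollama":   ("http://localhost:11434/api/chat", "llama3.1:8b"),
    "lmstudio": ("http://localhost:1234/v1/chat/completions", "local-model"),
    "webui":    ("http://localhost:5000/v1/chat/completions", "local-model"),
}

def extract_reply(data: dict) -> str:
    """Handle both response shapes used by the endpoints above."""
    if "message" in data:                       # Ollama /api/chat shape
        return data["message"]["content"]
    if "choices" in data:                       # OpenAI-compatible shape
        return data["choices"][0]["message"]["content"]
    raise ValueError("unrecognized response shape")

def query_one(name, url, model, prompt, timeout=60):
    """Send one prompt to one endpoint; return (name, reply-or-error)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return name, extract_reply(json.load(resp))
    except Exception as exc:  # one dead endpoint should not sink the hive
        return name, f"error: {exc}"

def hive_query(prompt: str) -> dict:
    """Fan the prompt out to all endpoints concurrently; collect replies by name."""
    with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
        futures = [pool.submit(query_one, n, u, m, prompt)
                   for n, (u, m) in ENDPOINTS.items()]
        return dict(f.result() for f in futures)
```

Each model answers independently; how you merge the replies (show all, vote, or summarize with one of the models) is a design choice this sketch leaves open.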
💚 Support this project: 0x0755F4A43C7A567E6554AEedC91F9Fe37737D35F