The combination of Ollama (for running local LLMs) and n8n (a powerful workflow automation tool) opens up exciting possibilities for private, self-hosted AI automation. Let's dive into how to connect these tools and build intelligent workflows, 100% offline if needed!
Prerequisites
Before we start, make sure you have:
- Ollama installed (Download here) – run models like llama3, mistral, or phi3 locally; pull one as shown after this list.
- n8n set up (Installation guide) – self-hosted or cloud version.
- Basic familiarity with APIs and JSON.
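If you don't have a model downloaded yet, pulling one is a single command (mistral here is just an example; any model from the Ollama library works):
ollama pull mistral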
Step 1: Run Ollama & Enable the API
Ollama's REST API lets n8n interact with your local AI. Start Ollama with:
ollama serve
By default, the API listens at http://localhost:11434.
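A quick sanity check: you can list the models you have pulled via Ollama's tags endpoint:
curl http://localhost:11434/api/tags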
Test the API
Try a curl command to confirm it works:
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Hello, how are you?"
}'
You should see a streamed AI response.
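If you'd prefer a single JSON reply instead of a stream (handier when parsing the result in n8n), the same endpoint accepts a stream flag:
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you?",
  "stream": false
}'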
Step 2: Connect Ollama to n8n
In n8n, use the HTTP Request node to call Ollama's API.
Example Workflow: AI-Powered Content Summarizer
- Trigger: Use an n8n Webhook Trigger to receive the text (or a Schedule Trigger, e.g., hourly).
- HTTP Request Node:
  - Method: POST
  - URL: http://localhost:11434/api/generate
  - Headers: Content-Type: application/json
  - Body (JSON):
    { "model": "mistral", "prompt": "Summarize this text: {{ $json.input_text }}" }
- Output: Save the AI response to a file/database or send it via email/Slack (see the sample response below).
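For reference, a non-streaming call to /api/generate returns a JSON object roughly like this (timestamps and performance stats trimmed; exact fields vary by Ollama version):
{
  "model": "mistral",
  "response": "Here is a short summary of the text...",
  "done": true
}
In downstream nodes you can then reference the summary with an expression like {{ $json.response }}.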
Advanced Use Cases
1. Automated Document Processing
- Flow:
  - Watch a folder (via n8n File Trigger).
  - Extract text (OCR if needed).
  - Send to Ollama for summarization/translation (see the sample request below).
  - Output results to Notion or Google Docs.
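The "Send to Ollama" step is just another /api/generate call. A sketch of the HTTP Request node body, where extracted_text is a placeholder for whatever field your extraction step outputs:
{
  "model": "mistral",
  "prompt": "Translate this document to English, then summarize it in 5 bullet points:\n\n{{ $json.extracted_text }}",
  "stream": false
}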
2. AI Chatbot with Slack
- Use n8n's Slack Trigger to detect messages.
- Route queries to Ollama (llama3 for creative answers, phi3 for facts); see the sample chat request below.
- Post replies automatically.
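For chat-style exchanges, Ollama also exposes an /api/chat endpoint that takes a message history. A minimal HTTP Request node body might look like this (the text field name is an assumption; use whatever field your Slack Trigger actually outputs):
{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "{{ $json.text }}" }
  ],
  "stream": false
}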
3. Data Enrichment Pipeline
- Fetch raw data (e.g., CRM contacts).
- Use Ollama to categorize sentiment or generate tags (a sketch of the request follows below).
- Update your database.
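A hedged sketch of that enrichment call: Ollama's API accepts a format option to constrain output to JSON, which makes the reply easy to parse in a later n8n node (contact_notes is a placeholder field name):
{
  "model": "mistral",
  "prompt": "Classify the sentiment (positive, neutral, or negative) and suggest up to 3 tags for this CRM note. Reply as JSON with keys sentiment and tags: {{ $json.contact_notes }}",
  "format": "json",
  "stream": false
}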
Tips & Troubleshooting
- Performance: Larger models (e.g., llama3:70b) need significant RAM. Start with mistral or tinyllama for lightweight tasks.
- Streaming: Ollama's API streams responses by default. Set "stream": false if you prefer a complete reply in one piece.
- Security: If exposing n8n/Ollama online, add authentication (e.g., n8n user management, Ollama behind a reverse proxy).
Why This Combo Rocks
- Privacy: No data leaves your machine.
- Flexibility: Chain AI with 300+ n8n integrations (Discord, Airtable, etc.).
- Cost-Efficient: Avoid GPT-4 API fees for simple tasks.
Pro Tip: Combine with a web search API like Tavily for AI agents that research autonomously!
Ready to build? Start small: try summarizing emails or generating SQL queries, then scale up!