The combination of Ollama (for running local LLMs) and n8n (a powerful workflow automation tool) opens up exciting possibilities for private, self-hosted AI automation. Let’s dive into how to connect these tools and build intelligent workflows, 100% offline if needed!


🔧 Prerequisites

Before we start, make sure you have:

  • Ollama installed (https://ollama.com) – Run models like llama3, mistral, or phi3 locally.
  • n8n set up (https://docs.n8n.io) – Self-hosted or cloud version.
  • Basic familiarity with APIs and JSON.

🌟 Step 1: Run Ollama & Enable API

Ollama’s REST API lets n8n interact with your local AI. Start Ollama with:

ollama serve

By default, the API runs at http://localhost:11434.
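
If you haven’t pulled a model yet, download one first. The examples below use llama3 and mistral, but any model you have pulled locally will work:

ollama pull llama3
ollama pull mistral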

🔹 Test the API

Try a curl command to confirm it works:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you?"
}'

You should see a streamed AI response.
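
By default, /api/generate streams the reply as a series of JSON chunks. If you would rather receive one complete JSON object (easier to handle in n8n later), add "stream": false to the same test request:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you?",
  "stream": false
}'

With streaming disabled, the generated text arrives in the "response" field of a single JSON object.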


🤖 Step 2: Connect Ollama to n8n

In n8n, use the HTTP Request node to call Ollama’s API.

📌 Example Workflow: AI-Powered Content Summarizer

  1. Trigger: Use an n8n Schedule Trigger (e.g., hourly).
  2. HTTP Request Node:
    • Method: POST
    • URL: http://localhost:11434/api/generate
    • Headers: Content-Type: application/json
    • Body (JSON) – {{ $json.input_text }} refers to the text handed over by the previous node, and "stream": false returns one complete reply instead of a stream:
      {
        "model": "mistral",
        "prompt": "Summarize this text: {{ $json.input_text }}",
        "stream": false
      }
  3. Output: Save the AI response (see the response sketch below) to a file/database or send it via email/Slack.
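
With "stream": false set in the body above, the HTTP Request node receives a single JSON object roughly like this sketch (the exact set of extra fields depends on your Ollama version):

{
  "model": "mistral",
  "response": "Here is a short summary of the text...",
  "done": true
}

Any later node (file, database, email, Slack) can then reference the summary with the expression {{ $json.response }}.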

🛠 Advanced Use Cases

1️⃣ Automated Document Processing

  • Flow:
    • Watch a folder (via n8n File Trigger).
    • Extract text (OCR if needed).
    • Send to Ollama for summarization/translation.
    • Output results to Notion or Google Docs.

2️⃣ AI Chatbot with Slack

  • Use n8n’s Slack Trigger to detect messages.
  • Route queries to Ollama (llama3 for creative answers, phi3 for facts); a chat request sketch follows after this list.
  • Post replies automatically.
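
For conversational replies, Ollama also exposes an /api/chat endpoint that accepts a list of messages, which maps naturally onto Slack threads. A minimal sketch of the HTTP Request node body, assuming the Slack message text arrives in a field called message_text (a hypothetical name for illustration):

{
  "model": "llama3",
  "messages": [
    { "role": "user", "content": "{{ $json.message_text }}" }
  ],
  "stream": false
}

The assistant’s answer comes back under message.content, ready to post with n8n’s Slack node.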

3️⃣ Data Enrichment Pipeline

  • Fetch raw data (e.g., CRM contacts).
  • Use Ollama to categorize sentiment or generate tags (see the JSON-output sketch after this list).
  • Update your database.
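
Ollama can be nudged toward structured output by adding "format": "json" to the request, which makes the result easy to write back into your database from n8n. A minimal sketch of the HTTP Request node body, where {{ $json.contact_note }} is a hypothetical field standing in for your CRM text:

{
  "model": "mistral",
  "prompt": "Classify the sentiment of the following note as positive, neutral, or negative and suggest up to three tags. Reply only with JSON using the keys sentiment and tags. Note: {{ $json.contact_note }}",
  "format": "json",
  "stream": false
}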

⚠️ Tips & Troubleshooting

  • Performance: Larger models (e.g., llama3:70b) need significant RAM. Start with mistral or tinyllama for lightweight tasks.
  • Streaming: Ollama’s API streams responses. Use "stream": false if you prefer a complete reply.
  • Security: If exposing n8n/Ollama online, add authentication (e.g., n8n user management, Ollama behind a reverse proxy); see the binding example below.
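
Ollama binds to localhost by default; if you ever need to change that, its listen address is controlled by the OLLAMA_HOST environment variable. A minimal sketch for keeping it reachable only from the local machine:

OLLAMA_HOST=127.0.0.1:11434 ollama serve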

🔥 Why This Combo Rocks

  • Privacy: No data leaves your machine.
  • Flexibility: Chain AI with 300+ n8n integrations (Discord, Airtable, etc.).
  • Cost-Efficient: Avoid GPT-4 API fees for simple tasks.

📚 Resources

💡 Pro Tip: Combine with a web search tool such as Tavily (a search API designed for AI agents) so your agents can research autonomously!

Ready to build? Start small: try summarizing emails or generating SQL queries, then scale up! 🚀
