Wed. August 13th, 2025

The combination of Ollama (for running local LLMs) and n8n (a powerful workflow automation tool) opens up exciting possibilities for private, self-hosted AI automation. Let's dive into how to connect these tools and build intelligent workflows, 100% offline if needed!


🔧 Prerequisites

Before we start, make sure you have:

  • Ollama installed (Download here) – Run models like llama3, mistral, or phi3 locally.
  • n8n set up (Installation guide) – Self-hosted or cloud version.
  • Basic familiarity with APIs and JSON.

🌟 Step 1: Run Ollama & Enable the API

Ollama's REST API lets n8n interact with your local AI. Start Ollama with:

ollama serve

By default, the API runs at http://localhost:11434.
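
If the models you plan to use aren't downloaded yet, pull them first and check what's installed (this assumes the llama3 and mistral tags used later in this post; swap in whatever models you prefer):

# Download models once (each is several GB)
ollama pull llama3
ollama pull mistral

# Confirm which models are available locally
ollama list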

🔹 Test the API

Try a curl command to confirm it works:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you?"
}'

You should see a streamed AI response.
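
If you'd rather receive one complete reply instead of a stream (easier to inspect while testing), the same endpoint accepts "stream": false:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Hello, how are you?",
  "stream": false
}'

The answer then comes back as a single JSON object whose "response" field contains the generated text.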


🤖 Step 2: Connect Ollama to n8n

In n8n, use the HTTP Request node to call Ollama's API.

📌 Example Workflow: AI-Powered Content Summarizer

  1. Trigger: Use an n8n Schedule Trigger (e.g., hourly), followed by a node that fetches the text to summarize (RSS, HTTP Request, etc.) and passes it along as input_text.
  2. HTTP Request Node:
    • Method: POST
    • URL: http://localhost:11434/api/generate
    • Headers: Content-Type: application/json
    • Body (JSON):
      {
        "model": "mistral",
        "prompt": "Summarize this text: {{ $json.input_text }}",
        "stream": false
      }
  3. Output: Save the AI response to a file/database or send it via email/Slack.
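
Before wiring everything together, it can help to replay the node's request from the command line. The sketch below mirrors the HTTP Request body above, with a hard-coded sample text standing in for the n8n expression:

curl http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mistral",
    "prompt": "Summarize this text: n8n is a workflow automation tool that connects hundreds of apps and services.",
    "stream": false
  }'

The summary arrives in the "response" field, which the downstream output node (email, Slack, database) can reference.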

🛠 Advanced Use Cases

1๏ธโƒฃ Automated Document Processing

  • Flow:
    • Watch a folder (via n8n File Trigger).
    • Extract text (OCR if needed).
    • Send to Ollama for summarization/translation.
    • Output results to Notion or Google Docs.
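
A rough sketch of the Ollama call in this flow (the placeholder stands in for whatever text your OCR/extraction step produces):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Translate the following document to English, then summarize it in 3 bullet points:\n\n<extracted document text here>",
  "stream": false
}'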

2๏ธโƒฃ AI Chatbot with Slack

  • Use n8n's Slack Trigger to detect messages.
  • Route queries to Ollama (llama3 for creative answers, phi3 for facts).
  • Post replies automatically.
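
For chat-style interactions like this, Ollama also exposes a /api/chat endpoint that takes a list of messages instead of a single prompt. A minimal sketch, assuming the Slack message text is dropped into the user content:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [
    {"role": "user", "content": "What is the capital of France?"}
  ],
  "stream": false
}'

The assistant's reply is returned under message.content, which the Slack node can post back to the channel.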

3๏ธโƒฃ Data Enrichment Pipeline

  • Fetch raw data (e.g., CRM contacts).
  • Use Ollama to categorize sentiment or generate tags.
  • Update your database.
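
Because this pipeline needs machine-readable output, Ollama's "format": "json" option comes in handy. A hedged example that asks for sentiment and tags as JSON (the prompt text is illustrative):

curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Classify the sentiment (positive, negative, or neutral) and suggest up to 3 tags for this CRM note: \"Customer loved the demo but asked about pricing.\" Respond as JSON with keys sentiment and tags.",
  "format": "json",
  "stream": false
}'

With the format constraint in place, the model returns valid JSON that n8n can parse and write back to your database.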

โš ๏ธ Tips & Troubleshooting

  • Performance: Larger models (e.g., llama3:70b) need significant RAM. Start with mistral or tinyllama for lightweight tasks.
  • Streaming: Ollama's API streams responses by default. Use "stream": false if you prefer a complete reply.
  • Security: If exposing n8n/Ollama online, add authentication (e.g., n8n user management, Ollama behind a reverse proxy).

🔥 Why This Combo Rocks

  • Privacy: No data leaves your machine.
  • Flexibility: Chain AI with 300+ n8n integrations (Discord, Airtable, etc.).
  • Cost-Efficient: Avoid GPT-4 API fees for simple tasks.

💡 Pro Tip: Combine with a web search tool like Tavily for AI agents that research autonomously!

Ready to build? Start small: try summarizing emails or generating SQL queries, then scale up! 🚀
