Inspectable workflow example

Slack Knowledge Bot with n8n, Qdrant, and Groq

A practical workflow that turns approved Google Drive docs into a Slack assistant. It chunks selected files, stores embeddings in Qdrant, retrieves relevant context, asks Groq to answer only from that context, and replies with sources or a low-confidence note.

Many teams already have the answer somewhere: a pricing note, an onboarding doc, a service FAQ, a checklist, or a handoff process. The slow part is finding the right section while work is happening.

This workflow gives Slack a searchable knowledge layer. A team member asks a question, n8n searches an approved Google Drive folder through Qdrant, and Groq writes an answer from the retrieved context with source files attached.

The useful part is not a confident answer.

It is a traceable answer: what documents were searched, which sources were used, and whether the match was weak enough to treat with caution.

Knowledge source: approved Google Drive Markdown files
Search layer: Qdrant vectors from local embeddings
Team interface: Slack mention with sources

Approximate running cost

For this demo, the software bill can stay very low. Ollama runs the embedding model locally. n8n can run as a free self-hosted community edition. Qdrant can run locally or start on a free cloud tier for testing. Groq is the main paid API in the stack, and a small internal bot like this can often be tested on a small balance while question volume stays modest.

1. Ollama embeddings: no API bill when the embedding model runs on your own machine or server. The cost is local compute.

2. n8n: free if self-hosted, with hosting and maintenance still owned by someone. n8n Cloud is paid if you want managed hosting.

3. Qdrant: local/self-hosted can be free to run as software. Qdrant Cloud also has a free tier for testing and prototypes, with paid usage when the project grows.

4. Groq: the answer generation step is usage-based. For a low-volume demo, a small balance such as USD 5 can be enough to test the pattern for a while, but real spend depends on model choice, prompt size, and usage volume.

Low software cost to start. Real cost is hosting, API calls, and whoever maintains it.

The full workflow map

The canvas has two lanes. The top lane refreshes the knowledge base: list files, filter approved text files, chunk them, embed them, delete old chunks for the same source, and upsert the fresh points into Qdrant. The bottom lane answers questions from Slack.

n8n workflow canvas for a Slack knowledge bot with Google Drive ingestion and Slack answer paths
Two visible paths: document ingestion on top, Slack question answering below.

1. Start with approved docs

The workflow starts from a controlled Google Drive folder. In this demo, the folder contains Markdown files for FloxoLab pricing, services, FAQ, workflow examples, onboarding, and intake process notes.

That matters because a useful knowledge bot should not begin by crawling a messy shared drive. The first version should use current, approved documents that are safe for internal answers.

Google Drive demo folder containing selected Markdown knowledge files
The workflow filters for Markdown files, so the first knowledge base stays simple and reviewable.
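
The filter itself is tiny. A minimal sketch, assuming the file list arrives with the Drive API's usual id, name, and mimeType fields; adjust to whatever your Drive node actually outputs:

```javascript
// Keep only approved Markdown files from the Drive listing.
// Field names follow the Drive API files.list response; they are
// an assumption here, so adjust them to your node's actual output.
function filterApprovedFiles(files) {
  return files.filter(
    (f) => f.name.endsWith('.md') || f.mimeType === 'text/markdown'
  );
}
```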

2. Chunk the files with metadata

Each file is downloaded and passed through a code node that splits text by headings, keeps chunks under a practical word limit, and adds metadata such as source, file index, chunk index, word count, and last modified time.

Metadata is not decoration. It is what lets the answer show where the context came from, and it gives the workflow a clean way to replace old chunks from the same source during the next ingestion run.

n8n Smart Chunk and Metadata output showing chunk text, source, chunk index, and word count
Each chunk carries source and index data, not just text.
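
For readers who want to see the shape of that Code node, here is a minimal sketch. The 300-word cap and the heading regex are illustrative choices, not the workflow's exact values:

```javascript
// Split a Markdown document on headings, cap each chunk at a word
// limit, and attach the metadata the rest of the workflow relies on.
function chunkDocument(text, source, fileIndex, modifiedTime, maxWords = 300) {
  const sections = text.split(/\n(?=#{1,6}\s)/); // split before each heading
  const chunks = [];
  for (const section of sections) {
    const words = section.split(/\s+/).filter(Boolean);
    // Break long sections into word-limited windows.
    for (let start = 0; start < words.length; start += maxWords) {
      chunks.push({
        text: words.slice(start, start + maxWords).join(' '),
        source,                 // file name, shown later as the answer source
        fileIndex,
        chunkIndex: chunks.length,
        wordCount: Math.min(maxWords, words.length - start),
        lastModified: modifiedTime,
      });
    }
  }
  return chunks;
}
```

Keeping source and chunkIndex on every chunk is what makes the later delete-and-replace step possible.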

3. Store searchable chunks in Qdrant

The workflow uses Ollama with nomic-embed-text for embeddings, then prepares a Qdrant point with a deterministic id, vector, and payload. Before upserting new points, it deletes old chunks for the same source so stale content does not sit beside fresh content.

Qdrant collection showing stored knowledge chunks with text, source, chunk index, and vector length
Qdrant stores the vector plus the payload fields the workflow needs later: text, source, chunk index, timestamps, and word count.
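
A sketch of that store step, using Qdrant's REST API and Ollama's embeddings endpoint. The URLs, the collection name, and the hash-based UUID scheme are assumptions for illustration; the actual workflow nodes may wire this differently:

```javascript
// Embed each chunk with Ollama, delete the source's old points,
// then upsert fresh ones into Qdrant.
import { createHash } from 'node:crypto';

const QDRANT = 'http://localhost:6333';   // assumed local Qdrant
const COLLECTION = 'knowledge';           // assumed collection name

// Deterministic point id: the same source + chunk index always maps
// to the same UUID-shaped string, so re-ingestion overwrites points
// instead of duplicating them.
function pointId(source, chunkIndex) {
  const hex = createHash('sha1').update(`${source}:${chunkIndex}`).digest('hex');
  return [hex.slice(0, 8), hex.slice(8, 12), hex.slice(12, 16),
          hex.slice(16, 20), hex.slice(20, 32)].join('-');
}

async function embed(text) {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
  });
  return (await res.json()).embedding;
}

async function refreshSource(source, chunks) {
  // Remove stale points for this source before writing new ones.
  await fetch(`${QDRANT}/collections/${COLLECTION}/points/delete`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filter: { must: [{ key: 'source', match: { value: source } }] },
    }),
  });

  const points = [];
  for (const chunk of chunks) {
    points.push({
      id: pointId(source, chunk.chunkIndex),
      vector: await embed(chunk.text),
      payload: chunk, // text, source, chunkIndex, wordCount, lastModified
    });
  }

  await fetch(`${QDRANT}/collections/${COLLECTION}/points?wait=true`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ points }),
  });
}
```

Deleting by source filter before the upsert is what keeps re-ingestion clean: editing one Drive file replaces only that file's chunks instead of stacking duplicates.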

4. Ask from Slack

The question flow starts when someone mentions the bot in Slack. n8n removes the mention text, embeds the clean question, searches Qdrant for the top matching chunks, and builds a compact context block for Groq.
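
A sketch of that retrieval step. The endpoints and collection name repeat the assumptions from the ingestion sketch, and the mention regex and limit of four chunks are illustrative:

```javascript
// Strip the bot mention, embed the clean question, and pull the
// top-matching chunks from Qdrant.
const QDRANT = 'http://localhost:6333';
const COLLECTION = 'knowledge';

async function embed(text) {
  const res = await fetch('http://localhost:11434/api/embeddings', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'nomic-embed-text', prompt: text }),
  });
  return (await res.json()).embedding;
}

async function searchKnowledge(slackText) {
  const question = slackText.replace(/<@[^>]+>/g, '').trim(); // drop <@BOTID>
  const vector = await embed(question);

  const res = await fetch(`${QDRANT}/collections/${COLLECTION}/points/search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ vector, limit: 4, with_payload: true }),
  });
  const { result } = await res.json();

  // Compact context block: each chunk labeled with its source file.
  const context = result
    .map((hit) => `[${hit.payload.source}]\n${hit.payload.text}`)
    .join('\n\n');
  const sources = [...new Set(result.map((hit) => hit.payload.source))];
  return { question, context, sources, topScore: result[0]?.score ?? 0 };
}
```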

The answer prompt is deliberately strict: answer only from the provided context, do not invent pricing, services, guarantees, timelines, or policies, and keep the message concise enough for Slack.
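
In code form, the answer step might look like this against Groq's OpenAI-compatible chat endpoint. The model id and token cap are placeholders rather than the workflow's exact settings:

```javascript
// Call Groq with a context-only system prompt so the model cannot
// wander beyond the retrieved chunks.
async function answerFromContext(question, context) {
  const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`, // your Groq key
    },
    body: JSON.stringify({
      model: 'llama-3.1-8b-instant', // placeholder; use the model you enabled
      max_tokens: 400,               // keep replies Slack-sized
      messages: [
        {
          role: 'system',
          content:
            'Answer only from the provided context. Do not invent pricing, ' +
            'services, guarantees, timelines, or policies. If the context ' +
            'does not contain the answer, say you do not know. Keep the ' +
            'reply concise enough for Slack.',
        },
        {
          role: 'user',
          content: `Context:\n${context}\n\nQuestion: ${question}`,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```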

Slack question and bot answer showing sources from approved knowledge files
The user asks inside Slack. The bot answers from the retrieved context and appends source files.

5. Make uncertainty visible

The workflow also marks weak retrieval. If the top Qdrant similarity score falls below a set threshold, the answer includes a low-confidence note. That is a practical guardrail: the user can see that the answer should be checked before it drives a decision.

Slack bot low confidence response when the answer is not found in the approved documents
A missing answer should stay visible. Here the bot says it does not know and marks the match as low confidence.
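
The guardrail itself is only a few lines. A sketch, with an illustrative 0.5 cutoff that should be tuned against real questions:

```javascript
// Prepend a low-confidence note when the best Qdrant match is weak.
// The 0.5 threshold is illustrative, not the workflow's exact value.
const LOW_CONFIDENCE_THRESHOLD = 0.5;

function formatReply(answer, sources, topScore) {
  const note =
    topScore < LOW_CONFIDENCE_THRESHOLD
      ? ':warning: Low-confidence match. Please verify before acting on this.\n\n'
      : '';
  return `${note}${answer}\n\nSources: ${sources.join(', ')}`;
}
```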

What I would improve next

The demo proves the path, but a production version should make ownership clearer.

1. Better source links. File names are useful, but links back to the exact Google Drive document would make review faster.

2. Threaded Slack replies. Posting answers in the original thread would keep channels cleaner.

3. Low-confidence review. Weak matches could be logged to a sheet or sent to a private review channel.

4. Ingestion logs. A clear record of files, chunk counts, and failures would make handoff easier.

Bottom line

A Slack knowledge bot is useful when the answer already exists but the team wastes time finding it. The important choices are not just the model and the database. They are the approved knowledge set, the retrieval threshold, the source display, and the fallback behavior.

This workflow keeps those parts visible: selected docs in, searchable chunks stored, Slack question asked, context retrieved, answer written from that context, sources shown, uncertainty surfaced.

Related guide

For the plain-English explanation behind this pattern, read RAG for Small Businesses: AI That Searches Your Company Knowledge.

n8n referral

Trying n8n Cloud?

If this workflow helped you decide n8n is worth testing, you can use my referral link. FloxoLab may earn a commission at no extra cost to you.

Try n8n Cloud

Want to map one knowledge workflow?

Send the docs your team searches most often, where the questions happen, and one example question people keep asking. I will map the most useful first version before suggesting a larger build.

Map my knowledge workflow