Many teams already have the answer somewhere: a pricing note, an onboarding doc, a service FAQ, a checklist, or a handoff process. The slow part is finding the right section while work is happening.
This workflow gives Slack a searchable knowledge layer. A team member asks a question, n8n searches an approved Google Drive folder through Qdrant, and Groq writes an answer from the retrieved context with source files attached.
The result is a traceable answer: you can see which documents were searched, which sources were used, and whether the top match was weak enough to treat with caution.
Approximate running cost
For this demo, the software bill can stay very low. Ollama runs the embedding model locally. n8n can run as the free self-hosted community edition. Qdrant can run locally or start on a free cloud tier for testing. Groq is the main paid API, and a small internal bot like this can often run for a while on a modest prepaid balance when question volume is low.
Ollama embeddings: no API bill when the embedding model runs on your own machine or server. The cost is local compute.
n8n: free if self-hosted, with hosting and maintenance still owned by someone. n8n Cloud is paid if you want managed hosting.
Qdrant: local/self-hosted can be free to run as software. Qdrant Cloud also has a free tier for testing and prototypes, with paid usage when the project grows.
Groq: the answer generation step is usage-based. For a low-volume demo, a small balance such as USD 5 can be enough to test the pattern for a while, but real spend depends on model choice, prompt size, and usage volume.
Low software cost to start. Real cost is hosting, API calls, and whoever maintains it.
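To make the Groq line item concrete, a back-of-envelope estimate is enough. All numbers below are hypothetical placeholders, not Groq's actual prices; check the current pricing page before budgeting.

```javascript
// Back-of-envelope monthly spend for the answer-generation step.
// Every number here is a HYPOTHETICAL placeholder assumption.
function estimateMonthlySpend({ questionsPerDay, tokensPerQuestion, pricePerMillionTokens }) {
  const tokensPerMonth = questionsPerDay * 30 * tokensPerQuestion;
  return (tokensPerMonth / 1_000_000) * pricePerMillionTokens;
}

// Example: 40 questions/day, ~2,000 tokens each (prompt + answer),
// at a placeholder $0.50 per million tokens.
const spend = estimateMonthlySpend({
  questionsPerDay: 40,
  tokensPerQuestion: 2000,
  pricePerMillionTokens: 0.5,
});
// ~1.20 USD/month under these assumptions
```

The point of the exercise is the shape of the formula, not the result: spend scales linearly with question volume and with how much context you stuff into each prompt.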
The full workflow map
The canvas has two lanes. The top lane refreshes the knowledge base: list files, filter approved text files, chunk them, embed them, delete old chunks for the same source, and upsert the fresh points into Qdrant. The bottom lane answers questions from Slack.
1. Start with approved docs
The workflow starts from a controlled Google Drive folder. In this demo, the folder contains Markdown files for FloxoLab pricing, services, FAQ, workflow examples, onboarding, and intake process notes.
That matters because a useful knowledge bot should not begin by crawling a messy shared drive. The first version should use current, approved documents that are safe for internal answers.
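The "approved text files only" rule can be enforced with a simple allow-list filter after the Drive listing. This is a sketch, assuming the listing returns objects with a `name` field; the extension list and field name are illustrative, not n8n's exact item shape.

```javascript
// Keep only file types that were approved for ingestion.
// The allow-list is an assumption; extend it deliberately, not by default.
const APPROVED_EXTENSIONS = [".md", ".txt"];

function filterApprovedFiles(files) {
  return files.filter((f) =>
    APPROVED_EXTENSIONS.some((ext) => f.name.toLowerCase().endsWith(ext))
  );
}

const listing = [
  { name: "pricing.md" },
  { name: "team-photo.png" },
  { name: "FAQ.MD" },
];
const approved = filterApprovedFiles(listing);
// approved keeps pricing.md and FAQ.MD; the image is dropped
```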
2. Chunk the files with metadata
Each file is downloaded and passed through a code node that splits text by headings, keeps chunks under a practical word limit, and adds metadata such as source, file index, chunk index, word count, and last modified time.
Metadata is not decoration. It is what lets the answer show where the context came from, and it gives the workflow a clean way to replace old chunks from the same source during the next ingestion run.
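A minimal sketch of that code node, assuming Markdown input: split at heading lines, cap each chunk at a word limit, and attach metadata. The field names (`source`, `chunkIndex`, `wordCount`) are illustrative, not the workflow's exact schema.

```javascript
// Split Markdown into heading-delimited chunks with a word cap,
// carrying metadata so every chunk can be traced back to its file.
function chunkMarkdown(text, source, maxWords = 200) {
  const sections = text.split(/\n(?=#{1,6} )/); // split before each heading line
  const chunks = [];
  for (const section of sections) {
    const words = section.trim().split(/\s+/).filter(Boolean);
    for (let i = 0; i < words.length; i += maxWords) {
      chunks.push({
        text: words.slice(i, i + maxWords).join(" "),
        source,
        chunkIndex: chunks.length,
        wordCount: Math.min(maxWords, words.length - i),
      });
    }
  }
  return chunks;
}

const chunks = chunkMarkdown("# Pricing\none two three\n\n# FAQ\nfour five", "faq.md");
// two chunks, one per heading section, each tagged with source "faq.md"
```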
3. Store searchable chunks in Qdrant
The workflow uses Ollama with nomic-embed-text for embeddings, then prepares a Qdrant point with a deterministic id, vector, and payload. Before upserting new points, it deletes old chunks for the same source so stale content does not sit beside fresh content.
4. Ask from Slack
The question flow starts when someone mentions the bot in Slack. n8n removes the mention text, embeds the clean question, searches Qdrant for the top matching chunks, and builds a compact context block for Groq.
The answer prompt is deliberately strict: answer only from the provided context, do not invent pricing, services, guarantees, timelines, or policies, and keep the message concise enough for Slack.
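The cleanup and context-assembly steps can be sketched in a few lines. Slack app mentions arrive embedded in the message text as `<@USERID>`; the regex and the numbered context format below are assumptions about the shape, not the workflow's exact code.

```javascript
// Strip the bot mention so only the question is embedded.
function stripMention(text) {
  return text.replace(/<@[A-Z0-9]+>/g, "").trim();
}

// Turn Qdrant hits ({ payload: { text, source }, score }) into a
// compact, numbered context block with the source name on each entry.
function buildContextBlock(hits) {
  return hits
    .map((h, i) => `[${i + 1}] (${h.payload.source})\n${h.payload.text}`)
    .join("\n\n");
}

const question = stripMention("<@U123ABC> what does onboarding cost?");
const context = buildContextBlock([
  { payload: { text: "Onboarding is a flat fee.", source: "pricing.md" }, score: 0.82 },
]);
```

Numbering the context entries also lets the strict prompt ask the model to cite `[1]`, `[2]` style references, which maps cleanly back to the attached source files.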
5. Make uncertainty visible
The workflow also marks weak retrieval. If the top Qdrant similarity score falls below a threshold, the answer includes a low-confidence note. That is a practical guardrail: users can see when an answer should be double-checked before it drives a decision.
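The guardrail itself is a one-line comparison. The 0.5 threshold below is a placeholder assumption; the right value depends on the embedding model and corpus, and is worth tuning against real questions.

```javascript
// Flag weak retrieval: if the best match scores below the threshold,
// prepend a caution note to the answer. Threshold value is a placeholder.
const SCORE_THRESHOLD = 0.5;

function confidenceNote(hits) {
  const topScore = hits.length ? hits[0].score : 0;
  return topScore < SCORE_THRESHOLD
    ? "Low-confidence match: please verify before acting on this answer."
    : "";
}
```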
What I would improve next
The demo proves the path, but a production version should make ownership, review, and traceability clearer.
Better source links. File names are useful, but links back to the exact Google Drive document would make review faster.
Threaded Slack replies. Posting answers in the original thread would keep channels cleaner.
Low-confidence review. Weak matches could be logged to a sheet or sent to a private review channel.
Ingestion logs. A clear record of files, chunk counts, and failures would make handoff easier.
Bottom line
A Slack knowledge bot is useful when the answer already exists, but the team wastes time finding it. The important choices are not only model and database. They are the approved knowledge set, the retrieval threshold, the source display, and the fallback behavior.
This workflow keeps those parts visible: selected docs in, searchable chunks stored, Slack question asked, context retrieved, answer written from that context, sources shown, uncertainty surfaced.
Related guide
For the plain-English explanation behind this pattern, read RAG for Small Businesses: AI That Searches Your Company Knowledge.