This is a demo workflow, not a fake client case study. The goal is to show the useful parts of AI automation in the Philippines: the input, the rules, the classifier, the validation step, the route, and the log.
The lead path is walked through in detail because it is enough to show the pattern. Support, invoice, spam, and review branches use the same structure: validate first, route second, act third, log last.
The hard part is not connecting nodes. It is deciding what should happen, testing real emails, tuning labels, and making the workflow understandable enough for someone else to own.
The full workflow map
The canvas has five possible routes. This page follows the lead route end to end, while keeping the other branches visible on the map.
1. Start with a controlled test email
Before building the rest of the workflow, send yourself one test email for the scenario you want to prove. In this case, the test email is a lead inquiry asking about n8n automation services and pricing.
The Gmail Trigger is then tested and pinned. That gives every downstream node a stable input while the workflow is being built, instead of waiting for Gmail to fire again.
2. Normalize fields once
Gmail gives useful data, but it is not shaped for the rest of the workflow. The Extract Fields node turns raw Gmail output into clean top-level fields: sender, sender name, domain, subject, body snippet, attachment names, MIME types, and a PDF flag.
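A minimal sketch of that extraction step as a pure function. The raw input shape and field names here are assumptions, not the exact Gmail node output; adjust the property names to match what your trigger actually emits.

```typescript
// Hypothetical shape of the raw email coming out of the Gmail Trigger.
interface RawEmail {
  from: string; // e.g. 'Jane Doe <jane@acme.com>'
  subject: string;
  snippet: string;
  attachments: { name: string; mimeType: string }[];
}

// Turn the raw payload into the clean top-level fields the rest
// of the workflow reads: sender, name, domain, subject, snippet,
// attachment names, MIME types, and a PDF flag.
function extractFields(raw: RawEmail) {
  const match = raw.from.match(/^(.*?)\s*<(.+)>$/);
  const senderName = match ? match[1] : "";
  const sender = match ? match[2] : raw.from;
  const mimeTypes = raw.attachments.map((a) => a.mimeType);
  return {
    sender,
    senderName,
    domain: sender.split("@")[1] ?? "",
    subject: raw.subject,
    bodySnippet: raw.snippet.slice(0, 500),
    attachmentNames: raw.attachments.map((a) => a.name),
    mimeTypes,
    hasPdf: mimeTypes.includes("application/pdf"),
  };
}
```

In n8n this would live in a Code node, with the same logic applied to each incoming item.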
3. Add basic rule signals before AI
Basic Rules does not replace the classifier in this demo. It adds cheap, inspectable signals before the AI step: suspicious domains, no-reply senders, newsletter patterns, aggressive spam words, trusted domains, and attachment flags.
That makes the workflow easier to review. If the model later calls something spam, validation can check whether the rule-based signals agree before archiving anything.
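A sketch of what those rule signals could look like. The word lists and domain lists are illustrative placeholders, not tuned values; the point is that every signal is a plain boolean a reviewer can inspect.

```typescript
// Illustrative lists only -- replace with values tuned against real email.
const SPAM_WORDS = ["winner", "act now", "free money"];
const TRUSTED_DOMAINS = ["acme.com"];

// Cheap, inspectable signals computed before the AI step.
function ruleSignals(fields: {
  sender: string;
  domain: string;
  subject: string;
  bodySnippet: string;
  hasPdf: boolean;
}) {
  const text = `${fields.subject} ${fields.bodySnippet}`.toLowerCase();
  return {
    noReply: /^no-?reply@/i.test(fields.sender),
    newsletterPattern: /unsubscribe/i.test(fields.bodySnippet),
    spamWords: SPAM_WORDS.some((w) => text.includes(w)),
    trustedDomain: TRUSTED_DOMAINS.includes(fields.domain),
    hasPdf: fields.hasPdf,
  };
}
```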
4. Keep the AI classifier small
The classifier is called through an HTTP Request to Groq. Its job is narrow: return one label, one confidence score, and one short reason.
{
"label": "lead",
"confidence": 0.95,
"reason": "The email is an inquiry about services and pricing from a potential client."
}
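The request body behind that call could be built like this. The model name, prompt wording, and label set here are assumptions; the n8n HTTP Request node POSTs a body of this shape to Groq's OpenAI-compatible chat completions endpoint with a Bearer API key.

```typescript
// Builds the body the HTTP Request node sends to
// https://api.groq.com/openai/v1/chat/completions (assumed endpoint).
function buildClassifierRequest(subject: string, snippet: string) {
  return {
    model: "llama-3.1-8b-instant", // placeholder; any Groq chat model works
    temperature: 0, // keep the label as deterministic as possible
    messages: [
      {
        role: "system",
        content:
          "Classify this email. Reply with JSON only: " +
          '{"label": "lead|support|invoice|spam|review", ' +
          '"confidence": 0.0-1.0, "reason": "one short sentence"}',
      },
      { role: "user", content: `Subject: ${subject}\n\nBody: ${snippet}` },
    ],
  };
}
```

Keeping the prompt this narrow is what makes the next step possible: a one-label, one-score response is easy to validate.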
5. Validate before routing
AI output is not safe to trust directly. It can return markdown, broken JSON, a low confidence value, or a label that is semantically correct but operationally unsafe.
Invalid JSON goes to review. The workflow should not fall over because the model wrapped JSON in markdown.
Confidence below 0.75 goes to review. If the model is unsure, a human should see it.
Spam needs rule confirmation. A spam label alone is not enough to archive a message.
Invoice without PDF goes to review. The action depends on attachments, not only on the AI label.
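The four checks above can be sketched as one validation function. The 0.75 threshold comes from the text; the signal names and the exact rule-agreement logic for spam are assumptions to be tuned against real email.

```typescript
interface Signals {
  spamWords: boolean;
  noReply: boolean;
  trustedDomain: boolean;
}

// Turn the raw model output into a validated route.
// Anything uncertain or unsafe falls through to "review".
function validate(rawModelOutput: string, signals: Signals, hasPdf: boolean): string {
  // The model may wrap JSON in a markdown fence; strip it before parsing.
  const cleaned = rawModelOutput.replace(/^```(?:json)?\s*|\s*```$/g, "").trim();
  let parsed: { label?: string; confidence?: number };
  try {
    parsed = JSON.parse(cleaned);
  } catch {
    return "review"; // invalid JSON -> review
  }
  if (typeof parsed.confidence !== "number" || parsed.confidence < 0.75) {
    return "review"; // low or missing confidence -> review
  }
  if (parsed.label === "spam") {
    // A spam label alone is not enough to archive; rules must agree.
    const rulesAgree = signals.spamWords || (signals.noReply && !signals.trustedDomain);
    return rulesAgree ? "spam" : "review";
  }
  if (parsed.label === "invoice" && !hasPdf) {
    return "review"; // the action needs an attachment, not just a label
  }
  return parsed.label ?? "review";
}
```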
6. Route by validated route, not raw AI label
The router uses the validated route field. This keeps the Switch node simple: the safety decisions already happened in Validation.
7. Lead path: CRM, Slack, and master log
The tested path for this demo is lead routing. The workflow writes a CRM row, posts a Slack alert, and then appends the operation to the master log.
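One possible shape for a master log row; the column names and status value here are assumptions, not the demo's exact schema.

```typescript
// Build the row appended to the master log after a route completes.
// Logging the validated route (not the raw AI label) keeps the log
// consistent with what the workflow actually did.
function masterLogRow(input: {
  sender: string;
  subject: string;
  route: string;
  confidence: number;
}) {
  return {
    timestamp: new Date().toISOString(),
    sender: input.sender,
    subject: input.subject,
    route: input.route,
    confidence: input.confidence,
    status: "completed",
  };
}
```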
Other branches use the same pattern
Fully testing every branch means preparing separate tables, statuses, dates, files, review rows, and sometimes real attachments. That setup work matters more than simply drawing lines between nodes.
Support: create a Notion or Airtable ticket, draft a reply, notify Slack, then log the completed route.
Invoice: save the PDF to Drive, add a row to an invoice sheet, notify admin, then log the route. If there is no PDF, send it to review.
Spam: archive only when rule signals and confidence agree. Otherwise send it to review.
Review: add uncertain messages to a human review queue instead of forcing the workflow to guess.
What this demo does not cover
This is a useful first version, not the final production checklist. A production build would add an Error Trigger workflow, retry rules, stricter credential ownership, more test messages, and clearer handoff notes for the person who owns the inbox.
It would also test each branch with real examples: a support question, a real invoice PDF, a marketing email, a spam candidate, and at least one messy message that should go to review.
The setup pieces can be scaffolded too, such as a Drive folder, Sheets logs, a CRM sheet, and a Notion ticket database. I covered that pattern in the Claude connectors guide.
Bottom line
This is the kind of workflow automation that teams in the Philippines can actually inspect: one trigger, clean fields, editable rules, a small AI classifier, validation before action, visible branches, Slack alerts, and a master log.
It is not AI magic. It is a small operations workflow that becomes useful when the labels, thresholds, and ownership are tuned against real email.