5 GHL Bot Failures That Are Silently Costing You Clients
By BadBots.ai Team
We've audited over 50 GHL bots across agencies of all sizes — solo operators managing 3 sub-accounts and teams running 50+. The bots look different on the surface: med spas, real estate, dental practices, coaching businesses. But the failures are remarkably consistent.
These aren't obscure edge cases. They're patterns that show up in the majority of bots we test. And the worst part? Most agency owners don't know they're happening, because the failures only surface in real customer conversations — the ones you never see.
Here's what we found, why it happens, and how to fix each one.
1. Hallucinated Pricing and Offers
This is the single most common failure we see. A customer asks "How much does [service] cost?" and the bot confidently responds with a number that's completely wrong — or invents a price for a service the business doesn't even offer.
Why it happens: GHL's Conversation AI pulls from the knowledge base, but when the KB doesn't contain explicit pricing, the bot doesn't say "I don't have that information." Instead, it tries to be helpful. It infers, extrapolates, or outright fabricates a number based on context clues in the KB content.
Real example: A med spa bot told a customer that Botox was "$10 per unit" when the actual price was $14 per unit. The bot had pulled a number from a promotional document that was uploaded to the KB months earlier and never removed.
The fix: Audit your knowledge base for outdated content — old promotions, discontinued services, pricing from last year. Then test the bot specifically with pricing questions for every service listed and a few that aren't. If the bot makes up a price for a service you don't offer, your KB instructions need tightening. Add explicit instructions like "If pricing is not in the knowledge base, tell the customer to call for current pricing."
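One way to make that pricing audit repeatable is a simple check over the bot's replies: flag any dollar amount the bot quotes that isn't on your list of prices actually in the KB. This is a sketch, not a GHL feature — `KNOWN_PRICES` is a list you'd maintain yourself, and fetching the bot's reply is up to whatever test harness you use.

```python
import re

# Prices actually listed in this client's knowledge base.
# Assumption: you maintain this set by hand; GHL does not export it.
KNOWN_PRICES = {"$14", "$150", "$99"}

def find_unverified_prices(bot_reply: str) -> list[str]:
    """Return any dollar amounts in a bot reply that are NOT in the KB price list."""
    quoted = re.findall(r"\$\d+(?:\.\d{2})?", bot_reply)
    return [price for price in quoted if price not in KNOWN_PRICES]
```

Run every pricing-question reply through this, and the med spa's "$10 per unit" answer gets flagged immediately, since only "$14" is on the list.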
2. Broken Cancellation and Reschedule Flows
The second most common failure involves appointment management. A customer says "I need to cancel my appointment" and the bot either:
- Triggers a "stop bot" action because "cancel" is in the message
- Says "I can help with that" but never actually routes to the cancellation flow
- Asks for information it should already have (like the appointment date)
- Loops endlessly asking clarifying questions without resolving anything
Why it happens: GHL workflow triggers and conversation AI actions are configured separately, and they often conflict. A "stop bot" action might be set to trigger on keywords like "cancel" or "stop" — which catches legitimate cancellation requests. Or the appointment-cancel action exists but the bot's instructions don't clearly define when to invoke it.
Real example: One agency's bot had a "Stop Bot" action triggered by the word "cancel." Every single cancellation request killed the conversation immediately. The agency had been live for three months before a client complained. Dozens of customers had simply never gotten help.
The fix: Review your Stop Bot triggers. Narrow them to explicit opt-out language: "stop," "unsubscribe," "bye," "done." Then create a separate intent path for appointment cancellations and reschedules. Test both paths — try canceling an appointment and try ending the conversation — and verify they route correctly.
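The difference between the broken trigger and the fixed one comes down to substring matching versus whole-message matching. The sketch below shows both side by side — it's an illustration of the logic, not GHL's actual trigger engine:

```python
BROAD_STOP_WORDS = {"cancel", "stop"}                    # the misconfigured trigger
NARROW_STOP_PHRASES = {"stop", "unsubscribe", "bye", "done"}  # explicit opt-outs only

def broad_trigger(message: str) -> bool:
    # Fires if any keyword appears ANYWHERE in the message -- this is the bug:
    # "I need to cancel my appointment" contains "cancel" and kills the bot.
    text = message.lower()
    return any(word in text for word in BROAD_STOP_WORDS)

def narrow_trigger(message: str) -> bool:
    # Fires only when the entire message is an explicit opt-out phrase.
    return message.lower().strip(" .!") in NARROW_STOP_PHRASES
```

With the narrow version, "I need to cancel my appointment" no longer matches the stop trigger and can fall through to a dedicated cancellation intent instead.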
3. Data Leakage Between Sub-Accounts
This one makes agency owners lose sleep, and it should. When you manage multiple sub-accounts, knowledge base content or conversation context from one client can leak into another client's bot responses.
Why it happens: It usually stems from copy-paste setup. You clone a bot configuration from Client A to Client B, and some of Client A's KB documents, custom field references, or instruction text carries over. The bot then references Client A's services, team members, or policies when responding to Client B's customers.
Real example: A dental practice bot mentioned "our downtown spa location" because the bot configuration had been cloned from a med spa client. The agency didn't catch it because they only tested with generic questions during setup.
The fix: After setting up any new sub-account bot, run a full scenario suite that asks about the business by name, its competitors, services it does NOT offer, and team members from other clients. If the bot references anything from another account, you've got leakage. Scrub the KB, instructions, and any custom field mappings.
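That scenario suite can be partially automated with a leakage scan: collect terms that belong to your *other* sub-accounts (service names, locations, staff) and flag any bot reply that mentions one. The term list below is hypothetical — you'd build it per agency from your own client roster.

```python
# Terms that belong to OTHER sub-accounts and should never appear in this
# client's bot replies. All names here are made-up examples.
FOREIGN_TERMS = {"downtown spa", "botox", "dr. reyes"}

def leaked_terms(bot_reply: str) -> set[str]:
    """Return any foreign-account terms found in a bot reply."""
    text = bot_reply.lower()
    return {term for term in FOREIGN_TERMS if term in text}
```

The dental-practice example above would have been caught on day one: a reply mentioning "our downtown spa location" returns a non-empty set, which fails the audit.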
4. Channel-Specific Response Failures
A bot that works perfectly on Live Chat might fail completely on SMS or Instagram. This catches agencies off guard because they typically only test on one channel.
Why it happens: Different channels have different constraints. SMS has character limits and formatting restrictions. Instagram and Facebook Messenger have their own UI patterns. WhatsApp handles media differently. The bot's knowledge base content might include HTML links or formatting that renders fine in Live Chat but looks broken in SMS.
More critically, channel-specific configurations in GHL — like which channels the conversation agent is active on — can cause the bot to simply not respond on certain channels. If your bot is configured for SMS and Live Chat but a customer reaches out on Facebook Messenger, the message might go into a void.
Real example: An agency's bot included "Click here to book: [link]" responses that worked great on Live Chat and email. On SMS, customers received a raw URL with no context. On Instagram, the link wasn't clickable at all.
The fix: Test your bot on every channel it's supposed to support. Not just "does it respond?" but "does the response make sense on this channel?" Pay special attention to links, formatting, and response length. And verify the conversation agent's channel configuration in GHL — make sure it's active where you think it is.
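A rough per-channel lint can catch the most common formatting breakage before a customer does. The heuristics below (single-segment SMS length, HTML markup, "click here" phrasing) are assumptions about what tends to break, not an exhaustive list:

```python
import re

SMS_SOFT_LIMIT = 160  # length of a single SMS segment

def channel_issues(bot_reply: str, channel: str) -> list[str]:
    """Flag formatting that tends to break on a given channel (rough heuristics)."""
    issues = []
    if re.search(r"<a\s|</a>|<b>|<br", bot_reply):
        issues.append("HTML markup won't render outside Live Chat/email")
    if channel == "sms":
        if len(bot_reply) > SMS_SOFT_LIMIT:
            issues.append("reply exceeds one SMS segment")
        if "click here" in bot_reply.lower():
            issues.append("'click here' phrasing makes no sense next to a raw URL")
    return issues
```

Run every canned response through this once per supported channel; the "Click here to book" reply from the example above fails the SMS check instantly.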
5. Escalation Black Holes
The bot is supposed to hand off to a human when it can't handle a request. Instead, it either never escalates (trying to handle everything itself) or escalates prematurely (pushing to humans simple questions the bot could easily answer).
Why it happens: Escalation rules in GHL are typically defined through conversation actions — specifically the "Human Handover" action. But the triggers for this action are often too broad or too narrow. "I want to speak to someone" might not trigger it because the exact phrasing doesn't match. Or any expression of frustration triggers an immediate handover, including cases where the customer is just being emphatic.
The bigger problem is what happens after handover. If no human is monitoring the conversation, the customer sits in limbo. The bot has stopped responding, and nobody picks up. That's worse than a bad bot response.
Real example: A bot correctly handed off when a customer asked for a manager. But the handoff happened at 11 PM on a Saturday. No one was monitoring. The customer waited 14 hours for a response. By then they'd booked with a competitor.
The fix: Map out every path that should trigger escalation and test each one. Then — and this is the part people skip — test what happens after escalation. Does someone get notified? Is there a fallback if no human responds within a set time? Consider configuring an "Advanced Follow-Up" action that sends a notification or auto-response when a handoff occurs outside business hours.
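The after-hours logic is simple enough to spell out. This sketch shows the decision, not a GHL API call — the business hours and the fallback actions are placeholders you'd adapt to each client:

```python
from datetime import datetime, time

# Hypothetical business hours for one client.
BUSINESS_OPEN = time(9, 0)
BUSINESS_CLOSE = time(18, 0)

def handover_fallback(handover_at: datetime) -> str:
    """Decide what happens when the bot hands off to a human,
    so the customer is never left waiting in silence."""
    t = handover_at.time()
    if BUSINESS_OPEN <= t < BUSINESS_CLOSE:
        return "notify on-call staff"
    # After hours: don't leave the customer in limbo.
    return "send auto-reply with expected response time and log for morning follow-up"
```

The 11 PM Saturday handoff from the example above would take the second branch: the customer gets an immediate expectation-setting reply instead of 14 hours of silence.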
The Pattern Behind the Pattern
All five of these failures share a root cause: the bot was deployed without systematic testing. Someone built it, tried a few questions, and called it done.
The fix isn't to test harder once. It's to test regularly. Bots degrade over time as knowledge bases change, new services are added, promotions expire, and GHL updates roll out. A bot that passed QA in January might be hallucinating by March.
At BadBots.ai, we run these exact checks (and about 100 more) across every channel, for every sub-account, on a regular schedule. But even if you're doing it manually, start with these five. Open each client's bot right now and try to cancel an appointment, ask about pricing for a service that doesn't exist, and request a human at midnight. What you find will probably surprise you.