
Case Study: How Context-Aware Onboarding Cut Support Tickets 38%

[Figure: Support ticket reduction chart]

Onboarding flows are where most B2B SaaS support tickets originate. Users hit a configuration step they do not understand, do not complete setup, and submit a ticket asking for help with something the product should have guided them through. One of our customers - a workflow automation platform with a multi-step workspace setup flow - was generating 40% of their total support ticket volume from onboarding, specifically from users who abandoned setup mid-way and then needed manual intervention to resume. Here is how the integration worked, what failed during deployment, and what the 90-day data showed.

The Problem: Cold Restart Every Time

The customer's original onboarding flow was a linear wizard: seven steps, no context saved between sessions. If a user completed steps 1-4 and closed the browser, they returned to step 1 on next login. They had to re-enter their organization details, team size, and initial configuration preferences before reaching the integration step where they had stopped. Users who returned to the wizard more than once in a 48-hour period had a 71% abandonment rate, compared to 23% for single-session completions. They were not struggling with the actual configuration - they were frustrated by repeating information.

The ticket category breakdown confirmed this: 44% of onboarding tickets contained the phrase "I already entered this" or "I set this up before." Users were submitting support tickets not because they needed help with configuration but because they wanted human intervention to skip the repeated steps they found intolerable.

Integration Architecture

The integration preserved the existing wizard UI and backend configuration logic. Equmenopolis's API was inserted as a context layer between the UI and the backend: each wizard step completion wrote the completed values to an Equmenopolis context object keyed to the user's account ID. On wizard load, the application checked whether an active context existed for the user. If yes, the wizard pre-populated completed fields from context and advanced to the first incomplete step. If no active context, the wizard started fresh.
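The load-time check described above can be sketched roughly as follows. This is an illustrative Python sketch, not the vendor's actual API: the step names, the context shape, and the plain dict standing in for the Equmenopolis context object are all assumptions.

```python
# The seven wizard steps, in order. Step names are illustrative.
WIZARD_STEPS = ["org_details", "team_size", "preferences", "integrations",
                "notifications", "workspace", "review"]

def load_wizard(context):
    """Decide where the wizard starts, given a stored context (or None).

    `context` is a dict with "completed_steps" (list of step names) and
    "values" (field values written at each step completion), standing in
    for the real context object keyed to the user's account ID.
    """
    if context is None:
        # No active context: start fresh at step 1 with empty fields.
        return {"start_step": WIZARD_STEPS[0], "prefill": {}}
    completed = context.get("completed_steps", [])
    # Pre-populate completed fields and advance to the first incomplete step.
    for step in WIZARD_STEPS:
        if step not in completed:
            return {"start_step": step, "prefill": context.get("values", {})}
    # Every step done: land on the final review step.
    return {"start_step": WIZARD_STEPS[-1], "prefill": context.get("values", {})}
```

The key property is that a returning user never re-enters data: everything already written to context comes back as prefill, and the entry point is the first step with no completion record.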

The context object stored 23 fields across the seven wizard steps: company metadata, team configuration, integration credentials (stored as credential tokens, not raw values), notification preferences, and initial workspace settings. The context was configured with 30-day dormant retention - a user who started setup and returned a month later would still have their progress saved.

The dialogue component handled the in-wizard assistant: a sidebar chat that responded to questions about the current step. The assistant had access to the full context object, so it could answer "I see you've connected your GitHub integration in step 3. The webhook URL you'll need for the next step is available in your GitHub settings under Developer Tools." This level of contextual specificity was not possible with a generic FAQ bot that had no access to the user's current wizard state.

What Failed During Deployment

Two issues emerged in the first two weeks that were not anticipated during planning. First: credential token storage. The original design stored integration credentials in the context object so they would be available if the user resumed setup. Security review correctly flagged this: credential tokens do not belong in a session context object; they belong in the application's credential store. We modified the architecture to store a credential reference (an internal ID pointing to the credential) rather than the credential value itself. The context object holds the reference; the application resolves it when needed. This was a 2-day change, but it was not in scope for the original integration.
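The reference-not-value pattern looks roughly like this. A sketch only: the `cred_` prefix, the in-memory dict standing in for the application's credential store, and the function names are all assumptions.

```python
import uuid

# Stand-in for the application's real credential store (a vault, KMS, etc.).
CREDENTIAL_STORE: dict = {}

def store_credential(token: str) -> str:
    """Put the raw token in the credential store; return an opaque reference."""
    ref = "cred_" + uuid.uuid4().hex[:8]
    CREDENTIAL_STORE[ref] = token
    return ref

def context_entry_for(token: str) -> dict:
    """What gets written to the context object: the reference, never the token."""
    return {"credential_ref": store_credential(token)}

def resolve_credential(ref: str) -> str:
    """The application resolves the reference only when it needs the value."""
    return CREDENTIAL_STORE[ref]
```

The context object can now be logged, replicated, or retained for 30 days without ever containing a secret; compromise of the context store yields only opaque IDs.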

Second: context collision when users shared account access. The platform allowed multiple users at an organization to have admin access. When two admins started the wizard simultaneously - which happened more often than expected, particularly during onboarding for larger teams - their context objects were written to the same account ID and overwrote each other. The fix was a per-user context scope: context was keyed to account ID plus user ID, not account ID alone. Step completion state was shared at the account level (so either admin's completions counted for the shared wizard), but user-specific preferences were stored per-user. This required re-architecting the context schema and took 4 days.
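The resulting split scope - shared completions, per-user preferences - can be sketched with a small in-memory store. The class and method names are hypothetical; the point is the keying: completions keyed by account ID, preferences keyed by (account ID, user ID).

```python
class WizardContextStore:
    """Sketch of the post-fix schema: shared step state, per-user preferences."""

    def __init__(self):
        self._completions = {}  # account_id -> set of completed step names (shared)
        self._prefs = {}        # (account_id, user_id) -> per-user preferences

    def complete_step(self, account_id, user_id, step, prefs=None):
        # Either admin's completion counts toward the shared wizard state.
        self._completions.setdefault(account_id, set()).add(step)
        # But preferences never collide: they are scoped to the individual user.
        if prefs:
            self._prefs.setdefault((account_id, user_id), {}).update(prefs)

    def completed_steps(self, account_id):
        return self._completions.get(account_id, set())

    def preferences(self, account_id, user_id):
        return self._prefs.get((account_id, user_id), {})
```

With this keying, two admins writing concurrently can no longer overwrite each other's context: their only shared state is the step-completion set, where concurrent additions merge rather than conflict.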

The 90-Day Results

At 90 days post-deployment, the ticket volume from onboarding dropped 38% compared to the 90-day pre-deployment baseline. The breakdown: tickets attributable to repeated information entry dropped 79%. Tickets from configuration questions that the in-wizard assistant could answer dropped 41%. Tickets from step completion confusion (users not knowing what a step required) were largely unchanged - the assistant helped some users, but not all configuration concepts were captured in the assistant's knowledge base, which was a content gap rather than a context failure.

Wizard completion rate increased from 61% to 78%. Users who started setup were 17 percentage points more likely to complete it. The median time-to-completion for users who completed the wizard dropped from 22 minutes to 14 minutes - not because the wizard was shorter, but because returning users did not repeat completed steps. Single-session completions were largely unchanged at 8 minutes median; the improvement was entirely driven by returning users.

What the Dialogue Analytics Showed

The in-wizard assistant handled 3,100 queries in the first 90 days. Fallback rate (questions the assistant could not answer and deferred to support) was 18%, concentrated on two topic areas: billing and custom integration edge cases. Both topics were subsequently added to the assistant's knowledge base, reducing fallback rate to 9% in month 4. As we discussed in our article on dialog analytics metrics, tracking per-topic fallback rates is the fastest way to identify knowledge gaps in a deployed assistant.
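Per-topic fallback tracking is simple to compute once queries are tagged with a topic and an answered/deferred flag. A minimal sketch (the input shape is an assumption; real analytics pipelines would pull this from query logs):

```python
from collections import Counter

def fallback_rates(queries):
    """Compute per-topic fallback rate.

    `queries` is an iterable of (topic, answered) pairs, where answered=False
    means the assistant deferred to support. Returns {topic: fallback_rate}.
    """
    totals, fallbacks = Counter(), Counter()
    for topic, answered in queries:
        totals[topic] += 1
        if not answered:
            fallbacks[topic] += 1
    return {topic: fallbacks[topic] / totals[topic] for topic in totals}
```

Sorting the result descending by rate surfaces the knowledge gaps worth authoring content for first, which is exactly how the billing and custom-integration gaps were identified here.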

Context hit rate for the onboarding assistant was 96.2% - higher than our overall production average - because onboarding conversations are highly structured, entities established in prior steps rarely need coreference resolution beyond simple slot propagation, and the user is unlikely to introduce ambiguous references. This is a case where the structured nature of the domain simplifies context tracking significantly.

Lessons

The credential storage issue and the context collision problem were both detectable at design time with a proper threat model. The integration was designed by engineers thinking about functionality, not security or concurrency. A security review earlier in the design process - before code was written - would have surfaced both issues before they became production problems. The 6 days spent fixing them post-deployment would have been 2 days if caught in design review.

The knowledge gap in the in-wizard assistant was a content problem, not a technology problem. The integration was technically successful on day 1; the assistant's 18% fallback rate was a consequence of not having documented knowledge of billing and custom integrations before launch. Production monitoring surfaced it quickly; addressing it was a content authoring task, not a development task.

Conclusion

Context-aware onboarding reduces support volume by eliminating repeated information entry and enabling contextually specific in-flow assistance. The technical integration is straightforward; the tricky parts are credential storage and multi-user context scope, both of which are solvable with proper design review. The outcome - 38% ticket reduction, 17-point increase in wizard completion - was significant enough that the customer extended the integration to their account configuration flow and plan-upgrade flow in Q4 2025.