UX Research · Evaluative Research
Customer support agents were skipping the churn nudge.
That turned out to be the most important finding.
A U.S. telecom provider built an AI-powered feature into the CRM its customer support agents work in — a nudge that flags customers at risk of leaving and prompts agents to make a retention offer. Low adoption triggered this research. But the finding wasn't that customer support agents didn't care. It was that the nudge was surfacing at the wrong moment, with information that made no sense in the middle of a live call.
✦
Note on images: All visuals in this case study are AI-generated representations created using Google Gemini. They are used to illustrate research context and findings — not to reproduce real company interfaces or identifiable individuals. The ✦ watermark is retained intentionally.
01 — Industry Context
Churn is the defining problem of U.S. telecom
In the United States, telecom is one of the most contested service industries. Customers switch providers regularly — drawn by promotional pricing, better coverage, or dissatisfaction with how they were treated on a call. The cost of that churn lands squarely on the company.
20–50%
Annual churn rate
Industry benchmark, U.S. telecom sector
5–7×
Cheaper to retain than acquire
Harvard Business Review / Bain & Company
~50%
Of contacts still handled by a human customer support agent
J.D. Power 2024 Customer Service Study
$20–30
Average cost per customer support agent call
Industry operational benchmark
Despite heavy investment in self-service and AI chat, roughly half of all telecom customer interactions in the U.S. still involve a human customer support agent. That means every call is both an operational cost and a retention opportunity — and the customer support agent is sitting at the intersection of both.
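To make those benchmark figures concrete, here is a back-of-the-envelope sketch. The per-call cost comes from the figures above; the acquisition cost and retention-offer value are illustrative assumptions, not company data.

```python
# Illustrative retention economics using the benchmark figures above.
# The acquisition cost and offer value are hypothetical assumptions.

cost_per_call = 25        # midpoint of the $20-30 call-cost benchmark
retention_offer = 50      # hypothetical one-time incentive for a saved customer
acquisition_cost = 375    # hypothetical cost of winning a replacement customer

cost_to_retain = cost_per_call + retention_offer   # $75 per saved customer
ratio = acquisition_cost / cost_to_retain          # 5.0x, within the 5-7x benchmark

print(f"Retain: ${cost_to_retain} vs acquire: ${acquisition_cost} ({ratio:.0f}x cheaper)")
```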
🎧
Who are customer support agents — and why do they matter?
Customer support agents (sometimes called care agents or in-store associates, depending on the channel) are the frontline humans who handle billing disputes, technical issues, upgrade requests, and account changes. In U.S. telecom, they are often the last interaction a customer has before deciding to stay or leave. A well-timed retention offer from a trained customer support agent can reverse a cancellation decision. A clunky or irrelevant tool in the middle of that conversation can make everything worse — for the agent and the customer.
A typical customer support agent environment — dual monitors, headset, live dashboards, active call queue. ✦ AI-generated via Gemini.
02 — The Feature
A churn-risk nudge built directly into the customer support agent CRM
The company built an AI-powered churn prediction model and surfaced it as a nudge inside the customer support agent's CRM. When a customer's account was flagged as high-risk, a visual prompt appeared on screen during the call — suggesting the agent acknowledge the risk and offer a retention incentive.
On paper, the logic was sound: customer support agents are already on the call; the customer is already engaged; this is the right moment to intervene. In practice, customer support agents were not using the feature consistently. Some ignored it entirely. Some acknowledged it only after the customer had already indicated they wanted to cancel. Some couldn't explain what it was asking them to do.
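For context on the mechanics: a minimal sketch, assuming a probability-based model, of how a churn-risk score might map to the tiered flag agents saw (the High / Medium / Low tiers shown later in this case study). The thresholds and function are hypothetical; the real model was outside the scope of this research.

```python
# Hypothetical sketch: mapping a churn model's probability to the tiered
# flag agents saw on screen. Thresholds are illustrative assumptions; the
# real model and cut-offs were not part of this study.

def churn_flag(risk_score: float) -> str | None:
    """Map a model probability (0-1) to an on-screen tier, or None (no nudge)."""
    if risk_score >= 0.7:
        return "High"
    if risk_score >= 0.4:
        return "Medium"
    if risk_score >= 0.2:
        return "Low"
    return None  # below threshold: no nudge shown

print(churn_flag(0.82))  # "High" -> nudge appears during the live call
```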
The brief that started this research
"We built a tool to help customer support agents retain customers. Adoption is low. We need to understand why customer support agents aren't using it — and what would need to change."
Customer support agent mid-call — the churn nudge visible but easy to miss in a busy dual-screen setup. ✦ AI-generated via Gemini.
A representation of how the churn flag appeared — surfaced alongside call history, without clear action guidance. ✦ AI-generated via Gemini.
03 — Research Approach
Getting into the real call environment
The brief asked why adoption was low. I reframed the starting question before designing the study: rather than measuring what customer support agents did or didn't do with the feature, I wanted to understand what was happening in the moment the nudge appeared — cognitively, conversationally, and contextually.
12
Customer support agents
In-person, 45 min each
6+
Months tenure
All had direct experience with the feature
3
Research methods
Semi-structured interviews, think-aloud, observation
2
Months fieldwork
May – June 2025, on-site access
Sessions were conducted in-person, either at call-centre desks or in quiet breakout rooms at the same site. I used a think-aloud protocol during scenario walkthroughs — customer support agents were shown a recreated CRM screen with the nudge active and asked to talk through what they would do. This surfaced decision-making processes that would never appear in usage logs.
Why in-person mattered here: Customer support agents handle sensitive customer calls in a high-pressure, time-bound environment. Remote sessions would have removed exactly the contextual cues — the screen setup, the workflow rhythm, the noise level — that shaped how they related to the nudge. Physical access was essential to understanding the real experience.
On-site observation — the researcher alongside a customer support agent reviewing the CRM screen. Watching real reactions in real conditions told us more than retrospective recall ever could. ✦ AI-generated via Gemini.
04 — My Research Process
What I did — and why I made those choices
Every research study involves a series of decisions that shape what you find. Here I want to be transparent about those decisions: what I chose, what I didn't, and the reasoning behind each one.
Decision 01
Reframing the research brief
The original question — "why aren't customer support agents using the nudge?" — assumed the problem was with agent behaviour. I shifted it to: "what happens in the moment the nudge appears?"
This reframe moved the investigation from judging an outcome to understanding an experience. That one shift opened up everything that followed.
Alternative not chosen
Accepting the original framing would have led to a study measuring training gaps and adoption barriers. The output would have been change management recommendations — not insights about tool design. It would have confidently answered the wrong question.
Decision 02
Think-aloud over retrospective interview
Split-second decisions made under call pressure don't survive memory accurately. If I had asked "what do you do when the nudge appears?", I would have gotten reconstructed answers — what agents believed they did, not what they actually did.
Think-aloud walkthroughs with a live CRM simulation surfaced real-time reasoning, not the tidied-up version people tell themselves afterwards.
Alternative not chosen
Diary studies or end-of-shift retrospective interviews would have been easier to run and less disruptive for agents. But memory for fast, pressure-driven decisions reconstructs rather than recalls. The reasoning agents gave us post-hoc would have reflected how they made sense of their behaviour, not why they made it in the moment.
Decision 03
In-person, not remote
The physical environment was data. The noise level, the dual-screen layout, the queue visible on the wall monitor, the rhythm of call transitions — all of this shaped how customer support agents related to the nudge.
A remote session would have removed the very context that made the problem legible. I needed to be in the room.
Alternative not chosen
Remote sessions via video call would have been faster to schedule and cheaper to coordinate across sites. But losing the physical environment — the dual screens, the ambient noise, the queue pressure — would have stripped out the contextual layer that made the timing problem visible. The environment wasn't background noise; it was part of the finding.
Decision 04
6+ months tenure as a recruitment filter
Newer customer support agents hadn't yet formed stable habits around the nudge — their responses would have reflected confusion, not judgment.
I needed participants who had tried the feature, decided something about it, and settled into a pattern. Those decisions — whether to use it, ignore it, or work around it — were the data.
Alternative not chosen
Including newer agents (1–3 months tenure) would have broadened the participant pool and made recruitment faster. But first impressions of a feature are different from settled behaviour. I wasn't studying what agents thought about the nudge when they first saw it — I was studying what they had decided to do with it over time. That required experience.
Decision 05
Semi-structured, not scripted
Some of the most valuable findings came from threads I hadn't planned for — including customer support agents describing their own informal churn signals, which became Finding 04.
A rigid script would have cut those off entirely. Semi-structured interviews gave me a framework to cover the essential ground while staying genuinely open to what I hadn't anticipated.
Alternative not chosen
A fully structured script would have made analysis cleaner and cross-session comparisons easier. But it would have foreclosed the unplanned threads that surfaced agents' own mental models of churn — the most consequential finding in the study. Comparability is useful, but not at the cost of the unexpected.
05 — Team & Collaboration
Who I worked with — and what that made possible
This research was not a solo effort. A small, focused team made it possible to go deep in the field without losing the organisational thread. Here's how responsibilities were structured and what the operational groundwork looked like.
👩‍💼
Research Manager
Strategic & Stakeholder Layer
The research manager framed the overall scope, managed stakeholder expectations across product and operations, and served as the bridge between the research team and company leadership. This collaboration meant I could focus entirely on fieldwork and participant relationships without getting pulled into internal alignment conversations mid-study.
📝
Transcriber / Note-Taker
In-Session Support
A dedicated note-taker joined every session and was responsible for capturing full transcripts in real time. Having someone focused entirely on documentation meant I could give my full attention to listening, probing, and building rapport with each participant. The transcripts became the primary data source for synthesis and pattern-finding.
Beyond the core team, the operational work required to run this research in a live call-centre environment was significant in its own right:
📅
Scheduling around a live call queue. Coordinating 45-minute windows with 12 customer support agents across an active call centre required weeks of back-and-forth with team leads and shift schedulers. Multiple sessions were moved at short notice. I built buffer into every week and treated rescheduling as a normal part of the process — not a disruption to it.
✅
Securing approval from company leadership. Getting on-site access and research clearance required sign-off from company heads. This involved explaining the purpose of the study, addressing concerns about what would be done with findings, and ensuring participants understood their contributions were confidential. The approval process took longer than anticipated but was essential for earning participant trust once we were in the room.
🎯
Recruiting a varied participant set. I recruited across tenure levels (6 months, 1 year, 3+ years), channel types (phone and chat customer support agents where access allowed), and shift patterns. Ensuring variety meant findings weren't skewed toward one type of agent experience or one team's culture — which was especially important given how differently agents in different roles had responded to the nudge.
06 — Key Findings
Customer support agents weren't indifferent. They were making rational decisions to skip it.
Across all 12 sessions, a consistent pattern emerged: customer support agents were not apathetic toward retention. Most could clearly describe the value of keeping a customer. What they couldn't do was act on the nudge — because in the moment it appeared, acting on it would have actively made their job harder.
⏱️
Finding 01 — The nudge arrived at the wrong moment
The churn flag appeared approximately 1–1.5 minutes into the call. By that point, customer support agents had already established the reason for contact and were mid-diagnosis. Switching to a retention conversation required a complete context reset — one that would disrupt the call they were already managing.
"By the time it pops up, I'm already three steps in. I can't just stop and go — 'actually, do you want to stay with us?'"
🚩
Finding 02 — Wrong customers were being flagged
The AI model was surfacing risk flags for customers calling about billing errors or routine support issues — contacts that weren't genuine churn signals. Customer support agents noticed this quickly. After a few irrelevant flags, they stopped trusting the system and learned to ignore the nudge by default.
"It shows up when someone's calling about Wi-Fi. That person isn't leaving — they're just having a bad day."
📋
Finding 03 — The information shown wasn't useful in real time
The nudge displayed a churn risk score and account history. Customer support agents needed different information to act confidently: what offer to make, what the customer's current contract looked like, whether they'd received a retention offer before. None of that was surfaced.
"It tells me they might leave. But it doesn't tell me what I'm supposed to do about it or what I'm allowed to offer."
🧠
Finding 04 — Customer support agents had built their own mental model
Most experienced customer support agents had developed their own churn signals — tone of voice, repeated contacts, mention of competitor names. They trusted their own judgment over the tool. The nudge didn't align with or augment what they already knew; it competed with it.
"I already know when someone's about to leave. I can feel it in the call. The flag just gets in the way."
The timing problem visualised — the nudge appearing at ~1:22 into the call, when a customer support agent is already mid-diagnosis and cannot switch context. ✦ AI-generated via Gemini.
Core Insight
The nudge wasn't being ignored.
It was being disqualified — in real time.
Customer support agents were making fast, context-sensitive decisions to skip the feature — not because they didn't care about retention, but because the tool demanded action at the exact moment it was least possible.
The churn risk indicator as it appeared in the CRM — a tiered flag (High / Medium / Low) with no actionable guidance attached. ✦ AI-generated via Gemini.
A representation of what the churn flag surfaced — risk score and call history, with no clarity on what the customer support agent was expected to do next. ✦ AI-generated via Gemini.
07 — Design Opportunity
Reframing the question before recommending a solution
The original brief was framed around a behaviour problem: customer support agents weren't using the feature. The research showed this framing was incomplete. Before any redesign could be meaningful, the team needed to agree on a different question.
HMW 01
How might we surface the churn nudge at a moment when customer support agents can actually act on it, rather than mid-diagnosis?
HMW 02
How might we give customer support agents the specific information they need to make a retention offer — not just a risk score — so the nudge feels actionable?
HMW 03
How might we align the AI's churn signals with what customer support agents already know about at-risk customers, so the tool augments rather than competes with their judgment?
What the team agreed to: Rather than immediately redesigning the nudge, the team decided to pause design investment and commission a follow-up study in six months. The research had revealed enough structural misalignment that iterating on the existing design risked building on a flawed foundation. This decision itself was a research outcome.
08 — Research Challenges
What made this study difficult
Every research engagement comes with friction. This one had three specific challenges worth naming honestly.
🤝
Building trust with customer support agents quickly
Customer support agents are evaluated on call metrics and performance dashboards. Being observed — or asked why you're not using a company-mandated tool — can feel like an audit. I spent the first part of every session making clear that this was about the tool, not about them. Most opened up once they understood I wasn't there to report on their behaviour.
⚖️
Internal disagreement about the research framing
Some stakeholders came into the study believing the problem was customer support agent training — not tool design. There was early pressure to structure the research around adoption improvement rather than root-cause understanding. Holding the original framing required clear communication about what evaluative research can and cannot answer — and the confidence to push back constructively.
⏰
Securing time with customer support agents in a live environment
Call centres operate on strict scheduling. Getting 45 minutes with a customer support agent during a working day — without a call queue waiting — required weeks of coordination and flexibility. Several sessions were rescheduled. I learned to treat scheduling itself as part of understanding the operational context rather than an obstacle to getting to the 'real' research.
09 — Impact
What this research prevented — and what it opened up
The most important outcome of this research wasn't a design recommendation. It was a decision not to proceed — to pause investment in a feature iteration that would have built on unresolved structural problems.
1
Investment decision redirected
Further development of the existing nudge design was paused pending structural re-evaluation
3
HMW questions agreed by team
Product, engineering, and operations aligned on what needed to be answered before redesign
6 mo
Follow-up study scheduled
A second evaluative study planned for late 2025 to test proposed timing and content changes
🔄
Current status: This case study reflects research completed in June 2025. Design changes are currently paused. A follow-up study is scheduled for late 2025. I will update this page when new findings are available.
Beyond the immediate outcome, this research shifted how the team talked about the feature. The original question — "why aren't customer support agents using it?" — implied the problem was with agents. The new question — "what would make this actually useful in a live call?" — put the design back on the table. That reframe came directly from what customer support agents told us.
10 — Reflection
What I'd carry forward
🔍
Low adoption is a research brief, not a verdict
When a product team says "customer support agents aren't using the feature," it's easy to frame the research around changing agent behaviour. This study reminded me that low adoption is almost always a signal worth investigating before any solution is considered.
🧭
Context-switching has real cognitive cost
Designing for customer support agents in live conversations is different from designing for most digital products. Interruption has real cost — not just in usability terms, but in the quality of the human interaction happening simultaneously. I want to bring this lens to more of my work.
📊
Usage data alone wouldn't have found this
Analytics would have shown low feature engagement. It would not have shown the moment-by-moment decision customer support agents were making — or why. The qualitative layer was essential here, not as a supplement to quantitative analysis but as the primary source of insight.
💬
Customer support agents already know things the model doesn't
Experienced customer support agents had built their own churn signals through thousands of calls. That knowledge was invisible to the AI system. Future work should find ways to surface and incorporate agent expertise — not replace it with a risk score.