The Real-World series continues. In these posts I’m sharing how the 2012 Temkin Customer Experience Award finalists actually go about building their customer experiences.
The first two posts focused on how the respondents create Customer Intelligence – the first stage of the Heart of the Customer’s Customer Experience Model. We started with Bringing Your Customers to Life, and continued with Identifying the Metrics that Matter.
Now we move into the Customer-Based Capabilities stage of the model, specifically Isolating What Really Matters. This stage goes beyond the relationship metrics that matter to find the drivers that actually impact your customers. Rather than telling your managers to focus on improving your Net Promoter or Satisfaction scores, you discover which factors actually drive those scores. You can see more detail here.
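To make driver analysis concrete, here is a minimal, hypothetical sketch of what it often looks like in practice: regress a relationship score (such as likelihood to recommend) on candidate drivers and see which ones carry real weight. The data and driver names below are invented for illustration; none of this is any finalist's actual model.

```python
# Hypothetical key-driver analysis: regress a likelihood-to-recommend
# score on candidate drivers. All data is simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Candidate drivers, each rated 1-5 by surveyed customers (simulated)
time_to_resolve = rng.integers(1, 6, n)
update_frequency = rng.integers(1, 6, n)
ease_of_contact = rng.integers(1, 6, n)

# Simulated likelihood-to-recommend, driven mostly by time_to_resolve
ltr = 2 + 1.5 * time_to_resolve + 0.3 * update_frequency + rng.normal(0, 1, n)

# Ordinary least squares fit of score on the candidate drivers
X = np.column_stack([np.ones(n), time_to_resolve, update_frequency, ease_of_contact])
coef, *_ = np.linalg.lstsq(X, ltr, rcond=None)

for name, c in zip(["intercept", "time_to_resolve", "update_frequency", "ease_of_contact"], coef):
    print(f"{name:18s} {c:+.2f}")
```

Run on this simulated data, the fit recovers a large weight on time-to-resolve and a near-zero weight on ease-of-contact, which is exactly the kind of separation that tells managers where to focus.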
EMC appears to use NPS as their relationship score and satisfaction for measuring transactions. They isolated three drivers as critical to their relationship.
By focusing on these drivers, they saw both their NPS and satisfaction scores increase, although they redacted the magnitude of the improvements.
EMC took a fascinating approach, using the Van Westendorp statistical methodology to uncover their customers’ expectations around frequency of communication and time to resolve.
A full discussion of the methodology is beyond the scope of this blog, although you can find more here. The method is typically used to understand expected pricing; I have never seen it used in this context before, but it’s an innovation worth considering.
Since they identified Time to Resolve and Timely Progress Updates as having strong statistical links to satisfaction, EMC needed to understand their customers’ expectations in these areas. They used this methodology to understand how frequently customers want updates, and how long they expect it to take to resolve their problems. This enables EMC to target initiatives that meet those expectations.
EMC discovered that more updates actually decrease satisfaction. So by understanding how frequently customers expect contact, they can meet that expectation without overinvesting. Let me quote from EMC, as they do a great job of explaining their approach:
“EMC did an extensive study to determine the impact of multiple touches and case transitions on both TTR and overall CSAT. This study has indicated that by decreasing the amount of these events it drives customer satisfaction up. There are multiple initiatives being piloted around limiting the case transitions and multiple touches to improve the customer experience.
“Recognizing we need to establish targets for execution based on customer expectations, and not just on CSS’ operational ability to execute, we added customer experience focus questions around the customer’s expectations during a service event. For example, a question was added that asks the customer what timeframe between updates they would find acceptable. We then used a Van Westendorp Methodology to analyze the customer’s responses. We were able to determine the optimum timeframe for progress updates as it relates to the customers expectation. Knowing what the customer expected allowed us to add or improve processes and set expectation. One process developed to help in this specific example was the Next Action Due Technology and Process; this is a reminder that has been developed as part of the CRM that will allow for the tracking of customer updates.”
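The classic Van Westendorp analysis finds an optimal price where cumulative "too cheap" and "too expensive" curves cross. A rough sketch of how that idea could be adapted from pricing to update timing follows; the question wording, data, and thresholds are invented for illustration, and this is not EMC's actual implementation.

```python
# Van Westendorp-style analysis applied to update timing (illustrative).
# Assume each surveyed customer answered two hypothetical questions: the
# longest gap between updates they'd still find acceptable, and the
# shortest gap below which updates feel like noise. The "optimum" is the
# interval where the share finding a gap too long crosses the share
# finding it too short. All data is simulated.
import numpy as np

rng = np.random.default_rng(1)
n = 300
too_long_threshold = rng.normal(48, 12, n)   # hours: beyond this, too infrequent
too_short_threshold = rng.normal(12, 6, n)   # hours: below this, too frequent

gaps = np.arange(1, 97)  # candidate update intervals, in hours
pct_too_long = [(too_long_threshold < g).mean() for g in gaps]
pct_too_short = [(too_short_threshold > g).mean() for g in gaps]

# Crossing point: first gap where "too long" overtakes "too short"
optimum = next(g for g, a, b in zip(gaps, pct_too_long, pct_too_short) if a >= b)
print(f"Optimal update interval: roughly every {optimum} hours")
```

The crossing point balances the two failure modes EMC describes: updating so rarely that customers feel abandoned, and updating so often that the touches themselves drag satisfaction down.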
EMC provided the best example I have ever seen of analyzing drivers and expectations, and of using that analysis to drive capability development.
Fidelity did not list their specific drivers. However, they are the only company that reported analyzing competitors’ performance on those drivers.
JetBlue has established a specific linkage between on-time departure and NPS. By understanding this linkage, they were able to better allocate staff at the airport.
Specifically, once a delay exceeds 15 minutes, customer disappointment sets in. To help ease their customers’ anxiety, they established a protocol of notifying customers every fifteen minutes during a delay. JetBlue can’t stop all late departures, but they do make certain that they communicate clearly while a delay occurs, decreasing dissatisfaction.
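A hypothetical sketch of this kind of linkage analysis: bucket survey responses by delay length and compute NPS per bucket to see where disappointment sets in. All numbers below are invented; this is not JetBlue's data.

```python
# Illustrative delay-to-NPS linkage analysis (all data invented).
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical likelihood-to-recommend responses grouped by delay bucket
responses_by_delay = {
    "on time":   [9, 10, 9, 8, 10, 9, 7, 10],
    "1-15 min":  [9, 8, 10, 9, 7, 9, 8, 10],
    "16-30 min": [6, 8, 7, 5, 9, 6, 7, 4],
    "31+ min":   [4, 6, 3, 7, 5, 2, 6, 3],
}

for bucket, scores in responses_by_delay.items():
    print(f"{bucket:>9s}: NPS {nps(scores):+.0f}")
```

In a table like this, a sharp NPS drop between the 1-15 minute and 16-30 minute buckets would justify exactly the kind of 15-minute communication threshold JetBlue adopted.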
They also host annual meetings with their executives, operations, and customer service teams to focus on improvements based on the statistical drivers of NPS. They create goals, then communicate the key metrics on a weekly basis – not only to executives, but also to airport staff.
What caught my attention from Oracle is that they do the statistical work to separate causation from correlation. While they did not give any detail, they apparently link drivers not only to their NPS scores, but also to financial results.
Safelite identified the linkage between service issues reported after the repair and a low customer experience score. While this is not surprising, their actions show how seriously they take it. When they see a high NPS score paired with a negative comment or a reference to a service issue, their Executive Services Department reaches out to the customer to fix the problem. They then send another survey to make certain the problem is resolved.
We have some nice teasers on how companies Isolate What Really Matters, although most did not go into a ton of detail. But they did discuss the actions they took, which is what truly matters.
I am particularly interested in EMC’s approach to determining the right amount of time before following up with customers. This is a great practice that we all should consider.