Measuring ROI

Tragically, so many customer-centered initiatives go unmeasured.

Intuitively, embedding user research and UX earlier in an agile product life cycle just makes sense. Users' needs are identified and clarified earlier on. This translates into more accurate user stories, weighted by their broader impact on the user experience. The result: more effective roadmap planning and more careful stewardship of resources.

In recent research with one client team, findings indicated that although one portion of the prototype frustrated their customers, in the context of their everyday work the customers didn't really need that feature. That insight provided clarity for roadmap planning, shaving off precious development time.

While many product managers, development teams, and design teams who regularly practice customer-driven innovation (CDI) know this process yields better experiences and furnishes rich insights for future innovation, quantifying and measuring it is difficult. According to a recent article, "Design for Disruption," only 9% of business process management (BPM) programs have realized significant success with their customer experience activities. Of the companies surveyed, 56% had not measured their customer-centered initiatives.1 So much effort, but tragically so little to show for it.

Knowing Your Context

Before we look at how to measure, it's vital to know your context. When it comes to customer-driven innovation, there are essentially three kinds of companies, those that:

  1. Embrace and assume CDI as core to their BPM. In such companies, innovation has been baked into the very culture. Here the need to prove the worth of CDI is less critical, and measuring improvements to the customer experience and process is assumed within the CDI process itself.
  2. Incubate CDI, in the form of UX and user research teams engaged in specific projects or within specific units, with the goal of testing and validating this approach before scaling it. In this context, measuring the results of CDI is imperative in order to build bridges from one VP's project or unit to other parts of the company. Equally vital is documenting improvements to the customer experience, both to keep shaping the roadmap and to tell the story of CDI.
  3. Maintain a BPM focused primarily on older models of IT processes, where stakeholders (in the form of VPs, product managers, developers, and UX) drive roadmap planning. Here the goal is to lay the groundwork for incubating CDI.

Reflect: Which best describes your company? How does customer-driven innovation relate to the BPM? Is it baked in? Do teams have to make a case for implementing it? Knowing this context will help delineate the data needed for successful measurement. For example, if you're measuring the benefits of baking in customer-driven design against teams that do not, you will need to identify comparable teams and projects. Likewise, if your context assumes customer-driven design, you need to know past results for continuity. Your context informs the kind of data you measure.

Selecting the Project

Almost as important as measuring project outcomes is selecting the projects to measure. Projects that talk like customer-driven innovation but are in the end hijacked by stakeholders are not the best plumb line. Nor are projects crippled by third-party architecture issues, though CDI can be leveraged to explore the risks of entering into third-party contracts. The optimal projects are those with:

  • Leadership dedicated to putting the user front and center.
  • Limited scope and budget; this provides focus for the outcomes.
  • Scalability worked into the process.

These ingredients set the project up for success, where necessary changes will be heard and acted on.

Measuring Value

Measuring the value of user research and UX is where the rubber meets the road. Softer indicators such as NPS scores can be more intangible, holding less weight when making business cases to upper management. Results that speak more plainly to past and future financial value turn heads: results that positively affect opportunity costs. That said, such measurements should not be limited to quantitative results; qualitative insights consistently lay a solid foundation for innovation and cannot be jettisoned as unsophisticated.

For measuring results, consider the following parameters; a simple worked sketch follows the list:

  • Time saved: Did time on task diminish? If so, how? How was time saved executing against "user-defined" user stories vs. other "product management" or "UX + design" teams' user stories? Where research focused roadmap planning, how did that save time in future planning compared to teams that did not engage user research to plan roadmaps?
  • Usage increase: After changes were implemented, did usage increase? Did a greater number of visitors use the site? Did exit rates decrease? What was the impact on customer service?
  • Spending gain/loss: For e-commerce sites, was there an increase in average order value (AOV) or greater site conversion? An increase in orders due to pathing through a specific experience? How did clarity in user stories bloat or trim the budget?
  • Customers acquired: If a process was shortened, and this time savings was passed on to the customer, how did that play into retaining current customers and gaining new ones?
  • Ease of use: To what degree has the interface become easier to use? What contributed to this? How has this impacted the bottom line?
  • Alignment to external processes: How has the interface better aligned to external tasks? How have the user flows aligned to users' mental models of engagement? Where has it fallen short? Are these delights and frustrations sufficiently reflected in the product roadmap? Why or why not?
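To make these parameters concrete, here is a minimal sketch (in TypeScript) of how a team might roll a few of them up into a single before-and-after comparison. Every figure and field name below is a hypothetical placeholder rather than data from any study cited here; substitute your own time-tracking and analytics numbers.

    // Minimal ROI sketch: roll time saved and conversion lift into dollar terms.
    // All inputs are hypothetical placeholders, not measured results.
    interface RoiInputs {
      hoursSavedPerSprint: number;  // time-on-task and planning hours recovered
      loadedHourlyRate: number;     // fully loaded cost per team hour
      sprintsPerYear: number;
      monthlySessions: number;      // traffic through the redesigned flow
      baselineConversion: number;   // conversion rate before changes (0-1)
      newConversion: number;        // conversion rate after changes (0-1)
      averageOrderValue: number;    // AOV in dollars
      researchCost: number;         // cost of the research and design effort
    }

    function estimateAnnualRoi(i: RoiInputs) {
      const timeValue = i.hoursSavedPerSprint * i.loadedHourlyRate * i.sprintsPerYear;
      const extraOrders = i.monthlySessions * 12 * (i.newConversion - i.baselineConversion);
      const revenueLift = extraOrders * i.averageOrderValue;
      const netGain = timeValue + revenueLift - i.researchCost;
      return { timeValue, revenueLift, netGain, roi: netGain / i.researchCost };
    }

    // Example with made-up numbers, purely for illustration.
    console.log(estimateAnnualRoi({
      hoursSavedPerSprint: 20,
      loadedHourlyRate: 120,
      sprintsPerYear: 24,
      monthlySessions: 50000,
      baselineConversion: 0.021,
      newConversion: 0.024,
      averageOrderValue: 85,
      researchCost: 60000,
    }));

A result such as roi: 1.5 would read as a 150% annual return on the research investment. The point is less the exact formula than the discipline of expressing each parameter above in comparable terms before and after the research-driven changes.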

Tracking Results

Tracking these results is not for the faint of heart. Results are seldom as forecasted. While it's important to have objectives, engaging users is typically messy. What users desire is sometimes not even on the roadmap, but it begins to sketch out solutions for better products. In one study we recently conducted, the primary focus was a desktop prototype for assisting store associates in shipping online orders from the store. A tablet solution was in the works, but what store associates needed was a mobile solution.

Even when results are well received, recommendations can be misinterpreted. Other times, even when recommendations are implemented correctly, components are not properly tagged, making measurement via analytics impossible. Worse still, siloed processes prevent the full implementation of results, and suddenly the roadmap becomes political.

But baking in a process for follow-up mitigates these risks. Identifying check-ins for implementation, tagging, and further testing goes a long way toward paving the path to success.
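As a sketch of what "properly tagged" can look like in practice, the snippet below (again TypeScript, with a hypothetical trackEvent wrapper standing in for whatever analytics library your team already uses) instruments a redesigned flow so the time-saved and usage questions above can actually be answered later.

    // Hypothetical wrapper; route it to your team's real analytics library.
    function trackEvent(name: string, props: Record<string, string | number>): void {
      console.log("analytics event:", name, props); // placeholder transport
    }

    // Tag the redesigned ship-from-store submission so follow-up measurement is possible.
    function onShipFromStoreSubmit(orderId: string, startedAt: number): void {
      trackEvent("ship_from_store_submitted", {
        orderId,
        timeOnTaskMs: Date.now() - startedAt, // feeds the "time saved" parameter
        flowVersion: "redesign-v2",           // lets analysts compare variants
      });
    }

The event and field names here are illustrative; what matters is agreeing on them during the follow-up check-ins, before launch, so the data exists when it is time to measure.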

1. Richardson, Clay. "Design for Disruption: Take an Outside-In Approach to BPM." Forrester, June 14, 2013.

