User Research and Execution


User Research is a vital element of creating products. But how do we prevent it from slowing down concept creation and getting something live?


I have a product management background, not a UX one. This blog is based on my observations. Please feel free to disagree and let me know your thoughts!

User Research

User Research takes many forms. We might undertake diary studies; conduct interviews with users; observe users; and more. This research provides insights into user needs and the degree to which our product satisfies these (or doesn’t). However, it isn’t the work of a moment. Diary studies may take place over a period of weeks and genuine user personas will be created from extensive research.

There’s also user research that takes place while building hypotheses. If we have a data insight that certain user segments behave differently to others we are likely to wish to investigate why. User research and testing may also be conducted in order to validate concepts that we are thinking of taking forward to the build process.

And Usability Testing is yet another level of research. Not only do we validate prototypes and designs, but these sessions often throw up new findings that can be added to an insights catalogue.

Contrasting Two Projects

On one product I heard of, the UX team required that user research take place before any product decisions were made. While this appears sensible, the research subsequently took several months with little if anything in the way of deliverables during this period. While ensuring that the user is understood is a laudable ideal, this level of detail was clearly not optimal; it’s essentially a fall-back to Waterfall days of performing big up-front analysis before moving to design. Most products can ill-afford to research anything and everything to do with the product.

The second product team has taken a more pragmatic approach. They wanted to learn more about their users but had agreed a regular cadence of delivery. So they are placing more traditional user research alongside experiments, which were initially guided by gut feel of what "better" might be. Over time this guidance will be driven by the insights they gather from both directed research and the results of their experiments.

Looking at what Works

This second product team has a clear view of the research that they will undertake and have proposed working on a “Gold” and a “Bronze” path. The Gold path is their vision; the idealised picture that inspires and informs design. The Bronze path is the live site; at present a world away from the gold one although these will become closer over time.

The original Gold / Bronze illustration, by Carl Kim

There are many advantages to this approach.

Firstly, there is the acknowledgement that knowing everything about users isn’t practical or necessary. Research needs to be directed by what will add business value, moving the agreed metrics.

Secondly, we only truly learn when we go live. User research gives insight but the live site is where we learn what works. So research without execution is mere theory.

Thirdly, having two paths means that we focus on both. We don’t neglect the product vision and have a forum for exploring new concepts and ideas. Equally, we remain rooted in reality and the achievable. So concepts are applied to the bronze path and learning is reflected in the gold path.

While the image suggests that the paths should converge, my feeling is that convergence would imply that there has been insufficient work on the vision (gold path). Equally, too much divergence suggests that we need to get on with improving the site as the bronze path has fallen too far behind.

Furthermore, the research going into the gold path is directed by the business value it drives. If insights are catalogued effectively, we know where we need to focus. Thus effort is not duplicated and is directed to those areas that are the highest priority.

Bronze and Gold paths


User Research is a vital part of the product management framework. It allows us to understand user needs and should inform better product decisions. However, it needs to be directed and focussed effectively to ensure that we are not wasting time.

Creating gold and bronze paths allows us to create and update a product vision while maintaining a high cadence of delivery.


Experience Report Part 2: Types of Project

Following my previous experience report, it strikes me that there are broadly four types of project that we engage in. This looks at product management, but was sparked by thoughts around involving the UX team more.

Recap – the Product Process

To explain the squiggles in the diagrams below, here’s the full version of the Product Management process that Chris Matts and I arrived at:

The Product Management process

Starting with insights we note the hypothesised user needs, target audience and business outcomes. As these are hypotheses, we ensure that our concepts will test the hypotheses (rather than simply testing designs). This part is the Vision Board element.

Having created prototypes, acceptance criteria and stories, we validate these designs. Firstly with the team, to check that we are still coherent with our hypotheses. Then usability testing, to kill bad ideas (and counter groupthink). And then, having built the concepts, we go into MVT (multivariate testing) to validate our design concepts (hypotheses).

New Projects

At some point, every product is a new greenfield project. Here we build a brand new Thing, which hopefully means that we're testing a hypothesis rather than building a specific feature.

In the world of websites, this means that we perform a detailed discovery on those pieces that make a difference, where we will use our competitive advantage. And we should use established industry practice on the non-core elements.

The diagram below highlights this:

New projects. Build the minimum thing with the minimum effort

Migrated Projects – Type 1

Of course, websites and products don’t remain new for long. If not looked after, the infrastructure languishes until Somebody decides that it must be replaced. By this point, things are usually pretty crufty so organisations like to update the lot (especially if Somebody gets to put in their favourite technology). In this situation, a decision is taken to upgrade the back-end and the front-end too.

Migrating from one site to another

So the old site is maintained while a new site is created. At some point the customers cross over and we’re all happy.

Except that, as I mentioned last time, this approach builds up a lot of risk. Will we ever deliver? What if we’ve not fulfilled a critical user journey? What if the new infrastructure doesn’t hold up or if the new UI isn’t well received? Sure we tested it but…

Migrated Projects – Type 2

There is another way. The migration can take place as a planned Strangler application. In this situation the migration is prioritised by risk and is done gradually. Thus we only ever run a single system, rather than attempting to build a duplicate and migrate users over. User journeys are upgraded as they are moved onto the new infrastructure.
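The strangler approach can be sketched as a thin routing layer sitting in front of both systems: journeys move to the new system one at a time, and everything else falls through to the legacy one. A minimal Python sketch, where the handler names and path prefixes are purely illustrative and not from the post:

```python
# Strangler-style routing sketch: journeys that have been rebuilt are
# listed by path prefix; all other requests fall through to legacy.
# The prefixes below are hypothetical examples.

MIGRATED_PREFIXES = ["/checkout", "/search"]  # journeys already rebuilt


def route(path: str) -> str:
    """Return which system ("new" or "legacy") should serve this path."""
    for prefix in MIGRATED_PREFIXES:
        if path == prefix or path.startswith(prefix + "/"):
            return "new"
    return "legacy"
```

Migrating the next journey is then just a matter of adding its prefix to the list, so only one system ever serves any given journey and the cut-over risk is taken one slice at a time.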

Migrating a Site by Feature
Phased site migration

Do we migrate every system? Possibly not; when the risk of keeping a system is lower than the business gain from migrating it, we may choose to upgrade only the user experience and leave the back-end in place. Ultimately it comes down to how comfortable we are supporting the remaining infrastructure. Because we're not changing everything at once, we can defer the decision.

Continuous Improvement

Ultimately, we'd love to get to continuous improvement. In this world, technical upgrades are prioritised alongside customer and business benefit, so that a big-bang migration is typically unnecessary. Here, we assess concepts based on insights and we're into pure product management.

Continually updated features and infrastructure

Conclusion and Thoughts

In my experience, Continuous Improvement is a difficult place to get to and I'm unsure whether it's fully achievable. Often it's not a technology change that causes a big-bang rollout but a new feature that fundamentally changes the business model (my previous company, Cheapflights, recently rolled out meta search on the UK site; this isn't a feature that lends itself to incremental rollout). However, if a migration is required there are certainly ways to deal with this; running one site while working on a rewrite invites risk that can typically be avoided.

Again, I’d love to hear experience reports and other views on this. How does your organisation handle migrations? And how could they have been done better? These diagrams are also going to be flawed – please share your thinking on the categories of project.