Facts don’t matter

Working in tech at this present moment is refreshing. For several years there has been a movement towards acknowledging that those closest to a problem are best placed to fix it, and that the role of a manager is to define desirable outcomes within which solutions may be freely found.

There are a number of reasons why this appeals. It removes much of the political gameplay and focuses on getting results. It means that opinion is viewed as a hypothesis that should be tested. And it views failure as a learning opportunity. Opinions are allowed to change as learning is acquired.

However, this can make the rest of the world difficult. To a techie – and many who aren’t – the facts are clear and available, so why don’t people use them? Why do they stick to an argument even when it apparently makes no sense?

Humans and the Cost of Change

There are many reasons for this. In fact, it’s only human to stick hard to a viewpoint once formed. Matthew Syed notes this in his book Black Box Thinking.


Festinger predicted that when the world didn’t end as the cult had foretold, the cult’s belief would be strengthened rather than diminished. And so it happened: contrary evidence was presented and yet, far from changing their minds, the cult members only reinforced their views.

This is due to the personal cost of change being high. In tech, things change every couple of years and learning is embraced. As things are going to change, you might as well get used to this and create environments which decide winners on the basis of evidence and deliberately make the personal cost of change low. This isn’t the case in the rest of the world.

Changing a viewpoint is expensive. Kate Gray and Chris Young have looked at how to manage change in an organisation: they note that there is no point targeting the extreme opposition, simply because of the effort needed to change their minds. Far better to target the floating voters who are not entrenched and therefore have a lower personal cost of change. (If you’re unfamiliar with their work, take a look here for a discussion on how they went about winning hearts and minds during a Change programme.)

Gray / Young Electoral Politics Model

So the task of winning people over is smaller than we might think: we don’t need to persuade everyone. But how can we change people’s minds at scale when facts seem to be ignored or wilfully discarded?

It is instructive to look at some examples of why people might make decisions.

Brexit

To me the Brexit debate seemed clear as a simple question of risk. On the one side was a known entity, however flawed. On the other was complete uncertainty with no limit on the downside. What was always stated by the EU was that freedom of movement was the price for single market access. The Leave side seemed confident that this would not be the case (as my son often is when he demands ice cream without eating his dinner first) and made several promises that were not backed up by previous voting records of the politicians.

Despite this, they won.

Everyday man doing everyday things

Some of this was a clear dislike for the government, but much of this support seemed to have been built over the years by Nigel Farage. Love him or loathe him, people appear to identify with him. And yet he is a public-school educated multi-millionaire who has little in common with many of those who voted for him. What they appear to have identified with is the persona of Farage: the man who refuses to obey the norms of politics and who stirs the hornets’ nest with controversial statements. He is clearly positioned as different from most politicians and thus stands out.

Cycling Provision

Our towns often suffer from dreadful pollution and the population is increasingly unhealthy. The Netherlands has shown that many short journeys can be made by cycle instead of car if good infrastructure is put in place: this is relatively inexpensive and has been shown to have positive outcomes on public health. It is also notable that motor traffic flows well here, taking advantage of what is almost a permanent end-of-term effect.

Lycra Lout?

However, discussion around cycling tends not to focus on this, but on a persona of the cyclist as a law-breaking uninsured hooligan, intent on doing everything they can to cause crashes, annoy pedestrians and increase pollution in towns. Small wonder that the person typically making these connections cannot imagine themselves cycling short journeys in everyday clothes in safety away from motor traffic. Essentially, their persona is somebody who travels by car and there is no readily available persona of the family who travel by bike.

Positive Personas

My hypothesis is that in both these situations, people seem to have identified with a persona that affirms their views and reinforces their conclusions. This is sometimes built around a person, but may simply be a concept (the parent who drives a 4×4; the lycra lout cyclist). A factual argument will simply bounce off this persona; indeed, it may even reinforce the strength of opposition.

Remember that at this point we only wish to target the undecided voter. While it’s tremendous fun to troll those who have an aversion to fact, it ultimately takes time away from more effective means of gaining support. Instead, would a fact-based story around a persona be more effective? That persona would identify with the concerns of the undecided voter but address them with the facts we’re attempting to convey.

We don’t wish to create another Be Like Bill character that is a patronising figure of fun. But when attempting to win over that vital undecided vote, we cannot simply deliver dry facts. Instead, we could represent the debate with believable personas that directly address the concerns voiced, rationalising the desired behaviour and ostracising the undesirable outcome. For the cycling discussion, give a platform to a disabled person who finds their bike gives them freedom and who needs good infrastructure. For the Remain Brexit discussion, give a voice to those who have concerns about immigration but believe the solution is to work with, rather than against, the EU.

At this stage, this is simply a clumsily-written hypothesis. I’d love to hear whether this has legs and, if so, could be developed further. Or, if this is established practice and I simply need to read more.


How to avoid cutting one’s balls off

The Dad-Joke

There’s a story about a man who suffers from appalling headaches. They started as a minor inconvenience, but after 20 years have worsened and now affect every part of his life: failed relationships, a series of sackings, etc., etc. So he finally goes to the doctor. After many tests, the doctor diagnoses the problem: the patient’s testicles are pressing against the base of his spine, causing pressure which leads to the headaches. There’s only one solution: castration.

The patient initially declines, but ultimately pain gets the better of him and he agrees to the surgery.

So the surgery happens and the patient feels like a new man afterwards. Without the headaches, he realises this is the restart his life needed. To celebrate and kick-start his new life and job hunt, he goes to buy a new suit.

He walks into a tailor’s and says he’s looking for a new suit. The tailor looks him up and down. “You’re a 44-inch chest”, he says. “Brilliant”, says our patient. “How did you know?”

“Easy”, replies the tailor. “I’ve been doing this for 40 years. And you’re a 16-inch neck”.

“Bang on”, says the man. “How about the trousers?”

“36-inch leg” says the tailor. “And a 36-inch waist”.

“Gotcha!” shouts our hero. “I’m a 34-inch waist and have been since university.”

“Oh no,” the tailor insists. “You’re a 36-inch waist. If you wear 34-inch trousers, you’ll wear 34-inch underpants. And they’ll be far too tight – they’ll compress your balls against your spine and give you terrible headaches”.

 

The Product Stuff

Without wishing to over-analyse an excellent Dad-joke, this is how many people approach product management. There are two issues here.

Finding the actual problem

Instead of identifying the root cause of the problem, we fix the local, obvious issue. While this often alleviates the symptoms, it doesn’t resolve the underlying problem. Such a resolution is therefore sub-optimal and possibly even harmful in the longer term. In the joke, the doctor appears to have found the cause, but hasn’t uncovered what caused the cause.

So when we approach a problem, why don’t we always use techniques such as 5-whys to dig down to the root cause? Why do we tackle only the immediate cause?
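As a (deliberately silly) illustration, 5-whys amounts to walking a chain of causes until no further “why” can be answered. Here is a minimal Python sketch, using the joke’s causal chain as invented data:

```python
# A minimal sketch of the 5-whys technique. The causal chain below is
# just the joke's, written out as illustrative data.
causes = {
    "headaches": "testicles pressing on the spine",
    "testicles pressing on the spine": "underpants too tight",
    "underpants too tight": "trousers bought two inches too small",
    "trousers bought two inches too small": "never re-measured since university",
}

def five_whys(problem, causes, max_depth=5):
    """Follow the chain of causes until none is known or depth is reached."""
    chain = [problem]
    for _ in range(max_depth):
        cause = causes.get(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

print(five_whys("headaches", causes))
# The last element is the candidate root cause; everything before it
# is merely a symptom.
```

Fixing anything earlier in the returned chain (the surgery!) treats a symptom; only the final link removes the problem for good.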

Well, firstly it’s a tricky habit to get into. Fixing a problem is satisfying and that buzz can be obtained quickly with an immediate fix. We set our mindset on achieving small wins, failing to optimise for long-term gain. Our sprint is full, so we put in a quick fix and mark this done.

Secondly, we’re often busy, with many competing demands. Fixing a problem feels like it is detracting from our main goals. Why spend time on fixing this when there are quarterly targets to be hit? So we avoid the potential hassle of coordinating with other departments and spending time identifying a root cause by simply fixing the surface issue. It’s good enough, and we can get on with our main priorities.

Safe-to-fail Experimentation

The second problem is that cutting one’s balls off is quite clearly not a safe-to-fail experiment. While this resolved the immediate problem, it is an irreversible action. In the language of Chris Matts, it is most definitely a commitment rather than an option. One can imagine that at the punchline our patient would have preferred to be holding an option rather than having committed early.

The realm of product management is that of uncertainty. There are outcomes which we are tasked with moving towards, rather than a fixed set of features that we must implement. To meet these outcomes, we identify small, safe-to-fail, hypotheses that we can test to see if they move closer to our aims.

The safe-to-fail part is key. Given that we cannot have certainty in advance about which features will and won’t move our metrics the right way, we must have the option to reverse the feature at low cost should it fail. As responsible product managers it is not acceptable to hold merely a hypothesis and yet create a complete, fixed product (that is the realm of mass-production). Instead, we validate the hypothesis via testing and we always have a rollback option.

Conclusions

So two lessons from a terrible joke.

Always check for the root cause of a problem. And be sure to run safe-to-fail experiments, rather than commitments you’ll later regret.

 

Management and Product Management

Over my past few assignments, I have seen several software teams that are instructed by the management group to be an Agile, Product-Led Team, but are then thwarted by that same management. The management is often well-intentioned, but its actions obstruct the team and prevent their (and the business’s) success. Managers often speak about the right things: wanting self-governing teams; optimising for outcomes; and hiring in a manner that “raises the bar”. However, the actions they take undermine all that, creating a disconnect for their team and friction with the management tiers.

Unhelpful management behaviours I have observed include:

  • Managers running scrums and stand-ups
  • Insistence on production of project reports that nobody reads
  • Suppressing team members more capable than the manager at a specialist skill
  • Requiring teams to clock in and out – and following up from the central finance function when a “full day” hasn’t been worked
  • Refusal to spend budget on hardware or software that will clearly enable the team to be more productive

 

Why doesn’t this help?

Some of this might appear to be good governance – after all, shouldn’t a manager run the stand-ups to ensure that work is progressing to plan?

However, all of these behaviours remove responsibility from the team. The last two, in particular, show a failure to understand how product and development teams work. This isn’t about people wanting special treatment; it’s about understanding that in these roles a manager cannot govern by authority or by simply doing the job better than everyone in the team. Given the variety of roles within a software development or product discovery team, it is unrealistic to expect that a manager can do every role better than their team. Rather, a manager serves the team well by removing obstacles to effective working and giving the team a steer toward an objective.

 

Why this happens

Two of the three organisations I reference above have a history of retail or small scale manufacturing. In these areas, store managers progress by knowing every detail that goes on in their store and still being great at doing those jobs – one often sees managers on the shop floor mucking in and visibly taking pride in how their store is presented – and being prepared to contribute to that effort. Similarly, the manager in a small-scale production facility will often design the flow and optimise that; it is the role of the team to follow the guide and not deviate.

In both of these cases, the approach to staffing had not changed much with the transition to running software teams. While it was acknowledged that teams might have specialists, the concept of self-organising teams was mistrusted and hierarchical control over the team was maintained.

The Cynefin model helps us understand these issues (Greg Brougham gives a great explanation here). The small-scale production facility and retail store reside in the Obvious space. Here we have predictable work, where process can be documented. Reduction of costs is desirable here and a repeatable process helps avoid having to hire skilled workers.

Mapping work to the Cynefin model

However, software development sits somewhere between the Complicated (knowable but a subject matter expert is required) and Complex (safe-to-fail experimentation) domains. Product Management, in particular, fits in the Complex domain, requiring safe-to-fail experimentation to identify what leads to desirable outcomes.

When work is in this domain, it cannot be boiled down to a series of repeatable process steps. There may be no point finishing the day’s eight hours if one’s brain isn’t producing; conversely there will be days where the morning flies by and insistence on the employee taking a 15 minute break would seriously interrupt their productivity (and they won’t thank you for that!).

How Product Teams are typically managed

Equally, when the team takes responsibility for the work they do, they will suggest safety measures far more effective than a manager would. Indeed, they might suggest sanctions far beyond what it would be appropriate for a manager to impose! But the point is that they take this responsibility and keep each other honest. This moves beyond working to a set of rules: the team adopts a culture that actively seeks to add value to the business and users. When managers attempt to take control of this process, they take responsibility away from the teams and the virtuous outcomes are diminished or lost.

Needs of Complex-domain teams

So what does a manager do in this domain? Here, the manager sets the desired outcomes and describes the constraints (legal, ethical, regulatory, etc.). The team then seeks to meet those outcomes via user research, testing, MVPs, and so on. Attempting to apply a rigid process, or to assign blame based on contractual wrangling, runs counter to the behaviour required here.

Applying Obvious-domain rules to Complex-domain teams

This sounds like my organisation…

Firstly, this isn’t a rare problem. I am constantly staggered at how many organisations make good money despite following practices that actively hinder them. And secondly, why would it occur to a business to change management practices for different teams? The typical organisation would reject this as it would cause friction and complications between departments.

Clearly, I don’t have all the answers. However, organisations need to recognise the problems that a one-size-fits-all approach creates. It might seem that treating different teams differently will cause problems – in fact, the opposite feels more likely to be true. Teams need to be managed in a manner suited to the domain in which they reside, and management hiring decisions and training need to reflect this.

Tracking Product Discovery

Product Discovery

In recent years, product management has expanded from focussing on “building it right” – ensuring that a backlog is delivered to an acceptable cadence and quality – to “building the right thing”. This means moving away from the illusion of certainty to embracing experimentation.

Coupled with experimentation is an acknowledgement that a product manager is probably not the most qualified person to design the interface or do detailed data mining. More and more, product management teams are being formed of User Researchers, Data Analysts, UX and UI designers as well as Product Managers. The goal of this team is to identify user needs and create services that meet those needs whilst driving business value.

The diagram below shows how this system can work. We begin with insights, identify user needs for a given target group that will drive business value and hypothesise concepts that will satisfy these requirements. These hypotheses are necessarily experiments: we might be right, we might not. We only find out by creating something (or better, things) and testing.

Discovery and Delivery process

Following this product discovery process we create artefacts that aid communication and understanding: user stories; prototypes; acceptance criteria.

And after some validation, we are ready to build – which is where familiar tools such as JIRA or Rally are used.

Coordination

Coordinating the elements of Product Discovery can be difficult. I’ve known teams attempt to create a To-do / Doing / Done board within JIRA; however, the issue is that there is no single task that moves.

When a development team works on a story, the story card remains constant and moves into different stations (e.g. Analysis, Design, Build, Test, Deploy for the classic Scrum-as-Waterfall model).

Tracking progress through Development

However, in product management, we identify Insights that help us identify one or more user needs. Concepts answer these needs – ideally more than one concept, as we should be testing. And a Concept may meet more than one need.

Each concept will give rise to one or more user stories and one or more designs.

So there’s an issue with what we are tracking – we tend to think of stories but are actually tracking the meeting of needs. And even these may be met by more than one concept, so there isn’t a neat 1:1 mapping from one phase of discovery to the next.

Flow in Product Discovery
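The many-to-many relationships described above are easier to see as a data model. This is a rough sketch with invented names and example data, not the schema of any real tool:

```python
from dataclasses import dataclass, field

# Sketch of why a single moving card can't track discovery work:
# the relationships are many-to-many, not a linear pipeline.
# All class and field names here are illustrative assumptions.

@dataclass
class Insight:
    text: str

@dataclass
class UserNeed:
    description: str
    insights: list = field(default_factory=list)   # many insights feed one need

@dataclass
class Concept:
    name: str
    needs: list = field(default_factory=list)      # one concept may meet many needs

@dataclass
class Story:
    title: str
    concept: Concept = None                        # each story traces to a concept

# One need, met by two competing concepts we intend to test:
need = UserNeed("travel short distances safely without a car",
                insights=[Insight("parents won't cycle with children in traffic")])
protected_lane = Concept("protected cycle lane", needs=[need])
quiet_route = Concept("signed quiet-street route", needs=[need])

stories = [Story("mark out trial lane on High Street", protected_lane),
           Story("publish quiet-route map", quiet_route)]

# Tracing a story back to the need (and insight) that motivated it:
for s in stories:
    for n in s.concept.needs:
        print(s.title, "<-", n.description)
```

Note that the traceability runs backwards from story to insight: exactly the path a To-do / Doing / Done board cannot represent, because there is no single card that travels from insight to story.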

From what I’ve seen of many tools, the collaborative product management approach is supported by “Hey colleagues, send me your ideas and we’ll vote on them!”. While it’s great to gain ideas from the wider company, these are actually only insights, from which we must still identify user needs. I also have my reservations about voting. Ultimately we want to drive business metrics, not popularity.

Observed Problems that need resolving

There are several issues that I’d like a tool to resolve:

  • A lack of a central storage space means that teams store insights separately. This causes research to be repeated, and learning is not shared across teams
  • A lack of tracking means that we cannot easily trace development requirements back to the originating insight. At best this leaves teams open to HIPPO prioritisation; at worst it means that the stories for a development team might not answer the original need
  • No common storage or version control causes teams to fake this on Google Drive (or similar) or on local storage. The main issue with Google Drive is that people forget to remove staff who’ve left; it’s an administration headache
  • No natural coordination between work in progress and product roadmaps. Sure, we don’t want roadmaps going forward for years. But we ought to be able to identify themes and track the progress of concepts that we believe meet the needs within those themes
  • Associated non-development tasks (e.g. creating a case study, training customer support) get tracked in yet another place, such as Trello. Again, Trello is a great tool but it’s not integrated with this process

What I’d like to see

  • Easy storage of user insights
  • Traceable progress from insight through to development backlog (and maybe even MVT…)
  • Building of roadmaps
  • Easy and ready sharing of concepts / prototypes, etc

Conclusion

To conclude, I’d love a tool that does this or that can be readily adapted. Should I find one, I’ll be sure to publish my experience with it. So please leave your thoughts below.

Tool vendors, I’d love you to join in! Feel free to add comments about how your tools fit with these needs.

Appendix: User Stories for Product Discovery

Insights

As a User Researcher, I want to log and share insights and research, so that effort is not duplicated and learning is not lost.

As a User Researcher, I want to easily retrieve stored insights from across the business, so that I can start to identify related user needs.

So we need a big data store that is swiftly searchable and trivial to add to. It’s essential that we can tag and log basic information. It’d also be useful to allow easy addition by, say, email.

And we need to be able to swiftly pull back these insights. See the age of them, who logged them, etc.
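As a sketch of what that store might look like (all field names and the interface are my own invention, not any vendor’s API):

```python
from datetime import date

# Minimal sketch of the insight store described above: taggable,
# searchable, and recording who logged what and when.

class InsightStore:
    def __init__(self):
        self._insights = []

    def add(self, text, tags, logged_by, logged_on=None):
        """Log an insight with tags and basic provenance information."""
        self._insights.append({
            "text": text,
            "tags": set(tags),
            "logged_by": logged_by,
            "logged_on": logged_on or date.today(),
        })

    def search(self, *tags):
        """Return insights carrying every requested tag."""
        wanted = set(tags)
        return [i for i in self._insights if wanted <= i["tags"]]

store = InsightStore()
store.add("Users abandon checkout at the delivery-address step",
          tags=["checkout", "usability"], logged_by="researcher-a")
store.add("Mobile users mistrust the payment page",
          tags=["checkout", "mobile"], logged_by="researcher-b")

# Pull back everything tagged "checkout", with who logged it:
for insight in store.search("checkout"):
    print(insight["logged_by"], "-", insight["text"])
```

Even something this small covers the two stories above: researchers can add quickly, and anyone across the business can retrieve by tag and see age and provenance.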

User Needs

As a User Researcher, I want to distill insights into needs, so that I can identify what our user truly wants.

As a User Researcher, I want to tag which insights have been pulled into user needs, so that we can later identify which insight sources are most effective or which insights are most helpful.

OK, so we might not use the tool to *derive* user needs – this could be a group exercise done on a whiteboard. But when we’ve detailed them we need some means to track them for analysis and so that we can see where our user needs come from.

Concepts & Hypotheses

As a member of a Product Management team, I want to pull user insights, data insights and KPIs into a coherent hypothesis, so that we can build better concepts and tests.

As a member of a Product Management team, I want to track where my concepts come from, so that we understand the user need we are building for and the outcome we believe that we’ll see.

Again, this might not be *done* within the tool. But we need to be able to track the activity: where things come from and what we’re doing. And attach the concepts / reference where they’re stored.

Story Creation

As a Product Manager, I want to be able to trace back to concepts, needs and insights when writing stories, so that my dev team and I are reassured that we are adding value and meeting needs.

Prototypes

As a UI designer, I want to be able to trace back to concepts, needs and insights when creating designs and prototypes, so that I am reassured that I am meeting the correct user needs.

Roadmaps

As a Product Manager, I want to create Roadmaps based on opportunity, so that the team can work toward these strategic goals.

Non-Development Items

As a Product Manager, I want to track associated tasks, so that I can deliver a complete product and track progress toward that goal.

Estimates

Introduction

There seems to be a great deal of discussion about whether or not estimates should be created for software projects. The arguments are several and frequently descend to the playground level; however, they seem to distill to a few reasonable points of contention.

Quick and Dirty Summary

Ron Jeffries and Steve McConnell – both industry figures that I hold great respect for – held a discussion recently over estimates. (Edit: Thanks to Henebb for making sense of the ordering!)

The No Estimates viewpoint appears to be summarised as:

  • Estimates are often wrong so putting effort into this is wasteful and misleading
  • Good teams will slice a story into smaller chunks of roughly equivalent size, negating the need for estimation
  • Furthermore, breaking work up in this way allows lower-value stories to be deferred

By contrast, proponents of estimates would state that:

  • A skilled team can estimate accurately enough to provide business predictability
  • By estimating, we can ensure that features are finished, rather than providing more work that is all half-done
  • Estimation aids functions such as finance, planning, etc., allowing budget allocation; cost/benefit analysis; etc to be performed

Looking at the discussions, it feels to me that the benefits that McConnell sees appear to be upfront. Estimates are there to support business decisions and allow project control. Jeffries discusses not estimating at a story level, where a skilled team will reduce the size of stories to something more or less uniform.

What Does a Product Manager Want?

So why does a product manager care about estimation? Surely we just want the scrum team to deliver some business value, ideally each sprint?

Well, it goes deeper than this. Product Managers are entrusted with delivering business value and moving metrics in the right direction. Given two options, we want to know whether one has nasty dependencies or incurs significantly more cost than the other. Similarly, if projects have dependencies, the product manager will want reassurance that these will be delivered in a timely fashion. It’s no good having the product waiting for an overloaded team to complete a component.

In these cases, there will need to be some level of investigation work by the Dev teams so that the likely solutions are sufficiently understood. Whether or not an estimate is provided is of little concern to the Product Manager; what they want is to be equipped to make a decision and for the programme manager to be able to coordinate effort so that product is delivered.

Maturity Levels

At the scrum level, estimates are a means to an end. They protect a team from taking on too much work and set a reasonable expectation to the product manager of the value that will be delivered. However, do we intrinsically care that one story is an 8-pointer while another is a 5-pointer, if the estimate is reasonably accurate so there are few surprises? What’s more, if the development team can break down the eight point story into one- or two-point equivalents, the product manager will be happier as this is a signal that the team understands the problem well.

There may therefore be a progression in team maturity levels. At the very basic level a team will struggle to estimate stories (“everything’s an 8!”) and will fail to reflect on whether the estimate was realistic in hindsight (self-improvement). As they practise estimating, they will become proficient at investigating what goes into a story. Whether the output is a sliced story or a better-informed estimate is somewhat moot; the value is in the discovery and shared understanding. Different organisations will favour different approaches; personally I haven’t worked with slicing stories but I like the sound of it and would welcome a team reaching the point where this is possible.
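To illustrate why slicing makes the point values moot: once stories are near-uniform in size, forecasting by simply counting stories gives much the same answer as summing points. A hypothetical sketch with invented numbers:

```python
# Hypothetical sketch: comparing a points-based forecast with a
# count-based one, once a team slices stories to near-uniform size.
# All numbers below are invented for illustration.

sliced_backlog = [1, 2, 1, 1, 2, 1, 2, 1]   # near-uniform story sizes (points)
velocity_points = 6                          # points finished per sprint
throughput_stories = 5                       # stories finished per sprint

sprints_by_points = sum(sliced_backlog) / velocity_points
sprints_by_count = len(sliced_backlog) / throughput_stories

print(f"by points: {sprints_by_points:.1f} sprints")
print(f"by count:  {sprints_by_count:.1f} sprints")
```

When the two forecasts converge like this, the estimation ceremony is adding little; the value was in the conversation that produced the slices.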

Improved Thinking through Story Estimates

Wider Planning

What of the wider landscape? As mentioned earlier, estimates can be helpful at the planning level, particularly where dependent resources are shared. However, we need to ensure that these are aligned to genuine business goals and are responsive to environmental shifts, rather than projects reduced to a cost-benefit analysis and pet-project favouritism.

Chris Matts and Tony Grout provide a useful experience report of this process at Skype, using SWAGs (Sweet Wild-Ass Guesses) in place of more formal estimation. Note, however, that the SWAG will be provided by a product owner (not a developer, so as not to commit the developers) with experience and some degree of skill. Despite the approximate nature, this is still an estimate. A weighting is applied over time, to ensure some semblance of reality, but the goal of this exercise is to direct resources to the highest-priority work. And isn’t this the entire point of an estimate for planning? We know roughly what will be delivered and we have an agreed order. Need we spend longer, optimising for precision over accuracy?
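A weighting like that could be as simple as scaling each new SWAG by the historical ratio of actual effort to guessed effort. This is my own illustrative sketch with invented numbers, not the Skype process itself:

```python
# Hypothetical sketch: adjust a new SWAG by the median ratio of
# actual effort to guessed effort on past work. Numbers are invented.
from statistics import median

history = [
    {"swag": 10, "actual": 15},
    {"swag": 20, "actual": 28},
    {"swag": 5,  "actual": 6},
]

def weighted_swag(new_swag, history):
    """Scale a raw guess by the median over-/under-run seen so far."""
    ratios = [h["actual"] / h["swag"] for h in history]
    return new_swag * median(ratios)

print(weighted_swag(8, history))  # a raw guess of 8, adjusted by experience
```

The guess stays cheap to produce; the correction comes from feedback over time, which is exactly the "semblance of reality" a planning-level estimate needs.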

Conclusion

It feels that there is room for both approaches, used appropriately. Clearly any large organisation will have dependencies and will need to allocate resources effectively. SWAGs seem to be a practical solution, but I can see that an organisation might find value in a deeper level of estimation.

Estimation can also be good practice for less experienced team members. The exercise of examining a story; making an estimate based on one’s understanding; and then reflecting on that estimate having done the work creates a good feedback loop for learning. Where does complexity lie? What gets overlooked at first glance? Where are the defects hidden?

Beyond this, not estimating also appears to hold great value. If a team can work with the product manager to slice stories into small, easily testable chunks, estimating becomes moot and the focus is on delivering business value. This is already being done by development teams; at this level and with an experienced team, why would you go back?

Edit: Of course, Steve McConnell summarises the discussion nicely at this point. He suggests that knowing when to estimate and asking the business are the positions we should take, rather than blindly always / never estimating.

User Research and Execution

Summary

User Research is a vital element of creating products. But how do we prevent it from slowing down concept creation and getting something live?

Disclaimer

I have a product management background, not a UX one. This blog is based on my observations. Please feel free to disagree and let me know your thoughts!

User Research

User Research takes many forms. We might undertake diary studies; conduct interviews with users; observe users; and more. This research provides insights into user needs and the degree to which our product satisfies these (or doesn’t). However, it isn’t the work of a moment. Diary studies may take place over a period of weeks and genuine user personas will be created from extensive research.

There’s also user research that takes place while building hypotheses. If we have a data insight that certain user segments behave differently to others we are likely to wish to investigate why. User research and testing may also be conducted in order to validate concepts that we are thinking of taking forward to the build process.

And Usability Testing is yet another level of research. Not only do we validate prototypes and designs, but these sessions often throw up new findings that can be added to an insights catalogue.

Contrasting Two Projects

On one product I heard of, the UX team required that user research take place before any product decisions were made. While this appears sensible, the research subsequently took several months with little if anything in the way of deliverables during this period. While ensuring that the user is understood is a laudable ideal, this level of detail was clearly not optimal; it’s essentially a fall-back to Waterfall days of performing big up-front analysis before moving to design. Most products can ill-afford to research anything and everything to do with the product.

The second product team has taken a more pragmatic approach. They wanted to learn more about their users but had agreed a regular cadence of delivery. So they are running more traditional user research alongside experiments which were initially guided by gut feel about what “better” might be. Over time this guidance will become driven by the insight they discover from both directed research and the results of their experiments.

Looking at what Works

This second product team has a clear view of the research that they will undertake and have proposed working on a “Gold” and a “Bronze” path. The Gold path is their vision; the idealised picture that inspires and informs design. The Bronze path is the live site; at present a world away from the gold one, although the two will become closer over time.

The original Gold / Bronze illustration, by Carl Kim

There are many advantages to this approach.

Firstly, there is the acknowledgement that knowing everything about users isn’t practical or necessary. Research needs to be directed by what will add business value, moving the agreed metrics.

Secondly, we only truly learn when we go live. User research gives insight but the live site is where we learn what works. So research without execution is mere theory.

Thirdly, having two paths means that we focus on both. We don’t neglect the product vision and have a forum for exploring new concepts and ideas. Equally, we remain rooted in reality and the achievable. So concepts are applied to the bronze path and learning is reflected in the gold path.

While the image suggests that the paths should converge, my feeling is that convergence would imply that there has been insufficient work on the vision (gold path). Equally, too much divergence suggests that we need to get on with improving the site as the bronze path has fallen too far behind.

Furthermore, the research going into the gold path is directed by the business value it is expected to drive. If insights are catalogued effectively, we know where we need to focus. Thus effort is not duplicated and is directed to the highest-priority areas.

Bronze and Gold paths

Conclusion

User Research is a vital part of the product management framework. It allows us to understand user needs and should inform better product decisions. However, it needs to be directed and focussed effectively to ensure that we are not wasting time.

Creating gold and bronze paths allows us to create and update a product vision while maintaining a high cadence of delivery.

Experience Report Part 2: Types of Project

Following my previous experience report, it strikes me that there are four main types of project that we engage in. This looks at product management, but was sparked by thoughts around involving the UX team more.

Recap – the Product Process

To explain the squiggles in the diagrams below, here’s the full version of the Product Management process that Chris Matts and I arrived at:

The Product Management process

Starting with insights we note the hypothesised user needs, target audience and business outcomes. As these are hypotheses, we ensure that our concepts will test the hypotheses (rather than simply testing designs). This part is the Vision Board element.

Having created prototypes, acceptance criteria and stories, we validate these designs. Firstly with the team, to check that we are still coherent with our hypotheses. Then with usability testing, to kill bad ideas and counter groupthink. And then, having built the concepts, we go into multivariate testing (MVT) to validate our design concepts (hypotheses).
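As an illustration of the MVT validation step, here is a minimal sketch (hypothetical numbers, standard two-proportion z-test) of checking whether a variant’s conversion rate genuinely beats the control, rather than trusting the raw difference:

```python
import math

def z_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for the difference in conversion
    rates between control (a) and variant (b). Values above ~1.96
    indicate significance at the 5% level (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

For example, 100 conversions from 1,000 control visits against 130 from 1,000 variant visits gives a z-statistic just above 2, so the variant’s lift is unlikely to be noise. The same hypothesis-testing discipline applies whether the experiment validates a design concept or a business outcome.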

New Projects

At some point, every product is a new greenfield project. Here we build a brand new Thing, which hopefully means that we’re testing a hypothesis rather than building a specific feature.

In the world of websites, this means that we perform a detailed discovery on those pieces that make a difference, where we will use our competitive advantage. And we should steal: use established industry practice for the non-core elements.

The diagram below highlights this:

New projects. Build the minimum thing with the minimum effort

Migrated Projects – Type 1

Of course, websites and products don’t remain new for long. If not looked after, the infrastructure languishes until Somebody decides that it must be replaced. By this point, things are usually pretty crufty so organisations like to update the lot (especially if Somebody gets to put in their favourite technology). In this situation, a decision is taken to upgrade the back-end and the front-end too.

Migrating from one site to another

So the old site is maintained while a new site is created. At some point the customers cross over and we’re all happy.

Except that, as I mentioned last time, this approach builds up a lot of risk. Will we ever deliver? What if we’ve not fulfilled a critical user journey? What if the new infrastructure doesn’t hold up or if the new UI isn’t well received? Sure we tested it but…

Migrated Projects – Type 2

There is another way. The migration can take place as a planned Strangler application. In this situation the migration is prioritised by risk and is done gradually. Thus we only ever run a single system, rather than attempting to build a duplicate and migrate users over. User journeys are upgraded as they fit with product infrastructure.
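A minimal sketch of how this looks in practice (path names are hypothetical): a thin router sends the journeys migrated so far to the new stack, while everything else falls through to the legacy site, so only one system is ever live for users.

```python
# Journeys migrated to the new stack so far, in priority (risk) order.
MIGRATED_PREFIXES = ["/checkout", "/search"]

def route(path):
    """Return which backend should serve this request path.
    Migrated journeys go to the new system; everything else
    falls through to the legacy site."""
    for prefix in MIGRATED_PREFIXES:
        if path.startswith(prefix):
            return "new"
    return "legacy"
```

As each user journey is migrated, its prefix is added to the list; the legacy entries shrink until the old system can be retired, with no big-bang cut-over.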

Migrating a Site by Feature
Phased site migration

Do we migrate every system? Possibly not; when the risk of keeping a back-end system is lower than the business gain from replacing it, we may choose to upgrade only the user experience. Ultimately it comes down to how comfortable we are supporting the remaining infrastructure. Because we’re not changing everything at once, we can defer the decision.

Continuous Improvement

Ultimately, we’d love to get to continuous improvement. In this world, technical upgrades are prioritised with the customer and business benefit so that a big-bang migration is typically unnecessary. Here, we assess concepts based on insights and we’re into pure product management.

Continually updated features and infrastructure

Conclusion and Thoughts

In my experience, Continuous Improvement is a difficult place to get to and I’m unsure whether it’s fully achievable. Often it’s not a technology that causes a big-bang rollout but a new feature that fundamentally changes the business model (my previous company, Cheapflights, recently rolled out meta search on the UK site; this isn’t a feature that lends itself to feature-incremental rollout). However, if a migration is required there are certainly ways to deal with this; running one site while working on a rewrite appears to invite risk that can typically be avoided.

Again, I’d love to hear experience reports and other views on this. How does your organisation handle migrations? And how could they have been done better? These diagrams are also going to be flawed – please share your thinking on the categories of project.