Website analysis tools – why sample size matters


As the UK election results on May 7th amply demonstrated, polling is not an exact science; indeed, sometimes polls are just plain wrong. But when you're relying on a small percentage of the voting population and extrapolating the results accordingly, it's no surprise that the conclusions drawn can be inaccurate. Transfer this logic to user experience on websites and the same applies.

Let’s say a potential new customer drops onto one of your website’s pages. Your mission, should you choose to accept it, is to move that customer from landing page to checkout without putting any unnecessary hurdles in the way.

This isn’t mission impossible, but every website has its barriers to conversion, some known, many not: usability, functionality and performance issues that may be prompting a significant percentage of customers to drop off or go elsewhere.

As a matter of course, most web businesses carry out tests to ensure the online experience is optimized for the visitor. Testing will certainly be carried out at the soft-launch or beta stage and at regular intervals throughout the life of the site. Upgrades and improvements should also trigger a new round of testing.

The type of user experience testing typically employed is either laboratory-style, with say 20 testers acting as sample users, or (far better) watching a dozen or so session replays of real-life customer journeys using one of the many session replay tools on the market.

The problem is that even a few hundred sample journeys can be as misleading as the UK General Election polls. Some issues with the site will be found, but it’s not possible to see how they play out across the whole user base. Critically, some problems are cosmetic and others are financially significant; when 100% of user journeys are captured, the important problems can be found and prioritized.
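To put rough numbers on the sampling risk, here is a back-of-envelope sketch. The 2% issue rate is an illustrative assumption, not data from any poll or site; the question is simply how often a small test group would even encounter an issue that affects a slice of real visitors:

```python
# Chance that a test of n sample users sees an issue at least once,
# when the issue affects a fraction p of all real visitors.
# p = 0.02 is an illustrative assumption.
p = 0.02
for n in (20, 100, 500):
    p_seen = 1 - (1 - p) ** n
    print(f"n={n:3d}: {p_seen:.0%} chance the issue shows up at all")
```

On these assumptions, a 20-user lab test misses a 2%-of-visitors issue roughly two times in three. Full capture guarantees the issue is in the data at all.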

The benefits of 100% capture

Let’s say a call comes in via customer services, social media or “voice of the customer” tools. The customer may not be able to explain exactly what went wrong in the level of detail required to replicate the problem. However, if the actual journey can be recalled and visually replayed along with all the technical data, the problem can be seen and fixed.

Importantly, if you have all the journeys stored in a tool like UserReplay, you can ask the question “who else had this problem?” Since most customers who hit an issue simply leave, there could be a lot of revenue being lost. By capturing all the journeys you can understand the impact of any issue discovered and prioritize scarce technical resources on the fixes that matter.
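As a minimal sketch of that “who else had this problem?” query, suppose the captured journeys are exported as simple records. The record shape, field names and basket values below are hypothetical illustrations, not UserReplay’s actual data model or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Journey:
    session_id: str
    error_code: Optional[str]  # error captured during the session, if any
    converted: bool
    basket_value: float        # basket value at the point the session ended

journeys = [
    Journey("s-001", "SURNAME_VALIDATION", converted=False, basket_value=320.0),
    Journey("s-002", None,                 converted=True,  basket_value=95.0),
    Journey("s-003", "SURNAME_VALIDATION", converted=False, basket_value=210.0),
    Journey("s-004", "PAYMENT_TIMEOUT",    converted=False, basket_value=60.0),
]

def impact(error_code: str, journeys: list[Journey]) -> tuple[int, float]:
    """Sessions that hit the error without converting, and revenue at risk."""
    hit = [j for j in journeys if j.error_code == error_code and not j.converted]
    return len(hit), sum(j.basket_value for j in hit)

count, at_risk = impact("SURNAME_VALIDATION", journeys)
print(f"{count} sessions hit this error; roughly £{at_risk:,.2f} at risk")
```

The same query over the full capture, rather than a sample, is what turns one support call into a prioritized, costed fix.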

100% of user journeys also form a rich data set for discovering problems that weren’t previously known about. Let’s say (and this is a real example) a booking form prevents anyone with a hyphen in their surname from going on holiday. By storing metadata about every journey in a way that can be searched, reported on and segmented, problems like these can be discovered through a process called “active insight”, as the sketch below illustrates.
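Here is one hedged sketch of that kind of segmentation: compare form-failure rates across segments of the stored metadata. The record shape is hypothetical, and a real tool stores far richer metadata per journey:

```python
# Segment failed booking-form submissions by whether the surname
# contains a hyphen, then compare failure rates per segment.
from collections import Counter

submissions = [
    {"surname": "Smith",        "succeeded": True},
    {"surname": "Brown-Jones",  "succeeded": False},
    {"surname": "O'Neill",      "succeeded": True},
    {"surname": "Lloyd-Webber", "succeeded": False},
]

totals, failures = Counter(), Counter()
for s in submissions:
    segment = "hyphenated" if "-" in s["surname"] else "plain"
    totals[segment] += 1
    if not s["succeeded"]:
        failures[segment] += 1

for segment in totals:
    rate = failures[segment] / totals[segment]
    print(f"{segment}: {rate:.0%} failure rate")
```

A 100% failure rate in one segment and 0% in the other points straight at the validation bug, with no one having reported it.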

Journeys can be selected for replay on the basis of customer calls, warning signs (such as high drop-off rates on certain pages) or analytics pointing to customer frustration. Together, these provide a means to identify all the barriers to conversion.
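One such warning sign can be computed directly from aggregate counts. This is a minimal sketch of a drop-off heuristic; the page names, counts and threshold are illustrative assumptions, not output from any particular tool:

```python
# Flag pages whose drop-off (exit) rate is unusually high; sessions
# ending on flagged pages become the candidates for replay.
page_views = {"landing": 10000, "product": 6200, "basket": 3100, "checkout": 900}
exits      = {"landing": 3800,  "product": 1400, "basket": 2200, "checkout": 120}

THRESHOLD = 0.5  # flag pages where more than half of viewers leave

for page, views in page_views.items():
    drop_off = exits[page] / views
    if drop_off > THRESHOLD:
        print(f"Flag '{page}' for replay: {drop_off:.0%} drop-off")
```

On these made-up numbers, the basket page stands out at roughly 71% drop-off, so its sessions are where replay time is best spent.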

UserReplay is not a substitute for other testing methods such as A/B testing; each has a role to play. But by giving you the power to use every single customer or visitor as part of the testing process, it provides a means to ensure the optimization of your site is ongoing and comprehensive.

If the UK Labour party had been able to apply the same level of testing, their approach to the election campaign, indeed their whole administration strategy, might have been markedly different.

Image: Intel Free Press/flickr cc