Almost everyone agrees on the theory of optimising web performance, but just what does it take to put the theory into practice?
With Google's Core Web Vitals update set to make its debut next month, using the new signals to assess real-world user experience will be a valuable asset when it comes to tracking the SEO performance of your website.
The Core Web Vitals report shows how your pages perform based on real-world usage data, monitoring a set of metrics covering loading speed, responsiveness and visual stability to help website owners measure user experience on the web.
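For context, each of the three metrics has a published "good" and "poor" boundary, and a page's score falls into one of three bands. The sketch below (an illustration, not Google's actual code) applies the thresholds Google published for the 2021 metric set: Largest Contentful Paint for loading, First Input Delay for interactivity, and Cumulative Layout Shift for visual stability.

```javascript
// Google's published thresholds for the three Core Web Vitals (2021 set):
// LCP (loading), FID (interactivity), CLS (visual stability).
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 }, // milliseconds
  fid: { good: 100,  poor: 300 },  // milliseconds
  cls: { good: 0.1,  poor: 0.25 }, // unitless layout-shift score
};

// Classify a single metric sample into the three bands tools report.
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

console.log(rate("lcp", 1800)); // "good"
console.log(rate("cls", 0.3));  // "poor"
```

In practice Google evaluates these bands at the 75th percentile of real-user data, so a page only counts as "good" when most visits clear the threshold.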
Just before Core Web Vitals goes live as a ranking signal, Carl Anderson, Director of Engineering at Trainline, sat down with TechRadar Pro (virtually) to discuss how his team has been working towards the launch.
Why is measuring web performance as a ranking factor important for customers?
Relevance obviously remains the number one factor in page ranking, but now that your site's performance (its load time, interactivity and visual stability) is also considered, it's important to optimise for these factors to increase your potential to reach more customers and provide a better user experience for more people.
How has your team been working towards the Core Web Vitals as a ranking signal and why is having it put in place important?
We’ve always been focussed on optimising our site for the user experience; however, the introduction of Core Web Vitals has given us an opportunity to capitalise on our investments and make sure we’re ahead of the curve when the signals come into effect. I manage the front-end teams at Trainline, but it’s been a cross-functional effort, working closely with the back-end teams to ensure we’re optimising performance at every level.
Establishing a baseline was the most important step, and we built from there: layering the APIs on top of the back end, and the website on top of those. The user experience is the culmination of all of these layers. A baseline allows you to analyse your performance, which gives you a foundation for building your hypotheses.
For example, we realised we could optimise how our web application communicated with our data platform by introducing HTTP connection pooling, which allowed us to shave seconds off the overall booking flow. Measurement was then the key to learning from our approach and iterating on what we had built.
Creating an optimal user experience comes down to several factors, including perceived load time, interactivity and visual stability, and to balancing these so the site behaves and loads as the user would expect. The key point is that there is no single speed metric any one team should align itself to: in our experience, optimising for one often means compromising on another, which ultimately impacts the user experience.
We aligned our approach with the user’s priorities. For example, on our homepage we’ve focussed on making the journey search widget interactive as soon as possible, so that customers don’t have to wait for other elements to load before they can begin inputting their query. Search is a key component of the Trainline user experience, which is why we focussed on it.
What has your web performance optimisation journey been like so far?
We’ve been on the journey of optimising our web performance for the customer experience for a long time, but it’s been great to see our approach validated through the introduction of Core Web Vitals.
It’s a continual process: the more we build, the more we have to re-evaluate and adjust in order to optimise our performance. We’re constantly building on the product, with over 300 releases each week, while preventing this from degrading performance, or, even better, making it faster. Measurement has been a key factor throughout the whole journey: ensuring we continuously gather data in a consistent and reliable way so we can see how performance is evolving and where we can improve.
How has your team learned to separate out actionable insights from the reporting noise?
It’s about getting the right metrics in place. In the beginning we measured everything, but the breakthrough moment came when we started to correlate our metrics with both our business metrics and our web deployments: we could see that some releases had negligible impact while others slowed down or sped up performance, allowing us to fine-tune for the user experience. Linking the metrics to our business metrics then allowed us to prioritise our actions.
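The correlation step can be as simple as tagging each real-user sample with the release that was live when it was recorded and comparing a summary statistic per release. A toy sketch (the data and field names are invented, not Trainline's pipeline):

```javascript
// Median of a list of numbers.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Group RUM samples by the release they were recorded under, then
// summarise each release so regressions stand out.
function medianByRelease(samples) {
  const byRelease = {};
  for (const { release, lcpMs } of samples) {
    (byRelease[release] ??= []).push(lcpMs);
  }
  return Object.fromEntries(
    Object.entries(byRelease).map(([rel, vals]) => [rel, median(vals)])
  );
}

const samples = [
  { release: "v101", lcpMs: 2100 }, { release: "v101", lcpMs: 2300 },
  { release: "v102", lcpMs: 3200 }, { release: "v102", lcpMs: 3400 }, // regression
];
console.log(medianByRelease(samples)); // { v101: 2200, v102: 3300 }
```

A jump between consecutive releases, like v101 to v102 above, is the kind of deployment-linked signal that separates actionable regressions from background noise.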
Secondly, it’s about focussing on the right areas. For example, it’s instinctive to address the issues impacting the customers who receive the slowest experience, but to have the most impact you need to look at what’s affecting the majority of your users. This way you will make a more meaningful difference to a greater number of your customers.
Lastly, I would caution that averages can be highly misleading in the context of web performance: they don’t give you a clear picture and can mask the extent of an issue impacting your speed. Using percentiles to focus on specific groups of users instead proved instrumental on that journey.
How can the needle be shifted when it comes to web performance?
We optimise through what we measure. The combination of synthetic measurements and Real User Monitoring (RUM) is key. Synthetic measurements let us compare improvements in exactly the same test environment, under the same conditions, which means we can compare the performance of each version of our code.
We would then test these potential improvements in the field, recording user data that provided insights into the real experience, as this is what really matters; wins in a lab environment don’t always translate into wins in the field.
Finally, we focussed on where we could make the biggest difference to the majority of customers, so we could have the greatest overall impact. Moving the needle comes down to smart measurement that guides you towards the changes that genuinely improve the user experience.