The Ugly Truth About Apdex Score
July 10, 2018
by Sharon Solomon
Online publishers are drowning in a sea of data. Unfortunately, while collecting this data is easy, translating bounce rates, page load time, pages per session or time on page into business insights can be challenging. Is Application Performance Index (Apdex) really helping solve this problem?
The Apdex scoring system was invented to introduce standardized scoring into the world of performance metrics. The online experience is in a continuous state of change, with more and more variables defining the user experience. Unfortunately, Apdex hasn’t evolved at the same pace.
The categorization of the user sentiment into Satisfied, Tolerating and Frustrated (more about this later) is no longer enough to understand the impact of modern Ad Tech and Martech stacks on performance and business metrics.
What is Apdex?
Apdex was developed by an alliance of companies as an open standard to report, benchmark, and track application performance. It serves as an easy way to translate application performance benchmarks into a simple comparison metric that measures responsiveness over a period of time. The Apdex Alliance was created in 2004 by Peter Sevcik, President of NetForecast, Inc. It reached 2,000 contributing member companies in 2010, despite having only 11 members in 2007.
Apdex has given Ops teams the luxury of a standardized benchmark that does not require much technical know-how. Apdex scores can be monitored by marketing, sales, and IT teams, who can collaborate on improving the figures – great in theory, but not ideal for today’s complex ecosystems.
Apdex provides a single KPI that aligns everyone around one metric. However, this over-simplification comes at a cost.
For example, response time values do not reveal whether users are actually productive (level of engagement, time on page, conversion rates, etc.), an issue further complicated when large numbers of samples are collected: averaging the samples washes out significant details about user satisfaction. There should be a better way to analyze and measure what matters.
Apdex Score – Simplified Performance Grading
Apdex scores cut through the confusion caused by this massive influx of data. The method grades customer satisfaction against a predefined response-time threshold on a scale of 0 to 1, where 1 means that all users are satisfied. The metric can be applied to any source of end-user performance measurements.
The Apdex index is based on three application responsiveness zones:
- Satisfied: The user is fully productive. This zone covers response times at or below the target value (t seconds), where users are not impeded by application response time.
- Tolerating: The user notices performance lagging when responses exceed the target threshold t, but continues the process. This zone covers response times greater than t and up to 4t.
- Frustrated: Performance with a response time greater than the frustration threshold (F = 4t seconds) is unacceptable, and users may abandon the process.
The Apdex score is presented as a decimal value with a subscript representing the target time t, and is computed as (satisfied count + tolerating count / 2) / total samples. For example, with 100 samples and a target time of 3 seconds, where 60 responses are below 3 seconds, 30 are between 3 and 12 seconds, and the remaining 10 are above 12 seconds, the score is Apdex₃ = (60 + 30/2) / 100 = 0.75.
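The calculation above can be sketched in a few lines of Python (the function name and sample values are illustrative, not part of any Apdex tooling):

```python
def apdex(samples, t):
    """Compute the Apdex score for a list of response times (seconds)
    against a target threshold t. The frustration threshold is F = 4 * t."""
    satisfied = sum(1 for s in samples if s <= t)
    tolerating = sum(1 for s in samples if t < s <= 4 * t)
    # Satisfied samples count fully, tolerating samples count half,
    # frustrated samples count for nothing.
    return (satisfied + tolerating / 2) / len(samples)

# The worked example from the text: 60 satisfied, 30 tolerating, 10 frustrated.
samples = [1.0] * 60 + [5.0] * 30 + [15.0] * 10
print(apdex(samples, t=3))  # 0.75
```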
Apdex Score – A Thing of the Past?
Simply put, Apdex has become a “band-aid solution” given the developments in the online publishing industry and the internet space in general.
1 – The Human Factor
The reality today is that every service you deploy comes with a performance cost. More often than not, these services cause user experience (UX) problems that affect visitor engagement and also complicate the mitigation process when these issues are identified. Apdex provides no added value here.
The desired response time (Apdex t) is set manually. This threshold value is often derived from typical industry standards, which are often irrelevant to the specific website due to the reasons mentioned above. The modern day optimization process is much more complex and requires a comprehensive solution.
2 – Lack of Insights and Actionable Data
Response time values do not reveal the engagement levels of visitors. This lack of information only becomes more acute as a greater number of samples is collected. Averaging the samples washes out significant details about frustration with slow response times, especially when time values are not uniform.
This inadequacy of the index is partially responsible for the rise of user experience measurement applications that are more qualitative than quantitative.
3 – The Growth of Third Party Integrations
Apdex scores are becoming more and more inaccurate due to their inability to attribute UX issues or success to a specific action, tool, or solution. They simply don’t lead to any actionable insights. This is further exacerbated when significant parts of the code that render the page are delivered from off-page sources like code libraries or third party integrations.
Every user action on a web app sends requests to dozens of different sources, which then send back code. These off-page resources can, and do, affect the user experience, yet the Apdex score is unable to measure the impact these fetches have. The bottom line is that Apdex simply cannot account for these variables.
A good example is social-media giant Facebook, which operates as a Single Page Application (SPA). Single-page apps rely heavily on off-page code libraries.
Facebook’s own JavaScript library, React, is no different. While you scroll through your news feed, new messages are pushed to you dynamically. Obviously, this would be very difficult to do if all the code were sitting directly on the page. Applying the Apdex philosophy to this complex ecosystem, which presents every user with a customized experience, has no benefit.
Another example is digital advertising, which is now almost entirely delivered by third party code. This has a direct effect on your website’s user experience (UX). However, Apdex simply can’t accurately attribute UX metrics to your third party tags, making it an ineffective metric with little to no benefit.
Third Party Monitoring – The Future of Performance Optimization
Just as you cannot assume you are healthy simply by maintaining a healthy diet and watching for irregular symptoms, the modern online publisher cannot maximize business KPIs by counting on Apdex alone. And just as regular blood tests and dental checkups keep you healthy, your tech stacks can only be optimized by monitoring third party services closely.
This is especially true for online publishers, who typically implement over 70 third party services in their Ad Tech and Martech stacks. These tags can have a massive impact on your website’s performance and, ultimately, on your business revenue.
The reality today is that every service comes with a performance cost. Your users can’t and won’t differentiate between your site’s own performance issues and those caused by third party dependencies. If your web pages don’t load fast enough, visitors will probably leave with a negative impression, regardless of your Apdex score.
How are you monitoring your tech stacks? What solution are you implementing right now? Feel free to comment below.