Will Making Your Site Faster Really Lead To More Revenue?

When considering any sort of site speed optimization effort, the big question is “will this actually help the bottom line?” Those of us who enjoy site speed optimization (yes, we exist) might say that you should always just be making the site faster — make it as fast as you can! But we know there are a lot of other considerations involved. And if you advocate for a site speed project with promises of huge payoffs that don’t come true, it hurts your credibility and stakeholders’ willingness to fund similar efforts in the future. Google has also recently kicked off a lot of new potential optimization projects with its announcement that performance (under the moniker “Core Web Vitals“) will become a bigger part of its all-powerful search engine rankings.

Prioritizing speed optimization is a balancing act made up of questions like “is the extra time this custom font takes to load worth it?” or “sure, removing the ads will make the page load faster, but will it increase sales enough to offset the lost ad revenue?” These are hard questions that all seem like judgement calls you won’t really know the answer to until you do the project! I’ve worked on plenty of site speed projects, and while virtually all of them succeeded in making the site faster in some way, only a few actually seemed to make a big impact on revenue. This seems a little at odds with the advice we constantly see about the great benefits of squeezing every millisecond out of performance.

So how do you know if this kind of project is going to be worth the effort or not? In this article I will try to lay out a rubric for making that decision in a more informed way. While the rubric does spit out a score, I think it is important to remember that understanding the user experience and performance metrics in a holistic way is much more helpful.

The first thing to understand is that this is NOT a scenario where faster always equals better for your site. There are plenty of numbers thrown around about site speed and how crucial it is to your site’s success, but what was true for someone else’s site might not be true for yours, and as we will see below, many of the wider studies come with a bunch of caveats.

Some of the numbers are particularly drastic! “Google found 53% of mobile site visits were abandoned if a page took longer than 3 seconds to load.” or “Akamai says a 100ms delay can hurt conversion rates by 7%“. Sounds pretty severe if your site is slow, but let’s take a closer look at those numbers…

Crunching The Numbers 

Sensationalist clickbait-y numbers aside, there is at least one pretty good tool out there to try to figure this out. Google has provided a slick tool where you plug in your mobile numbers to estimate the benefit of a speed boost — but when using this tool there are a few things to be wary of.


That “could” is doing a lot of work here.

In my test example I say I have 100k (mobile) monthly visitors at a 2% conversion rate, and my average order is $75. This tool thinks that if I improve my speed from 2.3s to 1.1s I could increase revenue by $126,000, nice! Before you tell your project manager you have a way to increase revenue by 80%, please notice a couple of things here:

  1. Google is being a little sneaky here by having us put in monthly numbers but spitting out an annual revenue number. I guess $10,517 wasn’t impressive enough? That’s a 7% revenue bump, NOT 80%+. Still a pretty sizable increase though!
  2. The default assumes you cut your site speed in half. What happens if I put in something a little more reasonable? If I put in an improvement of 2.3s to 1.9s (still 17%, a good result) then my revenue bump drops to a more modest 1.5% revenue improvement.
  3. This is “mobile” only: don’t put in your total monthly visitors or your overall conversion rate. While improving speed on the desktop side will also likely improve conversion, it probably won’t be by as much.
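To make the sleight of hand in point 1 concrete, here’s a quick back-of-the-envelope check of the example numbers (a sketch in Python; the $126,000 figure is the tool’s reported annual lift):

```python
# Baseline from the example above: 100k mobile visits/month,
# 2% conversion rate, $75 average order value.
visits = 100_000
conv_rate = 0.02
avg_order = 75

monthly_revenue = visits * conv_rate * avg_order  # $150,000/month
annual_revenue = monthly_revenue * 12             # $1,800,000/year

# The tool reports an ANNUAL lift computed from MONTHLY inputs.
annual_lift = 126_000
vs_monthly = annual_lift / monthly_revenue  # 0.84 -> reads like an 80%+ bump
vs_annual = annual_lift / annual_revenue    # 0.07 -> the real ~7% bump

print(f"lift vs monthly baseline: {vs_monthly:.0%}")  # 84%
print(f"lift vs annual baseline: {vs_annual:.0%}")    # 7%
```

Same dollar figure, two very different percentages, depending on which baseline you (perhaps accidentally) compare it to.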

According to the fine print linked here, this is a 2017 study Google did based upon 383k unique profiles. I don’t see that they’ve published the detailed results anywhere… though it would be very interesting to read if they did! The study itself must have produced some quality results considering the scale and data source, but the tool seems specifically designed to overstate the benefits of speed improvements.

The numbers themselves seem pretty reasonable however — something like a 0.4% improvement per 100ms, a very far cry from the 7% per 100ms I mentioned above. What gives?

Let’s take a deeper look at that “100ms can hurt conversion rates by 7%” number. Here’s the original study (Akamai 2017). First, that 7% number applies to mobile; on desktop the number was 2.4%. Many of these studies (especially from Google) are focused on mobile because that’s where the real pain points are and where sites most need to improve. But take that mobile-only number and apply it to your site’s overall traffic and you’re going to be way off!

Second, that 7% number is measured from the peak point of the conversion-rate curve. For example, on desktop at that maximum conversion rate point, pages that took 2.8s converted 2.4% worse than pages that took 2.7s. That doesn’t mean that taking your site from 5.1s to 5s gets you the same bump, which is obvious once you notice that a constant linear effect would mean going from 8s to 3s increases conversion 120% (or 350% on mobile!). This assumption of a linear effect is very important! The user response to site speed is very much non-linear (improving from 12s to 11.5s will have a much different impact than 4s to 3.5s), but in many studies it is assumed to be the same. The Akamai study rightly shows it is non-linear, but presents the steepest point on that curve and then lets us make the bad assumption that it is a constant effect.
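A little arithmetic shows how badly the linear extrapolation breaks down (a sketch of the 8s-to-3s example above):

```python
# Akamai's peak-of-the-curve figures: +100ms cost ~2.4% of
# conversions on desktop and ~7% on mobile, at the steepest point.
per_100ms_desktop = 0.024
per_100ms_mobile = 0.07

improvement_ms = 8000 - 3000   # cutting load time from 8s to 3s
steps = improvement_ms / 100   # 50 hundred-millisecond steps

# Treating the peak figure as a constant slope gives nonsense:
print(f"desktop: +{steps * per_100ms_desktop:.0%} conversion")  # +120%
print(f"mobile: +{steps * per_100ms_mobile:.0%} conversion")    # +350%
```

No real-world speed improvement more than doubles conversion (let alone quadruples it), which is exactly why the steepest slope on the curve can’t be applied as a constant.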

Let’s also consider the source. While this certainly isn’t the tobacco industry telling us that maybe second-hand smoke isn’t so bad after all — many of the large-scale studies come from companies like Akamai who have a financial interest in convincing us we need to make our sites faster. I greatly appreciate that they release this research, just make sure you apply your own critical thinking here too! I would tend to trust the raw data in these cases, but reading the caveats in the fullest version of a study will often reveal a less drastic take.

Also, many of these numbers get passed around poorly cited (or completely uncited) in infographics. As internet-savvy people we know this, but sometimes we Google something quickly and don’t have time to follow down every rabbit hole. Well, rabbit holes are my specialty — so let’s go:

I Googled “benefits of improving your site speed” and the knowledge box at the top cited this article from a marketing agency. It looks like a fine article, so apologies to them for picking on it here (hey, as an apology, maybe that link will help maintain their #1 ranking?).

The part of my article where I screencap a post from a different post that’s actually from yet another post to illustrate a number I think is bogus…


The top finding in the featured infographic is “47% of consumers expect a web page to load in 2 seconds or less”. The article links a Crazy Egg article as its source, which in turn links a Neil Patel article as its full source (he did found Crazy Egg…), where you can see that the 47% number is sourced from either Gomez.com (now Dynatrace) or Akamai — both site performance companies. Turns out this 47% number is actually from Akamai again, from a study Forrester ran for them in 2009 (I would link it here if I could find a copy of it..)! But it seems it was NOT based upon testing whether users actually leave or not, but upon asking people the following question:
“What are your expectations for how quickly a Web site should load when you are browsing or searching for a product?”

  1. Less than 1 second
  2. 1 second
  3. 2 seconds
  4. 3 seconds
  5. More than 4 seconds

First, does that mean that only 47% picked one of the first three options?? If so, wouldn’t an equally valid headline conclusion be “53% of consumers expect a web page to load in 3 seconds or more”? Or do they mean that 47% picked “2 seconds”? Or is it simply a case of survey respondents picking one of the middle options on a question they don’t really have a specific opinion on beyond “fast website please”? Again, I can’t find the original published study, only the myriad references to a questionable conclusion. An interesting full read on the problems with this study (and where I found the original question) is here.

The point here is not to crap on Akamai, Google, Forrester, etc. (but gotta have a hobby, right?) — the point is to be careful with any of those shiny infographic numbers on performance impact. Measuring web performance is complex, so imagine how hard it must be to run this kind of experiment.

So we have some decently good numbers in amongst a lot of dreck, but what about that big question mark of “could impact”? What does that mean? It means that an improvement of 500ms on one site may not have the same impact as 500ms on yours, so to better understand what’s happening we need to understand why performance has different effects on different sites.

Performance Impact Factors

Web performance is a very deep and technical arena. And I am no expert! It is massively important, but it sits in a hole between developers, ops, and marketing analytics. These three groups don’t always have the smoothest working relationship, but there are big parts of the puzzle in each group, and striving for some understanding of the bigger picture is crucial.

Like any area of web measurement, behind the headline numbers like “page speed” or “unique visitors” there’s a lot of depth and detail, and this means that making a good analysis from those numbers requires understanding what they really mean.

#1 – How fast is your site now, and how will you measure improvement?

Before making any changes it is very important to have a good measurement of where you’re at first! Sounds obvious, but I’ve come into multiple site speed projects where there was at best a rather vague idea of how fast the site is.

This doesn’t mean “users have complained it was slow”, or even “GTMetrix gave my homepage a D” (though that can be a start and is a lot better than “it feels slow”), it means something more systematic and ideally including real user measurements (RUM) as well. I’ve blogged about this before in discussing the difference between just testing HTML response time vs. synthetic full-page testing vs. RUM testing so I won’t go into that too much again.

The key part is you need something that measures multiple types of pages over & over in a consistent way (synthetic monitoring) as well as something that measures the actual timings that users get when using your site (RUM). The good news is compared to that article I wrote years ago the tools have come a long way! This article is long enough without me listing and reviewing tools — but the essential free tools out there are:
Google Lighthouse (automate-able testing kit built right into Chrome)
Google Chrome User Experience Report (provides real user data)
Webpagetest.org (detailed test results run from a test browser wherever in the world you want to test from)

Google’s PageSpeed Insights is a good place to start because it includes data from Lighthouse and the Chrome User Experience Report.

#2 – How will users respond to improved performance?

This is the core question. We assume users will like the site being faster, but will it change their behavior?

If you’re familiar with price elasticity of demand in economics, this is the same basic concept. Demand for an “elastic” good changes as the price changes; demand for an “inelastic” good doesn’t. So for example — gas is relatively inelastic, meaning whether the price is $1.80/gallon or $2.70/gallon, it’s not that likely to change whether you pull over and fill up, even though that’s a 50% increase in price.

Compare that to an elastic good like movie tickets: if the ticket price goes from $9.25 to $14, then maybe you aren’t so psyched to go to the movies anymore. This is obviously a very simplistic example; if you’re nearly out of gas you’ll likely stop and buy some almost no matter the price, but you might then cancel your drive to Yellowstone this summer. Indeed, studies have shown gas prices to be inelastic in the short term and elastic in the longer term. And additionally it’s all on a continuum — no good is perfectly elastic or inelastic, goods are relatively more or less elastic, and it can be quite situational.
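For the record, the textbook formula is just the ratio of the percentage changes. A tiny sketch (the demand drops here are invented purely for illustration):

```python
def elasticity(pct_change_demand, pct_change_price):
    """Price elasticity of demand: |% change in demand| / |% change in price|.
    Greater than 1 means elastic; less than 1 means inelastic."""
    return abs(pct_change_demand) / abs(pct_change_price)

# Gas: price up 50% ($1.80 -> $2.70), demand barely moves (say -5%).
print(elasticity(-0.05, 0.50))  # 0.1 -> inelastic

# Movie tickets: price up ~51% ($9.25 -> $14), demand drops hard (say -60%).
print(elasticity(-0.60, 0.51))  # ~1.18 -> elastic
```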

With that definition behind us I’d like to introduce a new concept:  “site speed demand elasticity” — the degree to which the willingness to wait for a website varies as the response time of the website changes.

Unfortunately the terminology can get a little confusing, since “high elasticity” would mean “low tolerance for slowness”. Since I was not an econ major, I’m going to stick with what seems clearer to me, using “low tolerance” to indicate something that is relatively more elastic to site speed.

Based on the Wikipedia page linked above, here’s a list of elasticity factors — tweaked a little for our case of site speed.

Factors that impact a user’s tolerance for a slow website:

  1. Availability of a substitute 
    If you’ve got a competitor offering the same thing and their site is a little faster, then users might switch.
  2. Breadth of offerings 
    The broader a site, the higher the tolerance. If you are eBay or Amazon and offer a huge variety of products (both in terms of number of SKUs and different markets served), then tolerance is likely to be higher because it’s overall more efficient for users to go to only one site and check out once for their toothpaste and power tools.
  3. Time Cost (as percentage of available time)
    For goods this makes more obvious sense, the higher percentage of your income you are spending the more you pay attention when buying the item. Since we’re defining cost as elapsed time, the analogy would be time compared to what you have available at that moment. Say you’re in line checking out at a physical store and you want to do a price check against an online retailer. You don’t have a lot of time in that scenario and need the results pretty quickly. Compare that to being bored killing some time doing some “window shopping” online, where you probably are going to be more tolerant.
  4. Necessity
    The more necessary the service, the higher the tolerance. If your state’s tax filing website is slow, you’re still going to use it since you need to file your taxes.
  5. Duration of slowness 
    If your site is terribly slow, but only for a while, then it won’t affect you so much. This one gets a little tricky though, as I would expect consistently inconsistent speed to be worse than consistently mediocre speed because of slowness compared to user expectations. In other words, don’t worry so much if you had a couple really bad speed days if in general things are good.
  6. Brand loyalty
    Self-explanatory, the higher someone is committed to your brand the more tolerance they will have. Obviously poor performance over time will undermine that loyalty, so just because users are loyal now doesn’t mean they will always be.
  7. Who pays (for the time)?
    An example of this for goods could be plane tickets: if your company is paying, then you don’t really care so much how much they cost, versus if you are buying tickets with your own money. The analogy to speed would be if you were at work and someone else was paying you for your time no matter what you were doing — then maybe you wouldn’t mind so much if a site was slow vs. if it was on your own time.
  8. Addictiveness
    If your site is app purchases for a game someone is addicted to then they are more likely to tolerate slowness.

#3 – In what way is your site particularly slow? More specifically, is there a specific pain point?

This requires knowing more both about your different speed metrics and how those metrics can affect your site. I’ve made it this far in a detailed article about performance without the avalanche of different performance metric acronyms, but the fun times are over! To know how to use this factor you have to know if maybe you’ve got a problem with time to first byte (TTFB) or first contentful paint (FCP) or time to interactive (TTI), and how that particular problem might affect your users. The reasoning here is that if you have an issue that is a particular problem for your end users then you can be giving people a bad experience while still having pretty decent overall speed numbers. A few possible examples:

  1. You’ve got a really slow TTFB, meaning it takes a long time for a page’s basic HTML to be delivered to the browser. All of your images might be on a great CDN, but your HTML pages are on a slow WordPress server far away from your target audience. This scenario could still give you an ok overall page load time in your RUM measurements, but your TTFB and FCP scores would be really bad, and your users could be dissatisfied without you knowing.
  2. You’ve got a site with a bimodal distribution of site timings, causing some subset of your users to have a terrible experience while leaving the overall numbers looking ok. Say you’re a big company and you host your own site locally. Everything seems super fast to you and your in-house developers, but maybe your servers are in Texas and half your audience is in Asia with some abysmal timings, yet the averages look fine.
  3. You have a big problem with content shifting around while it’s loading (aka Cumulative Layout Shift (CLS)). The load times might be ok overall, but your users are confused and Google is complaining.

These may seem like narrow cases, but there’s a lot more out there, and a lot of sites have use cases that don’t quite fit the mold. It might seem at first glance that having a major problem in just one part of your performance metrics should lower the overall score we’re working toward here, since it’s just one metric out of many. But the reason this is a factor by itself is that if there is one particular pain point, fixing it can be easier (since it’s only one point) and much more impactful (since it can be a big irritation for some set of users), all while otherwise flying under the radar.
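Example 2 in particular is easy to miss if you only look at averages; percentiles surface it immediately. A minimal sketch with invented RUM timings:

```python
import statistics

def percentile(data, p):
    """Nearest-rank percentile (simple, non-interpolated)."""
    s = sorted(data)
    k = max(0, round(p / 100 * len(s)) - 1)
    return s[k]

# Hypothetical RUM page-load times in seconds: a fast local audience
# plus a slow far-away audience (the bimodal case above).
local = [1.2, 1.4, 1.1, 1.3, 1.5, 1.2, 1.4, 1.3, 1.1, 1.2]
remote = [6.8, 7.5, 7.1, 6.9, 7.3, 7.0, 7.4, 6.7, 7.2, 7.6]
timings = local + remote

print(f"mean: {statistics.mean(timings):.2f}s")  # 4.21s -- looks "ok"
print(f"p50: {percentile(timings, 50):.1f}s")    # 1.5s -- great!
print(f"p90: {percentile(timings, 90):.1f}s")    # 7.4s -- half your users suffer
```

The average hides the split entirely; the p90 (or a histogram of the raw timings) is what tells you a chunk of your audience is having a terrible time.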

#4 – How much can you reasonably expect to improve performance?

The Google tool above defaults to assuming timings are cut by 50%. While that would certainly be nice, getting anywhere near that might be a stretch. The last site speed project I worked on improved timings by 6%, and that was in my estimation a reasonably successful incremental project! If you’ve already gone through a lot of rounds of optimization, or the site was built with speed in mind, there may not be much improvement left to find. If you’ve used automated tools like Lighthouse and they don’t show many areas for improvement, then what’s the point exactly?

#5 – What is your site volume?

This one is pretty easy to understand, but also easy to forget! Improving site speed doesn’t directly bring you more traffic, so if you only have a small number of visitors, it doesn’t matter how fast you make things: you won’t see much revenue lift. Even the smallest site needs to be of a “reasonably usable” speed, so you don’t ever want a big percentage of users frustrated, but if you improve site performance by 5% and you only have 1,000 monthly visits, it’s not going to amount to much.

Figuring Out If It’s Worth The Effort

Ok — let’s try a quick grading rubric here. This obviously isn’t scientific, just a way to get a score 0-100 of how impactful a site performance effort may be.

Let’s first try that for my site right here (quantable.com). This is just my company’s site and blog, I’ve spent a little time optimizing the site speed but honestly not that much and it is a sort of typical WordPress site. If you’re here you know that my content is totally unique and can’t be found anywhere else (lol), so I would consider the users to be a little more tolerant than average but not in an unusual way.

Parameter (score 0-5 * weighting factor = running total):

Current Speed (0=screaming fast already, 5=omg it’s like molasses): 2 * 4 = 8
Speed Tolerance (0=users are amazingly tolerant, 5=users will bail over milliseconds): 2 * 6 = 12
Specific pain point or blocker? (0=different speed metrics are all similar, 5=one part is terrible!): 1 * 2 = 2
Likelihood of Significant Improvement (0=nearly impossible, 5=you already see easy wins): 3 * 3 = 9
Site Volume (0=tiny traffic, 5=huge traffic): 1 * 5 = 5

Total: 36/100

Grading Scale:

0-30 Probably not big benefits
30-50 Some help, but carefully consider cost vs. benefit
50-70 Very likely to have measurable impact
70-100 Dire need (things have to be pretty bad to score in this range)

So for my own site (36/100) it seems like a bit more attention to performance couldn’t hurt, but that I’m not likely to see big gains. The way the scale works is that it’s quite hard to score really high and that most sites are likely to score roughly in the 30-60 range, which is intended to represent the reality that while site speed improvements will help most sites, it’s rare for it to have huge impact.

Let’s try it for a theoretical site — an ecommerce site with a modest number of users that sells stuff you can buy easily enough elsewhere, has done some optimization already, but is currently pretty slow.

Parameter (score 0-5 * weighting factor = running total):

Current Speed: 4 * 4 = 16
Speed Tolerance: 4 * 6 = 24
Specific pain point or blocker?: 2 * 2 = 4
Likelihood of Significant Improvement: 2 * 3 = 6
Site Volume: 3 * 5 = 15

Total: 65/100

This site definitely needs to spend some time on optimization! The combination of users who can find the same stuff elsewhere and slow current speed is a killer.
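Since the rubric is just a weighted sum of the five 0-5 scores (the weights sum to 20, so the maximum is exactly 100), it’s trivial to script. A sketch, with factor names of my own choosing, applied to the two examples above:

```python
# Weighting factors from the rubric; they sum to 20, so 0-5 scores
# yield a 0-100 total.
WEIGHTS = {
    "current_speed": 4,
    "speed_tolerance": 6,
    "specific_pain_point": 2,
    "improvement_likelihood": 3,
    "site_volume": 5,
}

def rubric_score(scores):
    """scores: factor name -> 0-5 rating; returns the 0-100 total."""
    return sum(scores[factor] * weight for factor, weight in WEIGHTS.items())

# My own site (quantable.com) as scored above:
quantable = {"current_speed": 2, "speed_tolerance": 2,
             "specific_pain_point": 1, "improvement_likelihood": 3,
             "site_volume": 1}

# The theoretical ecommerce site (its weighted rows sum to 65):
ecommerce = {"current_speed": 4, "speed_tolerance": 4,
             "specific_pain_point": 2, "improvement_likelihood": 2,
             "site_volume": 3}

print(rubric_score(quantable))  # 36
print(rubric_score(ecommerce))  # 65
```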

Now you try it. This 0-5 scale is pretty subjective I know — remember that this is intended to help organize a plan, not be any kind of final answer. Thanks for reading, I hope you found this a useful discussion!

Parameter (score 0-5 * weighting factor = running total):

Current Speed (0=screaming fast already, 5=omg it’s like molasses): __ * 4 = __
Speed Tolerance (0=users are amazingly tolerant, 5=users will bail over milliseconds): __ * 6 = __
Specific pain point or blocker? (0=different speed metrics are all similar, 5=one part is terrible!): __ * 2 = __
Likelihood of Significant Improvement (0=nearly impossible, 5=you already see easy wins): __ * 3 = __
Site Volume (0=tiny traffic, 5=huge traffic): __ * 5 = __

Total: __/100

Thanks to Tim Wilson for providing helpful feedback!
