Why Are We All So Bad at Math?
Categories: analytics
“A long time ago in a galaxy far, far away….”
If you’ve ever wondered exactly how long ago and how far away the iconic opening line of Star Wars is talking about, you’re in good company among analysts. A desire for specificity and the need to quantify everything are hallmarks of the trade.
Aren’t all galaxies “far, far” away though? Andromeda is our nearest major galaxy and it’s 2.5 million light years away. That seems deserving of at least a couple “far”s, right? Or perhaps not? In the scope of the entire observable universe, 2.5M light years seems just around the corner.
Here are the two galaxies (ours is on the left), showing what 2.5M light years looks like compared to the size of each galaxy.
The observable universe is estimated to be 93 billion light years across, so the distance between the Milky Way and Andromeda is a minuscule 0.0027% of that. For comparison, if a one-mile drive down to the drug store represented the entire width of the universe, I’d pass Andromeda 2 inches into my trip. For non-Americans: if we walk down the road to the chemist’s 1km away, we’d pass Andromeda 3cm into the trip.
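If you want to check the scaling yourself, here it is as a quick Python sketch; the only inputs are the two distances above, and the trip lengths are just the ones I picked:

```python
# Scale the observable universe down to a 1-mile (or 1 km) errand
# and see how far along the trip Andromeda falls.
UNIVERSE_LY = 93e9       # diameter of the observable universe, light years
ANDROMEDA_LY = 2.5e6     # distance to the Andromeda galaxy, light years

fraction = ANDROMEDA_LY / UNIVERSE_LY
print(f"{fraction:.4%} of the total")                # 0.0027%
print(f"{fraction * 63360:.1f} inches into a mile")  # ~1.7 in, call it 2
print(f"{fraction * 100000:.1f} cm into a km")       # ~2.7 cm, call it 3
```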
Ever thought it’s odd how the Marvel Cinematic Universe takes place mostly in the Milky Way? If Thanos is threatening all of creation, maybe some of the other galaxies need to step up and help out once in a while! Though it turns out that the Guardians of the Galaxy are actually guarding Andromeda, not the Milky Way (who knew!?) — so our nearest galactic neighbor is pitching in at least. The two galaxies are going to run into each other eventually so it’s good that we’re collaborating with them.
One or two galaxies is a drop in the bucket compared to the size of the total universe though, and most science fiction tends to draw the line at the scale of a single galaxy. Star Wars doesn’t span multiple galaxies, and neither does Star Trek, really. It’s as if our collective imagination doesn’t even want to try to grasp scale bigger than one galaxy, and frequently conflates “galaxy” and “universe”.
Honestly, I hate it when sci-fi stories use the words “galaxy” and “universe” interchangeably. Perhaps this is because I’m an incurably pedantic space nerd, but they are just so completely different in scale that they shouldn’t ever be compared to each other. Our galaxy is vast beyond comprehension, but the universe is made up of hundreds of billions of galaxies.
This many “-illions” in a row is usually where it gets hard to follow and my Carl Sagan schtick flames out. Everything just starts to read like “large number, large number, large number”.
Unfortunately these kinds of large numbers come up all the time in analytics, and we fail to comprehend and communicate them with surprising regularity. Think about these numerical comparisons:
– A database with 2B records vs. one with 100M.
– 50TB of network traffic vs. a petabyte.
– 10M pageviews vs. 750K.
– 99.99% uptime vs. 99%.
These numbers are so vast in their differences that they require completely different solutions and modes of thinking, but at a quick glance they often seem similar.
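Forcing yourself to compute the ratios makes the gulf obvious. A minimal sketch (assuming 1PB = 1,000TB, and comparing uptime by its allowed downtime):

```python
# How different are these "similar-looking" numbers, really?
pairs = {
    "database records (2B vs. 100M)": (2e9, 100e6),
    "network traffic (1PB vs. 50TB)": (1000, 50),           # in TB
    "pageviews (10M vs. 750K)": (10e6, 750e3),
    "allowed downtime (99% vs. 99.99%)": (1 - 0.99, 1 - 0.9999),
}
for label, (big, small) in pairs.items():
    print(f"{label}: {big / small:,.0f}x apart")
```

A 99.99% SLA permits one hundredth the downtime of a 99% one; that’s an entirely different engineering problem.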
As humans, we’re terrible with large numbers, but we don’t like to admit it, especially if we’re in a technical field. Historically this wasn’t much of a problem, as really big numbers didn’t come up very often. 10,000 was the largest uniquely named number in ancient Greek, and likewise the largest commonly used unit in traditional Chinese. In fact, the Greek name for 10,000 was “myriad,” which in English today means a “large but unspecified” number. Modern times have called for much larger named numbers, especially in the digital age.
Computers are good at math. It’s right there in the name; it’s their whole raison d’être. But the way computers deal with numbers is not at all the way humans deal with numbers, and it’s good to acknowledge that. There’s a tendency among tech people to think that the computers’ way is fundamentally superior since it is more precise, but that precision is worthless when the message itself is not understood. Humans need analogy and visualization to contextualize these large numbers. We can train ourselves to be somewhat better at it, but humans and computers are still galaxies apart when it comes to abilities with large numbers.
Much of the field of data storytelling is about learning to embrace a human outlook on data: let the computers do the computing and engage with the humans with narrative and analogy.
Why are we so bad at this? Unlike computers, which are designed to do math, we don’t come with that hardware built in, anthropologists believe. Humans can intrinsically understand only a very small number of items without counting them (an ability called subitizing), generally thought to be somewhere around 4 or 5.
That means we can look at a group of 3 things and immediately know exactly how many there are without explicitly counting them and using language. With groups greater than this limit we need to count, coming up with a linguistic abstraction (you know, a number) to help us along. This “counting” could be one-by-one or via things like estimation or statistical methods.
With smaller numbers like 20, this abstraction doesn’t present much of a problem for communication and comprehension. We thoroughly understand what a group of 20 people looks like, and even if we have to count heads to know whether the actual number is 17 or 20, we’re still comfortable with the meaning of the number. At some point, however, our intuitive understanding of these abstractions starts to break down. What does a crowd of 250,000 people look like? What does a forest with 1,000,000 trees look like? What about 1,000,000,000?
What we are good at is comparisons, especially visual comparisons. While we might not be able to envision what a million trees looks like, from high above in a plane we could tell that a forest with 1,200,000 trees was bigger than one with 900,000 trees (in an imaginary example where the trees were all the same size and spacing).
You likely have a much better mental picture of what a crowd of a million people looks like than a million trees. You’ve probably never seen a picture labelled as a million trees, whereas you have seen pictures of very large crowds of people with size estimates. You’ve got an abstraction for the crowd of people that you can start comparing from.
One million is a very interesting number. Rather than “myriad,” I would posit that “million” is the word most synonymous with the concept of “very big number,” and it’s also (perhaps coincidentally) roughly the maximum number of individual items we can visually discriminate. If 10,000 was historically the largest number human culture had common use for, we’re pushing that limit up as high as our brains can handle these days.
You’ve likely experienced this limit yourself in recent years if you upgraded to a screen where you could no longer see the pixels. This so-called “retina” resolution (as defined and marketed by Apple) depends greatly on viewing distance, since the closer you get, the more likely you are to see the pixels. The most pixels you can resolve while still seeing the whole screen at once works out to low single-digit millions for most people. Since we usually put a monitor in a position where it takes up most of our active field of view (if not more), this is, I think, a reasonable approximation of our visual limits. According to Apple, the threshold that determines whether something is “retina” or not is 57 pixels per degree (PPD), a measure that accounts for the size of the device, its resolution, and the viewing distance.
I personally use a 24″/61cm external monitor at 1920×1200 viewed from 60cm away. This is 41 PPD, and indeed I can see the pixels.
My laptop is 16″/41cm, 3072×1920, and viewed from 40cm away is 66 PPD. I definitely can’t see the pixels.
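If you’d like to check your own setup, here’s the PPD calculation as a small Python sketch. It assumes a flat screen viewed head-on, which is close enough for this purpose; the numbers match mine above.

```python
import math

def ppd(diagonal_in, res_x, res_y, distance_cm):
    """Approximate pixels per degree for a flat screen viewed head-on."""
    aspect = res_x / res_y
    # screen width from the diagonal and aspect ratio, converted to cm
    width_cm = diagonal_in * 2.54 * aspect / math.sqrt(aspect**2 + 1)
    # total horizontal viewing angle, in degrees
    angle = 2 * math.degrees(math.atan(width_cm / 2 / distance_cm))
    return res_x / angle

print(round(ppd(24, 1920, 1200, 60)))  # 41 -- below 57, pixels visible
print(round(ppd(16, 3072, 1920, 40)))  # 66 -- above 57, "retina"
```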
Resolution has gotten very complicated in recent years since many of us run scaled displays, where the device’s native hardware resolution doesn’t match the software resolution (otherwise objects would be too small for most people to see without huge monitors). This is why you don’t see very many 4K and higher resolutions in web analytics screen resolution reports: JavaScript reports your scaled effective resolution, not the hardware resolution.
| Resolution | Number of pixels | Calculated PPD | Retina? | Year it was the most popular desktop resolution (US) |
|---|---|---|---|---|
| 1024×768 | 786,432 | 27 (20″/51cm monitor @ 60cm) | No | 2012 |
| 1366×768 | 1,049,088 | 31 (22″/56cm monitor @ 60cm) | No | 2013-17 |
| 1920×1080 (HD) | 2,073,600 | 40 (24″/61cm monitor @ 60cm) | No | 2018-2022 |
| 2560×1440 (2K) | 3,686,400 | 54 (24″/61cm monitor @ 60cm) | Maybe | |
| 3840×2160 (4K) | 8,294,400 | 73 (27″/69cm monitor @ 60cm) | Yes | |
For me, I cannot see the pixels on a 24″ 2K monitor set at a normal viewing distance, but I know some people with better vision can. This puts our physical limit at around 3-4M pixels total, which is interestingly close to our concept of an arbitrary big number, and also around the point where numbers feel so big as to be untethered from experience. I assume people mix up million and billion on a regular basis partially because of the similar suffix, but also because above a million we have entered a realm so abstract we have little hope of understanding it.
A classic way to show the difference between million and billion is by converting seconds to larger units of time:
– 1 thousand seconds: 17 minutes
– 1 million seconds: 12 days
– 1 billion seconds: 31 years
– 1 trillion seconds: 31,688 years
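Or, as a quick sanity check in code (using a 365.25-day year):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(1e3 / 60)                 # ~16.7 minutes
print(1e6 / 86400)              # ~11.6 days
print(1e9 / SECONDS_PER_YEAR)   # ~31.7 years
print(1e12 / SECONDS_PER_YEAR)  # ~31,688 years
```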
I was a math major in college (the least surprising fact in this article), so my training emphasized correctness and generalizability. We learned to abstract another step away from the numbers themselves. Instead of saying “1,000,000 is one hundred times bigger than 10,000,” we might say “given two numbers α, β: where α is two orders of magnitude larger than β,” because the specific numbers weren’t relevant. If we did have to write out large numbers, we were more likely to use scientific notation than write a bunch of zeros, e.g. “2.1 x 10^9” instead of “2,100,000,000”.
Even if you weren’t as “lucky” as I was to be a math major, this kind of desire for correctness and precision pervades analytics. When doing the analysis, precision is absolutely necessary. When presenting the results, it gets in the way. We all know it does, but we can’t help ourselves. We want to be as correct as possible and show how rigorous and smart our analysis was. We should instead choose analogy and reductive visualization whenever possible.
This study showed participants a line labelled like the following and asked where on the line 1 million would sit:
Nearly 50% placed 1 million halfway between 1 thousand and 1 billion, which is of course massively incorrect: on a linear scale, 1 million belongs just 0.1% of the way along. If we asked the same people “what is 1 million + 1 million?”, I suspect that very few would say “1 billion,” but the context of presentation makes a huge difference in comprehension. We want to avoid obvious cognitive traps such as this in presentation, not say to ourselves, “well, the smart people will get it.”
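The arithmetic behind that “0.1% of the way along” claim:

```python
# Where does 1 million belong on a linear line from 1 thousand to 1 billion?
position = (1e6 - 1e3) / (1e9 - 1e3)
print(f"{position:.1%} of the way along")  # 0.1% -- essentially touching the left end
```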
Million is bad enough, but billion and trillion are when it gets really tough. As in our example above (a trillion seconds = 31,688 years), a trillion tends to be incomprehensible no matter the analogy or unit conversion, particularly if we’re trying to compare it to things on the scale of thousands or millions.
This gets me back to the whole galaxy vs. universe thing.
100 years ago, most astronomers believed that the Milky Way galaxy was the entire extent of the universe. In 1920, if we mixed up the two terms we’d have the conventional wisdom of the scientific community on our side saying they were basically the same thing. Other galaxies like Andromeda had been photographed, but the math indicated that for Andromeda to be a galaxy it would have to be unbelievably far away. Were that to be true, the universe would have to be mostly empty.
In 1920, the astronomers Harlow Shapley and Heber Curtis had a debate at the Smithsonian (now referred to as the “Great Debate” in astronomy) about whether or not there were additional galaxies out there beyond our own. It turned out that the math (and Curtis) was right, and there are a huge number of other galaxies. Interesting side note: Harlow Shapley’s son was Lloyd Shapley, the creator of the “Shapley value” in game theory used in many data-driven attribution models.
How many other galaxies are there? Somewhere between 200 billion and two trillion.
Two trillion. 2,000,000,000,000. 2 x 10^12.
You just can’t wrap your head around it.
NASA’s New Horizons mission recently indicated the number may be on the low side of that range, but even if the number is “only” hundreds of billions, that’s no less mind-boggling… and none of that includes the parts of the universe that we’ll simply never be able to see.
Let’s try to do some analogy here to see if we can un-boggle that a little, just for fun.
There are hundreds of billions of stars in the Milky Way, somewhere between 100B and 400B. We’ll pick a number on the low end and say 200B.
Now let’s start with something small and tangible: a shot glass of water. A shot glass holds about 50ml, or around a thousand drops.
If we shrink each star down to a shot glass of water and pour them all together, our bucket would need to hold 10B liters (2.6B US gallons) to represent the stars in the galaxy. That’s less of a bucket and more of a lake.
10B liters is 10M cubic meters, or a cube about 215m per side. Lakes tend to be shallower than they are wide, so we could also get the same volume with a lake 1 kilometer (0.6 miles) on each side and 10m (33′) deep. That’s a pretty modest lake. In fact, my dad lives on a lake about this size in Maine.
In Maine they call this a pond, but it’s still an awful lot of water. In fact it’s a completely unfathomable amount of water considered drop by drop. If you wanted to fill it with an eye-dropper one drop at a time, it would take you over 2M years at 3 drops per second.
So what happens when we have 500 billion ponds full of stars, one for every galaxy out there (taking a round number from the 200B-2T range, and assuming the Milky Way is an average galaxy, which it seems to be)? Things start to get a little crazy. Now we have 5B cubic kilometers of water: a cube 1,710km on each side. Let’s make that more pond/ocean-shaped: 4km deep (the average depth of the Pacific Ocean) and 35,000km on each side.
Oops, turns out we’ve now overflowed our planet. We’ve gone well past the total of all water on the earth. The estimated volume of all the water on earth is about 1.4B km^3, so we’d need between three and four earths filled drop-by-drop to represent every star in the universe. So if our galaxy’s stars fill one pond, the universe’s stars fill all the oceans of the earth about 3.6 times over.
Here’s where the analogy can get a little confusing: the earth doesn’t have as much water as you think it does! We look at pictures of the mostly blue globe from space and can’t help but think it’s a sphere made mostly of water. It’s not; the water is actually more of a thin film coating a rocky sphere, like a golf ball picked up out of a puddle with bits of water in its shallow dimples.
We’d need 4 dry earths to fill up with water one at a time, or one sphere of water about 2,120km in diameter (solving 4/3 x π x r^3 = 5B km^3).
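Here’s the whole chain of the analogy in one runnable sketch, using the estimates above (200B stars per galaxy, 500B galaxies, a 50ml pour per star); any small differences from my figures are rounding:

```python
import math

STARS_PER_GALAXY = 200e9      # low-end Milky Way estimate
GALAXIES = 500e9              # a round number from the 200B-2T range
ML_PER_STAR = 50              # one shot glass per star
DROP_ML = 0.05                # ~20 drops per milliliter

galaxy_liters = STARS_PER_GALAXY * ML_PER_STAR / 1000
print(f"one galaxy: {galaxy_liters:.0e} liters")          # 1e+10 (10B) liters

galaxy_m3 = galaxy_liters / 1000                          # 1,000 liters per m^3
print(f"as a cube: {galaxy_m3 ** (1/3):.0f} m per side")  # ~215 m

drops = galaxy_liters * 1000 / DROP_ML
years = drops / 3 / (365.25 * 86400)                      # at 3 drops per second
print(f"eye-dropper time: {years / 1e6:.1f}M years")      # ~2.1M years

universe_km3 = GALAXIES * galaxy_m3 / 1e9                 # 1e9 m^3 per km^3
print(f"the universe: {universe_km3:.0e} km^3")           # 5e+09 km^3
print(f"earths of water: {universe_km3 / 1.4e9:.1f}")     # ~3.6

radius_km = (universe_km3 * 3 / (4 * math.pi)) ** (1 / 3)
print(f"as one sphere: {2 * radius_km:,.0f} km across")   # ~2,100 km
```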
Despite holding almost 4 Earths’ worth of water, that sphere is actually quite a lot smaller than earth. It’s even smaller than Pluto, celebrity ex-planet and Kuiper belt object.
Volumes of spheres scale up quickly with increased diameter, sort of like how one large pizza is usually more pizza for the buck than two mediums. So ultimately, if you learn anything from this article, it’s that a spherical pizza with a molten cheese core would be an exceptionally good deal.
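And since we’ve come this far, here’s the pizza math too; the menu sizes are hypothetical, but the principle holds:

```python
import math

def pizza_area(diameter_in):
    return math.pi * (diameter_in / 2) ** 2

# a hypothetical menu: one 18" large vs. two 12" mediums
print(f"one 18-inch large: {pizza_area(18):.0f} sq in")        # ~254 sq in
print(f"two 12-inch mediums: {2 * pizza_area(12):.0f} sq in")  # ~226 sq in

# and the spherical pizza: volume grows with the cube of the diameter
sphere = (4 / 3) * math.pi * (18 / 2) ** 3
print(f"18-inch spherical pizza: {sphere:,.0f} cubic inches")  # ~3,054
```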
NB: Any math errors made simply prove my point that humans are pretty bad with large numbers.