Fat Tailed Thoughts: The Internet

Hey friends -

Appreciate all the feedback, please keep it coming!

In this week's letter:

  • The internet - what is it and how we went from military-funded academia to 4.5 billion internet users in 30 years

  • Facts, figures, and links to keep you thinking over a drink

  • A drink to think it over

Total read time: 14 minutes, 50 seconds


The Internet

I can't really remember a pre-internet world. I can remember the hiss, whirr, and beeps of a dial-up modem connecting, but I can't really remember slow internet either. For most of my life, I've lived in areas with good wired internet and, more recently, good access to Wi-Fi. Here in NYC, Verizon's been winning business away from Spectrum by selling access to superfast "fiber" internet, as opposed to Spectrum's... not fiber, I guess? Unwittingly, it seems I've always had access to fast internet, so it came as a real surprise to see the ongoing congressional debates on how to provide broadband internet (speeds greater than 25 megabits per second) to the estimated 21 million to 161 million Americans who lack access today.

It got me wondering - what is this internet thing anyhow?

We'll start with a quick trip down memory lane to the internet's origin story before turning our attention to the global infrastructure that makes the internet work today. We'll wrap up with a look to the future - what's on the horizon for the next phase of growth. As we journey through the evolution of the internet, we'll find that despite terms like “cloud” and “digital,” the internet is very real, entirely dependent on massive physical infrastructure, and subject to the same geographical challenges that almost any globally delivered service will encounter. It's not actually all that mysterious, and it certainly isn't magic - it's mostly a bunch of cables.

An important caveat before we jump in - we're going to focus on the standards and hardware that make up the internet, not the web and other applications that run on it. The Federal Networking Council's 1995 definition of the internet is a bit dense, so we'll substitute a more colloquial definition. The internet is:

  • a network of networks that can connect using unique addresses, such as Internet Protocol (IP) addresses, and communicate using messaging standards such as TCP/IP,

  • a collection of resources, like servers hosting webpages, that can be reached from those networks, and

  • the underlying infrastructure that connects those networks.

Don't worry if some of the terms are unfamiliar for now, they're all part of the story.


The Russians are coming, the Russians are coming!

Flashback to the '60s and we have local computer networks made up of computers all physically connected to one another, but each network is isolated and can't talk to the others. One of the largest networks is IBM's SABRE, a network of 2,000 computers for airline reservations that communicate over telephone lines. Earlier attempts to hook up multiple computer networks to create a network-of-networks failed because messages were centrally routed, like old-school telephone switchboards. These central routing points simply couldn't scale to support significant volumes, and when they inevitably failed, they'd take the whole network offline.

Despite the early successes of local computer networks, the Air Force realizes that any communication network that relies on central routing will always be highly vulnerable to attack. The Air Force funds engineer Paul Baran, who publishes Report P-2626 in 1962, a design for a survivable communications network that: uses a decentralized network with multiple paths between any two points, divides user messages into message blocks, and delivers the messages by store-and-forward switching. Core to making the network function is packet switching, a concept Baran develops fully in 1964 following his initial paper. Packet switching chops a message up into individual packets, where each packet has a header that says where it's going and a payload that contains a piece of the actual message. Part of the power of packet switching is that each packet can take its own path from source to destination, and the packets can be reassembled on arrival, helping prevent single-point-of-failure problems.
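The chop-tag-reassemble idea is simple enough to sketch in a few lines of code. This is a toy illustration of the concept only (the names and packet format are mine, not any real protocol's):

```python
# Toy illustration of packet switching - chop a message into payloads,
# tag each with a header (destination + sequence number), let them travel
# independently, and reassemble them at the destination.
import random

def to_packets(message: str, dest: str, payload_size: int = 8) -> list[dict]:
    """Split a message into packets; each header says where it's going."""
    return [
        {"dest": dest, "seq": i, "payload": message[i:i + payload_size]}
        for i in range(0, len(message), payload_size)
    ]

def reassemble(packets: list[dict]) -> str:
    """Packets may arrive out of order; sequence numbers restore the order."""
    return "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("ATTACK AT DAWN FROM THE NORTH", dest="10.0.0.7")
random.shuffle(packets)  # simulate packets taking different paths
print(reassemble(packets))  # the original message comes back intact
```

Because each packet carries its own routing information, no single central switchboard has to survive for the message to get through - exactly the property Baran was after.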

Shortly after the publication of Baran’s paper and an independently developed approach to packet switching by Donald Davies in 1965, the Department of Defense's Advanced Research Projects Agency (ARPA, later DARPA) funds a new initiative in 1967, ARPANET, to become the first network of networks. To launch a network-of-networks, we first had to solve two problems very much grounded in the physical world - a standard interface between the computers and the network, and the physical connections between the computers so we could actually have a network. The former was solved through a nifty device called an Interface Message Processor, the first of which were installed at UCLA and the Stanford Research Institute to allow their computers to communicate over ARPANET. Today, we call those devices routers, and they serve the critical function of moving data between networks, including hooking up your home network to the internet. The latter challenge of physical connections was solved using a network that was already built - phone lines. While a great starting point, in time those phone lines would turn out to be inadequate and become a major place for innovation. More to come on that in a bit.

In 1969, just two years after the initiative was funded, the first message on ARPANET was sent from UCLA to the Stanford Research Institute, and boom! the internet was born. Well, sort of. While basic messages could be sent, most of what we think of as the internet still had yet to be developed and the physical connections between networks had yet to be laid. While the latter would take more time, the former moved at an extraordinary pace.

From government project to commercial use

In the few years after that initial message, the university teams working on ARPANET released many applications that remain major uses of the internet today. In 1971, researchers released File Transfer Protocol to allow users to send files and Telnet to allow users to send text. Shortly thereafter, in 1972, Ray Tomlinson launched a rudimentary email application (SNDMSG) where users could send electronic mail by combining the recipient's name and host computer with an "@" symbol. These initial developments fared exceptionally well over time - File Transfer Protocol was supported by most browsers until 2019; Telnet was the standard until Secure Shell (SSH) arrived in 1995, which we still use today; and SNDMSG email evolved into SMTP in 1981, which we also still use today. Not bad considering there were just 50 computers on the network in 1972.

These initial applications drove ARPANET growth to 111 computers by 1977, but this much larger network required a more sophisticated methodology to route messages from sender to intended receiver. In 1983, ARPANET adopted Transmission Control Protocol / Internet Protocol, today known as TCP/IP, and what we're familiar with as "the internet" finally took form. TCP/IP is still how messages are routed from sender to receiver today. This critical innovation allows any network of computers to communicate with any other network, irrespective of each computer's hardware. It cannot be overstated how transformative this was - it allowed for a true network-of-networks for the very first time.

The number of interconnected networks ballooned as a result, from 15 in 1982 before TCP/IP was adopted to more than 400 by 1986. At this point, ARPANET was still limited to governments and universities, making this growth even more impressive. The stage was set for true hyper-growth with just two remaining catalysts: a commercially available network for you and me, and a major use case that could drive adoption.

The catalysts came online independently in 1989. A few years earlier, in 1986, the National Science Foundation had funded five supercomputing centers to create a parallel network to ARPANET called NSFNET, again accessible only to government agencies and universities. NSFNET was ambitious from the start. It wasn't just another one of those 400 interconnected networks, but rather a hookup of 170 smaller, not-yet-interconnected networks to each other. NSFNET took a new and revolutionary trajectory in 1989 when it opened the network to the first commercial internet service provider, which began providing internet access to the general public. For the very first time, ordinary individuals with no government or university affiliation could log onto the internet.

The very same year, across the pond in Switzerland, Tim Berners-Lee invented a way to specify a document's location on a network using Uniform Resource Locators (URLs), to request and send that document using Hypertext Transfer Protocol (HTTP), and to link one document to another using hyperlinks. He called this new invention the World Wide Web and launched the first browser in 1990 to make it easy to explore. The invention was momentous. All of a sudden, you and I could share and discover webpages with anyone hooked up to the internet and link those pages to the seemingly inexhaustible supply of other pages hosted on servers all over the world. This was the application needed to drive truly rapid and massive adoption of the internet.
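You can see both of Berners-Lee's inventions in a few lines using Python's standard library: a URL breaks apart into a host and a document path, and an HTTP request is just a small text message asking that host for that path. The request below is a sketch of what a browser sends (the example URL is the address of the first webpage, hosted at CERN):

```python
# A URL says where a document lives; HTTP is the text message we send to
# ask for it. Parse a URL, then build the raw GET request a browser would
# send (sketch only - real browsers add many more headers).
from urllib.parse import urlparse

url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

request = (
    f"GET {parts.path} HTTP/1.1\r\n"   # which document we want
    f"Host: {parts.netloc}\r\n"        # which server hosts it
    "Connection: close\r\n"
    "\r\n"                             # blank line ends the request
)
print(request)
```

Everything the server needs to find the file is right there in plain text, which is a big part of why the web spread so quickly.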

We had our network-of-networks protocol, our commercially available networks for the general public, and the World Wide Web as the major use case. The race to build the internet was on.


Rapid adoption drives new infrastructure

Adoption of the internet grew astonishingly fast. From just 159,000 users in 1989, internet usage grew to 2.6 million by 1990 and 44.4 million by 1995. Some of the first major internet companies were Internet Service Providers (ISPs), including CompuServe and AOL. ISPs still have the job today of hooking up you and me to the internet. In 1995, they were offering us the chance to get online with dial-up at a blazing 0.056 megabits per second using our existing phone lines, a speed at which it would take a mere 85 hours to download a 2-hour movie. The allure of the internet drove rapid adoption even as faster speeds were yet to come.
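The 85-hour figure is easy to sanity-check yourself. Assuming a movie file of roughly 2 GB (my assumption - the original file size isn't stated), the arithmetic looks like this:

```python
# Back-of-the-envelope check of the dial-up figure: file size in gigabytes
# converts to megabits (x 1000 x 8, decimal units), then divide by speed.
def download_hours(file_gigabytes: float, speed_mbps: float) -> float:
    megabits = file_gigabytes * 1000 * 8   # GB -> megabits
    return megabits / speed_mbps / 3600    # seconds -> hours

dialup = download_hours(2.0, 0.056)    # 1995 dial-up
broadband = download_hours(2.0, 25.0)  # today's broadband minimum
print(f"dial-up:   {dialup:.0f} hours")
print(f"broadband: {broadband * 60:.0f} minutes")
```

A 2 GB file works out to roughly 80 hours at dial-up speeds, in the same ballpark as the 85-hour figure above, versus about 11 minutes at today's 25 Mbps broadband minimum.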

With the entire world's population as potential users, the ISP industry remained fragmented, with over 160 ISPs in the US alone in 1995 and many more in Europe and Asia. These ISPs competed with one another to deliver ever faster speeds in their race to win new internet users, a battle that continues today as we use ever more data to stream movies, play video games, and listen to music, often doing several of these things concurrently on multiple devices. There are two main ways to deliver faster internet speeds - shorten the distance a signal has to travel from your computer to its destination, and install better cables that can handle more data.

The way to shorten the distance a signal has to travel is to lay straighter cables and hook up ISPs’ cables directly to one another (quite literally stitch them together). The approach is very similar to how airlines coordinate to fly us around the world. Imagine you want to fly from Des Moines, Iowa to Bangkok, Thailand. If there are no direct flights from one to the other, you'll likely fly on one airline from Iowa to a place like Tokyo and then on another airline from Tokyo to Bangkok. The trip takes longer because you have to fly the extra distance to Tokyo and deal with the slowdown of switching airlines there. Trying to access a website hosted in Thailand when you're in Iowa is remarkably similar. The website is code that lives in a file that's hosted on a server that's physically in Thailand. The message travels from your computer in Iowa through your local ISP and on through as many other ISPs as are necessary to make it to the server in Thailand. The server responds to your message by sending the website data all the way back to your computer. If you provide a more direct route with fewer "layovers," the signal can travel faster.
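The layover analogy is really a graph problem: find the route with the fewest ISP-to-ISP handoffs. Here's a minimal sketch using breadth-first search over a made-up network topology (all the node names below are invented for illustration):

```python
# The "layover" analogy as a graph: nodes are networks/exchange points,
# edges are physical interconnections, and the best route is the one with
# the fewest handoffs. Topology is invented for illustration.
from collections import deque

links = {
    "Iowa ISP":    ["Chicago IXP"],
    "Chicago IXP": ["Iowa ISP", "LA IXP", "NY IXP"],
    "LA IXP":      ["Chicago IXP", "Tokyo IXP"],
    "NY IXP":      ["Chicago IXP"],
    "Tokyo IXP":   ["LA IXP", "Bangkok ISP"],
    "Bangkok ISP": ["Tokyo IXP"],
}

def fewest_hops(start: str, goal: str) -> list[str]:
    """Breadth-first search finds a path with the fewest handoffs."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []  # no route - the destination is cut off

print(fewest_hops("Iowa ISP", "Bangkok ISP"))
```

Laying a new direct cable is just adding an edge to this graph - which is exactly why ISPs stitch their cables together at exchange points.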

Internet Exchange Points are like the major airport hubs - they are where physical connections between ISPs' cables are made so the signal can keep traveling to its destination. Just as most airlines are regional and connect to the bigger airlines at major airport hubs precisely because all the other airlines are there, most ISPs are regional and try to meet at Internet Exchange Points where there are many other ISPs. These exchange points tend to be located in geographically advantageous areas, as you can see in the map below for the US.

Each one of those blue circles is a cluster of Internet Exchange Points connecting the now over 7,000 ISPs in the US to one another. Over time, a concept called Tier 1 ISPs has emerged. Tier 1 ISPs are the big guys, the ones that own millions of miles of cable all over the world and facilitate most of the internet traffic between more regional networks. Though they no longer exist, ARPANET and NSFNET were really the first Tier 1 ISPs. They've since been supplanted by AT&T, Verizon, Lumen Technologies, and about 15 other ISPs. Tier 2 ISPs are the regional networks, and Tier 3 ISPs are the lowest on the totem pole. While network size is generally a good indicator of tier, it's the fees they pay one another that differentiate the tiers. Tier 1 ISPs generally pay no fees to interconnect with other Tier 1 ISPs and charge Tier 2 and Tier 3 ISPs to transmit data on their networks. Tier 3 ISPs always pay fees to other ISPs to transmit data, and Tier 2 ISPs sometimes pay and sometimes don't, depending on who they're connecting to.

But it's not just the network size that matters, it's also how much data can be carried across those networks. More data requires better cables and a lot of them.

Cables and a lot of ‘em

Scroll back up to that picture of Internet Exchange Points. You see all of those squiggly lines between the clusters? Those are the miles and miles of cable that physically carry signals from your computer to wherever they need to go. The original telephone lines rapidly became inadequate to handle all of the new internet users and a never-ending increase in data-intensive applications like video streaming. Phone lines are made of copper wire, which transmits data via electrons. Not only does copper have poor bandwidth, meaning it can only transmit a limited amount of data per second, but the signal sent across it also degrades very quickly and requires repeated signal "boosting." By contrast, fiber optic cables transmit data as photons across glass fibers. Fiber has ~0.2% the attenuation rate of copper and can carry many thousands of times more data per second.
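To get a feel for what that attenuation gap means, here's a rough worked example. Signal loss in cables is measured in decibels per kilometer, and dB is a logarithmic scale; the specific loss rates below are illustrative numbers I've chosen to reflect the ~0.2% ratio mentioned above, not measured values for any particular cable:

```python
# How much signal survives a cable run? Loss is quoted in dB/km, and dB
# is logarithmic: every 10 dB of loss cuts the remaining power by 10x.
# Loss rates below are illustrative, chosen to match the ~0.2% ratio.
def signal_remaining(db_per_km: float, km: float) -> float:
    """Fraction of signal power left after `km` of cable."""
    return 10 ** (-db_per_km * km / 10)

for name, loss_db_per_km in [("copper", 100.0), ("fiber", 0.2)]:
    frac = signal_remaining(loss_db_per_km, 1.0)
    print(f"{name}: {frac:.2%} of the signal left after 1 km")
```

At these rates, fiber keeps about 95% of its signal over a kilometer while copper keeps essentially none - which is why copper needs boosters every few hundred meters and fiber can run tens of kilometers between repeaters.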

As you would expect from companies competing to provide better internet to consumers, deploying more fiber cable has been a major battleground among ISPs. The scale of the ongoing fiber optic buildout is difficult to comprehend. In the US alone in 2019, Verizon owned over 1 million miles of fiber and AT&T owned another 1.2 million miles. Verizon ordered over 12 million miles of fiber in 2017 and has been deploying it at a rate of over 1,000 miles a day. A million miles is 40 times around the equator, or more than 4 times the distance to the moon. Earth from a million miles away looks like a pretty blue marble, and individual ISPs are deploying many multiples of that distance in fiber cable.

Those millions of miles of cable don't even include the over 550,000 miles of submarine fiber optic cables. To get signals between continents, we spend huge sums of money laying cables as deep as 25,000 feet along the seafloor. The places where those cables come ashore, known as Cable Landing Points, tend to cluster in geographically advantageous areas just like the Internet Exchange Points. From the Cable Landing Points, cables travel onward to Internet Exchange Points to interconnect with our domestic networks.


Score - Georgian grandmothers: 1, internet: 0

As you realize that the internet is just a bunch of cables and some meetup points, you start to think about just how fragile the whole system really is. Despite carrying enormous volumes of data, these cables are only about as big around as a garden hose. That squiggly line in the image above that just circles in the middle of the Atlantic? That's all that connects the Azorean islands to the internet, and if those cables get cut, it's down until it's repaired. This really does happen. In 2011, a 75-year-old Georgian woman accidentally sliced through a cable and cut off the 3.2 million people living in Armenia from the internet. A similar incident happened again in 2018 and cut off internet access to Mauritania and 9 other African countries for 48 hours.

The internet nonetheless continues to get more resilient over time without any central planning. The very same driver that's resulted in these millions of miles of cable is also driving the improved resilience - it's you and me Zooming with one another, deciding to Netflix and chill, and watching TikTok on our commutes. The demand for ever more and ever faster internet leads to more cables, and more cables mean more routes for messages to take if one of the routes is cut. Over the past four years, internet capacity has grown at 29% per year, a rate that doubles capacity roughly every 2.7 years and shows no signs of slowing down. Our demand for more internet is unwittingly leading to a network that becomes ever stronger.
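The doubling time falls straight out of the growth rate - it's the standard compound-growth formula:

```python
# At 29% growth per year, capacity after t years is (1.29)^t times today's.
# Doubling time solves (1 + g)^t = 2, i.e. t = ln(2) / ln(1 + g).
import math

growth = 0.29
doubling_years = math.log(2) / math.log(1 + growth)
print(f"capacity doubles every {doubling_years:.1f} years")  # ~2.7 years
```

Run the compounding forward and the numbers get silly quickly: at that rate, capacity grows more than 12x in a decade.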

The next frontier

So where do we go from here? We're on a steady trajectory to keep deploying ever more cable that can carry ever more data around the globe, but it's not guaranteed that access to the new capacity will be equitably distributed. Because of the high cost of laying cables, often as much as $30,000 per mile, it's often not economical for a company to deploy cable in sparsely populated areas. Even when the cables physically run through rural towns, they often lack hookups for the locals, much like a railroad that passes through town but doesn't have a station. The US is a particularly notable example given our remarkable internet infrastructure and ongoing failure to provide equitable access to rural areas. Lack of access is even more pronounced on a global scale, with access closely correlated to per capita GDP.

There are three promising solutions on the horizon for bringing high-speed internet even to rural areas: government incentives, 5G, and satellites. The first, and most likely to be applicable in the US in the near term, is state and local government subsidies for Tier 3 "last mile" ISPs combined with federal legislation for net neutrality. Those Tier 3 ISPs play an important role in hooking up your home to the internet by providing the physical connection between you and the Tier 1 and 2 ISPs they connect to at Internet Exchange Points. These smaller ISPs are well-positioned to meet the needs of sparsely populated areas, but without appropriate regulatory oversight and incentives, they can become abusive local monopolies. Regulated and incentivized effectively, better internet can be a win-win for local areas, with meaningful economic gains to be had for those who pursue them. One US study found a 1.1% incremental GDP growth rate for localities that implemented higher capacity internet versus those that did not, a roughly 50% increase over the average 2% GDP growth rate.

One of the most powerful regulatory principles is Net Neutrality. Even with the best of intentions and local Tier 3 ISPs who invest in high capacity fiber optic cables, rural towns can still be stuck with slow internet if the bigger upstream ISPs throttle their capacity. Net Neutrality is the principle that ISPs must treat all traffic equally and cannot slow down one user in favor of another. It remains an ongoing debate at the federal level, with the Federal Communications Commission previously supporting Net Neutrality principles and then rolling them back starting in 2017. Without Net Neutrality enshrined in law, small ISPs can invest meaningfully in local networks only to find that the big ISPs on which they depend abuse them, much like building a bigger water pipe to the main artery only to find that the main artery still limits how much water can come down the pipe.

5G is potentially a fascinating alternative for better internet, but the economics are just starting to be understood. 5G signals only travel about 1,500 feet, a small fraction of the 10 miles that 4G travels, which means major ISPs including Verizon are laying millions of miles of fiber optic cable to feed the new antennas that make 5G available. There's real optimism that a 5G wireless signal that can transmit data at roughly 50 megabits per second, twice the minimum for broadband, can be made available far more economically than the laborious process of hardwiring every individual home.
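The range figures above explain the fiber frenzy. Coverage area scales with the square of a tower's range, so the antenna count scales inversely with it - a simplified model (it ignores terrain, capacity, and overlap), but it gets the order of magnitude across:

```python
# Why 5G needs so much new fiber: coverage area scales with range squared,
# so shrinking the range from ~10 miles to ~1,500 feet explodes the number
# of antennas needed. Simplified circular-coverage model for illustration.
FT_PER_MILE = 5280
r_5g = 1_500                # feet (5G range from the figures above)
r_4g = 10 * FT_PER_MILE    # feet (4G range from the figures above)

antennas_per_4g_tower = (r_4g / r_5g) ** 2
print(f"~{antennas_per_4g_tower:.0f} 5G antennas to cover one 4G tower's area")
```

On this rough model, each old 4G tower's footprint takes over a thousand 5G antennas to cover - and every one of them needs a fiber connection back to the network.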

Not one to leave the fun to everyone else, Elon Musk's SpaceX is busy building 120 Starlink satellites per month to make satellite-delivered internet at speeds of 50-150 megabits per second available for $100 per month. Satellite internet on this scale is in even earlier days than 5G, with the long-term economics deeply intertwined with the rapidly changing economics of space launches. The excitement is certainly there, and the early results from the initial 1,500 satellites already launched are promising.

Since its genesis in 1969, the growth of the internet has been a truly extraordinary journey, and it continues to be a major place for innovation. Internet speeds have grown from 0.056 megabits per second in 1995 to 50 megabits and more today, an increase of almost 1,000x in roughly 25 years. That's the difference between waiting four days to download a movie and streaming it on a Saturday night to Netflix and chill. While we don't yet have equitable access to the internet in the US or globally, we have hugely promising developments on the horizon that can close that gap, both through government incentives and new innovation. The internet's adoption has been as rapid and impressive as that of any technology in US history, and the best is yet to come.


Cocktail Talk

  • A truly awesome article on how undersea cables are laid, complete with video of the robot laying the cable.

  • The space race is heating up! Hot on the heels of Jeff Bezos announcing he was going to space July 20th, Richard Branson’s Virgin Galactic received approval to send passengers to space. No word yet if 130,000 people will sign a petition for Branson to also not come back.

  • Tiny, tiny zombies are alive and reproducing after a 24,000-year nap in the Arctic. The zombies, formally known as bdelloid rotifers, are multicellular animals about 3-10x the width of a human hair in size. When exposed to adverse environmental conditions, they can go into cryobiosis and allow their bodies to freeze without producing ice crystals that would otherwise destroy their cells.

  • Sriracha, that tasty red rooster hot sauce that’s good on just about everything, does $150M in sales annually, 10% of the US hot sauce market. Even more impressive? No sales team, $0 in advertising, and the company doesn’t even own a trademark.


Your Weekly Cocktail

It is terribly hot here in NYC and even hotter elsewhere in the country. We’ve got something a bit funky and perfect for the weather.

The Chartreuse Swizzle

1.5oz Green Chartreuse
0.5oz Falernum
1.5oz Pineapple Juice (freshly pressed)
0.75oz Lime Juice (freshly squeezed)
Dusting of Nutmeg

Pour all of the liquids into a glass. Fill two thirds with crushed ice. Swizzle. Fill to the brim with ice. Keep on swizzlin’. Dust with nutmeg and drop in a straw to enjoy.

Swizzle - stir (a drink) with a swizzle stick [or long spoon if you don’t have one].
"he would swizzle it into a froth and pour it out for us"

I got super excited about this one and it lived up to the hype. Falernum, a sweet, low-alcohol syrup liqueur made from almonds and allspice, joins pineapple and lime to make a base for many tiki drinks. Chartreuse, by contrast, has no business being in a tiki drink, and yet here we are. Chartreuse is what I call a “Dr Pepper” flavor - you have no idea what it is, but there’s nothing else that tastes quite like it. Famously distilled from over 130 botanicals by French monks, it has a wildly green color that it comes by naturally in the distillation process. Watered down as it is here from all the swizzling, it brings a refreshingly minty flavor to the party, making it a perfect way to enjoy the evening on these hot summer days.

Cheers,
Jared