As the COVID-19 pandemic swept across the globe, content delivery network (CDN) providers were quickly thrust into the world's spotlight. People everywhere depended on CDNs to quickly and smoothly connect them to news, entertainment, education, social media, training, virtual events, videoconferencing, telemedicine ... the list goes on and on. That's why it's no surprise that the global CDN market, valued at $9.9 billion in 2018, is now expected to reach $57.15 billion by 2025, according to Brand Essence Market Research. But to turn those lofty projections into revenue growth, the smartest CDN providers must find ways to overcome significant challenges, such as the three below.

Challenge #1: Deliver high performance with low latency

People everywhere are demanding high-quality content and video, without any speed bumps due to latency issues. Although software, hardware, networks, and bandwidth all affect the level of latency, the single biggest factor that slows down content is the distance that light has to travel. That's because for all our mind-blowing achievements in technology, one thing we haven't yet figured out is how to speed up the speed of light. Light travels at about 125,000 miles per second through optical fiber, roughly two-thirds of the speed of light in a vacuum (186,000 miles per second). So for every 60 miles a packet has to travel, about half a millisecond is added to the one-way latency, and thus about 1 millisecond to the round-trip time.

They say money makes the world go 'round. So in essence, latency can stop the world from turning, as shown in these examples:

In finance, firms have for years offered premium "ultra-low latency" services to investors who want to receive key data about two seconds before the general public does. What can happen in two seconds? In the stock market, quite a lot. Research by the Tabb Group estimated that if a broker's platform is even 5 milliseconds behind the competition, it could lose at least 1% of its flow, equating to about $4 million in revenue per millisecond.

In retail, according to research by Akamai, a 100 ms delay in website load time leads to a decrease in conversion rates of up to 7%. Conversely, Akamai reported that Walmart saw up to a 1% increase in revenue for every 100 ms of improvement in load time.

In the cloud, latency looms as a major hindrance. A research paper by the University of Cambridge Computer Laboratory found that 10µs (that's microseconds, or millionths of a second) of latency in each direction is enough to have a noticeable effect, and 50µs in each direction is enough to significantly affect performance. For data centers connected by additional hops between servers, latency increases further. This has ramifications for workload placement and physical host sharing when trying to reach performance targets.

Every CDN wants to provide high performance, but predicting the performance of CDNs can be an imprecise exercise. CDNs use different methodologies to measure performance and have various types of underlying architecture. However, one universal truth is that the geographic locations of CDN data centers play a big role in performance measurements. This is one reason why NTT's Global Data Centers division has strategically chosen certain locations for its data center campuses.
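To make the distance arithmetic concrete, here is a small back-of-the-envelope sketch in Python. It uses only the ~125,000 miles-per-second figure above and estimates the propagation floor, not routing or queuing overhead; the route lengths are simply the distances quoted in this post.

```python
# Rough propagation-delay estimate: light travels through optical fiber at
# roughly two-thirds of its vacuum speed, or about 125,000 miles per second.
FIBER_MILES_PER_SEC = 125_000

def propagation_delay_ms(route_miles: float) -> tuple[float, float]:
    """Return (one_way_ms, round_trip_ms) for a fiber route of the given length."""
    one_way_ms = route_miles / FIBER_MILES_PER_SEC * 1000
    return one_way_ms, one_way_ms * 2

# A few route lengths, including the distances quoted later in this post.
for miles in (60, 88, 218, 570, 754):
    one_way, rtt = propagation_delay_ms(miles)
    print(f"{miles:>4} miles: ~{one_way:.2f} ms one-way, ~{rtt:.2f} ms round trip")
```

The 60-mile row works out to roughly 0.5 ms one-way and 1 ms round trip, matching the rule of thumb above. The measured figures quoted below come in higher than this theoretical floor because fiber routes are rarely straight lines and network equipment adds its own delay.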
For example, NTT's data centers in Sacramento give companies based in San Francisco a low-latency experience compared to other locations. Those customers experience round-trip latency of only 3 milliseconds to go out and back over the 88 miles between Sacramento and San Francisco. That compares well to round-trip latency of 4.2 milliseconds from San Francisco to Reno (218 miles away), 15.3 milliseconds from San Francisco to Las Vegas (570 miles away), or 18.1 milliseconds from San Francisco to Phoenix (754 miles away).

In Chicago, where NTT is building a new 72MW data center campus, customers at that Central U.S. location will enjoy low latency to both U.S. coasts. According to AT&T, IP network latency from Chicago to New York is 17 milliseconds, and from Chicago to Los Angeles is 43 milliseconds.

Reducing latency is a huge point of emphasis at NTT. At our Ashburn, Virginia data center campus, we offer both lit and dark services to multiple carrier hotels and cloud locations, including AWS and Azure, providing sub-millisecond latency between your carrier, your data, and your data center.

Challenge #2: Scale up to meet a growing base

Every business wants more customers, but CDNs need to be careful what they wish for. Huge bursts in Internet traffic can bring an overwhelming amount of peak usage. Videoconferencing historians will long remember the third week of March 2020, when a record 62 million downloads of videoconferencing apps were recorded. Once those apps were downloaded, they were quickly put to use - and usage has only increased since then.

The instant reaction to those stats and trends would be for CDNs to add as much capacity as possible. But building to handle peak demand can be costly, because a CDN also has to economically account for lower-usage periods when large amounts of capacity will not be utilized. These massive spikes and valleys bring a significant traffic engineering challenge.

A well-prepared CDN will minimize downtime by using load balancing to distribute network traffic evenly across several servers, making it easier to scale up or down for rapid changes in traffic. Technology such as intelligent failover provides uninterrupted service even if one or more of the CDN servers go offline due to hardware malfunction: the failover redistributes traffic to the remaining operational servers. Appropriate routing protocols then transfer traffic to other available data centers, ensuring that no users lose access to a website (a simple sketch of this failover pattern appears below).

This is what NTT's Global Data Centers division had in mind when we deployed a fully redundant point-to-point connection, via multiple carriers, between all our U.S. data centers. We make critical functions such as disaster recovery, load balancing, backup, and replication easy and secure. Our services support any Layer 3 traffic for functions such as web traffic, database calls, and any other TCP/IP-based functions. Companies that leverage our coast-to-coast cross connect save significant money over installing and managing their own redundant, firewall-protected, multi-carrier connection.

Challenge #3: Plan for ample redundancy, disaster recovery, and risk reduction

Sure, most of the time disasters don't happen. But I suppose that depends on your definition of a disaster. It doesn't have to be the type you see in movies - with earth-cracking gaps opening up in the middle of major cities. Even events as routine as thunderstorms can have ripple effects that could interrupt service provided by a CDN.
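Whether the trigger is a hardware failure during a traffic spike or a storm taking a site offline, the rerouting mechanics look similar. Below is a minimal, hypothetical sketch of the pattern described in Challenge #2 - round-robin load balancing across healthy servers, with failover to a second data center when an entire pool goes dark. The pool names and the in-memory health flags are illustrative assumptions, not any particular CDN's implementation.

```python
import itertools

# Hypothetical server pools for two data centers; names are illustrative.
POOLS = {
    "primary": ["pdx-edge-1", "pdx-edge-2", "pdx-edge-3"],
    "secondary": ["dfw-edge-1", "dfw-edge-2"],
}

# In a real deployment, health would come from active probes (HTTP checks,
# heartbeats); here it is just a set of servers currently marked down.
unhealthy: set[str] = set()

_counters = {pool: itertools.count() for pool in POOLS}

def healthy_servers(pool: str) -> list[str]:
    return [s for s in POOLS[pool] if s not in unhealthy]

def pick_server() -> str:
    """Round-robin across healthy primary servers; fail over to the
    secondary data center if the entire primary pool is offline."""
    for pool in ("primary", "secondary"):
        servers = healthy_servers(pool)
        if servers:
            return servers[next(_counters[pool]) % len(servers)]
    raise RuntimeError("no healthy servers in any data center")

# Example: traffic spreads evenly until a site failure forces a failover.
print([pick_server() for _ in range(4)])   # rotates through the primary pool
unhealthy.update(POOLS["primary"])         # simulate the primary site going dark
print([pick_server() for _ in range(4)])   # traffic shifts to the secondary pool
```

The same idea scales up from individual servers to whole regions, which is where the location strategy in Challenge #3 comes in.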
Disruptions like these are why smart CDNs reduce their risk of outages by not keeping all of their assets and content in one geographic area. The good thing is that by enacting one fairly simple strategy, CDNs can check the boxes for ample redundancy, disaster recovery, and risk reduction. That strategy is to have a data center presence in multiple geographic locations. The three sections of the U.S. - East, Central, West - make for a logical mix.

In the East region, well, Ashburn is the capital of the world as far as data centers are concerned. No other market on the planet has as much deployed data center space as Ashburn, and construction of new data centers is ongoing to keep up with demand. Known as "Data Center Alley", Ashburn is a perfect home for a data center for many reasons, including its dense fiber infrastructure and low risk of natural disasters. Those factors alone make Ashburn a great location as part of a redundancy and disaster recovery strategy.

In the Central region, Dallas has a very low risk of dangerous weather conditions. According to data collected from 1950 to 2010, no earthquakes of magnitude 3.5 or above have occurred in or near Dallas, and no hurricanes have been recorded within 50 miles of the city. And while tornadoes can occur, some data centers, such as NTT's Dallas TX1 Data Center, are rated to withstand those conditions. Another appealing aspect of Dallas is that Texas's independent power grid, managed by ERCOT (the Electric Reliability Council of Texas), is one of the three main power grids that feed the continental United States. By maintaining a presence in each of the three grids, companies can make sure their data centers are as reliable as possible.

In the West, several appealing options are located along the Pacific coast. In the Northwest, the Oregon suburb of Hillsboro is a popular choice for an economical disaster recovery location. Hillsboro has a mild climate, which translates to low heating and cooling costs, along with minimal natural disaster risk and strong tax incentives. As a bonus, a variety of submarine cables deliver low-latency connections between Hillsboro and high-growth Asian markets. In Northern California, Sacramento offers a safe data center option, as the city sits outside the seismic risk area that includes the Bay Area cities. Sacramento is also considered preferable to other Western data center markets such as Reno, Nevada: at least 30 seismic faults run through the Reno-Carson City urban corridor, and some studies say that two of those faults in particular appear primed to unleash a moderate to major earthquake.

And then there's Silicon Valley, which of course is a terrific place to have data center space. However, no one would say that Silicon Valley is a truly seismically stable area. But that risk can be mitigated if the data center is protected with technology such as a base isolation system, which NTT uses in its new four-story Santa Clara data center. That base isolation system has been proven to enable mu