Interesting stats from Akamai:
- 12 million requests per second peak
- 500 billion requests per day
- 61,000 servers at 1000 service providers
The University hosts an Akamai cache. My organization uses the University as our upstream ISP, so we benefit from the cache.
The University's Akamai cache also saw high utilization on Thursday and Friday of last week. Bandwidth from the cache to our combined networks nearly doubled, from about 1.2Gbps to just over 2Gbps.
The Akamai cache works something like this:
- Akamai places a rack of gear on the University network in University address space, attached to University routers.
- The Akamai rack contains cached content from Akamai customers. Akamai mangles DNS entries to point our users to the IP addresses of the Akamai servers at the University for Akamai cached content.
- Akamai cached content is then delivered to us from their local cache servers rather than via upstream ISPs.
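The DNS trick above can be sketched as a toy resolver. This is purely illustrative: the hostnames and IPs are made up, and real Akamai mapping is far more sophisticated, but the shape is the same — a customer hostname is CNAMEd into Akamai's namespace, and Akamai's authoritative DNS answers with different edge IPs depending on which resolver is asking:

```python
# Toy simulation of Akamai-style DNS redirection (illustrative only;
# hostnames and addresses below are made up, not real infrastructure).

# The content provider CNAMEs its hostname into Akamai's namespace.
CNAME_CHAIN = {
    "www.example-customer.com": "www.example-customer.com.edgesuite.net",
}

# Akamai's authoritative DNS picks an edge IP based on who is asking --
# clients behind the University's resolver get the cache rack on the
# University network; everyone else gets some other Akamai region.
EDGE_IPS = {
    "university-resolver": "192.0.2.10",    # cache rack in University space
    "default":             "203.0.113.99",  # distant Akamai edge
}

def resolve(hostname: str, resolver: str) -> str:
    """Follow the CNAME into Akamai's namespace, then return an edge IP
    chosen per-resolver (the 'DNS mangling' described above)."""
    target = CNAME_CHAIN.get(hostname, hostname)
    assert target.endswith(".edgesuite.net")  # now in Akamai's hands
    return EDGE_IPS.get(resolver, EDGE_IPS["default"])

print(resolve("www.example-customer.com", "university-resolver"))  # 192.0.2.10
print(resolve("www.example-customer.com", "some-other-resolver"))  # 203.0.113.99
```

The key design point is that the content provider never changes its URLs — the redirection happens entirely inside DNS resolution.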
It works because:
- the content provider doesn’t pay the Tier 1 ISPs for transport
- the University (and we) do not pay the Tier 1 ISPs for transport
- the University (and we) get much faster response times from cached content. The Akamai cache is connected to our networks via a 10Gig link and is physically close to most of our users, so that whole propagation delay thing pretty much goes away
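The propagation-delay win is easy to put numbers on. Light in fiber travels at roughly two-thirds of c, about 200 km per millisecond; the distances below are assumptions for illustration, not measurements from our network:

```python
# Rough propagation-delay comparison (assumed distances, illustrative).
# Signal speed in fiber is roughly 2/3 c, i.e. ~200 km per millisecond.
C_FIBER_KM_PER_MS = 200

def rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay only -- ignores queuing,
    serialization, and server think time."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

local_cache = rtt_ms(5)        # cache rack a few km away on campus
distant_origin = rtt_ms(2000)  # origin server a couple thousand km away

print(f"local cache RTT:    {local_cache:.2f} ms")    # 0.05 ms
print(f"distant origin RTT: {distant_origin:.2f} ms") # 20.00 ms
```

A ~400x difference in raw propagation delay, before you even count the extra router hops a distant fetch would traverse.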
The net result is that something like 15-20% of our inbound Internet content is served up locally from the Akamai cache, tariff-free. A win for everyone (except the Tier 1s).
This is one of the really cool things that makes the Internet work.
Update: The University says that the amount of traffic we pull from Akamai would cost us approximately $10,000 a month or more to get from an ISP. That’s pretty good for a rack of colo space and a 10G port on a router.
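As a sanity check on that estimate: if the full 2Gbps we pull from the cache were billed as ISP transit, the quoted $10,000/month implies a transit price of about $5 per Mbps. The 95th-percentile billing assumption below is mine, not the University's, but it's a common ISP billing model:

```python
# Back-of-the-envelope check on the University's $10,000/month estimate.
# Assumes the full 2 Gbps would be billed per-Mbps (e.g. at the 95th
# percentile, a common transit billing model) -- my assumption.
GBPS_FROM_CACHE = 2.0          # peak rate observed from the cache
QUOTED_MONTHLY_COST = 10_000   # University's estimate, USD

mbps = GBPS_FROM_CACHE * 1000
price_per_mbps = QUOTED_MONTHLY_COST / mbps
print(f"implied transit price: ${price_per_mbps:.2f}/Mbps")  # $5.00/Mbps
```

That's in the ballpark of commodity transit pricing, so the $10,000 figure passes the smell test.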