How to Compare Cloudflare, CloudFront, and Fastly with Real-User CDN Data
May 2, 2026 · CDN Performance · 9 minutes
Cloudflare, Amazon CloudFront, and Fastly can all be the right CDN in the right setup. The hard part is proving which one is fastest for your users, your files, and your traffic mix.
Synthetic benchmarks help with first impressions, but they cannot answer that question on their own. Your visitors may be concentrated in cities that a generic benchmark does not represent. Your files may be larger, smaller, more cacheable, or more frequently purged than the files used in public tests. Your origin and cache rules can change the result too.
A real-user test gets closer to the answer: serve equivalent files through each CDN, measure browser timing from actual visitors, and compare the results by geography, device, and cache behavior.
What you should measure
Do not reduce CDN performance to one number too early. Split the request lifecycle into the parts the browser can measure:
- Total duration: the full time from request start to response completion.
- DNS lookup: time spent resolving the CDN hostname.
- Connection time: time spent establishing the TCP connection.
- TLS time: time spent negotiating HTTPS.
- Request or TTFB-like time: time from request start until the first response byte is available.
- Response time: time spent downloading the response body.
- Transfer size: the number of bytes reported by the browser.
- Geography: country and city-level performance where available.
- Cache status: whether the CDN served the file from cache or fetched it from origin.
The browser’s Resource Timing API can expose detailed timing data for individual static files. For cross-origin assets, the CDN or origin must return Timing-Allow-Origin; otherwise browsers hide many timing fields and report them as zero.
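As a sketch, the phases listed above can be derived from a PerformanceResourceTiming entry along these lines. The field names come from the Resource Timing spec; the function takes any object with those fields, so it works on real entries or plain test objects:

```javascript
// Split one Resource Timing entry into the phases discussed above.
function timingPhases(entry) {
  return {
    total: entry.responseEnd - entry.startTime,
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    connect: entry.connectEnd - entry.connectStart,
    // secureConnectionStart is 0 for plain HTTP, or when timing is hidden.
    tls: entry.secureConnectionStart > 0
      ? entry.connectEnd - entry.secureConnectionStart
      : 0,
    ttfb: entry.responseStart - entry.requestStart,
    download: entry.responseEnd - entry.responseStart,
    transferSize: entry.transferSize,
  };
}

// In the browser you would apply it to real entries, e.g.:
// performance.getEntriesByType('resource').map(timingPhases);
```

If Timing-Allow-Origin is missing on a cross-origin asset, most of these fields come back as zero, which is why the header matters so much.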
For testing, this header is the simplest option:

Timing-Allow-Origin: *

For a production setup with tighter access, scope it to your website:

Timing-Allow-Origin: https://www.example.com

Choose equivalent test files
A fair CDN comparison starts with the files. If Cloudflare serves a 40 KB image and Fastly serves a 4 MB image, you are testing file size more than CDN performance.
Use files that represent the assets your users actually load:
- One small file, such as CSS or JavaScript.
- One medium image, such as a product image or article image.
- One large asset, such as a hero image, video segment, font bundle, or downloadable file.
For each file type, upload or expose equivalent files through Cloudflare, CloudFront, and Fastly. Keep these properties as similar as possible:
- File size.
- MIME type.
- Compression behavior.
- Cache-Control headers.
- Origin distance and origin performance.
- Purge frequency.
- URL path structure.
If you are comparing storage-backed delivery, use the same source object where possible. If you are comparing a production site, pick assets that already matter to the page experience.
Keep browser cache out of the test
CDNPulse adds a dynamic query parameter to every monitored file request. That prevents the browser from reusing its local cache, so each measurement reflects a network request instead of a memory or disk cache hit.
The CDN should still be allowed to use its own cache.
That means Cloudflare, CloudFront, and Fastly should ignore the CDNPulse cache-busting query parameter when they build the CDN cache key. If the CDN treats every query string as a different cache object, every measurement can become a CDN miss. The browser cache would be bypassed, but the CDN cache would be bypassed too, which is not the delivery path you usually want to measure.
The target behavior is:
- Browser sees each CDNPulse request as fresh because the URL has a changing query parameter.
- CDN maps those URLs back to the same cached object.
- Origin is contacted only when the CDN object is cold, expired, purged, or intentionally revalidated.
Before running the comparison, check each CDN’s cache key or query string settings for the monitored paths.
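The target behavior can be sketched as two small functions: one for the browser side, one for what the CDN cache key should conceptually do. The query parameter name cdnpulse here is an assumption for illustration; substitute whatever parameter your measurement tool actually appends:

```javascript
// Browser side: make each request URL unique so the browser cache is bypassed.
// NOTE: "cdnpulse" is a hypothetical parameter name used for illustration.
function bustBrowserCache(url) {
  const u = new URL(url);
  u.searchParams.set(
    'cdnpulse',
    Date.now().toString(36) + Math.random().toString(36).slice(2)
  );
  return u.toString();
}

// CDN side (conceptually): drop the cache-busting parameter before building
// the cache key, so every busted URL maps back to one cached object.
function cacheKey(url) {
  const u = new URL(url);
  u.searchParams.delete('cdnpulse');
  return u.origin + u.pathname + u.search;
}
```

Two busted URLs for the same asset should produce the same cache key; that equivalence is exactly what to verify in each provider's query string settings.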
Configure the measurement path
Before collecting data, make sure every monitored CDN URL follows the same measurement rules.
First, every response needs Timing-Allow-Origin; otherwise cross-origin Resource Timing fields may be hidden by the browser:

Timing-Allow-Origin: *

Second, each CDN should ignore the CDNPulse cache-busting query parameter in its cache key for the monitored paths.
Third, keep cache status headers near the timing data. If one provider is returning cache hits and another is going back to origin, the timing comparison reflects cache policy as much as network performance.
The provider-specific part is mostly about how you verify those rules.
Cloudflare
Cloudflare exposes cache behavior through the CF-Cache-Status response header. A HIT means the asset was served from Cloudflare cache, while statuses such as MISS, DYNAMIC, BYPASS, and EXPIRED tell you that a request took a different path.
Also inspect the Age header. Cloudflare documents that Age is present for responses served from cache and represents how long the asset has been in cache, in seconds. It can reset after revalidation, purge, eviction, or re-cache.
After a few requests to the same file with different CDNPulse query values, you should still see cache hits. If every request is a MISS, check the cache rule or cache key settings for that path.
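For reporting, it can help to collapse the documented CF-Cache-Status values into a few buckets. The grouping below is one reasonable reading, not an official mapping:

```javascript
// Classify a Cloudflare CF-Cache-Status value for reporting.
// The status names are the ones Cloudflare documents; the grouping
// into buckets is this article's own convention.
function classifyCfCacheStatus(status) {
  switch ((status || '').toUpperCase()) {
    case 'HIT':
      return 'cache-hit';      // served from Cloudflare cache
    case 'MISS':
    case 'EXPIRED':
      return 'origin-fetch';   // Cloudflare went back to origin
    case 'DYNAMIC':
    case 'BYPASS':
      return 'not-cached';     // caching was skipped by policy
    default:
      return 'unknown';
  }
}
```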
Amazon CloudFront
CloudFront response headers policies can add custom headers to responses sent to viewers, and they can be attached to cache behaviors. This is usually cleaner than modifying the origin just for measurement.
CloudFront can also add a Server-Timing header through a response headers policy. That can help debug CloudFront behavior, such as whether a cache layer served the response or whether the request went back to origin.
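A Server-Timing value is a comma-separated list of metrics, each with optional semicolon-delimited parameters. A minimal parser looks like this; the metric names in the usage comment are examples, not a guaranteed CloudFront output:

```javascript
// Minimal parser for a Server-Timing header value.
// Returns one object per metric, with any parameters (dur, desc, ...)
// attached as string properties.
function parseServerTiming(value) {
  return value.split(',').map((metric) => {
    const [name, ...params] = metric.trim().split(';');
    const out = { name: name.trim() };
    for (const p of params) {
      const [key, raw = ''] = p.trim().split('=');
      out[key] = raw.replace(/^"|"$/g, ''); // strip surrounding quotes
    }
    return out;
  });
}

// e.g. parseServerTiming('cdn-cache-hit, cdn-pop;desc="FRA56-P1"')
```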
When testing CloudFront, pay close attention to cache behavior settings. Forwarding unnecessary request headers, cookies, or query strings can reduce cache hit ratio. If one provider is caching a file and another is forwarding every request to origin, the comparison will be misleading.
For CDNPulse tests, the cache policy should not include the changing CDNPulse query parameter in the cache key for monitored static files.
Fastly
Fastly includes cache debugging headers such as X-Cache, X-Cache-Hits, and X-Served-By. Fastly documents X-Cache as a proprietary response header that indicates whether a request was a HIT or MISS.
If shielding is enabled, Fastly may show multiple cache layers in the header values. Read the response as a path through edge and shield caches rather than a single binary cache result.
Fastly can still receive the full URL from the browser, but the cache lookup should resolve to the same object when only the CDNPulse cache-busting value changes.
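Because X-Cache can carry one value per cache layer, it is safer to parse it as a list than as a single flag. A small sketch, with the "served from cache" rule being this article's own convention (any layer reporting HIT):

```javascript
// Interpret a Fastly X-Cache value as a path through cache layers.
// With shielding enabled the header can contain several comma-separated
// values (e.g. "MISS, HIT"); without it there is usually just one.
function parseXCache(value) {
  const layers = value.split(',').map((v) => v.trim().toUpperCase());
  return {
    layers,
    // Treat the request as cache-served if any layer reported a HIT.
    servedFromCache: layers.includes('HIT'),
  };
}
```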
For a fair comparison, decide how to treat cache layering. Fastly shielding, CloudFront origin shield, and Cloudflare tiered cache should either be intentionally enabled across the test or called out when you interpret the result.
Warm the caches, then measure real users
A cold-cache test answers a narrow question: what happens on the first request after a purge or eviction?
Cold-cache data matters, but it is usually not the main user experience for static content. For most CDN comparisons, collect both:
- Cold-cache behavior after a purge or first deployment.
- Warm-cache behavior after repeated requests.
Warm the cache by requesting each monitored URL from a few regions before you begin the main measurement window. Then collect real-user data for long enough to cover normal traffic patterns.
For a small site, that may mean several days. For a high-traffic site, a few hours may be enough to see clear patterns. The key is sample size by region. A global average can hide the result you actually care about.
Compare by region, not only globally
The fastest CDN for users in one country may not be fastest for users in another.
Answer questions like:
- Which CDN has the lowest median total duration in the United States?
- Which CDN has the best p95 duration in Germany?
- Which CDN is fastest for users in Brazil during peak traffic?
- Which provider has the most stable TTFB-like timing across regions?
- Which provider transfers the same file with fewer bytes?
Median reflects a typical user. p75, p90, and p95 reveal the slow tail, which is often where CDN differences matter most.
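These percentiles are easy to compute per region from raw duration samples. A nearest-rank sketch, which is plenty for reporting at this scale:

```javascript
// Nearest-rank percentile over a copy of the samples (input left unsorted).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// The summary this article suggests comparing per region and per CDN.
function summarize(samples) {
  return {
    median: percentile(samples, 50),
    p75: percentile(samples, 75),
    p90: percentile(samples, 90),
    p95: percentile(samples, 95),
  };
}
```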
Avoid declaring a winner from a single worldwide average unless your traffic is evenly distributed worldwide and every region matters equally. Most sites have priority markets.
Keep cache status beside timing data
Real-user timing tells you what users experienced. Cache status helps explain the path each request took.
For example:
- Cloudflare may win globally when CF-Cache-Status is HIT, but lose on MISS in a region far from origin.
- CloudFront may have low warm-cache TTFB, but slow first-byte time when the origin is far from the edge.
- Fastly may show strong results when shield cache is warm, but more variation when an edge has to fetch through multiple layers.
Cache status is not a replacement for browser timing. It is context for browser timing.
Start with user-visible performance, then use provider headers to explain the result.
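In practice that means grouping samples by provider and cache status before computing any timing summary, so a hit/miss mix cannot hide behind one average. The sample shape here ({cdn, cacheStatus, duration}) is an assumption for illustration:

```javascript
// Group real-user duration samples by provider and cache status,
// so hits and misses are compared separately.
// Sample shape {cdn, cacheStatus, duration} is hypothetical.
function segmentByCacheStatus(samples) {
  const groups = {};
  for (const s of samples) {
    const key = `${s.cdn}:${s.cacheStatus}`;
    (groups[key] ||= []).push(s.duration);
  }
  return groups;
}
```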
Watch out for common comparison mistakes
The most common CDN benchmark mistakes are simple:
- Comparing different files.
- Comparing one warm cache against one cold cache.
- Testing from one office or one cloud region and calling it global.
- Ignoring Timing-Allow-Origin and trusting incomplete browser timing fields.
- Averaging all countries together when traffic is regional.
- Comparing compressed and uncompressed responses.
- Letting the CDNPulse cache-busting query parameter create a new CDN cache object for every request.
- Forgetting that other query strings, cookies, or request headers can change cache behavior.
- Measuring only TTFB when the asset size makes download time important.
For static files, the best CDN is rarely the one with the best number in every metric. It is the one that gives your users the right balance of latency, consistency, cache behavior, operational overhead, and cost.
Test plan
Use this process with CDNPulse or your own Resource Timing instrumentation:
- Pick three to five representative static files.
- Serve equivalent versions through Cloudflare, CloudFront, and Fastly.
- Add Timing-Allow-Origin to every monitored response.
- Configure each CDN cache key to ignore the CDNPulse cache-busting query parameter.
- Confirm cache headers and provider cache-status headers.
- Warm each CDN path.
- Install the measurement script on a real page.
- Collect data from real visitors for a meaningful traffic window.
- Compare median, p75, p90, and p95 timing by country or city.
- Segment cached and uncached responses when provider headers are available.
- Choose based on the regions and asset types that matter most to your business.
If you use CDNPulse, create one application for the website and add the Cloudflare, CloudFront, and Fastly asset URLs as files to monitor. Once the script is installed, CDNPulse requests those files with changing query parameters, collects browser timing data from real user sessions, and shows timing, transfer, and geographic breakdowns in the dashboard.
Write the decision down
A weak conclusion sounds like this:
Fastly is fastest.
A better conclusion gives the conditions:
For our product-image workload, Cloudflare and Fastly are close in North America. Fastly has better p95 duration in Germany and France, while CloudFront performs best in regions close to our AWS origin. CloudFront misses are slower, so we should either tune cache behavior or enable a shield layer before considering it for global image delivery.
That level of detail helps teams make infrastructure decisions.
The result should name the provider that performs best for your users, your assets, and your operating model.