A bit of math around Cloudflare's R2 pricing model (twitter.com/quinnypig)
466 points by dustintrex on Sept 30, 2021 | 180 comments



Cloudflare will charge per request over a certain threshold, like S3 does for API calls, so it won’t be just 13 cents depending on the rate of requests coming in.

“R2 will zero-rate infrequent storage operations under a threshold — currently planned to be in the single digit requests per second range. Above this range, R2 will charge significantly less per-operation than the major providers. Our object storage will be extremely inexpensive for infrequent access and yet capable of and cheaper than major incumbent providers at scale.”

Source: https://blog.cloudflare.com/introducing-r2-object-storage/


Per-operation charges, even if at the same level as S3's, would still be substantially cheaper overall.

The lack of the 9c-per-GB egress charge is the killer.

I can rent unlimited 250mbit servers for 14 euros a month from Scaleway. It is truly unmetered; I've been a customer for years and do about 40 terabytes of egress a month on this box.

AWS would charge me $3800 for the egress alone (using the calculator). That is certainly almost all pure margin for them, when Scaleway will happily let me egress 40TB a month, year after year, for 13.99 euros per month (plus I get a low-spec bare-metal server at that price too).
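
For a rough sense of the gap, a back-of-envelope sketch (assuming a 30-day month and AWS's ~$0.09/GB first-tier internet egress rate; the exact calculator figure will differ a bit):

    # Rough numbers only: 30-day month, decimal units, EUR/USD ignored.
    SECONDS_PER_MONTH = 30 * 24 * 3600                # 2,592,000 s
    link_mbit = 250                                   # Scaleway port speed
    max_tb = link_mbit / 8 * SECONDS_PER_MONTH / 1e6  # MB/s * s -> TB
    print(f"theoretical max egress: {max_tb:.0f} TB/month")   # ~81 TB

    actual_gb = 40_000                                # ~40 TB actually pushed
    scaleway_eur_per_gb = 14 / actual_gb              # ~0.00035 EUR/GB
    aws_usd_per_gb = 0.09                             # first-tier internet egress
    print(f"Scaleway ~{scaleway_eur_per_gb:.5f} EUR/GB vs AWS ~${aws_usd_per_gb}/GB "
          f"(roughly {aws_usd_per_gb / scaleway_eur_per_gb:.0f}x)")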


> That is certainly almost all pure margin for them,

Yes. While there is a pretty hefty up-front cost to all the hardware and fiber interconnects AWS builds between servers, between AZs, and to the internet (probably in the millions per data center, including labor), the margin they charge per GB is insanely high, to the point where they likely made all of it back within the first day it was installed, with any extra usage in the years after being pure margin.

https://blog.cloudflare.com/aws-egregious-egress/


> I can rent unlimited 250mbit servers for 14 euros a month from Scaleway.

If we're looking only at egress, we can find even cheaper alternatives. OVH, for example, offers VPSes with 500Mbps unlimited for 10 euros a month, 1Gbps for 20 euros, etc.


I have a small Hetzner Cloud VPS with 20TB of egress at 100Mbps included for about 3€/month. The prices for new instances went up a few weeks ago because of IPv4 prices, but they are still about 4€.

It's not unlimited, but 20TB is more than enough for most use cases.


I wish this were available in the US...


There are VPS providers in the US that offer unmetered bandwidth. One quick example I could think of was Hostgator.

https://www.hostgator.com/vps-hosting


> For example, customers who are using 25% or more of system resources for longer than 90 seconds would be in violation of our Terms of Service. Please see our Terms of Service or contact us with any questions.

Not really...


I'm not sure where that quote comes from (it's usually useful to provide a link to what you're quoting), but their VPS ToS does not list any network limits. As long as you're not overloading the CPU on your VPS to more than 2x its cores on a 15-minute load average, it looks like you really do have unlimited bandwidth. I'm wondering if you were looking at their Shared Hosting?

https://www.hostgator.com/tos/vps-tos

VPS accounts may not:

exceed a 15 minute load average greater than two (2) times the amount of CPU cores given.

run public IRCd's or malicious bots.

run any type of BitTorrent client or tracker that links to or downloads illegal content.

use an Open/Public proxy, or utilize a proxy to access illegal/malicious content.

use I/O intensive applications which adversely affect normal server operations.

I do see they have a bit of a vague way of getting out of hosting you if you're really being excessive on network throughput. It largely sounds like as long as your network usage isn't impacting other customers, it's still unlimited. I imagine even OVH or any other hosting service will have similar terminology along the lines of: any usage that negatively impacts other customers is prohibited.

https://www.hostgator.com/tos/acceptable-use-policy

You may not consume excessive amounts of server or network resources or use the Services in any way which results in server performance issues or which interrupts service for other customers. Prohibited activities that contribute to excessive use, include without limitation:

Hosting or linking to an anonymous proxy server;

Operating a file sharing site;

Hosting scripts or processes that adversely impact our systems; or

Utilizing software that interfaces with an Internet Relay Chat (IRC) network.


The quote is from the “information” icon next to “Unmetered bandwidth” on the page you posted.


There are two locations in the US (only available via the US website/OVHcloud - I think) and at least one in Canada.



My apologies. I meant Scaleway. I was looking more for the k8s integration.


Ah. Ok. There's this: https://us.ovhcloud.com/public-cloud/kubernetes/

Though I don't know how well it compares.

Edit: pricing: https://us.ovhcloud.com/public-cloud/prices/#containers-and-...


Lack of egress charge is great for write once, read a lot.

But for write once, read a handful of times (e.g. a webhook system that is basically write once, read once of <32KB objects, with a long history kept for redelivery attempts or replaying the entire stream from a point in time), request prices end up being the biggest cost.


Not if, as R2 says, they only charge for objects that exceed a single digit request per second threshold. Your write once read once model still only costs as much as the storage at rest would.


I don’t think they mentioned this would be “per object” and I assumed they meant per account.

Free per-object requests is super game-able. Want 1,000 free requests per second? Just rotate through 100 identical objects.


But then you're multiplying your storage costs 100x, aren't you?


Which are often significantly lower than request (and egress) charges.

1,000 reads per second is 2.6 billion per month. At S3 prices, that would cost around $1,040.

100x storage is pretty minimal to achieve this. For example (rough math sketched below):

100x 128KiB = $0.000192/month

100x 10MiB = $0.015/month

100x 1GiB = $1.50/month
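
A minimal sketch reproducing those figures (assuming S3 GETs at $0.40/million and storage at R2's announced $0.015/GB-month; the list above appears to use decimal GB despite the KiB/MiB labels):

    # 1,000 reads/s for a 30-day month, priced as S3 GETs.
    reads_per_month = 1_000 * 30 * 24 * 3600                  # ~2.6 billion
    s3_get_cost = reads_per_month / 1e6 * 0.40
    print(f"S3 GET cost: ~${s3_get_cost:,.0f}/month")          # ~$1,037

    # Storage cost of the 100 padded copies at $0.015/GB-month.
    for gb, label in [(128e3 / 1e9, "128KB"), (10e6 / 1e9, "10MB"), (1, "1GB")]:
        print(f"100 x {label}: ~${100 * gb * 0.015:g}/month")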


Not disagreeing with you! Just pointing out that there will be additional costs over just 13 cents. Excited to check out R2.


Even on S3, operations costs are a very tiny fraction of bandwidth costs:

https://aws.amazon.com/s3/pricing/

https://www.backblaze.com/b2/b2-transactions-price.html


Yeah, read and write request pricing is still the big unanswered question here.

It’s great that they are happy to have a free tier, but <10 RPS is only <777k requests per day at a constant rate. Great for small uses, but pretty easy to hit with even medium object-store use cases.

My current guess would be somewhere around S3’s $5/million writes and $0.40/million reads (with maybe a 20% discount like they’ve done with other products). Hope I’m wrong and it’s a lot lower though.

I’m probably in the minority, but I want to use an object store like this as a key/value store for <128KiB objects (write once, read a handful of times and then expire in 14-90 days).

I’d gladly pay egress, but usually the request prices (for S3 or DynamoDB) become pretty cost-prohibitive. Even services like Wasabi that, on the face of it, look perfect end up being unsuitable, because the T&Cs limit maximum egress to total monthly storage volume (making write once, read more than once for every short-lived object pretty difficult).

I’d gladly pay for a service with both “reasonable” egress and “reasonable” request prices instead of everyone trying to weight their service to one or the other.


I just stored a few large objects to pad the space on Wasabi. The S3 compatibility was the feature.


Check out Cloudflare Workers KV or Durable Objects :-)


Sorry, should've specifically mentioned them.

Workers KV is also $5/million writes and $0.50/million reads[1] (more expensive than S3, which is $0.40/million reads, and often more expensive than DynamoDB, which is $1.25/million 1KB writes and $0.25/million reads, scaling down to 1/7th that if you use provisioned instead of on-demand).

Workers KV is also eventually-consistent with no guarantee of read-after-write, which is a pretty big limitation compared to alternatives (S3 even has immediately-consistent list operations now after write).

Durable Objects Storage is $1/million writes at 4KB and $0.20/million reads[2], which is pretty good. But you can also only access it via a Durable Object which limits its usage (especially as neither Durable Objects nor its storage is in a production state with final pricing yet) and means you need to also pay for the Durable Object runtime on top of the Storage pricing whenever you want to access that data.

[1] https://developers.cloudflare.com/workers/platform/pricing#p...

[2] https://blog.cloudflare.com/durable-objects-open-beta/
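
To make the comparison concrete, here's a tiny sketch applying those per-million prices to a made-up workload of 10M writes and 50M reads a month (storage, egress, item-size multipliers and the Durable Objects runtime charges discussed below are all ignored):

    # Hypothetical workload purely for illustration.
    writes_m, reads_m = 10, 50                      # millions of ops per month

    per_million = {                                 # (write $, read $) as quoted above
        "S3":                 (5.00, 0.40),
        "Workers KV":         (5.00, 0.50),
        "DynamoDB on-demand": (1.25, 0.25),         # per 1KB write unit
        "Durable Objects":    (1.00, 0.20),         # per 4KB write unit
    }

    for name, (w, r) in per_million.items():
        print(f"{name:<20} ~${writes_m * w + reads_m * r:.2f}/month")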


> But you can also only access it via a Durable Object which limits its usage

Presumably with not-a-lot of code you could write a Durable Object to have an S3 compatible API...


It wasn't the code that was the issue (and you probably wouldn't want an S3-compatible API). It's that on top of the Durable Objects Storage pricing, you need to pay $0.15/million requests + $12.50/million GB-sec of processing + $0.09/GB data transfer.

Given the whole point of R2, the fact that there's still $0.045/GB of ingress + egress data transfer (recently reduced from $0.09) when using Durable Objects undercuts it.

Given the original comment is basically about egress being too expensive and R2 therefore offering it for free, Durable Objects' Storage isn't a particularly good alternative to just using S3 or DynamoDB unless you're already using Workers/Durable Objects.


Out of curiosity what is your use case? What is S3 latency compared to Dynamo?


Pretty similar really. Latency is much less important than throughput for all of my use-cases.


And why doesn't something like dynamo work perfectly for this?


I love DynamoDB for a lot of things, but as a store for objects much larger than 1KiB it's not very cost-effective.

DynamoDB writes are priced per WCU.

A WCU will give you 1KiB written to the DB.

In On Demand, WCUs cost you (depending on region) $1.25/million writes. For Provisioned capacity, you pay about 1/7th that (but need to deal with keeping the Provisioned capacity and auto-scaler at the correct level, so in practice you can maybe get this to about 25-35% the On Demand rate when you know what your usage looks like, with further discounts for actually reserving capacity).

If you're writing 32KiB objects, this uses 32x WCU ($40/million writes) and if these objects are even updated (e.g. to change a status or similar), the update writes the entire object again.

On a performance/architecture note, DynamoDB has a hard per-partition limit of 1,000 WCU (per second). Writing larger objects can very quickly get you to these limits which increase your architectural complexity by requiring you to shard access.

Although, in practice you'd never write large objects to DDB. S3 is _only_ $5/million writes (and $0.023/GB/month vs $0.25/GB/month for DDB). If you're writing even 3+ KB (especially if you expect to need to update an item in the future) it's usually more cost effective to store these blobs in S3.

But either way, you're still going to be paying at least $5/million write requests. For multi-MB blobs, this is a non-issue. But for a K/V store of smallish objects, I think it's pretty expensive.
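
A minimal sketch of that write-cost crossover, using the on-demand prices above (provisioned capacity and storage costs are ignored):

    import math

    def ddb_write_cost_per_million(item_kb: float) -> float:
        # On-demand DynamoDB: $1.25 per million write units, each covering up to 1KB.
        return math.ceil(item_kb) * 1.25

    S3_PUT_PER_MILLION = 5.00                        # flat, regardless of object size

    for kb in (1, 2, 4, 8, 32):
        print(f"{kb:>3}KB items: DynamoDB ~${ddb_write_cost_per_million(kb):.2f}/M writes "
              f"vs S3 ${S3_PUT_PER_MILLION:.2f}/M")
    # 32KB -> $40/M on DynamoDB, matching the figure above; past ~4KB the flat
    # S3 PUT price wins on writes alone, before storage and update patterns.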


I really want to not need to care about object sizes. Whether I'm writing 1 byte or 1 gigabyte, I want to have the same semantics, same API, and same sane billing model.

The underlying tech can differ depending on object size, but as a user, I want it to all transparently look like one unified store.


Me too… but I don't think anything really does this.

_Maybe_ Firebase Store for certain scenarios, but it has its own tradeoffs too.


That's what object stores are.


Yeah, I ignored request charges just because CloudFlare was non-specific. I also ignored them on the AWS side; a million S3 GET requests would cost 40¢.


We’re still finalizing per request pricing. But we want to not charge anything in most cases (<1 op/sec). And even over threshold it will be at least 10% less than S3’s per op price.


That makes sense and is fair. The only times I encounter clients with S3 request pricing problems is when they’re doing something truly… unfortunate.


I am wondering how requests per second work with an open connection. Doesn't that mean you could run file hosting and get free bandwidth as long as the proxy in front of it only allows 10 requests per second to R2?


Keep in mind that in this thread's example there are a million downloads in one month. Spread evenly, this is less than 1 RPS.


"Now let’s remember that the internet is 1-to-many. If 1 million people download that 1GB this month, my cost with @cloudflare R2 this way rounds up to 13¢. With @awscloud S3 it’s $59,247.52."

When I first saw this, my thought was that it's highly misleading, since you'd normally be fronting S3 with a CDN, so you'd only have to pay to transfer a given resource once every few days/weeks/months if the CDN is doing its job, and it can be served to millions of people at no or low cost (this has already been free on Cloudflare for a long time).

Then I came to my senses and realised there are probably orgs who don't know/care about that and are actually paying these fees for every request.

Anyway, it's excellent to see this released. S3 is not just more expensive and tied to a single data centre; it's also bad DX, e.g. setting up permissions is way too complicated and the console is "brutalist" in both look and function. I hope this spurs some more competition in the area of storage.


He answered [1] that using AWS' own CDN actually makes things worse:

> Okay, then it’ll be 8.5¢ to 12¢ per GB depending upon where you happen to be downloading it from.

[1] https://twitter.com/QuinnyPig/status/1443135455763984384


Jeff Bezos has a famous quote - "Your margin is my opportunity"

Kudos Cloudflare! You won the Internet this week!


So CloudFront at the base price is more expensive, depending on the geography. That said, you can restrict your price class to class 100, let the volume pricing kick in, etc. to get maybe a 10% to 50% discount at huge volumes. Or you could negotiate and prepay / pre-commit to get a 60% to 70% discount.

The difference is still insane.


A lot of orgs use all AWS, with CloudFront for their CDN, which also has high outgoing bandwidth fees (cached or not)


Apologies if it was just loose use of language, but S3 is not “tied to a single data centre”. It’s a single region. An object within it is spread across _at least_ 3 availability zones, each AZ consisting of multiple data centres.


Permissions Policies are confusing because they can get very complex... because enterprises need that functionality. IDK, maybe I'm in the minority but I don't think IAM is something that can/should be watered down.


Last I checked there are four separate access control mechanisms for S3, of which IAM (both principal and resource policies) is a single one.


ACLs are deprecated and IAM is a single mechanism with multiple policy types (which again, I don’t think should be simpler… if anything, I find it’s not complex enough to handle some niche access requirements). Can you share what the other separate access control mechanisms you see for S3?


- ACLs / object ownership (which aren't deprecated - https://docs.aws.amazon.com/AmazonS3/latest/userguide/access...)

- IAM / Bucket Policies as mentioned

- S3 Block Public Access (yes this is the name of a distinct feature)

- S3 Access Points

The interactions between all four of the above need to be taken into account to determine the access a consumer has to a bucket or its objects.

I'm not taking a swing at IAM - it's a great system. My grudge is with AWS having disparate interacting security controls for different services (of which S3 is probably the worst offender).

To heave back onto the topic at hand, let's hope R2 can do better having the benefit of a clean slate to build off!


Don’t forget KMS key policies which also apply if you are using SSE-KMS.


CDNs aren't a silver bullet. CloudFlare's free CDN won't allow you to primarily serve non-HTML content. Other CDNs are way more expensive. If your use case is like Twitter or GitHub, i.e. you have a few images on a bunch of pages and the odd file here and there, it's fine. But if you're making a photo or video sharing service, you have no way around it.


I'd argue that Twitter and GitHub are excellent examples of where a CDN does not work.

Some assets like logos, headers, footers, CSS, fonts etc. will improve greatly.

But the main content is highly long-tail and has a very high cache-miss chance.

For 140 chars that's not going to matter. But all the images, videos, large source files, tarballs, builds, etc. add up. No CDN is going to help when a 30MB nightly build is only downloaded once ever, when a 2,000-line source file is read maybe once a month, or when a video in a tweet is seen by at most 2 followers ever.


One other factor to consider: how much of your response latency is dominated by things like HTTPS session initialization or TCP window scaling? If a CDN means that the client does all of that within a few miles of the user, and their traffic to your origin server then runs over the CDN provider's backbone, potentially with other backend optimizations like persistent connections and HTTPS session caching, it might actually be faster than going direct for the users most likely to experience performance problems.

For a single download of a large file, yes, it's probably not going to matter since you're talking a percentage of the total file transfer time but the story can be different for something like a web page where the client makes a single connection and reuses it for everything rather than having to establish connections to multiple servers, wait for window scaling to kick in, etc.


There are numerous reasons for employing a CDN. DDOS protection, for example, is another one.

Many of those reasons will apply to long-tail content like GH or Twitter and offer benefits there.

I was talking about the most obvious benefit of a CDN: caching on edge-servers.


Yes – my point was simply that caching connections (at any of the layers mentioned) is also often enough to be a net win even if the content cannot be cached.


That's true. The example involves serving the same content to a million users, so it's more relevant to the cache-friendly web page scenario.


> CloudFlare's free CDN won't allow you to primarily serve non-HTML content.

Do you have more detail about this? We cache a lot of image content with CF, though I haven't looked at cache effectiveness stats closely.



It's amazing how much conversation Cloudflare has created with this without even releasing the product yet.

I can't remember the last time there was this much interest in a new product launch - which gives an indication of the market need for pricing changes on this front. However, AWS re:invent is right around the corner and I suspect this will cause an announcement or two to come from the AWS side.


AWS could announce something that counters only this, perhaps making egress for S3 alone cheaper. But they won’t make egress itself free or close to free because egress is what keeps you checked in to Hotel California.

Removing egress fees is not just about taking a massive hit to revenue. If it were, they might bite the bullet and hope that the lower prices would attract more customers in the long term. But bandwidth pricing is the moat around their castle. If it didn't exist, Snowflake and others would leave. But more than that, new AWS services benefit from the captive audience choosing them by default. Without the moat, each service has to compete on its merits. AWS Lambda, for example, would need to be better than Cloudflare Workers or fly.io containers. It won't win customers like it does today just because the bandwidth is "free".

They won’t rush to announce anything that might affect the foundation of their multi billion dollar money printing machine. They’ll take their time and see how customers react. If any large customer threatens to leave, they’ll be offered credits to stay. But that information about what customers want and what they’re likely to do will help them come up with a strategy.


Cloudflare has been ramping up a now years long campaign to (finally and rightfully) draw attention to the ridiculous bandwidth costs for AWS with their Bandwidth Alliance and now R2. Say what you want about Cloudflare (disclosure: I'm a happy customer) but this campaign is a substantial benefit for any consumer of cloud services whether you use Cloudflare or not.

My contention, as I've commented here before, is that AWS is taking advantage of a new "cloud native" generation of developers, startup founders, C-suite, etc who have never purchased bandwidth/transit, run an AS (autonomous system), etc. As mentioned in the twitter thread Amazon is essentially charging 1998 prices for bandwidth. I wouldn't be surprised at all if many of their services are essentially loss-leaders and Amazon more than makes up for it with their ridiculous markup on bandwidth.

This "cloud native" generation can wrap their heads around instances, request rates, storage volume, K8s, containers, functions, etc but the pricing on bandwidth is very nebulous, notoriously difficult to estimate/control, and poorly understood. Without organizational experience and knowledge of the real bandwidth costs in 2021 the assumption is likely "well, that's just what bandwidth costs".


> However, AWS re:invent is right around the corner and I suspect this will cause an announcement or two to come from the AWS side.

Call me a pessimist, but I expect that will happen around the same time they make managed NAT gateways reasonably priced.

I'm going to say never. I'll likely be right for years before I am eventually wrong due to the sticker price not moving but inflation making it relatively affordable.

Go ahead AWS, prove me wrong!


AWS's lock-in is in the egress. If you already got multiple petabytes ingested into AWS (for free), it's going to cost you a lot to send them to CF once.


Except you can put R2 in front of S3 and set it to "slurp" mode. That way as objects are requested through the normal course of use they'll be stored in R2. You can then keep S3 as a backup, or delete the objects that have moved over.

Being a proxy is cool.


The power dynamic is the same as in retail: whoever is closest to the "edge" wins in the long term. Kinda glad I have a bunch of NET :)


Slurp mode is a great idea

Kudos to all involved in this project! :)


This is on par with Google launching Gmail with 1GB + “infinity” space. The other email providers didn’t know what hit them. It took them years to catch up.


AWS can't reduce prices and accept less revenue. That's not their goal; their goal is to keep expanding products.

Responding would also put Cloudflare in the spotlight, and Cloudflare already has the core products of a cloud hoster (RDBMS, NoSQL, serverless, apps, ...).

Additionally, reducing egress and making the use of 3rd-party products viable is not in their interest long term (e.g. the fork of Elasticsearch).

What do you think they can do/announce?


Url changed from https://unrollthread.com/t/1443028078196711426/, which points to this*. It's fine to provide alternate URLs for Twitter threads (I know how some of you feel) but please do it by posting them in the comments. From the site guidelines:

"Please submit the original source. If a post reports on something found on another site, submit the latter."

https://news.ycombinator.com/newsguidelines.html

* or rather, doesn't point to it? that's not good (edit: it does point to it now - sorry if I missed it earlier.)


The unrollthread page does have a link to the original twitter thread. At the top, labelled @QuinnyPig.

Anyway, guidelines are guidelines, not cast iron rules. When the original site is hot garbage I'd much rather see an alternative.


The community is divided on that. No doubt most agree that hot garbage is in the mix but there is no agreement at all about what the hot garbage actually is—one user's garbage is another user's...delicious pot pie? I don't know.

If there were a consensus I'd be happy to go with it and suspend the rule, but given that there's zero consensus, we should stick with the rule.


> When the original site is hot garbage I'd much rather see an alternative.

I find the experience worse than Twitter's default web view with the odd reformatting and wasted space around tweets — compared to the native view, I see 5/7 as much text on a page — and most thread unrollers intentionally obscure the original link to keep you on their site so you have to be familiar with their UI or scrub the links to find it if you want to interact with the tweet (a common use for social media).


The layout on unrollthread is really confusing if you don't realize it came from twitter. It's kind of garbage too. I kept wondering whether there were missing images in some of the ". . ."

Something like https://threadreaderapp.com/thread/1443028078196711426.html I might advocate for. Better layout, plenty of links back, etc.


Thank you. The original link was worse than the twitter thread.


If the pricing model described in these tweets is actually accurate once R2 is released and once AWS responds (assuming that R2 doesn’t suck for other reasons), then this is business-plan-changing good pricing.

Imagine a competitor to YouTube. Storing tons of video content is expensive. Serving it to users is also expensive, especially with those egress prices. 1M x 1GB of egress costing $54k is completely undoable profitably unless that data is very valuable. Meanwhile <$1 is “put this MVP startup product on my credit card while applying to YCombinator” affordable.

If R2 could point to Backblaze instead of S3 as a compatibility layer (it's S3-compatible, so that should be easy), then you could have CHEAP redundancy and low costs without any egress fees.

I can think of several niche video sharing use cases not well served by YouTube that just aren’t profitable enough to host, so no startups have succeeded there, and I can totally see that happening now. Maybe I’ll try to make it happen…


Podcast distribution becomes very cheap, and this can unlock a lot of innovation in that sector too.


Already on it, nobody else read this comment pls

Hosted.fm - free egress will mean no overage pricing needed!!

Inb4 Cloudflare announce a podcast hosting platform haha


For me, the last piece of the puzzle is RDS. If cloudflare can deliver a managed database, I would not see a reason to use AWS for new projects/ideas I'm tinkering with.


Working on it. Distributed makes it… non-trivial. But if we pull off what we think we can…


While we are asking for stuff:

Full text search, please :)

Inverted indexes are almost always eventually consistent, and easy to integrate in a distributed context.

- Store documents into R2 or Durable Objects or KV or (Cloudflare future storage).

- Create inverted index files (similar to Lucene)

- Cache the inverted index files at the edge. Cloudflare’s bread and butter.

- Run query execution nodes at the edge. Deal with the occasional cache miss by proxying to a different region.

- It's impossible to deal with relevance or ranking for that many different use cases. Let the clients control every aspect of relevancy and ranking, e.g. similar to the flexibility of Elasticsearch.


Do GPUs at a reasonable price next and you'll own the ML space.

R2 will be amazing for hosting computer vision datasets and models.



Great start, but the field is moving on from TensorFlow. TorchScript support would be amazing.


If we can get GPUs deployed at the scale we want, adapting to whatever language or framework you want to use to program them will be easy. The challenge is getting GPUs into enough machines in enough places to make this interesting. But, again, we're working on it.


I'm not sure the GPU-for-ML space is a fit for Cloudflare.

It is for a cloud hoster. But ML does not take advantage of Cloudflare's competitive edge (their SDN).

In the same way, I don't think Docker containers are an interesting/good fit (I could be wrong on this though).

Edit: I was wrong...


I have an image recognition service on GCP where the majority of the latency is the initial upload. Pushing that closer to the user would be nice.


> It is for a cloud hoster.

Which is absolutely where CF are headed, as evidenced by R2.

Don’t forget that S3 was the first AWS product. It’s fitting that it’s also CF’s first real foray into cloud-land.


Well, I meant a cloud hoster together with the advantage of the edge.

I thought that AI wasn't interesting on the edge.


Goddamn, extremely exciting stuff. Between this and fly.io, looks like exciting times for building distributed systems!


I hope for enterprise customers it will allow pay-as-you-go. We were really interested in Spectrum and the price was in our range, but having a contract for our current capacity for 1 year didn’t make sense, and our needed throughput was very spiky. It didn’t make sense to pay for 3k concurrent users when our 80th percentile would be around 1k.


RDS (PgSQL) is what's keeping us on AWS; CF delivering a better alternative will trigger us to evaluate switching over.


That's amazing news. The only thing left is some kind of EC2 + ECS solution before I can shift all my AWS workloads to Cloudflare.

Already really impressed with Cloudflare Sites and planning to adopt Images soon.

You guys deliver at a dizzying pace, it's truly incredible.


They could just buy Vultr for that. I've been very happy there for 4 years.


At what point do you become a full blown hosting company and is that the goal?


They’ve built out a global network presence… it would be silly for them to not at least try. I really hope they build a cloud; AWS and GCP are the only serious players in the game but neither seem too concerned about lowering prices. It’s all about providing “managed X” and “hybrid Y” for them.


I would at least throw Azure into the serious players category. You have companies like Digital Ocean trying to compete as well, definitely not as many products though.


Honourable mention for Scaleway, too. I use their products because they just work, they're cheap, have good tf modules etc. Helps that they're in the EU.


Azure is kept afloat mostly due to the windows ecosystem. They won’t go away as long as that exists but I don’t see them growing into a significant player.


Azure is already a significant player.


Nope


They already are very much a hosting co.


SQLite on R2?

I'm wondering if Cloudflare can add additional languages. The current languages on top of WASM are not so interesting to me. I'd love to try them out more seriously though.

E.g. a managed OpenFaaS offering would be very interesting (since it supports .NET Core, which in the current phase wouldn't be added, I think).


What language(s) do you want?


Personally, I'm interested in .NET Core (not Blazor per se; I don't want "to end up like WPF").

But my interest would be satisfied enough if I knew languages outside of WASM will/can be added in the future.

Edit: thanks for checking :)



Interesting. I like F# as an experiment, but functional programming is not what I use daily, and it's a big change from my day-to-day flow (unfortunately).


I get the "lol aws is owned" angle but it beggars belief that a petabyte of data transfer only costs $0.13. This is clearly unsustainable and we'll have to find out later what the hidden cost is


My long-term understanding from Cloudflare blog posts is that network activity is actually incredibly cheap, more so when you own a ton of the underlying infra.

Thing is, all cloud providers do own the pipes, so they just rent-seek on egress pricing. Cloudflare is disrupting that model.


This is correct. And largely a fixed rather than variable cost for us.


You’ve very likely forced Amazon’s hand here wrt egress pricing, and yet this move alone has won my business even if everything else was equal. Bravo.


^^^ The CEO of Cloudflare.


Do you have any word on the speed of delivery from R2?

That's the only reason I can see why paying for S3 would be worth it. Sure, egress may be free, but what's the point if I can't download a file faster than 1MB/s?


Network and bandwidth are absolutely not free, but they're certainly not as expensive as what the three major public clouds are charging: $0.10 per GB equals $100 per 1TB of transfer.

Linode, Digital Ocean and other hosting companies sell small VPSes with 4TB of bandwidth for $10/month and still make a profit. At places like OVH and Hetzner, bandwidth is virtually unlimited.


One caveat for the VPS business model is that they're almost certainly relying on the fact that most users don't use all their allocation, and overprovisioning accordingly. Whereas something pay-as-you-go like S3 or R2 is going to have users paying for exactly the volume they use.


Linode, Digital Ocean, OVH, Hetzner all have hourly billing.


Sure, they have this model for CPU time, but not for network bandwidth. If your monthly allocation is 4TB and you use it for one day, you have 4TB / 30 bandwidth in that day. If you go over it they reserve the right to cut you off, if you go significantly under it, they don't discount you.

The average Linode customer is not using 136GB of bandwidth per host per day.


That is 1/250 the bandwidth for 60x the price, no?


You can see what egress costs companies in this Cloudflare blog post - TLDR they pay for the size of the hose, not the amount of water flowing through it https://blog.cloudflare.com/aws-egregious-egress/


Exactly. Those of us who spent time in the data center and network world had some adjustment to go through when we first encountered the cloud bandwidth billing model. It hasn’t aged particularly well either.


Which is why so many never left that world. e.g. I run my k8s clusters on dedicated Hetzner machines to avoid this entire pricing model. Even accounting for devops time it’s cheaper than AWS, GCP or Azure.


The other TL;DR is that AWS's markup on bandwidth for US/Canada is 80x.


I’d speculate CF's secret sauce here is working their asses off to negotiate peering arrangements globally and avoiding paying for transit, then passing on the savings.


Cloud providers simply charge insane markups on very cheap bandwidth.

You or I can buy transit from e.g. he.net for ~$0.15/mbit/mo. 10Gbit transit @ $1500/mo will transfer 3PB/mo.

3PB egress on AWS = $157,491.11/mo.
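
A quick sketch of that math (assuming the port runs flat-out over a 30-day month; real utilisation is lower, which only strengthens the point):

    SECONDS_PER_MONTH = 30 * 24 * 3600
    port_gbit = 10
    monthly_pb = port_gbit / 8 * SECONDS_PER_MONTH / 1e6       # GB/s * s -> PB
    print(f"10Gbit flat-out: ~{monthly_pb:.1f} PB/month")       # ~3.2 PB

    transit_monthly = 10_000 * 0.15                             # $0.15/Mbit/mo
    usd_per_gb = transit_monthly / (monthly_pb * 1e6)
    print(f"transit: ~${usd_per_gb:.5f}/GB vs AWS's $0.05-0.09/GB egress tiers")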


100G transit is even cheaper at $5k/month.

And that’s before you take into account you can do settlement free peering at Internet exchanges.


> R2 will provide 99.999999999% (eleven 9’s) of annual durability, which describes the likelihood of data loss. If you store 1,000,000 objects on R2, you can expect to lose one once every 100,000 years — the same level of durability as other major providers.

I've seen this kind of marketing a lot from cloud providers, and it always makes me wonder. At that point the probability that you're going to lose your data gets dominated by the probability of a rogue employee deleting your data, or of 3 simultaneous natural disasters destroying every warehouse your data was replicated to, or, most likely, someone gaining malicious access and deleting a bunch of stuff. Those are a little harder to quantify though.
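
For what it's worth, the quoted figure is just this expected-value arithmetic, which says nothing about the correlated failures or malicious deletion scenarios above:

    # Eleven 9s of annual durability = each object has a ~1e-11 chance of loss per year.
    objects = 1_000_000
    annual_loss_probability = 1 - 0.99999999999                 # ~1e-11
    expected_losses = objects * annual_loss_probability         # ~1e-5 objects/year
    print(f"roughly one object lost every {1 / expected_losses:,.0f} years")  # ~100,000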


These sorts of numbers are total bunk; the likelihood is pretty high that one of their services goes down (or something happens to your connection to Cloudflare) and knocks you offline for a few hours, maybe a day.


Durability is not availability.


Durability != uptime. The 11 9s of durability encompass the likelihood they'd lose your data.


The chance of three simultaneous natural disasters should still be a lot lower than that number. But yeah, hackers or software faults are almost certainly more likely than that number would suggest.


I just... can someone more knowledgeable verify that this isn't overlooking something? This feels way too good to be true.


Yeah, I overlooked the fact that the AWS calculator added Developer tier support. Past that it's correct. I assure you, AWS would have Set The Record Straight(tm) otherwise before now.


Roast GCP egress next!


If AWS comes down to within hailing distance of reasonable, everyone else will too.


I mean come on, Google's egress is free. They lease their dark fiber to run backbone links and telecoms. How much do they pay to egress YT?


What are you talking about? It's absolutely not free at all.

https://cloud.google.com/vpc/network-pricing

https://cloud.google.com/storage/pricing#network-pricing

The only thing that is free is between nodes of the same zone using specifically the internal IPs.


I think they meant it's essentially free to Google, making charging so much for it especially egregious.


That doesn't really account for multiple orders of magnitude in price (although yes it was a chunk)


AWS marks up egress costs by up to ~80x (US/Canada), according to CloudFlare [1]. They're obviously competitors so take it with a grain of salt, but I'm able to egress 40TB for 14 euros on Scaleway (vs. $3800 for AWS egress), so the reality is AWS marks up egress unbelievably high.

[1]: https://blog.cloudflare.com/aws-egregious-egress/


Besides Cloudflare's pedigree and CDN reach, is there any other reason this is considered revolutionary when, say, Wasabi [0] already exists with the same no-egress-charges pricing model? I'm not affiliated with either company, just trying to get a handle on the hype.

[0] https://wasabi.com/


> For example, if you store 100 TB with Wasabi and download (egress) 100 TB or less within a monthly billing cycle, then your storage use case is a good fit for our policy. If your monthly downloads exceed 100 TB, then your use case is not a good fit.

> If your use case exceeds the guidelines of our free egress policy on a regular basis, we reserve the right to limit or suspend your service.

https://wasabi.com/paygo-pricing-faq/#free-egress-policy


In short, they're a backup/archival service, not a general-purpose object store. Egress is free because you're expected to use it infrequently.


Yes. But I guess it can also be used like a general-purpose object store; we just have to pay them for the bandwidth used, or have a CDN in front to cache stuff.


No, I really don't think it can. Literally nothing in the marketing copy describes this use case -- it's all about backup/archival use applications. And the bit in the TOS you quoted makes it clear that Wasabi isn't interested in high-bandwidth customers at any price point. Even with a good caching CDN, most public applications won't be able to hit the "egress < storage" target that Wasabi expects.


Wasabi's pricing also includes a 90-day minimum storage duration, so store 1PB for an hour and you pay for it for 3 months.

Also, egress is not always free: whether you stay within the free-egress policy depends on how much of the data you need to download.

Source: already discussed

https://news.ycombinator.com/item?id=28685542


They also have a $6 minimum/mo, even if you are not using any storage.

https://wasabi-support.zendesk.com/hc/en-us/articles/3600023...

They also have a minimum object size of 4K:

"If you use Wasabi to store files that are less than 4 kilobytes (KB) in size, you should be aware that Wasabi’s minimum file size from a charging perspective is 4 KB. You can store files smaller than 4 KB with Wasabi but (for example), if you store a 2 KB file with Wasabi, you will be charged as if it were a 4 KB file. This policy is comparable to minimum capacity charge per object policies in use by some AWS storage classes (for example, AWS S3 IA has a minimum capacity charge of 128 KB)."

The one I really hate is that 90-day deleted-file charge. If you upload a 1K file every day into the same filename, you get charged for:

4K the first day, plus 8K the next day, plus ... plus 360K the 90th day and every day after that if you keep uploading every day

They are charging 360x your actual storage usage. Not because they are actually storing anything extra, but just as a policy.

If you do this for a year, you are charged for storing 113MB on their service. You still only have 1K actually stored there.
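
Sketching that comment's own arithmetic (a 1KB object overwritten daily, billed at the 4KB minimum and kept on the bill for 90 days after each overwrite; my reading of the policy, not an official Wasabi formula):

    MIN_KB, RETENTION_DAYS = 4, 90

    # Day d carries charges for up to 90 prior daily uploads, each billed as 4KB.
    total_kb_days = sum(min(day, RETENTION_DAYS) * MIN_KB for day in range(1, 366))
    print(f"billed over a year: ~{total_kb_days / 1024:.0f} MB-days")   # ~113
    print("actual data stored at any moment: 1 KB")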


Wasabi limits egress to your total storage.


Another comparable is 100tb.com, which can give you 100TB/month of bandwidth for $275/month (or, for the 1 million gigabytes the linked article uses, $2,750).

Cloudflare still wins this estimate at 13c, and AWS remains expensive at ~$55k.

I'm assuming the bandwidth traffic helps improve Cloudflare's other offerings (e.g. insights).


That's servers though right? Not object storage.


Ah, each server gives you 100TB/month of network bandwidth.


A much more extensive discussion of the same news can be found here: https://news.ycombinator.com/item?id=28682237.


I've wondered for a long time what the cost to bill accurately for cloud providers is. Is a lot of the cost associated with S3 API requests or outbound network traffic actually the cost of all the logging/metrics infrastructure needed to bill accurately for it?

> R2 will zero-rate infrequent storage operations under a threshold — currently planned to be in the single digit requests per second range.

If you a) don't bill for those in the 90th percentile of access and b) aren't too specific about the point at which you start billing, then you can eliminate a huge amount of complexity.


You could keep all your stuff in AWS and this would still come in handy. The web servers in EC2 would still receive the data from the user and then upload it to S3 and/or R2.

From there you could give users the link to R2 to get the cheaper bandwidth/usage fees.

Cloudflare also has additional services like Image Resizing which would completely remove 3rd-party services like Imgix.

There are too many companies paying thousands for S3+Imgix which could easily save 80% of their bill with alternative solutions (haven't tried CF resizing yet).

My post about it https://pedrogomes.medium.com/how-to-save-90-of-your-imgix-m...


I'd love to love this, but I've been bitten so many times by flashy Cloudflare products that sound amazing on paper, that I'm extremely sceptical of anything that the company offers at this point. If it sounds too good to be true -- it usually is.


Can you give any specifics on this?


Like when an exception in Cloudflare Workers would display a page telling you to look in the logs for the error, but it took me days of back and forth with support to figure out that THERE ARE NO LOGS. Or Cloudflare Pages not having atomic deployments leading to occasional small downtime on deployments. Or the horror of setting up full strict SSL with Heroku...


Thanks for sharing!


Kudos Cloudflare.. but

I've tried to migrate to Cloudflare a couple of times. I was never able to find a way to set up routing for each `/path` of my website. I have different repos, each controlling one aspect of my website. This is a breeze to set up using CloudFront `origins`. My best guess is that I have to set up an Nginx between my content and Cloudflare.

Maybe it's me, but I've always found the Cloudflare UI/UX confusing and hard to use. AWS is a very bare-metal UI, but it actually makes sense.

Hopefully launching new products like R2 will force Cloudflare to improve the UI.


I would 100% disagree with this. The AWS UI is considerably more confusing and complex than the Cloudflare UI, even if just because Cloudflare has fewer features and options.


Yeah, AWS is a mess. Maybe the proper way to phrase this: it's not really a UI problem, it's more about how features are organized.

The problem remains that with AWS CloudFront I can route content to different sources.

I can't do this with Cloudflare.


It’s really good to see Cloudflare step up and compete like this. If they are successful Amazon will lower prices or throw in so much extra utility or value that the existing costs will make sense.


With the introduction of their own big storage, I hope Cloudflare also has plans for expanding Workers into bigger tasks. I'm currently doing a bunch of intensive data processing on Lambda as part of site admin operations which would be impossible to port over to CF Workers because of their strict limits. Those strict limits make sense without their own big storage in the picture, but with the introduction of R2 it would make sense to me to expand Workers to enable intensive use cases as well.


I hope it means that content has a chance to be un-siloed again. As I understand it, sites like image hosts are mostly bandwidth-limited... Perhaps we'll see some good developments here.


Very interested to see the pricing per operation (and the definition thereof; is it going to be 4 kB = 1 op like Durable Objects?). Apart from using it at scale at work, this might also become a pretty good and cheap solution for personal backup.


One thing I like about R2 is that it is affordable and can even be used for private stuff. Most AWS stuff I cannot use as a private person because the costs explode.


(Maybe off topic)

Anyone tried using User > Cloudflare > S3 directly?

Seems slower than User > Cloudflare > CloudFront > S3. (Using this for website CDN)


If there's a speed difference at all, it would be negligible, and would only apply for the first few requests, as after that, Cloudflare will have a cached copy.


I've seen both types of deployments. I never understood the second one.

Fewer hops generally means better performance.


$53k* without Developer support.
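
For reference, that ~$53k figure can be roughly reconstructed from AWS's published 2021 internet egress tiers (a back-of-envelope sketch; requests and the free first GB are ignored, and the support plan the calculator added accounts for the gap to the ~$59k headline number):

    # (up to 10TB, next 40TB, next 100TB, everything beyond 150TB)
    TIERS = [(10_000, 0.09), (40_000, 0.085), (100_000, 0.07), (float("inf"), 0.05)]

    def aws_egress_cost(gb: float) -> float:
        cost, remaining = 0.0, gb
        for tier_gb, price in TIERS:
            used = min(remaining, tier_gb)
            cost += used * price
            remaining -= used
            if remaining <= 0:
                break
        return cost

    print(f"1,000,000 GB of egress: ~${aws_egress_cost(1_000_000):,.0f}")   # ~$53,800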


It hit me today, bandwidth is the digital currency of the internet. Discuss.


From very early in our history, this is something my cofounder Michelle often said. It’s fascinating how the economics and incentives of bandwidth change with scale. Figuring that out early was key to our success and continues to inform our strategy.

The other interesting thing is not all bandwidth is created equal. We provide services like 1.1.1.1 and infrastructure to root server operators in order to have a “currency” that’s helpful to ISPs who invite us into their networks. The number of times we’ve gotten invited into some telecom’s network because the network admin’s favorite sports team (or whatever) uses the free version of our service is… fascinating. And since every server in our network runs every service, once we’re in for one thing, everything we do in the region gets better and less expensive to operate.

This means, counterintuitively, that as we add more locations to our network our costs generally go down, not up.


About 8 years ago there was a brouhaha with some ISPs charging Netflix (or their CDNs at least) for peering; is that something that Cloudflare has run into?

Given the rather large margins of home ISPs, it seems like they have you over a barrel if they set the price point sufficiently below the cost of transit without peering...


How much space and power do you get in the ISP locations you're in? Are we talking a single 4U and 8A, or full racks? I suppose the answer is always "it depends." What does the hardware side of CF look like? You must be toying with custom silicon.

It always baffled me that regional ISPs weren't selling that last-mile bandwidth to customers for real dollars. Even over DSL lines in the late 90s one could have run an amazing VoD service.


It depends, as you said. You can look at our network map and get some sense of where we likely have a little vs a lot of equipment:

https://www.cloudflare.com/network/

But we deploy enough equipment to meet local demand (so it will roughly correlate to the population served). And that makes sense for ISPs to support, because the alternative is them paying for backhaul bandwidth themselves.


It's not a currency but a utility. You can't buy and hold bandwidth like you can currency.


Not directly, but Filecoin is launching a retrieval market where you'll indeed be able to buy, hold, and sell bandwidth like a currency, just like you can for storage today.


Back when bandwidth and network connectivity was an extremely rare commodity, we had to innovate to provide the functionality we wanted. Torrents, email, multi-part download/upload, compression algorithms, traffic shaping, software mirrors. The resilient parts of the digital world were born out of creatively circumventing our limitations.

In a world with no limitations, we're free to no longer be creative.


Time to be creative in other dimensions. Creativity doesn't only have to happen in scarcity, it can also flourish in largesse.


Price still seems absurdly high for backup purposes.

$0.015/GB/month means $150/10TB/month, but you can buy a 10TB USB hard drive for around $150-250 which is likely to last at least 10 years on average, so the service costs on the order of 100 times the cost of just the drives.

Though maybe CloudFlare is not the best suited company for the "lots of cheap drives for backup" business.
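
Spelling out that comparison (drive price and lifetime are the assumptions above; electricity, redundancy and offsite copies are ignored):

    r2_monthly = 10_000 * 0.015            # 10TB at $0.015/GB-month = $150
    r2_ten_years = r2_monthly * 12 * 10    # $18,000
    drive_cost = 200                       # ~$150-250 for a 10TB USB drive
    print(f"R2 over 10 years: ${r2_ten_years:,.0f} vs one drive: ${drive_cost} "
          f"(~{r2_ten_years / drive_cost:.0f}x)")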


I'd say that comparison over-simplifies what Cloudflare does behind the scenes.

- Electricity costs

- 99.999999999% (eleven 9’s) of annual durability

- Speed of your HDD (around 200 MB/s?) vs speed of R2

...


- 99.999999999% (eleven 9’s) of annual durability

By the way, these are the SLA terms from Cloudflare's website:

Q: What type of Service Level Agreement (SLA) do you offer?

A: Cloudflare provides a SLA for our Business and Enterprise plans.

Cloudflare's Business plan offers a 100% uptime guarantee. In the event of downtime, customers receive a service credit against their monthly fee, in proportion to the respective disruption and affected customer ratio. You can view the full Business plan SLA, here: https://www.cloudflare.com/business-sla/

Cloudflare Enterprise customers receive a 10x credit (included in the Standard Success Offering) against the monthly fee, in proportion to the respective disruption and affected customer ratio. The Premium Success Offering includes 25x reimbursement uptime SLA. You can view the full Enterprise plan SLA, here:

https://www.cloudflare.com/enterprise_support_sla/



