I spent some time on Friday trying out Cloudflare Tunnel, and boy was it a bad experience. The big killer was that the tunnel endpoint they gave me was IPv6-only, and I'm not sure it was even valid. None of my devices could connect to it: MacBook, phone, Linux box, AWS instance...
On top of that, I keep running into unexpected roadblocks with Cloudflare. For example, when I was trying to set up the tunnel, they required me to set up a dedicated domain; you can't use a subdomain of an existing domain. That's probably fine if you are rolling it out as a production service, but for just testing it to make sure it even works (see the IPv6 comments above), I just wanted to set it up as a subdomain.
I'm very surprised to see all of the negativity toward Cloudflare's usability and value here.
It's been relatively painless for me to set up tunnels secured by SSO to expose dashboards and other internal tools across my distributed team using the free plan. Yes, I need to get a little creative with my DNS records (to avoid nested subdomain restrictions), but this is not really much of a nuisance given all of the value they're giving me for free.
And after paying just a little bit ($10-20 per month), I'm getting geo-based routing through their load balancers to ensure that customers are getting the fastest connection to my infra. All with built-in failover in case a region goes down.
> I'm very surprised to see all of the negativity toward Cloudflare's usability and value here.
As someone who uses Cloudflare at a professional level, I don't. To me, each and every service provided by Cloudflare feels somewhere between not ready for production and lacking any semblance of a product manager. Everything feels unreliable and brittle, even the portal. I understand they are rushing to release a bunch of offerings, but that rush does surface in the offerings themselves.
One of my pet peeves is Cloudflare's Cache API in Cloudflare Workers, and how Cloudflare's sanctioned approach to caching POST requests is to play tricks with the request, such as manipulating the HTTP verb, URL, and headers, until it somehow works. It's ass-backwards. They own the caching infrastructure, they own the JS runtime, they designed and are responsible for the DX, and yet all they choose to offer is a kludge.
Also, Cloudflare Workers are somehow pitched as customizable request pipelines, yet other Cloudflare products, such as the Cloudflare Images service, can't be used with Workers because it fails to support forwarding standard request headers.
I could go on and on, but ranting won't improve anything.
POST requests aren't really meant for repeatable stuff, though. Even browsers will ask for confirmation before letting you reload the result of a POST request. I think you are holding it wrong.
Now, I get it, things happen and you gotta do what you gotta do, but then you aren't on the happy path anymore and you can't have the same expectations.
> POST requests aren't really meant for repeatable stuff, though.

That's simply wrong. Things like GraphQL beg to differ. Anyone can scream this until they are red in the face, but the need to cache responses from non-GET requests is pervasive. I mean, if it weren't, then why do you think Cloudflare recommends hacks to get around it?

https://developers.cloudflare.com/workers/examples/cache-pos...

Your line of argument might have had a theoretical leg to stand on if Cloudflare hadn't gone out of its way to put together official examples of how to cache POST requests.
The Cache API is a web-standard API. We chose to follow it in an attempt to follow standards. Unfortunately it turned out to be a poor fit. Among other things, as you note, the "cache key" is required to be HTTP-request-shaped, but must be a GET request, so to cache the result of a POST request you have to create a fake GET request that encodes the unique cache key in the URL. The keys should have just been strings computed by the app all along, but that's not what the standard says.
We'll likely replace it at some point with a non-standard API that works better. People will then accuse us of trying to create lock-in. ¯\_(ツ)_/¯
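To make the fake-GET workaround concrete, here is a minimal sketch: derive a synthetic GET URL from the request URL plus a hash of the POST body, and use that as the cache key. The `fnv1a` and `postCacheKey` helpers are my own illustration, not any Cloudflare API; only the commented-out `caches.default` calls refer to the Workers Cache API described above.

```typescript
// Sketch of the "fake GET request as cache key" trick for caching POST
// responses. Helper names are illustrative, not a Cloudflare API.

// FNV-1a hash, used here only to keep the sketch dependency-free.
function fnv1a(data: string): string {
  let hash = 0x811c9dc5;
  for (let i = 0; i < data.length; i++) {
    hash ^= data.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash.toString(16).padStart(8, "0");
}

// Encode the POST body into a synthetic GET URL, since Cache API keys
// must be GET-request-shaped.
function postCacheKey(url: string, body: string): string {
  return `${url}?postHash=${fnv1a(body)}`;
}

// Inside a Worker, the lookup would then look roughly like this
// (untested sketch of Workers Cache API usage):
//
//   const body = await request.clone().text();
//   const key = new Request(postCacheKey(request.url, body), { method: "GET" });
//   let response = await caches.default.match(key);
//   if (!response) {
//     response = await fetch(request);
//     await caches.default.put(key, response.clone());
//   }
//   return response;
```

The point is just that identical URL + body pairs map to a stable key, which is the app-computed string key hiding inside a Request-shaped costume.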
I really wanted to love Cloudflare; I even invested in it a couple of years ago, I was so confident in their vision. But...
- They won't tell you at what point you will outgrow their $200/mo plan and have to buy their $5K+/mo plan. I've asked their support and they say "it almost never happens", but they won't say "It will never happen." HN comment threads are full of people saying they were unexpectedly called by sales saying they needed to go Enterprise.
- There are no logs available (or at least there weren't 6-9 months ago) for the service I proxy through Cloudflare at the $200/mo level; you have to go with Enterprise ($5K+, I've been told) to get logs of connections.
- I set up some test certs when I was migrating, and AFAICT there is no way to remove them now. It's been a year; my "Edge Certificates" page has 2 active certs and 6 "Timed Out Validation" certs, and I can't find a way to remove them.
- The tunnel issue I had on Friday when trying to set up my tunnel; more details in another comment here, but apparently the endpoint they gave me was IPv6-only and not accepting traffic.
- Inability to set up a tunnel, even to test, on a subdomain. You have to dedicate a domain to it, for no good reason that I can tell.
Works great for me: 5 subdomains going to various ports on my dev PC for whatever project I'm testing (8000 for Laravel, 3000 for Next.js). Way better than ngrok.
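For reference, that kind of multi-subdomain setup maps onto cloudflared's ingress rules in its config file. A sketch with placeholder hostnames, tunnel ID, and paths (substitute your own):

```yaml
# ~/.cloudflared/config.yml -- example only; your tunnel ID, credentials
# path, and hostnames will differ
tunnel: <your-tunnel-id>
credentials-file: /home/me/.cloudflared/<your-tunnel-id>.json

ingress:
  - hostname: laravel.example.com
    service: http://localhost:8000   # Laravel dev server
  - hostname: next.example.com
    service: http://localhost:3000   # Next.js dev server
  - service: http_status:404         # required catch-all rule
```

Each hostname also needs a CNAME in the zone pointing at the tunnel, which `cloudflared tunnel route dns` can create for you.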
It was a smooth experience for me. Just start the cloudflared container with the provided key in the environment and you are done.
I also don't have IPv6, but it is not required, and if I remember correctly I did not have to specify any endpoints, just the key.
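A minimal sketch of that container setup, assuming the remotely managed tunnel flow where the dashboard hands you a token (the image name and TUNNEL_TOKEN variable are what cloudflared's docs use; the rest is illustrative):

```yaml
# docker-compose.yml -- minimal sketch; cloudflared reads the tunnel token
# from the TUNNEL_TOKEN environment variable
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}   # token copied from the Zero Trust dashboard
    restart: unless-stopped
```

With this flow the ingress rules live in the dashboard rather than a local config file, which is why only the key is needed.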
We're using Cloudflare Zero Trust quite extensively, and I find them quite easy to use. Works perfectly from AWS as well, all their endpoints have both IPv4 and IPv6 IPs.
Maybe the tunnel they provisioned for me was just broken, because:
$ host -t A 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com
9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com has no A record
$ host -t AAAA 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com
9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com has IPv6 address fd10:aec2:5dae::
$ telnet -6 9c8855f1-e47f-47bf-9e0e-66938be0f076.cfargotunnel.com 443
Trying fd10:aec2:5dae::...
telnet: Unable to connect to remote host: Connection timed out
I got cloudflared running fairly easily (though their Debian package repo seemed broken, and they didn't list an option on the setup page for downloading just the binary; I was able to find it after some searching). That part went smoothly; I just couldn't connect to the tunnel they provisioned.
> Tunnels are poorly documented.

I'd tend to agree with that, but I was able to find some YouTube videos of people setting them up. It was still a bit of a challenge, though, because they have moved the menus all around in the last few months, so even the most recent videos I could find were pointing to locations that didn't exist and I had to go hunting for them.
I would have preferred to just use Tailscale for this, but we are using Headscale and want to make a service available to our sister company, which doesn't have e-mails in our Google Workspace (where we have the OIDC for auth), so they can't be part of our tailnet without us buying them logins or setting up accounts in Keycloak or similar.
Haven't used Cloudflare in a while, but in the past you needed the $200/month Business plan to be able to use subdomains of an existing domain with DNS hosted elsewhere.
Nah, I'm on the free tier. I register domains through them, and I think I pay around $10/month for R2 storage. All kinds of other freebies come with that tier: D1 databases (SQLite), Workers (think Lambda)...
We were also super frustrated with Cloudflare Tunnel, especially from a developer experience and firewall perspective. So we built Tunlr to replace it: https://tunlr.dev. It's Cloudflare Tunnels but you can self-host it and provide your own domains for your internal developers to use, and it proxies over HTTP/SSE which plays nicely with firewalls.
Tunneling isn't that big of a toll on resources: it requires neither storage/disk space nor compute power (CPU); all it needs is ingress/egress (spare bandwidth). A non-profit or a decent telco business could easily offer it; consider that many hosting companies offer an entire package in their free tier today (compute + disk + egress).
For several years, ngrok was practically free; only recently, once it gained popularity, have they started monetizing.
That really sums up the Cloudflare experience, and this is coming from someone heavily invested in their Workers platform. They have lots of products and keep pumping out more, but except for DNS, most of them are half-assed with weak maintenance/support.
That's not a fair take. I will give Cloudflare a lot of shit for some of their products, but some of them are 100% best in class. For instance, R2 is just better than S3, and KV is better than the AWS/GCP options. The pricing is better, it's multi-region by default, and there's less ops overhead.
This is good to know. I haven't used R2, it's been on my radar but I haven't taken the steps to start using it. Partly because my experience with the rest of Cloudflare has been middling to poor. I'd love to save on our S3 bill, which is substantial, but it's going to take significant development to get there and it's an unknown how much it'll actually save. There are too many stories of people getting called by enterprise sales when their usage crosses some line in the sand that only the sales people know.
I agree about R2, but KV is unreliable. I said DNS but I meant CDN, which R2 kind of falls under. Cloudflare is good at moving lots of data, but most of their other products are not polished. That doesn't mean they aren't exceptional products: I deployed a wasm worker 5 years ago and it is still up and running to this day. I don't think a server would have survived, or that any other product from any other provider would have guaranteed such backward compatibility.
I use it with a separate Docker Compose project so everything lives inside that (with Traefik), and it's been utterly bulletproof for years. It took a little puzzling out to start with, but otherwise no drama, and it lets me do foo-whatever.mydomain.co.uk and route publicly, which is fantastic for local dev stuff, or where I want to test something on iPhone/Android easily or share it. It keeps all that stuff out of my "stack" for dev projects, which makes for a very fast spinup when I want to test something.
Although Oxy is a closed, internal project, it seems they released part of it under a BSD license. Not the networking part, but a Rust library for creating "production-grade systems": https://github.com/cloudflare/foundations
A proprietary project. I was surprised to realize how little interest I have in these things anymore. I mean genuinely surprised. I suppose I have just seen so many large-corporation-does-something-in-isolation projects that I make two possibly wrong assumptions:
1) It will never work
2) The article is just advertising. Jobs, products whatever.
There is a third conclusion which is worrisome. That the leadership of the organization just doesn't get it.
I'm not advocating these as correct, just wondering if other readers share my instantaneous reaction of been-there, seen-that, know-how-it-ends.
Outside of HPC/HFT most people will never need kernel bypass. If you just got off Nginx you probably have years of optimizations left to do. (Username checks out though.)
People could also simply have forgotten about it. I bet I've seen Oxy in some Cloudflare post from years ago (maybe even from a launch week or something), but it never resonated.
But I might have encountered this problem or am about to, and such a post might resonate more.
It is like advertising in a way. But for knowledge. As long as people upvote it, it's resonating.
Isn't this the point of upvoting, though? If people find it interesting and new, they will upvote and the stuff will be visible.
I also think HN does some sort of deduplication if something has been posted recently (to count as upvote instead of new submission), but not sure of the details.
It's also the point of commenting. I think they were hoping for a more specific explanation along the lines of "I'm interested in it because it has X, Y, Z implications" or "Oxy continues to be important because ____ and here's the best comprehensive intro to it."
https://blog.cloudflare.com/introducing-oxy/#relation-to

> Although Pingora, another proxy server developed by us in Rust, shares some similarities with Oxy, it was intentionally designed as a separate proxy server with a different objective.

Seems fine to me?
Oxy actually means sharp or acidic in Greek. Oxygen was misnamed that way ("acid former") because it was thought to be the element that gives acids their sourness, but later many acids without oxygen were discovered. The key turned out to be hydrogen, not oxygen.
I meant that I have no interest in knowing anything about any company's internal tech stack, and also no interest in tying my application to one company's internal stack. Much of it sounded like lock-in to me.
Yup, here I am on the other side of the world, and that was the first thing it reminded me of. The link to Rust is... remote, and I have to think a lot :D
I know it because of movies and books... so can we trust a "next generation proxy framework" from people who don't go out, don't read, and don't watch cultural things? The name is similar in other languages too.
The implication of being too nerdy would be that they are extremely well-versed in fantasy, science fiction, and/or anime, as well as random niche topics. They probably read or watch way more cultural things than you or me, just the kind that deals with current societal issues through allegory, and thus wouldn't use real-world street names for drugs.
Not that I think that's a fair conclusion to jump to. Occam's razor would prefer "they were probably vaguely aware and didn't care". Just like how Torvalds knowingly named Git after a slang word for a stupid person.
He wouldn't disclose any details to me, but from his point of view S3 was best in class.
In my experience even backblaze b2 performs (way) better.
Their community forums are full of such reports.
KV is so expensive that it’s barely usable, and like R2, is very slow.
Agree with the KV point, Upstash is the same. But I just use dragonflydb on a single VM. No point paying for transactions.
Hell, S3 could have 20ms latency and it wouldn't matter since I can't afford it.
So, what's the threshold for what should be shared, given that most people don't know most things...?
Isn’t that the whole benefit of sites like HN and Reddit?