The article posts a table of latency distributions, but the latencies are simulated based on the assumption that latencies are lognormal. I would be interested to read the article comparing the simulation to actual measurements.
The assumption that latencies are lognormal is a useful approximation but not really true. In reality you will see a lot of multi-modality (e.g. cache hits vs misses, internal timeouts). Requests for the same key can have correlated latency.
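For what it's worth, a rough sketch of that point (all numbers made up for illustration, not S3 measurements): a hypothetical 90/9.5/0.5 mix of cache hits, misses, and internal timeouts has a very different tail than the single lognormal you would fit to the same data.

    # Made-up mixture: cache hits vs misses vs internal timeouts (not S3 data).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    hits = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n)     # ~8 ms
    misses = rng.lognormal(mean=np.log(60.0), sigma=0.5, size=n)  # ~60 ms
    timeouts = np.full(n, 1000.0)                                 # 1 s internal timeout/retry
    u = rng.random(n)
    latency = np.where(u < 0.90, hits, np.where(u < 0.995, misses, timeouts))

    # A single lognormal moment-matched on log-latency, as the simulation assumes.
    fit = rng.lognormal(np.log(latency).mean(), np.log(latency).std(), size=n)

    for name, x in [("mixture", latency), ("single lognormal", fit)]:
        p50, p99, p999 = np.percentile(x, [50, 99, 99.9])
        print(f"{name:17s} p50={p50:6.1f}  p99={p99:6.1f}  p99.9={p999:6.1f} (ms)")

The single lognormal badly underestimates the p99.9 here, which is exactly the kind of gap a comparison against real measurements would surface.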
What I’ve always been curious about is whether you can help the S3 query optimizer* in any way to use specialized optimizations. For example, if you indicate the data is immutable[1], does the lack of a write path allow further optimization under the hood? Replicas could in theory serve requests without coordination.
*I’m using “query optimizer” rather broadly here. I know S3 isn’t a DBMS.
[1] https://aws.amazon.com/blogs/storage/protecting-data-with-am...
> Roughly speaking, the latency of systems like object storage tend to have a lognormal distribution
I would dig into that. This might (or might not) be something you can do something about more directly.
That's not really an "organic" pattern, so I'd guess some retry/routing/robustness mechanism is not working the way it should. And, it might (or might not) be one you have control over and can fix.
To dig in, I might look at what's going on at the packet/ack level.
I don't know what you mean by the word "organic", but I think lognormal distributions are very common and intuitive: whenever the true generative mechanism is “lots of tiny, independent percentage effects piling up,” you’ll see a lognormal pattern.
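A tiny sketch of that intuition (illustrative numbers only): multiply a base latency by many small independent factors, and the log of the result is a sum of small terms, so it comes out roughly normal, which makes the latency itself roughly lognormal.

    # Many small independent multiplicative effects -> roughly lognormal.
    import numpy as np

    rng = np.random.default_rng(1)
    n_requests, n_effects = 200_000, 40
    base_ms = 10.0

    # Each effect nudges latency by a few percent (queueing, GC, cache state, ...).
    factors = rng.normal(loc=1.0, scale=0.05, size=(n_requests, n_effects))
    latency = base_ms * np.prod(factors, axis=1)

    def skew(x):
        return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

    print("skew of latency:     ", skew(latency))          # clearly right-skewed
    print("skew of log(latency):", skew(np.log(latency)))  # close to 0, i.e. ~normal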
The hedging strategies all seem to assume that latency for an object is an independent variable.
However, I would assume there is dependency?
E.g., if a node holding a copy of the object is down and traffic needs to be re-routed to a slower node, then no matter how many requests I send, the latency will still be high?
(I am genuinely curious whether this is the case.)
It’s not addressed directly, but I do think the article implies you hope your request latencies are not correlated. It provides a strategy for helping to achieve that:
> Try different endpoints. Depending on your setup, you may be able to hit different servers serving the same data. The less infrastructure they share with each other, the more likely it is that their latency won’t correlate.
So while you could get unlucky and be routed to the same bad node / bad rack, the reality is that it is quite unlikely.
And while the testing here is simulated, this is a technique that is used with success.
Source: working on these sorts of systems.
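To make the correlation point concrete, here is a small simulation in the same spirit as the article's (numbers invented): a hedge fired after a fixed delay takes the earlier of the two completions, and it only buys you much at the tail when the second request's latency isn't just a copy of the first.

    # Hedged request: fire a second request after a delay, take the earlier reply.
    # Compare independent vs fully correlated backend latency. Numbers are made up.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500_000
    hedge_after_ms = 50.0   # roughly "hedge at ~p95"

    def draw(size):
        # Mostly fast, occasionally slow (e.g. a degraded node or cache miss).
        fast = rng.lognormal(np.log(20.0), 0.3, size)
        slow = rng.lognormal(np.log(300.0), 0.3, size)
        return np.where(rng.random(size) < 0.97, fast, slow)

    first = draw(n)
    hedged_independent = np.minimum(first, hedge_after_ms + draw(n))  # fresh draw
    hedged_correlated = np.minimum(first, hedge_after_ms + first)     # same slow path

    for name, x in [("no hedge", first),
                    ("hedge, independent", hedged_independent),
                    ("hedge, correlated", hedged_correlated)]:
        print(f"{name:20s} p99 = {np.percentile(x, 99):6.1f} ms")

In the fully correlated case the hedge does essentially nothing, which is why the article's advice boils down to making the two requests share as little infrastructure as possible.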
Yeah, engineering high-scale distributed data systems on top of the cloud providers is a very weird thing at times.
But the reality is that as large enterprises move to the cloud but still need lots of different data systems, it is really hard not to play the cloud game. Buying bare metal plus Direct Connect to AWS seems like a reasonable solution... but it will add years to your timeline when selling to any large company.
So instead, you work within the constraints the CSPs impose, and in AWS that means guaranteeing durability across zones, and at scale that means either huge cross-AZ network costs or offloading it to S3.
You would think this massive cloud would remove constraints, and in some ways that is true, but in others you are even more constrained, because you don't directly own any of it and are at the whims of the unit costs of 30 AWS teams.
But it is also kind of fun.
If cross-AZ bandwidth were more reasonably priced, it would enable a lot of design options, like running something like MinIO on nothing but directly connected NVMe instance store volumes.
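Rough back-of-the-envelope of why that bites (the per-GB rate below is a placeholder; plug in current AWS pricing): replicating a write-heavy workload across AZs yourself turns into a per-GB bandwidth bill very quickly, which is exactly what offloading durability to S3 makes go away.

    # Placeholder rates; check current AWS pricing before trusting any number here.
    ingest_tb_per_day = 50          # hypothetical write volume
    extra_az_replicas = 2           # e.g. 3-way replication across 3 AZs
    cross_az_usd_per_gb = 0.02      # assumed: ~$0.01/GB billed on each side
    days_per_month = 30

    gb_per_month = ingest_tb_per_day * 1024 * days_per_month
    monthly_cost = gb_per_month * extra_az_replicas * cross_az_usd_per_gb
    print(f"cross-AZ replication traffic: ~${monthly_cost:,.0f}/month")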
The very first sentence of this article contains an error:
> Over the past 19 years (S3 was launched on March 14th 2006, as the first public AWS service), object storage has become the gold standard for storing large amounts of data in the cloud.
While it’s true that S3 is the gold standard, it was not the first AWS service, which was in fact SQS in 2004.
This is the source Wikipedia uses: https://web.archive.org/web/20041217191947/http://aws.typepa...