There are lies, damned lies, and lies that disks tell the operating system. Don’t believe any of them!
If you need to know it’s been persisted to non-volatile storage then you need to own the full stack of every piece of software between the OS and the actual physical memory.
Every managed flash drive is going to have layers and layers of complexity and caching and things you simply can’t easily control or really understand. Don’t trust it unless you know exactly how it works all the way down.
Unless I am mistaken, it seems like there is a glaring flaw in this scheme, which is that without fsync you cannot guarantee the previous WAL blocks have been persisted before the current one, so a power loss event could leave a hole in the log and cause erroneous recovery. I believe that SSDs reorder writes internally so even having atomic batched O_DIRECT is not a strong enough guarantee for durability. I'll admit that I could be misunderstanding something about the system that alleviates this concern.
Assuming O_DIRECT actually blocks until the SSD has acked (this isn't actually what O_DIRECT's contract says, but it is what they rely on), you have to wait until each page write acks whenever you need a persistence barrier.
My guess is the preallocation + zeroing is what got them most of the win, and the O_DIRECT is actually hurting, not helping throughput. This has been the case 100% of the time I've benchmarked such things.
If you're doing this sort of stuff for real under Linux, check out sync_file_range. It's the only non-broken and performant sync API for ext4 (note that it's broken by design for many other file systems, and the API is terribly difficult to use correctly).
If you really care, it's probably just easier to use SPDK or something. Linux has historically been pretty hostile towards DBMS implementations.
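For concreteness, a write-back barrier built on sync_file_range might look like the sketch below (the helper name is made up; note that even with all three flags this does not flush the device write cache or commit file metadata, which is part of why the API is so easy to misuse):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Sketch: push dirty pages of [off, off+len) to the device and wait.
 * This orders page write-back, but it does NOT flush the device's
 * volatile write cache and does NOT commit file metadata -- it is a
 * write-back barrier, not a full durability barrier. */
static int range_writeback_barrier(int fd, off_t off, off_t len)
{
    return sync_file_range(fd, off, len,
                           SYNC_FILE_RANGE_WAIT_BEFORE |
                           SYNC_FILE_RANGE_WRITE |
                           SYNC_FILE_RANGE_WAIT_AFTER);
}
```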
Many storage devices guarantee that all successful DMA (e.g. O_DIRECT) writes are persisted even in the event of a power loss. Obviously, this does not work on storage devices that do not offer that guarantee. It also does not work if the filesystem does not support direct I/O or requires metadata updates.
This is not a new trick. It has been used in many storage engine designs to effect durability without an fsync.
If there is a hole in the log, then the end of the log is before the hole. You do have to have checksums on log chunks (better yet, a kind of rolling hash), but you're really just talking about the number of entries that we would have liked to commit but didn't.
Yeah, this is a good point, and maybe a hole wasn't the right way to explain myself. The point is that the way a WAL is supposed to work is that the main data store always lags behind the WAL, so that if a partial operation (always idempotent) occurs on shutdown, it is replayed on startup and fixed. In the case I describe, because of the lack of fsync, it's possible for the WAL to lag the main data store, so partial operations will not be fixed on startup.
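For what it's worth, the per-record checksum idea from the comment above is easy to sketch. Everything here (record layout, names, use of zlib's crc32) is illustrative rather than anything from the article:

```c
#include <stdint.h>
#include <zlib.h>   /* crc32() */

struct wal_rec_hdr {
    uint64_t lsn;   /* log sequence number; must be the expected next one */
    uint32_t len;   /* payload length in bytes */
    uint32_t crc;   /* crc32 over lsn, len, and payload */
};

/* Recovery replays records while this returns 1; the first record that
 * fails (torn write, reordered flush, stale data) is treated as the
 * end of the log, and everything after it is discarded. */
static int wal_rec_valid(const struct wal_rec_hdr *h,
                         const unsigned char *payload, uint64_t expect_lsn)
{
    uLong c = crc32(0L, Z_NULL, 0);
    c = crc32(c, (const Bytef *)&h->lsn, sizeof h->lsn);
    c = crc32(c, (const Bytef *)&h->len, sizeof h->len);
    c = crc32(c, payload, h->len);
    return h->lsn == expect_lsn && (uint32_t)c == h->crc;
}
```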
Thanks for pointing out the mistake. We should make it clearer: when you fsync an open file descriptor, it only syncs that file's own data and metadata. To make a newly created file truly persistent, we need to issue another fsync on the directory fd, which makes it more expensive.
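A minimal sketch of the full dance for a newly created file, assuming nothing about the engine itself; both fsyncs are needed, one for the file's data and metadata and one for the parent directory's entry:

```c
#include <fcntl.h>
#include <unistd.h>

static int create_durably(const char *dir, const char *path,
                          const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0) return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    close(fd);

    /* The dirent lives in the parent directory; sync that too. */
    int dfd = open(dir, O_RDONLY | O_DIRECTORY);
    if (dfd < 0) return -1;
    int rc = fsync(dfd);
    close(dfd);
    return rc;
}
```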
Even with O_DIRECT and aligned blocks, I still don't understand how the storage engine can return a "successful commit" to the client without a sync at some point, because a sync (IIRC) is the only way to guarantee an ATA/NVMe FUA command is sent, and the device write cache/buffer is committed.
:-/ it’s a statistical guarantee in the first place. A successful commit in a durable storage engine just needs to achieve some finite level of durability, like “10^-7 probability of loss per year”. The durability is a property of the whole system, and it is possible to achieve durability without fsync, you just may have a hard time explaining what the durability is, how you calculated it, and what the evidence or justifications are for the numbers you give.
Even if you just look at hardware failure rates, you get unrecoverable I/O errors (data corruption) at about one in 10^15 bits, disk failures at a rate of about 1% per year, etc. People usually like to have better guarantees than those numbers give you with just a plain fsync anyway; so you are probably forced to do an analysis of the whole system if you want to provide good durability guarantees and be able to explain where the guarantees come from.
I used to say this as well, but the industry has for a long time now equated “durable” with “stored on disk”. Any DBA will assume that’s what it means, and will use that fact when they work out the replication they need, whether in clustering or in RAID.
If you’re building a data storage system and are using the term “durable” to mean “it’s in RAM on three virtual machines”, for example, I don’t think it’s unfair to say that you are lying to your customers, because you are intentionally misusing a well-established term.
10^-7 (loss/record) * 10^8 (record/year) yields 10 data losses per year. If you're even a medium sized business you need a much better than 10^-7 probability of losses.
That's only true if your typical loss event loses one record. If you have a one in a million chance of an array failure taking out 10% of your production database, and otherwise have zero possibility of data loss, you also get 10^-7 losses per record.
And I wouldn't assume they meant that number to be per record in the first place.
I don't think anyone in history has ever achieved a true 10^-7 annual probability of any data loss incident. So they must have been making some kind of per record or per operation claim.
Yes, as we mentioned in the post, it is targeted at virtualized NVMe disks, and we don't have control over actually issuing the FUA command. We are also changing to opening data files with O_DATA_SYNC to make them work in normal on-prem deployment environments.
Even then, I also share the confusion of the poster you're replying to.
I don't see how a virtualised NVMe disk is different from a physical one.
Especially if you don't have control over the underlying hardware (so you don't know if it has power-loss-protection (PLP) SSDs), you should send the FUA.
> O_DATA_SYNC
You mean `O_DSYNC`?
Why would you need `O_DSYNC` on-premise, but not on cloud VMs? (Or are you saying you'd include it everywhere?) Similar to my above point, surely it is the task of the VM to pass through any FUA commands the VM guest issues to the actual storage?
Further: Is `O_DSYNC` actually substantially different from writing and then `fdatasync()`ing yourself?
My understanding is that no, it's the same. In particular, the same amount of data gets written. So if you believe you can avoid the "can trigger an order of magnitude more I/O" cost by dropping `fdatasync()`, you would just re-introduce it with `O_DSYNC`.
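Concretely, the two paths being compared would look roughly like this (a sketch; per POSIX, a write on an `O_DSYNC` descriptor behaves as if each write were followed by `fdatasync()`, so the difference should be one syscall per barrier, not the amount of I/O):

```c
#include <fcntl.h>
#include <unistd.h>

/* Path A: synchronous writes via O_DSYNC -- each pwrite() returns only
 * after the data (and any metadata needed to read it back) is stable. */
static ssize_t write_dsync(const char *path, const void *buf,
                           size_t len, off_t off)
{
    int fd = open(path, O_WRONLY | O_DSYNC);
    if (fd < 0) return -1;
    ssize_t n = pwrite(fd, buf, len, off);
    close(fd);
    return n;
}

/* Path B: plain write followed by an explicit fdatasync() barrier.
 * Same data written; one extra syscall per barrier. */
static ssize_t write_then_fdatasync(const char *path, const void *buf,
                                    size_t len, off_t off)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) return -1;
    ssize_t n = pwrite(fd, buf, len, off);
    if (n >= 0 && fdatasync(fd) != 0) n = -1;
    close(fd);
    return n;
}
```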
However, I suspect that that whole consideration is pointless:
The only thing that makes your O_DIRECT + preallocated-only-overwrites writes safe is enterprise SSDs with Power Loss Protection (PLP), usually capacitors.
On those SSDs, NVMe Flush/FUA are no-ops [1]. So you might as well `fdatasync()`/`O_DSYNC`, always. This is simpler, and also better because you do not need to assume/hope that your underlying SSDs have PLP: Doing the safe thing is fast on PLP [2], and safe on non-PLP.
So the only remaining benefit of `O_DSYNC` over `fdatasync()` is that you save a syscall. That's an OK optimisation given they are equivalent, but it would surprise me if it had any noticeable impact at the latencies you are reporting ("413 us"), because [2] reports the difference being 6 us.
Let me know if I got anything wrong.
The only remaining question is: why do you then see any difference in your benchmark? That is what I'd find very valuable to investigate. The first suspicion I have is: shouldn't you be measuring `+ fdatasync` instead? Also, I don't fully understand what the remaining difference between "ext4 + O_DIRECT + O_DSYNC" and "Our engine + O_DSYNC" would be.

[1] https://news.ycombinator.com/item?id=42805425
To truly guarantee things you probably also would need an uncached read afterwards (to verify the data comes back properly from the device). Now that would kill any sort of performance, of course.
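For illustration, such a read-back check might look like the hypothetical helper below; as the reply points out, though, the data may still be served from the drive's volatile cache, so it proves less than it appears to:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Re-read a just-written block through an O_DIRECT fd (bypassing the
 * page cache) and compare it to what we wrote. 'len' and 'off' must
 * satisfy the device's O_DIRECT alignment rules. */
static int verify_block(int direct_fd, const void *expect,
                        size_t len, off_t off)
{
    void *rb = NULL;
    if (posix_memalign(&rb, 4096, len) != 0) return -1;
    ssize_t n = pread(direct_fd, rb, len, off);
    int ok = (n == (ssize_t)len) && memcmp(rb, expect, len) == 0;
    free(rb);
    return ok ? 0 : -1;
}
```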
There is no such thing as a guarantee in life, there are only probabilities. The goal is to make it sufficiently unlikely that data is lost, and to balance that against the cost of doing so.
That is where the disparity lies here. Reading back the data after the device reports that it has been written offers little in the way of additional assurances that it's successfully written. But if you report successful writes without syncing, there is a near certainty that you'll lose data on every power loss.
This seems sketchy. O_DIRECT skips the operating system's page cache; it does not guarantee that the SSD driver sent the data to the SSD or issued a flush to the drive itself. The data could still be in the driver's memory, or in non-durable memory in the drive itself, when this engine says "ok, we're good".
EDIT: sketchy from an answering "what exactly are the guarantees?" perspective
The model here is that the storage device is directly reading and writing the userspace buffer via DMA. It is one of the reasons use of O_DIRECT creates additional constraints on buffer alignment and size.
Some storage devices guarantee durability of non-persisted writes, which is explicitly part of their model. Consequently, the entire durable write path is the storage device completing a DMA read of their buffer.
The underlying assumptions will not hold true for every environment. However, it will hold true for many and you can check most (all?) of them at runtime.
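As a sketch of what those constraints and runtime checks amount to (the 4 KiB alignment and the sysfs path are assumptions about a typical modern Linux/NVMe setup, not details from the article):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum { ALIGN = 4096 };   /* typical O_DIRECT alignment requirement */

/* O_DIRECT buffers (and offsets/lengths) must be aligned so the
 * device can DMA straight from userspace memory. */
static void *alloc_dma_buffer(size_t len)
{
    void *p = NULL;
    return posix_memalign(&p, ALIGN, len) == 0 ? p : NULL;
}

/* Returns 1 if the kernel reports the device as "write through",
 * i.e. completed writes are expected to already be durable
 * (e.g. /sys/block/nvme0n1/queue/write_cache on recent kernels). */
static int write_cache_is_write_through(const char *dev)
{
    char path[256], mode[32] = "";
    snprintf(path, sizeof path, "/sys/block/%s/queue/write_cache", dev);
    FILE *f = fopen(path, "r");
    if (!f) return 0;
    if (!fgets(mode, sizeof mode, f)) mode[0] = '\0';
    fclose(f);
    return strncmp(mode, "write through", 13) == 0;
}
```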
Yes, that has also been pointed out in other threads. These can be very important settings; even some common Linux file systems don't actually do that every time, and we had to disable the disk write cache during boot to make sure the data was truly persistent (as at my previous storage company).
So instead of saying "We removed fsync" you should say: "We redesigned the database write path to avoid paying the full fsync durability cost on every write"
Author here. This is not a general argument against fsync; the design depends on SSD-only deployment, preallocated files, O_DIRECT, single-key atomicity, and device write guarantees.
Your approach looks interesting, but I was curious: when you talk about path-based splitting for ART, do you literally mean always on "/"? I know S3 directory buckets always use /, but the classical S3 model had no natural separator character, and I was wondering whether supporting those styles of prefix or custom-delimiter queries suffered any impediment in your approach.
Bookmarked your whole blog for later consumption, interesting stuff!
Thanks for the encouragement! Another author here. Yes, if you are interested you can check out another blog of ours [1] on the internal storage engine. Yes, we are limiting the delimiter to "/" to better support POSIX FS semantics. I have just finished the fs feature branch, which has passed all the POSIX fstests [2].

[1] https://fractalbits.com/blog/metadata-engine-for-our-object-...

[2] https://github.com/pjd/pjdfstest
To step back a bit, the device still has a filesystem on it, and the structures described here are files within the filesystem? Just you're able to write directly into them, bypassing the filesystem layer, because you've constrained yourself to writes that don't require updating other parts of the filesystem structure?
Yes, that's right. We could go even further and use raw devices without relying on any filesystem. We would then need to allocate/format raw disk space ourselves, and we could not just open files as simply as we do right now. It would take some extra effort, but we would like to explore that in the future.
It would also make system initialization faster, since right now we need to write all zeros to make ext4/xfs actually initialize extents as "allocated".
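The init-time zeroing being described might look roughly like this (a sketch; `fallocate()` reserves extents but ext4/XFS mark them "unwritten", and converting them on first write is a metadata update, so writing real zeros once up front leaves only pure overwrites on the hot path):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* 'size' is assumed to be a multiple of CHUNK for brevity. */
static int prealloc_zeroed(int fd, off_t size)
{
    enum { CHUNK = 1 << 20 };   /* 1 MiB zeroing buffer */
    if (fallocate(fd, 0, 0, size) != 0) return -1;

    void *zeros = NULL;
    if (posix_memalign(&zeros, 4096, CHUNK) != 0) return -1;
    memset(zeros, 0, CHUNK);

    /* Overwrite the "unwritten" extents with real zeros so later
     * O_DIRECT overwrites need no extent-state metadata updates. */
    for (off_t off = 0; off < size; off += CHUNK)
        if (pwrite(fd, zeros, CHUNK, off) != CHUNK) { free(zeros); return -1; }

    free(zeros);
    return fsync(fd);   /* one sync at init time; the hot path has none */
}
```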
This design ACKs writes that aren't yet durably persisted (to the journal or data areas). That might be ok, but it might not. It's certainly unusual not to at least persist the journal update.
I wonder why this is not more common. LVM is easy to set up, and it's already common to allocate volumes for things like disk images for VMs, so why not databases?
Some Linux filesystems, notably ext4 and XFS, provide the necessary features to get 90% of the benefit simply by using O_DIRECT correctly. The last 10% is achieved by doing direct I/O to raw block devices, with the obvious caveat that this is not as easy to manage.
Both of these are commonly done in database storage engines.
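Opening a raw block device (an LVM logical volume included) is itself trivial; the cost is doing all space management yourself. A sketch, with an illustrative device path:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/fs.h>    /* BLKGETSIZE64 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* e.g. open_raw_volume("/dev/vg0/dbvol", &size) for an LVM volume;
 * from here the engine lays out its own structures in the volume. */
static int open_raw_volume(const char *dev, uint64_t *bytes)
{
    int fd = open(dev, O_RDWR | O_DIRECT);
    if (fd < 0) return -1;
    if (ioctl(fd, BLKGETSIZE64, bytes) != 0) { close(fd); return -1; }
    return fd;
}
```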
Because the speed increase is, on modern, properly tuned filesystems, surprisingly small, due to how RDBMSs manage their pools: by working on large container files, they avoid most of the filesystem overhead.
Famously not, as the man page says.
It is also said later in the article:
> POSIX strictly requires a parent-directory fsync to make a newly created file’s existence durable.
So I'm not sure why the dirent sync is claimed earlier.
Would you be so kind to explain what happens in a power-loss scenario?