Synthetic Benchmarks
Synthetic benchmarks are a good way to capture the headline numbers, provided they are measured correctly; how well that translates into reality we will see later in the real application benchmarks. I therefore start with CrystalDiskMark and four different test file sizes. The SSD was no longer brand new at the time of the test, and it had also been filled to more than 50% several times before the data was deleted again. The drive never quite reached the maximum values specified in the data sheet when writing large blocks, but the values measured here are quite consistent and still correspond to the state of the freshly installed, still virgin product.
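If you want to reproduce the gist of such a sequential run yourself, a minimal Python sketch is enough. Note that the file path, test size, and block size below are my own assumptions rather than CrystalDiskMark's actual internals, and that without bypassing the OS page cache the read pass will come out unrealistically fast:

```python
import os
import time

TEST_FILE = "seqtest.bin"   # hypothetical path on the SSD under test
BLOCK = 1024 * 1024         # 1 MiB blocks, in the spirit of SEQ1M
TOTAL = 1024 * BLOCK        # 1 GiB total test size

def seq_write() -> float:
    """Write TOTAL bytes sequentially, return throughput in MB/s."""
    buf = os.urandom(BLOCK)                 # incompressible data
    t0 = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())                # force the data onto the drive
    return TOTAL / (time.perf_counter() - t0) / 1e6

def seq_read() -> float:
    """Read the file back sequentially, return throughput in MB/s.
    Caution: without O_DIRECT this mostly hits the OS page cache."""
    t0 = time.perf_counter()
    with open(TEST_FILE, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return TOTAL / (time.perf_counter() - t0) / 1e6

if __name__ == "__main__":
    print(f"write: {seq_write():.0f} MB/s, read: {seq_read():.0f} MB/s")
    os.remove(TEST_FILE)
```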
You can see very well that the dynamic pSLC cache does exactly what it is supposed to, mind you with an empty (though no longer virgin) SSD. The catch, even with a 2 TB SSD, is that plenty of space has to stay free: it is better never to fill the drive more than two thirds to three quarters with data, and you have to factor that in when planning ahead. A higher fill level does not affect reading, but when writing, the dynamic SLC cache will definitely reach its limits at some point. And if you push it there over and over again, switching the memory cells between the two modes will eventually no longer be possible.
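To actually watch the cache run out, a continuous write with per-gigabyte logging is the simplest approach. This is only a sketch under assumptions: the file name is hypothetical, and on a largely empty 2 TB drive you may have to write a few hundred gigabytes before the throughput steps down from pSLC to native TLC speed:

```python
import os
import time

TARGET = "fill.bin"          # hypothetical file on the SSD under test
BLOCK = 4 * 1024 * 1024      # 4 MiB write blocks
GIB = 1024 ** 3
TOTAL_GIB = 64               # raise this until the throughput step shows

buf = os.urandom(BLOCK)
with open(TARGET, "wb", buffering=0) as f:
    for gib in range(TOTAL_GIB):
        t0 = time.perf_counter()
        for _ in range(GIB // BLOCK):
            f.write(buf)
        os.fsync(f.fileno())             # flush each gigabyte to the drive
        mbps = GIB / (time.perf_counter() - t0) / 1e6
        print(f"GiB {gib + 1:3d}: {mbps:7.0f} MB/s")
os.remove(TARGET)
```

Once the printed rate drops sharply and stays low, the dynamic cache is exhausted; after an idle period the controller typically folds the data into TLC and the cache recovers.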
ATTO behaves quite interestingly this time, although I only work with two test sizes here; the outcome is the same. The data does not quite match that of CrystalDiskMark, and the manufacturer's specifications are clearly undercut in real terms. Of course, it has to be said that the measured values are still completely sufficient. That is exactly what the "up to" in the spec sheet is for, and you're fine.
Video Streams
But what happens when you stream a video? The industry uses the AJA benchmark for this, which is effectively a bridge between synthetic benchmarks and practical use. The NETAC NV7000-t 2TB doesn't stumble here either, even though, just like in ATTO, it deviates a bit from the theoretical write and read rates.
We can see that the observations made on the previous page about the dynamic pSLC cache and the behavior with larger file blocks hold completely true. Smaller file transfers would indeed be even faster if the file system overhead were left out of the equation for a moment; you can see this especially toward the end of the run.
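To put the AJA numbers into perspective, it helps to estimate what an uncompressed video stream actually demands. The frame format below is an example assumption, not AJA's exact test setting:

```python
# Example frame format; an assumption, not AJA's exact test setting.
width, height = 3840, 2160   # UHD resolution
bytes_per_pixel = 4          # e.g. 10-bit RGB packed into 32 bits
fps = 60

frame_bytes = width * height * bytes_per_pixel
stream_rate = frame_bytes * fps / 1e6        # MB/s sustained
print(f"{frame_bytes / 1e6:.1f} MB per frame, {stream_rate:.0f} MB/s sustained")
# -> 33.2 MB per frame, 1991 MB/s sustained
```

At roughly 2 GB/s sustained, even uncompressed UHD at 60 fps stays comfortably below the sequential rates measured above, which is why the AJA run looks so unproblematic.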
For the curious, I also have the measurement protocol with all details as a PDF:
[Benchmark table: full measurement protocol (PDF)]