Hello Charles,
On 30.01.2025 at 16:57, Charles Wilt <charles.wilt@xxxxxxxxx> wrote:
Possibly because IBM i don't have arms in most cases anymore?
Yeah, with SSDs, the topic of defragmentation is even more moot.
No idea about the details of IBM i, but here's an interesting look into
Windows...
https://www.hanselman.com/blog/the-real-and-complete-story-does-windows-defragment-your-ssd
Please note the mud-slinging in the comments. Apparently some of the original author's allegations are not necessarily backed by hard, provable facts.
Some comments on that article:
"SSDs can only handle a finite number of writes before things start going bad. This is of course true of regular spinning rust hard drives"
=> Bullshit. There is no finite number of writes to ordinary hard disks. The magnetizable surface doesn't deteriorate, unlike the cells in solid state memory. For unknown reasons, hard disk manufacturers have nevertheless started to specify maximum write volumes per day and over a drive's lifetime for spinning disks. Maybe this picture changes with very modern "write enhanced" disks: those use masers or lasers to heat a tiny area just ahead of the write head. This so-called heat-assisted magnetic recording (HAMR) is the next step in increasing write density. Maybe it has a deteriorating effect on the magnetic layer; maybe the maser/laser generator itself is prone to wear. But reads are also counted toward the manufacturers' limits, so they probably just found a creative way to avoid some warranty claims.
"It's also somewhat of a misconception that fragmentation is not a problem on SSDs"
=> True. Even if you leave out deficiencies of the underlying file system ("hit maximum file fragmentation"), there is definitely a performance penalty. Instead of sending one command to the device and then just slurping in a series of blocks, fragmentation causes more I/O requests. Those must be generated (by the CPU), sent over the wire (overhead!), and handled by the controller on the SSD. I strongly believe that for most workloads the real impact is negligible, though. Elimination of mechanical seeks was the main reason for the speedups we've learned to love with SSDs; raw, sustained transfer rates aren't that common in practice. Most often - starting applications, loading shared libraries, loading that application's settings, and finally user data - it's about small I/Os. And small I/Os are small I/Os, no matter whether they happen intentionally as described above or because the file system has become fragmented.
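To make that point concrete, here is a toy sketch (the extent lists and block counts are made up by me, not taken from any real file system): the medium transfers the same number of blocks either way, but every non-adjacent fragment costs one extra read command that the CPU must build and the controller must handle.

```python
# Toy model: fragmentation doesn't add data, it adds I/O commands.
# An extent is (start_block, length). Hypothetical numbers for illustration.

def read_commands(extents):
    """Count read commands needed: one per contiguous run of blocks."""
    commands = 0
    prev_end = None
    for start, length in extents:
        if start != prev_end:   # not adjacent to the previous extent -> new command
            commands += 1
        prev_end = start + length
    return commands

# The same 1000-block file, stored contiguously vs. in four fragments:
contiguous = [(0, 1000)]
fragmented = [(0, 250), (400, 250), (900, 250), (1300, 250)]

print(read_commands(contiguous))   # 1 command
print(read_commands(fragmented))   # 4 commands
```

On a spinning disk each extra command also means a mechanical seek; on an SSD it's "only" protocol and controller overhead, which is why the penalty usually stays negligible.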
"The file systems metadata keeps track of fragments and can only keep track of so many. Defragmentation in cases like this is not only useful, but absolutely needed."
=> Mathias (my boss/friend with decades of experience) says this has been a long-standing myth, spread ever since by bloggers copying the allegation from one another. I guess the source of this allegation is a misinterpretation of an old TechNet article:
https://web.archive.org/web/20151114101334/http://blogs.technet.com/b/mikelag/archive/2011/02/09/how-fragmentation-on-incorrectly-formatted-ntfs-volumes-affects-exchange.aspx
Interestingly, that article was removed rather quickly, because it deliberately formatted a partition with plainly dumb parameters and then drew the wrong conclusions from it. But the damage was already done.
Note that the article is by now over a decade old and hence not necessarily applicable 1:1 today.
Sorry for the off-topic post. I felt some things needed to be clarified.
:wq! PoC