Files are often stored on the disk in pieces (fragments). The fragment size depends on the cluster size of the file system (typically 4 KB in NTFS). Suppose a block on our SSD is 128 KB and we need to save a 128 KB file that is split into 32 fragments (recall, each fragment is 4 KB). In the worst case, the file would have to be stored across 32 different blocks; if it consisted of a single fragment, one block would suffice. The problem with writing such a fragmented file: the controller has to find 32 free blocks and, in the worst case, erase them all first. Then it must write to all 32 blocks as well, and this happens every time the file is changed.
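As a rough illustration, here is a small sketch of the arithmetic above. The block and cluster sizes are the ones assumed in the text (128 KB blocks, 4 KB clusters); the function and variable names are hypothetical, not part of any real SSD firmware.

```python
BLOCK_SIZE = 128 * 1024    # assumed SSD erase-block size
CLUSTER_SIZE = 4 * 1024    # typical NTFS cluster size

def blocks_touched(cluster_addresses):
    """Return the set of SSD blocks covered by the given fragments,
    where each address is the byte offset of a 4 KB cluster."""
    return {addr // BLOCK_SIZE for addr in cluster_addresses}

file_size = 128 * 1024
n_clusters = file_size // CLUSTER_SIZE   # 32 fragments

# Worst case: every fragment lands in a different block.
scattered = [i * BLOCK_SIZE for i in range(n_clusters)]
# Best case: fragments are consecutive, starting at a block boundary.
contiguous = [i * CLUSTER_SIZE for i in range(n_clusters)]

print(len(blocks_touched(scattered)))    # 32 blocks to find, erase, rewrite
print(len(blocks_touched(contiguous)))   # 1 block
```

The gap between 32 block operations and 1 is exactly the overhead the controller pays for a badly fragmented file.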
The problem is that every file system fragments over time: files are no longer stored contiguously but end up scattered across the computer's entire storage area. This cannot be avoided, because files are subject to constant change. To prevent fragmentation entirely, you would need to know all newly arriving files, as well as every change and deletion, in advance, which is simply impossible. This problem affects not only hard disks but SSDs as well.