Hard-disk maintenance in next-gen consoles

I wonder if MS/Sony include any hard-disk maintenance software in the Xb360/PS3 that runs in the background. Even with 20 GB (or 60 GB for the PS3), the HD can get fragmented very easily. I already see some dropped frames in video playback on the Xb360, and the reason for that could well be a fragmented hard disk. In addition, I wonder if they have some sort of protection mechanism for hard-disk sectors that fail (like tagging them so that they won't be used in the future).

I guess both features are important, as the first improves read/write times and the second avoids problems when a game patch is written to, or read from, a faulty part of the hard disk.
 
I remember reading that for certain file systems, such as the ones Linux uses, fragmentation can actually be good for access times and such. Perhaps the PS3 will use something like that.
 
I've heard something about connecting the Wii to an external hard drive via USB for extra storage space. I was wondering how they would make it fit with their API.
 
There's a way to defrag the X360 HDD manually with a certain combination of button presses, though I don't know it offhand.

There have been lots of issues with hi-def Xbox 360 videos from Live stuttering, and I still don't know the definitive answer. Some say it's hard-drive fragmentation, some say the spring update messed video playback up, some say it's certain videos. I do know that downloading in the background while watching a hi-def movie can cause stuttering for me. That's usually the easiest fix: don't download and watch at the same time.
 
I wonder if MS/Sony include any hard-disk maintenance software in the Xb360/PS3 that runs in the background. Even with 20 GB (or 60 GB for the PS3), the HD can get fragmented very easily. I already see some dropped frames in video playback on the Xb360, and the reason for that could well be a fragmented hard disk. In addition, I wonder if they have some sort of protection mechanism for hard-disk sectors that fail (like tagging them so that they won't be used in the future).

I guess both features are important, as the first improves read/write times and the second avoids problems when a game patch is written to, or read from, a faulty part of the hard disk.
Most modern file systems prevent fragmentation within files but allow gaps in between files (because of deleted files). For example, 'xxxppppppeeerrr' could be a run of sectors on the drive; a user deletes file 'e', so it ends up 'xxxpppppp...rrr' (three free sectors). A user then tries to save file 'a', which takes 7 sectors, and the file system puts the entire file at the end of the used sectors instead of putting some in the gap and the rest after 'r' ('xxxpppppp...rrraaaaaaa').
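
To make that concrete, here's a toy sketch in Python of the kind of allocation policy described above: a file only ever goes into a single contiguous run of sectors, never split across gaps. The sector-list model and the name `alloc_contiguous` are invented for illustration; this is not the 360's or PS3's actual allocator.

```python
# Toy model of the policy described above: a file is only ever stored as
# one contiguous run of sectors, never split across gaps. Illustrative
# sketch only, not a real console file system.

def alloc_contiguous(disk, name, size):
    """Place `size` sectors of file `name` into the first free run that
    can hold the whole file. `disk` is a list of sector owners, with '.'
    meaning free. Returns False if no single gap is big enough."""
    run_start, run_len = 0, 0
    for i, s in enumerate(disk):
        if s == '.':
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == size:                  # found a big-enough gap
                disk[run_start:run_start + size] = name * size
                return True
        else:
            run_len = 0
    return False                                 # free space exists only in pieces

disk = list('xxxpppppp...rrr........')           # file 'e' deleted: 3-sector gap
alloc_contiguous(disk, 'a', 7)                   # gap too small, so 'a' goes at the end
print(''.join(disk))                             # xxxpppppp...rrraaaaaaa.
```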

I remember reading that for certain file systems, such as the ones Linux uses, fragmentation can actually be good for access times and such. Perhaps the PS3 will use something like that.
I would love to hear of an instance where data fragmentation is good for access times. The closer together (and the more in order) all of the data is, the faster a standard hard drive can retrieve it. I think what you really heard is that some file systems are hard to fragment.
 
In addition, I wonder if they have some sort of protection mechanism for hard-disk sectors that fail (like tagging them so that they won't be used in the future).

Unless I'm mistaken, hard drives do this automatically on their own (the drive firmware remaps failing sectors to spares).

I am kind of curious about how they handle defrag, though. Seems like after a year or two of heavy playing it could result in a mess of an HDD -- hopefully they have file systems robust enough to avoid most fragmentation issues.
 
Most modern file systems prevent fragmentation within files but allow gaps in between files (because of deleted files). For example, 'xxxppppppeeerrr' could be a run of sectors on the drive; a user deletes file 'e', so it ends up 'xxxpppppp...rrr' (three free sectors). A user then tries to save file 'a', which takes 7 sectors, and the file system puts the entire file at the end of the used sectors instead of putting some in the gap and the rest after 'r' ('xxxpppppp...rrraaaaaaa').
That'll result in pizza-slicing though, like the old Amiga memory model. Eventually you'll end up with holes that you can't use. You'd then need to perform a defrag to consolidate data and free up space. Does any OS talk about defragging to make room available on the drive? If that's possible, you could save a bit on formatting requirements and get a bit more real use out of the available space. I'm thinking AmigaDOS got 880 KB onto a 1 MB floppy, whereas MSDOS got 720 KB (and you could push AmigaDOS to something like 950 KB per disk with a virtual drive format thingy). 15% extra room on your HDD isn't to be sniffed at.
 
There's a way to defrag the X360 HDD manually with a certain combination of button presses, though I don't know it offhand.

There have been lots of issues with hi-def Xbox 360 videos from Live stuttering, and I still don't know the definitive answer. Some say it's hard-drive fragmentation, some say the spring update messed video playback up, some say it's certain videos. I do know that downloading in the background while watching a hi-def movie can cause stuttering for me. That's usually the easiest fix: don't download and watch at the same time.

Actually, there is a way to clear the cache or reformat; I think you might be confused. Also, the FATX filesystem is highly resistant to fragmentation. This was the same big issue for the original Xbox when modders started adding big HDDs. Constant deleting and adding caused worry, but it was pretty much put to rest as no one had any problems.

I wouldn't get all worried. The cache is a separate partition of the HDD and is cleared every 5 games. Therefore you won't suffer problems playing games, short of a game not being cleared from the cache (Oblivion) and corrupting itself. That's easily fixed by loading a couple of other games or clearing the cache. The rest of the HDD, as I said, is resistant enough to fragmentation to get you well past the console's life cycle.
 
There's a way to defrag the X360 HDD manually with a certain combination of button presses, though I don't know it offhand.
You probably mean this button combination: go to System > Memory > Hard Drive, press Y, then X, X, LB, RB, X, X. As mentioned before, all it does is clear the cache.
 
That'll result in pizza-slicing though, like the old Amiga memory model. Eventually you'll end up with holes that you can't use.

Actually, that was because AmigaOS didn't understand how to use an MMU and hence couldn't implement a virtual address space. ;)
 
Actually, that was because AmigaOS didn't understand how to use an MMU and hence couldn't implement a virtual address space. ;)
Well, they didn't have an MMU when it was designed, so it's understandable they couldn't use one. But the problem's still the same, no? You end up with holes that aren't big enough for the data you want to fit in.

e.g.

xxxppppppeeerrr........ = Current state

Add dddd

xxxppppppeeerrrdddd....

delete eee

xxxpppppp...rrrdddd....

Now if you want to add 'fffffff', there's enough room on the disk but no single area large enough to take it. You'd have to defrag the drive before that file could fit. Maybe the amount of space and the general use of files doesn't break the drive up enough for it to matter on the whole, but I'd like to know how they get around that problem, if they do.
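
For what it's worth, the example above checks out in the toy sector model from earlier in the thread (again, purely an illustration): the free space adds up to 7 sectors, but the largest single gap is only 4, so a 7-sector 'fffffff' can't be placed without splitting it, or defragging first.

```python
import re

# Replaying the example above: 7 sectors free in total, but the biggest
# single gap is only 4, so a 7-sector file won't fit without a defrag.
disk = 'xxxpppppp...rrrdddd....'                 # state after deleting 'eee'
gaps = [len(m) for m in re.findall(r'\.+', disk)]
print(sum(gaps), max(gaps))                      # 7 4
print(any(g >= 7 for g in gaps))                 # False -> 'fffffff' won't fit
```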
 
I'm thinking AmigaDOS got 880 KB onto a 1 MB floppy, whereas MSDOS got 720 KB (and you could push AmigaDOS to something like 950 KB per disk with a virtual drive format thingy). 15% extra room on your HDD isn't to be sniffed at.
720 KiB was the default setting in DOS, but you could go beyond that with command line parameters, too.
 
Well, they didn't have an MMU when it was designed, so it's understandable they couldn't use one. But the problem's still the same, no? You end up with holes that aren't big enough for the data you want to fit in.

e.g.

xxxppppppeeerrr........ = Current state

Add dddd

xxxppppppeeerrrdddd....

delete eee

xxxpppppp...rrrdddd....

Now if you want to add 'fffffff', there's enough room on the disk but no single area large enough to take it. You'd have to defrag the drive before that file could fit. Maybe the amount of space and the general use of files doesn't break the drive up enough for it to matter on the whole, but I'd like to know how they get around that problem, if they do.
Using fragmentation. Why do you think the Amiga had defrag tools? :LOL:
AFAIK the OFS/FFS filesystems simply filled available blocks as they found them. Third-party filesystems like AFS/PFS and SFS use some smarter strategies to prefer fitting files into sequential blocks, but they still fragment, just less so.
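
One simple example of such a "smarter" strategy is best-fit: pick the smallest free run that still holds the whole file, so the big runs stay intact for big files. This Python sketch is only a guess at the flavour of heuristic involved; the real AFS/PFS/SFS allocators are more sophisticated.

```python
# Best-fit sketch: choose the tightest free run that fits the whole file,
# preserving large runs for large files. Illustrative only; not the real
# AFS/PFS/SFS logic.

def alloc_best_fit(disk, name, size):
    runs, i = [], 0
    while i < len(disk):                         # collect (length, start) of free runs
        if disk[i] == '.':
            j = i
            while j < len(disk) and disk[j] == '.':
                j += 1
            runs.append((j - i, i))
            i = j
        else:
            i += 1
    fits = [r for r in runs if r[0] >= size]
    if not fits:
        return False
    _, start = min(fits)                         # tightest gap that fits
    disk[start:start + size] = name * size
    return True

disk = list('xx....xxx.......')                  # gaps of 4 and 7 sectors
alloc_best_fit(disk, 'a', 3)                     # lands in the 4-gap; 7-gap stays whole
print(''.join(disk))                             # xxaaa.xxx.......
```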
 
Well, they didn't have an MMU when it was designed, so it's understandable they couldn't use one. But the problem's still the same, no? You end up with holes that aren't big enough for the data you want to fit in.

e.g.

xxxppppppeeerrr........ = Current state

Add dddd

xxxppppppeeerrrdddd....

delete eee

xxxpppppp...rrrdddd....

Now if you want to add 'fffffff', there's enough room on the disk but no single area large enough to take it. You'd have to defrag the drive before that file could fit. Maybe the amount of space and the general use of files doesn't break the drive up enough for it to matter on the whole, but I'd like to know how they get around that problem, if they do.

The Amiga model had unrestricted block sizes with granularity down to the byte level. By increasing the block size to be larger than a pointer/address offset you can reduce the problem, but you gain internal fragmentation (wasted space): e.g. in FAT32, with typical 4 KB clusters, even a 1-byte file takes 4 KB on disk.
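
The rounding-up effect is easy to show; the 4 KiB cluster size below is just the common FAT32 default mentioned above, and `on_disk_size` is a made-up helper:

```python
import math

CLUSTER = 4 * 1024                               # common FAT32 cluster size

def on_disk_size(file_bytes):
    """Internal fragmentation: usage rounds up to whole clusters."""
    return max(1, math.ceil(file_bytes / CLUSTER)) * CLUSTER

for size in (1, 4096, 4097, 100_000):
    used = on_disk_size(size)
    print(f"{size:>7} B file -> {used:>7} B on disk ({used - size} B wasted)")
# A 1 B file occupies 4096 B on disk, wasting 4095 B.
```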

Ideally you want directories spaced far apart (so you can add files) and files within a directory stored contiguously. Most modern file systems now use some B-tree-esque data structure (which seamlessly copes with the above) and buffer large chunks at the OS level before writing to disk (kind of similar to a segmented-log approach).

If you consider the PS3/360 usage scenarios, these approaches are quite sensible and don't require anything radical. If you also consider the size of the files being 'cached' from disc/network, they are typically large and don't result in the problem you describe. It's only when you continuously move/delete/create small files (which tend to see 50%+ fragmentation rates, versus under 20% for large contiguous files) that fragmentation becomes a major issue.
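
As a crude illustration of that small-file-versus-large-file point, here's a churn experiment on the same toy sector model (all sizes, counts, and the first-fit policy are arbitrary choices here, not any real file system):

```python
import random

def churn(max_size, rounds=400, sectors=512, seed=1):
    """Randomly create (first-fit) and delete files, then count how many
    separate pieces the free space ends up in. Toy model only."""
    rng = random.Random(seed)
    disk = ['.'] * sectors
    live = []                                    # (start, size) of live files
    for _ in range(rounds):
        if live and rng.random() < 0.5:          # delete a random live file
            start, size = live.pop(rng.randrange(len(live)))
            disk[start:start + size] = ['.'] * size
        else:                                    # first-fit create, if possible
            size, i = rng.randint(1, max_size), 0
            while i < sectors:
                if disk[i] != '.':
                    i += 1
                    continue
                j = i
                while j < sectors and disk[j] == '.':
                    j += 1
                if j - i >= size:
                    disk[i:i + size] = ['#'] * size
                    live.append((i, size))
                    break
                i = j
    # more distinct free runs = free space chopped into smaller pieces
    return sum(1 for i, s in enumerate(disk)
               if s == '.' and (i == 0 or disk[i - 1] != '.'))

print('free runs after small-file churn:', churn(max_size=4))
print('free runs after large-file churn:', churn(max_size=64))
```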
 