Alternative distribution to optical disks: SSD, cards, and download

Amiga games at the time were distributed on floppies, so the jump in storage space from 880KB to 600MB REALLY felt like infinite storage. Today, going from 8.5GB to 50GB doesn't seem like such a big deal :LOL:

But I'm pretty sure there's a correlation to be made between available RAM and ideal game size. There has to be a balance somewhere.
 
Yep UAKM and TPD (i.e. the Tex Murphy games) are absolutely great.

EDIT: Yeah... they went from 880KB to 600MB... but Amiga games VERY often shipped on several discs (SoMI had 4, MI2 had 11, IJ3 had 3, IJ4 had... 12 iirc). Still a big jump, at times comparable to 6.8 to 50GB (remember, most 360 games didn't have access to the full disc). And PS2 to PS3 jumped about the same on average (a bit more, since only a small subset of games used dual layers then, but the same is true now, too).

Thing is (at least imho), as long as the RAM doesn't get bigger by the same factor, there's no real need for those huge discs. At least if we assume that we don't get an infinitely fast optical drive. For the PS3 the capacity was useful only at times (well, it was also negatively affected by the 360 in that regard), and it had to rely on the HDD, too. Hence I'd propose an adequately sized pool of slow RAM for buffering/caching/streaming data. That way the main RAM can stay much smaller (the megatexturing argument) and the drive doesn't need to sound like a vacuum cleaner, either. It would get messy if only one of the three systems provided such a setup, though...
 
Cache RAM isn't cheap. Using it only to hold redundant data seems really wasteful. Don't we already have the main RAM for that?
 
'Cache RAM' in this case doesn't mean fast SRAM on the CPU, but interim storage between the main RAM and the mechanical drives. HDDs and DVD drives already come with a few MBs of RAM as a cache as standard. For the sort of data accesses a game makes, a much larger, very low seek time cache makes a lot of sense. That cache could be flash memory like you have in SD cards (similar transfer speeds to an HDD, but much, much lower seek times) or a pool of RAM which needn't be the most expensive and fastest available but could be cheaper and slower and still be much, much faster than accessing the drives directly.
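To make that concrete, here is a minimal sketch of such an interim cache (my own illustration, not anything proposed in this thread: the FlashCache name, the 64KB block size and the LRU eviction are all assumptions):

```python
# Minimal sketch of an interim I/O cache between game code and a slow
# mechanical drive. Everything here (FlashCache, 64KB blocks, LRU eviction)
# is illustrative, not a description of any real console's design.
from collections import OrderedDict

BLOCK_SIZE = 64 * 1024  # assumed cache granularity


class FlashCache:
    def __init__(self, capacity_bytes, read_block_from_drive):
        self.capacity_blocks = capacity_bytes // BLOCK_SIZE
        self.read_block_from_drive = read_block_from_drive  # slow path: HDD/optical seek + read
        self.blocks = OrderedDict()  # block index -> bytes, kept in LRU order

    def read(self, offset, length):
        """Serve a read, pulling any missing blocks from the drive and caching them."""
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        data = bytearray()
        for idx in range(first, last + 1):
            if idx in self.blocks:
                self.blocks.move_to_end(idx)  # hit: no drive access, refresh LRU position
            else:
                self.blocks[idx] = self.read_block_from_drive(idx)  # miss: pay the seek once
                if len(self.blocks) > self.capacity_blocks:
                    self.blocks.popitem(last=False)  # evict the least recently used block
            data += self.blocks[idx]
        skip = offset - first * BLOCK_SIZE
        return bytes(data[skip:skip + length])
```

Repeated or nearby reads, which is the common case when streaming a level, then come out of the fast pool instead of paying a drive seek every time.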
 
Or you just have the main RAM for that, if main RAM is for the CPUs and fast GDDR5 is for the GPU.

Only, two pools of RAM is annoying, and the PS3 suffered for that vs the X360.
But a SoC with CPU cores, GPU cores, a DDR3 controller and a GDDR5 controller is technically possible, and then you get incredibly fast communication between CPU and GPU, which removes a lot of the chores of two distinct pools.

Alternatively, you use incredibly fast memory on a silicon interposer, mainly meant for the GPU, plus a bigger chunk of external DDR3 or GDDR5.
 
But fast RAM is MUCH more complex and more expensive. So by using slow (but big) RAM for loading data and small (but fast) RAM for processing, you could get a cheaper console and not suffer massive load times... surely it's not the only option... they could just as well put in an SSD or whatever instead of RAM, but I don't really think that's an option.
 
Or you just have the main RAM for that...
At significant cost. 2-4GB more of whatever fast main RAM you have will cost way more than 2-4GB of cheap RAM or flash that isn't fast enough for main system memory but is ideal for IO caching. Flash in particular offers persistence, so it wouldn't need to be refilled every time you load the same game. 16GB of flash could have several games cached.
 
I'd love a split RAM configuration as long as it means more RAM for the same cost, which is exactly what we have on the PC right now. I'm thinking that if the Xbox 360 had kept its original 256MB total (which they upgraded at the last minute), nobody would have whined about the PS3's 256+256 configuration (or the other way around, if the PS3 had been 512+512). Most devs said that both consoles had way too little RAM from the start anyway.

I can't find the reference anymore, but I seem to remember John Carmack saying MegaTexturing would work a lot better if there was more memory for buffering/caching, since it would hide the media latency. But how much would be enough?
 
I think it was Sebbi who explained that with MegaTexture the benefit of having a larger tile cache in RAM diminishes dramatically beyond a certain point.

IIRC, the set of texture elements that could come into view grows exponentially the further you project into the future. Trying to keep a larger tile cache filled would therefore place increasingly large demands on storage bandwidth, as you'd be transferring more and more texture elements/tiles into the cache that would never be used. Locality of the data on the HDD / optical drive would drop too, so you'd get hammered on seek times.
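A quick back-of-envelope of that effect, with entirely made-up numbers (128x128 compressed tiles of ~16KB, a 5,000-tile visible set, and a candidate set that doubles per second of look-ahead; none of these figures come from the thread):

```python
# Illustrative arithmetic only: shows how both the cache size and the refill
# bandwidth explode as you try to prefetch further into the future, while the
# set of tiles actually used stays roughly the visible working set.
TILE_BYTES = 128 * 128            # ~16 KB per compressed tile (assumed)
VISIBLE_TILES = 5_000             # tiles needed for the current view (assumed)
GROWTH_PER_SECOND = 2.0           # assumed growth of the "might be needed" set per second

for horizon in (0.5, 1, 2, 4, 8):
    candidates = VISIBLE_TILES * GROWTH_PER_SECOND ** horizon  # tiles reachable within the horizon
    cache_mb = candidates * TILE_BYTES / 2**20
    refill_mb_s = cache_mb / horizon  # pessimistic: keep the whole speculative set fresh
    print(f"look-ahead {horizon:>3}s: cache ~{cache_mb:7.0f} MB, refill ~{refill_mb_s:5.0f} MB/s")
```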

I guess if you could store all your assets for a given area in RAM (kind of like caching the full DVD on the 360 version of Rage) your problem would be solved, but flash would have ample bandwidth for that and be cheaper.
 
So the question is, what would be more useful overall: 4GB of additional "slow" RAM or a 16GB flash buffer? (I guess they'd be the same price.)
 
I think that's a good question, and I guess the answer depends on how you see software developing. Both could stream texture data in fast enough for virtual texturing, though "slow" ram might be better for more common "burst-like" streaming (loading in a new LOD level for a model, or an entire higher resolution mip-map, on demand). You could possibly run some stuff directly out of slower ram (maybe AI? environment simulation?) and free up fast ram (so maybe you could include less).

Flash might be appealing for a few cost reasons - it'd probably need fewer chips and traces, and the system would need some form of (slow) internal flash and a flash controller anyway for the firmware and BIOS/dash, so you might save a little there. If every system had an HDD, a large pool of fast flash might seem largely redundant, though.

Would be good to know what kind of technology the next Unreal engine is pimping - maybe that would give us a better idea of what to expect?
 
If both offer the same performance advantages for the IO requirements, flash seems the definite winner to me. Having to preload 4GB of RAM every time is a significant disadvantage compared to having the last-used data already on hand.
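For a sense of scale, here's the rough arithmetic behind the preload objection; the sustained read speeds are my ballpark assumptions, not figures from the thread:

```python
# Back-of-envelope for the "preload 4GB every time" objection. The sustained
# read speeds below are assumed ballpark figures for an optical drive and an HDD.
PRELOAD_GB = 4
for label, mb_per_s in (("optical, ~10 MB/s", 10),
                        ("optical, ~25 MB/s", 25),
                        ("HDD,     ~75 MB/s", 75)):
    seconds = PRELOAD_GB * 1024 / mb_per_s
    print(f"{label}: ~{seconds:.0f} s to fill 4 GB")
# A persistent flash cache skips this entirely for data already written on a
# previous session.
```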
 
For the optical drive to handle MegaTexturing without breaking a sweat, make the disc smaller and spin it faster. All Microsoft would need to do is:

1) License holographic disc tech (micro-holograms) from GE.

2) Make the disc smaller (so the spindle speed can be high without the worry of vibration). Think GameCube-sized discs or smaller.

Problem solved. It is similar to what Seagate does with their Cheetah hard drives; instead of magnetic drive technology, you apply the philosophy to an optical one.
 
3) Design, research, and manufacture the means to mass-produce the media, and research, design, and mass-produce the drive.
They'd have to do all this in very little time.
 
GE has already done the work.


All Microsoft would have to do is request a small disc around the size of a PSP disc along with a fast spindle speed, and John Carmack is happy, as well as tens of millions of game players.

Without any real information to go on, I'm guessing a PSP-sized GE micro-hologram disc could hold about 200GB and have a transfer rate of over 250MB per second.
 
Mechanical seek times are always going to be diabolically poor compared to memory access times. A holographic disc may mean the head doesn't need to move so far in most cases, but there will still be the mechanical limits of having to accelerate and decelerate a mass in the optical head.
 
These are the slides from the IEEE conference on optical media technology that I posted before.
http://www.slideshare.net/rgzech/the-future-of-optical-storage-x-rg-zech-slide-share

Slide #3:
Future optical storage will probably be modeled on Blu-ray disc, whose basic design is robust and extensible. Older concepts, such as 3D holographic memories, Millipede, etc., will never be commercially viable.
Holo disks have been in development for 48 years and have been promised to come out real-soon-now the whole time. When DVD came out, we were bombarded by claims that DVD was already obsolete because holo disks were just around the corner; it never happened. 10 years later, Blu-ray comes out, and it's supposedly obsolete because holo is just around the corner; didn't happen. BDXL just came out? Holo is just around the corner.

Low cost, low risk, near-term evolution of the Blu-ray format is on slide #15: NFR+MLD+MLR = 1TB disks.
Also, because the linear amount of data is increased by 5 times, the data rate is also increased by 5 times.
Slide #23 shows the "eye" of the signal we can get today; it means 2.5-bit multi-level is already possible with 6 layers, with only a little more electronics and firmware. i-MLSE already gives us another 33% increase of both data rate and density (from BDXL). That's 3.3 times the data rate and 3.3 times the capacity per layer: 500GB at 6 layers with 2.5-bit ML and i-MLSE. At 12x speed that's 178MB/s. They can add near field to double that, so at 12x speed it goes up to 350MB/s and 2TB disks at 6 layers. 12 layers can double capacity again to 4TB.
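For what it's worth, those figures do fall out of the standard Blu-ray baseline (25GB per layer, 4.5MB/s at 1x) if the 2.5x multi-level and ~1.33x i-MLSE gains are assumed to compound:

```python
# Sanity check of the slide figures against the Blu-ray baseline:
# 25 GB per layer and 4.5 MB/s at 1x (36 Mbit/s).
BASE_LAYER_GB = 25
BASE_1X_MB_S = 4.5
ML_GAIN = 2.5        # 2.5-bit multi-level recording
IMLSE_GAIN = 1.33    # ~33% from i-MLSE signal processing

per_layer = BASE_LAYER_GB * ML_GAIN * IMLSE_GAIN               # ~83 GB per layer
rate_12x = BASE_1X_MB_S * 12 * ML_GAIN * IMLSE_GAIN            # ~180 MB/s
print(f"6 layers: ~{per_layer * 6:.0f} GB")                    # ~499 GB -> the "500GB" figure
print(f"12x data rate: ~{rate_12x:.0f} MB/s")                  # close to the "178MB/s" figure
print(f"with near field doubling: ~{rate_12x * 2:.0f} MB/s")   # close to the "350MB/s" figure
```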

Why would anyone invest in the holo disk pipe dream?

BTW, I think it's interesting that the slides hint that multi-layer isn't what they should invest in, because the OTHER technologies they have planned are what genuinely drive the cost per GB down, exponentially. Adding layers increases capacity, but it does nothing for either cost per GB or data rate. I'm beginning to doubt those 16-layer prototypes will ever be commercialized.
 
Because I want some removable media for listening to music in my nuclear-powered flying car.

How about latency? It seems it won't be improved by the bold three-letter acronyms. I wonder if holo can bring anything on that front, but probably not either.
 
As far as I know, there's no way to improve latency on spinning disks; we're stuck with sucky latency. :cry:
The problem is eerily similar to DRAM, whose bandwidth keeps growing while latency hasn't improved enough, which is why we need L3 cache to cope with it. We'll always need tiered storage because of the compromise between cost-per-GB and performance: an intermediary "fast and small" tier to access the "large and slow" one. Loading time is the biggest issue with consoles, and it's caused by lack of bandwidth, not latency. Good data layout also partly solves the latency issue; we know in advance what will have to be loaded, and in what order.
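As a toy sketch of that layout idea (all names and sizes here are hypothetical): pack assets on the disc in the order a profiling run first touched them, so a level load becomes one long sequential read rather than a series of seeks.

```python
# Hypothetical illustration of laying out assets in first-use order so the
# drive streams sequentially instead of seeking all over the disc.
def plan_layout(asset_sizes, recorded_first_use):
    """asset_sizes: {name: size_bytes}; recorded_first_use: names in the order a
    profiling run first touched them. Returns {name: disc_offset}."""
    ordered = [n for n in recorded_first_use if n in asset_sizes]
    ordered += [n for n in asset_sizes if n not in ordered]  # never-touched assets go last
    layout, offset = {}, 0
    for name in ordered:
        layout[name] = offset
        offset += asset_sizes[name]
    return layout

print(plan_layout(
    {"menu.ui": 2 << 20, "level1.geo": 40 << 20, "level1_tex.vt": 200 << 20},
    ["menu.ui", "level1.geo", "level1_tex.vt"],
))
```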

Bandwidth, Latency, Cost... pick any two. If a technology emerges with all three, we have a revolution.

(nuclear? I want MrFusion!)
 