Predict: The Next Generation Console Tech

As things currently stand, what proportion of memory is used to store things like music, ambient sounds, sound effects and dialogue (basically any audio)? I know it'll vary by game, and that a lot of stuff can be streamed, but just a rough range of memory used by audio would be good. I'm just trying to get an idea of some of the stuff that you could put in a large but relatively slow pool of memory (in addition to a large disk cache, swap file and a pile of "always on" Xbox Live / PSN services).

A big chunk of memory on a separate board (like the 4GB 360S flash board) connected by a 32 or 16 bit bus should allow you to drop to a single chip over time. Being on a separate board could allow you to redesign it (to take advantage of cheaper / higher capacity memory) on a whim without needing to touch the mainboard design. It's maybe not a great idea (split memory pools and all that) but it could get around some issues with putting lots of memory on the mainboard.
 
By the way, streaming or some sort of background loading is a necessary element of practically any game engine nowadays, no matter how much memory you add to the console.

The main reason is general load times. Even 512MB on current consoles takes several minutes to fill, and it's not an option to have the game pause for that long after completing a level, erase most of the data from RAM, and then start loading again. This is true even for your everyday corridor shooter. The most a game can get away with is playing a prerecorded cutscene for a minute, but even those should be limited, and in general the levels should appear to be continuous.

It's just a better use of system resources. If you were to hold all the data for a level in memory, you'd be limited to 6-7GB on an 8GB system with no streaming. If you can do background loading constantly, then you could still show ~2GB of data in any given scene and use a ~1GB buffer to load the next section, so the same level could use dozens of GBs of data. Let me add that even 2GB is a LOT of data; current games use about 300-400MB at most. So even if we double the framebuffer resolution, this still gives us 3-4 times as much texture data per final pixel.
Background loading also provides a more convenient gameplay experience where you don't have to sit around and wait for the game to do its boring stuff.
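A back-of-envelope sketch of that buffering scheme (the 2 GB / 1 GB figures are just the estimates from above, not real console numbers):

```python
# Illustrative sketch of the post's arithmetic: a ~2 GB resident
# working set plus a ~1 GB background-streaming buffer. All figures
# are the estimates from this post, not measurements.
RESIDENT_GB = 2.0
BUFFER_GB = 1.0

def level_size_supported(sections: int) -> float:
    """Total unique data (GB) one continuous level can reference:
    the resident set plus one buffered section per transition."""
    return RESIDENT_GB + sections * BUFFER_GB

# A level streamed through 30 sections can touch far more data
# than would ever fit in RAM at once.
print(level_size_supported(30))  # 32.0
```

So even a modest streaming buffer multiplies the data a level can use many times over what a no-streaming design could ever hold.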

And we haven't even started to think about virtual texturing, which would give an even larger increase in detail - if the background storage system is fast enough to support it.


For example, the Uncharted games have always relied on streaming the next part of the level in the background while the player gets close to the transition zones. I don't recall any texture LOD issues with those games; the only real offenders were the first Gears of War, with a few occurrences, and the first Mass Effect. Gears 2 had practically no LOD issues, and ME2 opted for long loading times between levels (which was pretty annoying).
Open world games can't exist without streaming; I hope there's no need to prove that. RPGs and the GTA games couldn't work otherwise, as they need to constantly swap data in and out, requiring complex systems to determine which chunks to load.


So we can be sure that streaming is a necessary component of nearly any game, and so it's a better return on investment if the system is balanced for fast-access background storage instead of twice the memory and slow loading. It would also allow virtual memory to swap out unused applications, although I still don't believe that a game console should have too many uses beyond video games and media playback... ;)
 
I bought the original Playstation after watching a Toshinden and Ridge Racer demo at my exchange the day it came out.

It was pretty incredible for its time--I owned a Saturn at the time, but I had to get it.

Pretty much exactly my scenario: I rented a Saturn with Virtua Fighter and Daytona (yes, you could rent systems in those ancient days). Later I rented a Playstation with Ridge Racer and Toshinden; they both looked so much better than those Saturn games that my decision was made...
 
The comparison being made here is the PC, which effectively uses the slow system RAM as a cache for the HDD data, and streams from that to the VRAM. This is a legacy design. Consoles with opticals have streamed assets from there straight to the 'VRAM' accessible to the GPU. Addition of an HDD enables better streaming with an install.

So the question is, will there be an advantage to taking a leaf from the PC design and putting in a pool of interim memory? Has this been avoided as a cost issue rather than as an unnecessary step?

Our options going forwards are (using 'VRAM' as memory accessible via GPU):
Stream from optical+HDD to VRAM
Stream from optical+HDD to cheap DDR to VRAM
Stream from optical+HDD to flash RAM to VRAM

DDR is going to add memory complexity, and being volatile it would need to be refilled on every boot. It will be faster than flash, but then what sort of performance is going to be needed to cope with streaming requirements anyway? Flash just sits on an IO controller, so it's very cheap to add, and will see the same cost-reduction forces as SDRAM. My gut feeling is a few GBs of fast flash would be fast enough, and the advantages of a significant pool of flash for content would be added value. Of course, devs will work with whatever options they have available, so it's down to the console engineers to pick the specs based on cost. Is there any compelling argument that a DDR buffer will provide a cost-effective advantage?
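For what it's worth, a quick back-of-envelope comparison of the three options. Every bandwidth figure below is an invented ballpark, not a real hardware spec:

```python
def refresh_time_s(working_set_mb: float, read_mb_s: float) -> float:
    """Seconds to replace a full streaming working set from one tier."""
    return working_set_mb / read_mb_s

# Made-up ballpark bandwidths for each option, refreshing a 2 GB set:
optical = refresh_time_s(2048, 40)    # straight from a BD drive
flash   = refresh_time_s(2048, 150)   # a single-chip flash buffer
ddr     = refresh_time_s(2048, 6000)  # a cheap DDR buffer

print(round(optical), round(flash), round(ddr, 2))  # 51 14 0.34
```

Even the single-chip flash figure keeps a 2 GB working set refreshed in well under a minute, which is the kind of number that matters for level transitions; the question is whether the DDR tier's sub-second figure buys anything the game actually needs.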
 
flash is slow; a few GB would be a single chip and thus similar to a memory card or USB stick in performance. write performance is well below that of a HDD, so you'd need several flash chips and a controller, i.e. an SSD.

that's costly :). on the PC you have hybrid hard drives and Intel's caching solution (and other implementations of the same thing), but there you're caching the OS, libraries, common apps etc., so it's longer-term caching of smaller data - data that's useful to have when you boot up your desktop.

so I don't believe in flash too much, except for firmware, DRM data, a fallback when there's a HDD failure, etc.
or an optional SSD or memory card, on the high-end model or one that you can add yourself.


not impossible: 2GB of fast DDR3 on a 64-bit bus for the CPU, 2GB of GDDR5 on a 128-bit bus for the GPU.
this would be just like a PC; by the way, with future generations of GPUs, smarter about memory access, multitasking and I/O, we might well see streaming from media to VRAM on the PC :)
the CPU and GPU will be able to read and write to each other's memory seamlessly, both on next-gen PC and next-gen console. (it could be conceptually the same even with unified memory + eDRAM)

so in my opinion a combination of :
Stream from optical+HDD to VRAM
Stream from optical+HDD to cheap DDR to VRAM

is possible, with developers doing what they want as they see fit.
 
The comparison being made here is the PC, which effectively uses the slow system RAM as a cache for the HDD data, and streams from that to the VRAM. This is a legacy design. Consoles with opticals have streamed assets from there straight to the 'VRAM' accessible to the GPU. Addition of an HDD enables better streaming with an install.

So the question is, will there be an advantage to taking a leaf from the PC design and putting in a pool of interim memory? Has this been avoided as a cost issue rather than as an unnecessary step?

Our options going forwards are (using 'VRAM' as memory accessible via GPU):
Stream from optical+HDD to VRAM
Stream from optical+HDD to cheap DDR to VRAM
Stream from optical+HDD to flash RAM to VRAM

DDR is going to add memory complexity, and being volatile it would need to be refilled on every boot. It will be faster than flash, but then what sort of performance is going to be needed to cope with streaming requirements anyway? Flash just sits on an IO controller, so it's very cheap to add, and will see the same cost-reduction forces as SDRAM. My gut feeling is a few GBs of fast flash would be fast enough, and the advantages of a significant pool of flash for content would be added value. Of course, devs will work with whatever options they have available, so it's down to the console engineers to pick the specs based on cost. Is there any compelling argument that a DDR buffer will provide a cost-effective advantage?

Since the HDD will (probably) only be on the premium SKU, I can see 32 to 64 GB of flash memory on all models, but not for installing an entire game. For example, a maximum of 10 GB of space per game, and another 10 GB reserved by the OS as memory swap. 64 GB seems more likely, for 3 games plus some downloaded content.
Streaming from a fast Blu-ray drive (40-50 MB/s) and from fast flash memory (150+ MB/s) together would be enough to fill 2-4 GB of RAM in 10-20 seconds.
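As a sanity check on that, assuming the drive and the flash really can stream in parallel (rates are the hypothetical ones above):

```python
def fill_seconds(ram_gb: float, bd_mb_s: float = 50,
                 flash_mb_s: float = 150) -> float:
    """Time to fill RAM assuming optical and flash stream in parallel."""
    return ram_gb * 1024 / (bd_mb_s + flash_mb_s)

print(fill_seconds(2))  # 10.24
print(fill_seconds(4))  # 20.48
```

So "a few seconds" is optimistic for the raw transfer alone, but 10-20 seconds is still a far cry from a multi-minute load.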
 
The comparison being made here is the PC, which effectively uses the slow system RAM as a cache for the HDD data, and streams from that to the VRAM. This is a legacy design. Consoles with opticals have streamed assets from there straight to the 'VRAM' accessible to the GPU. Addition of an HDD enables better streaming with an install.

So the question is, will there be an advantage to taking a leaf from the PC design and putting in a pool of interim memory? Has this been avoided as a cost issue rather than as an unnecessary step?

Our options going forwards are (using 'VRAM' as memory accessible via GPU):
Stream from optical+HDD to VRAM
Stream from optical+HDD to cheap DDR to VRAM
Stream from optical+HDD to flash RAM to VRAM

DDR is going to add memory complexity, and being volatile it would need to be refilled on every boot. It will be faster than flash, but then what sort of performance is going to be needed to cope with streaming requirements anyway? Flash just sits on an IO controller, so it's very cheap to add, and will see the same cost-reduction forces as SDRAM. My gut feeling is a few GBs of fast flash would be fast enough, and the advantages of a significant pool of flash for content would be added value. Of course, devs will work with whatever options they have available, so it's down to the console engineers to pick the specs based on cost. Is there any compelling argument that a DDR buffer will provide a cost-effective advantage?

I don't know that it's a compelling argument, but ...

A few GB/s would be plenty for anything "connectivity and non-game" related like joker was describing (an Athlon 64 with DDR1 still laughs off anything most users would do), and there may be some game-related code where low latency but relatively low bandwidth would be fine. For audio and caching it would be very fast. Maybe you could put it on the peripheral bus (latency allowing) or give it a 16 or 32 bit bus of its own (although this would add complexity). Obviously you'd use less of it than you would flash, and you'd lose the advantage of it being permanent storage.

The key idea, I guess, was freeing the platform maker up to use 'less' but the absolute fastest RAM that they could for main memory at launch (GDDR3 at 1600 MHz was actually available at the 360's launch, and some GPU vendors ran it faster still), and also giving additional options for cost reduction without a mainboard redesign (similar to the way the 4GB of flash is implemented in the 360S). The main memory (the fastest stuff) could then almost be treated as a big, high-bandwidth cache for what you needed for the next few frames, with less-accessed data* or data needed in the next few seconds or tens of seconds piled up in the slower memory.

*e.g. not all of your AI code or data needs to be accessed every frame, same for audio

Might be a bad idea, but I like the idea of - if you're going to have split memory - doing it based on bandwidth and time-until-needed rather than simply doing it by processor (e.g. GPU and CPU).
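A toy sketch of what "split by time-until-needed" could look like. The pool names and deadlines are entirely made up for illustration:

```python
# Place each asset in the slowest pool that still meets its access
# deadline. Thresholds below are invented, not from any real console.
POOLS = [                 # (name, max seconds until the data is needed)
    ("main_ram",   0.1),  # needed within the next few frames
    ("slow_ram",  10.0),  # needed within seconds (audio banks, AI data)
    ("flash",    120.0),  # needed within the next couple of minutes
]

def place(asset: str, needed_in_s: float) -> str:
    """Return the pool this asset belongs in, by time-until-needed."""
    for pool, deadline in POOLS:
        if needed_in_s <= deadline:
            return pool
    return "optical"      # everything else stays on disc/HDD

print(place("shadow_map", 0.016))    # main_ram
print(place("ambient_bank", 3.0))    # slow_ram
print(place("next_section", 90.0))   # flash
print(place("epilogue", 600.0))      # optical
```

The point is that the split falls out of access deadlines rather than out of which processor owns the data.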
 
Since the HDD will (probably) only be on the premium SKU, I can see 32 to 64 GB of flash memory on all models, but not for installing an entire game. For example, a maximum of 10 GB of space per game, and another 10 GB reserved by the OS as memory swap. 64 GB seems more likely, for 3 games plus some downloaded content.
Streaming from a fast Blu-ray drive (40-50 MB/s) and from fast flash memory (150+ MB/s) together would be enough to fill 2-4 GB of RAM in 10-20 seconds.

To get 50 MB/s from a Blu-ray drive you'd need a 12X drive, spinning at about 10,000 rpm. The 360's noisy, vibration-trauma-inducing 12x DVD drive was about 7,200 rpm, IIRC. I know you can get quieter drives and use better soundproofing than MS used (although all my PC drives have been noisy as hell too), but I hope they don't go much past 6X.

Stating the obvious perhaps, but read speeds will drop as you move inward from the perimeter of the disk. I can't see any next gen console doing well without something to aid the optical drive.
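Rough numbers for that effect: with constant angular velocity (CAV), linear read speed scales with radius. The radii below are approximate Blu-ray data-zone figures; only the ratio matters here.

```python
# Approximate Blu-ray data-zone radii, in millimetres (assumptions).
INNER_MM, OUTER_MM = 24, 58

def inner_speed(outer_speed_mb_s: float) -> float:
    """CAV read speed at the innermost track, given the rim speed."""
    return outer_speed_mb_s * INNER_MM / OUTER_MM

print(round(inner_speed(50), 1))  # 20.7
```

So a drive quoted at "50 MB/s" at the rim would manage only around 20 MB/s near the hub, which is exactly why some secondary store to back up the optical drive looks necessary.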
 
I don't know that it's a compelling argument, but ...

A few GB/s would be plenty for anything "connectivity and non-game" related like joker was describing (an Athlon 64 with DDR1 still laughs off anything most users would do), and there may be some game-related code where low latency but relatively low bandwidth would be fine. For audio and caching it would be very fast. Maybe you could put it on the peripheral bus (latency allowing) or give it a 16 or 32 bit bus of its own (although this would add complexity). Obviously you'd use less of it than you would flash, and you'd lose the advantage of it being permanent storage.
That's exactly what A-RAM did in the GC. I'd be curious to hear from any devs on their experiences and whether they have a preference. If it's a cost-effective solution, why hasn't it been tried before?
 
That's exactly what A-RAM did in the GC. I'd be curious to hear from any devs on their experiences and whether they have a preference. If it's a cost-effective solution, why hasn't it been tried before?

Yeah, like A-RAM, but much larger in proportion to main RAM and with massively more bandwidth (about 1,000 times), so that you can run a lot more of your game from it (if you want to). Not so much an auxiliary pool of memory like in the GC, but part of a hierarchy of optical -> flash -> slow memory -> main memory -> processor caches. You could always skip the flash and/or slow memory for any data you wanted to move somewhere directly.

I'd see it as the kind of thing where, down the road, developers could build tools to help them locate data optimally, and perhaps even automate part of the process and change it on the fly to achieve best performance. I don't see it as being quite like the PC memory model.
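A sketch of that hierarchy as a table such a placement tool might walk. Every bandwidth figure is a placeholder, not a real spec:

```python
# Placeholder bandwidths (MB/s) for each level of the hierarchy,
# ordered slowest to fastest. All values invented for illustration.
HIERARCHY = [
    ("optical",      50),
    ("flash",       150),
    ("slow_ram",   3000),
    ("main_ram",  25000),
]

def staging_plan(size_mb: float) -> dict:
    """Seconds to copy an asset out of each tier, one hop at a time
    (tiers can be skipped in practice)."""
    return {tier: round(size_mb / bw, 3) for tier, bw in HIERARCHY}

print(staging_plan(256))
# {'optical': 5.12, 'flash': 1.707, 'slow_ram': 0.085, 'main_ram': 0.01}
```

A tool could use numbers like these, plus each asset's access deadline, to decide automatically which tier to stage it in.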
 
yes, this is the way of the future, also with many-core / massively parallel architectures, where you will have to manage memory locality, dispatching data and bringing it close to the crunching units.
we have that already with Cell SPE local storage, GPU registers and L1/L2, etc.

any work on hardware, OS and software tools for what you describe won't be wasted.
 
The main reason is general load times. Even 512MB on current consoles takes several minutes to fill, and it's not an option to have the game pause for that long after completing a level, erase most of the data from RAM, and then start loading again.

Sorry, but that's just wrong. Filling 512MB in a game that loads from HDD takes about 7-8 seconds (including decompression - data transfer is by far the bottleneck), and it's not much slower from optical either with a little care. Don't get me wrong, proper streaming is great and all, but sometimes it's just overkill.
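Quick arithmetic on what that claim implies (no hardware assumptions, just division):

```python
def implied_mb_s(ram_mb: float, seconds: float) -> float:
    """Effective fill rate implied by a given load time."""
    return ram_mb / seconds

print(round(implied_mb_s(512, 8), 1))  # 64.0
# 512 MB in 8 s implies ~64 MB/s into RAM. If the data is stored
# ~2:1 compressed, only ~256 MB crosses the bus, so a ~32 MB/s
# HDD stream would be enough - plausible for a console drive.
```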
 
The Saturn had more RAM than the Playstation. :eek:

But the Saturn had an assload of different memory pools at different speeds, with a max size of 1MB.

1 MB SDRAM as work RAM for both SH-2 CPUs (faster)
1 MB DRAM as work RAM for both SH-2 CPUs (slower)
512 KB VDP1 SDRAM for 3D graphics (texture data for polygons/sprites and drawing command lists)
2x 256 KB VDP1 SDRAM for 3D graphics (two framebuffers for double-buffered polygon/sprite rendering)
512 KB VDP2 SDRAM for 2D graphics (texture data for the background layers and display lists)
4 KB VDP2 SRAM for colour palette data and rotation coefficient data (local, on-chip SRAM)
512 KB DRAM for sound (multiplexed as sound CPU work RAM, SCSP DSP RAM, and SCSP wavetable RAM)
512 KB DRAM as work RAM for the CD-ROM subsystem's SH-1 CPU
32 KB SRAM with battery back-up for data retention
512 KB mask ROM for the SH-2 BIOS

Playstation had 2MB main RAM, 1MB VRAM and 512KB sound RAM. Therefore it could actually hold more data in one pool.
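Totting up the pools listed above (BIOS ROM and battery-backed save RAM left out):

```python
# Saturn RAM pools in KB, in the order listed in the post.
SATURN_KB = [1024, 1024, 512, 256, 256, 512, 4, 512, 512]
PSX_KB    = [2048, 1024, 512]   # main RAM, VRAM, sound RAM

print(sum(SATURN_KB), sum(PSX_KB))  # 4612 3584 - Saturn wins on total
print(max(SATURN_KB), max(PSX_KB))  # 1024 2048 - PS1's biggest pool wins
```

Which is exactly the point: the Saturn has about a megabyte more RAM overall, but the Playstation's single 2MB pool can hold a bigger contiguous chunk of data.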
 
By the way, streaming or some sort of background loading is a necessary element of practically any game engine nowadays, no matter how much memory you add to the console.

The main reason is general load times. Even 512MB on current consoles takes several minutes to fill, and it's not an option to have the game pause for that long after completing a level, erase most of the data from RAM, and then start loading again. This is true even for your everyday corridor shooter. The most a game can get away with is playing a prerecorded cutscene for a minute, but even those should be limited, and in general the levels should appear to be continuous.

Would you care to send this as an email to Gabe Newell and Valve and any of the other developers of Portal 2? Damn loading screens every two minutes. Grrr. :mad:
 
Honestly I think this story is well and truly overplayed.
MS most certainly did not add extra RAM to 360 to appease Epic. If I were speculating as to what the exact cause was, I'd make note of the fact that they made the decision at about the time the first PS3 devkits were circulating through 3rd party devs.

MS do listen to what developers say, and so does every other manufacturer, but at the end of the day they have a budget and they have to make trade-offs: more RAM might mean a weaker CPU or GPU, or whatever.

A case in point: there was a support guy at MS who sent out an email asking devs if they would like more EDRAM, about a year prior to the 360 launch. Of course everyone responded with yes, and of course MS couldn't accommodate it because of cost - the support engineer had simply never understood what it would cost per unit to do.


I agree, and as I stated in previous posts, it's well accepted that developers are key for consoles (except maybe for the PS2), and the Epic story may well have been exaggerated. But don't you think it's strange that Epic said it openly and MS never came out to deny it publicly, or at least say that it wasn't so?

Edit: Sorry all for the off-topic. Shifty, please delete this post if you want.
 
Sorry all if this was posted before.

Maybe unlikely, but is it possible that Sony is indeed preparing to launch the PS4 by March 2013 (Japan first, then Europe and the USA at the end of the year)?

On the other hand, it's perhaps not at all improbable that the PS4 is coming out in about 18 months, because we have the case of the Wii U, announced in June this year and releasing in April 2012 - about 10 months between the dates.

And if that PS4 timeline holds, what would be the deadline for the hardware to be ready for consumers?

My guess would be at least a year before launch, so the specs would have to be "closed" by March 2012, while giving developers at least an idea of what to prepare for the second generation of games... the first generation will certainly use less than half the capacity due to programming on beta SDKs... if a "beta PS4 SDK" even exists yet* ...

What kind of hardware can we expect if the specs are set in March 2012 for a hypothetical launch in March 2013?

I'm aiming high... a 4-core 3.2GHz PPC A2 CPU + a 600MHz AMD GPU with 800-1000 SIMD cores (customised like those found in notebooks at 50-75 watts) + 3 GB RAM/VRAM maximum (unfortunately...).

Posted by Bgassassin on NeoGAF:

http://bitmob.com/articles/source-playstation-4-in-18-months

Edit:

* http://www.eurogamer.net/articles/2010-11-24-kaz-not-sure-if-gt6-will-be-on-ps3

Hirai said this in November 2010:

"10 years ago it was easier to predict what would happen three years in the future," he said in an exclusive interview at the official GT5 launch event in Madrid. "Nowadays no-one knows what happens in the future. In three years, we don't know what will happen."

Polyphony Digital CEO Yamauchi also talked about roughly 3 years in November 2010.
http://www.gamesindustry.biz/articles/2010-11-25-yamauchi-unsure-if-gt6-will-be-on-ps3-or-beyond


Shuhei Yoshida on August 17, 2011:

"GT6 in a few years"...

http://gamingbolt.com/yoshida-last-guardian-project-not-moving-fast-as-expected
 
Sorry, but that's just wrong. Filling 512MB in a game that loads from HDD takes about 7-8 seconds (including decompression - data transfer is by far the bottleneck), and it's not much slower from optical either with a little care. Don't get me wrong, proper streaming is great and all, but sometimes it's just overkill.

Why the hell do I have to sit 70-80 seconds in front of GTA 4 or Mass Effect 2 then? Are their programmers that bad, or what? :p

Also, we're now talking about filling 8 GB with an optical drive that can read 50 MB/sec. I don't think it'd take only 7-8 seconds either...
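The division in question:

```python
# Filling 8 GB from a 50 MB/s optical stream alone, best case.
seconds = 8 * 1024 / 50
print(round(seconds))  # 164 - nearly three minutes
```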
 