Alternative distribution to optical discs: SSD, cards, and download

Yet if the 360 could just about fit its assets on an 8GB disc, wouldn't it be reasonable to think the 720 would need a larger storage medium as assets grow? One last thing: an 8GB DVD can fill the 256MB RAM pool of the existing consoles 32 times. A dual-layer BD (50GB) fills a 256MB pool about 200 times, and would fill a 2GB pool in a future console 25 times.

The point I'm arguing is that an 8x Blu-ray drive might be comparatively just as fast as, or even faster than, current-gen optical media, provided binary size grows less than the drive speed does. I think if assets grow to more than about 3x bigger than this gen, then Blu-ray next gen will start to give worse performance than optical media gives this gen. So if we take the assumption of an 8GB game today, you could triple it to 24GB next gen and see the same loading performance from a next-gen console as we are seeing today. Incidentally, this would mean a single-layer BD is enough for next gen. You could of course use a dual-layer BD and add more data duplication to avoid seek times...
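For what it's worth, here's a minimal sketch of that break-even argument, assuming peak transfer rates (1x DVD = 1.385 MB/s, 1x BD = 4.5 MB/s are my assumptions; real drives vary across the disc):

```python
# Minimal sketch of the break-even argument; the drive speeds are assumed peak figures.
dvd_12x = 12 * 1.385   # ~16.6 MB/s, a current-gen 12x DVD drive at peak
bd_8x = 8 * 4.5        # 36.0 MB/s, a hypothetical 8x BD drive at peak

# Loading stays on par with today as long as asset growth stays below this factor:
break_even = bd_8x / dvd_12x
print(f"break-even asset growth: {break_even:.1f}x")

# Minutes to stream a full game image at each rate:
for size_gb, rate in ((8, dvd_12x), (8 * break_even, bd_8x)):
    print(f"{size_gb:4.1f} GB at {rate:4.1f} MB/s -> {size_gb * 1024 / rate / 60:.1f} min")
```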

While flash would be a step in the right direction, I think the size is overkill and the speed is sub-par. 64GB wouldn't be large enough to install future games to, yet would be excessively large for buffering (with 2-4GB of RAM in a future console). Currently the fastest cards I've seen are 600x (~90MB/s), and they are approaching $200 for 64GB. Granted, the price will drop and we might get a 64GB card for $30 in two years, but we'd still only get 90MB/s on a medium whose life gets slashed by using it as a cache.

Think of flash memory as the cheap base SKU, à la the Arcade Xbox 360. Of course you would add the optional HDD for power users. The big installs go to the HDD, and flash is still used as a cache. On the cheap base unit, flash is only used as cache + game saves + storage for smaller PSN/Xbox Live games. If you want more, you buy the optional HDD or the more expensive SKU that comes with one.
 
I don't believe binary sizes will stay the same, but in the same note I don't believe binary size will grow proportionally to memory size either.
I agree, but I don't see savings coming from compression schemes. Even if you're right about texture compression, what other savings can be made? Movies and audio are already lossy-compressed. Models can't be. Some assets won't cost more to produce, as they are already created in higher fidelity and scaled down; their creation will thus cost the same.

The difference will be where we need more content, and that will probably have to look to what was supposed to be one of the magic tricks of this gen: procedural content. Games can be populated with characters and levels created by piecing together various assets. Think EA Sports' player customisation done procedurally for every NPC except those the artists want particular control over, and you'll see that content variation can be handled via computation rather than prebuilding. There just hasn't been the juice to do that this gen, or so it seems, with lots of games repeating character models etc.
 
I agree, but I don't see savings coming from compression schemes. Even if you're right about texture compression, what other savings can be made?

I would hazard a guess that textures are where the biggest gains can be made. Samples (sound) might have modest gains to be had. I wonder whether object models (vertex data) are already compressed, or whether that's something that could be fit into the pipeline.

Another thing that might affect the memory equation is the different buffers and data sets used for graphics rendering, AI and so on. Perhaps game engines will move to more complex rendering models requiring more interim buffers, eating into usable RAM. Raising the resolution or traditional AA levels also eats more memory. Perhaps AI and game logic become more complex and the data structures used to feed the engine grow. I think the Xbox 360 gets some memory advantage here this gen due to eDRAM and tiling compared to a regular GPU.

If we have enough GPU power, perhaps some of the lower-res buffers used this gen get bumped to full resolution next gen (think shadows, smoke, particles and so on). Some buffers might double in size by using higher-precision data types such as fp16/fp32. Displacement mapping can magnify the amount of geometry in memory substantially compared to what was loaded from disc.
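As a rough, hypothetical illustration of how quickly one such promotion adds up (the resolutions and formats below are assumptions, not taken from any actual engine):

```python
# Rough numbers for one such buffer: a quarter-res fp16 smoke/particle target this gen
# versus a full-res fp32 target next gen (resolutions and formats here are assumptions).
def buffer_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / 2**20

this_gen = buffer_mb(640, 360, 8)     # quarter of 720p, RGBA fp16 (8 bytes/pixel)
next_gen = buffer_mb(1920, 1080, 16)  # full 1080p, RGBA fp32 (16 bytes/pixel)
print(f"{this_gen:.1f} MB -> {next_gen:.1f} MB per buffer")   # ~1.8 MB -> ~31.6 MB
```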

What I expect is that most games next gen will try to be DD-friendly, and hence binary sizes will be somewhat limited. Higher-end games might push a dual-layer BD, but then the DD option starts to be rather painful unless assets are scaled down for the DD version.
 
I could really use a quick education (anyone offering).

Could someone tell me how we would fill a system with 2GB of usable RAM (not VRAM)? What I mean is, I don't really understand what you guys are referring to with binary data. I could assume it's the basic code that tells the game how to run (combine textures x, y, z into model A, etc.), but I really don't know.

What is filling up the RAM today, and what kind of scale would be present in the future? Master Chief/Drake - would doubling the assets make an improvement in these characters, and how would those assets increase, etc.?

How much room does a model take, and does the size of the model increase at the same rate as its complexity (2x the number of vertices = 2x the file size)?

Are we currently limited in the textures being used by the amount of RAM available, or is it more about processing speed? If it is RAM, what kind of textures are we looking at for the next generation, and what size are they going to be?

Would the increase in processing power mean we would potentially have more objects on screen? If so, those would require textures too, right?

I'm going on the presumption that the RAM is being filled up with textures, audio, models, game code and... ? That is where I'm getting stuck, not knowing much about game design.


Thanks!
 

The very layman explanation would be an analogy with zip packages and text documents: the documents compress to a small size but must be decompressed before they can be used. The same applies to most game data (objects, textures, game logic, scripts and so on).
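A toy sketch of that analogy (purely illustrative; the "game data" here is just a repetitive string):

```python
import zlib

# Toy version of the zip analogy: small on disc, but it must be expanded
# before the game can actually use it.
level_data = b"spawn enemy at (10, 20); " * 1000   # deliberately repetitive "game data"
on_disc = zlib.compress(level_data)                # what would ship on the disc
in_memory = zlib.decompress(on_disc)               # what ends up in RAM

print(len(on_disc), "bytes on disc ->", len(in_memory), "bytes in RAM")
```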

For textures, GPUs use a very specific kind of compression that allows easy and fast access to any pixel inside the compressed image. This GPU-friendly compression is not as dense as formats like JPEG or JPEG 2000, which don't let you access a single pixel without decompressing a larger block or even the whole file.

What we could do here is use these denser compression formats on disc and then recompress in memory to the GPU-friendly format. The trade-off is between I/O time and the CPU cycles spent doing the conversion.
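To put rough numbers on that trade-off for a single 2048x2048 texture (the DXT1 figure is the standard 4 bits per texel; the JPEG figure is an assumed ~2 bits per pixel and depends heavily on content and quality settings):

```python
# Back-of-the-envelope sizes for one 2048x2048 texture.
texels = 2048 * 2048

raw_rgba8 = texels * 4     # 16 MB uncompressed in memory
dxt1_vram = texels // 2    # 2 MB: GPU-friendly block compression at 4 bits/texel
jpeg_disc = texels // 4    # ~1 MB: denser disc format at an assumed ~2 bits/texel

for name, size in (("raw RGBA8", raw_rgba8),
                   ("DXT1 in VRAM", dxt1_vram),
                   ("JPEG on disc", jpeg_disc)):
    print(f"{name:13s} {size / 2**20:5.1f} MB")
# Shipping JPEG and transcoding to DXT on load roughly halves the disc footprint
# again, at the cost of CPU time spent recompressing.
```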

For vertex data, displacement mapping could be viewed as an extreme compression method. Similarly, heightmaps used for terrain, or voxels, could be seen as rather extreme forms of compression. If we start to have large models and lots of vertex data, even simple zip compression might give considerable savings. This gen, vertex data is probably small enough not to warrant much compression, as there isn't much to be gained?
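A quick sketch of how extreme that "compression" looks for a terrain heightmap (the in-memory vertex layout is an assumption, just to show the expansion factor):

```python
# Sketch of the "heightmap as extreme compression" idea.
samples = 1024 * 1024                 # terrain grid resolution

on_disc = samples * 2                 # 16-bit height per sample -> 2 MB heightmap
bytes_per_vertex = 12 + 12 + 8        # position + normal + UV = 32 bytes (assumed layout)
in_memory = samples * bytes_per_vertex

print(f"{on_disc / 2**20:.0f} MB on disc -> {in_memory / 2**20:.0f} MB of vertex data")
```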

Similarly, any data that is uncompressed on disc, or compressed sub-optimally, can benefit from being compressed on disc and decompressed/recompressed once transferred to memory.

Some data, such as the movies on the disc, might already use the best practical compression, and there is not much (if anything) to be gained without lowering quality.

AI might start by reading navigation control points and some basic decision trees from disc, but then start dynamically building structures based on player actions and game-engine feedback. This dynamic behaviour happens at runtime and eats more memory than what was actually read from the disc.

For rendering, games use lots of buffers for things such as shadows, stereoscopic rendering, deferred shading, dynamic light maps/volumes, the z-buffer, the normal buffer and so on. All this data is generated on the fly from the level geometry and is not necessarily read from disc (prebaked content is an exception, though). It's not a big jump to expect more advanced rendering next gen, requiring more precision and more interim buffers. It might also be that we get higher AA or higher rendering resolutions next gen, eating into usable RAM, unless consoles continue to use an Xbox 360-like eDRAM solution with tiling, which can use less main memory than rendering everything in one go.
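To make the buffer point concrete, here's a hypothetical G-buffer budget; the list of targets and their formats are assumed, not taken from any shipping engine:

```python
# A rough deferred-shading G-buffer budget, showing how interim buffers scale
# with resolution and MSAA (target list and formats are assumptions).
targets_bytes_per_pixel = {
    "albedo (RGBA8)": 4,
    "normals (RGBA16F)": 8,
    "depth/stencil": 4,
    "motion/spec (RGBA8)": 4,
}

def gbuffer_mb(width, height, msaa=1):
    return sum(targets_bytes_per_pixel.values()) * width * height * msaa / 2**20

print(f"720p, no MSAA : {gbuffer_mb(1280, 720):.1f} MB")     # ~17.6 MB
print(f"1080p, 2x MSAA: {gbuffer_mb(1920, 1080, 2):.1f} MB")  # ~79.1 MB
```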
 
Another argument against digital distribution...

http://arstechnica.com/tech-policy/news/2011/03/data-caps-claim-a-victim-netflix-streaming-video.ars

Clearly the ISPs and content providers have issues to resolve in a "download/stream everything" world.

While there's certainly going to be an issue with digital distribution for anyone on a slower connection or with data caps, that article is horribly written. They lowered the default quality to preserve bandwidth for customers, but it's just a switch to turn full quality back on.
 

From reading around the net, it seems that plenty of customers have a real issue with caps on this ISP, and they can't pick anyone else either. Perfect!
 
My evening broadband is becoming useless: 0.69 Mbps in the evening, one tenth of what can be had during the day. I'm thinking there are just too many users. Though fibre is being rolled out across the country, my particular village, surrounded by fibre-enabled townsteads, isn't even on the list to be considered!
 
Shifty, would you mind giving this one a go?

http://netalyzr.icsi.berkeley.edu/

Run it once when your net is good and once when it's bad, then send me the links to your results, either in a PM or posted in this thread.

One of my bad test results is this one, due to me playing with some ULA IPv6 stuff in here.

http://n5.netalyzr.icsi.berkeley.edu/summary/id=3210a1cd-32179-18cb64d7-cdd7-44f2-98be

And on our standard guest hotspot in the office

http://n5.netalyzr.icsi.berkeley.edu/summary/id=3210a1cd-32185-f79c4c56-174c-4c35-b83c
 
FLASH, ahaha, savior of the universe!

I'm back... and so are Intel and Micron, announcing the first 20nm MLC NAND for SSD use:


http://www.anandtech.com/show/4271/intel-micron-announce-first-20nm-mlc-nand-flash-for-use-in-ssds


An 8GB 2-bit-per-cell MLC NAND device built at 20nm has a die area of 118mm², down from 167mm² at 25nm. A single 8GB NAND device wasn't built at 34nm.


[Image: 34nm, 25nm and 20nm 8GB NAND die size comparison]


For consumers there's an obvious win. We need smaller transistor geometries to reduce the cost of NAND, which ultimately reduces the cost of SSDs. The days of 50% annual price reductions are over, however; expect to see a conservative 20-30% drop in price for SSDs that use 20nm NAND over 25nm NAND.


16GB SD cards are $20 now. A nice 30% drop would mean $16 at retail. It most likely costs less than $5 for the manufacturer to produce.


It's going to hit not in 2012 or 2013 but in the second half of 2011. This is a major step up, and it's my understanding that reads don't hurt flash, it's the writes. They could reduce the size further for consoles by going with TLC instead of MLC; at 25nm that saved about 20% die space. So we could see closer to a 50% drop in costs by the end of this year for what we'd want to consider for a console cart.
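For a rough sense of why the die shrink maps to the quoted price drop, here's a crude sketch (it ignores yield, wafer edge loss, packaging and retail margins, so treat it as directional only):

```python
import math

# Crude scaling from the die areas quoted above; best-case upper bound only.
wafer_area = math.pi * (300 / 2) ** 2    # 300 mm wafer, in mm^2

for node, die_area in (("25nm MLC", 167), ("20nm MLC", 118)):
    dice = wafer_area // die_area
    print(f"{node}: ~{dice:.0f} 8GB dice per wafer")
# ~423 vs ~599 dice -> roughly 30% less silicon cost per GB at a constant wafer price.
```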
 

The new SSD generation didn't really show a price drop vs capacity; maybe these 20nm parts will help.

But maybe it would be good to have a clear idea of what you propose.

SSDs instead of hard drives, and games developed and released on flash?

Or how does it work?
 

Why SSD prices? You have to remember that the newest SSDs are faster, and prices have still come down. I bought my Vertex 2, a 128GB drive, for $170; that's how much I paid for my 60GB Vertex 1.


An SD-card form factor should be good enough; remember, even cheap SD cards hit 10MB/s, with some going up to close to 30MB/s, and CF cards can hit 100MB/s.

I propose something slightly larger than a 3DS card.

So basically you go to the store and buy the game just like you do now, except it's on flash. It allows so many benefits.

The other thing I'd do is add multiple slots so you can have multiple games in at once. I envision Gears 5 being able to access Gears 4 maps for multiplayer if you have Gears 4 in another slot. This would also reduce the used market.
 
SanDisk and Toshiba announce 19nm NAND:

http://www.theinquirer.net/inquirer/news/2045533/sandisk-toshiba-announce-19nm-nand-flash-memory



While SanDisk was revealing future plans, Toshiba outlined possible uses for the chips, saying it can cobble together 16 64Gb chips in one package to produce 128GB drives for use in smartphones and tablets. Just like IM Flash before it, SanDisk said that shrinking NAND flash chips down to 19nm does not sacrifice reliability.



They said they are working on 3-bit-per-cell parts, which should drive costs down further as they are smaller. Of course that kills the number of writes, but we really only need the cart to be written to a few times at most.
 

Ok, still trying to get the message, sorry :)

You want the consoles to have SSDs or standard hard drives?
And then deliver the games on 16GB(?) cards?
 

Yeah, I figure you get rid of the optical drive and the 2.5-inch drive and instead put in a single-platter 1TB 3.5-inch drive. You would still greatly reduce the size of the console.

Then you ship the games on 4/8/16/32/64GB cards depending on what's needed. Obviously some games will still fit fine in less than 8 gigs of space, and some even in just 4 gigs; some would need more.
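As a trivial, hypothetical sketch, picking a card for a finished game image would amount to something like this (card sizes taken from the proposal above; the helper itself is made up):

```python
# Toy helper for the card proposal: pick the smallest stock card that fits a game image.
CARD_SIZES_GB = (4, 8, 16, 32, 64)

def pick_card(game_size_gb):
    for size in CARD_SIZES_GB:
        if game_size_gb <= size:
            return size
    raise ValueError("game does not fit on the largest card")

print(pick_card(6.5))   # -> 8
print(pick_card(23))    # -> 32
```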
 