Predict: The Next Generation Console Tech

I was under the impression that much of the workload TEX units face, filtering, is accomplished using reduced-precision hardware that is smaller than what a fully-fledged ALU pipe in the SIMD would be.
There would be space and power savings there, and higher hardware throughput for the higher-precision formats would be predicated on increasing the width of the data paths and internal bandwidth in general.
 
Custom solutions still take up some processing power, whether it be ALUs on the GPU or time on the CPUs.

I'd be hard pressed to believe they'd rip out MSAA.
 
It's not the texture unit which keeps the loads in flight and has to take care of thread context storage though, so other than that I don't see how it's relevant.

I wouldn't be so sure about that. From what I understand of the various architectures used by both Nvidia and ATI, the shader blocks send the texture read to the texture unit, which then handles all the addressing calculations, generates all the loads required for the texture reads, and handles the pipelining and data/timing dependencies as well. It's not until the texture result is complete and the shader core receives it that the shader core is involved again. So the texture unit is the one keeping the loads in flight to generate the texture result.

The best description of the current designs comes from ATI, and it certainly appears from all their documentation that the texture unit is fairly decoupled in all its operations from their shader core.
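As a rough mental model of that decoupling (my own simplification for illustration, not anything taken from ATI's or Nvidia's docs), think of something like this:

Code:
#include <stdio.h>

/* Toy model of the decoupled texture path described above -- my own
 * simplification, not from ATI or Nvidia documentation. The shader core
 * only files a request and keeps running other work; the addressing,
 * the loads in flight and the filtering all live in the texture unit. */

typedef struct {
    float u, v;        /* coordinates handed over by the shader          */
    int   dest_reg;    /* register the filtered result should land in    */
    int   done;        /* set by the texture unit when the result is in  */
    float result;
} TexRequest;

/* "Texture unit": everything between request and result happens here.   */
static void tex_unit_service(TexRequest *rq, const float *texels,
                             int width, int height)
{
    int x = (int)(rq->u * (width  - 1));   /* 1. addressing math          */
    int y = (int)(rq->v * (height - 1));
    rq->result = texels[y * width + x];    /* 2. issue the load(s)        */
                                           /* 3. "filter" (point sample)  */
    rq->done = 1;                          /* 4. signal completion        */
}

int main(void)
{
    float texture[4 * 4];
    for (int i = 0; i < 16; i++) texture[i] = (float)i;

    /* Shader-core side: fire the fetch, then hide latency with ALU work. */
    TexRequest rq = { .u = 0.5f, .v = 0.5f, .dest_reg = 3, .done = 0 };
    float alu_work = 0.0f;

    while (!rq.done) {
        alu_work += 1.0f;                       /* other wavefronts run   */
        tex_unit_service(&rq, texture, 4, 4);   /* TEX unit works away    */
    }

    printf("r%d = %.1f after %.0f ALU ops of latency hiding\n",
           rq.dest_reg, rq.result, alu_work);
    return 0;
}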
 
I was under the impression that much of the workload TEX units face, filtering, is accomplished using reduced-precision hardware that is smaller than what a fully-fledged ALU pipe in the SIMD would be.
There would be space and power savings there, and higher hardware throughput for the higher-precision formats would be predicated on increasing the width of the data paths and internal bandwidth in general.

As long as you aren't using FP texture formats, the texture filtering can be reasonably achieved with 8b or 9b math.
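For a plain 8-bit-per-channel format that looks roughly like the following (a minimal sketch of the narrow fixed-point math, my own toy code rather than how any real TEX unit is wired):

Code:
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of bilinear filtering on one 8-bit channel with narrow
 * fixed-point math. Weights wx, wy are 8-bit fractions (0..255 standing
 * in for 0..1). */
static uint8_t bilerp8(uint8_t t00, uint8_t t10, uint8_t t01, uint8_t t11,
                       uint8_t wx, uint8_t wy)
{
    /* Horizontal lerps: 8b x 8b products fit comfortably in 16 bits.     */
    unsigned top = (t00 * (255u - wx) + t10 * wx + 127u) / 255u;
    unsigned bot = (t01 * (255u - wx) + t11 * wx + 127u) / 255u;
    /* Vertical lerp between the two intermediate rows.                   */
    return (uint8_t)((top * (255u - wy) + bot * wy + 127u) / 255u);
}

int main(void)
{
    /* Sample halfway between black and white texel pairs: prints 128.    */
    printf("%u\n", bilerp8(0, 255, 0, 255, 128, 128));
    return 0;
}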
 
What are the chances we see a design that doesn't devote some of the silicon budget to AA and instead lets devs go with whatever they want or come up with custom solutions?

As per the Digital Foundry article (http://www.eurogamer.net/articles/the-anti-aliasing-effect-article), it seems almost a waste these days to say "this hardware has to do this but these are the limits" when you could use that budget for other functions.

I think ROPs are on their way out. Either next gen or the one after that.
 
Yesterday the company where I work assembled a firewall server and put the Linux firewall on an ADATA 32GB SSD for reliability, heat and access time. I'm not sure how effective it will be on that last point, but the price struck me: €80 for a more than good enough SSD :|

So on the way home I was thinking...
Take only 16 or 24GB of that MLC, strip the case, strip the SATA controller, put it on the motherboard, shift the price to the end of 2011, reduce traces and chips to accommodate the increased density, lower the price further with a direct mass order from the producer... fuck, it's very cheap!
Or cheaper than I was thinking, anyway, and it already does ~100MB/80MB read/write; what will it do in the future?

Ok, returning to our console tech fetish.
One of the things Microsoft did wrong with the 360 was shipping the Core/Arcade without an HDD to be used as cache (the reserved space being 12GB); later they added the option to copy an image of the game directly to the HDD.
The first denies Core/Arcade users the big loading speed-ups, which is terrible in games that stream a lot; to save time, some developers almost skip the dual cache/cacheless path entirely.
The second is a nice patch, but using the same disk for the image and the cache isn't optimal, especially if the disk isn't that fast, so in some cases this doesn't reduce load times at all, and in others it increases them in games that already use the cache heavily.
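To put that dual path in concrete terms, here's a rough sketch of what the split looks like from the developer's side (purely hypothetical function names, nothing from a real SDK):

Code:
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the cache/cacheless dual path a 360-era title
 * has to maintain; none of these names come from a real SDK. It only
 * illustrates why a time-pressed developer might neglect one path.      */

static bool hdd_cache_available(void) { return false; } /* Core/Arcade: no */

static void stream_from_hdd_cache(const char *level)
{
    printf("streaming %s from the HDD cache (fast seeks)\n", level);
}

static void stream_from_disc(const char *level)
{
    printf("streaming %s straight off the disc (slow seeks, big buffers)\n",
           level);
}

void load_level(const char *level)
{
    if (hdd_cache_available())
        stream_from_hdd_cache(level); /* path 1: prime and use the cache  */
    else
        stream_from_disc(level);      /* path 2: tuned for optical only   */
}

int main(void) { load_level("city_block_03"); return 0; }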
One thing I would like to see in the next generation is seas and seas of MLC, for caching and everything else, but seeing as Sony and MS are engaged in the bigger-HDD race, it's not a viable solution.
Another thing I would like to see, given that an 8x BD drive is painfully stuck in the 36MB/s range, is SD-like media to almost eliminate the cache problem and greatly speed up load and streaming times, but due to the price of the media, and the need to include a Blu-ray laser anyway (you know, for that other "I'm the living-room master" race), it's not a possible solution.

Ok folks, that's all.
Oh! Aaaaaaaaaaaaaaaaaaaaaaand one more thing...
What if we soldered into every console those 16GB of cheap, fast MLC flash to serve the same role as the cache on the 360? Being ubiquitous it would get used more, the experience would be identical on every model, the price would be kept down enough, and we'd still have the option of adding an almost-cheap terabyte HDD to show the average user how hi-tech the console is.
Do you think that is possible?
 
Define cheap? They're already soldering 256MB flash chips to the 360 board. If they can get even 4GB for similar prices, that'd be sufficient for title caching and some storage. Then they'll market the fat hard drives for game installs and storing bigger marketplace items. :p
 
My goal was also to replace the HDD cache completely, so that you don't *need* an HDD to play well, and for that it needs to be at least 12GB.

And the flash in the Arcade isn't cache but slow memory; they are different things at different prices, though I'm missing some info to make a real comparison.
Can the SSD logic (like TRIM) be included in the southbridge? Do the traces add too much complexity? How many chips do you need to reach 16GB? And... well... how much does a low-end MLC chip cost?
 
Doesn't the idea of using flash as cache end up with burned out flash cells after a while? Unless you've got a large chunk of SLC with wear leveling, which is cost prohibitive.
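A rough back-of-the-envelope on the wear question (the P/E cycle count, write amplification and daily churn below are my assumptions for ~2010-era MLC, not vendor figures):

Code:
#include <stdio.h>

/* Back-of-the-envelope flash endurance estimate. The numbers are assumed
 * values typical for the MLC of this era, not from any datasheet; the
 * point is just the shape of the math.                                   */
int main(void)
{
    const double capacity_gb         = 16.0;   /* soldered cache size         */
    const double pe_cycles           = 5000.0; /* assumed MLC endurance       */
    const double write_amp           = 2.0;    /* assumed write amplification */
    const double cache_writes_gb_day = 10.0;   /* assumed daily cache churn   */

    double total_writable_gb = capacity_gb * pe_cycles / write_amp;
    double lifetime_years    = total_writable_gb / cache_writes_gb_day / 365.0;

    printf("~%.0f GB of writes before wear-out, ~%.0f years at %.0f GB/day\n",
           total_writable_gb, lifetime_years, cache_writes_gb_day);
    return 0;
}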
 
There's cheap flash, and there's fast flash, but there's no cheap fast flash yet.

And don't assume the same linear/exponential growth we had 10 years ago.
 
TRIM and other technologies help there, stretching the life cycle of a modern SSD past the life of a console, and MLC that reads at ~100MB/s is already consumer-cheap/accessible.
Considering that NAND prices are cut in half every 18 months, and that (judging by the latest BioWare interview) Microsoft will still be shipping heavy titles for the 360 in early 2012, which suggests fall 2012 for the next console release, NAND will be denser and cheaper by then.
But how much is "cheaper"?
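As a back-of-the-envelope on that halving assumption (the starting price per GB is just a placeholder, not a quoted market figure):

Code:
#include <math.h>
#include <stdio.h>

/* Sketch of the "NAND halves every 18 months" argument. The starting
 * price per GB is a placeholder, not a quoted market figure.            */
int main(void)
{
    const double price_per_gb_now = 2.0;  /* assumed $/GB today (placeholder) */
    const double months_to_launch = 24.0; /* e.g. mid-2010 -> fall 2012       */
    const double halvings         = months_to_launch / 18.0;

    double price_at_launch = price_per_gb_now * pow(0.5, halvings);
    printf("16GB cache: ~$%.0f now, ~$%.0f at launch (%.1f halvings)\n",
           16.0 * price_per_gb_now, 16.0 * price_at_launch, halvings);
    return 0;
}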
 
Not cheap enough. Take any sized SSD you want and consider the price halved or even quartered by the time the next consoles arrive, and you'll see it's either way too costly or way too small. It'd only be viable as a local cache, like ReadyBoost, either as a performance enhancement over the standard system or as a cheap compromise for a system sacrificing RAM (same design; the only difference is your take on it!). Flash isn't going to be a cheap alternative to an HDD in time for next gen's launch. Unless it's really late!
 
- Is all 12GB in the 360 reserved for cache? Maybe if only 4 or 8GB are, it would be easier to provide them as solid-state memory while keeping backward compatibility.

- A full SSD is of course out of the question, but NAND chips soldered on the board? Come on! Apple is currently selling iPods with up to 64GB and making big profits on each one!
Even if you don't go with 100MB/s NAND and put in only 4/8/16GB, we'd have a reasonable cost (how much?) and all the advantages from my first post.
 
The fast NAND, the stuff at HDD speeds, isn't cheap. And the cheap stuff isn't fast. The option of flash storage is viable, say 32GBs on board, but it won't be able to replace the HDD for caching. The low speeds of cheap flash just can't serve up data fast enough.
 