Predict: The Next Generation Console Tech

Many of these ideas already exist in nascent form on PS3 today. They are applied to Cell as tech demos, HPC projects, or shipping games. ^_^

It'd be a shame to let all of 'em die or get buried.

In PS4, Sony seems to be switching to a mainstream setup? It will be interesting to see the difference in architecture and approach from the same teams.

I started noticing the same (DMA engines, a high-speed ring bus connecting processors at the L2 and local stores, SPU-like GPGPU and GPU-assist functionality, etc.). How will developers be able to use all of these without it being too difficult or hated? One of the complaints from 3rd-party devs with PS3 was the bandwidth bottleneck. 1st-party devs said they weren't running into that problem, thanks to good use of the EIB to the SPUs' local stores and DMA lists.

With next-gen, will the way 3rd-party developers design their games change to use these features efficiently?
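
Purely as illustration of what those DMA lists look like on Cell today, here's a minimal SPU-side sketch using the Cell SDK's spu_mfcio.h gather-list interface. The fetch_scattered name, the sizes, and the address layout are made up for the example:

```c
/* Minimal sketch of an SPU DMA gather list (Cell SDK, spu_mfcio.h).
 * Illustrative only -- sizes and addresses are invented. */
#include <spu_mfcio.h>

#define CHUNK  (16 * 1024)   /* 16 KiB, the MFC's max per list element */
#define NCHUNK 4

/* The list and the destination buffer must live in local store. */
static mfc_list_element_t list[NCHUNK] __attribute__((aligned(16)));
static char buf[NCHUNK * CHUNK]        __attribute__((aligned(128)));

void fetch_scattered(unsigned long long ea_base)
{
    unsigned int i, tag = 1;

    /* Build the gather list: each element carries a transfer size and
     * the low 32 bits of an effective address; ea_base supplies the
     * high word. The MFC walks the list autonomously. */
    for (i = 0; i < NCHUNK; i++) {
        list[i].notify = 0;
        list[i].size   = CHUNK;
        list[i].eal    = (unsigned int)(ea_base + i * CHUNK);
    }

    /* Kick off the list transfer into local store ... */
    mfc_getl(buf, ea_base, list, sizeof(list), tag, 0, 0);

    /* ... keep computing here, then block on completion. */
    mfc_write_tag_mask(1 << tag);
    mfc_read_tag_status_all();
}
```

The point the 1st-party devs were making is visible here: the MFC streams data into local store on its own, so the SPU overlaps computation with the transfers instead of stalling on main-memory bandwidth.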

Has anyone cast serious doubt on the potential use of stacked RAM with Durango?
 
I started noticing the same (DMA engines, a high-speed ring bus connecting processors at the L2 and local stores, SPU-like GPGPU and GPU-assist functionality, etc.). How will developers be able to use all of these without it being too difficult or hated? One of the complaints from 3rd-party devs with PS3 was the bandwidth bottleneck. 1st-party devs said they weren't running into that problem, thanks to good use of the EIB to the SPUs' local stores and DMA lists.

With next-gen, will the way 3rd-party developers design their games change to use these features efficiently?

Spend the time to design and integrate them properly: either as an integral part of the GPU (i.e., a bigger GPU at higher "cost"), or fitted into a logical pipeline without overhead. On Cell it's flexible, but the devs have to do a lot of work to make the gains worthwhile.
 
This matches pretty well with the three-DSP information.
1) HW audio/video acceleration
2) HW scaler (and something more?)
3) HW Kinect computation acceleration
4) For the extra mystery sauce: eDRAM/eSRAM.

Those 3 PPC cores for backward compatibility, if they're in the final design, could be one place where the mm² partially disappear.

It's certainly possible. How big would those 3 PPC cores be now at 28nm?

Also, the 6-8 ARM/x86 cores on the left of that slide are what'll be in the system. The center purple box with the 2x ARM/x86 and 48 ALUs @ 500 MHz is just what'll be active in the low-power always-on mode, not an extra set of CPU & GPU like most people thought at first glance.
 
Based on my understanding, swizzling, at least in this case, involves taking a set of data and re-ordering it. Practically, the unit he is suggesting would be able to "move" data in memory from where the CPU can access it optimally to where the GPU can access it optimally, and vice versa. The swizzling support would let it simultaneously convert the data from a CPU-friendly format to a GPU-friendly format and back again.
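
For the curious, here's roughly what that re-ordering means in practice. This is my own minimal sketch of one common GPU-friendly layout, Morton/Z-order, not anyone's actual hardware; the function names are invented:

```c
/* Morton/Z-order swizzling: interleave the x and y bits of a texel
 * address so 2D-local texels land near each other in memory. */
#include <stdint.h>

/* Spread the low 16 bits of v out to the even bit positions. */
static uint32_t part1by1(uint32_t v)
{
    v &= 0x0000FFFF;
    v = (v | (v << 8)) & 0x00FF00FF;
    v = (v | (v << 4)) & 0x0F0F0F0F;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

/* Linear (CPU-friendly, row-major) coords -> Morton (GPU-friendly) index. */
static uint32_t morton_index(uint32_t x, uint32_t y)
{
    return part1by1(x) | (part1by1(y) << 1);
}

/* Re-order a square power-of-two texture from row-major to Morton order.
 * A "move engine" with swizzle support would do this copy (and the
 * reverse) without burning CPU or GPU time on it. */
void swizzle_texture(const uint32_t *src, uint32_t *dst, uint32_t dim)
{
    for (uint32_t y = 0; y < dim; y++)
        for (uint32_t x = 0; x < dim; x++)
            dst[morton_index(x, y)] = src[y * dim + x];
}
```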

Isn't the entire idea behind HSA that you don't even have to 'move' that data? The GPU and CPU simply access the same data. Or am I wrong on this?
 
Isn't the entire idea behind HSA that you don't even have to 'move' that data? The GPU and CPU simply access the same data. Or am I wrong on this?

It depends. A DSP, for example, usually has its own memory, like an SPU, so you will still need to move data into it efficiently.

Logically, the GPU and CPU can read/write the same memory, but there are also contention and coherency issues. In any case, it may not be a clear case of better/worse unless the need is always there and the speed-up/cost saving is consistent, or it opens up a new problem/application area.
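
To make the contrast concrete, a toy C sketch of the two models; the names (dsp_local_mem, dma_to_local) are made up and no real API is implied:

```c
/* Hypothetical contrast: shared-memory access vs. explicit staging
 * into an accelerator's private local memory. */
#include <stddef.h>
#include <string.h>

#define LOCAL_SIZE (256 * 1024)
static char dsp_local_mem[LOCAL_SIZE];  /* stand-in for a DSP/SPU local store */

/* HSA-style: CPU and GPU dereference the same pointer. No copy, but
 * the hardware now has to arbitrate contention and keep caches coherent. */
float shared_read(const float *shared, size_t i)
{
    return shared[i];
}

/* DSP/SPU-style: data is explicitly staged into fast private memory
 * before the accelerator touches it (modeled here as a plain memcpy;
 * in reality a DMA engine would do this). */
void dma_to_local(const void *src, size_t n)
{
    memcpy(dsp_local_mem, src, n < LOCAL_SIZE ? n : LOCAL_SIZE);
}
```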
 
I swear, at some point in the last 18,000 posts it was expected to be an 8970 equivalent, lots of eDRAM, 16 GB of HMC memory on an interposer, a special-sauce chip, 256 GB of flash, a 500 GB HVD drive, and it will be $299.
 
People have been expecting 8850/8870-equivalent cards to be in Durango. I think an 8750/8770 is more likely the case for Durango, and an 8830/8850 for Orbis. The Orbis GPU has 50% more compute units.

I've been expecting 768 shaders @ 800 MHz for a long time now; if they have more, or a higher clock, I'll be pleasantly surprised.
 
According to the leaked specs, the HD 8850 is a 3 TF, 130 W card.
I have my doubts.

I imagine if they used it, it would have clocks reduced 20% and some functional units disabled. They'd want it in the 80-100 W range at most, I'd imagine. 20% lower clocks and 20% fewer functional units gets you from 3 TF down to 1.92 TF, which is near the 1.8 TF rumored for Orbis.
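
For reference, the arithmetic behind all these TF figures is just shaders x clock x 2 FLOPs (one fused multiply-add per ALU per cycle). A quick sanity-check in C, using numbers from this thread:

```c
/* Peak-FLOPS arithmetic for the GPU figures discussed above. */
#include <stdio.h>

static double peak_tflops(int shaders, double ghz)
{
    return shaders * ghz * 2.0 / 1000.0;   /* GFLOPS -> TFLOPS */
}

int main(void)
{
    printf("%.2f TF\n", peak_tflops(768,  0.800));  /* ~1.23 TF          */
    printf("%.2f TF\n", peak_tflops(1152, 0.800));  /* ~1.84 TF (18 CUs) */
    printf("%.2f TF\n", 3.0 * 0.8 * 0.8);           /* 1.92 TF           */
    return 0;
}
```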
 
If I had to bet, it would be on the 18-CU rumor: 1152 shaders.

It would be just under 200 mm² and underclocked/undervolted for better yields and lower power consumption.
Anything less than that would be disappointing.
 
According to the leaked specs, the HD 8850 is a 3 TF, 130 W card.
I have my doubts.

I also have my doubts about those leaked specs on those 88X0 cards, but that's for another thread.

I still think something like 1024 to 1280 SPs @ 800 MHz isn't too unrealistic, but I'm not keeping my hopes up.
 