Predict: The Next Generation Console Tech

BTW guys, how do you see the change from a 4-core Steamroller at 3.2 GHz to the same processor as Durango?

Isn't it a weaker CPU?

A downgrade, yes, imo.

But a WHOLLLLLLLLLLLLLLLEEE lot less silicon, heat, and power draw.

Piledriver/Steamroller never seemed a realistic console CPU. Too power hungry.
 
You're not seeing the full picture, are you?

Console manufacturers always end up finding technically efficient solutions that, at launch time, only very expensive PCs can match. It has always been like that. True, the Xbox 360 GPU was relatively more powerful in its time than the PS4 GPU will be in its. But what about:
- CPU-GPU interaction and bandwidth?
- RAM quantity and bandwidth?

We are talking here about a UNIFIED 4 GB of GDDR5 at 192 GB/s (remember, PC games nowadays use DDR3 main RAM; how could they compete with first- and second-gen PS4 games designed with GDDR5 main RAM bandwidth in mind?). If true, that's a monster of a RAM architecture, far better relative to today's PC landscape than the disappointing 512 MB at 21.6 GB/s of the Xbox 360 was in its day.

Also, having an SoC with the CPU+GPU on the same chip overcomes the old PC bottleneck of PCI Express bandwidth between the CPU and GPU, an advantage the Xbox 360 didn't have over PCs in its time.
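To put rough numbers on that (my own back-of-envelope, nothing from a leak), here is what each link could move per 30 fps frame if fully saturated; the PCIe and DDR3 figures are the usual theoretical peaks:

```python
# Rough comparison of how much data each link can move per 30 fps frame.
# The PS4 figure is the rumored one discussed above; the others are standard
# theoretical peaks for a typical gaming PC of the time.
links_gb_per_s = {
    "Rumored PS4 unified GDDR5": 192.0,
    "PCIe 3.0 x16 (CPU <-> discrete GPU)": 15.75,
    "Dual-channel DDR3-1600 (typical PC)": 25.6,
}

for name, bw in links_gb_per_s.items():
    print(f"{name}: {bw:.1f} GB/s -> {bw / 30:.2f} GB per frame at 30 fps")
```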

My 1-year-old HD 7970 offers more than twice the FLOPS (plus 3 GB GDDR5 @ 264 GB/s) of a console being released ~2 years later. On the CPU side, any modern Intel 4-core/AMD 8-core would be way faster than a Jaguar clocked at ~1.6 GHz. Of course gaming PCs are more expensive; a high-end graphics card costs about the same as, if not more than, a modern console... but you get the point: that kind of tech is nothing new on the PC side. Yes, console-specific customizations will play their role, as always... but this time PCs will have a significant hardware advantage from the start (as opposed to X360's Xenos vs R520/R580 in 2005). It probably won't take long until mid-range PCs and high-end laptops can run multiplatform titles better than any next-gen console.
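Quick sanity check on the "more than twice the flops" claim; the console figure assumes the commonly rumored 18 CUs at 800 MHz, which is not confirmed anywhere in this thread:

```python
# Single-precision FLOPS = shader ALUs * 2 ops/clock (multiply-add) * clock
hd7970 = 2048 * 2 * 0.925e9           # 2048 ALUs at 925 MHz -> ~3.79 TFLOPS
orbis_guess = 18 * 64 * 2 * 0.800e9   # assumed 18 CUs * 64 lanes at 800 MHz -> ~1.84 TFLOPS

print(f"HD 7970:             {hd7970 / 1e12:.2f} TFLOPS")
print(f"Rumored console GPU: {orbis_guess / 1e12:.2f} TFLOPS")
print(f"Ratio:               {hd7970 / orbis_guess:.2f}x")
```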
 
That's not the first time they've pulled quotes from B3D. I suppose they assume it's OK since it's an anonymous forum, but sooner or later someone connects the dots and that leads to trouble. That's pretty much the main reason I don't post anything of substance here anymore.

Hearing this makes me SMH. Forum goers hungry for more info quoting certain people to put the pieces together is one thing, but it's another when you run a popular website and write an article directly linking said posts. The gaming 'press' truly has no tact and will do anything for a hit.
 
My 1-year-old HD 7970 offers more than twice the FLOPS (plus 3 GB GDDR5 @ 264 GB/s) of a console being released ~2 years later. On the CPU side, any modern Intel 4-core/AMD 8-core would be way faster than a Jaguar clocked at ~1.6 GHz. Of course gaming PCs are more expensive; a high-end graphics card costs about the same as, if not more than, a modern console... but you get the point: that kind of tech is nothing new on the PC side. Yes, console-specific customizations will play their role, as always... but this time PCs will have a significant hardware advantage from the start (as opposed to X360's Xenos vs R520/R580 in 2005). It probably won't take long until mid-range PCs and high-end laptops can run multiplatform titles better than any next-gen console.

It's not about price, it's about power consumption. Current high-end PCs have much higher power requirements than 2005 high-end PCs did.
 
My 1-year-old HD 7970 offers more than twice the FLOPS (plus 3 GB GDDR5 @ 264 GB/s) of a console being released ~2 years later. On the CPU side, any modern Intel 4-core/AMD 8-core would be way faster than a Jaguar clocked at ~1.6 GHz. Of course gaming PCs are more expensive; a high-end graphics card costs about the same as, if not more than, a modern console... but you get the point: that kind of tech is nothing new on the PC side. Yes, console-specific customizations will play their role, as always... but this time PCs will have a significant hardware advantage from the start (as opposed to X360's Xenos vs R520/R580 in 2005). It probably won't take long until mid-range PCs and high-end laptops can run multiplatform titles better than any next-gen console.

You're using homogeneous processors in your gaming PC. This rumor of the Orbis SoC describes a heterogeneous SoC. Don't make the mistake of viewing the eight Jaguars as the full CPU. We're talking about 8 Jaguars + X, where X is most likely a set of GCN compute units dedicated to GPGPU. As someone mentioned earlier, we're probably talking about a 300 GFLOPS CPU for Orbis, which is fairly hefty compared to other desktop CPUs, considering that this is a mobile-derived processor architecture.

And you have to bear in mind that the communication between the elements of the Orbis SoC will be much better than the communication between your multicore CPU/DDR3 RAM and your PCIe graphics card/GDDR5 RAM. Bandwidth- and latency-wise, the HSA-based SoC is most likely going to deliver very enjoyable performance. Not to mention the overhead of your thick PC API.
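For what it's worth, here is one way to land near that 300 GFLOPS figure; the 8 FLOPs/cycle per Jaguar core and the two reserved CUs are my assumptions, not anything from the rumor:

```python
jaguar_cores   = 8
jaguar_clock   = 1.6e9   # Hz, per the rumor
flops_per_core = 8       # assumed: 128-bit SIMD, 4 mul + 4 add per cycle

cpu_flops = jaguar_cores * jaguar_clock * flops_per_core   # ~102 GFLOPS

compute_cus = 2          # pure assumption: GCN CUs set aside as the "X" part
cu_clock    = 0.8e9
cu_flops    = compute_cus * 64 * 2 * cu_clock              # ~205 GFLOPS

print(f"Jaguar cluster: {cpu_flops / 1e9:.1f} GFLOPS")
print(f"Reserved CUs:   {cu_flops / 1e9:.1f} GFLOPS")
print(f"Total 'CPU':    {(cpu_flops + cu_flops) / 1e9:.1f} GFLOPS")
```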
 
You're using homogeneous processors in your gaming PC. This rumor of the Orbis SoC describes a heterogeneous SoC. Don't make the mistake of viewing the eight Jaguars as the full CPU. We're talking about 8 Jaguars + X, where X is most likely a set of GCN compute units dedicated to GPGPU. As someone mentioned earlier, we're probably talking about a 300 GFLOPS CPU for Orbis, which is fairly hefty compared to other desktop CPUs, considering that this is a mobile-derived processor architecture.

And you have to bear in mind that the communication between the elements of the Orbis SoC will be much better than the communication between your multicore CPU/DDR3 RAM and your PCIe graphics card/GDDR5 RAM. Bandwidth- and latency-wise, the HSA-based SoC is most likely going to deliver very enjoyable performance. Not to mention the overhead of your thick PC API.

Yup, exactly. Another important factor is that the main RAM this time has huge bandwidth in consoles (GDDR5) compared to PCs (DDR3); games designed around this specific PS4 feature would have a hard time running on any PC, even one with 16 GB of DDR4.

People here are downplaying and underestimating how much of a difference this would make for porting PS4 games to PCs. Let's give an example:

A hypothetical Naughty Dog game called Uncharted 4, running at 30 fps and sending 3.5 GB of data per frame to the SoC CPU+GPU (high-res textures, polygons, instructions, shaders, models, animations... whatever): how are you going to make that run on a PC whose DDR3/DDR4 main memory can only feed 1-2 GB per frame? It is of course feasible with the help of compression and by using the huge amount of main RAM and the very fast GPU RAM of very high-end PCs as caches, but it is impossible to do this correctly on a mid-range PC of 2013-2014. And developers won't take the risk of creating a game that can only run on very high-end PCs.
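Taking the 3.5 GB-per-frame figure at face value, a quick check of the bandwidth it implies; the PC comparison numbers are typical theoretical peaks, not from any rumor:

```python
data_per_frame_gb = 3.5
fps = 30

required_bw = data_per_frame_gb * fps   # GB/s the memory system must sustain
print(f"Required bandwidth:       {required_bw:.0f} GB/s")   # 105 GB/s

# For comparison (typical PC figures, not from the rumor):
print("Rumored PS4 GDDR5:        192 GB/s")
print("Dual-channel DDR3-1600:    25.6 GB/s")
print("PCIe 3.0 x16 to the GPU:   15.75 GB/s")
```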

That's the kind of situation PC games would face compared to exclusive PS4 games... In short, if the recent rumors of the PS4 specifications are true, then PS4 exclusive games could do some crazy things impossible for commercial PC games for at least the first 1-2 years of the console's lifetime. More precisely, I am thinking of very high-res textures all over the place (2048x2048), or very fast, complex and detailed gaming scenes running at a crazy 60 fps a la Gran Turismo... etc.
 
running at 30 fps and sending 3.5 GB of data per frame

Well, the 3.5 GB won't all be visual assets. What's a typical amount for regular game data? I guess it depends on the type of game (linear, open world, etc.).

Since they potentially have three different places to stream from (flash, 8x optical drive, HDD), could we be seeing 300+ MB/s streaming? Pretty safe to say we'll have much more variety in textures and environments than any current PC game.
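Very rough ballpark of what three simultaneous sources might add up to; all three rates are my guesses, not leaked specs:

```python
# Very rough sequential-read ballparks (assumptions, not leaked specs)
sources_mb_per_s = {
    "8x Blu-ray (8 * 4.5 MB/s)": 36,
    "5400 rpm HDD (outer tracks)": 100,
    "Flash cache": 200,
}
print(f"Combined: ~{sum(sources_mb_per_s.values())} MB/s if all three stream at once")
```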
 
Yup, exactly. Another important factor is that the main RAM this time has huge bandwidth in consoles (GDDR5) compared to PCs (DDR3); games designed around this specific PS4 feature would have a hard time running on any PC, even one with 16 GB of DDR4.

I seriously doubt 8 Jaguar cores can handle enough pending loads/stores to saturate a 256-bit GDDR5 memory interface*... While the GPU portion of the APU will likely use most of the theoretical bandwidth, I don't think the CPU will get anywhere near it.

* The Eurogamer article says neither that the memory interface is 256-bit nor that the GDDR5 is clocked at 1500 MHz... How can we be sure the bandwidth is really 192 GB/s? It could be a 128-bit interface with GDDR5 clocked at 1250 MHz (80 GB/s) for all we know. Most likely it will be 256-bit, but I don't think it makes sense to clock the memory that high (1500 MHz is the high-end, most expensive GDDR5).

EDIT: Given the chip densities, it seems 4 GB of GDDR5 implies a 256-bit memory interface. We are not sure about the clocks, though.
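For reference, the arithmetic behind those two scenarios: GDDR5 transfers 4 bits per pin per command clock, so peak bandwidth is bus width in bytes times clock times 4. A small sketch:

```python
def gddr5_bandwidth(bus_width_bits: int, command_clock_mhz: float) -> float:
    """Peak bandwidth in GB/s; GDDR5 moves 4 bits per pin per command clock."""
    return bus_width_bits / 8 * command_clock_mhz * 4 / 1000

print(gddr5_bandwidth(256, 1500))   # 192.0 GB/s (the rumored figure)
print(gddr5_bandwidth(128, 1250))   # 80.0 GB/s (the cheaper alternative above)
print(gddr5_bandwidth(256, 1250))   # 160.0 GB/s
```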
 
I seriously doubt 8 Jaguar cores can handle enough pending loads/stores to saturate a 256-bit GDDR5 memory interface*... While the GPU portion of the APU will likely use most of the theoretical bandwidth, I don't think the CPU will get anywhere near it.

* The Eurogamer article says neither that the memory interface is 256-bit nor that the GDDR5 is clocked at 1500 MHz... How can we be sure the bandwidth is really 192 GB/s? It could be a 128-bit interface with GDDR5 clocked at 1250 MHz (80 GB/s) for all we know. Most likely it will be 256-bit, but I don't think it makes sense to clock the memory that high (1500 MHz is the high-end, most expensive GDDR5).

I agree with both points. We should wait for the final CPU specifications and customizations to find out whether the system is well balanced or not (capable of handling huge gaming assets and sending the instructions to the GPU). Actually, the only disappointing thing about the PS4 for me is the CPU; maybe an additional 2 CUs could take physics or animation calculations off the CPU, another DSP could handle sound... who knows...

On the second point I also agree; I find it hard to believe the PS4 would end up with 192 GB/s. I always said the maximum Sony can afford, given cost restrictions for fall 2013, is 160 GB/s, and for me that 160 GB/s is a ceiling... Let's see whether Sony ends up surprising us with the crazy 192 GB/s bandwidth...
 
Probably because the Orbis alpha kit is said to have 192 GB/s of bandwidth (for the GPU), and other recent leaked info seems to maintain the same bandwidth specs.
 
I don't know enough about console devkit development practices to confirm either way.

I just have a hunch the vendors will tackle generation gap issues (upgrading consoles) and cross platform development this coming gen.

Is there a rough comparison between NVIDIA 7800 output vs. late-stage PS3 output? Because I want to see the closed-box optimization gain: just how far developers can go when they optimize for a specific box without API overhead.

I still think developers are the biggest contributor to quality output. Solid specs are a great start. The right technologies can simplify their jobs, though.
 
I think one of the most important things we need to understand about Durango is the memory architecture. Assuming a 256-bit interface to DDR3/DDR4, the lower and upper bounds for the main memory pool should be 59 GB/s at 1866 MT/s and 77 GB/s at 2400 MT/s, respectively. The big question mark is the embedded memory.
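Where those bounds come from, assuming the 256-bit interface stated above (a DDR interface moves bus-width bytes per transfer):

```python
def ddr_bandwidth(bus_width_bits: int, transfer_rate_mt_s: float) -> float:
    """Peak bandwidth in GB/s for a DDR interface at the given transfer rate."""
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

print(ddr_bandwidth(256, 1866))   # ~59.7 GB/s
print(ddr_bandwidth(256, 2400))   # ~76.8 GB/s
```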

My very optimistic wish: 128 MB at 1 TB/s accessible for any intent and purpose by any component in the SoC, connected to the other components through a crossbar or ring bus.

My most realistic wish: 64 MB at 512 GB/s accessible for any intent and purpose by any component in the SoC, connected to the other components through a crossbar or ring bus.

My guess based on the current rumors: 32 MB on a daughter die (in the same package) with the ROPs; internal bandwidth 819.2 GB/s*; bandwidth to the main die 102.4 GB/s. Maybe a smaller pool of ESRAM (8-16 MB) could be used in the main die to facilitate data sharing between the CPU and the GPU.

* The Xbox 360 has 8 ROPs at 500 MHz and 256 GB/s of internal bandwidth in the daughter die. If Durango has 16 ROPs at 800 MHz, scaling the bandwidth gives: (16/8) * (800/500) * 256 GB/s = 819.2 GB/s, which is incidentally 8x the rumored 102.4 GB/s; I believe that figure is only the bandwidth to the compute die.
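The same scaling, spelled out; the Durango ROP count and clock are the rumored figures, not confirmed:

```python
x360_rops, x360_clock_mhz, x360_bw = 8, 500, 256   # known Xbox 360 daughter-die figures
durango_rops, durango_clock_mhz = 16, 800          # assumed, per the rumors above

scaled_bw = (durango_rops / x360_rops) * (durango_clock_mhz / x360_clock_mhz) * x360_bw
print(f"Scaled internal bandwidth: {scaled_bw:.1f} GB/s")              # 819.2
print(f"Ratio to rumored 102.4 GB/s link: {scaled_bw / 102.4:.0f}x")   # 8x
```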
 
But the PS3 doesn't reserve a CPU core for the OS (it only has one core). I'm not too convinced: if the PS4 had a quad-core Bulldozer, reserving one full core would seem too much. It's a scheduler thing anyway; you could have it set up so games can use up to ~367% CPU, leaving 33% guaranteed for the OS (with affinity to one particular core if need be).
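To illustrate the affinity idea, a minimal sketch using Linux-style affinity via Python's os.sched_setaffinity; this is purely illustrative and has nothing to do with any console SDK:

```python
import os

TOTAL_CORES = 4
OS_RESERVED = {3}                                  # pretend core 3 is reserved for the OS
GAME_CORES  = set(range(TOTAL_CORES)) - OS_RESERVED

# Pin the current (game) process to the non-reserved cores; the system side
# would do the opposite with its own services.
os.sched_setaffinity(0, GAME_CORES)
print("Game now runs on cores:", sorted(os.sched_getaffinity(0)))
```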

The SPEs were a full "core" for those (or most any) purposes. If the AMD cores are the smallest computational silicon within the system that can handle the task, then it makes sense that some of these cores are reserved for OS functionality/security.
 
You're merely quoting this image:

http://ic.pics.livejournal.com/misterxmedia/21549619/42533/42533_original.jpg

It's from MisterXmedia and it's a hilarious pipedream; this isn't one of those "wish it into reality" type scenarios. The specs are completely ridiculous.

Yes it's from MisterXmedia, you can link the blog if you want
http://misterxmedia.livejournal.com/
or the Twitter profile:
https://twitter.com/misterxmedia

So you don't buy this rumor but you buy other rumors; this one is ridiculous and the others are not, fine.
I prefer to wait for the official info before calling something ridiculous :rolleyes:

Until then, I prefer to dig into the rumors as deeply as I can. Neither I nor you know the truth; you can believe what you want, and the same goes for me.

If the rumored CPU is a wafer-to-wafer 3D-stacked CPU, then it uses 75% less power than a standard CPU and produces 75% less heat, so you could easily use double the number of transistors and stay under the 300 W mark.

What's the reason that makes you believe this isn't possible?
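Taking the 75% figure at face value (I'm not endorsing it), the arithmetic the argument leans on looks like this; the 300 W baseline is hypothetical, and it assumes power scales linearly with transistor count:

```python
standard_power_w = 300                              # hypothetical standard design at the 300 W mark
stacked_power_w  = standard_power_w * (1 - 0.75)    # the claimed 75% saving -> 75 W
doubled_w        = stacked_power_w * 2              # double the transistors -> ~150 W

print(f"{doubled_w:.0f} W, still under 300 W, *if* the 75% figure were real")
```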
 
The slide that caught my attention is #20 on page 10. It seems to offer a very high performance-per-cost ratio compared to a DSP. Not sure how an SPU would measure up.

It's a matter of specific implementations though - in theory a DSP can be as incredible as its designers make it for its given taskloads... it just won't be reprogrammable after the fact.

I'm still interested to find out whether these h/w units are in, and whether there really are extra compute units in Orbis.

Sure, it's always the mystery bits that are interesting after all!
 
Pretty sure the secret sauce from either company is going to be these newfangled stacked FPGAs like the Virtex-7 2000T from Xilinx, or something similar from Altera. Maybe not that exact model, but something cut down or comparable. From what I've read they're insanely efficient at image processing and comparable to or better than CPUs/GPUs in certain tasks. Also they're very low power and reprogrammable to boot.

http://low-powerdesign.com/sleibson...livers-1954560-logic-cells-consumes-only-20w/

If Sony's using this at all it'd make sense since it fits into their medical imaging and CMOS sensor interests. MS is probably on this track too.
 
Yes it's from MisterXmedia, you can link the blog if you want
http://misterxmedia.livejournal.com/
or the Twitter profile:
https://twitter.com/misterxmedia

So you don't buy this rumor but you buy other rumors; this one is ridiculous and the others are not, fine.

What's the reason that makes you believe this isn't possible?

Just for some forum background: MrCteam/MrXMedia was enough of a joke that he too was banned from these forums, just like MrRigby. So yes, his rumors are complete bullshit.
 
No it doesn't. Physics puts its foot down. You have 20 years and many, many billions of dollars of research getting us to architectures like GCN and Kepler. There is no amount of money Microsoft could have spent a few years ago to magically and suddenly alter the equation in their favor.

Sorry, but do you assume the current AMD/NV designs are the end of the pipeline? These companies obviously have parallel teams developing future designs, and MS could surely finance AMD to accelerate a next-generation design, combined with some DX11++ extensions. I can't really see why they shouldn't play to their strengths to get an edge over the competition.
 