Predict: The Next Generation Console Tech

The rumors point to DDR3 and slow access speeds for the RAM.

DDR3 could work at a high clock and decent bandwidth; slow access has nothing to do with the type of RAM. Slow access speeds can also be partially solved with easy access to the embedded eSRAM/eDRAM. So... we're back to square one.
 
DDR3 could work at a high clock and decent bandwidth; slow access has nothing to do with the type of RAM. Slow access speeds can also be partially solved with easy access to the embedded eSRAM/eDRAM. So... we're back to square one.

Yes, I know. Theoretically, what would the speed be for DDR3 + eSRAM/eDRAM vs DDR4 + eSRAM/eDRAM?
 
DDR3 could work at a high clock and decent bandwidth; slow access has nothing to do with the type of RAM. Slow access speeds can also be partially solved with easy access to the embedded eSRAM/eDRAM. So... we're back to square one.
Maybe you can say that on NeoGAF:
http://www.neogaf.com/forum/showthread.php?t=507910&page=14

Plus, I still don't know why, every time NeoGAF talks about RAM, they automatically assume Durango will 100% use DDR3.

It's like some of them are trying every way to make Durango sound like crap, even though everything right now is just rumors.
 
Also, Bkilian's last post does indeed sound quite promising!
Don't read _too_ much into it, it's just that PCs have to be general purpose. A console can spend silicon budget on fixed function stuff that can accelerate things way beyond what a PC could accomplish. The 360, for instance, can decode 320 or so simultaneous XMA streams without involving the CPU very much at all. To do the same in software would be prohibitive. Items like that, if used well, can result in the ability to perform feats that cannot currently be done on any CPU or GPU today.
 
Right, but if you optimize DDR3 for high bandwidth, you just end up with GDDR5 anyway. And as the WiiU has demonstrated so well, a pool of embedded memory doesn't solve all your bandwidth problems if your main memory is much slower than peer devices.

What we'll end up with is devs having to manage their LOD carefully to get data into the embedded memory when they need it on Durango, and doing the same thing on the other end on Orbis, to make sure assets are loaded from disc/flash cache into main memory when needed. I wouldn't expect huge visual differences on either, since any modern game is going to be streaming the environment like crazy in both cases. Depending on the effects in use, either memory architecture could have the advantage in a given situation. And as long as Bethesda doesn't end up with multi-gigabyte saves trying to stay in memory next gen, I wouldn't foresee any Skyrim-style debacles for Orbis.
 
RSX was also not unified, so potentially a lot less efficient.
If you think about it, unified and more efficient vs. not unified with more peak performance could be exactly what we're looking at once again, just in a broader sense ;)

If Microsoft really went for a fully integrated HSA design, wouldn't having some very low-latency SRAM (basically a really fast, shared, chip-level cache) + a lot of DRAM make way more sense for them than going for lots of bandwidth?

If SONY really had a bunch of HSA-enabled, yet more specialized hardware components "glued together" at the multi-chip level, wouldn't going for lots of bandwidth between them make way more sense than having a huge amount of memory?


Basically, if two or more heterogeneous minds could actually think together without having to speak, they could (and should) prioritize low latency over bandwidth, as the data directly shared between them in the process of achieving common results typically comes in rather small but very frequent chunks.

If two or more heterogeneous minds could efficiently co-work on a wider set of tasks but had to communicate their more individually compiled results in order to achieve their common goal, they wouldn't need to meet that frequently, but there would be a hell of a lot of information to communicate between them ...

The latter approach, though it has some obvious disadvantages and is less efficient, has two main pros: it takes less mindwork to realize, and, specifically because it is not completely unified (and rather relies on multiple chips glued together), it makes it a lot easier to bring more specialized team members in ...


If I was a Trekkie, I'd probably break it down arguing that Microsoft seems to strive for a perfectly synchronized BORG Collective while SONY likes to keep their stuff way closer to the United Federation of Planets ... ;)
 
Yes, I know. Theoretically, what would the speed be for DDR3 + eSRAM/eDRAM vs DDR4 + eSRAM/eDRAM?

DDR3-2133 on a 256-bit bus would be three times the Xbox 360's bandwidth (256 bits at 1066 MHz vs 128 bits at 700 MHz).
As I'm lazy: DDR3-2133 is sold as "PC3-17000" (17,066 MB/s per 64-bit module), and a 256-bit bus is four modules wide, so total bandwidth is four times that: 68.3 GB/s.

Despite all the hubris, this is not much behind a Radeon 7770 or a GTX 650, and similar to stuff like the 9800 GTX and Radeon 4850.
I thus think the console could work on 256-bit DDR3 alone and not suck that much.

If it uses DDR4? Let's say it has DDR4-2133: same bandwidth, worse latency :p, so a slower console than with DDR3. But perhaps it has DDR4 running at a 50% higher clock ("DDR4-3200"). Is that the best case? The "slow" memory breaks the 100 GB/s barrier! Latency is probably mediocre, but you live with it (the skimpy ~1.8 GHz CPU cores that put up with low-power DDR3 in high-end tablets will survive it).
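For what it's worth, here's a quick sanity check of those numbers (a minimal sketch; the 256-bit bus and the DDR4-3200 figure are the rumored/speculated values from above, not confirmed specs):

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width in bytes.
# Overheads (refresh, command turnaround, etc.) are ignored here.

def peak_bandwidth_gb_s(mega_transfers_per_s, bus_width_bits):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return mega_transfers_per_s * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gb_s(1400, 128))  # Xbox 360 GDDR3 (700 MHz, DDR): ~22.4 GB/s
print(peak_bandwidth_gb_s(2133, 256))  # DDR3-2133, 256-bit: ~68.3 GB/s (~3x the 360)
print(peak_bandwidth_gb_s(3200, 256))  # speculative DDR4-3200, 256-bit: ~102.4 GB/s
```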

Only people who buy gas-guzzler graphics cards every year will think this bandwidth is very low. Everything below 150 euros (except a few 7850 1GB cards) is stuck with 128-bit GDDR5 at best, and the best of those cards (7770, GTX 650 Ti) run recent games.

I've not talked about the eSRAM because I think we know nothing about it. All in all, I'm just realizing this next Xbox is probably better than I thought; the eSRAM adds a lot of icing on the cake. Hell, developers may put framebuffers/G-buffers in it and repeat what was done on the X360.
Or, if permitted by the GPU and at the developer's whim, why not put the buffers in the big system RAM (I know, this is supposed to be totally the wrong thing to do, but it's livable), then do some crazy stuff with a lot of HSA within that memory space.
 
An FX-8350 at 4.6 GHz will easily pull 200 W+, undervolted or not.

This type of stuff is misleading. This tells you what the motherboard is drawing, not the CPU.

First, the power regulators aren't 100% efficient; more like 85% (which brings the actual draw down to 170 W).

Then there's everything else being powered on the motherboard: RAM, PCI, HyperTransport, SATA, southbridge, audio, fan controllers, keyboard, mouse, BIOS, etc. That's at least another 50 W.

The actual CPU wattage is closer to 120 W.
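In other words (a rough sketch of that estimate; the 85% regulator efficiency and ~50 W of platform overhead are the ballpark assumptions above):

```python
# Back-of-the-envelope CPU power from a motherboard-level measurement.

def estimate_cpu_watts(measured_watts, vrm_efficiency=0.85, platform_watts=50):
    """Strip out VRM losses and everything else the board powers."""
    delivered = measured_watts * vrm_efficiency  # 200 W * 0.85 = 170 W actually delivered
    return delivered - platform_watts            # minus RAM, SATA, southbridge, fans, etc.

print(estimate_cpu_watts(200))  # ~120 W for the CPU itself
```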
 
Proelite, back in June you posted this, which then proceeded to do the rounds of the interwebz:
http://www.neogaf.com/forum/showthread.php?t=478941

It mentions a 1-1.2 TF GPU; why did you go from that to saying there's a GTX 680 equivalent in the kits?

And do you still think Durango has only 4 GB of memory?

There's also a bluedevilstudent on the SemiAccurate forum saying SpecialGuy's (aka Rangers) specs are accurate:
http://semiaccurate.com/forums/showpost.php?p=175011&postcount=105

SpecialGuy on GAF pretty much posted the real specs for Durango in the Chinese rumor thread. It is spot on.

8 upgraded Jaguar cores
8 GB RAM
8000-series GPU with specs like those of a 7770 GHz Edition + eSRAM
+ hefty pseudo-GPU (unlike anything we've seen before) that assists the main GPU; marketing flops and documents don't include the numbers for that part yet
+ general-purpose DSP that can be used for audio or graphics

Do we know this guy?

A pseudo-GPU sounds interesting.

Interesting.

+ general-purpose DSP that can be used for audio or graphics = "Common" media units

+ hefty pseudo-GPU = additional AA, post-processing, an additional vertex/tessellation unit

... to jump in and help the GPU, like Cell?
FPGAs? How many? :runaway:

EDIT: ... and hooked up to their fast memory to avoid the "hardcoded" Smart AA limitation of the 360?
 
I love how people are referring to > 1TF as "weak". How fickle we are. When the 360 launched, a machine capable of over a teraflop would almost make it onto the top 100 supercomputer list (6 months earlier, it would have _been_ on the list).

Stop comparing these systems to high end PC GPUs that require 300 watts just to function. For one, they have different constraints and requirements. For another, it's not all about the GPU. Raw GPU flops does not tell the whole story. I guarantee you, Microsoft's next system will be able to do things your current computer does not have the resources to do, and I say that knowing some of you folks have monster PC setups. (Dunno about Sony, I know next to nothing about their system)

It's also true that eight years will have passed since the 360 ;)
I am very curious to see what MS has come up with; the Xbox 1 and 360 were very good in terms of performance, so the rumors pointing to a "weak" console seem strange to me.
 
This type of stuff is misleading. This tells you what the motherboard is drawing, not the CPU.

First, the power regulators aren't 100% efficient; more like 85% (which brings the actual draw down to 170 W).

Then there's everything else being powered on the motherboard: RAM, PCI, HyperTransport, SATA, southbridge, audio, fan controllers, keyboard, mouse, BIOS, etc. That's at least another 50 W.

The actual CPU wattage is closer to 120 W.

The figure is for the CPU only. Motherboard and DRAM don't add up to 50 W; especially if you have an SSD, it's more like 25-30 W.
 
DDR3-2133 on a 256-bit bus would be three times the Xbox 360's bandwidth (256 bits at 1066 MHz vs 128 bits at 700 MHz).
As I'm lazy: DDR3-2133 is sold as "PC3-17000" (17,066 MB/s per 64-bit module), and a 256-bit bus is four modules wide, so total bandwidth is four times that: 68.3 GB/s.

Despite all the hubris, this is not much behind a Radeon 7770 or a GTX 650, and similar to stuff like the 9800 GTX and Radeon 4850.
I thus think the console could work on 256-bit DDR3 alone and not suck that much.

My two-year-old card (GTX 460) has fewer flops (900 GFLOPS) and much higher bandwidth (115 GB/s), so it's hard to say where the right balance is. Also, if the PS4 has 192 GB/s of bandwidth, third-party developers would most likely be complaining. If MS is going for such low bandwidth (~70 GB/s), I would bet on a large chunk of eDRAM (at least 64 MB, possibly 128 MB).
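One way to frame that balance question is compute per byte of bandwidth (illustrative only; the ~1.2 TF console figure is just the rumored number from earlier in the thread):

```python
# FLOPs available per byte of memory bandwidth: higher means more bandwidth-starved.

def flops_per_byte(gflops, bandwidth_gb_s):
    return gflops / bandwidth_gb_s

print(flops_per_byte(900, 115))    # GTX 460: ~7.8 FLOPs per byte
print(flops_per_byte(1200, 68.3))  # rumored ~1.2 TF GPU on 256-bit DDR3: ~17.6 FLOPs per byte
```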

If it uses DDR4? Let's say it has DDR4-2133: same bandwidth, worse latency :p, so a slower console than with DDR3. But perhaps it has DDR4 running at a 50% higher clock ("DDR4-3200"). Is that the best case? The "slow" memory breaks the 100 GB/s barrier! Latency is probably mediocre, but you live with it (the skimpy ~1.8 GHz CPU cores that put up with low-power DDR3 in high-end tablets will survive it).

Well, 256-bit DDR4 at 3200 MT/s sounds like the best-case scenario unless they are using stacking. I'm not sure why everybody seems to assume MS is not going to use stacked memory.

I've not talked about the eSRAM because I think we know nothing about it. All in all, I'm just realizing this next Xbox is probably better than I thought; the eSRAM adds a lot of icing on the cake. Hell, developers may put framebuffers/G-buffers in it and repeat what was done on the X360.

I don't think we can expect more than 16-20 MB of eSRAM; it would already make the chip huge. That is certainly not enough for single-pass deferred shading at 1080p. Unless they have very good reasons not to, I hope they go for a large chunk of eDRAM instead of a smaller chunk of lower-latency eSRAM.
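For a rough sense of the sizes involved (a sketch assuming a typical four-target G-buffer plus depth, not any confirmed Durango layout):

```python
# Approximate G-buffer footprint at 1080p with four 32-bit render targets
# plus a 32-bit depth/stencil buffer (5 x 4 bytes per pixel).

width, height = 1920, 1080
bytes_per_pixel = 5 * 4
gbuffer_mib = width * height * bytes_per_pixel / (1024 ** 2)
print(f"{gbuffer_mib:.1f} MiB")  # ~39.6 MiB, comfortably past a 16-20 MB eSRAM pool
```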
 
Saw another rumored Durango spec on a Spanish forum:
http://www.vandal.net/foro/mensaje/800661/xbox-next-pag-32/58
(#859)
Hi all: according to a friend working at Micro, a computer engineer who knows the console well enough, here you go.

CPU: x4 cores / x16 threads

RAM: 8 GB

GPU: (not sure) could be the high end of the HD 7000 range, 2 GB VRAM

Price: quite high; I can't say much more, sorry xD

The games already running practically look like Avatar on Xbox 8, coming out in late 2013 as we all hope.

As I said before, I'm from 3DJuegos; I'm no troll. I've known this forum for 3 years but never got around to registering. The moderation here is amazing.

Hope Durango Unchained before April btw
 
Don't read _too_ much into it, it's just that PCs have to be general purpose. A console can spend silicon budget on fixed function stuff that can accelerate things way beyond what a PC could accomplish. The 360, for instance, can decode 320 or so simultaneous XMA streams without involving the CPU very much at all. To do the same in software would be prohibitive. Items like that, if used well, can result in the ability to perform feats that cannot currently be done on any CPU or GPU today.

How much more demanding is decoding XMA streams over MP3 streams?

Will there be a requirement or push for developers to use that DSP next gen? I know it wasn't really used this gen.
 
Sony could make an in-house GPU in 1999 for its PS2, but not today. The complexity of a current big GPU makes me puke; even the drivers are a nightmare (it should be a ton easier on a console, but you still need that shader compiler and such).
It would have been a billion-dollar effort launched many years ago.

Even for the PS Vita, they simply used a PowerVR.

I'm talking about the APU design, not the components. The Vita chip is stacked and was designed by Sony:

http://www.chipworks.com/blog/techn...-ps-vita-uses-chip-on-chip-sip-3d-but-not-3d/
 