Predict: The Next Generation Console Tech

I've never really paid much attention to him, but I did notice he has a penchant for grandiosity, boasting about how they caused MS to spend $1B to increase the memory in the 360.
For salesmen, I have nothing against boasting, or grandiosity, as long as it's reasonably factual, not ill-informed manipulative crap. He also said that Sony's Blu-ray was a cost they couldn't reduce over time and that it was their Achilles' heel; in reality, the optical engine went from around $150 (KES-400A) to $30 (KES-450A) in three years, and it's still falling even lower today.
 

I agree, I was just pointing out what I noticed. But I didn't take any manipulation from that quote. The fact that he chose a card that old should have made the sarcasm pretty clear. I don't see how that could have been reinterpreted as him meaning one 580.
 
A comparison between AMD and Nvidia GFLOPS is not a valid one.
They are both theoretical peak numbers, not real world numbers.

Having an AMD GPU in a console will do nothing to get the AMD GPU's FLOPS rating closer to its theoretical max.
 
Some Radeon HD 7870 (Pitcairn) info. Looks like 130W for the full board (almost 100W for the 7850). 212mm^2, 2.8B transistors, 1GHz clock, 256-bit bus, 1280 stream processors, 80 TMUs, 32 ROPs, 2GB 1.2GHz memory (153.6GB/s), etc.; the 7850 is 860MHz with the same memory. Advertised power is 175-190W and 130-150W (the second figures being PowerTune), but this link from Dave B caught my eye:

Well, the R7870 used roughly 100 Watts LESS than that GTX 580. When we reverse calculate and measure the power consumption, the R7870 uses roughly 130 Watt where the GTX 580 hovers at 235~240W, and that's measured in-game while it's peaking and stressed. So for the R7850... well, we measure roughly 106 Watt. Amazing stuff really.

Pretty impressive performance given the power envelope--I am curious how much memory power draw also plays into this. I do wish more people would offer some more settings in these reviews, e.g. BF3 with MSAA and without (ditto MSAA+FXAA) would be nice. They mention 4xMSAA easily sucks down a ton of performance ("4x MSAA will cost you almost a third to half your framerate"), so at 39fps for the 7870 at 1920x1200 does that work out to an average closer to 50fps or 60fps?

http://www.guru3d.com/article/amd-radeon-hd-7850-and-7870-review/1
http://www.guru3d.com/article/amd-radeon-hd-7850-and-7870-review/21
http://www.guru3d.com/article/amd-radeon-hd-7850-and-7870-review/24

Anyhow, I thought it was interesting info. If consoles launch on 28nm (with no goal of a fast shrink to accommodate excessive launch power budgets) it would be hard to expect a lot more than an HD 7870. It basically fits the size and the TDP. Maybe a better memory arrangement (stacked memory? see old SA AMD future GPU nugget) and some maturing/reworking over the next couple years allowing a slightly larger die and architectural evolution, but at < 250W on the 28nm node I think it is safe to say that a console using such isn't going to dwarf a 7870 in raw specs. Definitely possible we could/would see something better but nothing that blows it away unless budgets really changed.

AMD's current pricing strategy (pegged closely to NV), with the 2GB 7850 coming in at $249 retail (and you know AMD, the manufacturers, and retailers are all getting solid cuts on the current line-up), along with the size of the die, is a good indicator that this sort of chip should fit into a console budget.
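For reference, the headline numbers in that spec sheet check out with some quick arithmetic -- a rough sketch in Python, assuming the usual 2 FLOPs per ALU per clock and GDDR5's 4x effective data rate:

[CODE]
# Rough sanity check of the HD 7870 spec sheet numbers quoted above.
# Assumes 2 FLOPs (FMA) per stream processor per clock and GDDR5's
# quad-pumped effective data rate -- standard marketing math, not measured.

stream_processors = 1280
core_clock_ghz = 1.0
peak_tflops = stream_processors * 2 * core_clock_ghz / 1000.0
print(f"Peak compute: {peak_tflops:.2f} TFLOPS")        # ~2.56 TFLOPS

bus_width_bits = 256
mem_clock_ghz = 1.2            # base clock; effective rate is 4x for GDDR5
bandwidth_gbs = bus_width_bits / 8 * mem_clock_ghz * 4
print(f"Memory bandwidth: {bandwidth_gbs:.1f} GB/s")    # 153.6 GB/s, as quoted
[/CODE]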
 

A 7870 in a console would be great. GTX 580 level performance in a closed environment... Let's see if PS4 is going for that, as I suppose Microsoft isn't.
 
Why do you suppose MS isn't? Just curious.

At 212mm^2, 256-bit bus, and some media indicating total board draw with 2GB of fairly fast GDDR5 memory to be about 130W, it comes in at or below budgets set for last gen consoles in general. Unless MS is aiming at a more Wii-like console with rudimentary ARM cores and intentionally not targeting GPU performance? I am a little behind on rumors.

EDIT: Per my BF3 comment a couple posts up, when 4xMSAA is disabled but everything else is maxed (including FXAA) at 1920x1200, it seems the 7870 hits about 60fps. Not bad... but in the context of next consoles, is everyone ready to fork over $400 for a console that gives BF3 graphics (current gen) but at 60Hz and with cleaner image quality (textures, resolution, settings)? Generationally I think it is safe to say we have seen larger visual jumps.
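Working backwards from the review's own "a third to half your framerate" wording, that ~60fps figure is about what you'd expect -- a quick sketch, taking the quote literally:

[CODE]
# Inverting "4xMSAA costs a third to half your framerate": if 39 fps is the
# with-AA result at 1920x1200, the MSAA-off figure should land in this range.

fps_with_msaa = 39.0
for cost in (1 / 3, 1 / 2):                 # the review's "a third to half"
    fps_without = fps_with_msaa / (1 - cost)
    print(f"Cost {cost:.0%}: ~{fps_without:.0f} fps without 4xMSAA")
# Prints ~58 fps and ~78 fps, so ~60 fps with MSAA off is entirely plausible.
[/CODE]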

As Erinyes pointed out, in the Hardware.fr review they dropped the stock voltage from 1.219V to 1.050V and their power load dropped from 124W to 94W.
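That lines up with simple voltage-squared scaling of dynamic power -- a back-of-the-envelope Python check, assuming fixed clocks and negligible leakage (so not a rigorous model):

[CODE]
# Dynamic power scales roughly with V^2 at a fixed clock. Leakage is ignored,
# which is optimistic, but it shows the Hardware.fr undervolt result is sane.

v_stock, v_undervolt = 1.219, 1.050
p_stock_w = 124.0                              # measured load power at stock voltage

p_predicted_w = p_stock_w * (v_undervolt / v_stock) ** 2
print(f"Predicted load at {v_undervolt}V: ~{p_predicted_w:.0f} W")   # ~92 W vs the ~94 W measured
[/CODE]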

And if we needed any other reason why everyone is flocking to AMD: look at the performance/W. AMD is running away there with the 7870 being the front runner.

 
Just because of the rumours that say Microsoft is going for a 6670-level-performance GPU and Sony for an interposer.

That is like 118mm^2 on 40nm. That versus a 7870 class PS4 would fall well within the disparity my recent poll inquired about in terms of purchasing preference. That said I would be shocked at such a disparity.
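Putting the rumoured gap in rough numbers -- theoretical peaks only, using stock desktop specs for the 6670 (Turks XT) since the rumour is just "6670 level"; this is a sketch, not a claim about actual console parts:

[CODE]
# Stock HD 6670 (Turks XT) vs HD 7870 (Pitcairn), theoretical peaks only.
# Assumes 2 FLOPs per stream processor per clock, as usual.

gpus = {
    "HD 6670 (40nm, ~118 mm^2)": (480, 0.80),   # (stream processors, clock in GHz)
    "HD 7870 (28nm, ~212 mm^2)": (1280, 1.00),
}

gflops = {name: sps * 2 * clk for name, (sps, clk) in gpus.items()}
for name, gf in gflops.items():
    print(f"{name}: {gf:.0f} GFLOPS")

ratio = gflops["HD 7870 (28nm, ~212 mm^2)"] / gflops["HD 6670 (40nm, ~118 mm^2)"]
print(f"Raw compute gap: ~{ratio:.1f}x on paper")    # ~3.3x
[/CODE]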
 

Yes, but with the Kinect thing in each box, MS may go for the SoC to make it cheaper. I don't know. I also think these rumours could be FUD, a last-minute attempt to keep the Wii HD or PS4 from going more powerful.

If it is true that Sony is going for an interposer, it's because they want two separate dies for the GPU and CPU, and thus bigger and more powerful chips.
 
I agree, I was just pointing out what I noticed. But I didn't take any manipulation from that quote. The fact that he chose a card that old should have made the sarcasm pretty clear. I don't see how that could have been reinterpreted as him meaning one 580.

It was just a small brain fart from him. I'm 99.9 % certain that he meant that it could run on only one of those 580s with proper optimisations. I don't know if that is really the case, but that was the point he wanted to make. Imo you're taking your angle a bit too far with your assumptions. Also the point Rangers was making about needing 1.5 x 580GTX was obviously just to have another chip that provides that sort of performance, not that you could saw off .56 of a card and throw it in there :LOL:
 
Just because of the rumours that say Microsoft is going for a 6670-level-performance GPU and Sony for an interposer.

There is something I don't buy about the 130W TDP figure. If you look at other reviews like

http://www.hardocp.com/article/2012/03/04/amd_radeon_hd_7870_7850_video_card_review/13

the wattage at full load for the 7870 is 275 - 67 (system power without the video card) = 208W. Far from the 130W in the first review.

If you look at the TechPowerUp review (http://www.techpowerup.com/reviews/AMD/HD_7850_HD_7870/24.html), they use a $1500 data logger/multimeter to isolate the power consumption of the whole card, so this includes the 2GB of GDDR5. Their claim is that the 7850 has a max load of 101W and the 7870 144W.

~100W for the GPU and 2GB of GDDR5 leaves plenty of room for a modest CPU, USB, WiFi, and a drive.

My expectations for next gen went up with these power numbers.
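The two sets of numbers aren't necessarily in conflict, since a wall-socket delta also picks up PSU losses and whatever extra load the game puts on the CPU and the rest of the platform. A rough reconciliation in Python -- the efficiency and platform-load figures below are guesses purely for illustration, not data from either review:

[CODE]
# Why a "system power minus baseline" delta overshoots a card-only measurement.
# The PSU efficiency and extra platform load are assumed values, not measured.

card_only_w = 144            # TechPowerUp's isolated 7870 load figure
psu_efficiency = 0.85        # assumed; wall measurements include PSU losses
extra_platform_w = 35        # assumed extra CPU/chipset/fan draw while gaming

apparent_delta_w = (card_only_w + extra_platform_w) / psu_efficiency
print(f"Apparent card 'draw' from a wall delta: ~{apparent_delta_w:.0f} W")
# ~210 W -- the same ballpark as the 208 W delta, even with the card at ~144 W.
[/CODE]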
 
A comparison between AMD and Nvidia GFLOPS is not a valid one.
They are both theoretical peak numbers, not real world numbers.

Having an AMD GPU in a console will do nothing to get the AMD GPU's FLOPS rating closer to its theoretical max.

Haha, I don't think you understand optimization at all.

Regardless, it does not say "only NVIDIA FLOPS" on Epic's slide, period.

I don't understand how you can make that argument. There's no logic in it. You have to have two to cover the extra 50%.

Sigh :/

And from what you're saying here, somehow a Pitcairn (I'm going with that as a possible target) in a closed environment is going to be at least equal to 1.5 times a 580? I think that's a reach at best.

I think so, easily. When you look at Killzone 3 running on what is basically a 7800 GTX from 2005, you should be pretty awed at what optimization can do. Carmack says in general it's a 2X speedup, I believe.

We also know the demo was made with just a few people, so that also suggests little optimization.

Now that 7870 benches are out we see it's not running far behind a 580. I think it can do Samaritan, I guess we have to agree to disagree.

Also if we drop to 720p, then there's no debate at all...

It was just a small brain fart from him. I'm 99.9 % certain that he meant that it could run on only one of those 580s with proper optimisations.

Me too; if he was being sarcastic, "Ti 500" would be an awfully arcane choice. If he meant to be sarcastic he likely would have said we're going to get it running on an iPhone or something.

A 7870 in a console would be great. GTX 580 level performance in a closed environment... Let's see if PS4 is going for that, as I suppose Microsoft isn't.

Sure right, if enough people say it over and over on message boards...

I'd bet there's at least a 50-50 chance next box ends up more powerful than PS4. Probably greater. Sony's engineering has always talked a lot of talk but had a problem backing it up. Throw in their finances and...

If they are really going for all this stacked chip stuff and what have you (BIG if, as I don't trust Charlie), they will probably end up in 2014 scrapping it and scrambling to get a real machine that actually works out the door yesterday, as an Xbox 720 that came out in 2013 snowballs momentum... Sound familiar?
 
At 212mm^2, 256-bit bus, and some media indicating total board draw with 2GB of fairly fast GDDR5 memory to be about 130W, it comes in at or below budgets set for last gen consoles in general.

Significantly above I would say. Once you factor in overhead power for other components (say 20W with 12x spinning optical drive + HDD, flash, HANA etc), you're left with about 155W split between the CPU and GPU + RAM on 360. The GDDR3 RAM chips consumed 3-4W per chip. That's 24-32W total there. At most, you'd have 130W split between the two chips.*

On the Playstation side, you've got Cell skewing the split considering a bunch of the RSX die at the time featured redundant hardware blocks and the XDR I/O.

GDDR5 consumes about 4W or so per chip.

*Even keeping in mind the eventual inadequacy of Xenos' heatsink, you're also talking about a chip that is almost always running at high power consumption compared to a PC GPU, the point being that PC GPU heatsinks at the high end are pretty ridiculous in comparison to begin with.
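Laying that 360 budget out explicitly -- a sketch using the estimates above, where the ~175W total load is an assumption roughly in line with launch-unit measurements rather than a quoted figure:

[CODE]
# The 360 power budget from the post above, written out. Only the ~175 W
# total load is my assumption; the rest are the post's own estimates.

total_load_w = 175
other_components_w = 20                  # optical drive, HDD, flash, HANA, etc.
ram_chips = 8
ram_w_per_chip = (3, 4)                  # GDDR3, 3-4 W per chip

cpu_gpu_ram_w = total_load_w - other_components_w               # ~155 W
ram_w = (ram_chips * ram_w_per_chip[0], ram_chips * ram_w_per_chip[1])
cpu_gpu_w = (cpu_gpu_ram_w - ram_w[1], cpu_gpu_ram_w - ram_w[0])

print(f"CPU + GPU + RAM budget: ~{cpu_gpu_ram_w} W")
print(f"RAM: {ram_w[0]}-{ram_w[1]} W")
print(f"Left for CPU + GPU combined: {cpu_gpu_w[0]}-{cpu_gpu_w[1]} W")   # ~123-131 W
[/CODE]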

I think so, easily. When you look at Killzone 3 running on what is basically a 7800 GTX from 2005, you should be pretty awed at what optimization can do. Carmack says in general it's a 2X speedup, I believe.

To be fair, DX9 is a pretty awful hindrance compared to consoles. For DX11, it's hard to say if it has caught up, at least when you consider Microsoft's apparent disapproval of coding to the metal on 360.

We also know the demo was made with just a few people, so that also suggests little optimization.
Right. I think it's safe enough to say that "optimization" to them does include tweaking the graphical effects for bang for buck, i.e. not rendering effects at full-res buffers, reducing sample counts (SSAO/shadows). Especially for low-frequency stuff like SSAO or sub-surface scattering, the benefit of using full-res buffers is pretty low. The SSAO in Samaritan is already starting with a half-res scene; who knows how many samples. There's also the fact that there are numerous ways of achieving real-time AO, so this implementation won't necessarily be final. The SSSSS is using 16 samples with random jitter... And who knows what they chose for the shadow buffers. And we all know the tessellation factor would have been ridunkulous (nV-style).

Just changing the res to 1080p will have a cascade effect on all the post-processing.

The Bokeh implementation is also pretty ridiculous (spawning quads per pixel), and they can obviously tweak that. There's also the bokeh radius to tweak.
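On the 1080p point, the cascade is easy to put a number on, since anything doing per-pixel screen-space work (post-processing, SSAO, the bokeh sprite count) scales with pixel count -- a trivial sketch:

[CODE]
# Per-pixel screen-space work scales with resolution, which is why the
# 720p-vs-1080p choice matters so much for a demo like Samaritan.

res_720p = 1280 * 720
res_1080p = 1920 * 1080

print(f"720p:  {res_720p:,} pixels")
print(f"1080p: {res_1080p:,} pixels")
print(f"1080p is ~{res_1080p / res_720p:.2f}x the per-pixel work of 720p")   # 2.25x
[/CODE]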
 
Some of these modern game engines are using a *lot* of buffers for deferred rendering. What are we looking at in terms of buffers with 4xMSAA with a G-Buffer, full resolution buffer for transparencies, etc?

Well, the framebuffer size balloons once you factor in 64-bit render targets. Though who knows if devs will just choose other 32-bpp HDR formats like RGB9E5 (DX10+) or if Microsoft comes up with a new format ala 7E3/FP10.

There's really no reason to use full res alpha if they fix up the edge cases.

At most you're probably looking at no larger than 160MB for the G-buffer with 4xAA and 32bpp @1080p (the BF3 golden target).

I'm actually quite curious to see if devs will consider using >4MRTs considering hardware support for 8 has been there since DX10, though the memory cost will obviously be ridiculous. :p
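For anyone who wants to check the ~160MB figure, the arithmetic is straightforward. The layout below (four 32bpp colour targets plus a 32bpp depth/stencil surface) is just an illustrative guess at a typical deferred setup, not any specific engine's G-buffer:

[CODE]
# Rough G-buffer footprint at 1080p with 4xMSAA, assuming four 32 bpp MRTs
# plus one 32 bpp depth/stencil surface -- an illustrative layout only.

width, height = 1920, 1080
msaa_samples = 4
bytes_per_sample = 4                 # 32 bpp
surfaces = 4 + 1                     # 4 colour MRTs + depth/stencil

total_bytes = width * height * msaa_samples * bytes_per_sample * surfaces
print(f"G-buffer: ~{total_bytes / 2**20:.0f} MiB")    # ~158 MiB, i.e. the ~160MB ballpark
[/CODE]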
 
It was just a small brain fart from him. I'm 99.9 % certain that he meant that it could run on only one of those 580s with proper optimisations. I don't know if that is really the case, but that was the point he wanted to make. Imo you're taking your angle a bit too far with your assumptions. Also the point Rangers was making about needing 1.5 x 580GTX was obviously just to have another chip that provides that sort of performance, not that you could saw off .56 of a card and throw it in there :LOL:

Yes, I was obviously assuming he meant sawing another card in half even though he himself said he wasn't. :rolleyes: You weren't reading my responses or his, based on this post. He talked about "1.5 580s" when Epic's own formula doesn't drop in half like that. 2.5 TFLOPs is not "1.5 580s"; it's actually more, but I stuck to how he was viewing it. And there aren't any single GPUs out there that equal that level of performance, which is what I'm getting at. And at the same time, what I expect to be in a console wouldn't achieve that on its own either. I assumed I didn't have to point that out like that. That's why you still need two 580s to cover that extra 67% in reality. Is it more than what you need? Yes, but there's nothing available as a single GPU that can handle that.

And I think you're assuming too much on him having "just a small brain fart" for him to choose a 10+ year old card. You don't brain fart like that, especially someone apparently like Rein. Show me where he made an indication that he meant one 580 and I'll change my view. Like I said that's my first time seeing the actual quote, but I don't see how anyone can take a reference to an old card as a simple mistake. And if that's the only proof, then I don't think Rein is at fault for other people expecting something that Epic never indicated would happen.
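For what it's worth, the peak numbers behind the "1.5x a 580" argument look like this -- theoretical figures only, with the earlier caveat that AMD and NVIDIA FLOPS aren't directly comparable; the 2.5 TFLOPS target is simply the figure quoted in this exchange:

[CODE]
# The arithmetic behind the "1.5x a GTX 580" back-and-forth, peak FLOPS only.
# GTX 580: 512 CUDA cores at a 1544 MHz shader clock, 2 FLOPs per clock.

gtx580_tflops = 512 * 2 * 1.544 / 1000          # ~1.58 TFLOPS
target_tflops = 2.5                             # the figure quoted above
hd7870_tflops = 1280 * 2 * 1.0 / 1000           # ~2.56 TFLOPS, AMD theoretical peak

print(f"GTX 580 peak:       {gtx580_tflops:.2f} TFLOPS")
print(f"2.5 TFLOPS target:  {target_tflops / gtx580_tflops:.2f}x a single 580")
print(f"HD 7870 peak:       {hd7870_tflops:.2f} TFLOPS (not directly comparable across vendors)")
[/CODE]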

We also know the demo was made with just a few people, so that also suggests little optimization.

Now that 7870 benches are out we see it's not running far behind a 580. I think it can do Samaritan, I guess we have to agree to disagree.

Also if we drop to 720p, then there's no debate at all...

The latter is something we wholeheartedly agree on. 720p should be easily obtained by what I expect from MS and Sony.

I think what we would agree to disagree on is the performance gains from being in a closed environment. Personally I expect at best a GPU between a 7850 and a 7870, and I don't see the gain being up to the point where it's equal to 1.5x to 1.67x one 580. I could be wrong, but I think that's expecting a lot.

And I definitely acknowledge they originally made it with just a few people. My issue is that, according to Epic's official info, it's almost a year later and nothing has changed based on their own numbers. All they did was reduce the resolution.
 
And I think you're assuming too much on him having "just a small brain fart" for him to choose a 10+ year old card. You don't brain fart like that, especially someone apparently like Rein. Show me where he made an indication that he meant one 580 and I'll change my view. Like I said that's my first time seeing the actual quote, but I don't see how anyone can take a reference to an old card as a simple mistake. And if that's the only proof, then I don't think Rein is at fault for other people expecting something that Epic never indicated would happen.

Imo something like that is a fairly typical example of a brain fart. Just a little misfire. Even with the large time gap between those two cards, they were both top-end nVidia GPUs, causing similar associations etc., and the focus wasn't on the model but on the number of cards; that's exactly the type of moment when errors like that happen.

http://www.geforce.com/News/article...dia-talk-samaritan-and-the-future-of-graphics

Not Rein, but Martin Mittring: Senior Graphics Architect at Epic Games.

As already mentioned, the demonstration ran in real-time on a 3-Way SLI GeForce GTX 580 system, but even with the raw power that configuration affords, technological boundaries were still an issue, and for that reason, Daniel Wright, a Graphics Programmer at Epic, felt that "having access to the amazingly talented engineers at NVIDIA’s development assistance centre helped Epic push further into the intricacies of what NVIDIA’s graphics cards could do and get the best performance possible out of them." Being a tightly controlled demo, Samaritan doesn’t include artificial intelligence and other overheads of an actual, on-market game, but with enough time and effort, could the Samaritan demo run on just one graphics card, the most common configuration in gaming computers? Epic’s Mittring believes so, but "with Samaritan, we wanted to explore what we could do with DirectX 11, so using SLI saved time..."
 
http://www.tomshardware.com/news/patent-microsoft-3d-mouse-gyroscope,14878.html

MS was granted a patent, filed in 2006, for a 3D mouse.

I had posited a long time ago, prior to the Wii U, that MS/Sony would either have a screen on the controller (not a wild prediction considering the Dreamcast kind of did this years ago) or that we could see a quasi Move-Classic controller where the wands "break out": essentially a normal pronged controller that could be separated into 3D wands. I am pretty curious at this point what MS and Sony will come up with. Personally, a Kinect-like camera with a traditional/break-out Move controller would pretty much cover a huge array of input scenarios.
 
That's interesting; Sony has a patent on adding a depth channel to their EyeToy. If both plans happen, it means third-party devs would be able to make motion-based games multi-platform... to a certain extent.
 