Intel pushing Larrabee console deal with Microsoft

Surely a one-chip console has big advantages in terms of longer-term cost-cutting, die-shrinking etc.? Also in terms of reliability - a key concern bearing in mind the problems with the 360.

If an underclocked, pre-release Larrabee can run Gears of War at 1080p60, I'd say that the full-fat version would offer more than enough horsepower for a next-gen console - especially as consoles typically eke out more performance from the silicon than PCs running the same hardware.
 
The second chip could be a monster ATI GPU, or just a rasteriser with EDRAM, like PS2's GS but generations ahead, or something like a Xenos2. Thus, a 2-chip console. The ATI chip would not do anything except render to the screen; it wouldn't need to calculate anything on the front end (vertex shaders, geometry shaders), which would all be in the hands of a custom Larrabee2. The ATI chip could even be a split die on one package, logic + EDRAM. Just guessing.

It's been talked about before, but doesn't rendering at 1920x1080 require significantly more eDRAM? Wouldn't that make it cost-prohibitive?
 
Between 64 and 128 cores should land between 4 TFlops and 8 TFlops - that's huge, and it won't happen.
Larrabee cores are supposedly already tiny; Intel may have little room to do better.
The combination of LRB + a GPU is unrealistic.
Besides, LRB is supposed to do its best as a deferred renderer, which saves a lot of bandwidth => no need for eDRAM.
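For reference, the back-of-envelope maths behind those figures - a minimal sketch assuming the commonly quoted Larrabee parameters (16-wide vector unit, FMA counting as 2 flops per lane per cycle) and a 2GHz clock, none of which is an official spec:

```python
# Peak-FLOPS estimate (assumptions: 16-wide SIMD, FMA = 2 flops per lane
# per cycle, 2 GHz clock -- none of this is official).
FLOPS_PER_CORE_PER_CYCLE = 16 * 2  # 16 lanes x (multiply + add)
CLOCK_HZ = 2e9

for cores in (64, 128):
    tflops = cores * CLOCK_HZ * FLOPS_PER_CORE_PER_CYCLE / 1e12
    print(f"{cores} cores @ 2 GHz -> ~{tflops:.1f} TFLOPS")
# 64 cores  -> ~4.1 TFLOPS
# 128 cores -> ~8.2 TFLOPS
```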

Rangers, LRB flexibility should help in some areas. I'll edit later and put in the link to the Siggraph presentations.
There's a lot of interesting stuff (not only Larrabee related, but worth a read).
 
I hope the concept of EDRAM used as frame buffer in video game consoles dies a slow and painful death. It doesn't make any freaking sense, let's go TBDR for *** sake.
 
Or are you saying, LRB's flexibility would help it push out better graphics than a "standard" GPU?
Yes, among other things like AV encoding (webcam or TV recording), AV decoding, physics simulation, image recognition and motion detection, etc.

Surely a one-chip console has big advantages in terms of longer-term cost-cutting, die-shrinking etc.? Also in terms of reliability - a key concern bearing in mind the problems with the 360.

If an underclocked, pre-release Larrabee can run Gears of War at 1080p60, I'd say that the full-fat version would offer more than enough horsepower for a next-gen console - especially as consoles typically eke out more performance from the silicon than PCs running the same hardware.
I believe those performance numbers were for rendering scenes from the game, not running the game as a whole.
In terms of scaling and cost cutting, I think it'd be easier for Intel to scale down a 2-chip configuration of similar or equal architecture onto 1 package and further onto 1 die, than what Microsoft are doing today in putting the 360 GPU and CPU together.

Between 64 and 128 cores should land between 4 TFlops and 8 TFlops - that's huge, and it won't happen.
Larrabee cores are supposedly already tiny; Intel may have little room to do better.
The combination of LRB + a GPU is unrealistic.
Besides, LRB is supposed to do its best as a deferred renderer, which saves a lot of bandwidth => no need for eDRAM.
If we see a 32-core LRB1 in 09, I wouldn't rule out a 64+ core LRB2 by 11/12. Perhaps more realistically for a high-end console you could have two 48-core chips rather than one gigantic one (for yields, heat dissipation, memory bandwidth scaling - I'm not a chip architect so maybe I'm completely off here).

I hope the concept of EDRAM used as frame buffer in video game consoles dies a slow and painful death. It doesn't make any freaking sense, let's go TBDR for *** sake.
EDRAM is definitely a bit inflexible to work with, but then it's not too much different from Larrabee-style tile-based rendering, except your tiles are larger and your EDRAM is your cache...
 
I hope the concept of EDRAM used as frame buffer in video game consoles dies a slow and painful death. It doesn't make any freaking sense, let's go TBDR for *** sake.

Sorry, I don't get this. Where do you put the line between the Xbox EDRAM, where developers have to tile themselves, and the on-chip cache on Larrabee, where developers (okay, maybe also the "driver", but mad-scientist types would do it themselves) have to tile? To me, the differences between what is done on Xenos and what is described in the Larrabee PDF are minor and quantitative.
 
If we see a 32-core LRB1 in 09, I wouldn't rule out a 64+ core LRB2 by 11/12. Perhaps more realistically for a high-end console you could have two 48-core chips rather than one gigantic one (for yields, heat dissipation, memory bandwidth scaling - I'm not a chip architect so maybe I'm completely off here).
I would tend to think that next-gen systems will launch at 32nm.
It's likely that a 32-core Larrabee will be a large / hot / power-hungry chip.
For a two-chip system I would put my bet on 2x 24-core LRBs, clocked at 2GHz or more (depending on power/heat considerations).
That would be around ~3 TFlops.
I'm not sure that 64 cores will be achievable before 22nm.
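Plugging the same assumed per-core figures (16-wide SIMD, FMA, so 32 flops per core per cycle) into the 2x24-core @ 2GHz proposal does land right around the ~3 TFlops quoted - again just a sketch, not a spec:

```python
# Same assumptions as the earlier sketch: 32 flops/core/cycle at 2 GHz.
cores = 2 * 24
tflops = cores * 2e9 * 32 / 1e12
print(f"2x24 cores @ 2 GHz -> ~{tflops:.2f} TFLOPS")  # ~3.07 TFLOPS
```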
 
It's been talked about before, but doesn't rendering at 1920x1080 require significantly more eDRAM? Wouldn't that make it cost-prohibitive?

1920x1080 with no AA requires ~15MB of eDRAM; at 4x AA you'd need about 60MB (rough guess).

I dunno exactly how expensive something like this would be, but by the time the new consoles are here it would probably cost less than the 10MB of eDRAM in the X360 costs MS.
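For what it's worth, those rough buffer sizes check out under a simple assumption of 32-bit colour plus 32-bit depth/stencil per sample (other formats would change the totals):

```python
# Rough framebuffer footprint at 1920x1080, assuming 4 bytes colour +
# 4 bytes depth/stencil per sample (other formats change the totals).
pixels = 1920 * 1080
bytes_per_sample = 4 + 4

for samples in (1, 4):  # no AA vs 4x MSAA
    mb = pixels * samples * bytes_per_sample / (1024 ** 2)
    print(f"{samples}x samples -> ~{mb:.0f} MB")
# 1x -> ~16 MB, 4x -> ~63 MB
```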
 
Doesn't the PS3 have about 2 TFLOPS theoretically?
:LOL: it will be tough for anyone to beat the supposed power of the PS3.
Sony would be happy to actually beat that number (3 TFlops); doing 4 would be great.
If Sony go with a cheap box they could actually struggle to beat this figure with the PS4 - talk about irony.
I'm amazed to read that number here; it came from nowhere and it marked minds so strongly... MS should have done the same...
 
nAo said:
It doesn't make any freaking sense, let's go TBDR for *** sake.
Sure, but why not SBDR? :p

assen said:
Sorry, I don't get this.
The big difference is that the XBox eDram is just a framebuffer, and when you have memory that large (or even bigger) it's a waste of space to specialize it that narrowly. The worst part of it is that it works against a lot of the image-space processing that eDram was originally intended to accelerate. Like nAo says, it's a fundamentally flawed concept.

The other thing is that in LRB you have full control over data-flow, so tiling doesn't have to behave like something bolted on over regular rendering.
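To illustrate what "full control over data-flow" means here, a minimal software-tiling sketch (illustrative only - not Larrabee's actual renderer, and the tile size and data structures are made up): triangles get binned to screen tiles, and each tile's colour/depth buffers live in a small local buffer - the cache-resident analogue of eDRAM - until the tile is finished and written back.

```python
# Minimal software-tiling sketch (illustrative only, not Larrabee's renderer):
# bin triangles to screen tiles, then render each tile with its colour/depth
# kept in a small local buffer -- the "cache-resident tile" discussed above.
TILE = 64  # assumed tile size in pixels

def bin_triangles(triangles, width, height):
    """Assign each triangle to every tile its bounding box overlaps."""
    tiles_x, tiles_y = (width + TILE - 1) // TILE, (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for tri in triangles:
        xs, ys = [v[0] for v in tri], [v[1] for v in tri]
        x0, x1 = int(min(xs)) // TILE, int(max(xs)) // TILE
        y0, y1 = int(min(ys)) // TILE, int(max(ys)) // TILE
        for ty in range(max(y0, 0), min(y1, tiles_y - 1) + 1):
            for tx in range(max(x0, 0), min(x1, tiles_x - 1) + 1):
                bins[(tx, ty)].append(tri)
    return bins

def render_tile(tri_list):
    """Colour/depth for one tile live in small local buffers (think: L2-resident)."""
    colour = [[0] * TILE for _ in range(TILE)]
    depth = [[float("inf")] * TILE for _ in range(TILE)]
    # ... rasterise and shade tri_list against colour/depth here ...
    return colour  # only the finished tile gets written back to memory

bins = bin_triangles([[(10, 10), (300, 40), (120, 500)]], 1280, 720)
frame = {key: render_tile(tris) for key, tris in bins.items() if tris}
```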
 
Programmable vs non-programmable... huuuuge difference ;)

I would love to see what NV would rate the GTX 280 at if they counted the same way Sony did with RSX. :smile:
I would say it's programmable vs "whatever": Sony counted every operation related to floating-point data. It isn't even relevant to how potent the non-programmable parts of the RSX are.
That kind of value would be useless for defining GTX 280 performance.
And in fact it would only have been relevant if MS had gone through the same BS.
This number was a pure marketing value, completely meaningless.
 
And in fact it would only have been relevant if MS had gone through the same BS.
Which they did, announcing XB360 was 1 teraflop before Sony announced 2 TFlops.
This number was a pure marketing value, completely meaningless.
They're all meaningless, which is why we don't care to have these dumb PR numbers revisited! A far more useful 'metric' would be comparable GPU performance in terms of actual rendered pixels of a scene of x complexity, accounting for whatever rendering system is employed. Or at least, something far more complex than 'this number is bigger, ergo this one is better' ;)
 
And in fact it would only have been relevant if MS had gone through the same BS.
Which they did, announcing XB360 was 1 teraflop before Sony announced 2 TFlops.
They're all meaningless, which is why we don't care to have these dumb PR numbers revisited! A far more useful 'metric' would be comparable GPU performance in terms of actual rendered pixels of a scene of x complexity, accounting for whatever rendering system is employed. Or at least, something far more complex than 'this number is bigger, ergo this one is better' ;)
In case you didn't notice it, I'm the one pointing out that those numbers are pointless.

As for the historical event... well, MS pushed BS too (honestly I didn't remember - the 2 TFlops claim reached "internet collapse" category, which may explain why). Anyway, both remain BS ;)
 
I hope the concept of EDRAM used as frame buffer in video game consoles dies a slow and painful death. It doesn't make any freaking sense, let's go TBDR for *** sake.

Die, die, die, die! You hazelnut spread sucking troll!!!! :devilish:



J/K... ;)

Although to be fair, like Faf mentions, this most recent implementation IMO defeats the purpose. Then again I liked buffer math and thought that PixelPlanes was cool...

Excuse me Fafalada, what does SBDR mean?

Screen Based Deferred Renderer (or at least that's what the scotch bottle is telling me).
 