News & Rumours: PlayStation 4 / Orbis *spin*

Timothy Lottes is an nVidia employee with no insider knowledge of either platform. That blog is his personal speculation based on his interpretation of rumours, backed with considerable expert understanding, but also with the same big question marks over the architectures that we're all shouldering. It is nothing like definitive, and shouldn't be treated any differently to any technical discussion we have here with developers. It's quite possible he's very wrong on some points.
 

Maybe he really knows what he's talking about?

http://www.ign.com/articles/2013/01...dly-leaked-to-nvidia-by-former-amd-executives
 
Well, if that's true, his opinion of Durango may well be accurate (VLIW instead of GCN), which would make a substantial difference. That goes against other news we're hearing, though. Impossible to see, the future is.
 
Why throw around baseless speculation as if he has some magic insight? He does not. Please stop.
 
Well, IGN is still clinging to the 6670 rumors. If that's what they got from nVidia, I wouldn't be worried.
 
I assume Timothy doesn't, but if nVidia have had details leaked to them (which this AMD lawsuit claims), then perhaps he has?

It's possible but highly unlikely, as anyone with knowledge of anything in the papers that came over from AMD should be smart enough to know they can't do anything that might imply they've seen what's in them.

Hence anyone saying anything based on those papers in a public forum (such as an interview or website) would have to be colossally stupid, especially with the pending lawsuit.

So it's far more likely he's speculating, just like the other developers on these forums who don't have access to a Durango dev kit. And even if they had access, they probably wouldn't be able to talk about it other than in very general terms.

Regards,
SB
 
When people leave a company and go to a competitor, they bring information with them. It happens. That's one of the reasons companies try so hard to poach people from their competitors.

That said, I don't necessarily believe that Lottes has any super-insider info.
 
Well, IGN is still clinging to the 6670 rumors. If that's what they got from nVidia, I wouldn't be worried.

And they're still clinging to the IBM PPC rumors for Durango.
 
I assume Timothy doesn't, but if nVidia have had details leaked to them (which this AMD lawsuit claims), then perhaps he has?

Yeah, that would be extremely smart, providing additional material for the lawsuit.
Personally, I think he's just speculating; there is nothing outlandish in his analysis, though he's very dismissive of Durango, and I'm not sure what he's basing the older-architecture conjecture on.

It's easy to dismiss Durango as weaker if the leaks are true: less overall bandwidth, fewer CUs. If the intent of the ESRAM is to act primarily as a framebuffer, as the embedded memory in the 360 did, then IMO it is without a doubt weaker.
As I've said elsewhere, if the DDR3 is intended to hold the framebuffer and the ESRAM serves as the data source, there's a chance it can offset some of the computational disadvantage. But it's still at a bandwidth disadvantage, and what I'm suggesting requires additional bandwidth to move data into the ESRAM before consuming it.
The question then becomes whether it has enough bandwidth, something I can't answer because you'd need to actually use the hardware to find out.
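To put a rough number on that staging cost, here's a quick back-of-envelope sketch in Python. Every figure is an illustrative assumption (the 60GB/s matches the figure used further down the thread; the working-set size is pure guesswork), not a leaked spec:

# Rough accounting for copying a working set into ESRAM before the GPU
# consumes it. All figures are illustrative assumptions, not leaked specs.
main_ram_bw_gbs = 60.0       # assumed DDR3 bandwidth
staged_mb_per_frame = 100.0  # assumed data copied into ESRAM each frame
fps = 60

budget_mb = main_ram_bw_gbs * 1024 / fps  # DDR3 budget per frame (~1GB)
staging_share = staged_mb_per_frame / budget_mb
print(f"Staging eats {staging_share:.0%} of the per-frame DDR3 budget, "
      f"leaving {budget_mb - staged_mb_per_frame:.0f}MB for everything else")

Even with generous assumptions the copy itself isn't free, which is the extra bandwidth cost mentioned above.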
 
Wow, it has been a long time. I remember when we were doing this back in '05 and '06. Shifty Geezer is now a mod. Congrats, ol' friend. I didn't post that much, but I have much respect for you.

You guys mentioned stacking earlier. Vita has this tech right? So is it feasible we might see this in PS4?
 
Vita's stacking is a system-in-package setup that's not quite the same as the sort of stacked memory most have been talking about.
The system DRAM is physically on top of the SOC and VRAM, but it is wire-bonded.

The graphics RAM is Wide IO RAM whose pads attach to the face of the SOC below, basically by flipping the chip so that the logic layers for the SOC and memory are facing each other.
It's a minimal amount of stacking, and not the same sort of technique being discussed as 2.5D or 3D stacking.
It really only works for one extra layer, since no vias are being used to punch through silicon, and the chips being face to face means no additional chips can be joined in this manner.

The Vita's SOC is undoubtedly much lower power than whatever goes into Orbis, which makes me think this particular method won't work. It's no good to have an SOC that needs a full cooling solution insulated by memory attached in this fashion.
 
As I've said elsewhere, if the DDR3 is intended to hold the framebuffer and the ESRAM serves as the data source, there's a chance it can offset some of the computational disadvantage
How would putting the framebuffer in main RAM be preferable? Framebuffer (Z-buffer, render targets, etc.) I/O is by far the biggest consumer of data when generating 3D imagery.

You guys mentioned stacking earlier. Vita has this tech right? So is it feasible we might see this in PS4?
Vita's stacking is as primitive as you can get: standard wire bonding between the dies. It wouldn't work for a high-powered SoC because the interposer and DRAM on top of the main die would cause the package to cook, and you can't put the main die on top because you can't bring enough power and I/O to it through wire bonding; you need those BGA connections. However, through-silicon via tech is still immature and probably isn't ready - or at least cost-effective - for a system that would need to be manufactured in tens of millions of units per year.
 
Yeah, that would be extremely smart, providing additional material for the lawsuit.
I don't think the blog article has provided any additional detail. He was just talking about rumours, similar to the devs here discussing public rumours.

Personally, I think he's just speculating; there is nothing outlandish in his analysis, though he's very dismissive of Durango, and I'm not sure what he's basing the older-architecture conjecture on.
I now think he just hasn't read around that much. He's updated the blog:
Edit: the reason for the pre-GCN GPU guess for 720, is because of rumors from SemiAccurate on tape out date, and a guess, based on how rumors have changed, that Sony decided to change dGPU based on early complaints of performance after the first rounds of rumors. Certainly it would be much better for developers building portable games if both consoles are GCN vs one pre-GCN.

Edit: ran out of popcorn reading the associated NeoGaf thread. The rumors clearly state 720 as GCN, and a pair of GCN consoles is what I'm hoping for, in case that was not clear.
His first mistake was to listen to SemiAccurate. ;)
 
The Vita's SOC is undoubtedly much lower power than whatever goes into Orbis, which makes me think this particular method won't work. It's no good to have an SOC that needs a full cooling solution insulated by memory attached in this fashion.

That last part is what's important here.
 
How would putting the framebuffer in main RAM be preferable? Framebuffer (Z-buffer, render targets, etc.) I/O is by far the biggest consumer of data when generating 3D imagery.

Yes, that used to be the case, and I can't see any reason you wouldn't leave Z in the ESRAM. Modern games read a lot of texels per pixel, though, and I'm not sure the framebuffer is the primary consumer of bandwidth for, say, opaque polygons with a rough Z sort. At 1080p, a deferred renderer with say 28 bytes per pixel requires about 60MB per full screen; if you end up with 3x overdraw because of the sorting/early Z, that's 180MB. 60GB/s at 60fps gives you 1GB per frame, so if it's bandwidth limited (and I'm not sure you wouldn't be ROP limited instead), the opaque pass is about 20% of that budget.
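For anyone who wants to check that arithmetic, here it is as a short Python snippet. All the inputs are the assumed figures above (28 bytes/pixel, 3x overdraw, 60GB/s, 60fps), not measurements:

# Sanity-check of the opaque-pass estimate above; every input is an
# assumption from the post, not a measured figure.
pixels = 1920 * 1080
gbuffer_bytes_per_pixel = 28  # assumed deferred G-buffer footprint
overdraw = 3                  # assumed after the rough Z sort / early Z
screen_mb = pixels * gbuffer_bytes_per_pixel / 2**20  # ~55MB ("about 60MB")
fb_traffic_mb = screen_mb * overdraw                  # ~166MB ("180MB")
frame_budget_mb = 60 * 1024 / 60                      # 60GB/s at 60fps -> ~1GB/frame
print(f"Opaque-pass framebuffer traffic: {fb_traffic_mb:.0f}MB, "
      f"{fb_traffic_mb / frame_budget_mb:.0%} of the frame's bandwidth")

The exact share comes out nearer 16% than 20%, but either way it supports the point that the framebuffer alone isn't the dominant consumer.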

I'd also render the transparent pass into the ESRAM, but I'm speculating about a piece of hardware I will likely never use.
You'd probably also render shadows and possibly environment maps to ESRAM and then use them directly as a texture source.

Texture caches used to be almost perfect, but that was because everything was computed in the vertex shader and all the pixel shader did was interpolate. Now a lot of the lighting computation is done in the pixel shader, and anything using a reflection vector to look up a texture is likely to induce cache misses.

Having textures in the ESRAM is the only way I can see any offsetting of the rumored ALU deficit.
 