Interview with Bill Dally

trinibwoy

PCGH held a pretty long interview with Nvidia's top scientist.

http://www.pcgameshardware.com/aid,...chnology-DirectX-11-and-Intels-Larrabee/News/

Our understanding of Larrabee, which is based on their paper at Siggraph last summer and the two presentations at the Game Developers Conference in April, is that they have fixed function hardware for texture filtering, but they do not have any fixed function hardware either for rasterization or compositing and I think that that puts them at a very serious disadvantage. Because for those parts of the graphics pipeline they're gonna have to pay 20 times or more energy than we will for those computations.

The texturing and FLOPS actually tends to hold a pretty constant ratio and that's driven by what the shaders we consider important are using. We're constantly benchmarking against different developers' shaders and see what our performance bottlenecks are. If we're gonna be texture limited on our next generation, we pop another texture unit down. Our architecture is very modular and that makes it easy to re-balance.

The ratio of FLOPS to bandwidth, off-chip bandwidth is increasing. This is, I think, driven by two things. One is fortunately the shaders are becoming more complex. That's what they want anyway. The other is, it's just much less expensive to provide FLOPS than it is [to provide] bandwidth. So you tend to provide more of the thing which is less expensive and then try to completely saturate the critical expensive resource which is the memory bandwidth.
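
To put that trade-off in concrete terms, here's a minimal back-of-the-envelope sketch (the peak figures are my own ballpark GT200-class assumptions, not numbers from the interview):

```python
# Rough arithmetic-intensity check: how many FLOPs a kernel needs per byte of
# off-chip traffic before the ALUs, rather than memory bandwidth, become the
# limiter. Peak figures are ballpark GT200-class numbers, illustration only.

peak_flops = 933e9       # ~933 GFLOP/s single precision (assumed)
peak_bandwidth = 141e9   # ~141 GB/s off-chip bandwidth (assumed)

break_even = peak_flops / peak_bandwidth  # FLOPs per byte at the crossover
print(f"break-even intensity: ~{break_even:.1f} FLOPs per byte")

def limiter(flops_per_byte: float) -> str:
    """Which resource bounds a kernel with the given arithmetic intensity."""
    return "compute" if flops_per_byte > break_even else "bandwidth"

print(limiter(2.0))   # streaming, memory-heavy pass -> bandwidth-bound
print(limiter(20.0))  # long arithmetic-heavy shader -> compute-bound
```

Anything below the break-even point only gets faster with more bandwidth, which is exactly why he calls bandwidth the critical, expensive resource.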

It is critically important to the people who do scientific computing on our GPUs to have double precision. So going forward, the GPUs that we aim at the scientific computing market will have even better double precision floating point than what's in GT200. That ratio of double precision to single precision, which is now one double precision operation per eight single precision operations, will get closer. An ultimate ratio to target is something like two to one.
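
For scale, a quick sketch of what those ratios mean for peak double-precision throughput (the single-precision peak is a round number I've assumed purely for illustration):

```python
# Peak double-precision rate implied by a given SP:DP issue ratio.
# The single-precision peak is an assumed round number, not a quoted spec.

sp_peak_gflops = 1000.0  # hypothetical 1 TFLOP/s single-precision peak

for sp_per_dp in (8, 2):  # GT200-style 1:8 versus the ~1:2 target Dally mentions
    dp_peak = sp_peak_gflops / sp_per_dp
    print(f"1 DP op per {sp_per_dp} SP ops -> ~{dp_peak:.0f} GFLOP/s double precision")
```

Same single-precision budget, but the usable double-precision rate goes from roughly an eighth of it to half of it.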

I think that we're increasingly becoming limited by memory bandwidth on both the graphics and the compute side. And I think there's an opportunity from the hundreds of processors we're at today to the thousands of cores we're gonna be at in the near future to build more robust memory hierarchies on chip to make better use of the off-chip bandwidth.
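
A toy model of why on-chip storage helps (my own illustration, not from the interview): if data can be tiled and reused on chip, off-chip traffic shrinks roughly in proportion to the reuse factor. Blocked matrix multiply is the classic example:

```python
# Toy estimate of off-chip traffic for C = A * B with square on-chip tiles.
# With tiles of side `tile` held on chip, each element of A and B is fetched
# from off-chip memory about n/tile times instead of n times. Illustrative only.

def offchip_gigabytes(n: int, tile: int, elem_bytes: int = 4) -> float:
    reads = 2 * n * n * (n / tile)  # A and B traffic
    writes = n * n                  # C written once
    return (reads + writes) * elem_bytes / 1e9

n = 4096
for tile in (1, 16, 64):  # tile=1 ~ no on-chip reuse at all
    print(f"tile={tile:>3}: ~{offchip_gigabytes(n, tile):,.1f} GB moved off-chip")
```

The FLOP count is identical in every case; only the memory hierarchy decides how much of that traffic the off-chip bus actually has to carry.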
 
PCGH: That's about the same time we're expecting the new game consoles. Is that also an opportunity Nvidia is looking forward to? What's your take on that?

Bill Dally: We're certainly very interested in game console opportunities. [smiles]

:D

Well, I have to think Nvidia has floundered a bit lately. I agree with Charlie's assessment that they aimed their chips at a market that doesn't really exist (GPGPU) and so made them too big for their performance level, as well as almost shader-deficient in a sense. So I wonder if things will change with the new guy.

Anyways, the game console response leads me to wild speculation, such as wondering if they're going to be doing custom work à la Xenos for next gen (probably Sony), rather than off the shelf.
 
How can the GPU computing market exist before there's GPU computing hardware? The HPC market on the whole though is huge. Charlie really isn't the most logical bloke when it comes to his mindless Nvidia bashing.
 
How can the GPU computing market exist before there's GPU computing hardware? The HPC market on the whole though is huge. Charlie really isn't the most logical bloke when it comes to his mindless Nvidia bashing.

Well, I look at a 4890 nearly doubling the GTX 285 in the strongest shader benchmarks such as OCCT or even FurMark, at close to half the die size. And in the very latest games the 4890 again seems to be the equal of the 285 straight up. Considering ATI vowed not to go after the high end, that's crazy.

As for the HPC market, I don't know. It just seems to me GPUs are, for the most part, just going to run games faster as their main job for the foreseeable future.

I mean, what did Mr. Dally just criticize Larrabee for? "We'll be a lot faster running games, so Larrabee doesn't scare me" is essentially what he said, and he's right. I'd agree with him: Larrabee looks a bit weak right now, simply because it doesn't look like it will run games the fastest.
 
Not following. First you mention Charlie's dissing the HPC potential and now you're talking about game performance as supportive of his claims? Nvidia's trying to do both, best of luck to them.
 
How can the GPU computing market exist before there's GPU computing hardware? The HPC market on the whole though is huge. Charlie really isn't the most logical bloke when it comes to his mindless Nvidia bashing.

Well, I wouldn't say it's huge, but it's certainly VERY high margin. Akin to the workstation graphics market, although smaller and with higher margins.

Regards,
SB
 
At face value he seems to be saying that GT300 won't be much different from GT200 and that his particular interest in improving memory architecture will come in the medium term, some time after GT300.

On the other hand, memory is something that saw a significant gain in GT200 compared with G80 (coalescing, flexibility) and is an obvious target for further improvements, particularly as D3D11 places a considerable burden on irregularly accessed memory, i.e. with non-32/128-bit data structures and scatter/gather.

So, GT300 should see an improved memory system. But maybe he doesn't consider it much of an improvement :???:

Jawed
 
Well, in the same interview he considers a doubling of off-chip bandwidth to be meh, so who knows what benchmarks this guy is using.
 
If the bandwidth doubles at the same time as the overall performance of the chip doubles, then the performance per byte/s hasn't improved...

Jawed
 