Larrabee delayed to 2011 ?

I agree with Nick's notion that the focus should no longer be just on fillrate and raw rasterization performance, but rather on more efficient and more flexible/programmable architectures. I think most of us do.

I suppose the thing here is that Nick sees the raytracing demo as proof that Intel is going that route, but most of us aren't convinced yet.
 
I think another concern here is that unless there's a new console generation coming that switches to raytracing, somebody trying to be different in today's gaming industry will be given some harsh treatment, methinks.
 
No one is talking about *switching* to raytracing, but if you follow the literature it's pretty well accepted that it's a useful tool to have in the box, and thus an architecture that can both raytrace and rasterize "efficiently" is clearly a win.

This is a technical demo that showcases one particular application of the flexibility that Larrabee affords, not an attempt to reinvent the graphics pipeline overnight. NVIDIA has given similar demos recently of raytracing running on their architecture, and similarly they weren't trying to imply that we're going to stop using rasterization any time soon...

(As an aside, it's really ironic that I'm the one who has and will continue to argue strongly that rasterization isn't going away and it would be stupid to replace it with raytracing, but conversely people seem to have swung so far the other way that they think raytracing isn't useful *at all*, which is an equally stupid stance. To dredge up an old example, it's funny to me that people continue to argue about whether apples or oranges are "better"... let's just have both :))

Still using x86 for all its cores smells like a horrible compromise
I see this comment thrown around a lot and superficially it seems to hold water... but in reality, I wonder whether people have the facts/numbers to actually back this up (particularly from the hardware point of view), or whether people are just making a lot of assumptions. I tend to give hardware designers the benefit of the doubt with respect to making good decisions, but hey I'm no hardware expert and maybe the people making these comments are :) Still, if that's the case, I'd be interested in seeing the facts/logic backing up the assertion rather than more vacuous statements.
 

If you have programmed in x86 assembler and then RISC (any, really) assembler, you would know people are not just whining about it.

Also, a comparable RISC CPU is in general faster than an x86-based one; that's why, to compensate, modern CPUs from both Intel and AMD basically convert the existing binaries to their internal, more RISC-like instructions (micro-ops in Intel's case). If you imagine, for example, a hypothetical Intel CPU based on some RISC ISA, it would likely be faster than its x86 counterpart. Itanium is a completely different case since it is VLIW.

The only advantage of x86 is existing tools and software; there is really nothing else. Obviously this is only important for people who write compilers or optimize at a very low level.
 
If you have programmed in x86 assembler and then RISC (any, really) assembler, you would know people are not just whining about it.
I definitely have done work in both, but this isn't a question of the ISAs themselves at this point... I claim that no one in their right mind is going to write code at the assembly level for processors like these nowadays - intrinsics and their associated compilers are as good as, if not better than, hand-coded assembly, even for the inner-most kernels.
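For what it's worth, here's a minimal sketch of the kind of inner kernel I have in mind, written with SSE intrinsics instead of hand-written assembly (the function and array names are just made up for illustration). Each intrinsic maps more or less one-to-one to an instruction, so you keep the low-level control while leaving register allocation and scheduling to the compiler:

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// Hypothetical inner kernel: y[i] += a * x[i], assuming n is a multiple
// of 4 and both arrays are 16-byte aligned.
void saxpy_sse(float a, const float* x, float* y, int n)
{
    const __m128 va = _mm_set1_ps(a);             // broadcast the scalar
    for (int i = 0; i < n; i += 4)
    {
        __m128 vx = _mm_load_ps(x + i);           // aligned load of 4 floats
        __m128 vy = _mm_load_ps(y + i);
        vy = _mm_add_ps(vy, _mm_mul_ps(va, vx));  // vy += a * vx
        _mm_store_ps(y + i, vy);                  // aligned store
    }
}
```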

Thus the question purely comes down to the advantages of x86 (as you state, compatibility and tools) vs. the disadvantages at the *hardware* level. The implicit assertion appears to be that it's expensive at the hardware level to support x86, but I've yet to see someone actually back that up with numbers.

Anyways this is straying a bit off-topic...
 
How do you know?
Because I haven't seen it done yet. Why would NVIDIA raytrace two spheres and a checkerboard instead of something even a tiny bit more like a real game scene? Also note how the resolution is lowered dramatically when the camera is moved around. I doubt they're doing that for the retro look. Also, requiring a Quadro card to run the OptiX demos clearly indicates they don't want the larger public to know how terrible their current GPUs are at raytracing.

Despite that they do believe in "hybrid" rendering. To me that reads the same as "we'll use a unified architecture when the time is right"...
Maybe that's why the 'interesting' visual effects in that video are reflecting spheres?
No. The rays reflecting from those spheres shoot into a pretty complex polygonal scene. They have to traverse a BSP and then require several ray-triangle intersections. That's totally different from a scene with just spheres.

They probably could have used other reflective shapes as well but spheres allow you to verify the reflection.
 
Hm, I remember the Doom3 tech demo running on the GF3 looked quite impressive to me, certainly far more impressive to me then than this does now... ;)
Ok, I guess some early pixel shader demos actually looked quite good. But really the point is you shouldn't judge technology by the artwork that is used. There were far less visually stunning examples of pixel shaders as well and you had to be a developer to appreciate what was being done.

The Larrabee demo does some pretty impressive stuff if you know what to look for. During seconds 30-35 you can see spheres reflected into spheres, which in turn reflect the environment (including a moving object and its shadow). Doing this with rasterization requires rendering high resolution environment maps for each sphere, twice, plus shadow maps for each view. So you have to rasterize the scene at least 72 times.
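Just to make that count concrete - the sphere count here is my own assumption, not something stated about the demo - one way to land on 72 is cube environment maps (6 faces each) for half a dozen spheres, rendered at two reflection depths:

```cpp
// One way to arrive at a pass count like 72 for rasterized
// sphere-in-sphere reflections. Sphere count is an assumption.
constexpr int cubeMapFaces      = 6;  // one scene render per cube map face
constexpr int reflectiveSpheres = 6;  // assumed number of spheres
constexpr int reflectionDepth   = 2;  // reflections of reflections

constexpr int scenePasses = cubeMapFaces * reflectiveSpheres * reflectionDepth;
static_assert(scenePasses == 72, "6 faces * 6 spheres * 2 depths");
// ...and every one of those views still needs its own shadow map pass.
```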

Also during seconds 55-65 you can see a portal that is looking at itself and being rendered recursively. I think it's at least 20 portal warps deep. You can get the same effect with rasterization and stenciling, but again it requires lots of passes. You can run Valve's Portal game with a maximum recursion depth of 9, but it's not recommended.
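A minimal sketch of why the stencil route gets expensive (hypothetical helper functions, not any real engine's API): every recursion level adds a stencil masking pass plus another rasterization of whatever is visible through the portal, so depth 20 means on the order of 21 scene passes.

```cpp
// Sketch of stencil-based portal rendering; all helpers are stubs.
struct Camera {};  // placeholder camera description

int scenePasses = 0;

Camera warpThroughPortal(const Camera& cam) { return cam; }   // stub
void   maskPortalInStencil(const Camera&)   {}                // stencil pass (stub)
void   drawSceneWhereStencilMatches(const Camera&) { ++scenePasses; }

void renderWithPortal(const Camera& cam, int depth)
{
    if (depth > 0)
    {
        maskPortalInStencil(cam);                             // mask the portal region
        renderWithPortal(warpThroughPortal(cam), depth - 1);  // render the deeper level first
    }
    drawSceneWhereStencilMatches(cam);                        // full scene pass at this level
}
// renderWithPortal(cam, 20) -> 21 scene passes + 20 stencil passes.
```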
...especially now that ATI is dabbling in supersampling AA again...
They're doing supersampling because they don't have anything else to throw their TFLOPS at. Also, it's the only thing left that actually scales linearly. But what's the next step? 16x super-sampling? Nobody's going to pay 16 times more to get rid of a bit of aliasing.

We've really reached the limits of rasterization. You can't make a game more visually interesting and not kill performance. Ideally GPU vendors would like developers to make their shaders twice as long so it scales nicely with today's architectures. But really what they want to do goes beyond the rasterization pipeline.
I don't see what actual NEED larrabee fulfils with its raytracing demo. Are any devs at all asking to be liberated from traditional rasterization, truly?
Ask Tim Sweeney.
 
Not exactly a ringing endorsement for raytracing that ...
He's asking to be liberated from the rasterization pipeline. Whether he'll use raytracing or REYES or voxels or something else is really not the question. But a device has to be capable of efficient raytracing to also be efficient at supporting any other rendering approach.

Yes, it's a hazy future, but when shader model 1.x appeared some also didn't expect much more from it than a few more ways to blend your texture maps together. Today we're at shader model 5.0 and it's used to do things vastly different from just blending textures.

It will take a couple of years before we know exactly what developers will do with the extra flexibility. But there's no doubt developers want it, even if some don't realize it yet. It will unleash a revolution bigger than shaders.

When you enable something new, developers flock to it trying to create the killer app. Larrabee is a goldmine for creative people.
 
When you enable something new, developers flock to it trying to create the killer app. Larrabee is a goldmine for creative people.
Strongly agree. More flexibility will go a long way, far beyond what we can imagine right now.
 
But a device has to be capable of efficient raytracing to also be efficient at supporting any other rendering approach.
It's an open question whether it will even be more efficient than AMD/NVIDIA GPUs at these things by the time it comes out. They have their baggage, but Larrabee is not free of its own baggage.

Something like Reyes isn't even terribly poorly suited to them either ...
 
Maybe, but as far as PC gaming is concerned it doesn't really make much sense. It's kind of like the hydrogen car with no hydrogen fuel stations.
 
Maybe, but as far as PC gaming is concerned it doesn't really make much sense. It's kind of like the hydrogen car with no hydrogen fuel stations.
Sure but it's just a chicken and egg thing... no one is going to start writing the software until the hardware is available, and if the hardware does decently with the typical pipeline in the mean time, people can start adding cool stuff incrementally.
 
Andrew Lauritzen said:
if the hardware does decently with the typical pipeline in the mean time, people can start adding cool stuff incrementally.
I guess that is the issue I am having with it. I've seen no indication that Larrabee, when it finally does come out, will be able to compete with what AMD & Nvidia are offering on existing games. So if it comes in 3rd place and has the worst price/perf/watt ratio, where is the install base going to come from? I just don't see much dev time being put into something with 5% marketshare. Custom applications - definitely, UE4 - not so much.
 
I've seen no indication that Larrabee, when it finally does come out, will be able to compete with what AMD & Nvidia are offering on existing games.
It's their risk (as with everything new); let's not be overly negative towards it ;). Let's hope that something good comes out of it. If not the card itself, then maybe a push for AMD/NVIDIA in the direction of more flexibility.
We should be happy that Intel tries to innovate instead of letting the industry stagnate.
 
Because I haven't seen it done yet. Why would NVIDIA raytrace two spheres and a checkerboard instead of something even a tiny bit more like a real game scene?
According to this page, the two spheres thingy is a procedural geometry demo. I don't know exactly what that means, but I suppose it's more interesting than just rendering two spheres...

Also, requiring a Quadro card to run the OptiX demos clearly indicates they don't want the larger public to know how terrible their current GPUs are at raytracing.
Absolutely, no other explanations are even remotely plausible. Just like hardware-accelerated anti-aliased lines are only enabled on Quadro cards because of their terrible performance.

(My interpretation: increase the value of the Quadro brand by reserving high end features to those who are willing to pay for it.)

Despite that they do believe in "hybrid" rendering. To me that reads the same as "we'll use a unified architecture when the time is right"...
Why this urge to stretch every minor fact into an argument for your dogmas? There's really no need for that, and it makes a smart person sound silly.

Frankly, I think a hybrid approach *is* a great way to get the best of both worlds. You get the speed of rasterization where possible and the image quality of ray tracing when you have no other choice. Rather than the implicit admission of defeat for current architectures that you see in it, it sounds entirely sensible to me.

Larrabee may very well have an architecture that's better suited for ray tracing, but without meaningful points of comparison it's just stupid to declare a winner based on a crappy demo and reading tea leaves.
 
I agree with Nick's notion that the focus should no longer be just on fillrate and raw rasterization performance, but rather on more efficient and more flexible/programmable architectures. I think most of us do.

I suppose the thing here is that Nick sees the raytracing demo as proof that Intel is going that route, but most of us aren't convinced yet.

Having more fill and texture sample rate isn't something bad to have. As bandwidth and transistor density continue to increase, so will data throughput. At some point there will be a switch, I think, from pixel to voxel rendering, where you sample from volume textures and render into volume textures. To enable this you need all the fill rate you can get. At the same time this will be linked to realistic volumetric physics on the GPU, and there again you need all the bandwidth you can get.
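To give a feel for what that means in raw numbers (the volume resolution is purely an illustrative assumption), even a modest volume dwarfs a screen-sized render target:

```cpp
// Rough comparison of a 1080p color buffer vs. a modest RGBA8 volume texture.
constexpr long long bytesPerTexel = 4;                                   // RGBA8
constexpr long long frame1080p    = 1920LL * 1080 * bytesPerTexel;       // ~8 MB
constexpr long long volume512     = 512LL * 512 * 512 * bytesPerTexel;   // ~512 MB

constexpr long long sizeRatio    = volume512 / frame1080p;   // ~64x a 1080p frame
constexpr long long writeBw60fps = volume512 * 60;           // ~30 GB/s just to touch it once per frame
// ...before any filtered reads, physics updates or multiple lighting bounces.
```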

About rasterization, one should make a distinction between pixel rate and triangle rate. Currently, when rendering small triangles, GPUs are limited by triangle setup rate, at most one triangle per clock cycle. A lot of work is still needed to pump the number of small triangles up to billions per second. This will need to be done with fixed-function logic in the rendering pipeline. This type of rendering load will also totally overwhelm Larrabee, IMHO.
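A back-of-envelope on that setup limit (the clock and the geometry multiplier are assumptions for illustration only):

```cpp
// One-triangle-per-clock setup vs. pixel-sized geometry.
constexpr long long setupClockHz = 850000000;                 // assumed ~850 MHz setup clock
constexpr long long setupRate    = setupClockHz * 1;          // 1 tri/clock -> ~0.85 G tris/s

constexpr long long visibleTris  = 1920LL * 1080 * 60;        // ~124 M/s at one triangle per pixel, 60 fps
constexpr long long geomFactor   = 8;                         // assumed culled/overdrawn triangles + extra passes
constexpr long long neededRate   = visibleTris * geomFactor;  // ~1 G tris/s, already at the limit
```

So pixel-sized geometry at 60 fps, with a modest multiplier for culled and overdrawn triangles and extra passes, already saturates a one-triangle-per-clock setup unit.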

About the Intel raytracing demo: Intel knows all too well that good support for DirectX 9/10/11 will make or break Larrabee as a GPU. Currently this CGPU is in no state to render existing games at competitive speed/quality. So instead of showing a demo to prove how bad the current situation is with DirectX, they chose to show something different; they could just as well have shown some physics demo, or a combination of raytracing and physics.
But then of course physics and raytracing don't quite mix well, as raytracing is good at static scenes and poor at dynamic scenes. (Try to rebuild a kd-tree of millions of triangles once every frame.)
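To put a rough number on that parenthetical (the triangle count and cost model are assumptions, just to show the order of magnitude):

```cpp
#include <cmath>

// Very rough cost of rebuilding a kd-tree from scratch every frame,
// assuming ~O(N log N) construction.
constexpr long long triangles = 2000000;  // assumed dynamic scene size
const double perBuildOps  = triangles * std::log2(double(triangles));  // ~42 M node operations
const double perSecondOps = perBuildOps * 60.0;                        // ~2.5 G operations/s at 60 fps
// ...and each "operation" involves SAH evaluation, sorting and plenty of
// memory traffic, all before a single ray has been traced.
```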
 
Having more fill and texture sample rate isn't something bad to have.

That wasn't the point though.
The point was more: at what cost?
GPUs are getting larger and hotter, with noisier coolers, all the time; we get more GPUs on a card and all that... (because the 'natural' increase in bandwidth and transistor density isn't enough).
The reason for this is that it is getting ever more difficult to push graphics further in a way that is visually significant. We've now reached a point where trying to go for more resolution, higher AA/AF or better framerates seems to be a dead end. For most intents and purposes, 1920x1080 with 4xAA/16xAF and 60 fps is going to be enough for many years to come. And trying to push further requires incredible leaps in performance.
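Just to illustrate how steep those leaps are (the target numbers are picked arbitrarily), the knobs multiply, so even modest-sounding bumps compound quickly:

```cpp
// Brute-force scaling relative to 1920x1080 / 4xAA / 60 fps, assuming cost
// roughly tracks the number of samples shaded (supersampling-style).
constexpr double resolutionScale = (2560.0 * 1600.0) / (1920.0 * 1080.0); // ~2x the pixels
constexpr double aaScale         = 8.0 / 4.0;                             // 4x -> 8x samples
constexpr double fpsScale        = 120.0 / 60.0;                          // 60 -> 120 fps

constexpr double totalScale = resolutionScale * aaScale * fpsScale;       // ~8x the work
// ...for an image most people could barely tell apart from the baseline.
```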

We need to step away from this brute-force approach and get more performance/better visuals out of GPUs in another way. Make them more efficient at what they do, or make them do completely different things (physics, raytracing and tessellation are a good start).
 
You can run Valve's Portal game with a maximum recursion depth of 9, but it's not recommended.
You should let my 4870 know that then; it runs Portal at recursion depth 9 just peachy at 1920*1200 with everything maxed, 4*AA & 16*AF :LOL:
 