ART refocusing

Probably not. I doubt they'd change their business just because they think somebody else will make their products obsolete.

Any high-end 3D rendering company should consider either purchasing nVidia's chips, or making their own similar designs to compete.
 
Chalnoth,
Thank you for your constructive statements. It sometimes makes me wish there were a "kill file" for forums.

Raytracing offers convenient ways to implement a lot of effects that are otherwise tedious to program with standard rendering systems and, AFAIU, the ART chips offer acceleration for a large percentage of the raytracing cost (i.e. intersection tests).

Having said this, one area that may possibly benefit from using a "consumer" graphics accelerator would be to have it do the first level of visible-surface determination and then let the raytracing chips take over from there. (Are you reading this, Eduardo? :) )
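For context, the per-ray work that intersection-test acceleration targets looks roughly like the following. This is a minimal sketch of the standard Möller–Trumbore ray-triangle test in Python; all the names are my own illustrative choices, and this says nothing about how the ART chips actually implement it:

```python
# Sketch of a ray-triangle intersection test (Moller-Trumbore style),
# the kind of inner-loop work dedicated raytracing hardware accelerates.
# Vectors are plain (x, y, z) tuples; names are illustrative only.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the distance t along the ray to the hit, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det   # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det     # distance along the ray
    return t if t > eps else None
```

A renderer runs this (or its equivalent against an acceleration structure) millions of times per frame, which is why offloading it to hardware pays off.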
 
Chalnoth said:
Probably not. I doubt they'd change their business just because they think somebody else will make their products obsolete.

Any high-end 3D rendering company should consider either purchasing nVidia's chips, or making their own similar designs to compete.

lmaa.gif
 
Doomtrooper:
If that is the best reply you can come up with, don't drag down the signal-to-noise ratio here; keep it in PMs. Please.

[rant]
These discussion boards have gone "downhill" since... well, 3dfx went into Chapter 11 (not saying there is a direct connection, but that's around the time it started, IMO). And if the current chatroom trend continues, I'm afraid it'll be pretty tedious to separate the wheat from the chaff. Lurking takes more time than ever, especially trudging through the now mind-numbing amounts of "product favoritism".

When you find an obvious "product favoritism" statement, remember the old adage: "Don't feed the trolls."
[/rant]

Sorry for the off-topic post; it just overflowed for a moment there. I'll go back to lurking now.
 
Bogotron said:
[rant]
These discussion boards have gone "downhill" since... well, 3dfx went into Chapter 11 (not saying there is a direct connection, but that's around the time it started, IMO). And if the current chatroom trend continues, I'm afraid it'll be pretty tedious to separate the wheat from the chaff. Lurking takes more time than ever, especially trudging through the now mind-numbing amounts of "product favoritism".

When you find an obvious "product favoritism" statement, remember the old adage: "Don't feed the trolls."
[/rant]

Good rant, IMO, and I agree. Still, other forums (even the fan sites) have slid even further; I see this place as still the best of the lot (though some threads really are turning into fan-site flame wars).

Thank you for not giving _my admired company_ a pass for not releasing anything. It keeps me on my toes whenever I make statements. Still, you could loosen the grip a little bit... It would be nice to be king of the hill at least some of the time...
 
Bogotron said:
[rant]
These discussion boards have gone "downhill" since... well, 3dfx went into Chapter 11 (not saying there is a direct connection, but that's around the time it started, IMO). And if the current chatroom trend continues, I'm afraid it'll be pretty tedious to separate the wheat from the chaff. Lurking takes more time than ever, especially trudging through the now mind-numbing amounts of "product favoritism".

When you find an obvious "product favoritism" statement, remember the old adage: "Don't feed the trolls."
[/rant]

I was here before and after the 3dfx Chapter 11 filing (which was in November 2000), and the quality was fine then. Actually, the decline didn't start until fanatics of both companies (ATI & NVIDIA) joined up, which wasn't until ATI became a serious competitor to NVIDIA, a little after the 8500 release.

--|BRiT|
 
I had wondered why ART wasn't present on the show floor at SIGGRAPH; the restructuring must be the reason. As for the question, I don't believe ART has to worry about NVIDIA or ATI eating into their business for a number of years. Tim Purcell from Stanford demoed real-time ray tracing on a 9700 at SIGGRAPH, but it was a very simple scene, so there is still a ways to go. Consumer chips might be competition someday, though.
 
3dcgi said:
I had wondered why ART wasn't present on the show floor at SIGGRAPH; the restructuring must be the reason. As for the question, I don't believe ART has to worry about NVIDIA or ATI eating into their business for a number of years. Tim Purcell from Stanford demoed real-time ray tracing on a 9700 at SIGGRAPH, but it was a very simple scene, so there is still a ways to go. Consumer chips might be competition someday, though.

Yeah, but he said it was only about three weeks of coding effort, and it was already about five times as powerful as a Pentium III 800 MHz. Even though it was a simple scene, it still exercised the core of raytracing, which is the triangle intersections. Things like texturing that make the scene look more complex don't really affect the performance that much. Also, a consumer card would be WAY cheaper than dedicated ART hardware. A farm of Radeon 9700s would be quite powerful indeed.
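Some rough arithmetic shows why the intersection tests, not the shading, are the part worth accelerating. The resolution, ray counts, and triangle counts below are my own assumed numbers for illustration, not figures from Purcell's talk:

```python
# Back-of-envelope count of ray-triangle tests per frame.
# All numbers here are assumed for illustration only.

width, height = 512, 512        # frame resolution (assumed)
rays_per_pixel = 2              # one primary ray + one shadow ray (assumed)
triangles = 10_000              # a "simple" scene by film standards (assumed)

# Naive raytracing tests every ray against every triangle.
naive_tests = width * height * rays_per_pixel * triangles
print(f"naive tests per frame: {naive_tests:,}")        # 5,242,880,000

# A spatial acceleration structure (grid, BVH) cuts the candidate
# triangles per ray to a small set; assume ~50 candidates per ray.
candidates_per_ray = 50
culled_tests = width * height * rays_per_pixel * candidates_per_ray
print(f"with acceleration structure: {culled_tests:,}")  # 26,214,400
```

Even with culling, tens of millions of intersection tests per frame remain, while the texturing work stays proportional to the pixel count; that is the asymmetry the post above is pointing at.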
 
Seeing as the other thread was closed without having the opportunity to get an answer, I'll try again here!

I have a question for the more knowledgeable of you out there. Assuming that the NV30 can render any complexity of RenderMan shader, or a comparable renderer's (3ds Max, Lightwave, Cinema 4D, etc.), how useful as a *final* rendering tool would a standard off-the-shelf board actually be? We'd also have to assume that the chip would satisfy the rendering precision of animation houses (they couldn't have a frame rendered in software on a Linux/P4 box and an NV30 frame that didn't match).

Given that renderfarms typically have memory coming out the ying-yang to hold massive textures and extremely dense geometry, how would a 128 MB/256 MB card manage? What would happen if the texture set and/or mesh (or even a useful portion of it) was too large to fit into the card's memory? Wouldn't all the page swapping kill the performance, if it worked at all?
 
It's been suggested that an NV30 would be hundreds of times faster at high-end rendering than current renderfarms, so even if resorting to AGP texturing dropped performance by half or more, it wouldn't be much of a problem.

Now, I don't believe for a moment that the NV30 will be 100% as good as those renderfarms. For one, it won't be able to do customized texture filtering (at least, not anywhere near the speed of normal anisotropic filtering).

But from where I'm sitting right now, it certainly appears that it will be close enough for most people to live with the tradeoffs. For example, it might not be used in movies right away, but just in TV work.

If nVidia is planning on marketing the NV30 for these purposes, you can bet they aren't planning on using the consumer version. The high-end market could make nVidia much more money if they design special versions of their chips/boards.
 
I think we're on the path to seeing consumer graphics chips used as co-processors for rendering movies, but they won't take over all of the rendering duties. I don't have in-depth knowledge of RenderMan, but I doubt that the NV30 or R300 can handle displacement shaders. Cloth and rigid-body simulations slow down movie rendering, so even as more work moves to the graphics chips, the CPU will still be loaded.
 