my GOD i think i figured something about NV30....

DaveBaumann said:
We've seen all there is to see about LMAIII because it no longer exists as a piece of marketing they are pushing. LMAIII is now 'Intellisample' and there is nothing more to say.

Keep an eye open tomorrow.

It's nice to see that you're taking the time to investigate this. Because on page 24 of the preview slideshow I received, there's a statement that reads "The only GPU with 3rd generation Lightspeed Memory Architecture."
 
Vince said:
Overall, I never expected a full-out region-based deferred renderer, but am surprised that they seem to have taken such a 'traditional' route. I was expecting something more, from a deferred shader to utilizing temporal and spatial coherence. Perhaps their ambitions of using the NV3x/CineFX core in offline, high-end rendering (SGI or Pixar-like renderfarms, as Carmack spoke of) influenced them. <shrug>

Why would you have expected them to go the exotic route? Since we know for a fact that this chip was designed with the help of 3dfx engineers, a look at the product history of both companies plainly spells out "ramrod rendering" (ie, brute force--with some HZ stuff thrown in there.) They aren't too interested in experimenting with what could become dead ends (although they surely do so as part of their R&D); rather, I think the company's main focus is to take what they know works, improve it, and implement it--along with some other features. nVidia said at the 3dfx acquisition that it wasn't interested in the GigaPixel tech, and by implication everything associated with it. So I wasn't a bit surprised.

What does surprise me at this stage is the monstrous fans they're having to use to cool the buggers, which apparently are to be stock configurations, at least for the top-of-the-line NV30. Looks to me like they feel it's necessary to overclock this beast right out of the starting gate. As .09 microns is probably at least a year away for nVidia, I'm not sure this is a good portent to be seeing so early in the chip's life cycle (unless the fans fall off in the shipping products when yields improve--if that happens, as it probably will; improved yields, I mean--don't know about the fans.)
 
as for nVidia being beyond the Rampage.... in some ways yes, but in some ways no. and as for the speed of the Rampage, it would have given a GF4 Ti a VERY hard-earned victory and would still be superior in some ways. a lot of people think of the Rampage project as being just like all the other 3Dfx chips, but it was not. the Rampage Project was totally separate from the Voodoo Project. It wasn't something to scoff at, that's for sure.
 
DaveBaumann said:
Keep an eye open tomorrow.

Will this be the transcription of the questions you asked today? You hinted that you asked someone at the demo a bunch of questions, but don't seem to have let the answers out yet. It will be interesting to see you spill the beans.
 
Sage said:
as for nVidia being beyond the Rampage.... in some ways yes, but in some ways no. and as for the speed of the Rampage, it would have given a GF4 Ti a VERY hard-earned victory and would still be superior in some ways. a lot of people think of the Rampage project as being just like all the other 3Dfx chips, but it was not. the Rampage Project was totally separate from the Voodoo Project. It wasn't something to scoff at, that's for sure.

Sage, would you say that NV30 might not be as good as Rampage in some areas?

or just not better in some, while being better in others?
(hope i wrote that clearly enough to understand)
 
It's nice to see that you're taking the time to investigate this. Because on page 24 of the preview slideshow I received, there's a statement that reads "The only GPU with 3rd generation Lightspeed Memory Architecture."

I suspect that might be an earlier revision - when did you receive it?
 
Sage said:
as for nVidia being beyond the Rampage.... in some ways yes, but in some ways no. and as for the speed of the Rampage, it would have given a GF4 Ti a VERY hard-earned victory and would still be superior in some ways. a lot of people think of the Rampage project as being just like all the other 3Dfx chips, but it was not. the Rampage Project was totally separate from the Voodoo Project. It wasn't something to scoff at, that's for sure.

:rolleyes:
 
WaltC said:
Why would you have expected them to go the exotic route? Since we know for a fact that this chip was designed with the help of 3dfx engineers, a look at the product history of both companies plainly spells out "ramrod rendering" (ie, brute force--with some HZ stuff thrown in there.)

Because nVidia has many very intelligent people who realise that computational and bandwidth efficiency could provide a large advantage in the coming generations, when you're doing heavy shading, which is computationally expensive (think raytracing in the shader) and your architecture is just pissing it away. I mean, Ned Greene was preaching efficiency like 2 years ago.

Both companies relied on the same core architecture for their SKUs until the NV30/CineFX was announced. I would assume that when starting over, you could examine how the 3D graphics realm has changed over the past half decade and move from there.

nVidia said at the 3dfx acquisition that it wasn't interested in the GigaPixel tech, and by implication everything associated with it. So I wasn't a bit surprised.

I never said 3dfx or GP - where do people get this? I blatantly stated that I didn't expect a region-based deferred renderer (ie. 3dfx's Mosaic)... how much clearer can that get?

Yet many things can be done outside of a tiling/chunking/et al. form of architecture. I was hoping for this. Look at Talisman - reuse the temporal coherency (IIRC) between frames... the list is endless.
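
To make that Talisman reference concrete, here's a toy sketch of the layer-reuse idea as I understand it - not Talisman's actual pipeline, and every name and the error heuristic below are purely illustrative:

```python
# Toy sketch of Talisman-style temporal reuse: the scene is split into layers,
# each rendered to a cached 2D sprite, and later frames re-use a sprite via a
# cheap 2D warp instead of a full 3D re-render, until the approximation error
# grows too large. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Layer:
    geometry: object
    sprite: object = None      # cached 2D image of this layer
    sprite_cam: object = None  # camera pose the sprite was rendered from

def compose_frame(layers, camera, render3d, warp2d, warp_error, max_err=0.05):
    """Produce this frame's per-layer images, re-rendering only stale layers."""
    images = []
    for layer in layers:
        if layer.sprite is None or warp_error(layer.sprite_cam, camera) > max_err:
            layer.sprite = render3d(layer.geometry, camera)  # expensive path
            layer.sprite_cam = camera
            images.append(layer.sprite)
        else:
            # cheap path: warp the cached sprite to the new viewpoint in 2D
            images.append(warp2d(layer.sprite, layer.sprite_cam, camera))
    return images  # a compositor would blend these back-to-front
```

On frames with small camera motion most layers take the cheap path, which is one way to cash in on temporal coherency without going to a region-based deferred design.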
 
megadrive0088 said:
Sage, would you say that NV30 might not be as good as Rampage in some areas?

I hate this icon with a passion, but it's gotta be done... :rolleyes:

I'm going to assume you're joking and not pull a :rolleyes: en masse - Kristoff style. lol
 
Vince said:
megadrive0088 said:
Sage, would you say that NV30 might not be as good as Rampage in some areas?

I hate this icon with a passion, but it's gotta be done... :rolleyes:

I'm going to assume you're joking and not pull a :rolleyes: en masse - Kristoff style. lol

Ban that icon please!
 
Sage said:
as for nVidia being beyond the Rampage.... in some ways yes, but in some ways no. and as for the speed of the Rampage, it would have given a GF4 Ti a VERY hard-earned victory and would still be superior in some ways. a lot of people think of the Rampage project as being just like all the other 3Dfx chips, but it was not. the Rampage Project was totally separate from the Voodoo Project. It wasn't something to scoff at, that's for sure.

:LOL:

What if ATi were to say effective memory bandwidth? With colour compression and 24:1 Z compression at 6xFSAA, that gives you well over 300GB/s of effective bandwidth :D
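
For what it's worth, the arithmetic behind that joke is easy to reproduce. A rough sketch, assuming the Radeon 9700 Pro's 256-bit bus at ~310 MHz DDR (my numbers, not from this thread) and taking the best-case compression ratios at face value:

```python
# Back-of-the-envelope "effective bandwidth", in the spirit of the joke above.
# Bus width and clock are my assumptions for a Radeon 9700 Pro; the ratios are
# best-case marketing figures, not sustained real-world averages.
bus_bytes = 256 / 8                         # 256-bit memory bus, bytes/transfer
clock_ghz = 0.310                           # ~310 MHz memory clock
physical_bw = bus_bytes * clock_ghz * 2     # GB/s; x2 for DDR -> ~19.8

z_ratio = 24        # "24:1" = 4:1 Z compression x 6 samples at 6xFSAA
color_ratio = 6     # best-case colour compression at 6xFSAA

print(f"physical bandwidth:          {physical_bw:.1f} GB/s")
print(f"'effective' for Z traffic:   {physical_bw * z_ratio:.0f} GB/s")
print(f"'effective' for colour:      {physical_bw * color_ratio:.0f} GB/s")
```

Multiply the physical ~19.8 GB/s by the best-case Z ratio and you land around 475 GB/s - which is exactly why "effective bandwidth" figures say more about marketing than about real workloads.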

btw what does iirc mean? I've never been able to work it out.
 
IIRC = If I Recall Correctly

And I do have to agree: their "effective bandwidth" marketing ploy sounded kind of impressive at first, but now I'm really having my doubts. I'd say it's closer to half the number they're quoting, in most circumstances.
 
Sage said:
as for nVidia being beyond the Rampage.... in some ways yes, but in some ways no. and as for the speed of the Rampage, it would have given a GF4 Ti a VERY hard-earned victory and would still be superior in some ways.

Tell me, was there any kind of bandwidth-saving technology in Rampage? Because if not, then it would've had a hard time competing with a more efficient design like the GF3, relying only on brute force (SLI dual-chip, dual-memory-bus design). As far as I've seen, there was nothing like early-Z, Z-compression, framebuffer compression or such in that chip, and thus it must have been a very inefficient card. The only thing that seemed to be well above the GF3 and Radeon 8500 was the pixel shader - but without the speed it wouldn't have had much of a use.
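
To spell out why early-Z (one of those bandwidth savers) matters, here's a minimal toy sketch of a per-fragment loop - a software model for illustration, not how Rampage or the GF3 actually work internally:

```python
# With the early depth test, occluded fragments are killed by a cheap compare
# before the expensive shading and framebuffer traffic; without it, every
# fragment is shaded first and only then discarded.
def draw(fragments, zbuf, fbuf, shade, early_z=True):
    shaded = 0
    for x, y, z in fragments:                        # fragments: (x, y, depth)
        if early_z and z >= zbuf.get((x, y), float("inf")):
            continue                                 # rejected: no texture reads
        color = shade(x, y, z)                       # the expensive part
        shaded += 1
        if z < zbuf.get((x, y), float("inf")):       # late depth test
            zbuf[(x, y)] = z                         # depth write
            fbuf[(x, y)] = color                     # colour write
    return shaded                                    # fragments actually shaded
```

Run the same fragment stream with early_z=False and the shaded count - and the implied memory traffic - goes up for any scene with overdraw.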

Sage said:
a lot of people think of the Rampage project as being just like all the other 3Dfx chips, but it was not. the Rampage Project was totally separate from the Voodoo Project. It wasn't something to scoff at, that's for sure.

That kind of myth-making is exactly what nVidia is trying to capitalize on with this whole "we have 3dfx tech in GeForce FX" thing. Are you already convinced to buy such an artifact of 3D graphics? ;)
 
Vince said:
Because nVidia has many very intelligent people who realise that computational and bandwidth efficiency could provide a large advantage in the coming generations, when you're doing heavy shading, which is computationally expensive (think raytracing in the shader) and your architecture is just pissing it away.

Oh, the delicious irony.

Vince said:
I mean, Ned Greene was preaching efficiency like 2 years ago.

Really, that long ago?
 