
My apologies for reviving the GigaPixel joke, but...
What disturbs me most in the "G70" and "G80" is not the G. It's the numbers. By the time both of those were "announced" by TheInq and other sources, NV60 must already have been in the pipeline, although at a very early stage. Also, according to an old but reliable source (NV35 era), NVIDIA estimated the NV60 to be the point at which a 512-bit memory bus would be required. If NVIDIA just wanted to use G because they're also using C for chipsets, it would have been G50 and G55, or even G50 and G60.
The problem with that is it implies that (possibly) instead of just changing your architecture, you actually decide to pull forward projects farther away on your roadmap. This, and the 512-bit bit, makes me think some memory-saving technique is not out of the question, albeit very unlikely, I'll admit.
Also, for the record, a very concrete "distinct VS and PS chips" rumor already existed for the NV30. NVIDIA has been seriously considering this road ever since the 3DFX "buyout".

Uttar
 
Uttar said:
Also, for the record, a very concrete "distinct VS and PS chips" rumor already existed for the NV30. NVIDIA has been seriously considering this road ever since the 3DFX "buyout".

Uttar

That's interesting. Not to fuel the fire, but Sony's choice of NVidia may very well have been because of such an approach. Toshiba/SCEI, beavering away on a pixel-pusher for PS3 - NVidia approaches SCEI with a next-gen pixel-only chip and asks "what do you think?". NVidia's distinct PS chip blows the Toshiba/SCEI effort away (be it in absolute performance, performance per dollar... whatever - though I guess given NVidia's capability over Tosh/SCEI, it'd be the former or both), and the rest is history. NVidia would be in a better position to offer SCEI a relatively "off-the-shelf" pixel-only solution if they were making a distinct VS/PS divide with their next-gen GPU...

Basically, this might increase the odds of PS3's GPU being the distinct PS-only chip going into NVidia's next-gen GPU, if NVidia is taking such an approach? Perhaps it could even be a little bigger, if they're feeling ambitious (i.e. more pixel shaders, since they don't need to pair it with a vertex chip... although the silicon requirements of a PS-only chip may be significant already).
 
Uttar said:
My apologies for reviving the GigaPixel joke, but...
What disturbs me most in the "G70" and "G80" is not the G. It's the numbers. By the time both of those were "announced" by TheInq and other sources, NV60 must already have been in the pipeline, although at a very early stage. Also, according to an old but reliable source (NV35 era), NVIDIA estimated the NV60 to be the point at which a 512-bit memory bus would be required. If NVIDIA just wanted to use G because they're also using C for chipsets, it would have been G50 and G55, or even G50 and G60.
The problem with that is it implies that (possibly) instead of just changing your architecture, you actually decide to pull forward projects farther away on your roadmap. This, and the 512-bit bit, makes me think some memory-saving technique is not out of the question, albeit very unlikely, I'll admit.
Also, for the record, a very concrete "distinct VS and PS chips" rumor already existed for the NV30. NVIDIA has been seriously considering this road ever since the 3DFX "buyout".

Uttar

That 'G' for Gigapixel wasn't a joke, it was a semi-joke for want of a better word! :devilish:

If ATI can put back a 'new' architecture, R400, and refine it for the R500 so it's 'ready' for market, then just as likely nVidia can bring forward an architecture if it can be refined, be competitive and 'ready'. So both players have two architectures on the market, i.e.,

ATI:

Rx00
Rxx0

nVidia:

NVxx
Gxx

And IMHO, the nVidia distinction is definitely clearer! :p

Since the 3DFX buyout, AFAIK, NV mixed the teams up so that each technology competed against the others, i.e. Gigapixel, 3DFX and nVidia tech would be judged on their own merits and the best tech would be put forward. In this scenario, IMR (NV, 3DFX) has been chosen until now, but it would follow that TBDR would have become more competitive...

G for GigaPixel, yay! :devilish:
 
At this point, we haven't seen a single feature from GP's architecture in Nvidia GPUs, and there isn't a single patent from the GP guys issued after 3dfx acquired them.
To be fair, I couldn't find a single patent from Mr. Kilgariff that wasn't filed when he was working at 3Dfx.
 
Jaws said:
If ATI can put back a 'new' architecture, R400, and refine it for the R500 so it's 'ready' for market, then just as likely nVidia can bring forward an architecture if it can be refined, be competitive and 'ready'.

There is a big difference there.

ATi developed an architecture, R400, that was feature rich but underperformed. They shelved it for more development time until the performance matched the featureset. Basically we have a completed chip that was revised.

With nVidia you are talking about moving along future projects and accelerating their deployment. Usually future projects are aimed at future projections (like fab process and other tech advancements), so nVidia would not only be pushing forward an incomplete architecture, they would also have to push forward on a lot of additional fronts in addition to normal chip development, cut a lot of features, or both.

So with this scenario you are basically starting from scratch, much like ATi did with R400, and pushing forward. The risk is that, like ATi's R400, this chip may underperform. Then what?

I am not sure what nVidia is currently doing, but if they dropped the NV50/60 and have proceeded to develop architectures they were projecting for 18-36 months down the road, they could either come out as big winners or big losers.

Either way, I see the two developments as quite different. ATi had final silicon that underperformed, so they stuck with their current HW until the new architecture was refined. nVidia may be looking at dumping their current offerings to accelerate future projects. Hopefully for consumers and nVidia, this new tech does not need serious refining like R400 did.
 
nAo said:
At this point, we haven't seen a single feature from GP's architecture in Nvidia GPUs, and there isn't a single patent from the GP guys issued after 3dfx acquired them.
To be fair, I couldn't find a single patent from Mr. Kilgariff that wasn't filed when he was working at 3Dfx.

I may have misunderstood your double negative, but do you mean patents filed by him since being acquired by nVidia? If so,

Mr. Kilgariff

IIRC, 3DFX was bought at the end of 2000. Above, there are 3 patents by Kilgariff, filed in June, Nov and Dec of 2001 by nVidia?
 
V3 said:
So what's so good about Gigapixel ? How does it compare to PowerVR ?

That's a good question. It would be nice to see next gen PowerVR and Gigapixel GPUs to compare! ;)

Acerts93 said:
Jaws said:
If ATI can put back a 'new' architecture, R400, and refine it for the R500 so it's 'ready' for market, then just as likely nVidia can bring forward an architecture if it can be refined, be competitive and 'ready'.

There is a big difference there.
...
...

Either way, I see the two developments as quite different. ATi had final silicon that underperformed, so they stuck with their current HW until the new architecture was refined. nVidia may be looking at dumping their current offerings to accelerate future projects. Hopefully for consumers and nVidia, this new tech does not need serious refining like R400 did.

What difference does it make if ATI 'adds stuff' or nVidia 'removes stuff', as long as both have a different architecture that's just as good, if not better, or offers something different from their respective 'other' architectures... and remains competitive...

These companies are not dumb and will not 'okay' a new architecture unless it's competitive...and contingency plans are present...
 
Jaws said:
That's a good question. It would be nice to see next gen PowerVR and Gigapixel GPUs to compare! ;)

I don't know much about PowerVR, but is there a reason to think that their products are even in the same league? What exactly is it that they have done? I often see talk here about PowerVR and people praising it; is there any substance to it?
 
Dr Evil said:
Jaws said:
That's a good question. It would be nice to see next gen PowerVR and Gigapixel GPUs to compare! ;)

I don't know much about PowerVR, but is there a reason to think that their products are even in the same league? What exactly is it that they have done? I often see talk here about PowerVR and people praising it; is there any substance to it?

Sega Dreamcast GPU, Sega Naomi Arcade, Mobile MBX, Kyro GPU cards etc...

They use TBDR, i.e. tile-based deferred rendering, as opposed to IMR, immediate mode rendering, which NV, ATI and co. currently use.

Their main advantages are more efficient rendering/shading with no overdraw, lower bandwidth requirements and efficient AA. However, they require beefy CPUs, and in this respect CELL would be a perfect complement, IMHO.
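To make the overdraw/bandwidth point a bit more concrete, here's a toy sketch of the difference - my own illustration only, not PowerVR's or Gigapixel's actual pipeline; the tiny tile size, quad "primitives" and stub shade function are all made up for the example:

```python
# Toy sketch contrasting immediate-mode rendering (IMR) with tile-based
# deferred rendering (TBDR). Illustrative only: "primitives" are axis-aligned
# quads (x0, y0, x1, y1, depth) so the rasteriser stays trivial.

TILE = 4  # hypothetical tile size in pixels

def rasterize(prim, x_min, y_min, x_max, y_max):
    """Yield (x, y, depth) samples of a quad clipped to a region."""
    x0, y0, x1, y1, z = prim
    for y in range(max(y0, y_min), min(y1, y_max)):
        for x in range(max(x0, x_min), min(x1, x_max)):
            yield x, y, z

def shade(prim, pixel):
    return prim[4]  # stand-in for a real pixel shader

def imr_render(prims, w, h):
    # IMR: shade fragments in submission order; a fragment that passes the
    # depth test now may still be overwritten later -> overdraw.
    depth, fb, shaded = {}, {}, 0
    for p in prims:
        for x, y, z in rasterize(p, 0, 0, w, h):
            if z < depth.get((x, y), float("inf")):
                depth[(x, y)] = z
                fb[(x, y)] = shade(p, (x, y))
                shaded += 1
    return fb, shaded

def tbdr_render(prims, w, h):
    # Pass 1: bin every primitive into the screen tiles it touches.
    tiles = {}
    for p in prims:
        for ty in range(p[1] // TILE, (p[3] - 1) // TILE + 1):
            for tx in range(p[0] // TILE, (p[2] - 1) // TILE + 1):
                tiles.setdefault((tx, ty), []).append(p)
    # Pass 2: per tile, resolve visibility first (in what would be small
    # on-chip memory), then shade each covered pixel exactly once.
    fb, shaded = {}, 0
    for (tx, ty), plist in tiles.items():
        depth, winner = {}, {}
        for p in plist:
            region = (tx * TILE, ty * TILE, (tx + 1) * TILE, (ty + 1) * TILE)
            for x, y, z in rasterize(p, *region):
                if z < depth.get((x, y), float("inf")):
                    depth[(x, y)] = z
                    winner[(x, y)] = p
        for pixel, p in winner.items():
            fb[pixel] = shade(p, pixel)
            shaded += 1
    return fb, shaded

# Two overlapping 8x8 quads, the far one submitted first: IMR shades the
# hidden fragments too, TBDR shades each visible pixel exactly once.
prims = [(0, 0, 8, 8, 5.0), (0, 0, 8, 8, 1.0)]
print("IMR shaded fragments: ", imr_render(prims, 8, 8)[1])   # 128
print("TBDR shaded fragments:", tbdr_render(prims, 8, 8)[1])  # 64
```

The point of the second pass is that the per-tile depth and colour buffers would sit in on-chip memory, so hidden fragments are never shaded and the external framebuffer is written once per pixel - which is where the no-overdraw and bandwidth claims come from.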

But all this is speculation in this thread as Dave has conveniently forgotten the thread he started *cough*... ;)
 
Dr Evil said:
I don't know much about PowerVR, but is there a reason to think that their products are even in the same league? What exactly is it that they have done? I often see talk here about PowerVR and people praising it; is there any substance to it?

PowerVR (now Imagination Technologies) have actually brought quite a few products to market - something that Gigapixel never did. I believe they remain the only company (so far) to produce chips using deferred rendering tech.

Unfortunately, we haven't seen a desktop part for some years now and their main focus is now on integrated parts - the MBX core is used in many new chips and we should see it in all types of handhelds/phones/cars/set top boxes etc. very soon.

If you want to know more, there is plenty of stuff about PowerVR on this board - just use the search facility! :)
 
Jaws said:
V3 said:
So what's so good about Gigapixel ? How does it compare to PowerVR ?

That's a good question. It would be nice to see next gen PowerVR and Gigapixel GPUs to compare! ;)

Well, can you give us the rundown of their TBDR implementation compared to PowerVR?

Gigapixel was never on my radar, when they were making noises.
 
V3 said:
Jaws said:
V3 said:
So what's so good about Gigapixel ? How does it compare to PowerVR ?

That's a good question. It would be nice to see next gen PowerVR and Gigapixel GPUs to compare! ;)

Well, can you give us the rundown of their TBDR implementation compared to PowerVR?

Gigapixel was never on my radar, when they were making noises.

nVidia,

Gigapixel: Rendering Pipeline

List of Gigapixel patents

Imagination Tech,

PowerVR: Image processing apparatus

List of PowerVR patents

AFAIK, Gigapixel tech never made it into commercial silicon, though I seem to remember there was sample silicon floating around. So comparisons with PowerVR are difficult.

I've enclosed their relevant patents (there may be more) for comparison. I haven't read them properly so I can't give any details, but maybe Simon_F can elaborate; after all, they would've competed with Gigapixel tech, so he may have a good idea of their differences... ;)
 
Every time I read something related to tile based deferred rendering, like those patents, I get a weird feeling.
TBDR does make so much more sense than IMR to me..GRRR :?
 
nAo said:
Every time I read something related to tile based deferred rendering, like those patents, I get a weird feeling.
TBDR does make so much more sense than IMR to me..GRRR :?

See, I've always been divided on this.

Is TBDR not on par in terms of performance/features with current high-end cards because NV and ATI just didn't care to embrace it?

Or, IF companies with as many resources as NV and ATI went the TBDR way, would they get better results than they get today with their traditional renderers?
 
london-boy said:
nAo said:
Every time I read something related to tile based deferred rendering, like those patents, I get a weird feeling.
TBDR does make so much more sense than IMR to me..GRRR :?

See, I've always been divided on this.

Is TBDR not on par in terms of performance/features with current high-end cards because NV and ATI just didn't care to embrace it?

Or, IF companies with as many resources as NV and ATI went the TBDR way, would they get better results than they get today with their traditional renderers?

AFAIK, for TBDR implementations like the PowerVR in the Kyro, it was the lack of hardware TnL on the GPU that was its main disadvantage. It relies on a powerful CPU to do this, and in the era of VS units on IMR GPUs, x86 CPUs just haven't kept up.

PowerVR did have a dedicated separate co-processor on the SEGA Naomi Arcade boards called ELAN that did the TnL. However, if the next gen G70/G80 is based on Gigapixel tech, then nVidia may have solved that issue with their VS units.

This is precisely why it's a good match for CELL with its VS power... ;)
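Just to illustrate what that CPU burden looks like: below is a rough sketch (made-up names, nothing vendor-specific) of the per-vertex transform-and-lighting work that falls on the host CPU when the GPU has no VS/TnL hardware:

```python
# Rough sketch of the per-vertex "software TnL" work the host CPU has to do
# when the GPU has no vertex hardware. Made-up names, illustrative only.

def transform_and_light(verts, normals, mvp, light_dir):
    out = []
    for (x, y, z), n in zip(verts, normals):
        # 4x4 matrix * vertex: transform to clip space, entirely on the CPU
        cx = mvp[0][0]*x + mvp[0][1]*y + mvp[0][2]*z + mvp[0][3]
        cy = mvp[1][0]*x + mvp[1][1]*y + mvp[1][2]*z + mvp[1][3]
        cz = mvp[2][0]*x + mvp[2][1]*y + mvp[2][2]*z + mvp[2][3]
        cw = mvp[3][0]*x + mvp[3][1]*y + mvp[3][2]*z + mvp[3][3]
        # simple per-vertex diffuse lighting, also on the CPU
        diffuse = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
        out.append((cx / cw, cy / cw, cz / cw, diffuse))
    return out

# A trivial example: three vertices, identity matrix, light along +Z.
mvp = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
verts = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)]
normals = [(0.0, 0.0, 1.0)] * 3
print(transform_and_light(verts, normals, mvp, (0.0, 0.0, 1.0)))
```

Even this toy loop is a few dozen floating-point operations per vertex, so at millions of vertices per frame it's easy to see why a weak host CPU becomes the bottleneck and why a dedicated geometry chip like ELAN, VS units, or something like CELL would make the difference.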
 