Nvidia's plans for the next gen ...

Aeros405

Newcomer
I found this article:
http://news.zdnet.co.uk/story/0,,t269-s2103925,00.html

This is really interesting:
>>The next generation of Nvidia chips will offer a feature to make game developers' lives easier in the form of fully programmable hardware, according to sources. This means that unlike with current chips, which implement features in hardware, developers will be able to specify which functions the chip carries out using a firmware image.

Game developers would be able to test out new chip features even as Nvidia is developing them, meaning a far shorter lead time before those features find their way into new games. It currently takes about 12 months from the time a new graphics chip is launched before new games can fully take advantage of it.

A fully programmable architecture would also make it easier to build multi-processor graphics cards, as one GPU (graphics processing unit) could be programmed to carry out half of the chip's functions while another GPU took care of the remaining functions.
<<

So what the hell is it?
 
So what the hell is it?

A real GPU? ;)

Seriously, such a chip would be a tremendous leap forward. But I highly doubt it will really be "fully programmable" as represented in the article ("features on demand"). More likely it's a chip that can handle microcode ("firmware") updates to enable features to a certain extent.

Think of programmable DSPs.
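To put it in concrete terms, here's a toy C sketch of what I mean (everything in it is invented for illustration and has nothing to do with any actual NVIDIA interface): the "firmware image" is basically a table that tells otherwise generic units which operation to perform, so flashing a new image changes the feature set without changing the silicon.

```c
/* Toy model of a microcode-programmable pipeline stage.
 * Everything here is made up for illustration -- it just shows the idea
 * of a "firmware image" selecting which function a generic unit performs,
 * instead of that function being wired in. */
#include <stdio.h>
#include <stdint.h>

typedef uint32_t (*stage_fn)(uint32_t a, uint32_t b);

static uint32_t op_add(uint32_t a, uint32_t b)  { return a + b; }
static uint32_t op_mul(uint32_t a, uint32_t b)  { return a * b; }
static uint32_t op_lerp(uint32_t a, uint32_t b) { return (a + b) / 2; }

/* A "firmware image": one opcode per pipeline stage. */
typedef struct {
    uint8_t opcode[4];
} firmware_t;

static const stage_fn op_table[] = { op_add, op_mul, op_lerp };

static uint32_t run_pipeline(const firmware_t *fw, uint32_t a, uint32_t b)
{
    uint32_t v = a;
    for (int s = 0; s < 4; s++)
        v = op_table[fw->opcode[s]](v, b);
    return v;
}

int main(void)
{
    /* "Flashing" a different firmware image changes what the chip does
     * without changing the hardware model at all. */
    firmware_t blend_fw = { { 1, 0, 2, 2 } };
    printf("%u\n", (unsigned)run_pipeline(&blend_fw, 3, 5));
    return 0;
}
```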
 
That's fine, but with it being fully programmable ... are the instructions or code universal or proprietary? Game developers certainly don't want two different APIs/instruction sets to deal with ;)
 
Developers surely won't have to deal with coding down to the metal (firmware); that's Nvidia's job. Devs will just have to handle OpenGL 2.0 and DX9. :smile:
 
So what the hell is it?

From that article, it sounds like some sort of oddball high-level FPGA. While FPGAs do offer the ultimate form of programmability, they generally do so at the expense of a rather bad performance/price ratio - doesn't sound like anything Nvidia will do anytime soon.

And I don't see how extra programmability is going to make multi-chip solutions easier - if anything, it could introduce serious problems with inter-chip communication and data coherency (which are easily avoided in more fixed-function chips).
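For what it's worth, here is a rough C sketch of the kind of split the article seems to imagine, with one chip handling the front half of the pipeline and the other the back half (the names and numbers are all made up). It mostly shows where the pain is: every intermediate result has to cross an external bus between the two chips.

```c
/* Hypothetical split of one pipeline across two programmable chips.
 * Names and structure are invented for illustration only. */
#include <stdio.h>

typedef struct { float x, y, z; } vertex_t;
typedef struct { float x, y; float shade; } fragment_t;

/* Chip A is "programmed" to do the geometry work... */
static fragment_t chip_a_transform(vertex_t v)
{
    fragment_t f = { v.x * 0.5f, v.y * 0.5f, v.z };
    return f;
}

/* ...chip B is "programmed" to do the per-fragment work. */
static float chip_b_shade(fragment_t f)
{
    return f.shade * 0.8f;
}

int main(void)
{
    vertex_t v = { 1.0f, 2.0f, 0.75f };

    /* In a single chip this hand-off is just internal wiring.  Split across
     * two chips, every fragment_t has to travel over an external bus and
     * both sides have to agree on its exact state -- that is the
     * inter-chip communication / coherency cost. */
    fragment_t f = chip_a_transform(v);
    printf("shaded: %f\n", chip_b_shade(f));
    return 0;
}
```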
 
I'd love to see a massively parallel, fully programmable architecture (although it should have a separate section for the rasterizer, even if that section is in turn fully programmable ... there is no one-size-fits-all architecture). I think ZDNet has been hitting the bong a little too hard, though.

As long as D3D stays 3D, what's the use? I don't foresee DX having a VM interface which would let you program external hardware on such a fine scale in the near future. Hell, I don't foresee developers being in a very good position to make use of it in the beginning :) Even the EE isn't a very good example; it's more about low-level parallelism and micro-management. Massive parallelism will require a different approach.
 
Game developers would be able to test out new chip features even as Nvidia is developing them, meaning a far shorter lead time before those features find their way into new games. It currently takes about 12 months from the time a new graphics chip is launched before new games can fully take advantage of it.

I'd like to know how they think game developers can test features when Nvidia hasn't finished designing them yet. If this just means relying on a C model of the hardware, developers will waste a lot of time waiting for computers to crunch.
 
I think the article is confusing two separate ideas:

+ fully programmable hardware.
+ more programmable graphics pipeline.

It's well known that NVIDIA uses fully programmable hardware to design its graphics chips. Long before the chips are taped out, NVIDIA runs 1/10th speed, but fully working, simulations of the chips on multi-million-dollar boxes stuffed with FPGA chips.

And it's well known that DX 9 and OpenGL 2.0 are planning on supporting more generalized pixel shader languages.

I think the author of the article optimistically put these two facts together.

But it's unlikely that an FPGA solution would be cheap enough or fast enough to succeed in the market.
 
My bad, I overlooked the mention of firmware ... they are just saying the hardware will be programmable, not that developers will be able to do the programming. That's not unthinkable: they could design the logic for a programmable architecture much more carefully than for a fixed-function chip (because they can just replicate it across the chip). With the short lifetime, short time to design (ask 3dfx ;), and the amount of silicon with poor utilization in a fixed-function design, it might well be a net win (except perhaps for some specific functions such as texture filtering and other parts of the rasterizer).

It would kind of suck if the chips were only programmable in the very limited sense of NVIDIA pre-produced configurations, though. I hope (against reason) that NVIDIA would fully expose the underlying architecture ... it would rock for demo programming, and would be sure to attract academic interest in using it as a more general-purpose co-processor (if they are going this way, I assume they will want to use the know-how in other markets eventually, and this would be an excellent way to accelerate that).

If this is true, NVIDIA will probably just use it as the ultimate marketing tool, though; the firmware model would allow them to easily differentiate products. The piecewise improvement strategy they use now (by just not exposing hardware features right away in the drivers) can be taken a whole lot further. It would be a grand waste :(

I still maintain that for game developers it would seem pretty much useless in the short term even if they exposed it, apart from the opportunity to update features through firmware updates as mentioned, until someone comes up with an efficient, architecture-independent parallel VM.

Marco

[ This Message was edited by: MfA on 2002-02-10 18:01 ]
 
This is good, just so long as games that take advantage of the programmability aren't tied to a particular architecture/graphics card.
 
Maybe it is microcode-programmable, like Pyramid3D, Pixel Fusion 150 and some Rendition chips?

Practically speaking, this wasn't a surprise, because there have been a lot of rumours flying around about the R300 and its programmability.
 
3dlabs is doing the same thing....

http://www.eetimes.com/semi/news/OEG20020130S0052

"...3Dlabs has put forward proposals for OpenGL 2.0 that would simplify the high-end graphics interface, in effect turning it into a high-level, C-based graphics-programming language. The plan would enable software for a hybrid graphics architecture the company is designing that will meld pieces of a traditional hardwired graphics pipeline processor with programmable SIMD arrays in a device that could require 70 million transistors...."
 
The level of programmability needed to implement OpenGL 2.0 is a little different from the level of programmability you would need to, say, produce a chip at the time of DX9 and be able to patch it to run DX10 applications efficiently.
 
"General-purpose processor" is a bit of a misnomer ... it would not be a general-purpose processor in the sense of the CPU in today's desktops. It would be more of an application-specific processor :)
 
Would a TTA work, with the 'microcode' being streams of move instructions? Basically a bloody great big crossbar switch or two surrounded by FUs...

(Call me weird but I have a strange fascination with TTAs/Move-Machines :oops:)
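For anyone who hasn't run into TTAs, here's a tiny C model of the idea (my own toy encoding, not any real machine): the whole "program" is nothing but moves between function-unit ports, and writing to a trigger port is what fires an operation.

```c
/* Toy transport-triggered architecture (TTA) model.  The encoding and
 * port numbering are invented for illustration: a program is just a
 * stream of "move src -> dst" transports, and writing the trigger port
 * of a function unit is what makes it compute. */
#include <stdio.h>

enum {                 /* port addresses on the "crossbar"            */
    IMM_A, IMM_B,      /* read-only ports holding immediate operands  */
    ADD_OP, ADD_TRIG,  /* adder: operand port and trigger port        */
    ADD_RES,           /* adder result port                           */
    OUT_TRIG           /* writing here "prints" the moved value       */
};

typedef struct { int src, dst; } move_t;

static int port[6];

static void transport(move_t m)
{
    int value = port[m.src];
    switch (m.dst) {
    case ADD_OP:
        port[ADD_OP] = value;
        break;
    case ADD_TRIG:                      /* trigger: perform the add   */
        port[ADD_RES] = port[ADD_OP] + value;
        break;
    case OUT_TRIG:
        printf("result = %d\n", value);
        break;
    }
}

int main(void)
{
    port[IMM_A] = 3;
    port[IMM_B] = 4;

    /* The entire "instruction stream": nothing but moves. */
    move_t program[] = {
        { IMM_A, ADD_OP },     /* 3 -> adder operand                  */
        { IMM_B, ADD_TRIG },   /* 4 -> adder trigger: computes 3 + 4  */
        { ADD_RES, OUT_TRIG }, /* result -> output unit               */
    };

    for (unsigned i = 0; i < sizeof program / sizeof program[0]; i++)
        transport(program[i]);
    return 0;
}
```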
 
Such a structure wouldn't make sense. It would replace the routing nightmare of a huge multiported central register file in a normal general-purpose processor with a crossbar of the same proportions.

The obvious structure for the interconnect would be a mesh network. What the computing nodes should be is the tough one; TTA might be an option there (ReMove looks interesting).
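Rough numbers on why, purely back-of-the-envelope: a full crossbar between N nodes costs on the order of N^2 crosspoints, while a 2D mesh of the same N nodes only needs about 2N links.

```c
/* Back-of-the-envelope comparison of interconnect cost:
 * a full crossbar between N nodes needs on the order of N*N crosspoints,
 * while a 2D mesh of N nodes needs roughly 2*N links (each node connects
 * to its right and bottom neighbour, minus the edges). */
#include <stdio.h>

int main(void)
{
    for (int side = 4; side <= 16; side *= 2) {
        int n = side * side;                 /* nodes in a side x side grid  */
        long crossbar = (long)n * n;         /* crosspoints                  */
        long mesh = 2L * side * (side - 1);  /* horizontal + vertical links  */
        printf("%4d nodes: crossbar ~%8ld crosspoints, mesh ~%4ld links\n",
               n, crossbar, mesh);
    }
    return 0;
}
```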

Marco

[ This Message was edited by: MfA on 2002-02-11 16:29 ]
 
Maybe this is a bit off topic, but I'm really wondering whether there is much room for a speed jump in NV30 over the GF4.

Unless nVidia is making a leap forward with a whole new design to solve bandwidth (like GigaPixel tech), I sense that the new part will mainly be about more features - e.g. DX 9.

Just think about their silicon budget... Sure, they may move to the 0.13 process, but with 63 million transistors already on the GF4 (0.15 process), that change won't really give you enough silicon real estate to pack in more than those extra DX 9 features, AFAIK.
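Just to put rough numbers on that (ideal scaling only, which real designs never achieve): a shrink from 0.15 to 0.13 buys you roughly a third more transistors in the same die area.

```c
/* Rough scaling arithmetic only -- ideal shrink, ignores design changes.
 * Going from a 0.15 micron to a 0.13 micron process scales linear
 * dimensions by 0.13/0.15, so density goes up by (0.15/0.13)^2 ~ 1.33x. */
#include <stdio.h>

int main(void)
{
    double gf4_transistors = 63e6;            /* as quoted for the GF4 */
    double density_gain = (0.15 / 0.13) * (0.15 / 0.13);
    printf("same die area at 0.13um: ~%.0f million transistors\n",
           gf4_transistors * density_gain / 1e6);
    return 0;
}
```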

Add to that the fact that the NV25 has been highly tweaked and optimized (hardware and driver) by now and I have a hard time seeing NV30 making a big performance impact in the short term (at least one year).

Sure, the more flexible DX 9 might cut an extra pass here or streamline an instruction there (I don't have the knowledge of that stuff), but a large performance increase with a few million more transistors?

I think that, after all, NV20/NV25/NV2A was a larger jump over NV10/15 than NV30 will be.

Regards, LeStoffer.
 
Well, keep in mind that the NV20/25 core will be at least 2 years old by the time "NV30" comes out (not sure, but I believe the first Geforce3s were out before October 2000). Also keep in mind that Nvidia has started getting patents based on Gigapixel technology (whether they actually use it is another matter). Further, it will be almost 2 years since 3dfx went down. I'm not saying Nvidia will or won't use Gigapixel in the desktop space, or that they will necessarily push performance that much further beyond NV20/25 in NV30; I'm just pointing out that it's been a long time since Nvidia actually came out with a new core... I suspect we'll see presentations on Nvidia's DX9 implementation at GDC and Meltdown, if not actual demos of prototype hardware (we know from http://cmp.bluedot.com/re/attendee/gdc_02/speakerPage.esp?speakerId=36539581 that Nvidia will discuss DX9 shaders at GDC). If Geforce4 was actually ready last year, as has been alluded to in previews, and Nvidia releases a DX9 part in the fall (as Dan Vivoli told Wavey), it's going to be interesting... (cliche ;))

Edit: Whoops, Nvidia changed the wording of their presentation at GDC... A month or two ago it stated discussion of cool new effects using DirectX9-class graphics cards...

[ This Message was edited by: ben6 on 2002-02-11 22:08 ]
 