nV40 info w/ benchmarks

or maybe they'll start using rotated grid MSAA with gamma correction and they'll prove how this breakthrough innovative technique at 2x provides the same quality as the former 8x modes at 7 times the performance
 
I think that it is wrong to believe that nV’s acceptance of the nV30 as a failure rests solely on its performance. IMHO, it was just one part of a strategy they were going to use to wrest API design away from Microsoft. They thought they would do it their way with Cg and multiple precisions. We all know how it ended, but the nV30 was only one piece of the puzzle.
 
Heathen said:
Of course they study the implications. That doesn't mean they judge correctly. That's what I was talking about. And the fact that companies often don't judge correctly is borne out almost daily.

Yeah, it's a shame nvidia misjudged the situation so badly; never thought I'd hear you admit it though, Chal.

Seriously though, you still show no real appreciation of the complexities of project management. The fact that the R300 and its derivatives achieved such popularity on their own strengths proves ATi made a significant number of correct decisions. Defining 'correctly' is very ambiguous; doing it in black and white is next to impossible unless you get a complete implosion of the company that made the decision, and even then there are still benefits for the industry (whatever industry that is), assuming they're willing to learn.

In our specific case it could actually be argued that Nvidia is in a better theoretical position than ATi because they have more to learn from the NV3* project than ATi has to learn from the R3** project.

The only important questions are:
1) Can they learn from what's occurred? (So far the consumers seem more bothered about performance and IQ than about theoretical objections to the differences between FP24 & FP32.)
2) Can they (or, more importantly, are they willing to) apply these lessons to next-generation parts?

If the answer to either question is no, then nvidia is in trouble. ATi's issues are less apparent but no less important for them to answer. Hopefully both companies are up to the challenge, as resting on your laurels and believing things are rosy, while dissing the opposition, is the easiest thing to do in the world.

A stupid person has more to learn than a clever person, but who is in the better position?
 
nelg said:
I think that it is wrong to believe that nV’s acceptance of the nV30 as a failure rests solely on its performance. IMHO, it was just one part of a strategy they were going to use to wrest API design away from Microsoft. They thought they would do it their way with Cg and multiple precisions. We all know how it ended, but the nV30 was only one piece of the puzzle.

Cart before the horse. Why is it that with NVidia, people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?

I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?
 
DemoCoder said:
nelg said:
I think that it is wrong to believe that nV’s acceptance of the nV30 as a failure rests solely on its performance. IMHO, it was just one part of a strategy they were going to use to wrest API design away from Microsoft. They thought they would do it their way with Cg and multiple precisions. We all know how it ended, but the nV30 was only one piece of the puzzle.

Cart before the horse. Why is it that with NVidia, people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?

I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?
Finally!

Now you're thinking. ;)
 
DemoCoder said:
Cart before the horse. Why is it that with NVidia, people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?

I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?

DemoCoder, please tell me then: if you feel this is not the case, just what was Cg and the lack of DX9 compatibility in the FX series all about? Really, I'm not being sarcastic at all... I really want to know your insight on this.
 
http://www.rage3d.gr/board/showthread.php?s=&threadid=340

Designing a graphics chip is all about budgeting transistors and deciding how to allocate them most efficiently. For our R300 DirectX 9 architecture (used in the RADEON 9500, 9600, 9700, and 9800 series), our designers resisted the temptation to add a lot of unnecessary features that we felt game developers would not use, or that would not noticeably improve speed or image quality. Instead, they kept very close to the base DX9 specification and added more pipelines, more shader units, and better compression technology. Not only did this allow our products to easily outperform the competition, but it allowed us to do it with lower clock speeds, with fewer transistors, and without requiring a lot of additional software optimization (which is really appreciated by game developers).

Here is the solution for success: biggest bang for your buck. I would imagine this is where Nvidia will be focusing their attention after the NV3X fiasco; however, some of the bad design decisions they are locked into will negatively impact them.
 
DemoCoder said:
nelg said:
I think that it is wrong to believe that nV’s acceptance of the nV30 as a failure rests solely on its performance. IMHO, it was just one part of a strategy they were going to use to wrest API design away from Microsoft. They thought they would do it their way with Cg and multiple precisions. We all know how it ended, but the nV30 was only one piece of the puzzle.

Cart before the horse. Why is it that with NVidia, people have to come up with such asinine theories? So NVidia spent hundreds of millions of dollars, not to design a chip to sell for money, but with the goal of stealing control of an API?

I guess the GF2 was just an attempt to control OpenGL with proprietary NV extensions eh?

I did say one part of a strategy. IMHO, yes, I think that nVidia thought (thinks) they should be the driving force behind the API. They certainly had the mind share, developer relationships and market clout to move in this direction. If successful, they would have been able to further cement their position as the leading maker of GPUs. This of course would surely put the horse back where it belonged and keep it there. ;)

edit: My history in the GPU world is very short, but it seems nVidia may have tried this before.
It's a quirk of fate that Nvidia had anything to do with the Xbox - or any other Microsoft product. In Nvidia's early days, Huang tried to end-run Microsoft with a proprietary programming interface. The decision, he now admits, almost killed Nvidia. In a move of desperation, he directed his engineers to build GPUs to work with Microsoft's Direct3D standard. It not only saved the company, but established a partnership that eventually led to the contract to develop the chipset for the Xbox - a contract worth as much as $500 million a year.
 
martrox said:
then just what was Cg and the lack of DX9 compatibility in the FX series all about?
I always thought it was to cover up the lack of PS power of the NV30, and more importantly of the whole FX line, since the NV30 was supposed to be the base for their chips for at least two years.
 
IMO, Cg was to lock developer support down, not to co-opt an API. Having your hardware as the primary development platform is a huge step toward winning the benchmark wars, and thus online mindshare.
 
IMHO, Cg began as an effort to provide an HLSL tool (there were none available when it shipped unless you count Stanford's RTSL) and expose features of the NVidia HW. IHVs can't always wait for standards bodies to agree and ship some lowest common denominator. That's why OpenGL extensions exist.
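To make the extension mechanism concrete, here is a minimal sketch of my own (not anything from the NVidia SDK): detecting a vendor extension at runtime, assuming an active OpenGL context. The extension name is just an example.

#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if the named extension appears in the driver's extension
   string. Assumes a current OpenGL context; a production check would match
   whole tokens rather than substrings, but strstr is enough for a sketch. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* usage: if (has_extension("GL_NV_fragment_program")) take the NV-specific path */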

Now, certainly this is for the benefit of developers, to attract them to the HW, as all developer tools are. After all, if your HW has extra features but there is no API support for them, you have wasted silicon and developers won't bother. HW support for APIs is, if anything, a process of "mapping" what you've got onto what they specify. It is not a process of taking what they specify and building a card from that recipe that conforms 100%.

But to flip this upside down is moronic. IHVs build new features into HW without asking other IHVs and ISVs for a "consensus" to approve their features. That is, they don't ask Microsoft to hand them a list of features to implement in their HW. They work on the features first, then they hand those features to MS, and MS mediates with other IHVs on which ones to standardize. People have this mistaken notion that standards processes go like this: discussion/paper spec agreement first -> design of HW begins -> HW produced and sold. Instead, the HW is designed first, IHVs lobby for their featureset to get API support, they end up tweaking their design to fix areas that don't map to the specification, and any unexposed features are left for OGL extensions or future DX releases (more lobbying).


Ultimately, in the standards process, some features are left unexposed. No standard API maps 1:1 with HW. Therefore, IHVs are incentivized to provide hooks to access native features.


The NV1 used quadratic surfaces instead of polygons. NVidia didn't design this chip in this manner so they could "steal control of DirectX 1.0". Come on, a startup company with few employees and little money envisions dominating Microsoft instead of worrying about shipping their HW to market and getting enough money to survive!?

NVidia designed the NV1 that way probably because, at the time, there was some debate as to the proper primitive on which to build a 3d card; Nvidia just chose an odd design, and it needed a custom API to expose it, just like 3dfx wrote Glide in the early days because they wanted an API that was lightweight and exposed their featureset. OGL was too bulky, and DirectX too shitty.

There is way too much overanalysis in these forums. Every engineering decision is seen as part of some master plan. The truth is, some guy had an itch to scratch -- write an HLSL shading language, and NVidia wound up with Cg. Even before Cg, there was an NVidia project that allowed HLSL-style compilation for GeForce2/GF3 register combiners. If I were an employee with language/compiler design skills, I'd be hacking away on tools, not waiting for some hypothetical tool from OGL or MS years down the pipeline.


Likewise, for the NV1 and NV30, the engineers thought they had come up with some clever architecture and clever features, but they didn't work out. They did not deliberately design a piece of shit card for some master plan to destroy DirectX. They designed a piece of shit card, and then they had to do a lot of work to mitigate its problems as much as possible -- hence cheating, driver optimizations, better compilers, HW tweaks (NV35, etc).


At best, Cg was, as stated, an attempt to get developers interested in using Nvidia HW as their dev platform. But so are NVidia's numerous demos, source code, documentation, and presentations available on developer.nvidia.com and at GDC -- to help ISVs and make them like your platform and developer support.


For a company that supposedly has so many "evil master plans" (tm), including I guess, a plan from the days of the NV1 up to the NV30 to "steal control of DX", they sure aren't doing very well.
 
DemoCoder, I agree with you completely. Unfortunately some people don't understand what Cg is, let alone what nvidia were trying to accomplish with it. It was a good step forward in developer tools and support. Nvidia is still supporting Cg, as well as the other high-level languages. The quality of FXComposer just goes further to prove their dev relations support.

Also, I totally agree with the API/standards/specs thing.
 
thatdude90210 said:
martrox said:
then just what was Cg and the lack of DX9 compatibility in the FX series all about?
I always thought it was to cover up the lack of PS power of the NV30, and more importantly of the whole FX line, since the NV30 was supposed to be the base for their chips for at least two years.

it was there to hide the gf3 and gf4 failures as well. people suddenly thought, "WOW, THE GF3 CAN RUN CG, IT MUST ENABLE A SUPER HW PATH!!"

sort of. soon after, people realised that cg just maps to ps1.1/1.3, and the hw is still not able to do real pixel shading (pre-ps1.4 is crap, pre-ps2.0 is not programmable).
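(rough sketch for illustration, not from the post: this is roughly what that mapping looks like through the Cg runtime, asking for the dx8-level ps_1_1 profile. the shader source and names are made up; anything the profile can't express simply fails to compile, and the hw doesn't get any more programmable.)

/* compile a trivial cg fragment program for a ps_1_1 target and print the
   generated low-level code; assumes the Cg toolkit headers/libs are installed */
#include <stdio.h>
#include <Cg/cg.h>

static const char *src =
    "float4 main(float2 uv : TEXCOORD0,"
    "            uniform sampler2D tex) : COLOR"
    "{ return tex2D(tex, uv); }";

int main(void)
{
    CGcontext ctx = cgCreateContext();
    CGprogram prg = cgCreateProgram(ctx, CG_SOURCE, src,
                                    CG_PROFILE_PS_1_1, "main", NULL);
    if (cgGetError() != CG_NO_ERROR)
        printf("compile failed:\n%s\n", cgGetLastListing(ctx));
    else
        printf("%s\n", cgGetProgramString(prg, CG_COMPILED_PROGRAM));
    cgDestroyContext(ctx);
    return 0;
}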

cg was great for hiding their big failures in designing opengl extensions for all of their hw: register combiners, texture shaders, nv fragment programs. they are all rather unusable for developers, badly designed, overly complicated, and so proprietary that they will never get adopted into the standard.
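(for a sketch of what "overly complicated" means in practice, here is the raw NV_register_combiners setup for a single texture-times-vertex-colour modulate, i.e. one line of cg. this assumes a current gl context on hardware exposing the extension, with the entry points loaded through GLEW; it is an illustration, not production code.)

#include <GL/glew.h>  /* provides the NV_register_combiners entry points */

/* one general combiner stage computing spare0 = texture0 * primary colour,
   then a final combiner that passes spare0 through unchanged */
static void setup_modulate_combiner(void)
{
    glEnable(GL_REGISTER_COMBINERS_NV);
    glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);

    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                      GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                      GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                       GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                       GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

    /* final combiner computes A*B + (1-A)*C + D; set A = spare0 and B = 1 */
    glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                           GL_UNSIGNED_INVERT_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
    glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                           GL_UNSIGNED_IDENTITY_NV, GL_RGB);
}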

that's what cg was made for: hiding their past and (then) future faults. with another fault :D
 
AndrewM said:
DemoCoder, I agree with you completely. Unfortunately some people don't understand what Cg is, let alone what nvidia were trying to accomplish with it. It was a good step forward in developer tools and support. Nvidia is still supporting Cg, as well as the other high-level languages. The quality of FXComposer just goes further to prove their dev relations support.

Also, I totally agree with the API/standards/specs thing.

hm.. you're a developer?

cg was nothing more than marketing hype all along. huge downloads, filled with tons of buggy bits, and such complex library/api entanglements that you were forced either not to use nvidia tools at all, or to take it all and nearly drop support for anything else.

no, cg was not thought out, and not designed well. it was really just marketing, to hype their nv30 launch, to hide the power of the radeon 9700, to hide the fact that even the 8500 was more powerful in terms of shading capabilities than any gf3/gf4 could ever be. that's all it was made for.

hiding their software faults with a common tool on top of them. hiding their hw faults by not really supporting the other vendors.

and this has nothing to do with conspiracy. it's rather logical that nvidia wanted to do this.

but it's not something people accept. the nv30 fiasco has shown that.
 
Dave, you're right, the Cg runtime API was quite buggy, and the compiler had some issues. It's quite a bit better now, with the latest release.

At the time it was released though, there was no other language to use, and it was quite an improvement over using the low-level interfaces. Sure, you might not have wanted to ship with it when it first came out, but there was still nothing stopping you from compiling your program to the low-level targets and hand-tweaking/using the output that way.

You have been ATI-biased for quite a while now, and your posts are more of the same rubbish.
 
i'm not ati-biased. i'm simply against the marketing bullshit, and the false promises nvidia has given over the last few years.
they hype all their hw, and fail to provide proof that it is really capable of doing what they state.
they always try to do their own thing (similar to sony, for example: they can't stick to a real standard until they fail, and then they adopt the other one)
their extension design in opengl is laughable. they expose the hw directly, which is a good thing, but they could still actually DESIGN their api extensions in a nice way. the only nice thing they did was NV_vp, and that was ripped from the dx design specs.
the feature set they provide in dx is laughable, too. yes, fp32 is great, but there is no way to load or store fp32 data, so the nv30 is essentially useless for the high-precision, complex math stuff that needs multipass. which is what they hyped it for, too: scientific calculation acceleration.
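(a plain-c toy to illustrate the multipass point, not actual nv30 code: if every intermediate result has to be squeezed into an 8-bit render target between passes, rounding error piles up no matter how precisely each individual pass computes internally.)

#include <stdio.h>

/* simulate storing a value in an 8-bit-per-channel render target, the only
   intermediate storage assumed available between passes in this toy */
static float store_8bit(float x)
{
    int q = (int)(x * 255.0f + 0.5f);
    if (q < 0)   q = 0;
    if (q > 255) q = 255;
    return q / 255.0f;
}

int main(void)
{
    float full  = 0.5f;   /* chain kept at fp32 between passes */
    float quant = 0.5f;   /* chain round-tripped through 8 bits each pass */
    int pass;

    for (pass = 0; pass < 16; ++pass) {
        full  = full  * 0.93f + 0.004f;
        quant = store_8bit(quant * 0.93f + 0.004f);
    }
    printf("fp32 chain: %.6f   8-bit chain: %.6f\n", full, quant);
    return 0;
}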

they are currently hyping the nv40 as 6x, 8x faster than the gfFX, and again boldly stating that forceware2 provides (up to) 20% performance enhancements. their marketing unit is simply too big compared to the rest.

their real problems don't lie in the actual hw or sw. they are just a result of their inability to really work together within their own company.



i'm still following the whole thing rather closely. i honour nvidia for the good things they do, and i blame them for the bad things. i can do, and i _do_, the same for ati. it's just a matter of fact that nvidia has done too much wrong to me and to the community in general to have a good reputation in my eyes at the moment. they somehow always get away with it. this makes me sad.

i've been in this since the gf1.

again, i'm not trying to make ati look good. i'm merely trying to point out that nvidia's REASONS are simple capitalist motives, but their company fails to organise itself in a way that keeps the bad parts of capitalism from drifting outside. marketing. (also known as propaganda) :D

most people still love nvidia. they did a lot of good to get to such a position, where people follow them blindly. it takes a lot not to abuse this. even developers got blinded a lot. cg was the first time many started to think they were going too far; the first "cg is nvidia's glide" statements started to pop up. but nvidia's interest in having their own glide is much older..
 
How can shipping a developer tool be "going too far"? That's just silly. NVidia shipped a crappy first-run implementation of their compiler. Developers were warned ahead of time. It was labeled *BETA*. What more do you want? The first RenderMonkey version sucked too. Was the first release of ASHLI stable? How long has 3dLabs been shipping a buggy compiler? Developers are expected to be a little more tolerant of beta and in-progress code, since they are the early adopters and give the best feedback.


The idea that just because the first version of a product isn't sufficiently abstract or generic it is "really just marketing" is balderdash. The vast majority of version 1.0 software is very fixed in function, not pluggable, and usually not generic. As most software evolves through more features and versions, it becomes more generic, more well thought out, more pluggable. This applies to word processors, 3d engines, web browsers, and compilers.

Cg was buggy and NVidia-specific because, guess what, it's hard to implement NxM different hardware profiles and backends. They did the logical thing that anyone would do: develop a compiler, make it work with one frontend and one backend, hack away until it's stable, and then go back and add support for other backends. 3dLabs is doing the same thing. They implemented one compiler for their HW, and released only a toy frontend to the public with no backends for, say, ARB_fragment_program generation.


I work at a company now that is developing software which breaks (i.e. uses a modified version of) existing IETF internet standards. Clients that use our server must use the new protocol. The reason for choosing to diverge from the standard had ZERO to do with marketing, and everything to do with engineering decisions. None of the internal company discussions mention "lock-in" to our server; indeed, we don't care if anyone else implements the protocol, and we are lobbying others to adopt it. We wanted something that delivers an efficiency boost, and the existing infrastructure didn't deliver it.


There is too much of a tendency on this board to view everything through a negative prism. When Cg was released, there was nothing else like it. Cg's existence drove MS HLSL, which means it was a good thing. NVidia is within their rights to release IDEs, compilers, libraries, and toolkits that work more efficiently with their HW. IHVs are within their rights to provide proprietary extensions to standards. Developer tools are not evil.
 
developer tools that are monolithic, huge, and that force you, if you want to use one bit, to use it all, are stupid.

if they are the only real way to program for some vendor, this is even more stupid. i call this evil.

and btw, there has been a negative trend about nvidia's tactics for a while, yes. but i'm not bringing this up only now that it's trendy to bitch about nvidia. i made those statements over 1.5 years ago, and suggested not going in that direction. shortly after, they presented the nv30, cg, and all the fuss. i criticised that bad trend of nvidia's before those things arrived. once they came, i simply felt disappointed (as did quite a few people inside nvidia, btw..).

this is not about bitching about them now. i said a long time ago that this would go in the wrong direction and would not be good for nvidia. i myself loved them, trusted them, and helped them. i have no problem doing that again. cg failed, and they dropped support to a minimum. the nv30 failed. their driver department failed. their one-size-fits-all solutions aren't really going to be updated anymore. in the end, all the bad things from nvidia faded away..

except one thing, of course: their marketing :D now the nv40 hype starts to come up... simply boring. i'll wait till there is real hw to touch.

but one thing is for sure: if they do something right, i'll stand behind them. i recently simply forgot to mention that i have huge belief in nvidia, and there is still a lot of good stuff from them.

and even if they did a lot wrong, this at least lets others move on and do it better, and right. and if everything else is shit, that at least is something good.

i like both ati and nvidia. i especially like competition; we would get nowhere without it. i like matrox's work as well, the parhelia displacement mapping is awesome technology. too bad it failed.


i know quite a few people had to bleed after everything that happened inside the nvidia corporation. the nv30 was a big failure, from the start till now. it cost them very much. even though it's not a failure for the people who own a (recent) gfFX, because those cards are quite capable and the drivers are finally more or less fine, it is a 100% failure for nvidia.


they should've never bought 3dfx.. :D
 