Nvidia GT300 core: Speculation

I think you're oversimplifying it a lot. There's the matter of expertise and comfort with Nvidia's tools and support. It's not like us where we just stick the card in a slot and install a driver. Many companies probably have valuable relationships with Nvidia that can't be replaced simply by sticking an ATi card into the motherboard.

I think in a lot of cases it is just a stick-the-card-in-the-slot-and-go situation. I think companies choose NV for similar reasons that consumers choose McDonald's over Bob's Happy Burger. Known brand, established reputation, least amount of perceived risk.

We really need some actual professional OGL guys to comment here, though; otherwise it's just useless speculation.
 
I think a very low branch granularity GPU is possible and will provide some very big advantages as everything moves to (sub) pixel sized features.
Please define "very low branch granularity GPU"
 
Known brand, established reputation, least amount of perceived risk.

Wouldn't you do the same if you were running the company? The bigger the company and the more complex the technology, the harder it gets to switch vendors. I don't buy the plug-and-play argument - the hardware is just one small piece. You're not taking the associated software and support into consideration.

I don't think OGL developers would be experts on this. This decision would be made by technology management.
 
We really need some actual professional OGL guys to comment here, though; otherwise it's just useless speculation.
It is still just useless speculation unless you get a meaningful sample size and it is randomly selected, neither of which is terribly likely. Therefore, enjoy your speculation.
 
I don't think OGL developers would be experts on this. This decision would be made by technology management.

A lot of the time it's just "Never change a winning team", I suppose.
You see the same with AMD vs Intel.
"Just as good" isn't a good enough argument to change vendors. You have to offer a significant advantage in some way.
 
Maybe the PhysX people, with their ideas for hardware, have had an effect. NVidia really needs an application to show off any major re-design that's intended to produce significantly reduced divergence penalties. That could be physics acceleration, but there's still the fundamental problem of gameplay physics being "in CPU space" not GPU space.

If on the other hand the design is mostly focussed on maximising the efficiency of data shared by multiple strands and/or scratch buffers for multiple passes, then it's prolly easier to demonstrate a benefit, e.g. with algorithms requiring reductions or seemingly with the new bits of the D3D11 pipeline.

Jawed
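As a rough illustration of the "data shared by multiple strands" case, here is a minimal CUDA sketch, with hypothetical kernel and buffer names, of a block-wide sum reduction staged through on-chip shared memory - the sort of reduction workload where demonstrating a benefit is comparatively easy:

```cuda
// Minimal sketch (hypothetical names): a block-wide sum reduction that
// stages partial results in on-chip shared memory, i.e. data shared
// between the strands of one block. Assumes blockDim.x == 256 and a
// power-of-two block size.
__global__ void blockReduceSum(const float* in, float* out, int n)
{
    __shared__ float partial[256];          // one slot per strand in the block

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    partial[tid] = (i < n) ? in[i] : 0.0f;  // each strand loads one element
    __syncthreads();

    // Tree reduction: each step halves the number of active strands.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = partial[0];       // one partial sum per block
}
```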
 
there's still the fundamental problem of gameplay physics being "in CPU space" not GPU space.

I think there's a lot to accomplish with just good old eye-candy before they consider gameplay physics to be a must have. Bottom line is that gameplay physics adoption has nothing to do with performance and everything to do with proliferation - GT300 isn't going to be the vehicle that brings gameplay physics acceleration to the masses. However, there is a lot to be gained from fancy eye-candy physics.

If on the other hand the design is mostly focussed on maximising the efficiency of data shared by multiple strands and/or scratch buffers for multiple passes, then it's prolly easier to demonstrate a benefit, e.g. with algorithms requiring reductions or seemingly with the new bits of the D3D11 pipeline.

Whatever they do, they'd better capitalize on it to accelerate the 3D APIs as well; otherwise I imagine they'll end up looking even sillier.
 
Wouldn't you do the same if you were running the company? The bigger the company and the more complex the technology, the harder it gets to switch vendors. I don't buy the plug-and-play argument - the hardware is just one small piece. You're not taking the associated software and support into consideration.

I don't think OGL developers would be experts on this. This decision would be made by technology management.

Of course I would. AMD needs to combat this with some serious devrel investment, because that's the only way ground can be gained. Associated software and support might play a role in some companies; in others it probably plays no role at all. Software development companies probably have lots of interconnected systems that rely on a specific IHV's hardware, but I'm sure a lot of other companies are completely hardware agnostic and just want something that runs their OGL CAD apps/Maya/3ds Max etc.

It is still just useless speculation unless you get a meaningful sample size and it is randomly selected, neither of which is terribly likely. Therefore, enjoy your speculation.

lol good point
 
Which is not so bad, speculation is fun ...

Personally I think it's hard to reconcile Dally's statement about Larrabee not being radical enough with just continuing the present type of GPU architecture ... in the end Larrabee is not so different apart from the caches. The parallel execution of shaders through SIMD and vectorized loads/stores is very similar. I think a very low branch granularity GPU is possible and will provide some very big advantages as everything moves to (sub) pixel sized features.

It isn't bad, yet it still sounds silly to take one theoretical number you might have heard about (probably transistor count in this case) and start with some funky speculative backwards math based on reasoning from existing GPUs.

See, speculation is fun as long as someone like you suggests something less evolutionary than existing solutions.

A GPU with 5 wide VLIW or even scalar cores capable of independent branching.

Scalar as a CPU or current GPU definition? Either way, can I pick the 2nd one or even better a solution that can bounce between VLIW and scalar according to demand? (ok that's a wishlist and not speculation but it's equally fun ;) ).
 
Either way, can I pick the 2nd one or even better a solution that can bounce between VLIW and scalar according to demand? (ok that's a wishlist and not speculation but it's equally fun ;) ).

I'm missing why exactly people want the ability to bounce between "scalar" and "vector". It isn't like they need separate registers like LRB. Running scalar means you're tossing out 16x or 32x or whatever of your ALU capacity, and if you want to do that then it's easy, just predicate out all but one lane of your vector. What is really needed beyond what is already there (in NVidia's GPUs) is branch/call by register.
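To put the predication point in code: a toy CUDA kernel (hypothetical names) where everything but lane 0 of each warp is predicated out, which is exactly the "tossing out 31/32 of your ALU capacity" scenario:

```cuda
// Toy illustration (hypothetical names): 'scalar' execution on a 32-wide
// SIMT machine by predicating out every lane except lane 0. The warp is
// still issued; 31 of its 32 ALU lanes are simply masked off and idle.
__global__ void scalarViaPredication(const float* in, float* out, int numWarps)
{
    int lane = threadIdx.x & 31;                              // lane within the warp
    int warp = (blockIdx.x * blockDim.x + threadIdx.x) >> 5;  // global warp index

    if (lane == 0 && warp < numWarps)     // predicate: only lane 0 does work
        out[warp] = in[warp] * 2.0f;      // "scalar" work at 1/32 of peak rate
    // The other 31 lanes fall through and contribute nothing.
}
```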
 
A GPU with 5 wide VLIW or even scalar cores capable of independent branching.

I think a better definition would be:

A GPU with branch granularity at the level of a single pixel, in the case where every pixel is effectively running an independent shader and branching on dynamic data, such that there is no performance difference compared to a workload in which all pixels are running the same shader.

Yes, it's never going to happen. The cost relative to the benefit is way too high to really design a GPU with very low branch granularity.
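For context, a toy CUDA kernel (hypothetical names) showing the case that definition targets: when pixels mapped to the same warp disagree on a data-dependent branch, current SIMD-grouped hardware executes both paths serially under masks, whereas single-pixel branch granularity would let each pixel pay only for its own path:

```cuda
// Toy example (hypothetical names) of the branch-granularity problem.
// If the 32 pixels mapped to one warp take different sides of this
// data-dependent branch, the warp runs cheapPath() and then
// expensivePath() back to back with partial masks, paying for both.
__device__ float cheapPath(float x) { return x * 0.5f; }

__device__ float expensivePath(float x)
{
    float acc = x;
    for (int i = 0; i < 64; ++i)        // stand-in for a long shader path
        acc = acc * acc * 0.999f + 0.001f;
    return acc;
}

__global__ void shadePixels(const float* depth, float* color, int numPixels)
{
    int pixel = blockIdx.x * blockDim.x + threadIdx.x;
    if (pixel >= numPixels) return;

    // Branch on dynamic per-pixel data: divergence within a warp is likely.
    if (depth[pixel] < 0.5f)
        color[pixel] = cheapPath(depth[pixel]);
    else
        color[pixel] = expensivePath(depth[pixel]);
}
```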
 
What you're basically saying is this: "why earn $300 when you can earn $100". Do you really want to say something like this? Because that's clearly b.s.


The bolded part isn't true at all. I know this for sure. So all the other parts of that statement are wrong as well.

Too bad you are 1000% wrong... I mean come on, you didn't even bring any data to back up your claim.

[Chart: aib_mkt_jpr_q3_2006.jpg — AIB market figures (JPR, Q3 2006)]


So back then the $80-$250 market made up 88% of the revenue...
While the enthusiast was only 4%...
Hmmm... who was wrong?

Edit: Razor1, hopefully you weren't addressing me with your post.
 
And that too is incorrect, because in the end it didn't help them gain market share; they hurt the entire industry, and it was worthless.

The last thing for sales to do is drop price. The first thing they do is show value; the next thing they do is compare quality, what makes their product better; the last thing they do is drop their pants and bend over on price. Cost isn't the reason why people buy certain things. If sales gives them a reason to buy, they will buy if they are in the market, and they will buy according to their budget. They won't go out there just because prices are lower (well, most people won't), and definitely not in this market.
 
Scalar as a CPU or current GPU definition?
It doesn't matter: even if a bunch of scalar cores run the same shader program, if they can independently branch they are autonomous (apart from the fact that to make good use of the shared memory they should be well coordinated with each other).
Either way, can I pick the 2nd one or even better a solution that can bounce between VLIW and scalar according to demand? (ok that's a wishlist and not speculation but it's equally fun ;) ).
Well, that was what I was getting at with the earlier statement about putting 4 wide crossbars on the shared memory (and potentially the registers as well). That is exactly what it would allow you to do.
 
Is RV770 MPMD, where each of the 10 clusters runs independently?

Why do I think we're being bullshitted?

Jawed
 
And that too is incorrect, because in the end it didn't help them gain market share; they hurt the entire industry, and it was worthless.

What a horrible thing they did bringing a new level of performance to an affordable price point.

Wrong attitude for an enthusiast to have.
 
What a horrible thing they did bringing a new level of performance to an affordable price point.

Wrong attitude for an enthusiast to have.


No, I didn't say that, did I? It's great that they did it, but for the industry as a whole, for them, and for nV, they just hurt themselves for no reason at all. You really think lower prices were meant to benefit consumers? You think that's how corporations operate, to make us happy? It's the other way around. Corporations want money. Of course, to make money they have to keep us happy, but that is not done by dropping prices ;), that is a poor marketer's/salesman's technique to get a sale. In this case, if AMD had been able to sell more they would have gotten more money, but with the drop in sales due to the bad economy the lower price actually had very little effect. Why else did ASPs drop for AMD last quarter while they netted less?

http://www.insidefurniture.com/insi...ce-sales-is-desperate-flypaper-marketing.html

OK, this is about furniture companies, but the same thing applies in any industry.

Rampant price cutting as a path to the next turnaround is cutting off your nose to spite your face. There really is a place to succeed, and that place is identifying ways to meet the needs of consumers. The next turnaround in the making now is the mother of all turnarounds, taking no prisoners, attempting a hope-filled mercantile Bataan death march back to business.
Manufacturers only wanting to sell, sell, sell rely on their own hope-based voodoo methods. They have a surprise coming. Hope isn’t a strategy to stimulate sustained business. Hope without a substantive plan is skywriting always blown away by the winds of reality.

http://nigeria.smetoolkit.org/nigeria/en/content/en/431/Tactics-to-Avoid-Lowering-Your-Prices

http://www.derbymanagement.com/knowledge/pages/tactics/myths.html

Myth 7: Customers care only about price. It's obvious why so much of the selling conversation revolves around the bottom line: It's easy to talk about, there's plenty of information, and it's quantifiable. But anyone who believes that is where the process starts and ends should seriously consider finding a new line of work -- a field where he or she can put their honesty, integrity, and desire to help to good use. After all, that's what salespeople do when we're not talking about unit cost!

http://www.entrepreneur.com/magazine/entrepreneur/2009/april/200710.html

The link above goes through very good marketing/sales tactics associated with price.
 
Last time I checked ATI delivered great products with a great price. Can't see anything wrong with that.
 
Last time I checked ATI delivered great products with a great price. Can't see anything wrong with that.


That is true, but there was a downside to it too. It's not like nV was going to sit there and let AMD take market share away; if AMD had been able to market as well as nV, they wouldn't have had to drop prices. That's the whole point!
 