AMD: "[Developers use PhysX only] because they’re paid to do it"

Yeah, given that Nvidia spends additional money on a lot of software projects, has way better drivers at least when it comes to OpenGL and Linux, helps out more and supposedly even "buys" developers to use their stuff, I wonder where ATI spends their income.
 
Perf/watt and perf/$, as you well know. Nvidia doesn't reach the required performance without massive overclocking and the associated heat/noise/power. Without that, we'd be looking at every aspect of Fermi's rendering and saying the performance is under par.

In other words AMD is pushing the envelope on nothing. You could've just said that instead of avoiding the question.

Quadros are not mainstream nor gaming cards. It's irrelevant to mention them and is just a tangent on your part.

The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?

Different architectures of course.

Ah, so not that easy after all then.

ATI tried to push it into DX years ago with Truform, but Nvidia resisted as they had no capable hardware and no plans for it. Now that Nvidia has tessellation as its one-trick pony, we're supposed to want and need every game to be nothing but tessellation?

The fact is that it's in DirectX now and available for use. AMD was praising the benefits of DX11 tessellation as well and pushed for its inclusion in games. Funny how when Nvidia does the same it's another evil plot on their part.
 
The fact is that it's in DirectX now and available for use. AMD was praising the benefits of DX11 tessellation as well and pushed for its inclusion in games. Funny how when Nvidia does the same it's another evil plot on their part.

The internet is such a funny place:

(1) The positive mention of DX11 is a rarity in recent communications from NVIDIA - except perhaps in their messaging that 'DirectX 11 doesn't matter'. For example, I don't remember Jensen or others mentioning tessellation (the biggest of the new hardware features) from the stage at GTC. In fact, if reports are to be trusted, only one game was shown on stage during the whole conference - hardly what I would call treating gaming as a priority!
http://forums.hexus.net/hexus-net/1...s-nvidia-neglecting-gamers-2.html#post1806514

Ironic how the world changed since last year.
 
Bouncing Zabaglione Bros. said:
Not just AMD owners, but everyone who has a lower Nvidia card where the tessellation performance drops off a cliff will suffer for Nvidia's attempt to bend the market to its own will and make everything all about tessellation, instead of about the best results for all gamers.

Tessellation levels are scalable, just like resolution and all the other settings that differentiate cards of different performance levels. I don't know why you're repeating AMD's hysteria. We want developers to increase geometric detail don't we?
 
In other words AMD is pushing the envelope on nothing. You could've just said that instead of avoiding the question.

The old "extend and embrace" argument, to justify splitting the market and controlling your own propriety APIs.

You don't think the products AMD produced nearly a year ago, at their price, performance, heat and energy usage, were pushing the envelope, but Nvidia's bastardised and crippled Fermi is great because it helps Nvidia push its own APIs and one trick pony advantages?

The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?

I just have to point to the titles on the market. What makes you think that Fermi's heavy tessellation performance, which is only strong on their tiny niche top cards, is going to be of any use in the gaming market for the life of the product? Where are the products that need that performance? And what are you going to say if AMD's tessellation performance goes up with their new generation? Are you going to tell us it's acceptable for Nvidia to use vendor ID lockouts like with Batman's AA?

Ah, so not that easy after all then.

I didn't say easy, I said "cheap" in terms of transistors and design. It depends on the architecture whether going overkill on something because it's easy, when you can't do the harder stuff (like making a cool, quiet card that doesn't need the power of the sun and isn't losing the manufacturer money), is viable.

The fact is that it's in DirectX now and available for use. AMD was praising the benefits of DX11 tessellation as well and pushed for its inclusion in games. Funny how when Nvidia does the same it's another evil plot on their part.

It's only a plot when they are doing it to hurt the market in their own favour. How long before we see Nvidia write some standard tessellation code for a dev, then lock it out with vendor ID checks, then use lawyers to enforce their copyright on said standard code? Not long I think. How is that good for PC gamers?
 
Tessellation levels are scalable, just like resolution and all the other settings that differentiate cards of different performance levels. I don't know why you're repeating AMD's hysteria. We want developers to increase geometric detail don't we?

So you're saying that Nvidia's high tessellation performance isn't really that important because lower performance will still give pretty good results?
 
You don't think the products AMD produced nearly a year ago, at their price, performance, heat and energy usage, were pushing the envelope, but Nvidia's bastardised and crippled Fermi is great because it helps Nvidia push its own APIs and one trick pony advantages?

Correct.

I just have to point to the titles on the market. What makes you think that Fermi's heavy tessellation performance, which is only strong on their tiny niche top cards, is going to be of any use in the gaming market for the life of the product?

Where'd you get the idea it's only strong on top end stuff?

http://www.hardware.fr/articles/801-5/dossier-sparkle-geforce-gts-450s.html

And what are you going to say if AMD's tessellation performance goes up with their new generation? Are you going to tell us it's acceptable for Nvidia to use vendor ID lockouts like with Batman's AA?

How long before we see Nvidia write some standard tessellation code for a dev, then lock it out with vendor ID checks, then use lawyers to enforce their copyright on said standard code? Not long I think. How is that good for PC gamers?

If your best rebuttal is to come up with evil things Nvidia "might" do that have absolutely nothing to do with tessellation then it's pretty clear you've realized that there's nothing to complain about.

So you're saying that Nvidia's high tessellation performance isn't really that important because lower performance will still give pretty good results?

As linked above, their tessellation performance is still pretty good on lower end cards. But your question is silly. That's like asking if Cypress's high performance isn't important because Redwood exists.
 
That games are more than just tessellation.

Does anyone think that Nvidia's extreme tessellation performance is balanced, or that games are going to need/use that level of tessellation in the next couple of years? Is it useful in any game you are playing now?

ATI used to get criticised for too many forward looking features, where Nvidia was always "works great in current games, you'll upgrade in a couple of years when you need more". Are we now saying the situation is reversed and Nvidia are touting a forward looking feature that's not going to be useful before the product gets superseded?

Is it balanced? Perhaps not, similar to how ATI in the past misjudged how quickly the industry would adopt high shader throughput for games. Just like Nvidia back in those days, ATI cards are perhaps more balanced with regard to current workloads. Just like ATI's cards back then, Nvidia's Fermi-based cards are a bit unbalanced with an eye towards a future where tessellation loads are higher.

One isn't necessarily better than the other. I don't mind either approach. Just as I liked the shader-heavy approach of ATI because I was looking ahead to what shader-heavy programs might provide (as we can see in current games), so too am I looking forward to a day when tessellation is a large and key factor in every game.

Will it progress similar to shaders with each generation of games becoming more and more heavily weighted towards shader performance? Hard to tell, but I can say for myself I hope so.

ATI used to get knocked for being too heavily shader-focused, unfairly I might add. Nvidia getting knocked for having too heavy a tessellation focus is also unfair. It's a feature I'm sure everyone would like to see more of.

There are far better things to criticize each company for. I don't think enhancing future looking features is necessarily a bad thing.

Rather, if a company is actively discouraging and hampering adoption of non-proprietary forward-looking tech (Nvidia with regard to DX10.1, for example), then feel free to lambast them. Or trying to push a proprietary tech with no intention of opening it up to everyone (ATI perhaps with tessellation a long time ago, or Nvidia with PhysX), where it actively divides the market and makes PC gaming less attractive...

But trying to look forward at where the industry is going? Or where you'd like the industry, as a whole, to go? No, I rather like that. So yeah, I liked the overabundance of (generally underutilized) shaders in the Radeon X1950 XTX, and I like the overabundance of tessellation power in the GTX 480. That doesn't mean there weren't drawbacks to each card. But I don't think beefing up forward-looking features is a waste.

Regards,
SB
 
Well, isn't the thing with tessellation, if it's done properly, that it can offer something for everybody?

For example, rounding off the barrels of guns. Near perfectly round would be great and something I would like.

But even some rounding is a good thing.

I don't understand the technology but I'll go out on a limb and say that if it can't be dialed in then maybe the game's developers don't have a full handle on it.
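
To illustrate how that "dialing in" could work, here is a minimal hypothetical sketch (the function name, quality tiers and factor values are made up, not from any real engine): the game simply maps a quality setting to the maximum tessellation factor it feeds to its hull shader, so weaker cards subdivide less.

// Hypothetical sketch: tessellation exposed as a scalable quality setting,
// much like resolution. Names and numbers are illustrative only.
#include <algorithm>
#include <cstdio>

// Map a user-facing quality level (0 = off ... 4 = extreme) to the maximum
// tessellation factor the engine would pass to its hull shader.
float MaxTessFactorForQuality(int qualityLevel)
{
    // D3D11 caps the tessellation factor at 64; lower tiers subdivide less.
    static const float kFactors[] = { 1.0f, 4.0f, 8.0f, 16.0f, 64.0f };
    qualityLevel = std::max(0, std::min(4, qualityLevel));
    return kFactors[qualityLevel];
}

int main()
{
    for (int q = 0; q <= 4; ++q)
        std::printf("quality %d -> max tess factor %.0f\n",
                    q, MaxTessFactorForQuality(q));
    return 0;
}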
 
Is it balanced? Perhaps not, similar to how ATI in the past misjudged how quickly the industry would adopt high shader throughput for games. Just like Nvidia back in those days, ATI cards are perhaps more balanced with regard to current workloads. Just like ATI's cards back then, Nvidia's Fermi-based cards are a bit unbalanced with an eye towards a future where tessellation loads are higher.
Fermi will be sufficient for future tessellation workloads, but it will be short on arithmetic, texturing, bandwidth, fillrate... Who will care about tessellation if the highest playable resolution is (let's say) 1024*768, where the triangles are small enough even without tessellation and tessellation's impact on quality is negligible?
 
If it's so cheap and easy, why did it only now make it into DirectX, and why didn't AMD also beef up tessellation performance?
You know, AMD is not known to go the cheap and easy route.
And who knows, maybe it wasn't that cheap or that easy either?

ATI tried to push it into DX years ago with Truform, but Nvidia resisted as they had no capable hardware and no plans for it. Now that Nvidia has tessellation as its one-trick pony, we're supposed to want and need every game to be nothing but tessellation?
AFAIR, both R200 and NV20 had HOS support, something along the lines of RT-patches and N-patches respectively.

WRT tessellation: it's useful not to be bottlenecked by it, and it's doubly useful if it serves your professional line of cards equally well. Of course, games are not all about tessellation; heck, they're not even all about graphics. But in a time when consoles are the primary development platforms for the majority of games, it becomes hard to convince developers to integrate something which really improves PC gaming - because it costs them money and they see only very limited benefit in return.

Using more geometry instead of texture tricks is one way to get this done and since the bus systems (apart from the CPUs) in current PCs are not able to handle hundreds of millions of polygons, tessellation for example makes sense. I want a brick wall to look like a brick wall - not only when I'm viewing it from just about the right angle.
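
As a rough, hypothetical back-of-the-envelope sketch of why that matters (the vertex sizes and counts below are invented for illustration, not measured from any game): shipping a coarse control mesh and amplifying it on the GPU keeps the heavy geometry off the bus entirely.

// Illustrative arithmetic only - assumed vertex sizes and counts. Compares
// uploading a coarse patch mesh (amplified by the GPU tessellator) with
// uploading a pre-tessellated dense mesh over the bus.
#include <cstdio>

int main()
{
    const double bytesPerVertex = 32.0;        // e.g. position + normal + UV
    const double coarseVerts    = 10000.0;     // coarse control mesh
    const double denseVerts     = 2000000.0;   // detail the GPU could expand to

    const double toMB = 1.0 / (1024.0 * 1024.0);
    std::printf("coarse mesh: %.2f MB over the bus\n",
                coarseVerts * bytesPerVertex * toMB);
    std::printf("dense mesh:  %.2f MB if tessellated on the CPU instead\n",
                denseVerts * bytesPerVertex * toMB);
    // With hardware tessellation the dense version never crosses the bus:
    // the GPU expands the coarse patches and displaces them each frame.
    return 0;
}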

And it's not like AMD wasn't promoting the benefits of tessellation before, is it? And after all, their cards have powerful tessellators too, which serve to improve the rendered images as well.

But you're right - it's not only about tessellation.

The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?
As I said above: engine (and game) development is mostly focused on consoles. That's one large factor, the other being that perhaps 90% of the (installed) market isn't even DX11-ready and so would need to push that geometry through the CPU and bus systems in your average PC.

Fermi will be sufficient for future tessellation workloads, but it will be short on arithmetic, texturing, bandwidth, fillrate... Who will care about tessellation if the highest playable resolution is (let's say) 1024*768, where the triangles are small enough even without tessellation and tessellation's impact on quality is negligible?
It almost sounds as if the bolded part is trying to say that the same geometric detail looks better at lower screen resolutions just because the triangles cover fewer pixels.
 
If your best rebuttal is to come up with evil things Nvidia "might" do that have absolutely nothing to do with tessellation then it's pretty clear you've realized that there's nothing to complain about.

So the fact that they've already done it for AA doesn't count? They also lock out PhysX when their own customers have the temerity to use a competitor's hardware, and they lock out 120 Hz monitors and motherboard SLI with hardware IDs unless you've paid them. It shows their modus operandi, and where they are happy to draw the line on acceptable behaviour.

If they've got devs out writing tessellation code, I'd not be surprised if they are also locking it out with vendor ID, because that's what Nvidia as an organisation thinks is reasonable behaviour. Nvidia don't care if it hurts the gaming market or the customer, and you'd have to be pretty naive to believe they won't keep on doing the same thing in the future when there's no reason to think they've changed how they operate.
 
Did Nvidia actually lock out something on a competitor's hardware which is part of an open spec? Like tessellation in a DX11 title?

Antialiasing in Batman:AA was/is a hack not possible by the standards of DX9, AFAIBT. So while it's not "nice" to not open it up, it's at least understandable - just imagine there'd been some kind of error running that code on ATI cards: "Nvidia injected viral code in order to cripple competitors' image quality/game performance ZOMG!"

PhysX otoh is their own technology, and every company I know of tries to protect their investments. But I am hearing Bullet and Havok are two very capable solutions, one of them even free and nearing its (GPU-accelerated) premiere in an application.
 
If they've got devs out writing tessellation code, I'd not be surprised if they are also locking it out with vendor id,
I'd be surprised to see it actually happen given the present competitive scenario. Few devs will be happy to see their DX11 games get screwed on the majority of the install base of DX11 machines.
 
I thought you could do MSAA on DX9 systems. MS's DX SDK even has a demo (with sample code and everything) on AA. AFAICS, it's been there for years now.

Right, but AFAIK, something with the UE3 prevents DX9 from generally using that. Or is that only a limitation of DX9 HW?
 
Loads of games have used UE3, and I am pretty sure the rest use MSAA on DX9 hardware. If it were a DX9 hardware limitation, then no DX9 game would have MSAA.

You can use AA under the DX9 API, but UE3 is not capable of it in combination with DX9. That's a specific problem of the engine. Most games using UE3 do not support AA out of the box.
 
That's what I seemed to remember: no MRT+MSAA in DX9 - not just a UE3 problem but an API limitation. I just looked it up again on Google.

http://www.gamedev.net/community/forums/topic.asp?topic_id=485166
"In the D3D9 help, in the index, look for "Multiple Render Targets". It says "No antialiasing is supported"."

http://http.developer.nvidia.com/GPUGems3/gpugems3_ch23.html
(yes, Evilvidia...)
"Acquiring Depth in DirectX 9 […] More seriously, MRTs are not compatible with multisample antialiasing (MSAA) in DirectX 9."
 