Perf/watt and perf/$, as you well know. Nvidia doesn't reach the required performance without massive overclocking and the associated heat/noise/power. Without that, we'd be looking at every aspect of Fermi's rendering and saying the performance is under par.
Quadros are not mainstream nor gaming cards. Mentioning them is irrelevant and just a tangent on your part.
Different architectures of course.
ATI tried to push it into DX years ago with TruForm, but Nvidia resisted as they had no capable hardware and no plans for it. Now that Nvidia has tessellation as its one-trick pony, we're supposed to want and need every game to be nothing but tessellation?
The fact is that it's in DirectX now and available for use. AMD was praising the benefits of DX11 tessellation as well and pushed for its inclusion in games. Funny how when Nvidia does the same it's another evil plot on their part.
http://forums.hexus.net/hexus-net/1...s-nvidia-neglecting-gamers-2.html#post1806514

The positive mention of DX11 is a rarity in recent communications from NVIDIA - except perhaps in their messaging that 'DirectX 11 doesn't matter'. For example, I don't remember Jensen or others mentioning tessellation (the biggest of the new hardware features) from the stage at GTC. In fact, if reports are to be trusted, only one game was shown on stage during the whole conference - hardly what I would call treating gaming as a priority!
Bouncing Zabaglione Bros. said:
Not just AMD owners, but everyone who has a lower Nvidia card where the tessellation performance drops off a cliff will suffer for Nvidia's attempt to bend the market to its own will and make everything all about tessellation, instead of about the best results for all gamers.
In other words, AMD is pushing the envelope on nothing. You could've just said that instead of avoiding the question.
The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?
Ah, so not that easy after all then.
Tessellation levels are scalable, just like resolution and all the other settings that differentiate cards of different performance levels. I don't know why you're repeating AMD's hysteria. We want developers to increase geometric detail, don't we?
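To make the scalability point concrete, here is a minimal C++ sketch, not from any real engine: the function name, slider, and falloff constant are all invented for illustration, but the [1, 64] factor range is D3D11's actual tessellator limit.

```cpp
// Hypothetical sketch: scaling tessellation detail like any other quality
// setting. 'userDetail' and 'patchTessFactor' are illustrative names only.
#include <algorithm>

// D3D11's tessellator accepts per-patch factors in [1, 64]
// (D3D11_TESSELLATOR_MAX_TESSELLATION_FACTOR in d3d11.h).
constexpr float kMaxTessFactor = 64.0f;

// Map a quality slider (0..1) and camera distance to the factor the hull
// shader would read from a constant buffer each frame.
float patchTessFactor(float userDetail, float distanceToCamera)
{
    // Halve the detail with every doubling of distance past 20 units
    // (an arbitrary falloff, chosen only for the example).
    float falloff = 20.0f / std::max(distanceToCamera, 20.0f);
    float factor  = userDetail * kMaxTessFactor * falloff;

    // A factor of 1 simply passes the patch through untessellated.
    return std::clamp(factor, 1.0f, kMaxTessFactor);
}
```

A card with weaker tessellation hardware simply ships with the slider set lower, the same way a slower card runs at a lower resolution.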
You don't think the products AMD produced nearly a year ago, at their price, performance, heat and energy usage, were pushing the envelope, but Nvidia's bastardised and crippled Fermi is great because it helps Nvidia push its own APIs and one-trick-pony advantages?
I just have to point to the titles on the market. What makes you think that Fermi's heavy tessellation performance, which is only strong on their tiny niche top cards, is going to be of any use in the gaming market for the life of the product?
And what are you going to say if AMD's tessellation performance goes up with their new generation? Are you going to tell us it's acceptable for Nvidia to use vendor ID lockouts, like with Batman's AA?
How long before we see Nvidia write some standard tessellation code for a dev, then lock it out with vendor ID checks, then use lawyers to enforce their copyright on said standard code? Not long, I think. How is that good for PC gamers?
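For anyone unfamiliar with the mechanism being complained about: a "vendor ID lockout" is nothing more exotic than a branch on the GPU's PCI vendor ID, which any application can read through DXGI. The actual Batman:AA code is not public, so the sketch below is a guess at the mechanism, not the real implementation.

```cpp
// Illustrative sketch of a vendor-ID check; names and structure are mine.
#include <dxgi.h>
#pragma comment(lib, "dxgi.lib")

bool RunningOnNvidia()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return false;

    IDXGIAdapter* adapter = nullptr;
    bool isNvidia = false;
    if (SUCCEEDED(factory->EnumAdapters(0, &adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = ATI/AMD.
        isNvidia = (desc.VendorId == 0x10DE);
        adapter->Release();
    }
    factory->Release();
    return isNvidia;
}

// The complaint: gating a standard feature on a check like
//   if (RunningOnNvidia()) EnableInGameMSAA();
// locks out hardware that is perfectly capable of running the same code.
```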
So you're saying that Nvidia's high tessellation performance isn't really that important because lower performance will still give pretty good results?
That games are more than just tessellation.
Does anyone think that Nvidia's extreme tessellation performance is balanced, or that games are going to need/use that level of tessellation in the next couple of years? Is it useful in any game you are playing now?
ATI used to get criticised for too many forward-looking features, whereas Nvidia was always "works great in current games, you'll upgrade in a couple of years when you need more". Are we now saying the situation is reversed, and Nvidia are touting a forward-looking feature that's not going to be useful before the product gets superseded?
Fermi will be sufficient for future tessellation workloads, but it will be short on arithmetic, texturing, bandwidth, fillrate... Who will care about tessellation if the highest playable resolution is (let's say) 1024*768, where the triangles are small enough even without tessellation and tessellation's impact on quality will be negligible?

Is it balanced? Perhaps not, similar to how ATI in the past misjudged how quickly the industry would adopt high shader throughput for games. Just like Nvidia back in those days, ATI cards are perhaps more balanced with regard to current workloads. Just like ATI's cards back then, Nvidia's Fermi-based cards are a bit unbalanced, with an eye towards a future where tessellation loads are higher.
You know, AMD is not known to go the cheap and easy route.

If it's so cheap and easy, why did it only now make it into DirectX, and why didn't AMD also beef up tessellation performance?
AFAIR, both R200 and NV20 had HOS support, something in the line of RT- and n-Patches respectively.

ATI tried to push it into DX years ago with TruForm, but Nvidia resisted as they had no capable hardware and no plans for it. Now that Nvidia has tessellation as its one-trick pony, we're supposed to want and need every game to be nothing but tessellation?
As I said above: engine (and game) development is focused mostly on consoles. That's one large factor, with the other being that what feels like 90% of the (installed) market isn't even DX11-ready, and so would need to push that geometry through the CPU and bus systems in your average PC.

The point is that geometry power can be used now. We don't need to wait for the future. What are the obstacles you see today for increasing geometric detail that lead you to believe Evergreen's geometry throughput is good enough?
It almost sounds as if the bolded part is trying to say that the same geometric detail looks better at lower screen resolutions just because the triangles have fewer pixels.

Fermi will be sufficient for future tessellation workloads, but it will be short on arithmetic, texturing, bandwidth, fillrate... Who will care about tessellation if the highest playable resolution is (let's say) 1024*768, where the triangles are small enough even without tessellation and tessellation's impact on quality will be negligible?
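Some back-of-the-envelope arithmetic makes the resolution argument concrete (the one-million-triangle scene is an assumed figure, picked only for illustration):

```cpp
// Pixels available per triangle at two resolutions, for a scene with an
// assumed one million on-screen triangles filling the whole screen.
#include <cstdio>

int main()
{
    const double triangles = 1.0e6; // assumed on-screen triangle count

    double at1024 = (1024.0 *  768.0) / triangles; // ~0.8 px/triangle
    double at2560 = (2560.0 * 1600.0) / triangles; // ~4.1 px/triangle

    std::printf("1024x768:  %.1f px/tri\n", at1024);
    std::printf("2560x1600: %.1f px/tri\n", at2560);
    // Below roughly one pixel per triangle, further subdivision adds no
    // visible detail, which is the point about low playable resolutions.
}
```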
If your best rebuttal is to come up with evil things Nvidia "might" do that have absolutely nothing to do with tessellation then it's pretty clear you've realized that there's nothing to complain about.
I'd be surprised to see it actually happen given the present competitive scenario. Few devs will be happy to see their DX11 games get screwed on the majority of the install base of DX11 machines.

If they've got devs out writing tessellation code, I'd not be surprised if they are also locking it out with vendor ID,
I thought you could do MSAA on dx9 systems. MS's dx sdk even has a demo (with sample code and everything) on AA. AFAICS, it's been there for years now.

Antialiasing in Batman:AA was/is a hack not possible by the standards of DX9,
I thought you could do MSAA on dx9 systems. MS's dx sdk even has a demo (with sample code and everything) on AA. AFAICS, it's been there for years now.
Right, but AFAIK, something about UE3 prevents DX9 from generally using that.
If that were true, then no dx9 game would have MSAA. Or is that only a limitation of DX9 HW?
Loads of games have used UE3 and I am pretty sure the rest use MSAA on dx9 hw.
If that were true, then no dx9 game would have MSAA.
You can use AA under the DX9 API, but UE3 is not capable of it in combination with DX9.
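To illustrate the distinction being drawn here: plain D3D9 will happily hand out a multisampled back buffer, as in the sketch below (device setup is abbreviated and the helper name is mine). UE3's limitation is usually attributed to its deferred rendering passes, which need to read render targets back as textures, and D3D9 provides no way to do that per-sample on a multisampled surface.

```cpp
// Minimal sketch of requesting 4x MSAA through the plain D3D9 API; 'd3d9'
// and 'hwnd' are assumed to be created elsewhere.
#include <d3d9.h>
#pragma comment(lib, "d3d9.lib")

IDirect3DDevice9* CreateMsaaDevice(IDirect3D9* d3d9, HWND hwnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed         = TRUE;
    pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_X8R8G8B8;
    pp.MultiSampleType  = D3DMULTISAMPLE_4_SAMPLES; // 4x MSAA back buffer

    // Check the mode is actually supported before asking for it.
    if (FAILED(d3d9->CheckDeviceMultiSampleType(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, pp.BackBufferFormat,
            pp.Windowed, pp.MultiSampleType, nullptr)))
        return nullptr;

    IDirect3DDevice9* device = nullptr;
    d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                       D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
    return device; // D3DRS_MULTISAMPLEANTIALIAS defaults to TRUE
}
```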