AMD: Speculation, Rumors, and Discussion (Archive)

So it's OpenCL's fault for being low-level, giving you more control, and allowing you to tune for different architectures? Do you want languages like RenderScript to take over, where you have less control? Then you can never get 100% of the performance out of any architecture. Take your pick.


It's not a fault; some things have to be done that way. But having to do that for every single piece of hardware and every generation just makes it a pain in the ass. This is what APIs are supposed to do: lessen the burden on programmers by setting the rules and guidelines to create something streamlined that works well on all hardware supporting that API. Yes, you still need to be wary of the underlying architecture, but not so much that you end up writing extremely different paths for every GPU, and then for different GPUs of the same generation too.

Look at mining, for example: can you tell me the difference between a CUDA miner and an OpenCL one? Which would work better overall if you optimized for the hardware and used all the features of the API?
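To illustrate the point, here is a toy, hypothetical "miner-style" kernel in OpenCL C (the kernel name and the mixing function are made up for the example, not taken from any real miner). The CUDA version of the same kernel is almost line-for-line identical; the real divergence between miners comes from per-architecture tuning of work-group sizes, memory layout, and vendor intrinsics, not from the kernel source itself.

/* Toy OpenCL C kernel; a hypothetical stand-in for a real hash round.
   The CUDA equivalent differs mostly in spelling:
   __kernel -> __global__, __global pointers -> plain device pointers,
   get_global_id(0) -> blockIdx.x * blockDim.x + threadIdx.x. */
__kernel void mix_hashes(__global const uint *input,
                         __global uint       *output,
                         const uint           nonce_base)
{
    const uint gid   = get_global_id(0);
    const uint nonce = nonce_base + gid;

    uint state = input[gid] ^ nonce;          /* seed per-thread state    */
    for (int i = 0; i < 16; ++i)              /* fake "hash" mixing loop  */
        state = (state * 2654435761u) ^ (state >> 13);

    output[gid] = state;                      /* one candidate per thread */
}

Where the two actually diverge is in how you pick the work sizes and lay out memory for a specific GPU generation, and that is exactly the per-architecture tuning I'm complaining about.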
 
I wonder how much of that is developers having to do an Nvidia-specific path and then just going with that instead of using the console rendering path, especially if those developers are closely tied to Nvidia. In that case you'd lose many of the optimizations that exist in the console versions.

id appears to have decided to use as much of the console rendering path as possible for their Vulkan rendering path in Doom. And it shows.

At worst, there is no performance gain on Nvidia hardware. At console resolutions and settings, however, there are gains on Nvidia hardware. And on hardware that uses GCN, the same as consoles, the gains are massive and universal.

I.e., they didn't have to do special GCN 1.0, 1.1, 1.2, etc. rendering paths with special tweaks, despite the consoles only being GCN 1.0-1.1. Everything from the 7xxx series up to the newest RX 4xx series benefits quite significantly at all resolutions and on all hardware.

It's relatively clear that DX12 and Vulkan are primed to radically change how PC graphics are done at the AAA level. How long it takes for developers to come to grips with that is the variable. I'm sure Nvidia-partnered developers will be heavily encouraged not to use the console code. But id has shown that if you do, there's potential for great performance gains on AMD and possibly even Nvidia hardware.

Regards,
SB


Who stated that? I don't remember id stating that it closely resembles the console path, certainly not for all the different versions of GCN...

Also, this is not just an engine thing. Different games that use the same engine will use different compute shaders too, so if a developer writes custom compute shaders beyond what the engine already has, things change for each game, don't they?
 
It's not a fault; some things have to be done that way. But having to do that for every single piece of hardware and every generation just makes it a pain in the ass. This is what APIs are supposed to do: lessen the burden on programmers by setting the rules and guidelines to create something streamlined that works well on all hardware supporting that API. Yes, you still need to be wary of the underlying architecture, but not so much that you end up writing extremely different paths for every GPU, and then for different GPUs of the same generation too.

Look at mining, for example: can you tell me the difference between a CUDA miner and an OpenCL one? Which would work better overall if you optimized for the hardware and used all the features of the API?

It's the job of middleware/engines to lessen the burden of development, not the API's. I want complete control over the architecture I'm writing my code for; otherwise the hardware would be sitting underutilized. That's what you are seeing in DOOM's Vulkan implementation: the Fury X had all those raw TFLOPS just waiting to be utilized. And the writing is on the wall.
 
It's the job of middleware/engines to lessen the burden of development, not the API's. I want complete control over the architecture I'm writing my code for; otherwise the hardware would be sitting underutilized. That's what you are seeing in DOOM's Vulkan implementation: the Fury X had all those raw TFLOPS just waiting to be utilized. And the writing is on the wall.


What, you want to say that again? You might want to tell Microsoft that about APIs, because that isn't their mentality... nor is it OpenGL's or Vulkan's either.

Yeah, so it took close to a year and many man-hours on one game to get that performance out of the Fury X...

Right, I can think of better ways to make Doom more fun to play, ways that would make it better for everyone instead of for the 1% of the 20% of the market share.
 
Do you know how many man-hours it took Nvidia to develop their DX11 driver and tune it for every single game out there? I'm sure this was nowhere near that.
 
What, you want to say that again? You might want to tell Microsoft that about APIs, because that isn't their mentality... nor is it OpenGL's or Vulkan's either.

Yeah, so it took close to a year and many man-hours on one game to get that performance out of the Fury X...

Right, I can think of better ways to make Doom more fun to play, ways that would make it better for everyone instead of for the 1% of the 20% of the market share.
Instead of continuously turning this thread into a sh@tfest, I think we can all agree that, in your opinion, async compute, shader intrinsics, etc. are a waste of time because they don't benefit Nvidia. We all understood you very clearly, I think... now let's get back on track and discuss AMD's tech, which is the main subject of this thread.
 
What, you want to say that again? You might want to tell Microsoft that about APIs, because that isn't their mentality... nor is it OpenGL's or Vulkan's either.

Yeah, so it took close to a year and many man-hours on one game to get that performance out of the Fury X...

Right, I can think of better ways to make Doom more fun to play, ways that would make it better for everyone instead of for the 1% of the 20% of the market share.

Except it benefits more than the 1% of the 20%, as it benefits all GCN-based cards. It also encompasses more than that 20%, because it also covers the entire PS4 and XBO market. I hate to break it to you, but AAA developers have to worry about and code for more than just the PC gaming market.

Here's what id has to say about it...

id Software itself is pretty clear about the advantages of Vulkan and async compute. We asked the team whether they see a time when async compute will be a major factor in all engines across platforms.

"The time is now, really. Doom is already a clear example where async compute, when used properly, can make drastic enhancements to the performance and look of a game," reckons Billy Khan. "Going forward, compute and async compute will be even more extensively used for idTech6. It is almost certain that more developers will take advantage of compute and async compute as they discover how to effectively use it in their games."

From http://www.eurogamer.net/articles/d...n-patch-shows-game-changing-performance-gains

There are some quotes there as well about how the same techniques were used on the consoles as well as in Vulkan.

Regards,
SB
 
That's what you are seeing in DOOM's Vulkan implementation: the Fury X had all those raw TFLOPS just waiting to be utilized
There are tons of geometry units in the 980 Ti just waiting to be utilized, so why on earth do I see this triangular nose http://i.imgur.com/OlHJjmL.png on Jensen then?
I would prefer better geometry over those flops in Doom any time, simply because 200 FPS at 1080p is bullshit; I never asked for this.
 
It is almost certain that more developers will take advantage of compute and async compute as they discover how to effectively use it in their games.
Which is a good thing. However, it seems there is more than one way of effectively coding this feature, since the method they incorporated currently only works for GCN-based cards. So there doesn't seem to be a "generic" method of implementing async compute.
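For what it's worth, here is a minimal sketch of what "implementing async compute" means on the Vulkan side (my own illustration, not id's actual code; the function name and fallback behaviour are assumptions). The idea is to find a queue family that exposes compute but not graphics, so compute work can be submitted on a dedicated queue and overlap with rendering. On GCN those queues map onto the hardware's ACEs; on hardware where a dedicated compute family isn't available, or doesn't actually overlap with graphics work, the same submissions end up mostly serialized, which is one reason the benefit isn't uniform across vendors.

/* Minimal sketch: locate a compute-only queue family for async compute.
   Returns the family index, or -1 if the device only offers combined
   graphics+compute queues (in which case async compute gains little). */
#include <vulkan/vulkan.h>
#include <stdint.h>
#include <stdlib.h>

int find_async_compute_family(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, NULL);

    VkQueueFamilyProperties *props =
        malloc(count * sizeof(VkQueueFamilyProperties));
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, props);

    int family = -1;
    for (uint32_t i = 0; i < count; ++i) {
        int has_compute  = (props[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  != 0;
        int has_graphics = (props[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        if (has_compute && !has_graphics) {
            family = (int)i;   /* dedicated compute family found */
            break;
        }
    }
    free(props);
    return family;
}

The queue-family query itself is standard Vulkan; the vendor-specific part is everything after it, i.e. how you schedule and balance the compute work against the graphics queue for a given GPU.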
 
Instead of continuously turning this thread into a sh@tfest, I think we can all agree that, in your opinion, async compute, shader intrinsics, etc. are a waste of time because they don't benefit Nvidia. We all understood you very clearly, I think... now let's get back on track and discuss AMD's tech, which is the main subject of this thread.


It took AMD a year after their GPU was released to show the "benefits", which is too little, too late; that has always been their problem, hasn't it? Yeah, and this does have a lot to do with AMD and how they are right now. To ignore that is not only disingenuous but also caters to the same problem. PC upgrade cycles happen too quickly to wait a year before you see something that might or might not help, because by that time the competitor has a new product and AMD has nothing to show for it.

And if you are getting emotional about anything, well, you don't need to post. I'm not getting emotional, just stating the obvious: async isn't the be-all and end-all of this generation, nor will it EVER be.

Silent_Buddha,
There is no mention of similar paths; similar techniques do not constitute similar paths.
And it's easy to see how much time and effort it took id and AMD to get this out. Just look at when the console version came out, then the PC version, then how long the Vulkan version took. It took three months for them to develop the Vulkan version, possibly even longer as they had it in their pipeline. And it will still take longer, because they are still working with nV on it too.

Also, the quote you picked out states perfectly what I said before.
It is almost certain that more developers will take advantage of compute and async compute as they discover how to effectively use it in their games.

They have to figure things out to get them working the way they want on a per-game, per-engine, per-graphics-card basis.
 
There are tons of geometry units in the 980 Ti just waiting to be utilized, so why on earth do I see this triangular nose http://i.imgur.com/OlHJjmL.png on Jensen then?
I would prefer better geometry over those flops in Doom any time, simply because 200 FPS at 1080p is bullshit; I never asked for this.

You can't be serious! DOOM looks wayyyyy better than whatever 2005-ish game screenshot you posted. In fact, it's the best-looking game at the moment. If you think 200 fps is bullshit, then please go buy a console.
 
From WCCFTech: "AMD RX 490 4K Gaming Card Listed By Sapphire & Spotted In Official Slide – Launching In 2016."

AMD-RX-490-Sapphire.jpg


WCCFTech speculates that the 490 is a dual Polaris 10.
 
There are tons of geometry units in the 980 Ti just waiting to be utilized, so why on earth do I see this triangular nose http://i.imgur.com/OlHJjmL.png on Jensen then?
I would prefer better geometry over those flops in Doom any time, simply because 200 FPS at 1080p is bullshit; I never asked for this.
Solid 60 fps at 1080p on a $150 card with maxed settings is pretty damn exciting, though. It's not all about the top-end cards.
 
You can't be serious! DOOM looks wayyyyy better than whatever 2005-ish game screenshot you posted. In fact, it's the best-looking game at the moment. If you think 200 fps is bullshit, then please go buy a console.


Doom isn't what the old Dooms were; they didn't push the visuals at all like id used to. Paragon, a multiplayer online game, has characters that look better from an art perspective, and I'm pretty sure they pushed the polygon counts higher than Doom did too. Although for what they were going for, getting back to the way the original Doom played and away from what happened with Doom 3, where you could only have a few monsters on the screen, it was done well.
 
Doom isn't what the old Dooms were; they didn't push the visuals at all like id used to. Paragon, a multiplayer online game, has characters that look better from an art perspective, and I'm pretty sure they pushed the polygon counts higher than Doom did too. Although for what they were going for, getting back to the way the original Doom played and away from what happened with Doom 3, where you could only have a few monsters on the screen, it was done well.
Off topic, but I'll say that Doom has one of the best PBR implementations I've seen, next to Star Wars and Ratchet & Clank. Also, art is subjective.
 
Off topic, but I'll say that Doom has one of the best PBR implementations I've seen, next to Star Wars and Ratchet & Clank. Also, art is subjective.

Well, I'm specifically talking mesh-detail-wise; I don't see Doom pushing anything new there. It's still got those crappy bendy elbows, where they should be using mesh morphs to fix them; not hard to do, but still not done yet lol.

PBR texture-wise they did a good job, but I still don't think it's up to the level of Paragon, Star Wars, or Ratchet & Clank. Yeah, the Unreal Engine lighting system gives Paragon's look a bit of a rubbery quality, but that's the engine, not the artwork itself. Even RSI's game looks better texture-wise.
 