AMD: Speculation, Rumors, and Discussion (Archive)

Bring on the 470s!

The 4GB RX 470 at $150 seems to be the ultimate value for 1080p gaming.

To think the closest thing in price we have now is the 2GB GTX 950...



BTW, it's going around that miners are mostly responsible for the shortage of RX 480 cards:

https://www.reddit.com/r/Amd/comments/4uytr5/eth_miners_are_the_reason_for_the_480_shortage/



On one hand, they're being sold, so it's revenue for AMD... On the other, they're not being sold for playing games, so it will limit the card's influence in the gaming market (like the HD 7970 back in 2012/2013), which is terrible...
 
But we don't really want a market flooded with cards that can't do conservative rasterisation and raster ordered views. So the miners can keep on keeping on.
 
But we don't really want a market flooded with cards that can't do conservative rasterisation and raster ordered views. So the miners can keep on keeping on.

I hate that we have cards in the market without VESA's Adaptive Sync support a lot more than I hate them not carrying certain performance-enhancing features like properly working async compute, conservative rasterization, raster ordered views, or others.
 
Yes, people seem to underestimate the importance of CR and ROVs whilst overplaying "async compute". I find it interesting that NVIDIA hasn't promoted these features like they did with tessellation and Fermi.

It is past time for NVIDIA to bite the bullet and support FreeSync. They can keep G-Sync as a premium option if they want and let us decide if it's worth the extra $$$.
 
Yes, people seem to underestimate the importance of CR and ROVs whilst overplaying "async compute". I find it interesting that NVIDIA hasn't promoted these features like they did with tessellation and Fermi.
They will; there are about 20 GameWorks games due throughout 2017, and some of them are bound to promote these features. Also, consoles don't support them, which makes them less widespread than other features.

I hate that we have cards in the market without VESA's Adaptive Sync support a lot more than I hate them not carrying certain performance-enhancing features like properly working async compute,

I thought people knew better by now: supporting async and gaining performance through it are completely different things. Different workloads will yield different results and could positively or negatively impact performance, and each architecture could stand to benefit or lose from it depending on the underlying optimization.
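For what it's worth, here is roughly what "supporting async" amounts to at the API level: a minimal D3D12 sketch (my own illustration, not from any particular engine) that creates a separate compute queue alongside the graphics queue. Whether the two actually overlap is entirely up to the hardware and driver, which is exactly why "supports async" and "gains from async" are different claims.

```cpp
// Minimal D3D12 sketch (illustrative; error handling omitted). "Async compute"
// just means submitting compute work on a queue that runs alongside graphics.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                  ComPtr<ID3D12CommandQueue>& computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC compDesc = {};
    compDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only
    device->CreateCommandQueue(&compDesc, IID_PPV_ARGS(&computeQueue));

    // Work submitted to computeQueue *may* execute concurrently with work on
    // gfxQueue; on some architectures it is simply serialized, which is why
    // support alone says nothing about the speedup.
}
```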
 
But we don't really want a market flooded with cards that can't do conservative rasterisation and raster ordered views. So the miners can keep on keeping on.
As the consoles cannot do it anyway, the feature has no meaning. Today you need an AMD GPU to be on the safe side when it comes to feature use, because AMD SoCs are powering the consoles.
 
I thought people knew better by now: supporting async and gaining performance through it are completely different things. Different workloads will yield different results and could positively or negatively impact performance, and each architecture could stand to benefit or lose from it depending on the underlying optimization.

Is there any evidence of GCN or Pascal losing performance - in real games - from async compute?

A number of console devs have released data showing how their move to async has boosted performance, and the results are most impressive. There's talk of moving more work over to async compute too.
 
Why not push more "out of order" rasterization instead?

More seriously, I was asking myself how long it would take before someone came into this thread with "async compute = bad... CR and ROV = excellent".
 
I hate that we have cards in the market without VESA's Adaptive Sync support a lot more than I hate them not carrying certain performance-enhancing features like properly working async compute, conservative rasterization, raster ordered views, or others.
That's apparently not hindering adaptive sync monitors from becoming more and more readily available at almost all price points, is it?

--
On another note: the religious quest for "true async support" per se makes about as much sense as demanding support for a native wavefront/warp size of 8 (or 4, or 2, or 1). That can also improve performance whenever you haven't been able to fill a wavefront with the full amount of work it can do. Same principle.
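To put some made-up numbers on that analogy, here is a quick back-of-envelope sketch (the workload size of 100 items is hypothetical; any C++ compiler will do) showing how narrower waves waste fewer idle lanes when the work doesn't divide evenly. Filling otherwise-idle lanes is the same principle async compute exploits at the machine level.

```cpp
// Back-of-envelope illustration of the wavefront-size analogy above.
#include <cstdio>
#include <initializer_list>

int main() {
    const int items = 100;  // hypothetical workload size
    for (int waveSize : {64, 8, 4}) {
        int waves = (items + waveSize - 1) / waveSize;      // round up
        double util = 100.0 * items / (waves * waveSize);   // filled lanes
        std::printf("wave%-2d: %d waves, %.1f%% lane utilization\n",
                    waveSize, waves, util);
    }
    // wave64: 2 waves, 78.1% -- wave8: 13 waves, 96.2% -- wave4: 25 waves, 100.0%
}
```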
 
Is there any evidence of GCN or Pascal losing performance - in real games - from async compute?
I'm sure devs have managed to break GCN during development. For released products, none that I'm aware of.

Possibly Doom with Vulkan for Pascal. Speculating here, but the weird framerate stuttering/vsync issues may be the result of a high-priority compute job for the TSSAA. Assuming that stands for temporal supersampling anti-aliasing, it would make sense to only perform that call during vblanks: render a series of frames without AA and then blend between them (temporal) while sampling nearby pixels (supersampling).

So every x frames a substantially larger compute ratio would be present, and the driver wasn't able to adjust the CU distribution quickly enough. That could explain the relatively good frametimes for most frames and the hitches around the 60Hz blanks.

It may just be a driver bug that needs ironing out, and I doubt it affects performance, but it would affect the overall experience. Input delay and stuttering might not show up on benchmarks, but they do affect experience. Pascal can handle async code; it likely can't handle the swings in the compute/graphics ratio that come with it.
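For anyone unfamiliar with the technique being speculated about, a generic temporal-AA accumulation step looks something like the sketch below. This is purely illustrative C++ of the general idea (not id's actual TSSAA resolver, whose internals aren't public in this thread).

```cpp
// Generic temporal-AA accumulation sketch: blend a little of the new
// (jittered) frame into the running history so coverage builds up over
// several frames.
struct Color { float r, g, b; };

Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

Color resolveTemporal(Color history, Color current, float blendFactor = 0.1f) {
    // A real resolver would also clamp 'history' against the current frame's
    // pixel neighborhood to reject ghosting; omitted here for brevity.
    return lerp(history, current, blendFactor);
}
```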
 
Is there any evidence of GCN or Pascal losing performance - in real games - from async compute?

A number of console devs have released data showing how their move to async has boosted performance, and the results are most impressive. There's talk of moving more work over to async compute too.

As far as I understood, the only drawback was the fact that you could not easily do audio and physics on the GPU due to latency issues. The new GCN microcode update, retroactive to GCN 1.1+ if I remember correctly, addresses exactly this.

Given that devs can easily monitor GPU occupancy, it would be something noticeable to them during development, too.
 
Why not push more "out of order" rasterization instead?

More seriously, I was asking myself how long it would take before someone came into this thread with "async compute = bad... CR and ROV = excellent".
Who is coming to this thread with "async compute = bad"? Async compute in its current form is a performance-enhancing feature and nothing more. It allows the GPU to overlap graphics work with compute work. It is therefore good (tm).
Conservative rasterisation and raster ordered views are not really performance-enhancing features. They make effects that are currently not practical to do in games doable. Just like rasterisation on its own.
I guess we could say it comes down to whether you think we need more frames or better-looking frames.
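As a concrete aside, this capability split is visible in how a D3D12 application discovers the two features discussed above. A minimal sketch (error handling reduced to a bool): unlike async compute, these are hard capability gates, so if the tier/flag isn't there the technique simply isn't available.

```cpp
// Sketch: query D3D12 for conservative rasterization and ROV support.
#include <d3d12.h>

bool QueryCrAndRovs(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    bool hasCR  = opts.ConservativeRasterizationTier
                    != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;
    bool hasROV = opts.ROVsSupported != FALSE;
    return hasCR && hasROV;
}
```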
 