AMD: Speculation, Rumors, and Discussion (Archive)

I don't see how they could possibly sell Fury (giant chip + interposer + HBM) at the same price as salvaged GP104. Or rather, they could do that, but then they're back to zero or negative profit margins in a rather important segment (mid-to-high end).

They will not... Everybody is discussing this without having any information about the performance of the new chips, whether it's GP104 or Polaris... Everyone is swimming in the dark.
 
They will not... Everybody is discussing this without having any information about the performance of the new chips, whether it's GP104 or Polaris... Everyone is swimming in the dark.
Please have a look at the thread title.
 
Everybody is discussing this without having any information about the performance of the new chips, whether it's GP104 or Polaris... Everyone is swimming in the dark.
If only we had some information about the process they'll be using, some die sizes, some information about the architectures, and some idea about clocks, not to mention which kind of DRAM. That would have allowed us to make some educated guesses.
 
BTW, I noticed that in that little diagram with the "new" tags for Polaris, there was no "new" tag next to the rasterizer. So does this mean no conservative rasterization for Polaris?
 
I haven't seen anything specific in Pascal related to VR that makes it better than Maxwell. But I also have a hard time seeing what makes AMD so special.

To me, VR seems to be what 4K was a while back: a hook on which to hang a marketing story, but not something that makes a major difference at the chip architecture level. In the end, triangles will still have to be rendered...

I suggest you start here as a primer - http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/gr_proc_req_for_enabling_immer_VR.pdf

You may have heard of the author? There are quite a few other VR advantages AMD has (in both hw and sw) that I can help you with if interested.
 
It's very kind of you to point to this marketing pamphlet sponsored by AMD (though I greatly respect the writer), but it's not helpful in pointing out the VR-related HW differences between AMD and Nvidia, which is what I was asking about.

Looks like I have to do it myself then.

LiquidVR: direct to display
VRworks: direct mode

LiquidVR: front buffer rendering
VRworks: front buffer rendering

LiquidVR: affinity multi-GPU
VRworks: VR SLI

LiquidVR: latest data latch
VRworks: not sure. But it's a software feature where the driver inserts copy operations into the command stream. Nvidia has standalone copy engines as well.

LiquidVR: somewhat related to latest data latch.
VRworks: context priority.

LiquidVR: ? Probably doable in software, but maybe at a higher performance cost than doing it in hardware.
VRworks: multires shading. Based on multiprojection hardware feature of Maxwell.

LiquidVR: async computing
VRworks: not in Maxwell. The async computing seems to be mostly mentioned as a performance improvement and not a specific VR feature. Multires is probably a much bigger boost than async, but who knows?

Given all that: can you make a convincing case that AMD HW is inherently superior for VR compared to Nvidia's? (Or the other way around...) I don't think so.

(Do you still think Pascal will be 1.2GHz?)
 
So basically AMD has a bunch of better tech, but you have a hard time seeing what makes it so special?

VR SLI is a joke; it has been promised by Nvidia for 18 months and has yet to be delivered, and it's nowhere near affinity multi-GPU. Nvidia has nothing like latest data latch, and their timewarp is bad due to lacking fine-grained preemption. This matters because, believe it or not, VR is not simply "all about rendering triangles." Latency is even more important than throughput; that's why Woz was feeling dizzy at GTC on Mars with Titan X. ;)

You may also be interested in this video, seeing as it's not AMD-sponsored (around 1hr 20 mins) -


You forgot TrueAudio as well; I'll let you think about how useful that is in VR. I don't recall suggesting Pascal would be 1.2GHz.
 
Simply put: what is the current TAM of VR? What is the possible increase in TAM for VR in the coming generation? And what is the time-to-release for VR games/applications? Does that coincide with the necessity for AMD to address this market in the next two quarters? Is the timing right, or is it too soon? Is this the same marketing strategy AMD has taken in the past, delivering products to market too soon, or is their timing getting better?

We have seen AMD/ATi do this time and time again: they know where the future is, but timing matters most. You can't forget about the present and focus on the future, otherwise you get eaten alive. Personally I don't think it's the right time. It's not about the hardware either; it's about the products that need the hardware. Developers don't need the consumer hardware to create the software, so to speak: they can use dual cards to get the horsepower they need, they can use top-end cards that the software can later scale down from, etc.

So what is this marketing about? Yes, it's marketing, so why are they saying it? There has to be a reason, and it's always a selfish reason, lol. They want our money...

There is a message that will drive to a point of action, a sale, so that reason has to be logical to the end consumer. If it's good, then the product is viable. I don't see VR games coming out in the next two quarters that will necessitate that kind of increase in horsepower... I might be mistaken, so please correct me if I'm wrong...
 
There is a message that will drive to a point of action, a sale, so that reason has to be logical to the end consumer. If it's good, then the product is viable. I don't see VR games coming out in the next two quarters that will necessitate that kind of increase in horsepower... I might be mistaken, so please correct me if I'm wrong...
Of course you're right!
AMD has a terrible history of being too forward-looking and forgetting what matters most to customers: immediate, present benefit.
Marketing-wise, it's really Marketing 101, and their lackluster market share proves the point. When Nvidia saw how flawed 20nm was, they designed Maxwell 2 with perf/watt in mind. The right action at the right time.
And now, I really don't know what is going on with AMD and Polaris. They showed working silicon in December and they plan to release it in June? Why so long? Did they face last-minute problems? Yields at GF? Broken hardware? Or was the December silicon simply really early, fresh out of the oven?
Anyway, this waiting game favors Nvidia, and I'm afraid that Polaris won't be the success AMD is hoping for...
 
So basically AMD has a bunch of better tech, but you have a hard time seeing what makes it so special?
Spot on! I have a very hard time seeing what makes it so special.

Is AMD currently ahead in VR? According to them and you, they are, as you would expect. But even if they are at this very moment, the question is whether being ahead is a temporary thing or a long-lasting competitive advantage. That's why I'm focusing on hardware features and not software. Software can be fixed over time. AMD used to suck at CrossFire, but was able to fix it eventually. AMD used to be seriously behind with their variable refresh rate driver support. They fixed most of that eventually as well. Nvidia initially solved their lack of Eyefinity (a hardware advantage) in software. If VR really picks up (not a given at all), whoever is behind will give it the right priority and try to fix it. Simple.

So, once again: I originally asked whether there is anything specific in Pascal, or in AMD's hardware, that makes them fundamentally better for VR. It's much harder for software to be a fundamental advantage.

VR SLI is a joke; it has been promised by Nvidia for 18 months and has yet to be delivered, and it's nowhere near affinity multi-GPU.
Software feature.

Nvidia has nothing like latest data latch and ...
Software feature.

... their timewarp is bad due to lacking fine-grained preemption.
With GP100 having fine-grained compute preemption, there's a good chance that graphics preemption will be present as well. If so, then no fundamental benefit here either. So let's defer this one for later...

This matters because, believe it or not, VR is not simply "all about rendering triangles." Latency is even more important than throughput; that's why Woz was feeling dizzy at GTC on Mars with Titan X. ;)
Given the same amount of work, latency is also defined by throughput. Nvidia has multi-resolution shading, whereby geometry gets transformed only once, yet can be used for multiple viewports. For lens systems like those in the Rift and the Vive, this can accelerate pixel shading by 1.3x to 2x. Given the need to render at 90 fps, doesn't that sound like a useful feature to have?
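To put rough numbers on that 1.3x to 2x claim, here's a back-of-envelope sketch with made-up figures (not the actual Rift/Vive render target sizes or Nvidia's real tile layout and scale factors):

```python
# Back-of-envelope estimate of multi-res shading savings. All numbers are
# assumptions for illustration, not actual HMD render target sizes or
# Nvidia's real tile layout/scale factors.
eye_w, eye_h = 1500, 1700   # assumed per-eye render target
center_frac = 0.6           # assumed: central 60% of each axis kept at full resolution
periph_scale = 0.5          # assumed: peripheral tiles shaded at half resolution per axis

full_px = eye_w * eye_h
center_px = (eye_w * center_frac) * (eye_h * center_frac)
periph_px = (full_px - center_px) * periph_scale ** 2

multires_px = center_px + periph_px
print(f"pixel shading work: {multires_px / full_px:.2f}x of full resolution")
print(f"speedup on pixel-shading-bound passes: {full_px / multires_px:.2f}x")
```

With those assumed numbers the peripheral tiles drop to a quarter of their pixels, and the overall pixel shading work lands around half, i.e. roughly a 1.9x speedup on pixel-bound passes.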

So in conclusion: AMD has fine-grained preemption, which Nvidia may have as well. AMD has TrueAudio. Nvidia has multi-resolution shading.

I don't think that's a strong case for having a fundamental advantage. YMMV.

I don't recall suggesting Pascal would be 1.2GHz.
"There are for sure reasons to believe that Nvidia may not increase on Maxwell's clock speeds, or not by much. 1.2GHz is the magic number I believe."
 
With GP100 having fine-grained compute preemption, there's a good chance that graphics preemption will be present as well.
As long as compute can preempt graphics, that's already all that's needed.

Given the need to render at 90 fps, doesn't that sound like a useful feature to have?
90fps is an idealized goal, but it's actually sufficient if you are able to perform the timewarp at a stable, reliably refresh-synced 90fps. The latency between head movement and screen translation needs to be low in order to avoid simulator sickness - input lag for the remaining inputs isn't so bad, and neither is a lower framerate for the actual content.

And for that, it's really just the lack of accessible / working preemption on pre-Pascal which got devs to curse Nvidia. So if they fix that, there is IMHO nothing that would render Pascal unsuitable for VR any longer.
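To put the point about latency vs. content framerate into rough numbers, a simplified sketch (it ignores tracking latency, display persistence and scanout details, so treat the figures as illustrative only):

```python
# Simplified motion-to-photon estimate for head movement (illustrative only;
# ignores tracking/sensor latency, display persistence and scanout time).
refresh_hz = 90
frame_ms = 1000 / refresh_hz   # ~11.1 ms per refresh

# Without a late warp: the pose is sampled when rendering starts, and the frame
# is then displayed at the following refresh, so it is roughly two refreshes old.
no_warp_ms = 2 * frame_ms

# With async timewarp: the pose is re-sampled just before scanout, so head-motion
# latency is roughly one refresh, independent of how old the rendered content is.
with_warp_ms = 1 * frame_ms

print(f"no late warp  : ~{no_warp_ms:.0f} ms head-motion latency")
print(f"async timewarp: ~{with_warp_ms:.0f} ms, even if content runs below {refresh_hz} fps")
```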
 
Are users with Rift and Vive cursing their Nvidia graphics? Or throwing up all the time? Wouldn't we have noticed by now that there's a problem there?
 
As long as compute can preempt graphics, that's already all that's needed.
What specifically does compute have to do with VR that it doesn't have with the rest of rendering in modern games? Rendering stuff is a graphics task; async time warp for VR is a graphics task.
 
Could one of you guys take a minute to explain "async time warp" please? I've seen this crop up a couple times now, and the term is new to me...

Thank you in advance! :)
 
90fps is an idealized goal, but it's actually sufficient if you are able to perform the timewarp at a stable, reliably refresh synced 90fps. The latency between head movement and screen translation needs to be low in order to avoid simulator sickness - input lag for the remaining inputs isn't so bad, and neither is a lower framerate for the actual content.
I have been wondering why they didn't choose (vsync-locked) 60 fps rendering with 120 Hz timewarp + a 120 Hz display. That results in lower head-movement latency and a 33% lower GPU requirement. 60 fps is enough for most animations, and 120 Hz timewarp handles the head movement just fine. Sony's VR display is 120 Hz capable; I wonder whether this is a hardware limitation (a 120 Hz display is expensive and/or has some quality issues), or whether the 60 fps animation refresh wasn't good enough for Oculus.

Most games generate a per-pixel motion vector buffer (motion blur, temporal supersampling). The timewarp shader could do motion reprojection in addition to the head reprojection, using the motion vector buffer. At 120 Hz, the reprojection errors should be very small. Just wondering whether someone has tried this; I would be interested to see the results.
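For what it's worth, the 33% figure is just the frame-rate ratio; a trivial sketch of the arithmetic (assuming per-frame rendering cost stays constant and ignoring the cost of the warp pass itself):

```python
# Shading work per second scales with the content frame rate if per-frame cost is
# constant (illustrative; ignores the timewarp pass itself, which is cheap
# compared to rendering the scene).
native = 90     # full scene renders per second at native 90 Hz
proposed = 60   # scene renders per second with 60 fps content + 120 Hz timewarp

print(f"GPU requirement: {1 - proposed / native:.0%} lower")   # ~33% lower
print(f"head-pose update interval: {1000 / 120:.1f} ms at 120 Hz "
      f"vs {1000 / 90:.1f} ms at 90 Hz")
```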
 
What specifically does compute have to do with VR that it doesn't have with the rest of rendering in modern games? Rendering stuff is a graphics task; async time warp for VR is a graphics task.
It's not specifically compute, but the need for the capability to run the timewarp shader at every v-sync, even if the rest of the render pipeline doesn't meet the target framerate, and all of that preferably without needing to fall back onto classic v-sync.
It doesn't matter whether you are doing it via a draw call with proxy geometry or a dispatch call, as long as you can get the timing right. It's much easier if you have fine-grained preemption available in at least some form. Yes, it's possible with more coarse-grained preemption as well, but then you are subject to the old "draw calls should complete ASAP" limitation.
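A rough illustration of why draw-call-granularity preemption is risky for the warp deadline (completely made-up timings, just to show the shape of the problem):

```python
# Why coarse-grained (draw-call level) preemption can miss the timewarp deadline.
# All timings below are made-up, purely illustrative numbers.
refresh_ms = 1000 / 90    # ~11.1 ms budget per refresh at 90 Hz
timewarp_ms = 1.0         # assumed cost of the warp pass itself
longest_draw_ms = 5.0     # assumed worst-case single draw call in the scene

# Fine-grained preemption: the warp can start almost immediately.
# Draw-call level preemption: the warp may have to wait for the longest draw to finish.
for name, start_delay_ms in [("fine-grained", 0.1), ("draw-call level", longest_draw_ms)]:
    slack = refresh_ms - (start_delay_ms + timewarp_ms)
    print(f"{name:16s}: warp done with {slack:.1f} ms to spare before scanout")
```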

Could one of you guys take a minute to explain "async time warp" please? I've seen this crop up a couple times now, and the term is new to me...

Thank you in advance! :)
You need low latency between head movement and the translation of the picture.
So you take the rendered frame (which might already be a few ms old, especially the input it is based on), and right prior to scanout you read the current head position and apply only the translation and the chromatic aberration correction at the last possible moment. So the displayed frame matches the current head position, even if that position wasn't known at the time of actual rendering. "Async" because it's not synchronized with the regular rendering, but with the screen refresh instead. The same frame may be reused multiple times this way if the framerate is too low.
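A minimal sketch of that loop in Python (the callback names are hypothetical, not any real VR runtime API), just to show that the pose is sampled per refresh rather than per rendered frame:

```python
import time

def async_timewarp_loop(get_latest_rendered_frame, read_head_pose,
                        warp_and_present, refresh_hz=90):
    """Illustrative async timewarp loop (hypothetical callbacks, not a real VR API).

    Runs independently of the game's render loop: every refresh it takes the most
    recent completed frame, re-samples the head pose as late as possible, and
    presents a re-projected version of that frame."""
    refresh_interval = 1.0 / refresh_hz
    next_vsync = time.perf_counter() + refresh_interval
    while True:
        frame = get_latest_rendered_frame()   # may return the same frame several refreshes in a row
        pose = read_head_pose()               # sampled just before scanout, not at render time
        warp_and_present(frame, pose)         # reproject + lens/chromatic correction, then present
        # Sleep until the next refresh; a real implementation would sync to v-sync instead.
        time.sleep(max(0.0, next_vsync - time.perf_counter()))
        next_vsync += refresh_interval
```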
 
I have been wondering why they didn't choose (vsync-locked) 60 fps rendering with 120 Hz timewarp + a 120 Hz display. That results in lower head-movement latency and a 33% lower GPU requirement. 60 fps is enough for most animations, and 120 Hz timewarp would handle the head movement. Sony's VR display is 120 Hz capable; I wonder whether this is a hardware limitation (a 120 Hz display is expensive and/or has some quality issues), or whether the 60 fps animation refresh wasn't good enough.
No 120 Hz displays in that form factor were available before. I'm curious whether Sony's solution isn't actually doing what you are describing.

Most games generate a per-pixel motion vector buffer (motion blur, temporal supersampling). The timewarp shader could do motion reprojection in addition to the head reprojection, using the motion vector buffer. At 120 Hz, the reprojection errors should be very small. Just wondering whether someone has tried this; I would be interested to see the results.
Can't remember where I read it, but there was an article on that. Long story short, it's the same issue as with parallax correction: you don't know what's behind an object you reprojected, and you can't fill that gap in without inevitably producing artifacts. Artifacts which the human eye is great at detecting.

Isn't async time warp a compute shader (not a "graphics" task)?
Yes, it is. At least that's one possible implementation, as it technically doesn't need geometry of any sort.
 