AMD Vega 10, Vega 11, Vega 12 and Vega 20 Rumors and Discussion

Nothing, they've referred to GPUs as SoCs before too.

there is some distinction

6) Many argue that Vega is just a refined Polaris GPU, how would you respond to this?

My software team wishes this was true:)

Vega is both a new GPU architecture and also completely new SOC architecture. It's our first InfinityFabric GPU as well

He's not just calling the GPU an SoC; there is a difference between just a GPU architecture and an SoC architecture in this case.
 
How do you know Maxwell TBR just worked? It received performance upgrades consistently for a year. All of AMD's GPUs have performed better as drivers improved, and most of them ended up besting cards that were originally faster.

So your example in A sounds silly because no one would pitch it that way. They would say: buy our $1K card today and look forward to constant performance improvements as software and drivers continue to take advantage of new features.


I will hold off for gaming Vega before making my judgement on the card. I've had plenty of AMD cards that have aged extremely well compared to Nvidia cards.

It worked because Maxwell was competitive from day one. Sure, NV made improvements and kept on working on it, but in the end the performance was good from day one. And I am tired of the AMD approach; it is not helping the company to have GPUs taking the performance crown when they are near their EoL.
 

Well that bodes well. Guess that's what all the "it's for compute work!" advertising was for.

The TDP going from 225 W to 300 W can be explained as easily as the sudden bump in Linpack performance from 12.5 teraflops to 13.1: just crank the clocks up as high as they'll go. You'll get a steep decrease in performance per watt thanks to the FinFETs, but considering how starved the market is right now for high-end GPUs, they probably thought it was worth it to just go all out.
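Assuming Vega FE's reported 4096 shaders (an assumption on my part, not something stated above), the core clock implied by each teraflops figure is easy to back out, which makes the point concrete: a tiny clock bump against a large TDP increase.

```python
# Back-of-envelope check: FP32 rate = shader count * 2 FLOPs (FMA) * clock.
# 4096 shaders is Vega FE's reported configuration; everything below is
# derived from that assumption.
SHADERS = 4096

def tflops(clock_ghz, shaders=SHADERS):
    """FP32 throughput in TFLOPS at a given core clock."""
    return shaders * 2 * clock_ghz / 1000.0

def clock_for(tf, shaders=SHADERS):
    """Core clock (GHz) needed to hit a given TFLOPS figure."""
    return tf * 1000.0 / (shaders * 2)

low = clock_for(12.5)   # ~1.53 GHz for 12.5 TFLOPS
high = clock_for(13.1)  # ~1.60 GHz for 13.1 TFLOPS
# That is only a ~5% clock bump, against a TDP going from 225 W to 300 W (+33%).
```

So if the TDP jump really is just clocks, most of the extra power buys very little extra throughput, which matches the perf/watt complaint above.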

Trying to extrapolate what any RX Vega benchmark would be is difficult. The straightforward thing to do would be to assume the average compute performance increase over the, for want of a better word, RX 480 (single-GPU Pro Duo). But the benchmarks are all over the place, and with what appears to be a new raster pipeline, any assumed performance gains will be all over the place as well. I also don't envy people considering it over buying a Titan Xp, which is the card it seems to be placed against competitively. The two cards swap benchmark dominance all over the place, usually by quite a lot, though Vega FE comes out ahead in more of the benchmarks.

Ah well, wait and see I suppose for gaming performance.
 
It would seem that the heat spreader is a bit larger than the actual die, which can be inferred to be 484 mm² from Raja's latest tweet. This comes on the heels of PCPer lowering their estimate to 512 mm².
 
It worked because Maxwell was competitive from day one. Sure, NV made improvements and kept on working on it, but in the end the performance was good from day one. And I am tired of the AMD approach; it is not helping the company to have GPUs taking the performance crown when they are near their EoL.
That's faulty logic. Maxwell could have been competitive with a faulty TBR, or non-competitive with a fully functional TBR.
 
It would seem that the heat spreader is a bit larger than the actual die, which can be inferred to be 484 mm² from Raja's latest tweet. This comes on the heels of PCPer lowering their estimate to 512 mm².

There is a heat spreader on Vega? I didn't see one in the teardown. Isn't it directly the GPU die/core?
 
The binning rasterizer in AMD's patent is fine with batching and binning primitives with transparency.
And it should be equally fine with a UAV write. That's the point. Binning should be visible in that test.
The triangle bin test's having every pixel read and write a common value seems like it would be a dependence of some sort, and that would potentially meet a close batch condition.
A UAV atomic is not a dependency in that sense, as order isn't guaranteed, just the execution (which disables any HSR that might have happened otherwise). As said before, it could only constitute a false dependency which is caught erroneously.
The triangle test may place a secondary emphasis on HSR since it is counting on the opacity of the triangles to help demonstrate the behavior
As said, HSR can't work in that test because of the UAV access.
However, I may have conflated that consideration which is specific to the test with the more general behavior of the batching step. Even if the triangles were transparent, the question remains how the batching would proceed, and the sequence of binning from the batch and pulling in the next one.
The patent is written to indicate that the rasterizer iteratively processes through a batch across all bins, but whether that serializes the whole process is not clear. A non-contrived scene that didn't have perfectly overlapping triangles might include some other kind of distribution or give the tiling hardware a less rigid sequence of bins to work with.
Most likely it will simply work on a few tiles in parallel (as NV GPUs are doing too). And transparencies shouldn't affect the batching or binning (maybe excluding the case of ROV use), just disable HSR for those.
 
there is some distinction
I know there is (at least in the sense SoCs are usually thought of).
AMD's logic is that the GPU portion is the GPU, while the whole chip (which includes the UVD, VCE and so forth) is the SoC. It's clear from this quote from Koduri:
Vega is both a new GPU architecture and also completely new SOC architecture. It’s our first Infinity Fabric GPU as well
AMD has called their GPUs SoCs at least since Polaris
 
The closest perfect square is 484, as Tofu says; if it is 19.5 mm wide it should be about 24.8 mm long, but a small difference in measurement could be due to protective coatings, residual thermal paste or (if it is soldered) solder alloy, and so on.
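The geometry is a quick sanity check (plain arithmetic; the only input is the 484 mm² figure and the 19.5 mm measurement mentioned above):

```python
import math

AREA = 484.0  # mm^2, die area inferred from Raja's tweet

# If the die is a perfect square:
side = math.sqrt(AREA)   # 22.0 mm per side

# If one side really measured 19.5 mm, the other side would have to be:
other = AREA / 19.5      # ~24.8 mm, i.e. clearly not square
```

So a 19.5 mm width and a 484 mm² square die can't both be right, which is why the measurement discrepancy keeps coming up below.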
 
Damien is at AMD now? While I'm happy for him, still, aww. :(
We always throw everything that is not engineering on the marketing heap, but there are people who write PR pieces and those who fly to Scandinavia to deliver a board to Sebbbi. And probably a whole bunch of jobs in between.

It'd be fun to read about the day to day, month to month specifics of the jobs of those former editors. Anand at Apple is probably out of the question, but Dave, Scott, Rys, Damien, ... ?

Don't underestimate how much power rasterizers and ROPs can suck up. If you're just doing compute, the power profile of a GPU is very different.
I would be very interested in reading more about that.
 
Raja confirmed his 484 mm² to actually be square -> 22 mm x 22 mm

Now, how can you measure 19.8 mm if it's actually 22 mm?

Maybe the magic 2.2 mm will show up at RX Vega launch day? Together with the tile-based renderer :)
One wonders once again about the upside of being so cagey about this, and letting rumors gain the upper hand...

Here's the alternative: "it's 484 mm²". Do this at launch, or whenever you show the die the first time, and you're done.
 
One wonders once again about the upside of being so cagey about this, and letting rumors gain the upper hand...

Here's the alternative: "it's 484 mm²". Do this at launch, or whenever you show the die the first time, and you're done.

I guess they didn't want to give too much information to the competition, but now that the card is in the wild, I'm sure NVIDIA knows exactly how big the die is, so it does seem strange.
 
I'm starting to think the quote from Reddit is right, and they're indeed running "Fiji drivers" without any of the new whizbang. GamersNexus's fresh Doom benchmarks show pretty much spot-on the same performance as Vega had late last year with actual Fiji drivers.
 
I'm starting to think the quote from Reddit is right, and they're indeed running "Fiji drivers" without any of the new whizbang. GamersNexus's fresh Doom benchmarks show pretty much spot-on the same performance as Vega had late last year with actual Fiji drivers.
But how is it possible that drivers aren't ready after such a long time?
If Vega FE still uses Fiji drivers after a year of delays then I have no confidence that drivers will be ready for RX Vega's launch. They would probably still suck for months after RX Vega is launched.
 
But how is it possible that drivers aren't ready after such a long time?
If Vega FE still uses Fiji drivers after a year of delays then I have no confidence that drivers will be ready for RX Vega's launch. They would probably still suck for months after RX Vega is launched.
How is Vega late by a year? AMD never said or even suggested it would come in 2016, let alone summer 2016.
 