AMD: Speculation, Rumors, and Discussion (Archive)

Honestly, I think the bigger SKUs are all based on Vega, not Polaris... which is somewhat funny, because I'm pretty sure a SKU "in between" (like GP104 is) could be made, maybe with 190-200W and a 1600MHz clock.
We were just speculating on the reasons why AMD would choose not to show a "bigger" Polaris chip in their presentation ;)
 
Not really. If you do that, by the time your card actually reaches availability Nvidia could just adjust their price and negate that "same performance, lower price" angle. But if you reveal your card after 1070 availability, then you can compare price vs. performance in today's market.

They can do that afterwards too; just issue a statement saying they're now giving rebates. It doesn't detract from the 1070 at all.

You are trying to make the case that this RX 480 line is like the 48xx line. I don't put much faith in that, because of how AMD has been marketing these cards. There was a fast response with the 48xx line, if I remember correctly, and those dies were literally half the size of the competing cards, or close to it.
 
One point I haven't seen mentioned here: in theory, FinFETs achieve higher frequencies at higher temperatures, as opposed to traditional planar transistors. They are sort of weird in that way, so higher operating temps combined with the temperature stability of a water cooler could be worth exploring. A chip held at 80C (or more) while dissipating a lot of heat may produce very interesting results.
 
If the RX 480 is the full Polaris 10, then maybe the $300 card is a dual cut-down Polaris 10? Dreams are okay, right?

If dreams are what you wish for, the $300 card will be the RX 480 All-in-Wonder Pro. A new and improved product for the new Twitch-streaming age. :)
I would actually want this: built-in HDMI pass-through so that I can record HQ footage from my PS4.

A $300 card that is 10-15% faster would be right on the heels of the 980 Ti, like the earlier rumors suggested.
I would not want to pay $100 more for just 10-15% faster performance. If all goes well, the RX 480 will be able to be overclocked that much.
 
For the record, in this vid Raja kind of slips out 'less than 150W'.

I'm inclined to doubt they are all that far under.
Quoting 150W, when they could have said something more like 'just over 100W', just opens them up to looking bad in perf/W vs. the faster, also-150W 1070.
But I'd be happy to be wrong.
 
Nothing new. Just a general overview of the entire Computex announcements and speculation about EU pricing; journalists were not allowed to see the rendering settings for the game (it was rendered at 1080p, though).
 
That includes the memory controllers, and they're a good part of it. At any rate, a 4 GiByte version would also need the same number of chips (hopefully!), which should reduce possible power savings from memory size further. All in all, I doubt that that's a large factor.
 

I can't find a lot of breakdown data, but according to this, memory alone accounts for anything from 22 to 35% and the memory controller for 16 to 26%.

Surely there's some info listed on how much the actual memory modules consume?
 
I remembered wrong.
Here's what AMD measured on R9 290X (http://www.amd.com/Documents/High-Bandwidth-Memory-HBM.pdf)
"1. Testing conducted by AMD engineering on the AMD Radeon™ R9 290X GPU vs. an HBM-based device. Data obtained through isolated direct measurement of GDDR5 and HBM power delivery rails at full memory utilization. Power eciency calculated as GB/s of bandwidth delivered per watt of power consumed. AMD Radeon™ R9 290X (10.66 GB/s bandwidth per watt)"

Which equates to 30 watts for the memory chips alone.
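For anyone wondering where the ~30 W comes from, here's the back-of-the-envelope math. Note the 320 GB/s figure is my assumption for a reference R9 290X (512-bit bus, 5 Gbps effective GDDR5); AMD's footnote only states the GB/s-per-watt number:
[code]
# Rough check of the ~30 W GDDR5 figure quoted above.
# Assumption: reference R9 290X memory bandwidth of 320 GB/s (not stated in the AMD footnote).
bandwidth_gb_s = 320.0              # GB/s, assumed 290X reference spec
efficiency_gb_s_per_w = 10.66       # GB/s per watt, from AMD's footnote

gddr5_power_w = bandwidth_gb_s / efficiency_gb_s_per_w
print(f"Estimated GDDR5 memory power: {gddr5_power_w:.1f} W")  # ~30.0 W
[/code]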
 
AotS supports both DX11 and DX12. You obviously don't wanna use DX11 with AMD hardware.


It's obvious they don't wanna tell us how much under 150W it is. If they said it's just over 100W, well...

There are also 4 and 8GB versions. I think I've seen some power consumption numbers for 4GB and 8GB cards, and the difference was around 40W. If that's the case, a 4GB version could well be under 100W.

Can't find those power consumption charts unfortunately, but here's the quote from AT:

"The current GDDR5 power consumption situation is such that by AMD’s estimate 15-20% of Radeon R9 290X’s (250W TDP) power consumption is for memory."
Can't find any reviews of 2GB vs. 4GB or 4GB vs. 8GB of the same card with power consumption tests included, but for what it's worth, a factory-OC'd custom 4GB GTX 760 consumes only 11 watts more than a reference 2GB GTX 760 (http://www.legitreviews.com/gigabyte-geforce-gtx-760-4gb-video-card-review-2gb-4gb_129062/5)
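Putting rough numbers on the AT estimate and on the 40W 4GB/8GB gap mentioned earlier (just arithmetic on figures already in this thread, nothing measured):
[code]
# 15-20% of the R9 290X's 250W TDP, per the AnandTech quote above
tdp_290x_w = 250.0
mem_share_low, mem_share_high = 0.15, 0.20
print(f"290X memory power estimate: {tdp_290x_w * mem_share_low:.1f}-{tdp_290x_w * mem_share_high:.1f} W")  # 37.5-50.0 W

# If the 8GB RX 480 sits at (or under) the stated 150W and the 4GB/8GB delta really is ~40W,
# the 4GB card would land around 110W or lower (hypothetical, based only on the figures above).
board_power_8gb_w = 150.0   # upper bound from AMD's "<150W" statement
delta_4gb_vs_8gb_w = 40.0   # rumored difference mentioned above
print(f"Implied 4GB upper bound: {board_power_8gb_w - delta_4gb_vs_8gb_w:.0f} W")  # 110 W
[/code]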
 
Well, I think it is fair to ridicule anyone who uses the excuse that AotS does not look good as a reason for it to be excluded :)
You forgot to say Hitman Ch. 2 can't be used because it may not be working correctly with AMD, while it does with Nvidia.

Cheers
In the review you linked it doesn't work correctly on either one
 
Kontis is wrong there on two levels. Fermi's hardware scheduler wasn't anything like an ACE, and it wasn't cut for area but for power reasons.
Not to mention it makes no sense to compare a task scheduler (ACE), which decides when and on which CU to dispatch and execute a whole program, with an instruction scheduler (Fermi's), which decides when to schedule individual instructions within the same SM. It's funny when people think the lack of the latter, Fermi-like scheduler impacts DX12 support when it has absolutely nothing to do with it.
 
It seems some people do not realize that there are tons of different schedulers, work distributors, and other logic in a GPU. It's funny how some of them consistently misinterpret things just because they've heard somewhere that there is a lack of a "hardware scheduler".
 
In the review you linked it doesn't work correctly on either one
My response was moved to the other thread, but Hitman does perform better on Nvidia by a fair amount with Chapter 2.
So if you still want to debate why Hitman is not a good example of DX12, it's best to take it over there.
Cheers
 