AMD: Speculation, Rumors, and Discussion (Archive)

Well, wasn't GCN made with consoles in mind? If so, it makes sense that it would be designed for the future, in a sense, given it would need to last for 5+ years.
GCN was coming with or without consoles
 
Well, it's always nice to design for the future, but future things will be different :). Things evolve; companies take different directions based on markets and new data. Gambling on the future and forgetting the present is what made a mess of AMD's GPU marketshare.

This has happened before (just not as significantly where marketshare is concerned): the X1800 XT was a good chip but didn't manage to outsell the 7800 from nV, and the X1900 XT was an excellent chip but came too early for shader performance in games to show its true potential; by the time it could, the G80 was available.
 
Yep. I bought my HD7950 expecting it to last a long time (3GB, and the part was later rebranded as the 280), and it sure has. My primary GPU was a GTX670, which I upgraded to a 970 quite a while ago. Pretty much matches up with what you guys are saying.
 
GCN was coming with or without consoles

I've heard it was designed, to some degree, for Sony and Microsoft. I don't know if it's true. Maybe AMD designed it that way because they thought it would help them get the console deals.
 
I've heard it was designed, to some degree, for Sony and Microsoft. I don't know if it's true. Maybe AMD designed it that way because they thought it would help them get the console deals.
Or maybe it was one of the benefits of the teams being combined into one group, which benefited from expertise more associated with CPUs (a fusion-integration of expertise from both sides, not just GPU). Not sure how that will work with the teams being split again now.
Cheers
 
AMD's marketing should use the fact that their GPUs get better with age to command a higher price, instead of undercutting their performance equivalents from Nvidia. With Kepler falling behind GCN 1.0 cards, you can see the grumbling on forums, and people projecting the same for Maxwell in the future.

And it wouldn't hurt AMD to get their hardware features implemented even when the game is Nvidia-sponsored.

http://techreport.com/news/14707/ubisoft-comments-on-assassin-creed-dx10-1-controversy-updated

Unlike when they don't even get it implemented in a game they sponsor:

Weirdly, though there is a marketing deal on Ashes with AMD, they never did ask us to use async compute. Since it was part of D3D12, we just decided to give it a whirl.

www.overclock.net/t/1575638/wccftech-nano-fury-vs-titan-x-fable-legends-dx12-benchmark/110#post_24475280
 
I really should be posting this in the AMD Execution thread, but how can you expect AMD to do that when they don't have the marketshare to push devs to their side? And a direct result of marketshare is cash...

It doesn't matter how they market it now; it's the initial launch of the cards that matters. Upgrade cycles for individuals, OEMs, etc. tend to be directly linked to product reviews at launch or soon after, not two or three quarters down the road. And in this case we are looking at three quarters since the launch of Maxwell 2, plus two more quarters on top of that, before the "advantage" of older AMD hardware shows.

Oxide gave it a whirl but didn't do due diligence on how their code ran on other hardware, and by doing so ran into erroneous conclusions? That sounds like too many mistakes for a team of fairly senior devs, don't you think?
 
Name of next arch is Polaris?

http://www.hwbattle.com/data/editor/1512/92d129551b4dd2b22676276d4111a08d_1451478310_7063.jpg

http://www.hwbattle.com/bbs/board.php?bo_table=news&wr_id=15345

"Our guiding lights is to power every pixel on every device efficiently. Stars are the most efficient photon generators of our universe. Their efficiency is the inspiration for every pixel we generate."

Polaris (north star), guiding star. "Guiding" arch, " Stars are the most efficient photon generators", most efficient/leading GPU arch/whatever marketing things you can come up with.
 
Name of next arch is Polaris?

http://www.hwbattle.com/data/editor/1512/92d129551b4dd2b22676276d4111a08d_1451478310_7063.jpg

http://www.hwbattle.com/bbs/board.php?bo_table=news&wr_id=15345

"Our guiding lights is to power every pixel on every device efficiently. Stars are the most efficient photon generators of our universe. Their efficiency is the inspiration for every pixel we generate."

Polaris (north star), guiding star. "Guiding" arch, " Stars are the most efficient photon generators", most efficient/leading GPU arch/whatever marketing things you can come up with.

That's poetic, I suppose, but perhaps someone with an astrophysics degree can confirm my suspicion that hydrogen fusion's tendency to throw off neutrinos, and a star's habit of throwing out vast amounts of plasma, mean that a star actually produces more non-photonic output than any non-nuclear light source does.

Don't know about implying their inspiration for efficiency is a supermassive fusion reactor. Polaris is a multiple star, with the most notable member being a supergiant 2,500 times as luminous as the Sun.
(https://en.wikipedia.org/wiki/Polaris)
In the case of the Sun, I find that I cannot fit a device of its volume and a 3.846×10^26 W power supply in virtually any small form factor case I have encountered so far.
 
OT: To your last point: make your computational density high enough, and your "ultimate laptop" (of 1 liter volume) might have a power output close to that of the entire Sun:

Even if it achieves such an error rate, it must have an energy throughput (free energy in and thermal energy out) of 4.04×10^26 watts — turning over its own rest mass energy of mc^2 ≈ 10^17 joules in a nanosecond!

http://arxiv.org/pdf/quant-ph/9908043.pdf
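
The order of magnitude checks out, for what it's worth. A quick sanity check in Python (assuming the paper's 1 kg laptop; the exact 4.04×10^26 W figure comes from the error-rate argument in the paper itself):

c = 2.998e8    # speed of light, m/s
m = 1.0        # the paper's 1 kg "ultimate laptop"
rest_energy = m * c**2           # mc^2, roughly 9.0e16 J, i.e. ~1e17 J
throughput = rest_energy / 1e-9  # that energy turned over every nanosecond

print(f"mc^2       = {rest_energy:.2e} J")
print(f"throughput = {throughput:.2e} W")  # ~9.0e25 W, same order as the paper's 4.04e26 W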
 
In defense of using a star, the fusion core of the Sun has a very enviable power density of 276.5 W/m^3, or "similar to an active compost heap". Natural fusion is a brute-force affair.
It's the photon-versus-everything-else ratio for a star overall that pads things out, since the more standard mechanisms all eventually devolve to thermal radiation, without counting things like neutrinos, outflowing gas, or, in a giant star system, possibly gravitational waves.
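
And the averages come out even lower than that. A rough check (assuming the quoted 276.5 W/m^3 is the peak value at the very centre of the core, and averaging the Sun's total output over its whole volume instead):

import math

L_sun = 3.846e26  # total solar output in W, the same figure quoted above
R_sun = 6.957e8   # solar radius in m

V_sun = (4.0 / 3.0) * math.pi * R_sun**3  # ~1.4e27 m^3
print(f"mean power density = {L_sun / V_sun:.2f} W/m^3")  # ~0.27 W/m^3

So averaged over the whole star it's around 0.27 W/m^3, well below the compost heap; the 276.5 W/m^3 figure only holds at the very centre.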
 
Lately, AMD has been talking up HDR displays for the future. While blacks are not its specialty, Polaris certainly reaches brightness levels somewhat above those of today's LCDs.
 
Name of next arch is Polaris?

http://www.hwbattle.com/data/editor/1512/92d129551b4dd2b22676276d4111a08d_1451478310_7063.jpg

http://www.hwbattle.com/bbs/board.php?bo_table=news&wr_id=15345

"Our guiding lights is to power every pixel on every device efficiently. Stars are the most efficient photon generators of our universe. Their efficiency is the inspiration for every pixel we generate."

Polaris (north star), guiding star. "Guiding" arch, " Stars are the most efficient photon generators", most efficient/leading GPU arch/whatever marketing things you can come up with.


Yeah, this came up a month ago: https://forum.beyond3d.com/posts/1883557/

Ryan Smith said:
Since AMD has all of the subtlety of a boot to the head, let the speculation on Polaris begin.

"Starry skies in Sonoma next week. Should have an excellent view of Polaris"

"Polaris is 2.5 times brighter today than when Ptolemy observed it in 169 A.D"


https://twitter.com/GChip/status/669637153748484096
 
If this is a new GPU architecture, then it's quite unlike AMD to be giving presentations of it so early, or perhaps it's not so early.
 
If this is a new GPU architecture, then it's quite unlike AMD to be giving presentations of it so early, or perhaps it's not so early.

Without saying whether it's true or not... The first complete public introduction of GCN was made in June 2011. When I say complete, you had everything (from microcode to cache sizes, the type of architecture, even the resulting code); we knew exactly what the architecture would be something like six months before the first chip was released.

I really doubt we'll see such an in-depth presentation of a uarch from AMD again for a while anyway... so, the name...
 
AMD has historically not been as vocal/open about its future plans in the way we've grown accustomed to Nvidia being.

Since forming/taking over RTG, Raja has sought to change that. Anandtech's coverage of their visual roadmap for 2016 was yet another indication of this. In this vein, it would make sense for Raja and RTG to spell out in more concrete terms what to expect from Polaris, beyond the snippets we've heard from conference calls and loose talk/rumors.

Whether or not they decide to do it is an open question, but I'd be surprised if it was at CES, considering the GPU industry has moved towards later dates in the year for GPU uarch talks. CES these days is more about autonomous vehicles, smartphones, IoT, drones, and things like that. It would still be fun if AMD decided to disclose further information at CES.
 
AMD has historically not been as vocal/open about its future plans in the way we've grown accustomed to Nvidia being.

Since forming/taking over RTG, Raja has sought to change that. Anandtech's coverage of their visual roadmap for 2016 was yet another indication of this. In this vein, it would make sense for Raja and RTG to spell out in more concrete terms what to expect from Polaris, beyond the snippets we've heard from conference calls and loose talk/rumors.

Whether or not they decide to do it is an open question, but I'd be surprised if it was at CES, considering the GPU industry has moved towards later dates in the year for GPU uarch talks. CES these days is more about autonomous vehicles, smartphones, IoT, drones, and things like that. It would still be fun if AMD decided to disclose further information at CES.

Nvidia have always been really vocal, but in a marketing way: doing their plots and roadmaps with graphs and names, with no details on the hardware, no real details at all.

On the other hand, when AMD has been vocal, as at the AFDS in June 2011 (the AMD Fusion Developer Summit), they showed incredibly detailed information, more than we were used to from any company, even at hardware release.

Slides from AFDS 2011 introducing the GCN architecture (six months before the release of GCN GPUs): http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf

That said, I also no longer see CES as a venue for that kind of presentation, for the exact same reasons you cite.
 
If this is a new GPU architecture, then it's quite unlike AMD to be giving presentations of it so early, or perhaps it's not so early.
It's the next evolution of GCN, not a completely new architecture (earlier slides suggest there will be bigger changes than GCN 1-3 saw, though).
 
A new rasterizer and tessellation engine, please. I really do not care if they end up within +/- 10% of Pascal GPUs on TDP...

Those have been quite improved with Fiji. High tessellation factors are still bottlenecked, but there's very little reason, now or in the foreseeable future, to change that in a really significant way. Tessellation needs a lot of other software and hardware support before you're going to see practical reasons for, say, a 32x tessellation factor. Perhaps just moving to a wider geometry front end would be beneficial, but there's not a huge need to change the design yet again.

Beyond that, relieving register pressure, increasing single precision compute performance (a 980 Ti can beat it in many tests despite the Fury X's theoretical advantage) and getting better performance per watt would all be priorities. Fury X was simultaneously TDP and die size limited. The die size limit is surely fixed by the move to a new node, but a new series of GPUs on that node could quickly ramp back up to being TDP limited if the architecture isn't made more efficient.
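
The theoretical gap is easy to put numbers on, by the way. A back-of-envelope sketch (shader counts and reference clocks from the shipping cards; real boost behaviour varies, so treat the figures as approximate):

# Peak FP32 throughput: shader count x 2 FLOPs per FMA x clock in GHz, in TFLOPS
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(f"Fury X: {fp32_tflops(4096, 1.05):.1f} TFLOPS")  # ~8.6 TFLOPS
print(f"980 Ti: {fp32_tflops(2816, 1.00):.1f} TFLOPS")  # ~5.6 TFLOPS at base clock

Roughly a 50% paper advantage for Fury X, yet the 980 Ti still wins many compute tests in practice, which is exactly the occupancy and register pressure problem described above.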
 