AMD: Speculation, Rumors, and Discussion (Archive)

Discussion in 'Architecture and Products' started by iMacmatician, Mar 30, 2015.

Thread Status:
Not open for further replies.
  1. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,166
    Likes Received:
    1,836
    Location:
    Finland
    GCN was coming with or without consoles
     
  2. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
Well, it's always nice to design for the future, but the future will be different :) — things evolve, and companies take different directions based on markets and new data. Gambling on the future and forgetting the present is what made a mess of AMD's GPU market share.

This has happened before (just not as significantly in market-share terms): the X1800 XT was a good chip but didn't manage to outsell the 7800 from NV, and the X1900 XT was an excellent chip but again arrived too early for shader performance in games to show its true potential; by that point the G80 was available.
     
    pharma and homerdog like this.
  3. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,133
    Likes Received:
    905
    Location:
    still camping with a mauler
    Yep. I bought my HD7950 expecting it to last a long time (3GB, part was being rebranded as the 280), and it sure has. My primary GPU was a GTX670 which I upgraded to a 970 quite a while ago. Pretty much matches up with what you guys are saying.
     
  4. dogen

    Regular Newcomer

    Joined:
    Oct 27, 2014
    Messages:
    335
    Likes Received:
    259
I've heard it was designed, to some degree, for Sony and Microsoft. I don't know if it's true. Maybe AMD designed it that way because they thought it would help them get the console deals.
     
  5. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
Or maybe it was one of the benefits of the teams being combined into one group: it benefited from expertise more associated with CPUs (a fusion/integration of expertise from both sides, not just GPU). Not sure how that will work with the teams being split again now.
    Cheers
     
  6. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
AMD's marketing should use the fact that their GPUs get better with age: cards that are merely equal now should fetch a higher price, rather than being priced below their performance equivalents from Nvidia. With Kepler falling behind GCN 1.0 cards, you can see the grumbling on forums, and people projecting the same for Maxwell in the future.

And it wouldn't hurt AMD to get their hardware features implemented even if the game is Nvidia-sponsored.

    http://techreport.com/news/14707/ubisoft-comments-on-assassin-creed-dx10-1-controversy-updated

As opposed to when they don't even get them implemented in a game they sponsor:

    www.overclock.net/t/1575638/wccftech-nano-fury-vs-titan-x-fable-legends-dx12-benchmark/110#post_24475280
     
  7. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
Really should be posting this in the AMD Execution thread, but how can you expect AMD to do that when they don't have the market share to push devs to their side? A direct result of market share is cash...

It doesn't matter how they market it now; it's the initial launch of the cards that matters. Upgrade cycles, for individuals, OEMs, etc., tend to be directly linked to product reviews and product launches, or shortly after them — not two or three quarters down the road. And in this case we are looking at three quarters since the launch of Maxwell 2, plus two more quarters on top of that, to see the "advantage" of older AMD hardware.

Oxide gave it a whirl but didn't do due diligence on how their code ran on other hardware, and by doing so ran into erroneous conclusions? Sounds like too many mistakes for a team of fairly senior devs, don't you think?
     
  8. Newguy

    Regular Newcomer

    Joined:
    Nov 10, 2014
    Messages:
    256
    Likes Received:
    112
    Name of next arch is Polaris?

    http://www.hwbattle.com/data/editor/1512/92d129551b4dd2b22676276d4111a08d_1451478310_7063.jpg

    http://www.hwbattle.com/bbs/board.php?bo_table=news&wr_id=15345

    "Our guiding lights is to power every pixel on every device efficiently. Stars are the most efficient photon generators of our universe. Their efficiency is the inspiration for every pixel we generate."

Polaris (the North Star), a guiding star. A "guiding" arch; "stars are the most efficient photon generators"; most efficient/leading GPU arch/whatever marketing angle you can come up with.
     
    Ollo likes this.
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
That's poetic, I suppose, but perhaps someone with an astrophysics degree can confirm my suspicion: between hydrogen fusion's tendency to throw off neutrinos and a star's habit of throwing out vast amounts of plasma, a fair share of a star's output is non-photonic, unlike most non-nuclear light sources.

    Don't know about implying their inspiration for efficiency is a supermassive fusion reactor. Polaris is a multiple star, with the most notable member being a supergiant 2,500 times as luminous as the Sun.
    (https://en.wikipedia.org/wiki/Polaris)
    In the case of the Sun, I find that I cannot fit a device of its volume and a 3.846x10^26 W power supply in virtually any small form factor case I have encountered so far.
     
  10. Raqia

    Regular

    Joined:
    Oct 31, 2003
    Messages:
    508
    Likes Received:
    18
OT: To your last point, make your computational density high enough, and your "ultimate laptop" (of 1 liter volume) might have a power density close to that of the entire Sun:


    http://arxiv.org/pdf/quant-ph/9908043.pdf
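The headline number in Lloyd's paper follows from the Margolus–Levitov bound, which caps a system's operation rate at 2E/(πħ). For the one-kilogram "ultimate laptop" the arithmetic is quick to check; this is just an illustrative sketch using standard values for the constants:

```python
import math

# Margolus-Levitov bound: max operations per second = 2E / (pi * hbar).
hbar = 1.0545718e-34    # J*s, reduced Planck constant
c = 2.99792458e8        # m/s, speed of light
m = 1.0                 # kg, mass of Lloyd's "ultimate laptop"

E = m * c**2                             # total mass-energy, ~9.0e16 J
ops_per_sec = 2 * E / (math.pi * hbar)   # ~5.4e50 ops/s, Lloyd's figure

print(f"E = {E:.3e} J")
print(f"Max rate = {ops_per_sec:.3e} ops/s")
```

Running the numbers reproduces the ~5.4 × 10^50 operations per second quoted in the paper.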
     
  11. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
In defense of using a star, the fusion core of the Sun has a very enviable power density of 276.5 W/m^3, or "similar to an active compost heap". Natural fusion is a brute-force affair.
It's the photon-versus-everything-else ratio for a star overall that pads things out, since the more conventional mechanisms all eventually devolve to thermal radiation, without also accounting for things like neutrinos, outflowing gas, or possibly gravitational waves as in a giant star system.
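To put the star-versus-silicon comparison in rough numbers: averaged over its whole volume, the Sun's power density is tiny next to a graphics card's. A back-of-the-envelope sketch (solar luminosity and radius are standard values; the 275 W / half-liter figure for a Fury X-class card is an assumed ballpark, not a measured volume):

```python
import math

# Whole-Sun average power density: luminosity / volume.
L_sun = 3.846e26        # W, solar luminosity
R_sun = 6.957e8         # m, solar radius
V_sun = (4.0 / 3.0) * math.pi * R_sun**3
sun_density = L_sun / V_sun           # ~0.27 W/m^3 averaged over the star

# Assumed ballpark for a 275 W GPU occupying roughly half a liter.
gpu_power = 275.0       # W, Fury X TDP
gpu_volume = 0.5e-3     # m^3, hypothetical card volume
gpu_density = gpu_power / gpu_volume  # ~550,000 W/m^3

print(f"Sun (average):  {sun_density:.2f} W/m^3")
print(f"GPU (ballpark): {gpu_density:.0f} W/m^3")
```

Averaged over the whole star, the Sun comes in under a third of a watt per cubic metre — well below even the ~276 W/m^3 at the centre of the core, and some six orders of magnitude below the card.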
     
  12. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,491
    Likes Received:
    909
    Lately, AMD has been talking up HDR displays for the future. While blacks are not its specialty, Polaris certainly reaches brightness levels somewhat above those of today's LCDs.
     
  13. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,638
    Likes Received:
    148

    Yeah, this came up a month ago: https://forum.beyond3d.com/posts/1883557/


    https://twitter.com/GChip/status/669637153748484096
     
  14. gamervivek

    Regular Newcomer

    Joined:
    Sep 13, 2008
    Messages:
    715
    Likes Received:
    220
    Location:
    india
    If this is a new GPU architecture, then it's quite unlike AMD to be giving presentations of it so early, or perhaps it's not so early.
     
  15. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
Without saying whether it's true or not... The first complete public introduction of GCN was made in June 2011. When I say complete, you had everything (from microcode to cache sizes, the type of architecture, even the resulting code); we knew exactly what the architecture would be something like six months before the first chip was released.

I really doubt we'll see such an in-depth presentation of a µarch from AMD again for a while, anyway... so, the name...
     
    #355 lanek, Dec 31, 2015
    Last edited: Dec 31, 2015
  16. FriendlyNeighbour

    Newcomer

    Joined:
    Sep 18, 2013
    Messages:
    21
    Likes Received:
    8
AMD has historically not been as vocal/open about its future plans in the way we've grown accustomed to Nvidia being.

Since forming/taking over RTG, Raja has sought to change that. AnandTech's coverage of their visual roadmap for 2016 was yet another indication of this. In this vein, it would make sense for Raja and RTG to spell out in more concrete terms what to expect from Polaris, beyond the snippets we've heard from conference calls and loose talk/rumors.

Whether or not they decide to do it is an open question, but I'd be surprised if it was at CES, considering the GPU industry has moved toward later dates in the year for GPU uarch talks. CES these days is more about autonomous vehicles, smartphones, IoT, drones and things like that. It would still be fun if AMD did decide to disclose further information at CES.
     
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
Nvidia has always been really vocal, but in a marketing way: plots and roadmaps with graphs and names, no details on the hardware, no real details at all.

On the other hand, when AMD has been vocal, as at AFDS in June 2011 (the AMD Fusion Developer Summit), it showed incredibly detailed information, more than we were used to from any company, even after hardware had been released.

Slides from AFDS 2011, introducing the GCN architecture (six months before the release of GCN GPUs): http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf

That said, I too no longer see CES as a presentation venue, for the exact same reasons you cite.
     
    #357 lanek, Dec 31, 2015
    Last edited: Dec 31, 2015
    firstminion likes this.
  18. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,166
    Likes Received:
    1,836
    Location:
    Finland
It's the next evolution of GCN, not a completely new architecture (earlier slides suggest there will be bigger changes than GCN 1-3 had, though).
     
  19. Alessio1989

    Regular Newcomer

    Joined:
    Jun 6, 2015
    Messages:
    582
    Likes Received:
    285
New rasterizer and tessellation engine, please. I really do not care if they end up within ±10% of Pascal GPUs' TDP...
     
  20. Frenetic Pony

    Regular Newcomer

    Joined:
    Nov 12, 2011
    Messages:
    324
    Likes Received:
    84
Those have been quite improved with Fiji. High tessellation factors are still bottlenecked, but there's very little reason, now or in the foreseeable future, to change that in a really significant way. Tessellation needs a lot of other software and hardware support before you're going to see practical reasons for, say, a 32x tessellation factor. Perhaps just moving to a wider geometry front end would be beneficial, but there's not a huge need to change the design yet again.

Beyond that, relieving register pressure, increasing single-precision compute performance (a 980 Ti can beat it in many tests despite the Fury X's theoretical advantage) and getting better performance per watt would all be seen as priorities. Fury X was simultaneously TDP- and die-size-limited. That will surely be fixed with the move to a new node, but a new series of GPUs on that node could quickly ramp back up to being TDP-limited if the architecture isn't made more efficient.
     