NVIDIA Maxwell Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 9, 2011.

  1. zorg

    Newcomer

    Joined:
    Aug 1, 2012
    Messages:
    32
    Likes Received:
    0
    Location:
    Sweden
    I've heard this BS on many forums. It's only possible if they make an API for it that works exactly this way. But running the game on the CPU and the API on the GPU is more problematic than just creating a Mantle alternative.
     
  2. Psycho

    Regular

    Joined:
    Jun 7, 2008
    Messages:
    745
    Likes Received:
    39
    Location:
    Copenhagen
    Or to put it another way: what would those ARM cores be capable of (with regard to the draw-call bottleneck) that you couldn't do driver-side with regular CPU resources? If the bottleneck were on the driver/GPU side of DirectX, it would be easily solvable.
     
  3. revan

    Newcomer

    Joined:
    Nov 9, 2007
    Messages:
    55
    Likes Received:
    18
    Location:
    look in the sunrise ..will find me
    In my country we have a saying: "A madman throws a stone into a lake and five wise men jump into the water to take it out." This is a Rumors & Speculations thread, of course, but a rumor must have at least a thin perfume of credibility to deserve a discussion around it. Those guys from "OnLiveSpot" mixed tech info (whispered to them by sources, moles and little birds) with Star Wars lore and other things, so let me repeat myself: are we talking about delusion now?
     
    #583 revan, Jan 27, 2014
    Last edited by a moderator: Jan 27, 2014
  4. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,555
    Likes Received:
    699
    In my land we also say that when someone criticizes something, it must be substantiated. Care to elaborate on where the Star Wars lore is in there? I am not saying the site is credible, but that sentence had nothing factually wrong in it.
     
  5. itaru

    Newcomer

    Joined:
    May 27, 2007
    Messages:
    156
    Likes Received:
    15
    Maxwell will be the same as Xeon Phi and Broadcom's VideoCore.

    Even so, it would enable things like:
    - shader compilation on the GPU side
    - remote procedure calls

    Also, complex scheduling, such as dynamic warp formation and dynamic warp subdivision, becomes possible.
     
  6. DSC

    DSC
    Banned

    Joined:
    Jul 12, 2003
    Messages:
    689
    Likes Received:
    3
  7. LordEC911

    Regular

    Joined:
    Nov 25, 2007
    Messages:
    789
    Likes Received:
    74
    Location:
    'Zona
    So this "special sauce ARM" can magically reduce API overhead and drastically increase draw calls without any changes to the API or the engine/game but has to be specially coded to work in Nvidia's own middleware/library which would then have to have changes made to the game code?

    I know you really, really want to believe but please put on your critical thinking cap for a minute.
     
  8. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    The "1 million draw calls under DirectX without code changes" has probably even less truth to it than the pastebin "leak" a while ago. But he likely got the idea from there.
     
  9. xDxD

    Regular

    Joined:
    Jun 7, 2010
    Messages:
    412
    Likes Received:
    1
  10. LordEC911

    Regular

    Joined:
    Nov 25, 2007
    Messages:
    789
    Likes Received:
    74
    Location:
    'Zona
    Well, if the pictures of K1 are remotely accurate, the 2x Denver block is at most half an SMX in area, so 4 SMXs.
    On the other hand, what can they decrease and/or remove from the die because of the Denver cores? That will be much harder to calculate, and we may never get all the details if that is the case.
     
  11. zorg

    Newcomer

    Joined:
    Aug 1, 2012
    Messages:
    32
    Likes Received:
    0
    Location:
    Sweden
    If one core can turbo up to ~30 GHz. :lol:
     
  12. xDxD

    Regular

    Joined:
    Jun 7, 2010
    Messages:
    412
    Likes Received:
    1
    ahhh...sure, ok.

    :mrgreen:
     
  13. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    The way it would look remotely credible is if everything ran on Denver + GPU and the game engine were built around it, i.e. like programming something specifically for an HSA APU.

    Of course, I agree it sounds like someone merely wrote "16 cores" and "ONE MILLION BILLION BAJILLION DRAWCALLS!!" to get attention.
    Dropping the SteamOS name in the article is suspect: it's sensible and SteamOS might technically run, but what about Steam itself, and the games? There aren't that many games on Steam Linux x86 as it is. I'm sure Valve would port its games to ARMv8, and it could be a long-term solution, but most third-party games could end up as "orphans". I can't even buy the Quake games for Steam Linux i386.

    I'm not saying desktop/laptop gaming on ARMv8 will never exist, but I can't see how it would be announced so early.
     
  14. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
    It probably cannot, unless the API is hacked in a major way. The problem many developers are facing right now is - if I understand correctly - that D3D11's threaded rendering only shifts the bottleneck to a point where all the commands have to be queued into a single thread again before being fired off to the GPU. Now, if you had that much control in a GPU, and if you could get Microsoft to issue an update to DX that allowed that last queue-construction step to be eliminated (i.e. sorted out on the GPU die), then it could work. I don't know whether such a thing would be possible in OpenGL via extensions, or whether you could emulate it on the GPU die.
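    As a minimal sketch of the pattern being described (D3D11 deferred contexts; illustrative, not code from the post):

    #include <d3d11.h>
    #include <thread>
    #include <vector>

    // Worker thread: record draw calls into a deferred context, then bake
    // them into a command list.
    void RecordScenePart(ID3D11DeviceContext* deferred, ID3D11CommandList** outList)
    {
        // ... issue this thread's draw calls on `deferred` here ...
        deferred->FinishCommandList(FALSE, outList);
    }

    void RenderFrame(ID3D11Device* device, ID3D11DeviceContext* immediate)
    {
        const int kWorkers = 4;
        std::vector<ID3D11DeviceContext*> deferred(kWorkers, nullptr);
        std::vector<ID3D11CommandList*> lists(kWorkers, nullptr);
        for (auto& ctx : deferred)
            device->CreateDeferredContext(0, &ctx);

        // Recording scales across threads...
        std::vector<std::thread> workers;
        for (int i = 0; i < kWorkers; ++i)
            workers.emplace_back(RecordScenePart, deferred[i], &lists[i]);
        for (auto& t : workers)
            t.join();

        // ...but submission does not: every command list is replayed through
        // the one immediate context, which is the single-thread serialization
        // point described above.
        for (int i = 0; i < kWorkers; ++i) {
            immediate->ExecuteCommandList(lists[i], FALSE);
            lists[i]->Release();
            deferred[i]->Release();
        }
    }

    However many threads record, ExecuteCommandList still runs on one thread; that is the queue the post says would have to be eliminated or moved onto the GPU die.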
     
  15. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,343
    Likes Received:
    443
    Location:
    Finland
    If the driver is the one communicating with the GPU, isn't it possible to have a driver that hands jobs/data to the ARM cores to do the work, without DX/GL needing to know?
     
  16. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,420
    Likes Received:
    179
    Location:
    Chania
    Oh, you mean the Photoshop artwork? Besides, why would you want to use performance-optimized rather than area-optimized CPU cores in a high-end GPU chip in the first place?

    The point was that it's likely not going to exceed the 8-core mark for the top dog, and of course that there's again an unbelievable amount of rubbish surrounding the related speculation.

    Project Echelon was some sort of "what the future could bring" exercise. Early versions of it mentioned 16 integrated CPU cores, while after a while that bounced down to 8, and that was meant for 2017 and NOT for 2014/15, LOL... :razz:
     
  17. Albuquerque

    Albuquerque Red-headed step child
    Veteran

    Joined:
    Jun 17, 2004
    Messages:
    3,845
    Likes Received:
    329
    Location:
    35.1415,-90.056
    The problem is the DX API that sits between the application and the driver. Thus, the most optimized driver in the world still couldn't remove the bottleneck that DX11 injects in the middle.

    The only way to circumvent it is to NOT USE the DX API calls in the first place, or else "fix" the scaling issues within DX itself. Hence, Mantle.
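    As an illustrative sketch of that overhead (hypothetical Object type; not code from the post) - each Draw() below crosses the D3D11 runtime and driver, so CPU submission cost scales with draw-call count, and instancing is the classic workaround short of a new API:

    #include <d3d11.h>
    #include <vector>

    struct Object {                          // hypothetical per-object data
        ID3D11ShaderResourceView* texture;
        UINT vertexCount;
        UINT startVertex;
    };

    void SubmitNaive(ID3D11DeviceContext* ctx, const std::vector<Object>& scene)
    {
        for (const Object& obj : scene) {
            ctx->PSSetShaderResources(0, 1, &obj.texture);  // per-object state change
            ctx->Draw(obj.vertexCount, obj.startVertex);    // per-object API crossing
        }
    }

    // One call draws many copies: far fewer runtime/driver crossings, which is
    // exactly the overhead a thin API like Mantle was built to attack.
    void SubmitInstanced(ID3D11DeviceContext* ctx, UINT vertsPerInstance, UINT instances)
    {
        ctx->DrawInstanced(vertsPerInstance, instances, 0, 0);
    }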
     
  18. iMacmatician

    Regular

    Joined:
    Jul 24, 2010
    Messages:
    773
    Likes Received:
    200
    From tmall.com: GTX 750 specs. The image on the page containing the specs is also here (it's fairly large).

    I can't read Chinese but it appears that the 750 has 768 CCs, 1020 MHz base clock, 1085 MHz boost clock, 5 Gbps memory, 1 GB, a 128-bit bus, and DX11.2.

    There are also some OC benchmarks here.

     
  19. Picao84

    Veteran Regular

    Joined:
    Feb 15, 2010
    Messages:
    1,555
    Likes Received:
    699
    If this chip really does not require any external power connector, it looks pretty damn impressive. HD 6950-level performance for less than 75 W?

    EDIT - I wonder what that "4+2" thing is?
     
  20. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    9,999
    Likes Received:
    4,571
    That's not the first time for a card of this caliber.

    However, all I see is a GTX 750 Ti performing worse than a GTX 650 Ti Boost.
    Here we go for another round of erroneous naming...
     
    #600 ToTTenTranz, Jan 29, 2014
    Last edited by a moderator: Jan 29, 2014