NVIDIA Maxwell Speculation Thread

Discussion in 'Architecture and Products' started by Arun, Feb 9, 2011.

  1. silent_guy

    silent_guy Veteran Subscriber

    It's not exactly static in the sense that the signal is either low or high, but as an approximation I simply take VDDio and divide by two, so 0.7V. On the receiving side, you have two 40 Ohm resistors in parallel (one to ground, one to power), so you get 0.7^2/20 = 24.5mW per IO, or about 9.4W for a 384-bit bus.

    This is probably a terrible approximation, since it excludes series resistance etc., but it gives a ballpark number.

    I think this is the most important reason why HBM saves power over conventional DRAMs.
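    The back-of-the-envelope termination estimate above can be sketched in a few lines. This is just the poster's approximation made explicit; the 1.4V VDDio and 40 Ohm terminator values are assumptions for illustration, not measured figures.

    ```python
    # Rough GDDR5-style IO termination power estimate (illustrative only).
    # Assumptions: VDDio ~1.4 V, average signal level ~VDDio/2,
    # two 40-ohm terminators in parallel at the receiver.
    vddio = 1.4
    v_signal = vddio / 2          # ~0.7 V average signal level
    r_term = 40.0 / 2             # two 40-ohm resistors in parallel -> 20 ohms
    p_per_io = v_signal**2 / r_term   # V^2 / R, ~24.5 mW per IO
    p_bus = p_per_io * 384            # ~9.4 W for a 384-bit bus
    print(f"{p_per_io * 1e3:.1f} mW per IO, {p_bus:.1f} W total")
    ```

    As the follow-up posts note, this ignores series resistance and the actual signalling scheme, so treat it strictly as a ballpark.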
     
  2. silent_guy

    silent_guy Veteran Subscriber

    I'm starting to think that I was very right about this being a terrible approximation, and that it's not anywhere close to being as high as I claimed it would be.
     
  3. CarstenS

    CarstenS Legend Subscriber

  4. silent_guy

    silent_guy Veteran Subscriber

  5. silent_guy

    silent_guy Veteran Subscriber

    There is one strange behavior in that graph as well: compared to GDDR5, the column power consumption is actually higher even though the BW is lower.

    That doesn't make a lot of sense...
     
  6. 3dilettante

    3dilettante Legend Alpha

    Which ones have higher column power but have lower bandwidth?
    The black line seems to be always increasing, and the green bar segment for column power seems to do so as well.

    The on-die power costs seem to rise along with bandwidth, which one would expect since something needs to be read/written and higher bandwidth means more of it.
     
  7. silent_guy

    silent_guy Veteran Subscriber

    Oops, I misread the graph!

    Still, while the HBM1 BW goes up only marginally from the fastest GDDR5 number, the column power seems to increase more than you'd expect given the lower core voltage.
     
  8. 3dilettante

    3dilettante Legend Alpha

    It's eyeballing a graph, but would the relative difference between the GDDR5 data point and the HBM one be in the range of 30-50%?
    Perhaps the internal voltage isn't reduced as much?
    The column power increase seems to be in that ballpark. There is a drop in row power between HBM and HBM2, which might be attributable to pseudo-channel mode cutting the page size per activation in half, given that the graph is assuming only 160 bytes per 1-2KB page activation gets used.
    At least for graphics, this might be too conservative relative to what HPC might get.

    Really changing the curve would require a significant change in the actual core arrays (or even the base storage tech?), which have been kept pretty consistent across all the interfaces in the comparison.
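    The pseudo-channel point above is easy to quantify. A hypothetical sketch under the graph's stated assumption (only 160 bytes of each activated page are actually used): halving the page size doubles the fraction of the activation that does useful work, which is one plausible source of the row-power drop between HBM and HBM2.

    ```python
    # Illustrative only: page utilization per activation when just 160 bytes
    # of each opened row are consumed, for a full page vs. a pseudo-channel
    # half page. Page sizes are the 1-2 KB figures mentioned in the thread.
    bytes_used = 160
    for label, page_bytes in [("2 KB full page", 2048),
                              ("1 KB pseudo-channel page", 1024)]:
        utilization = bytes_used / page_bytes
        print(f"{label}: {utilization:.1%} of activated bits used")
    ```

    So under that assumption utilization goes from about 7.8% to about 15.6%; as noted, a graphics workload (or HPC) may use more of the page than this conservative figure.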
     
  9. iMacmatician

    iMacmatician Regular

    I revisited this rumor while looking for information related to a Pascal rumor, and Hardware Battle has issued an update sometime between the time of my quoted post and now.

     
  10. Kaotik

    Kaotik Drunk Member Legend

    Is it possible to disable driver command lists support in Maxwell gen 2 cards somehow?
    I'd just like proof of whether Hallock was full of it or actually right that DCLs help in short benchmarks but hurt performance in the long run.
     
  11. Razor1

    Razor1 Veteran

    Kaotik, not sure where you are going with that. Command lists are built on the CPU, and even submitting a command list won't tell the GPU to start working, so in theory, yeah, there should be a way to disable it. But by doing so you'd still need a completely new command stream that looks visually identical, which I don't think would be easy to do unless you know what the original command list is, and that is pretty expensive on the CPU.

    Ah yeah missed that statement in the AMA, damn Reddit lol.

    Yeah, what Hallock stated doesn't sound likely. I don't know if nV has much control over the command lists to that degree; essentially what nV would have to do is know how the driver threading would predict the command lists and what branches they take. I'm not sure they have that much control over the application to do that. Anyone else?
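    For anyone unfamiliar with why "command lists are done on the CPU", here is a conceptual sketch (not real driver or D3D11 code; all class and method names are made up) of the record-then-execute model: deferred contexts record work on worker threads, and only the immediate context's execute call hands anything to the GPU.

    ```python
    # Conceptual model of D3D11-style deferred command lists (illustrative
    # only; this mimics the record/execute split, not any real API).
    class CommandList:
        """An opaque, finished recording of commands."""
        def __init__(self, commands):
            self.commands = commands

    class DeferredContext:
        """Records commands on a CPU thread; nothing reaches the GPU yet."""
        def __init__(self):
            self._recording = []
        def draw(self, vertex_count):
            self._recording.append(("draw", vertex_count))
        def finish_command_list(self):
            cl = CommandList(self._recording)
            self._recording = []
            return cl

    class ImmediateContext:
        """Only this context actually submits work (here it just collects it)."""
        def __init__(self):
            self.submitted = []
        def execute_command_list(self, cl):
            self.submitted.extend(cl.commands)

    # Usage: a worker thread records, the main thread submits in order.
    ctx = ImmediateContext()
    dc = DeferredContext()
    dc.draw(100)
    dc.draw(200)
    ctx.execute_command_list(dc.finish_command_list())
    print(len(ctx.submitted))  # 2
    ```

    The point of the sketch: "disabling" DCLs doesn't remove the work, it just forces the same commands to be recorded and submitted on one thread, which is why re-creating an equivalent stream is the expensive part.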
     
    Last edited: Mar 6, 2016
  12. pharma

    pharma Veteran

  13. pharma

    pharma Veteran

    NVIDIA Cuts Prices of GTX 980 Ti, GTX 980, and GTX 970

    https://www.techpowerup.com/223432/nvidia-cuts-prices-of-gtx-980-ti-gtx-980-and-gtx-970
     
  14. CSI PC

    CSI PC Veteran

    I could not see it in the translation myself, but do they explain how they measure power consumption?
    Ideally it needs to be from the PCIE connector and slot to exclude everything else.
    Edit:
    NVM, found the answer; I should have looked closer at their photograph :)
    Cheers
     
  15. pharma

    pharma Veteran

    June 15, 2016: Counterfeit "GeForce GTX 960" cards surfaced in the USA & China
    Before Geforce GTX 1060: Counterfeit GTX 960 at low prices in circulation

    http://www.pcgameshardware.de/Nvidi...-960-zu-guenstigen-Preisen-im-Umlauf-1198678/
     
  16. Ryan Smith

    Ryan Smith Regular

    Last edited: Jun 17, 2016
  17. Alexko

    Alexko Veteran Subscriber

  18. Lightman

    Lightman Veteran Subscriber

    Memory type is also clearly marked on the box!
    Gd5 DDR3 :p
     
  19. xEx

    xEx Veteran

  20. CSI PC

    CSI PC Veteran

    Although the shock is how high the 390X is compared to the 290X, especially as it was meant to have better dynamic power management.
    The TBP for a 290x & 290 was 250W, and the Fury X 275W (so its real-world figure is pretty good in the measurements).

    Cheers
     