Nvidia Pascal Announcement

Got the same information. GP102 = big GP104, ready for back to school.

So it's GDDR5X? Without HBM, what's the fun? This has to be a 12GB 1080Ti, fast-tracked to meet Vega 10...

Just last week, I was wondering how fast a 1080Ti would be using GP102 cores... AFAIK big Pascal has lower max boost clocks, and that screws up the metrics, no?

So a 2GHz 1080 (2560 CC) would be close in TFLOPS to a 1.4GHz 1080Ti (3584 CC)...
Same goes for texturing and fillrates, no?
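
Quick sanity check on that, using the usual peak-FP32 assumption of 2 FLOPs (one FMA) per CUDA core per clock; just a back-of-the-envelope sketch, not a real performance prediction:

```cpp
#include <cstdio>

// Peak FP32 throughput in TFLOPS, assuming 2 FLOPs (one FMA) per CUDA core per clock.
double peak_tflops(int cuda_cores, double clock_ghz) {
    return 2.0 * cuda_cores * clock_ghz / 1000.0;
}

int main() {
    std::printf("2.0GHz 1080   (2560 CC): %.2f TFLOPS\n", peak_tflops(2560, 2.0)); // 10.24
    std::printf("1.4GHz 1080Ti (3584 CC): %.2f TFLOPS\n", peak_tflops(3584, 1.4)); // ~10.04
}
```

So yes, at those clocks the two would land within ~2% of each other on paper.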

Is there such a thing as 6000MHz GDDR5X? The memory bandwidth could be a much bigger win, but still...
 
Eh, it's not exactly smart though. Obviously it's better to save. The first video card I bought was a Voodoo 2 for $250 back in '98; I was 11 and got something like $10 a week in pocket money. Ditto the Ti 4400 I bought for $450 when I was earning $8/hr, and again a 9800 Pro for $500 when not earning much more.

Not to be pedantic, but the R350 refresh had an MSRP of $399.
 
So it's GDDR5X? Without HBM, what's the fun?
The only "fun" with HBM is smaller size of the card and lower power consumption, that's it AFAIK.
This has to be a 12GB 1080Ti, fast-tracked to meet Vega 10...
Cut Vega (64 CU) is going to compete with the 1070/1080 and might come out in Q4; full Vega (96 CU) is likely to release only next year.

Nvidia is probably on its own schedule for early Volta or another Pascal gen in between.


Is there such a thing as 6000MHz GDDR5X? The memory bandwidth could be a much bigger win, but still...
16000MHz? In theory... In reality they don't even need the available 12000MHz for a Titan or Ti.
 
[attached benchmark image]

That's pretty much on par with or slightly ahead of the RX 480..though at lower power..I would expect 120W. NV would have to price it close to the RX 480. We could very well see $199 for 3GB.
So it's GDDR5X? Without HBM, what's the fun? This has to be a 12GB 1080Ti, fast-tracked to meet Vega 10...

The only benefit HBM would bring here is a smaller PCB and some power savings, for a significantly higher cost. Yes, I know it has higher bandwidth, but we saw how well the 1080 did with its "limited" bandwidth..so I suspect GP102 will be fine with GDDR5X.

Given the rumour that Vega is being pulled forward to October..it seems like it might be the other way around.
Just last week, I was wondering how fast a 1080Ti would be using GP102 cores... AFAIK big Pascal has lower max boost clocks, and that screws up the metrics, no?

So a 2GHz 1080 (2560 CC) would be close in TFLOPS to a 1.4GHz 1080Ti (3584 CC)...
Same goes for texturing and fillrates, no?

Well..at those speeds..the 2GHz 1080 would be ahead..but we're comparing apples and oranges. In practice..GP102 should be clocked lower than GP104, but not significantly so.
Is there such a thing as 6000MHz GDDR5X? The memory bandwidth could be a much bigger win, but still...

Do you mean 12 Gbps GDDR5X? Yes..GDDR5X is rated up to 14 Gbps, but right now it seems only 10 Gbps is in production.
The only "fun" with HBM is smaller size of the card and lower power consumption, that's it AFAIK.

Well, higher bandwidth of course..but like I said above..I don't think it's really needed.
Nvidia is probably on its own schedule for early Volta or another Pascal gen in between.

They are contracted to ship Volta for supercomputers in 2017, so another Pascal gen in between is unlikely.
 
So it's GDDR5X? Without HBM, what's the fun? This has to be a 12GB 1080Ti, fast-tracked to meet Vega 10...
Yep, GDDR5X on a 384-bit interface. It should have no more bandwidth issues than GP104.
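
For the curious, a rough comparison, assuming the same 10 Gbps GDDR5X that GP104 ships with today (peak bandwidth = bus width in bytes × per-pin data rate); the 12 Gbps line is hypothetical:

```cpp
#include <cstdio>

// Peak memory bandwidth in GB/s: (bus width in bits / 8) * per-pin data rate in Gbps.
double bandwidth_gbs(int bus_bits, double gbps_per_pin) {
    return bus_bits / 8.0 * gbps_per_pin;
}

int main() {
    std::printf("GP104, 256-bit @ 10 Gbps: %.0f GB/s\n", bandwidth_gbs(256, 10.0)); // 320
    std::printf("GP102, 384-bit @ 10 Gbps: %.0f GB/s\n", bandwidth_gbs(384, 10.0)); // 480
    std::printf("GP102, 384-bit @ 12 Gbps: %.0f GB/s\n", bandwidth_gbs(384, 12.0)); // 576
}
```

That's a 50% bandwidth bump over GP104 even at the same per-pin speed, which lines up with the expected ~40% shader increase.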

So a 2GHz 1080 (2560 CC) would be close in TFLOPS to a 1.4GHz 1080Ti (3584 CC)...
Same goes for texturing and fillrates, no?
GP102 is rumored to reach frequencies (very, very) close to GP104's.
The expected performance boost is 35~45% over GP104 in GPU-bound situations.
 
If GP102 can reach 2GHz, I'll be impressed, because their big, expensive HPC parts only boost to ~1.4GHz.

Although it is only ~40% faster on paper than the 1080, I hope Nvidia doesn't sell it as a Titan... even on a new node, consumer graphics doesn't seem to be getting pushed much further.
 
How depressing. A budget card almost beating my factory-OC'd R9 390X, with its monster power draw, heat output and fan noise... :(

Oh well, I should be happy, and really, I am.

I was wondering if things like ROPs and tessellation are holding AMD back...
The 390X is a compute monster; it should not get beaten so easily by a budget part.

How about game software not utilising enough shader power? The PS4, even though it uses GCN cores, only has 1.8 TFLOPS/1152 shaders... AMD holding back AMD... a thought?
 
The HPC part doesn't have the kind of liberties with power that GeForce has. HPC GPUs have (almost) always run at lower clocks.

The PCI-Express Tesla P100 has a 250W TDP at 1.3GHz boost clocks. The mezzanine version is 50W more for 180MHz higher boost clocks.
If GP102 == GP100, then I'm not sure how you expect the consumer version to clock up to 2GHz.
The difference in clocks between the Tesla M40 and Titan X, at the same 250W TDP, is ~50MHz. To think the equivalent difference in the Pascal line would be 700MHz is a bit too much wishful thinking IMO.
 
From the above website:
How do I utilize HDR output with my app?
At a high level, utilizing HDR display only involves these steps.
....
4. Utilize NVAPI to communicate to the driver and display that you wish to display an HDR image
Is it the same for AMD? Do the developers have to utilize proprietary APIs for both vendors (or three vendors if we also count Intel) in order to display HDR? Isn't there some standard way to communicate through DX that you are rendering in HDR?
 
From the above website:

Is it the same for AMD? Do the developers have to utilize proprietary APIs for both vendors (or three vendors if we also count Intel) in order to display HDR? Isn't there some standard way to communicate through DX that you are rendering in HDR?
Considering Xbox One is apparently using standard DirectX 12 now, there should be; otherwise XB1 devs will be using AMD's way.
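
For what it's worth, DXGI does expose a vendor-neutral hook for the "tell the display you're rendering HDR" part, separate from NVAPI. A minimal sketch, assuming you already have a swap chain you can QueryInterface to IDXGISwapChain3 and a 10-bit (R10G10B10A2) back buffer; EnableHdr10 is just an illustrative helper name:

```cpp
#include <dxgi1_4.h>

// Ask DXGI (rather than NVAPI) whether the current output path can present
// HDR10 (ST.2084 transfer function, BT.2020 primaries), and switch to it if so.
HRESULT EnableHdr10(IDXGISwapChain3* swapChain3) {
    const DXGI_COLOR_SPACE_TYPE hdr10 = DXGI_COLOR_SPACE_RGB_FULL_G2084_NONE_P2020;

    UINT support = 0;
    HRESULT hr = swapChain3->CheckColorSpaceSupport(hdr10, &support);
    if (FAILED(hr))
        return hr;
    if (!(support & DXGI_SWAP_CHAIN_COLOR_SPACE_SUPPORT_FLAG_PRESENT))
        return E_FAIL; // display/driver can't present this color space

    return swapChain3->SetColorSpace1(hdr10);
}
```

Whether games can rely on that path alone today, or still need the vendor APIs for the mastering-metadata side, I don't know.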
 