Nvidia Pascal Announcement

NVIDIA 16nm Pascal GP104 GPU Pictured in Full Detail - Powers 8 GB GeForce GTX 1080 and GeForce GTX 1070 Graphics Cards

[Image: NVIDIA Pascal GP104 GPU]

[Image: NVIDIA Pascal GP104-200-A1 chip shot]
 
Am I the only one who thinks that looks like Photoshop?

In the second picture, the angle of the bright white text seems off relative to the other text.

EDIT... maybe it's just me... there's a bit of skew between the dim and bright text in the first picture too, I guess... maybe it's just how oddly bright the lower text looks that seemed off.
 
I imagine the performance gulf between the 1080 and 1070 will be rather big now (retail price as well)
 
I imagine the performance gulf between the 1080 and 1070 will be rather big now (retail price as well)
Well, there have been rumours of 3 models; maybe it's x70, x80 and x80 Ti, of which only the Ti comes with GDDR5X? (and which should be released a bit later, since Micron still hasn't started mass producing that memory)
Also, just getting more bandwidth doesn't automatically work miracles for performance if the chip already had sufficient bandwidth for most scenarios.
 
Maybe the chip can clock nicely higher without losing too much efficiency, all the way up to 200-225 W or something like that (assuming an initial target of ~175 W).

But IMO the chip was probably designed to be paired with GDDR5X all along, and it's more a case of the cut-down chips also offering "good enough" performance with GDDR5.
 
(and which should be released a bit later, since Micron still hasn't started mass producing that memory)
Maybe Micron is rushing the first mass-production lots a little bit, but the regular time through fab and packaging is around 3 months. With stated availability in the June timeframe, there's little question that mass production must have started already.
 
Maybe Micron is rushing the first mass-production lots a little bit, but the regular time through fab and packaging is around 3 months. With stated availability in the June timeframe, there's little question that mass production must have started already.
Where did they state June availability for GDDR5X? Or where did NVIDIA say GP104+GDDR5X will be available in June?
Micron's last statement was that they planned to start mass production in the summer; a bit earlier they said August.
 
Where did they state June availability for GDDR5X? Or where did NVIDIA say GP104+GDDR5X will be available in June?
I've posted this quote from their March conference call already:
In the Graphics segment, we're enthusiastic about the early success of our GDDR5X, a discrete solution for increasing data rates above 10 gigabits per second. We've several major design wins and expect to have the products available by the end of the current fiscal quarter.
Their current fiscal quarter ends on May 31st.

Micron's last statement was that they planned to start mass production in the summer; a bit earlier they said August.
They must have accelerated things; the conf call came after those earlier predictions.
I can't blame you for thinking otherwise: everybody seems to be using GDDR5X availability as a crutch to claim that the GDDR5X version will come after the summer or in Q4, but that Micron statement throws that argument under the bus.
 
I meant the kind of bandwidth that GDDR5X offers in a 256-bit config. It could have been 384-bit GDDR5 when it was on the drawing board.
I don't think 384 bits was ever on the table for this class of chip.

GDDR5 at 8 Gbps is already ~14% faster than at 7 Gbps.

I use the rule of thumb that an x% increase in memory clock speed results in an x/2% increase in performance, based on the overclocking table in Anandtech's GTX 980 review. And, to first order, the opposite is true as well: a reduction in bandwidth only results in half the performance loss.
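A minimal Python sketch of that rule of thumb (this is an assumed heuristic read off the Anandtech overclocking data, not a measurement):

def est_perf_change(mem_clock_change_pct):
    # Assumed rule of thumb: an x% memory clock change gives roughly an x/2% performance change.
    return mem_clock_change_pct / 2

# Example: going from 7 Gbps to 8 Gbps GDDR5 is a ~14% bandwidth bump,
# which the rule predicts is worth roughly a 7% performance gain.
print(est_perf_change(14.3))  # ~7.1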

But this means that a lot of work segments are not memory-controller limited.

A GTX 980 Ti has 336 GB/s. A GDDR5 GP104 will have 256 GB/s. That's about 24% less bandwidth, but by the rule of thumb only ~12% less performance. If you now significantly increase the performance of the compute-bound work segments, you should be able to make up for that ~12% loss: going from 2816 cores at ~1.1 GHz to fewer cores at much higher clocks. According to that same overclocking table, an x% increase in core clock increases performance by more than x/2%.
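As a quick sanity check of that arithmetic, here's a small Python sketch using the same assumed x/2 rule of thumb:

gtx980ti_bw = 384 / 8 * 7.0   # 336 GB/s: 384-bit GDDR5 at 7 Gbps
gp104_bw = 256 / 8 * 8.0      # 256 GB/s: 256-bit GDDR5 at 8 Gbps

bw_deficit = 1 - gp104_bw / gtx980ti_bw   # ~0.24, i.e. ~24% less bandwidth
perf_deficit = bw_deficit / 2             # ~0.12 by the assumed x/2 rule of thumb
print(f"{bw_deficit:.0%} less bandwidth -> ~{perf_deficit:.0%} less performance")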

Add in some architectural improvements and there you have it: 980 Ti-level performance with old-school GDDR5 at lower cost, a perfect replacement for GM204. The GDDR5X version is the cherry on the cake to make a true new performance leader, with no competitor in sight.

Cue everybody complaining again about how Nvidia dares to ask high-end prices for a mid-range chip.
 
They showed different games with separate memory and GPU overclocking. Games got around a 0.5% increase per 1% frequency bump on memory, versus around a 0.7% increase per 1% GPU clock bump (more GPU-bound than memory-bound).
 
What workloads were benchmarked?
Check for yourself.

For a 22% core clock increase, they see a 13% performance increase, a scaling of 0.59.
For an 11% memory clock increase, they see a 5.2% performance increase, a scaling of 0.47.

So core is definitely a bigger limiter than memory.
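Reproducing those scaling factors is just simple division of the quoted gains; a short Python check:

core_scaling = 13 / 22    # ~0.59: perf gain per unit of core clock gain
mem_scaling = 5.2 / 11    # ~0.47: perf gain per unit of memory clock gain
print(f"core: {core_scaling:.2f}, memory: {mem_scaling:.2f}")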

Edit: bonus comment from Anandtech's GM200 review:
The performance gains from this overclock are a very consistent 16-19% across all 5 of our sample games at 4K, indicating that we're almost entirely GPU-bound as opposed to memory-bound.
 