NVIDIA Kepler speculation thread

As time goes by, it's going to become more and more unlikely for the companies to have similarly aligned launch dates for their products, unless one company wants to shorten or extend the development time for its products just to match the other company's launch window.

And considering how expensive development is, I don't see either company willingly delaying a launch unless its current product is so successful that it can purposely delay a chip (like Intel in the '80s and early '90s) and make more money.

Hence, I wouldn't be surprised if we started seeing major launches from Nvidia and AMD six months apart in the future, eventually converging again only to diverge again.

So I wouldn't take silence on either company's part as a sign that there is something significantly wrong.

Regards,
SB
 
I can't agree. GF110 was just a new revision of a released and well-known product. TSMC workers had several Fermi revisions in hand, so it was just another one for them, nothing special. That's the most likely reason nobody expected it earlier.
Most of the industry missed the G80 launch and was unaware of its specs up until a week or two before it hit the market. Was G80 a new revision of something?
The same goes for Kepler because, from the looks of it, most of the industry is unaware of almost all the Kepler chip tapeouts and as of now remains unaware of their specs and possible launch dates.
What Kyle heard may well relate to Kepler, but it doesn't mean that it (as the new architecture) will launch in June.

Edit: By the "industry" here I obviously mean Charlie, Fuad, Kyle and all the guys who write about GPUs, not the GPU industry itself.
 
Most of the industry missed the G80 launch and was unaware of its specs up until a week or two before it hit the market. Was G80 a new revision of something?
The same goes for Kepler because, from the looks of it, most of the industry is unaware of almost all the Kepler chip tapeouts and as of now remains unaware of their specs and possible launch dates.

Fair point and it does seem to be the case that Kepler is being kept well under wraps. That's no guarantee of an impending G80 like surprise though. They'll be talking about it for sure at GTC in May. Whether we'll see products on shelves before that is anybody's guess.

The early February rumor seems to be catching on. Let's see if anything comes of it.

Don't underestimate the importance of momentum, especially in a channel environment where you have partners with engineering resources.

Oh, being first has its advantages for sure but other things matter more in the long run. Also, AMD isn't known for driving home an advantage even with massive lead times and superior products. Hopefully that changes under the new leadership.
 
Pretty sure G80 leaks were abundant a month (and more) before the launch. The surprise of the G80 launch was its performance.
 
XDR2 rumors ran rampant too, still doesn't make them true.

XDR2 for Tahiti you mean? Haha yes, but some rumors are more plausible than others. I wouldn't put my head on a block for some of the nonsense going around now though. Also the February rumor is for an announcement, not availability.
 
Also the February rumor is for an announcement, not availability.
Ah didn't know that.. so it's still some time away then.

rpg.314 said:
What about it?
R400 was, after R300, supposed to be the first USC card.
But it was replaced by R420; the design made its debut as R500/Xenos in the Xbox 360 and finally as R600 on the desktop... after G80.
 
R400 was, after R300, supposed to be the first USC card.
But it was replaced by R420; the design made its debut as R500/Xenos in the Xbox 360 and finally as R600 on the desktop... after G80.

If Xenos was "R500" then what was R5xx desktop?

It was obviously way too early for a USC on the desktop; a console is a completely different environment, and Xenos isn't fully DX10 if I recall correctly. Your point was that R400's designers knew what was going to be in DX10, while chances are high the original design wasn't even DX10 as it ended up being, and ATI canned any USC architecture from the desktop before DX10/R600. What am I missing here?
 
If Xenos was "R500" then what was R5xx desktop?
R520 etc.

Your point was that R400 knew what was going to be in DX10, while chances are high the original design wasn't even DX10 as it ended up to be and ATI canned any USC architecture from the desktop before DX10/R600. What am I missing here?

That USC was an alternative many years before G80 shocked the industry by actually being USC.

What if Nvidia is skipping Kepler and going straight to Maxwell!!!!:oops::oops:
Because they can't thrive on Fermi till 2015?
 
I'm trying to track down where I read this, but I think one of the 7970 reviews mentioned murmurs from a green source about Tahiti's ROP count.
The efficiency changes aside, there was some commentary that the ROP featureset and peak numbers were slight evolutions of what had gone before.

The old performance slide from Nvidia used a composite with unknown weighting of FLOP, bandwidth, and ROP performance.
Memory bandwidth did not increase as much from the 280 to the 580 as the graph bars suggest. Depending on whether Nvidia counted the missing MUL, ALU throughput could be a debatable contributor.
ROP throughput did nearly double in that timeframe.
Perhaps Nvidia is going to improve there again?
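Since the slide's weighting is unknown, here is a minimal sketch of how such a composite score could be built. The weighted geometric mean, the weights, and the per-GPU ratios below are all illustrative assumptions, not figures from Nvidia's slide:

```python
# Hypothetical sketch: one way a composite "performance" score over FLOPs,
# memory bandwidth, and ROP throughput might be computed. Weights and the
# example numbers are made up for illustration.

def composite_score(flops, bandwidth, rop_rate, weights=(0.4, 0.3, 0.3)):
    """Weighted geometric mean of three throughput metrics.

    Each metric is expressed relative to a baseline GPU, so the
    baseline itself scores 1.0.
    """
    w_f, w_b, w_r = weights
    return (flops ** w_f) * (bandwidth ** w_b) * (rop_rate ** w_r)

# Example: a GPU with 2x the FLOPs, 1.5x the bandwidth, and 2x the ROP
# throughput of the baseline.
baseline = composite_score(1.0, 1.0, 1.0)   # 1.0 by construction
newer = composite_score(2.0, 1.5, 2.0)
print(round(newer / baseline, 2))            # -> 1.83
```

The point of the sketch is that the headline "x-times faster" bar depends heavily on the chosen weights: shifting weight toward ROP throughput (which nearly doubled) or away from bandwidth (which grew less) changes the composite noticeably.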
 
Maybe NVidia will go tile based deferred :p

How close is Fermi? Or, put another way, what would need to be changed in or added to Fermi to do this?
 
That USC was an alternative many years before G80 shocked the industry by actually being USC.
Even here, at the B3D forum, the peeps were rather surprised by the fact. There was a strong belief that the architecture would be some sort of extended G70 design with at least 32 pixel pipes and an untold number of vertex units. :p
 
No, the real surprise was its USC architecture. Nearly no one expected that.
I just did some googling and the correct specs leaked more than a month earlier. I'll admit that it was a well-kept secret, but not as much as it's made out to be.

GK104 Spoiler? Translated from here
AMD tells us that Pitcairn should reach higher frequencies more easily, since TSMC has gradually refined its manufacturing process to reduce leakage currents compared to what was found on the first batches of Tahiti GPUs. AMD also tells us that if Tahiti has a wide margin for overclocking, it is not by chance but due to the 250W thermal envelope, which limited the reference frequencies. We would therefore not be surprised to see AMD offer an evolution of the Radeon HD 7970 with higher frequencies in the next few months, taking advantage of production improvements to complicate life for Kepler! Finally, note that AMD indicates the overclocking margin of the Radeon HD 7950 will also be substantial, but it is still keeping its specifications confidential.
 
I think that GK104's availability should be more or less the same as Tahiti's. If Tahiti is in healthy supply soon, there is no reason for GK104 not to be as well on launch day.

He's not talking about whether it's a paper launch, he's talking about WHEN you can buy it.
 