I'll put money on them shipping A3 in quantity.
By "quantity" I thought you were going to reveal how much you were willing to wager!
Hopefully NV will make the smart move and price it the same, making Cypress obsolete, at least until AMD lowers prices. I think forcing them to do so earns a lot of mind share, though.
That certainly seems quite plausible. So basically you'd agree that the other guy's claim, that most server farms typically rely on budget single-socket desktop components, is unlikely?
Still, even lower-TDP, higher-efficiency components and the like don't do much to change the likelihood that any Nehalem-based solution will still be a factor or so behind a Fermi-based Tesla setup in computational density and power efficiency.
Yeah, those A2 Teslas were ~550 MHz, so maybe up to 600 MHz is doable.
It's interesting looking at the clocks on the various Nvidia 40nm parts (in rough release order):
GT218 589/1402
GT216 625/1360
GT215 550/1340
GT215(OC) 585/1420
GF100 A1(Hotboy) 495/1100
GF100 A2(Various) ???/1200-1350
GF100 A3(Nvidia) ???/1250-1400
They seem very clustered around 1300-1400 MHz for shaders (excluding GF100 A1, obviously). Assuming they keep the same core/shader ratio as GF100 A1, the core clock is looking like around 550-630 maybe?
Contrast with the G9x generation, which covered 1375-1836 MHz.
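For what it's worth, here's that ratio math spelled out as a quick back-of-the-envelope Python sketch; the 1250-1400 MHz shader range is just the rumoured A2/A3 figures from the list above, nothing confirmed:

```python
# Scale the GF100 A1 core/shader ratio up to the rumoured A2/A3 shader clocks.
# Purely illustrative; all inputs are the leaked/rumoured numbers listed above.
a1_core, a1_shader = 495, 1100        # GF100 A1 (Hotboy), MHz
ratio = a1_core / a1_shader           # = 0.45

for shader in (1250, 1400):           # rumoured A2/A3 shader range, MHz
    print(f"shader {shader} MHz -> core ~{shader * ratio:.0f} MHz")
# shader 1250 MHz -> core ~562 MHz
# shader 1400 MHz -> core ~630 MHz
```

Which lines up with the 550-630 guess above, assuming they don't change the ratio.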
No, not at all. I know who he is, and I believe every word he says in this and several other contexts. I also know what several data centers use, and they almost all have 1S boxes now.
-Charlie
PEAK != reality.
I don't remember if this caveat was mentioned previously, but in at least one specific case it is possible for a metal spin to increase clock speed. If a small number of paths are a significant bottleneck, they might be fixable; sometimes all it takes is some added buffering or increased drive strength along the offending path. For yield, speed or power, an A4 won't do much. (I'm a bit on a mission to kill the idea in people's brains that metal spins have a lot of impact on those.) A B spin is something else entirely, of course, but that would be a much longer term solution.
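A toy illustration of that point, with completely made-up path delays: a metal spin only moves the clock if a handful of paths dominate the critical timing.

```python
# Fmax is set by the slowest path, so fixing a couple of outlier paths
# (extra buffering / stronger drivers) only helps when those few paths are
# the bottleneck. Delays are invented numbers, purely for illustration.
path_delays_ns = [1.45, 1.47, 1.50, 1.52, 1.82, 1.85]   # last two are outliers

def fmax_mhz(delays_ns):
    return 1000.0 / max(delays_ns)

print(f"before metal fix: {fmax_mhz(path_delays_ns):.0f} MHz")   # ~541 MHz

# Pretend the metal spin shaves 0.3 ns off the two offending paths.
fixed = [d - 0.3 if d >= 1.8 else d for d in path_delays_ns]
print(f"after metal fix : {fmax_mhz(fixed):.0f} MHz")            # ~645 MHz

# If thousands of paths were already sitting near 1.5 ns, the same fix
# would buy almost nothing -- which is why an A4 is unlikely to help much.
```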
I guess we just have different experiences. Well, I'll readily admit that I'm not familiar with trends in the US, but of the 8 or 9 data centers I've visited in Europe and Asia over the last 2 years, pretty much all of them are on the sweet spot of 2 socket blades, down from 4 socket rack units, and I don't know anyone who is seriously using i7s and Phenoms in place of Xeons and Opterons.
How is this any different on any other architecture?
That's the exact world I live in, and I haven't seen much different of late. I call BS on 1-socket installs in DCs; it makes no sense. If you guys realised the cost per square metre of a datacenter, you'd quickly see density is king. There are so many fronts on which it doesn't make sense.
It's inefficient in terms of power consumption, both from the increased inefficiencies of AC-to-DC conversion in 1P servers vs blades and from the redundant parts within each server.
It's very costly in terms of localized cooling. For example, I would need one APC half-rack AC unit for 3 blade enclosures in a 40 RU rack; I would need 3-4 for 14.4 racks, plus much bigger water pumps.
Given that a 2-socket blade with 64 GB of RAM and 2 Istanbul CPUs is around 8-10K (AUD), single-socket servers don't make sense. Sure, there is hard disk to consider, but for most things in this space local disk doesn't have the performance anyway.
PS: I'm a network "engineer" in a DC.
Edit: lol, I completely forgot about networking costs. With 1-socket servers you're pretty much forced to go top-of-rack switching; with blades you can go with something like a Catalyst 6500 or Nexus 7000 and just run fibre back to a couple of central points. Assuming a redundant network design, it is significantly cheaper, as well as far more scalable, to centralise your network access layer.
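To put some rough numbers on the density/networking argument, here's a hedged back-of-the-envelope sketch; the enclosure height, blades per enclosure, NIC counts and rack size are all assumptions for illustration, not any particular vendor's specs:

```python
# Rough rack-level comparison: 2-socket blades vs 1 RU single-socket servers.
# Every figure here is an assumption for illustration, not a vendor spec.
RACK_RU = 40

# Blade option: assume a 10 RU enclosure holding 16 two-socket blades,
# uplinked over a couple of fibre pairs back to a central switch.
enclosures_per_rack = RACK_RU // 10                 # 4 enclosures
blade_sockets = enclosures_per_rack * 16 * 2        # 128 sockets per rack
blade_uplinks = enclosures_per_rack * 2             # 8 redundant uplinks per rack

# 1U option: assume 1 RU single-socket boxes with 2 NICs each,
# which effectively forces top-of-rack switching.
pizza_sockets = RACK_RU * 1                         # 40 sockets per rack
pizza_access_ports = RACK_RU * 2                    # 80 access ports per rack

print(f"blades: {blade_sockets} sockets, {blade_uplinks} uplink ports per rack")
print(f"1P 1RU: {pizza_sockets} sockets, {pizza_access_ports} access ports per rack")
```

Obviously you can shift every one of those assumptions and get different ratios. (The blades still burn switch ports inside the enclosure, of course; the difference is what has to run back to the central access layer.)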
A3 will have to ship, but will it ship as a GeForce?
Did they tape-out any smaller GF10X parts yet?
Basically, yes. If they can't get the clocks up after 3 tries for metal layer tweaks, a fourth probably won't produce miracles. It could, but I doubt it will in practice.
I am betting on a serious revamp of the architecture before a 28nm part; either that, or NV will just suck it down in the benchmarks while proclaiming a crushing lead in some really odd benchmark that they do well in.
-Charlie
1P servers currently have a ~50% density advantage. It takes only minutes to check the websites of the various vendors that play in this space.
Yes, but not as general deployment, and that's hardly worth noting; there is also a lot more integration with blade enclosures.
Blades are just marketing. People were selling racks with shared PSUs long before blades ever became a catch phrase.
The problem is you are thinking in terms of racks. The people doing this at scale are thinking about what a cargo container requires. And they are doing their best to do it almost entirely on ambient air. The data centers of the Now and Future look a lot more like shipping warehouses than anything high tech.
So, a very small percentage of data centre workloads. Most VMs in our environment (we have 1000s) are pulling well under a gig. MS/Google/Amazon/etc are pushing in the range of <500 per socket.
And failing. Let's see: we took voice, we are taking storage; I don't see routers/switches/load balancers/IPS/WDM etc going away any time soon.
Ahh, the people all the other people are trying to make redundant.
Side of rack, actually. Here's a shocker: making everything a blade doesn't decrease the number of switch ports that are required.
Because it's a lot easier to get an application running close to peak on a homogeneous system with 20+ years of tools infrastructure than it is to get it running close to peak on a heterogeneous system with <2 years of tools infrastructure.
You also still stand by your January launch?
I might get a board in January, yeah.
You getting a card just teases us. That's not the same as a launch!