That's the exact world I live in, and I haven't seen much different of late. I call BS on 1-socket installs in DCs; it makes no sense. If you realised the cost per square metre of a datacenter, you'd quickly see that density is king. It falls down on so many fronts.
It's inefficient in terms of power consumption, both from the poorer AC-to-DC conversion in 1P servers compared to blades and from the redundant parts duplicated inside every individual server.
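Rough back-of-envelope on the conversion point below; every number in it (server count, wattage, PSU efficiencies) is a placeholder assumption, not a vendor spec, but it shows where the waste goes:

```python
# Sketch of the AC->DC argument: a rack of 1-socket pizza boxes carries
# dozens of small redundant PSUs converting at a lower efficiency than the
# handful of big shared supplies in a blade chassis. All figures below are
# assumptions for illustration only.

def wall_draw_kw(dc_load_w, psu_efficiency):
    """kW pulled from the wall to deliver dc_load_w watts of DC to the boards."""
    return dc_load_w / psu_efficiency / 1000.0

SERVERS = 96               # assumed: 1-socket 1U boxes for the same core count
DC_LOAD_PER_SERVER = 250   # watts per box, assumed
EFF_SMALL_PSU = 0.85       # assumed: small redundant PSUs loafing below rated load
EFF_BLADE_PSU = 0.92       # assumed: shared chassis PSUs near their efficiency sweet spot

total_dc = SERVERS * DC_LOAD_PER_SERVER
pizza_boxes = wall_draw_kw(total_dc, EFF_SMALL_PSU)
blades = wall_draw_kw(total_dc, EFF_BLADE_PSU)

print(f"1-socket servers: {pizza_boxes:.1f} kW at the wall")
print(f"blade enclosures: {blades:.1f} kW at the wall")
print(f"lost to conversion alone: {pizza_boxes - blades:.1f} kW")
```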
It's very low density. I can fit 576 Opteron cores in a single 40 RU rack right now, versus something like 14.4 racks to do the same with 1-socket servers.
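If you want to plug your own numbers into that, here's the rack arithmetic as a sketch. The blade line is loosely modelled on a 10U enclosure with 16 two-socket six-core blades; the 1-socket line is just a placeholder config, so swap in whatever box you're actually pricing and the rack count scales accordingly:

```python
# Rack-unit arithmetic for the density point. Configs below are assumptions,
# not exact product specs; the point is the shape of the maths.
import math

RACK_RU = 40
TARGET_CORES = 576

def racks_needed(cores_per_unit, ru_per_unit, target=TARGET_CORES, rack_ru=RACK_RU):
    """How many units, rack units and racks it takes to hit the core target."""
    units = math.ceil(target / cores_per_unit)
    total_ru = units * ru_per_unit
    return units, total_ru, total_ru / rack_ru

# assumed blade config: 16 blades x 2 sockets x 6 cores = 192 cores per 10U enclosure
print("blades   : %d enclosures, %d RU, %.2f racks" % racks_needed(192, 10))
# placeholder 1-socket config: one six-core CPU per 1U box
print("1-socket : %d boxes, %d RU, %.2f racks" % racks_needed(6, 1))
```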
It's very costly in terms of localised cooling. For example, I'd need one APC half-rack AC unit for 3 blade enclosures in a 40 RU rack, versus 3-4 of them for 14.4 racks, plus much bigger water pumps.
Given that a 2-socket blade with 64 GB of RAM and 2 Istanbul CPUs is around 8-10K (AUD), single-socket servers don't make sense. Sure, there's hard disk to consider, but for most things in this space local disk doesn't have the performance anyway.
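Cost per core from that blade pricing, with a made-up 1-socket quote as a placeholder so you can see how the comparison works (replace the assumed values with a real quote):

```python
# Cost-per-core / cost-per-GB from the blade pricing quoted above
# (2 x six-core Istanbul, 64 GB RAM, ~AUD 8-10K per blade).

BLADE_PRICES_AUD = (8000, 10000)   # quoted range
BLADE_CORES = 2 * 6                # 2 sockets x 6 cores
BLADE_RAM_GB = 64

for price in BLADE_PRICES_AUD:
    print(f"blade at AUD {price}: "
          f"{price / BLADE_CORES:.0f}/core, {price / BLADE_RAM_GB:.0f}/GB RAM")

# Hypothetical 1-socket box -- these three numbers are invented placeholders
ONESOCKET_PRICE_AUD = 4000
ONESOCKET_CORES = 6
ONESOCKET_RAM_GB = 32
print(f"1-socket at AUD {ONESOCKET_PRICE_AUD} (assumed): "
      f"{ONESOCKET_PRICE_AUD / ONESOCKET_CORES:.0f}/core, "
      f"{ONESOCKET_PRICE_AUD / ONESOCKET_RAM_GB:.0f}/GB RAM")
```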
PS: I'm a network "engineer" in a DC.
Edit: lol, I completely forgot about networking costs. With 1-socket servers you're pretty much forced into top-of-rack switching; with blades you can go with something like a Catalyst 6500 or Nexus 7000 and just run fibre back to a couple of central points. Assuming a redundant network design, centralising your access layer is significantly cheaper as well as far more scalable.
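Rough device count for that, with the rack and uplink figures below as assumptions (they'll shift with your oversubscription targets, but the shape holds):

```python
# Access-layer device count: redundant top-of-rack for the pizza-box build
# vs a central redundant chassis pair fed by fibre from each blade enclosure.
# Rack counts and uplink counts are assumptions for illustration.

RACKS_OF_1S_SERVERS = 15     # roughly the 14.4-rack footprint above, rounded up
TOR_PER_RACK = 2             # redundant ToR pair per rack

ENCLOSURES = 3               # blade enclosures for the same core count
UPLINKS_PER_ENCLOSURE = 4    # assumed: 2 fibre uplinks to each central chassis
CENTRAL_CHASSIS = 2          # redundant 6500/Nexus-class pair

tor_switches = RACKS_OF_1S_SERVERS * TOR_PER_RACK
fibre_runs = ENCLOSURES * UPLINKS_PER_ENCLOSURE

print(f"top-of-rack build : {tor_switches} access switches to buy, power and manage")
print(f"centralised build : {CENTRAL_CHASSIS} chassis + {fibre_runs} fibre runs")
```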