Haswell vs Kaveri

Any ideas yet as to what makes Bolton special? Does it have a Thunder-Bolton chip?
 
Bolton will have support for 8x SATA 6 Gbit/s, four USB 3.0 and fourteen USB 2.0 connectors. The D4 version also provides RAID 0, 1, 5 and 10, as well as direct communication with the CPU via a PCI-Express x4 link. The specified TDP is just 7.8 watts. The Bolton D4 chipset is scheduled for release in the first quarter of next year. AMD states that manufacturers are required to use Bolton D3 or Bolton D4 in 2013 to make motherboards compatible with future Richland processors as well.

Which could indicate that Richland is more than just Trinity plus a new chipset.
http://uk.hardware.info/news/29051/rumours-on-amds-bolton-d4-chipset
 
Could Near Threshold Voltage be used in Haswell?

It's nice conceptual babble, but this thing can only be used in processors where performance doesn't matter and extremely long battery life is highly appreciated... so it's for Atom. It's not for a performance-oriented desktop CPU, and I'm seriously guesstimating that even ultrabooks based on a dual-core Haswell wouldn't benefit from it.
 
Transistors doped to work at near threshold aren't going to behave the same when jacked up to turbo levels, and the circuits tuned to handle problems at NTV may not be needed, or could pose an impediment to reaching normal-voltage clocks.

NTV is a "beneficial inherited" side effect of Intel's 22nm silicon process, ahem, its tri-gate implementation. It's not the doping itself, or it would have been presented much earlier.

They're just exploring what could benefit now that someone has finally implemented tri-gate, and it's not as beneficial for frequency scaling as it is for squeezing transistor count per mm².
 
Nothing a huge pipeline can't fix
LOL... right, you're on the mockup-of-Pentium-4 path :rolleyes: But the concept was good... In those days, when no one really knew or cared about performance per watt, you could sell any crap and convince people they were buying great stuff with "Intel inside". All they'd need now is a 1500W PSU :rofl:

NTV could be part of Intel's answer to big.LITTLE
Viewing it purely in general terms it could be, but conceptually it's a totally different space. BTW, ARM is going a different way with its "legacy" or little-core support, while Intel just keeps gluing wider instructions onto a few-decades-old concept. From that perspective big.LITTLE will never be possible on the x86 architecture. It might be feasible only if they redesign x86 from scratch for a new x86-128 incarnation... if that ever comes to life.

It depends. The advantage PD-SOI (what GF uses) gives goes down with every shrink, and it might not give that much anymore.
Oooh, you're right. Except that SOI was introduced for lower leakage. The only reasoning behind ditching SOI is reducing manufacturing cost.

It's highly doubtful that they're going to behave the same way for the FX series, or they have intel that none of their server partners would use their CPUs produced on 28nm. Even if GloFo's 32nm SOI process was botched so badly because of the HKMG gate-first implementation, there's no way they could now easily afford to spend die area (20%) just to redesign their lineup for 28nm.

In fact 28nm then won't bring any shrink benefits at all... and Theo is again reiterating some stupid marketing nonsense. The only real reason is that GloFo might have totally dumped 28nm HKMG SOI until 22nm appears... which is bAAAAAd.
 
AMD gave that (8 CUs with GCN architecture) away themselves, in a footnote of a slide from their financial analyst day.
[slide: kaveri_slidetualp.png, from the full presentation]

AMD at least wished for it, but since GCN executes much better than VLIW4 when it comes to graphics, they could stick with the same number of cores as the VLIW4 setup used in Trinity and still gain 20% at the same GPU frequency.
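Where that ~20% could plausibly come from: VLIW4 relies on the shader compiler to fill four issue slots per clock and usually falls short, while GCN's SIMDs keep their lanes busy on their own. A back-of-envelope sketch (the 3.4 filled slots per clock is a rough community estimate, not an AMD figure):

```python
# Back-of-envelope utilization math; 3.4 filled slots/clock is an estimate.
vliw4_slots = 4     # issue slots per VLIW4 "core"
avg_filled = 3.4    # slots the shader compiler typically manages to fill

utilization = avg_filled / vliw4_slots
print(f"typical VLIW4 utilization: {utilization:.0%}")                 # ~85%
print(f"headroom if GCN keeps its lanes full: {1 / utilization - 1:.0%}")  # ~18%
```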

Their GPU in the APU already shares the 128-bit DDR3 IMC with the dual-module CPU, which restrains its graphics performance. Thus big jumps in core count (e.g. 8 CUs vs 6 CUs) aren't something they desperately need, especially given the lack of PD-SOI availability for the 28nm node while the Trinity die is already 220mm². Dumping SOI and going to 28nm pretty much means that if Trinity were shrunk to 28nm it would stay around the same die size, as they'd need to widen the gaps between transistors for a bulk process.

They could gain some additional graphics performance just by adopting DDR3-2133 support, summing to a total of 25% over the 6-CU GPU used in Trinity.
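For scale, the raw bandwidth step from that memory bump on the shared 128-bit IMC (a quick sketch; taking DDR3-1866 as Trinity's baseline is my assumption):

```python
# Peak DDR3 bandwidth on a 128-bit (dual-channel) IMC; speeds are illustrative.
def ddr3_bandwidth_gbs(mt_per_s, bus_bits=128):
    return mt_per_s * 1e6 * (bus_bits / 8) / 1e9  # transfers/s * bytes/transfer

trinity = ddr3_bandwidth_gbs(1866)
kaveri = ddr3_bandwidth_gbs(2133)
print(f"DDR3-1866: {trinity:.1f} GB/s")                # ~29.9 GB/s
print(f"DDR3-2133: {kaveri:.1f} GB/s")                 # ~34.1 GB/s
print(f"extra bandwidth: {kaveri / trinity - 1:.0%}")  # ~14%
```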

The only reason to go for +2 CUs is obviously just bragging rights on total FLOPS numbers, as this is the cheapest way to get them, and those FLOPS are still plain old 32-bit FLOPS (which the slides carefully avoid noting anywhere).
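To see why +2 CUs is the cheap headline-FLOPS lever (a sketch; 64 lanes per CU and FMA at 2 ops/clock are GCN basics, but the 800 MHz clock is a placeholder, not a leaked spec):

```python
# Illustrative peak single-precision FLOPS for a GCN iGPU.
def peak_gflops(cus, clock_mhz, lanes_per_cu=64, flops_per_lane=2):
    # lanes * FMA (2 FLOPs) * clock, scaled to GFLOPS
    return cus * lanes_per_cu * flops_per_lane * clock_mhz / 1000.0

print(f"6 CUs: {peak_gflops(6, 800):.0f} GFLOPS")  # 614 GFLOPS
print(f"8 CUs: {peak_gflops(8, 800):.0f} GFLOPS")  # 819 GFLOPS, ~33% more on paper
```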

As an additional 2 CUs would merely require ~15mm², they could easily implement it, but there's no way they should use the 4 RBEs ("16 ROPs") we saw in Cape Verde Pro, as that would at least require an additional 64-bit GDDR5 IMC (a.k.a. some version of SidePort) to properly employ them.

Also, I am skeptical of the 16 ROPs. To put it frankly, it won't be able to keep them fed -- might as well drop to 8.
I'm there with you if they don't implement another GDDR5 IMC, which would require additional area (~20mm²), plus 2 more RBEs (at least 15mm²). IMO it's just too much to expect the chip to grow by 50mm² at the same time as they're dumping SOI and seeing no real shrink gain from 32nm.
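Just adding up the estimates from the posts above to make the ~50mm² explicit (all rough figures from this thread, not official numbers):

```python
# Summing the thread's own area estimates for the beefed-up hypothetical iGPU
extra_cus = 15    # mm^2, two additional GCN CUs
gddr5_imc = 20    # mm^2, an extra 64-bit GDDR5 IMC (SidePort-style)
extra_rbes = 15   # mm^2, two more RBEs ("8 ROPs")
print(f"total die growth: ~{extra_cus + gddr5_imc + extra_rbes} mm^2")  # ~50
```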

I don't know if this comes from improvements in memory controllers, better Z-buffering, greater emphasis on less bandwidth-consuming operations during game development, or something else, but the truth is that general performance-per-GB/s in graphics cards has been steadily rising.

Mostly it comes from better compression algorithms, which can now be implemented cheaply because there is so much raw CU throughput lying around (marketing FLOPS) that would otherwise be wasted and starved sitting on poor DDR3 memory bandwidth shared with the CPU.
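The trade-off in miniature: a toy lossless delta scheme (purely illustrative, nothing like any GPU's actual hardware compression) that swaps a little spare compute for smaller memory traffic:

```python
# Toy delta compression of a scanline: spend ALU work to shrink what hits memory.
def delta_encode(pixels):
    # neighbouring pixels correlate strongly, so deltas are small and pack well
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

scanline = [100, 101, 101, 103, 104, 104]
assert delta_decode(delta_encode(scanline)) == scanline  # lossless round trip
```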

Kaveri means "buddy" in Finnish. Just so you know.

Tnx. It's a nice name. But what does Kabini mean then? And aren't those Indian codenames?
 
NTV is a "beneficial inherited" side effect of Intel's 22nm silicon process, ahem, its tri-gate implementation. It's not the doping itself, or it would have been presented much earlier.
Intel's 22nm process was helpful in that the new physical structure and advanced manufacturing methods allowed for additional knobs for designers to tweak and a gate that was much better at shutting off at low voltages.
Research into logic and memory in that voltage region predates Intel's 22nm process, especially since the NTV Pentium is at 32nm.


Oooh, you're right. Except that SOI was introduced for lower leakage. The only reasoning behind ditching SOI is reducing manufacturing cost.
There were other reasons, like the fact that SOI insulated gates from the bulk silicon below and removing that capacitance allowed for a modest increase in clock speed for a given amount of engineering effort.
It seems telling that Intel, who had tons of volume and scads of engineers, said the incremental increase in cost per die was worse than initially more expensive physical design efforts.
Companies with smaller volumes or fewer engineers--IBM and AMD--opted for SOI.
SOI may have further complicated AMD's transition to smaller geometries. SOI and the switch to 300mm wafers may have combined to hurt AMD's move to the 65nm node, with trouble from variability over the larger wafer and SOI potentially making the devices more sensitive to such variation. One or the other has been pointed at as a reason why AMD had poorer SRAM density at that node than Intel.

PD-SOI seems to be adding more headaches as gates become smaller and its benefits over bulk slip, while 3D gates or FD-SOI move past PD-SOI's collection of problems from bulk and SOI.
More recently, there are concerns that the physical fragility introduced by SOI is limiting the amount of strain that can be applied to transistors sitting above the oxide layer.
 
Finding the punchline right now isn't worth fifty dollars to me.
I suppose we'll find out some time in the future.

It seems quite likely that interposer memory would command a premium due to increased manufacturing costs and uncertain yields. Competitive pressure from AMD isn't quite what it has been, and ARM-based products don't quite reside in the segment this would be sold in.
Intel can make just as much money with much less effort and risk.
 
I saw somewhere that Haswell was postponed to H2 2013, and we're still going to see an Ivy Bridge revision.

I'm looking for the link...
 
Depending on the price of the i7-4770K, I might purchase that late next year and ditch my Bulldozer. Should be a nice increase in performance and a drastic drop in power consumption.
 
I wonder why you can't have an unlocked processor that also has all the checkboxes filled in, or an even more expensive K chip that does.
 
Maybe because the people who purchase the unlocked versions aren't really those who need VT-d and TXT.
Maybe Intel cannot certify the functionality of these features (TXT mostly, right?) in an overclockable CPU.
 
Screwing customers into choosing between this or that. Sounds evil, but that's how top management usually works. It's sh*t.
All businesses, big and small, try to optimize the amount of money they can extract from the valuables they've created. You're not entitled to it. Each incremental feature needs to be designed, simulated, produced, tested, etc. There's a cost associated with each one of them. It'd be a disservice to the company owners (me, probably you too) to give things of value away for free.
 
So you are basically claiming that customers, customer satisfaction, and customers' good opinion of the big corporation are of little or no value?

You see that we here as customers don't agree with this policy.

And perhaps, there are some companies out there which would like to have both feature sets available.
 