Intel Broadwell for desktops

I dislike the K series because they actually lack some of the functionality of the non-K chips but are more expensive. Haswell K-series actually disables TSX :oops: that was a really dumb and pointless move IMO.
Yeah, no argument on any of that stuff obviously... and although in reality few consumers are probably going to run anything that makes use of TSX, I still agree it's pointless and petty to disable it. The chips that disable AES-NI are even more infuriating, as that is clearly useful to everyone.
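For anyone wondering what TSX actually gets you, here's a minimal sketch of the RTM side of it in C++ (assuming a TSX-capable CPU and a compiler that exposes the intrinsics in <immintrin.h>, e.g. GCC/Clang built with -mrtm; the function and variable names are just illustrative). It's the basic lock-elision pattern: try the update as a hardware transaction, and only fall back to a real lock if the transaction aborts.

```cpp
#include <immintrin.h>   // _xbegin / _xend / _xabort (RTM intrinsics), build with -mrtm
#include <atomic>

static std::atomic<bool> fallback_locked{false};  // illustrative fallback lock
static long shared_counter = 0;                   // illustrative shared data

void increment_counter() {
    unsigned status = _xbegin();                  // try a hardware transaction first
    if (status == _XBEGIN_STARTED) {
        if (fallback_locked.load(std::memory_order_relaxed))
            _xabort(0xff);                        // someone is on the fallback path: bail out
        ++shared_counter;                         // transactional update, no lock taken
        _xend();                                  // commit
        return;
    }
    // Transaction aborted (conflict, capacity, lock held, ...): take the real lock.
    while (fallback_locked.exchange(true, std::memory_order_acquire))
        ;                                         // crude spin, fine for a sketch
    ++shared_counter;
    fallback_locked.store(false, std::memory_order_release);
}
```

In the common uncontended case the update commits with no lock traffic at all; reading fallback_locked inside the transaction is what keeps the two paths from racing with each other.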
 
i.e. IVB-E/Xeon? There are some legitimate reasons why that stuff takes longer to design, but I agree in principle that there's less pressure to do it quickly with the lack of competition.

Look back to Nehalem and the high-end parts were first to market - the mainstream 45nm dual-cores were cancelled and only came along with Westmere based parts.

The difference is that the architectures are now designed primarily for lower-power devices, and so those take the lead - the high-end doesn't just lack competition, it lacks demand.

As far as enthusiast gamers go, if you don't care about TDP, have adequate cooling and so forth, and want that last extra boost in performance regardless of efficiency, that's what overclocking is for.

Let's face it, if a gamer has those criteria they'll be spending their money on an R9-290. ;)
 
Look back to Nehalem and the high-end parts were first to market - the mainstream 45nm dual-cores were cancelled and only came along with Westmere based parts.
Yeah, but that's not that different now. Just because they called the 965 "extreme" doesn't mean it had more cores or wasn't based on the same chip as the others. It's the equivalent of the 4770 today, except they charged a huge premium for a few hundred MHz at the time :) The Bloomfield stuff was "mainstream" enough in pricing.

Now for $500 you can get a Xeon derived chip that actually has more cores and a wider memory interface. I don't think the situation has really gotten any worse...

Let's face it, if a gamer has those criteria they'll be spending their money on an R9-290. ;)
Heh, sure, but the type of person who buys top-end parts typically doesn't skimp on any component :)
 
Based on all available information it looks like AMD is calling it quits with Vishera. Sad news. I was hoping for an updated FX series. With the new consoles being highly threaded and Mantle offering threading benefits on PC, AMD looked poised to make a comeback in gaming performance.

So Intel is under absolutely no pressure going forward to offer 8+ cores on the desktop.
 
So Intel is under absolutely no pressure going forward to offer 8+ cores on the desktop.
Like I said, the pressure on that front would not come from AMD anyways. If you want IHVs to pack in more cores for less money (because you already can buy the Xeon parts...), game developers and other applications need to start making better use of the ones they have.

The reality is that for almost everyone, even four cores go mostly unused, let alone hyper-threading. While my 6 core gives me bragging rights and occasionally gets a workout on compiling or some ISPC kernel I'm playing with, it pretty much never gets stressed by games and consumer applications. i.e. I don't think you can make a compelling argument that consumers really need more cores until software catches up.

I would like to see some better benchmarks of BF4 on 4, 6 and 8 cores though, as they claimed to have improved the multithreading due to PS4/Xbone. I haven't been CPU bound on my machine, so it's hard to tell.
 
Yeah, but that's not that different now. Just because they called the 965 "extreme" doesn't mean it had more cores or wasn't based on the same chip as the others. It's the equivalent of the 4770 today, except they charged a huge premium for a few hundred MHz at the time :) The Bloomfield stuff was "mainstream" enough in pricing.

Errrrm, Bloomfield was literally the same chip as the Xeon with some disabled QPI links. I will admit the pricing on the i7-920 was fairly competitive, but the platform cost was fairly high as X58 was an expensive chipset (and featured genuine 2 x PCIe x16 support, something mainstream Intel chipsets simply don't have) and you had triple-channel memory.

Even so my point still stands - Lynnfield (the i7-870 being the i7-4770 of its day) was the 'mainstream' quad-core platform and launched almost a year later (and never got a 32nm quad-core). The LGA1366 platform got a 32nm, 6-core refresh a few months later. The high-end performance desktop/workstation parts lead the mainstream parts, and ULV was barely even a consideration after the flop of the Penryn CULV line.
 
The high-end performance desktop/workstation parts lead the mainstream parts, and ULV was barely even a consideration after the flop of the Penryn CULV line.
Sure, I just disagree that that indicates the enthusiast market is any less served today than it was then. Maybe you/homerdog didn't mean that, but it seemed implied.

Arguably you can get to the top end EE chips for cheaper (~$500) than you could in those days. As far as the newer architectures being introduced on the lower power parts first, I think that's just pure physics... obviously the power benefits of bleeding edge process tech are going to be the most pronounced on lower power parts, whereas the deltas at the high end are less important (see SNB-E -> IVB-E... I doubt too many folks with SNB-E are feeling large pressure to upgrade right now).
 
As far as the newer architectures being introduced on the lower power parts first, I think that's just pure physics... obviously the power benefits of bleeding edge process tech are going to be the most pronounced on lower power parts, whereas the deltas at the high end are less important (see SNB-E -> IVB-E... I doubt too many folks with SNB-E are feeling large pressure to upgrade right now).

Well, again it comes down to the design decisions made - there could be more benefits realised in the 130-150W TDP range if the process design was biased towards gains in that region, but it isn't, as they view the 6.5-15W range as more important.

It's the negative of having microarchitectures and processes that span such a wide range of TDPs: you end up with a design optimised for one end of the spectrum, and at the other end of it you get some fairly uninspiring products. The 4960X IVB-E is a fantastic part if you're an accountant though.
 
Like I said, the pressure on that front would not come from AMD anyways. If you want IHVs to pack in more cores for less money (because you already can buy the Xeon parts...),

Don't they run at very low clock speeds though? I don't know how well they overclock but I assume they'd get nowhere near the speeds of the fastest quads and Ivy-E's.
 
Don't they run at very low clock speeds though? I don't know how well they overclock but I assume they'd get nowhere near the speeds of the fastest quads and Ivy-E's.

For desktop workloads I'd think the 8-core chip would be nice for IVB-E. The fastest 10-core chip would be worth investigating; that keeps you at a 3GHz base clock, but only a 3.6GHz max turbo...
http://ark.intel.com/products/75279/Intel-Xeon-Processor-E5-2690-v2-25M-Cache-3_00-GHz

In general terms you are right though. I run a SNB-E E5-2687W (8-core, 150W) in a desktop X79 motherboard and for the majority of tasks performance is 95-100% of a 3960X's (6-core, 130W), so equal or sometimes a little slower. You only see a benefit in very well threaded applications, and even then the best case is about 20%. I'd quite like to swap it for an E5-2687W v2 though.
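Back-of-the-envelope, that ~20% best case roughly lines up with the aggregate base clocks (assuming I have the listed specs right: 3.1GHz base for the E5-2687W, 3.3GHz for the 3960X):

8 cores x 3.1GHz ≈ 24.8 GHz-cores vs 6 cores x 3.3GHz ≈ 19.8 GHz-cores, i.e. roughly 25% more throughput on paper for a perfectly scaling workload, before turbo behaviour and memory bandwidth eat into it.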
 
Sure, I just disagree that that indicates the enthusiast market is any less served today than it was then. Maybe you/homerdog didn't mean that, but it seemed implied.

Arguably you can get to the top end EE chips for cheaper (~$500) than you could in those days. As far as the newer architectures being introduced on the lower power parts first, I think that's just pure physics... obviously the power benefits of bleeding edge process tech are going to be the most pronounced on lower power parts, whereas the deltas at the high end are less important (see SNB-E -> IVB-E... I doubt too many folks with SNB-E are feeling large pressure to upgrade right now).

It's not that the enthusiast market is less served, just that it's less of a focus for Intel than before.
 
Don't they run at very low clock speeds though? I don't know how well they overclock but I assume they'd get nowhere near the speeds of the fastest quads and Ivy-E's.
As Thorburn noted, there are options. That said, as you add more cores, you are going to reduce the clock frequencies, no two ways around it. You still have to be able to power and cool the chip :)

It's not that the enthusiast market is less served, just that it's less of a focus for Intel than before.
Perhaps, but like I said I doubt that enthusiasts even make much use of more than 4 cores beyond bragging rights (using myself as an example). What is really needed to drive this market is better software.
 
No doubt. Perhaps games will start using >4 cores in the near future with the new consoles being so reliant on threading for good performance. Of course the same could be said for the last gen consoles and we know how that worked out...

Anyway Mantle should help devs take advantage of more cores for rendering. Here's hoping D3D goes on to incorporate more of the functionality found in Mantle.
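The core of that is letting command generation itself spread across cores. A rough sketch of the pattern in plain C++ (std::thread only; CommandList, recordDrawsFor and submit are hypothetical stand-ins here, not the actual Mantle or D3D API):

```cpp
#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical stand-ins for the real API objects -- not actual Mantle/D3D types.
struct CommandList { std::vector<int> draws; };

void recordDrawsFor(CommandList& cl, int first, int last) {   // stub "recording"
    for (int obj = first; obj < last; ++obj) cl.draws.push_back(obj);
}

void submit(const std::vector<CommandList>& lists) {          // stub "submission"
    std::size_t total = 0;
    for (const auto& cl : lists) total += cl.draws.size();
    std::printf("submitted %zu draws from %zu lists\n", total, lists.size());
}

void buildFrame(int objectCount) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;

    // Each core records its own command list for a slice of the scene...
    for (unsigned i = 0; i < workers; ++i) {
        int first = static_cast<int>(static_cast<long long>(objectCount) * i / workers);
        int last  = static_cast<int>(static_cast<long long>(objectCount) * (i + 1) / workers);
        threads.emplace_back([&lists, i, first, last] { recordDrawsFor(lists[i], first, last); });
    }
    for (auto& t : threads) t.join();

    // ...and only the final submission is serialized on one thread.
    submit(lists);
}

int main() { buildFrame(10000); }
```

The point being that the per-draw CPU work, which is where most of the time goes in draw-call-heavy scenes, no longer has to funnel through a single thread; only the submit at the end does.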
 
Perhaps, but like I said I doubt that enthusiasts even make much use of more than 4 cores beyond bragging rights (using myself as an example). What is really needed to drive this market is better software.
Well, when the most affordable Intel chips are dual cores and quad cores, why would companies bother to make software for more cores?

Intel is the one who should be throwing more cores at consumers so software companies have a reason to improve their software.

If you build it, he will come.
 
It's no secret that multithreading is a classic case of chicken and egg problem; and like most chicken and egg problems, the solution must come from both sides at once, even in small doses. It is sort of happening: some applications scale well across many threads, some Intel quad-cores can handle 8 threads with HT, some AMD CPUs feature 8 cores, etc.

Still, a good and very easy step from Intel would be to enable HT in all (or at least most) SKUs. But that might hurt margins, and if there's one thing Intel hates, it's low margins.
 
It's no secret that multithreading is a classic case of chicken and egg problem; and like most chicken and egg problems, the solution must come from both sides at once, even in small doses. It is sort of happening: some applications scale well across many threads, some Intel quad-cores can handle 8 threads with HT, some AMD CPUs feature 8 cores, etc.
Actually, it's more a case of a viable chicken not producing an egg.

Parallel software drives parallelism. More cores don't; they just enable it.
 
Yeah, I have no sympathy for the "chicken and egg" problem at this point. It is well known how to write software that scales up to large core counts. There are no more excuses to be made in terms of the market... software is still just way too sequential.

Of course it's a hard problem - a very hard problem. There is a ton of legacy code that doesn't get re-written overnight. Fresh starts like what Oxide is doing probably have the best chance of succeeding. So far we're still pretty much in subsystem-parallel hell (audio on a thread, physics on a thread, etc) and unlikely to get out of it until people start ditching a lot of code, libraries and arguably, languages.

But yeah, there's no real debate at this point: software is the gating factor. As rpg.314 says, if there was much scalable software, you'd already have consumer CPUs with more cores. But for now using the power budget to run at silly frequencies (4-5GHz...) is still going to be better for the vast majority of users.

It's not really a chicken and egg problem per se. Software with sufficient expressed parallelism will not pay any penalty running on fewer cores. People just have to stop doing parallelism by "moving X into another thread" and stopping when they get adequate use of 2-4 cores...
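To make that last point concrete, here's a contrived sketch in plain C++11 (all the names are just illustrative): the subsystem-per-thread version tops out at however many subsystems you have, while the version that expresses the parallelism in the data uses whatever cores exist and pays no penalty when there are only two.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <numeric>
#include <thread>
#include <vector>

void updateAudio(std::vector<float>& a)   { for (auto& x : a) x *= 0.5f; }  // stand-in work
void updatePhysics(std::vector<float>& p) { for (auto& x : p) x += 1.0f; }  // stand-in work

// "Subsystem-parallel": audio on a thread, physics on a thread. Scales to two cores, no further.
void frameSubsystemStyle(std::vector<float>& audio, std::vector<float>& physics) {
    std::thread a(updateAudio, std::ref(audio));
    std::thread p(updatePhysics, std::ref(physics));
    a.join();
    p.join();
}

// Data-parallel: the same physics update expressed as independent slices,
// spread over however many hardware threads exist (2, 4, 8, ...).
void frameDataParallelStyle(std::vector<float>& physics) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (physics.size() + workers - 1) / workers;
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(physics.size(), begin + chunk);
        pool.emplace_back([&physics, begin, end] {
            for (std::size_t i = begin; i < end; ++i) physics[i] += 1.0f;
        });
    }
    for (auto& t : pool) t.join();
}

int main() {
    std::vector<float> audio(1 << 20, 1.0f), physics(1 << 20, 0.0f);
    frameSubsystemStyle(audio, physics);
    frameDataParallelStyle(physics);
    std::printf("checksum: %.1f\n", std::accumulate(physics.begin(), physics.end(), 0.0));
}
```

The second style is the kind of thing that would actually justify putting more cores in consumer parts; the first is why a 6- or 8-core mostly sits idle in games today.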
 
Yeah, I have no sympathy for the "chicken and egg" problem at this point. It is well known how to write software that scales up to large core counts. There are no more excuses to be made in terms of the market... software is still just way too sequential.

Of course it's a hard problem - a very hard problem. There is a ton of legacy code that doesn't get re-written overnight. Fresh starts like what Oxide is doing probably have the best chance of succeeding. So far we're still pretty much in subsystem-parallel hell (audio on a thread, physics on a thread, etc) and unlikely to get out of it until people start ditching a lot of code, libraries and arguably, languages.

But yeah, there's no real debate at this point: software is the gating factor. As rpg.314 says, if there was much scalable software, you'd already have consumer CPUs with more cores. But for now using the power budget to run at silly frequencies (4-5GHz...) is still going to be better for the vast majority of users.

It's not really a chicken and egg problem per se. Software with sufficient expressed parallelism will not pay any penalty running on fewer cores. People just have to stop doing parallelism by "moving X into another thread" and stopping when they get adequate use of 2-4 cores...

If you exclude audio/video/image editing and games, there is pretty much no parallelism out there in consumer apps. All of those run very well on GPUs, which are getting more flexible and more integrated with CPUs all the time. So I don't see any near term trigger for more cores at all.

Widespread depth cameras, courtesy of PrimeSense (aka Apple now), might change that. IIRC, Kinect-style posture extraction is compute intensive and parallelizes well.
 