I Can Hazwell?

Well, Intel never made claims about the entire device using "20x less power" -- and again, how the hell do you multiply a value by a whole number greater than one and end up with a smaller number? I hate that shit. Anyway, you're of course exactly right -- the biggest power draw will be the screen, exacerbated by the fact that it's a touchscreen.

The Atom Z2760 Clover Trail tablet I was talking about also uses a touchscreen, and is a Windows 8 device.

1.4W at screen-on idle. No other x86 platform can match that level of power use. Platform power itself probably did decrease by an order of magnitude compared to previous Atom platforms.
 
Any news about Haswell and the chipsets supporting it? My Q6600 is getting old in games and video apps like Handbrake :(
 
Haswell is on a release schedule for June 2013. So, you have to wait till then to see what's going on...

But I am in almost the same situation as you, with a Q9450 overclocked a little bit, capable of running Crysis 3 decently with everything at Medium, and with some slight/negligible issues when half of the settings are at High.

So, I would skip Haswell and go for an upgrade maybe in 2-3 years' time, also waiting to replace my DDR3-1333 with at least DDR4-3000. :D

*The 3770K officially supports only DDR3-1600, so I am still waiting for a decent upgrade. :D
 
I'm personally waiting for Hybrid Memory Cube.

http://www.cadence.com/Community/bl...ry-cube-will-revolutionize-system-memory.aspx

Might be a high end option in 2-3 years.
 
I thought it would be (at least) the chip AFTER Broadwell... DDR4 will come to Intel server chips first, starting next year at the earliest, IIRC.
 
Yeah, we're not getting DDR4 until 2014 or 15. Even then, it'll be incredibly expensive compared to its predecessor until probably 2017 if the pattern for DDR2 and DDR3 holds...
 
http://www.tomshardware.com/reviews/core-i7-4770k-haswell-performance,3461.html

[benchmark chart image]

:runaway:
 
It does look really nice in that case. Optimized to the max for it no doubt.

Indeed, but the Mandelbrot set is super synthetic; I don't think there are many real-world problems that are similar. Its kernel is mostly multiplications, and since each pixel is independent it's easy to parallelize. Note that Haswell improves FP throughput not just by adding FMA but by being able to simultaneously execute FMUL + FMUL instead of just FMUL + FADD like its predecessors.
 
Isn't Ivy optimized too? 78% faster at the same clocks is nothing to scoff at. That's pretty impressive.

We've known for a long time that Haswell would have double the integer SIMD width and two 256-bit FMA units that are both capable of FMUL (and one of FADD), as opposed to one 256-bit FADD and one 256-bit FMUL unit. We should be very concerned if a simple test couldn't be derived to demonstrate the addition of these units.

Mandelbrot is extremely parallel (easy to populate all the SIMD lanes and easy to hide the latency of the operations) and has a fairly high ALU-to-memory ratio. It probably doesn't even need the improved L1 load/store bandwidth to take advantage of the increased ALU throughput.

That doesn't mean plenty of useful software won't see a big benefit from the improvements, but it will rarely be anywhere close to this dramatic.
 
I'd be surprised if most games used anything but SSE2 or 3. Maybe if we're lucky, SSSE3.

Code using up to AVX (maybe just AVX128) is going to be useful and will probably be utilized on consoles. The question is how much work it is to dynamically dispatch between this and weaker fallbacks. Probably not that much; code size isn't a huge deal.
 
Actually, a question here: how useful is AVX128 vs SSE4.2?

The main feature is its three-operand, non-destructive instruction form. I don't have a great idea how much it improves performance, but consider this: one of Ivy Bridge's most significant changes is its ability to eliminate moves in the register renamer, a feature to work around the lack of a three-operand form. But those moves still take up bandwidth on the front end (typically from the uop cache), so their impact is not totally removed. Sandy Bridge also had fairly wide decode and lots of execution units lying around that could do moves. That isn't the case on Jaguar, where decode is one of the narrower parts of the design and wasting a slot on a move will have a bigger impact. So I expect AVX128 on Jaguar to be more useful than move elimination was on IB. Yet the latter still had a measurable impact a lot of the time (I doubt the other change, somewhat more dynamic buffer allocation for single-threaded scenarios, contributed as much).

So I think it'll probably at least be worth using, if compilers can take advantage of it well.
 
I don't know what to make of it regarding the relevance of AVX, but Intel disables it on all Pentium and Celeron processors because, hum!, because they feel like it.
Hopefully they'll enable it on Haswell Celerons and Pentiums and go on disabling the other features instead (Clear Video, Quick Sync, TXT, etc.)

Game developers and middleware developers can probably ship both SSE2/3 and AVX binaries as feature levels (as Exophase says). Around 1997 we had Pod and Pod MMX (and worse, there were various hard-wired versions for a few proto-GPUs).
Since the console CPUs have AVX, it seems a given that most major game engines and gaming middleware will use it.
 
Intel's also disabling TSX on the 4770K model.

They can't even keep the new instruction sets consistent across the top and mid-range models anymore. What the fuck, man. Just what the fuck.
 