Predict: The Next Generation Console Tech

Status
Not open for further replies.
To be honest, common sense tells us we can ignore the claim of a dual-core A9 having half the performance of an X5450. It simply isn't possible given the size, power, and architecture differences. For this to be true, it would need more performance per clock per core than Sandy Bridge at about 1/100th the power draw. Clearly a ludicrous expectation.



The A8 is obviously slower than the A9. But we are comparing core-for-core, clock-for-clock performance, not overall performance with the A9 using more cores at a higher clock speed.

That said, even if we take the A15 to be twice as fast as the A8 core for core and clock for clock, a quad-core version is still only going to be 1/3 to 1/2 the performance of an X5450. An octo-core version, assuming perfect core scaling, would land between 2/3 of and equal to an X5450's performance. But that's with every factor leaning in the A15's favour.
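A quick sanity check of those ratios (all numbers are the thread's assumptions, not measurements: A15 ≈ 2× A8 per core per clock, an A8 core at 1/6 to 1/4 of a Xeon core, perfect core scaling):

```python
# Rough sanity check of the scaling claims above. All ratios are the
# thread's assumptions, not measured figures.

XEON_CORES = 4  # the X5450 is a quad-core part

def arm_vs_xeon(a8_core_ratio, a15_cores):
    """Relative throughput of an A15 cluster vs. a quad X5450,
    assuming A15 ~= 2x A8 per core per clock and perfect scaling."""
    a15_core_ratio = 2 * a8_core_ratio
    return (a15_core_ratio * a15_cores) / XEON_CORES

# If an A8 core is 1/6 to 1/4 of a Xeon core (clock for clock):
for r in (1 / 6, 1 / 4):
    quad = arm_vs_xeon(r, 4)
    octo = arm_vs_xeon(r, 8)
    print(f"A8 ratio {r:.3f}: quad A15 = {quad:.2f}x, octo A15 = {octo:.2f}x")
```

Which reproduces the 1/3 to 1/2 (quad) and 2/3 to equal (octo) figures quoted above.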

The 2.8x claim though is totally unrealistic.

Correct me if I'm wrong, but we almost agree here?

As I said here* (sorry, I wasn't clear enough), if an eight-core Cortex-A15 reaches 50% of a QX9770** in console workloads (not general-purpose applications), that's an excellent result at roughly 1/15th the TDP.

*
" Maybe you're right and now we're almost in the realm of speculation, these links* indicate that the A15 is higher than the A8, but still below core per core than the X86 can offer(if customised for console aplications reach 50% Core 2 Extreme QX9770 its an excelent result), but who knows with customizations and without limitation watts of portable and I said earlier, an A15 can provide considerable performance and very low consumption."

** http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core2+Extreme+X9770+@+3.20GHz

Xeon 5450 (about 90% of the QX9770's performance in these numbers)
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5450+@+3.00GHz



You're absolutely right. Forget all the benchmarks (CoreMark, DMIPS, etc.) and the performance estimates of the A9 and A15 vs. the Xeon X5450; they are probably just marketing numbers. But perhaps as more hardware comes to market (with the respective reviews and benchmarks), with the A9 (already impressive in first-generation PS Vita games) and the A15 (the Nova A9600, OMAP5, and others) reaching mainstream, we'll be favourably surprised by their performance.
 
Last edited by a moderator:
I'm still expecting a tailor-made custom Kepler- or Maxwell-generation GPU in the PS4. That alone will make PS4 game graphics so much better than those of the PS3. It really doesn't matter as much on the CPU side of things. Sure, it would be nice to get a 32-64 SPU chip in there, but a 16 SPU chip combined with a massively more powerful GPU will still ensure PS4 graphics leap over the 360/PS3.


Hopefully you're right and it will be amazing. My two cents: perhaps the Kepler* to be released at the end of the year (already taped out), in some kind of mobile version, is more likely to come to consoles in 2013; but if next-gen consoles arrive in 2014, a "mobile Maxwell" (2/3 the performance of its PC counterpart) may be possible for a closed-box console.

But after the problematic, less-than-positive experiences with MS (intense negotiations over the cost of the NV2A) and Sony (the weak, disappointing RSX; and for the PS Vita they did not go with Tegra...), would a manufacturer still want to go with NVIDIA?

*http://hothardware.com/News/NVIDIA-Exposes-GPU-Roadmap-Kepler-Arrives-2011-Maxwell-in-2013/

On the CPU side, after so much criticism from third parties about its complexity, I would not bet on the Cell architecture being adopted again (maybe only for BC).
 
I'm still expecting a tailor-made custom Kepler- or Maxwell-generation GPU in the PS4. That alone will make PS4 game graphics so much better than those of the PS3. It really doesn't matter as much on the CPU side of things. Sure, it would be nice to get a 32-64 SPU chip in there, but a 16 SPU chip combined with a massively more powerful GPU will still ensure PS4 graphics leap over the 360/PS3.

The SCE Spain/Portugal CEO (also a big shot at SCEE, I think?) doesn't share your enthusiasm, though: he said he doesn't believe the graphics will be much better than PS3, if at all.

Also there's the rumor that AMD scored deals for all 3 next gen consoles.
 
The SCE Spain/Portugal CEO (also a big shot at SCEE, I think?) doesn't share your enthusiasm, though: he said he doesn't believe the graphics will be much better than PS3, if at all.

Also there's the rumor that AMD scored deals for all 3 next gen consoles.


The next Xbox most likely, yes*, and the Wii successor is already announced, but Sony too?


In this case, maybe a Radeon HD 6850/6970M (both Barts-based) or a 5870 (benchmarks at roughly 2/3 the power of a GeForce GTX 580), with some tweaks (better tessellation), are perhaps the best options for performance at a low TDP/wattage.**


* http://www.neogaf.com/forum/showpost.php?p=26403963&postcount=464


** from here: http://www.techpowerup.com/forums/showthread.php?t=148717

(image: perfwatt.gif — performance-per-watt chart)
 
The next Xbox most likely, yes*, and the Wii successor is already announced, but Sony too?

Well, it's just a rumor of course but
http://www.hardocp.com/article/2011/07/07/e3_rumors_on_next_generation_console_hardware/

The Big GPU News




What looks to be a "done deal" at this point is that AMD will be the GPU choice on all three next generation consoles. Yes, all the big guns in the console world, Nintendo, Microsoft, and Sony, are looking very much to be part of Team AMD for GPU. That is correct, NVIDIA, "NO SOUP FOR YOU!" But NVIDIA already knew this, now you do too.

There are going to be game spaces that NVIDIA does succeed in beyond add in cards and that will likely be in the handheld device realm but we do not see much NVIDIA green under our TV sets. NVIDIA was planning to have very much underwritten its GPU business with Tegra and Tegra 2 revenues by now, but that is moving much slower than the upper brass at NVIDIA wishes. Tegra 2 penetration has been sluggish to say the least.

AMD has always been easier to work with than NVIDIA on the console front. Well that may not be exactly true, but Microsoft did not spend months in arbitration with NVIDIA over Xbox 1 GPU and MCP costs back in 2002 and 2003. I always felt as though that bridge was burned.
 
The SCE Spain/Portugal CEO (also a big shot at SCEE, I think?) doesn't share your enthusiasm, though: he said he doesn't believe the graphics will be much better than PS3, if at all.
That sounds impossible. Any current GPU (let alone a 2012-2013 design) would be a huge improvement over RSX. Probably just PR speak.
 
That sounds impossible. Any current GPU (let alone a 2012-2013 design) would be a huge improvement over RSX. Probably just PR speak.

I read it more as "it will not be our focus, any decent stuff will be good enough in the gfx department"...
 
The SCE Spain/Portugal CEO (also a big shot at SCEE, I think?) doesn't share your enthusiasm, though: he said he doesn't believe the graphics will be much better than PS3, if at all.

Also there's the rumor that AMD scored deals for all 3 next gen consoles.

I don't believe the rumor. I believe NVIDIA will be in one of the next-gen consoles, most likely the PS4.
I also don't put much stock in what the SCE Spain/Portugal CEO says; he's probably not in the loop like the CEOs in Japan and the U.S. are.
 
Everyone needs to read 3 books as I have: Opening The Xbox, The Xbox 360 Uncloaked and The Race For a New Game Machine (360/PS3). They are all really worth it.
 
I read it more as: a random PR person looks at the latest Epic demo and at BF3 running on consoles and says, eh, there's not much of a difference.
 
Everyone needs to read 3 books as I have: Opening The Xbox, The Xbox 360 Uncloaked and The Race For a New Game Machine (360/PS3). They are all really worth it.

Sorry for the off-topic.

So is it true that the Xbox was created in 14 months (in terms of final hardware specs, PCB, taped-out chips)?


If something similar happened today, even with double the time, we could still hope for a really powerful next-generation console (on par with, or at least half as powerful as, a high-end PC) for 2013.

However, I believe it is not possible in the current context. Unfortunately, I don't see MS and Sony spending as much as before, because of the "Wii effect" and, though it may not belong to this sphere, pressure from shareholders/the stock market and the current world economic situation (the earthquake in Japan taking years to recover from, huge debts in the U.S. and the Euro zone, etc.), which demands caution given a probable shrinking of the consumer market.
 
Sorry to return to this theme, but I found this view interesting, taken from here:

http://timothylottes.blogspot.com/2011/06/fxaa3-console-status-update-2.html

Question:
" Александр said...
"Algorithms which run well on the current console-level performance envelope will at some point be great for future mobile." I understand you. But how about next gen?

And if you can: I have read one of your posts and saw there that the PS3 GPU is only 132 GFLOPS. Is it true? I assumed that it is around 200. Please explain."

Answer:
" Timothy Lottes said...
Note the raw numbers on that old post are simply grabbed from web speculation to make a generalization about perf/pixel of various devices. I would not read into the exact numbers, and I'm not going to correct the numbers either :)

As for next gen, next gen is always available on PC. Simply drop resolution to increase performance per pixel. Instant prototype for whatever next generation you want to target."

(Praying and crossing my fingers here that developers do not fall back to 720p for the next generation...)
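Lottes' resolution trick is easy to quantify: the per-pixel GPU budget scales inversely with the pixel count, so dropping from 1080p to 720p "prototypes" a GPU 2.25x faster per pixel:

```python
# Per-pixel GPU budget scales inversely with pixel count, which is why
# dropping resolution prototypes a faster GPU, per Lottes' suggestion.

def pixels(width, height):
    return width * height

budget_multiplier = pixels(1920, 1080) / pixels(1280, 720)
print(f"Rendering at 720p instead of 1080p gives {budget_multiplier:.2f}x "
      "the per-pixel budget")  # 2.25x
```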
 
Nah, he means more to simulate hardware that isn't available yet.

It's unclear whether next-gen consoles will actually be more powerful than today's top PC GPUs, such that you'd need this method anyway. Probably getting close. However, you could also use his technique to simulate a faster PC GPU.

I am reasonably certain that next-gen consoles will be 1080p standard. Not entirely certain, though. If we got into a case where a "weaker" competitor started simulating the stronger machines' 1080p graphics at 720p, I'm not sure what would happen; people might accept it as "good enough", forcing the stronger machines to drop to 720p to demonstrate an edge again.
 
Correct me if I'm wrong, but we almost agree here?

As I said here* (sorry, I wasn't clear enough), if an eight-core Cortex-A15 reaches 50% of a QX9770** in console workloads (not general-purpose applications), that's an excellent result at roughly 1/15th the TDP.

I still have my doubts that it will be anywhere near that level of performance, at least outside of some very specific benchmarks (presumably the Cortex cores are pretty well optimised for Java, which was the basis of the initial comparison to the x86 Barton).

However, at this point they are the only numbers to go off, so there's not much point in speculating further. Best-case scenario: 2/3 to equal performance.

What's the TDP of the two chips, btw? In terms of performance per watt, it's probably less valid to compare against an X5450 and better to compare against a lower-clocked Sandy Bridge, or possibly even Ivy Bridge given the timing. What would be the TDP of, say, a low-power variant of Ivy Bridge at 2.4GHz (estimated performance parity with a 3GHz Penryn)?

Probably still a lot higher than the octo A15, but the gap would be nowhere near as large.
 
Kind of old news, but you guys think MS looking to buy part of Nvidia would play a part in their next gen plans?

It was recently discovered in an SEC filing by Information Week that Microsoft has the rights to match and beat any deals by any company to buy 30% or more of NVIDIA’s shares: “Under the agreement, if an individual or corporation makes an offer to purchase shares equal to or greater than 30% of the outstanding shares of our common stock, Microsoft may have first and last rights of refusal to purchase the stock.”
 
I still have my doubts that it will be anywhere near that level of performance, at least outside of some very specific benchmarks (presumably the Cortex cores are pretty well optimised for Java, which was the basis of the initial comparison to the x86 Barton).

However, at this point they are the only numbers to go off, so there's not much point in speculating further. Best-case scenario: 2/3 to equal performance.

What's the TDP of the two chips, btw? In terms of performance per watt, it's probably less valid to compare against an X5450 and better to compare against a lower-clocked Sandy Bridge, or possibly even Ivy Bridge given the timing. What would be the TDP of, say, a low-power variant of Ivy Bridge at 2.4GHz (estimated performance parity with a 3GHz Penryn)?

Probably still a lot higher than the octo A15, but the gap would be nowhere near as large.

I agree.


And indeed, against the 5450's 120 watts the ARMs (still in the milliwatt universe) would have an overwhelming performance-per-watt advantage, so the correct comparison is with Sandy Bridge. If I'm not mistaken, according to ARM and handset manufacturers' information, a dual-core A15 has a TDP of 1.9 watts* at 2.5GHz on 28nm (OMAP and Nova A9600), so perhaps eight cores reach something like 7-8 watts.

Sandy Bridge would probably still be at a disadvantage on wattage against an octo A15, but Ivy Bridge may make things closer if it comes at 22nm and underclocked (though the A15 is going to 22nm too...).
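Putting the thread's own figures together (assumptions, not measurements: an octo A15 reaching 50% of a QX9770 at ~8 W; the 5450 at 120 W being ~90% of a QX9770), the perf-per-watt gap looks roughly like this:

```python
# Perf-per-watt comparison using the thread's own assumed figures:
# octo A15 = 50% of a QX9770 at ~8 W; X5450 = ~90% of a QX9770 at 120 W.

qx9770 = 1.0  # normalized throughput baseline

x5450_perf, x5450_tdp = 0.9 * qx9770, 120.0
a15_perf, a15_tdp = 0.5 * qx9770, 8.0

x5450_ppw = x5450_perf / x5450_tdp
a15_ppw = a15_perf / a15_tdp

print(f"X5450:    {x5450_ppw:.4f} perf/W")
print(f"octo A15: {a15_ppw:.4f} perf/W ({a15_ppw / x5450_ppw:.0f}x better)")
```

On these assumptions the ARM cluster comes out roughly 8x ahead on performance per watt, even at half the absolute throughput.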

About A15
* http://mobile.arm.com/markets/home/gaming.php

http://wmpoweruser.com/tag/arm/

Interesting article here:

http://www.xbitlabs.com/news/cpu/di...22nm_Tri_Gate_Process_Technology_Company.html

Core 2 quad QX9770 -> 136 watts
http://ark.intel.com/Product.aspx?id=34444

Sandy Bridge;

http://ark.intel.com/ProductCollection.aspx?codeName=29900

http://www.tomshardware.com/forum/101900-8-sandy-bridge-lowered-price-free-shipping-extras

17 watts!
http://www.tomshardware.com/news/sandy-bridge-ulv-macbook-air,12980.html

http://ark.intel.com/Product.aspx?id=34444

Ivybridge
http://www.tomshardware.com/news/ivy-bridge-sandy-launch-processors,12828.html

Last edit:

A great article by Arun about handhelds, from here:

http://www.beyond3d.com/content/articles/111/4
 
Intel has spent countless transistors on the last ten percent of single-thread performance in their architecture. Gains in single-core IPC and clock improvements have been getting smaller and smaller for a long time now. If I remember correctly, Intel's high-end CPU design used to follow the rule that if you can improve performance by 1% by increasing power use by 2%, it's worth it. Improving IPC beyond what Intel has done is really hard, and has sharply diminishing returns.

If ARM is aiming for single-thread performance (IPC and clock) similar to Intel's, they need to make lots of power-usage sacrifices. Getting halfway there (50% of Intel's single-thread performance) might be pretty straightforward, but after that it's going to get hard really fast. However, if they are happy with lower single-thread performance and achieve parity through core-count scaling, they might have a winner in high-end computing as well (consoles, servers, etc.). Current ARM cores use little memory bandwidth, so theoretically we could put more of them on a single die than high-IPC cores and be OK with the same memory bandwidth (similar memory bus and memory type). So we could have more ARM cores sharing a fast memory bus... but I don't have any clue how well ARM cache-coherency protocols scale as the core count rises :)
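The bandwidth-sharing point above can be illustrated with a toy model: aggregate throughput of N cores on one shared bus, capped once their combined bandwidth demand saturates it (all figures below are made up for illustration, not real measurements):

```python
# Toy model: N cores sharing one memory bus. Throughput scales linearly
# until combined bandwidth demand exceeds the bus, then stalls proportionally.
# All figures are illustrative, not real measurements.

def throughput(cores, perf_per_core, bw_per_core, bus_bw):
    demand = cores * bw_per_core
    scale = min(1.0, bus_bw / demand)  # stall factor once the bus saturates
    return cores * perf_per_core * scale

BUS_BW = 25.6  # GB/s shared bus (illustrative figure)

# Low-bandwidth ARM-style cores vs. hungrier high-IPC cores on the same bus:
for n in (4, 8, 16, 32):
    arm = throughput(n, perf_per_core=1.0, bw_per_core=0.8, bus_bw=BUS_BW)
    big = throughput(n, perf_per_core=3.0, bw_per_core=3.2, bus_bw=BUS_BW)
    print(f"{n:2d} cores: ARM-style {arm:5.1f}, high-IPC {big:5.1f}")
```

In this sketch the high-IPC cores hit the bandwidth wall at 8 cores and flatline, while the frugal cores keep scaling to 32, which is the core-count-scaling argument in the post (cache-coherency overhead, as noted, is the open question the model ignores).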
 