I just wonder if Mobile might become the new PC, in terms of pushing graphics incrementally in between console generations, and what if any effect that could have on console cycles.
Not very impressed by Dead Space: the lighting is pretty static and the characters don't even have shadows. This seems to be a rather annoying trend in even the highest-end iOS/Android games at the moment.
Hmmm, I guess you're right. The graphics are good for a mobile device but really lame in comparison with current-gen consoles.
On paper, mobile parts sound really promising. ARM's Mali dual-core GPU is supposed to take up only 5.5mm² at 45nm. How many of those could you fit into the silicon of the original RSX? On the other hand, the dual-CPU, dual-GPU A5 in the iPad 2 is 122mm² in size. At most you could quadruple the total silicon budget to around 500mm². Would four times the power of the iPad 2 match even the current gen?
Well, it's not just quadrupling the silicon. The iPad is a portable device, so power consumption and heat are concerns there; in a console they could probably raise the clocks quite a bit too, because... they can.
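To put rough numbers on the silicon-budget question above, here is a back-of-the-envelope sketch (the RSX die area of ~258mm² at 90nm is an approximate public estimate; the other figures are the ones quoted in this thread):

```python
# Back-of-the-envelope die-area comparison (all figures approximate).
MALI_DUAL_CORE_MM2 = 5.5    # ARM Mali dual-core GPU at 45nm (claimed above)
RSX_MM2 = 258.0             # original RSX at 90nm (rough public estimate)
A5_MM2 = 122.0              # Apple A5 (dual CPU + dual GPU) at 45nm

# How many Mali dual-core blocks would fit in the original RSX's area?
mali_per_rsx = RSX_MM2 / MALI_DUAL_CORE_MM2
print(f"~{mali_per_rsx:.0f} Mali dual-core blocks per RSX-sized die")  # ~47

# Quadrupling the A5's silicon budget, as suggested above:
console_budget_mm2 = 4 * A5_MM2
print(f"4x A5 silicon budget: ~{console_budget_mm2:.0f} mm^2")  # 488
```

Of course, die area scales nothing like linearly into delivered performance once clocks, memory bandwidth and TDP enter the picture, which is the point of the reply above.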
(Do developers really need a "very powerful CPU" doing everything plus a low-end GPU, or an extremely efficient, "reasonably powerful GPU for a five-year cycle" plus an extremely developer-friendly mid-range CPU with enough power?... Yeah... we know the answer...)
Well, yes, we know the answer.
Anyway, I kind of agree with you about the optimal set-up from a performance POV.
I wonder about opening a spin-off thread, possibly with a poll, to discuss members' expectations of how the next systems will compare to contemporary PCs.
Myself, I have next to no hope of the next systems matching a high-end PC rig (even leaving SLI/CrossFire out of the picture). We also already know what to expect through engines like Frostbite 2.
There will be improvements, but it gives a pretty clear picture of what we can expect.
The only way I see consoles matching PCs, and staying consistent with them in the long run, is to take a completely different approach and pass on "close to off-the-shelf" parts. The sad part is that it would cost a lot in R&D to pull off, only to suffer early in the product's life. My belief is that achieving this needs more fixed-function hardware, power-efficient vector processing units, and some mid/high-performance CPU cores.
I also believe it's better to have all of this on a single chip; if more power is needed, one would simply use two of the same part. It would look like an even more heterogeneous Cell.
A "plain" many-core design (throughput cores, flat memory space, texture units) is unlikely; the cost in silicon, and possibly power, may still be prohibitive. The software would take a long time to mature, and it would most likely end up at a performance deficit in multi-platform games (most likely running at lower resolution). That would be my "geeky" choice, but something more Cell-like would be less of a jump in programming model, would offer better performance per watt and per mm², etc. Overall, though less geeky, it gets me thinking again about Andrew Richards' statement about the Cell: "it's unfailed". I believe the concept could really shine in a next-gen system, but it needs changes, and nobody seems willing to fund the R&D.
Anyway, there's no hope. Sony has stated they won't spend much on R&D, we already know that N went with a "super 360", and I can't see MS doing anything other than a "super² 360".
For Sony, I expect a super² 360 plus a Cell relic somewhere on the die for BC's sake.
I also expect a "lock-down" on hardware differences: especially now, with sites like DF around, no manufacturer will take the risk of running big AAA games at lower resolution, etc.
We know the answer... I was arrogant enough to borrow the view of a developer friend (MDK2, among others) who complained that architectures like the Emotion Engine and the Cell ended up weak for their intended purposes...
And it really would be interesting to have a discussion topic about the hardware required for the next generation, but in the end perhaps all of us will be disappointed with what we are shown for the next 5/6 years (is it mostly 5 or 6 years?).
I also dream of next-gen consoles achieving the same big gap over high-end PCs, but with GPU and CPU wattage rising and manufacturing processes slow to get below 40/45nm (32/28nm is not enough), we may not see the same jump we saw at the X360's launch with the revolutionary R500/C1 unified shaders... unless the next consoles tolerate a higher TDP than the 200/250 watts of the X360 and PS3 at their launch dates. And that might not happen, because we have seen consoles "burn" in proportions never seen before, and Sony and MS are seeking more "Wii-like" consoles without major heat-dissipation needs (and spending $$).
Frostbite 2 would be wonderful with hardware in the TFLOPS range (2+ TFLOPS even with SLI, and 4 TFLOPS on one die with Maxwell...), but unfortunately I've let go of my daring gamer's wish for a Maxwell... I would already be satisfied with a "revolutionary" PowerVR Series 6 Rogue, with an ARM CPU to make life easier for developers, and some "SPU way of life" providing flexibility and new options for developers engaged in the pursuit of new horizons... and backwards compatibility.
And well observed... "close to off-the-shelf parts"... Perhaps the only chance for next-gen consoles to face high-end PCs, in my opinion, is for Sony or MS to follow the ray-tracing route (Caustic on Rogue/IMG, anyone?) as proposed with Intel's Larrabee. But unfortunately we won't see that rendering universe; as you well said, it would require spending more, and Sony and MS probably won't go that way. So we will hardly see advances like the Emotion Engine's 6.2 GFLOPS already in 1999/2000 (launch date March 2000), plus the GS's 1.2/2.4 Gpixel, able to make the PS2 superior to the PCs of its time (1999 tech: GeForce 256 SDR at 480 megapixels); or the X360's revolutionary GPU; or the PS3's Cell CPU above everything that existed (in single-precision CPU GFLOPS terms); or, in 1998, the PowerVR2 DC exceeding the Voodoo 2 and TNTs.
We really have to be content with "mid-range" custom CPU/GPU hardware ("low" to "very low" range if compared across a 5- or 6-year console cycle...) with low-TDP components. But at least we will likely see quite "mature" software using much of the overall potential; we may be seeing a return to seeking evolution on the software side... and maybe also a period of shorter generation "refreshes" (less than 5/6 years).
There is no hope for gamers like us (I have watched this gaming world for over 30 years), lovers of "new hardware inventions" or even revolutions (or at least "great evolutions")... of 10 to 15 times more power than the previous generation...
Returning to my "2 cents"... I particularly hope N's Café achieves something like 480 GFLOPS* (scalar), the next Xbox "720 GFLOPS" (scalar), and I'm not counting on VLIW4 adoption (Krishna, or even something Radeon 6850 + ARM-like); and, as a "miracle" (yeah, I'm lying, I'm still dreaming...), Sony with a "mega-super-ultra" PowerVR Series 6 Rogue (crossing fingers here!) at 1.6 TFLOPS, a TBDR able to show results like a GeForce GTX 580 or even more (2.5 times the efficiency)... excuse me for this dream...
* Despite the focus on GFLOPS (the efficiency of those FLOPS is actually more important), today companies like AMD/ATI, Nvidia and others return to this "metric" to assess hardware capability.
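Since the post above trades in GFLOPS figures, it may help to show how those theoretical peaks are usually derived (a sketch; the Xenos numbers used here are the commonly cited 48 ALUs × 10 FLOPs/clock × 500 MHz, purely for illustration):

```python
def peak_gflops(alus: int, flops_per_alu_per_clock: int, clock_hz: float) -> float:
    """Theoretical peak = ALU count x FLOPs issued per ALU per clock x clock rate."""
    return alus * flops_per_alu_per_clock * clock_hz / 1e9

# Xenos (X360): 48 unified ALUs, each a vec4+scalar MAD (10 FLOPs/clock), at 500 MHz
print(peak_gflops(48, 10, 500e6))  # 240.0 GFLOPS
```

As the footnote itself notes, this is a marketing-friendly upper bound; sustained throughput depends on how well the scheduler keeps those ALUs fed.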
Keeping execution units working productively is what makes an architecture effective.
That's all about the subtleties like scheduling and latency handling. Focusing on FLOPS or one big number outside the context of the full picture is just missing the point.
There's a reason why certain CPU and GPU architectures keep on enduring and evolving while others fade out.
I think some geeks just can't control themselves and have hard-on for FLOPS talk....kinda like those people who have hard-on for SSD storage and 100 core 6GHz processor...not happy until wet dream spec reach epic alternate reality proportions...tri-winning.
I'm not expecting to be "disappointed"; Frostbite 2 on a high-end PC looks outstanding. It's the first game leveraging this tech; artists will learn to do better, the tech will improve further, etc.
It's more that this time around the discussion about hardware may be moot: we might end up with very few differences between the hardware and design choices of the 3 manufacturers. I liked console hardware as a place for hardware "experiments". It seems that increasing R&D costs and the viability of off-the-shelf parts are about to kill that trend. As a gamer I don't care much, but from a geeky/personal-interest perspective it's different.
I hope for a healthy jump in power, but on the other hand I don't want a bulky system; the 360 S should be the maximum launch size for me. Looking at MS's positioning on services in general (Bing, streamed content, etc.), I feel they may pass on a noisy system too. I expect their system to look classier, more like a high-end CE device, and to be marketed a lot less as a gaming device.
Frostbite 2 would do well with less than that. Actually, I'm not sure FLOPS counts will continue to skyrocket in the GPU realm; manufacturers have hit the power wall (badly), and my bet is that the trend will be more about what you can do with those FLOPS vs. the theoretical peak.
I'm not sure about PowerVR; they are doing neat GPUs, that's for sure, but that's not really what I was advocating for.
I'm not sure I get what you mean by "ARM makes developers' lives easier + SPU way of life". From what I gather, ARM cores (as in the NGP) make developers' lives easier vs., say, a PS3, as any SMP design would. Nothing more, nothing less.
Honestly, and while I'm interested in Larrabee-like designs, I believe that ray tracing is doomed; it's not even the standard for offline renderers. Ray casting could be a great thing if id is successful with its "mega geometry" project, but it would be used to generate pixels, not for lighting.
Intel insists on showing ugly ray-casting demos, but the heart of their software effort for Larrabee was not there: Larrabee was intended to work as a tile-based renderer.
That's where I go back to my idea (which is not really a Cell + PowerVR), even if it won't happen, as the R&D on both hardware and software might prove prohibitive.
First, as a disclaimer: for a while I have been convinced that a big single chip is the way to go. The problem is that packing mid/high-performance CPU cores and GPUs won't get you that much power. IMHO, delivering something in the same range as Llano won't make it (at least for MS and Sony).
We're in need of more power-efficient parts (or two-chip systems, bigger systems, warmer systems, more expensive systems, etc.).
* SPUs qualify: they delivered ~25 GFLOPS while consuming around a watt. I think such hardware, power-efficient vector processors, may have a place in a customized design.
* I'm sad that we don't get more info about the PICA tech used in the 3DS, especially regarding performance per watt and per mm² (well designed and implemented on silicon, it should be quite efficient on both counts).
* An ARM Cortex-A15 (and above) clocked accordingly could be nice, but IBM can do better, IMHO. I read a paper about the POWER A2; they achieved something that looks pretty impressive (not a match for a console, but it shows IBM's engineering strength).
The idea would be to push further what DICE is doing in Frostbite 2: do as much as possible of the vertex and pixel shading on the vector processors, all backed up by a super-specialized (and efficient) fragment processor. My belief is that it could beat an APU chip using the same power (watts).
In a dream world I would want to see a Larrabee done right: developers mixing rasterization and ray casting to generate fragments, as well as other techniques depending on what they want to achieve. So "complete" freedom (not really; perf concerns will always be there) for the devs and middleware/engine vendors, who may end up pushing techs with significant differences in what they achieve.
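To make the performance-per-watt point in the bullets above concrete, a toy comparison (the SPE figure is the ~25 GFLOPS at ~1 W claim from this thread; the desktop GPU numbers are rough illustrative values, not measurements):

```python
def gflops_per_watt(gflops: float, watts: float) -> float:
    """Simple efficiency metric: sustained or peak GFLOPS divided by power draw."""
    return gflops / watts

# Figures from the discussion above (treated as claims, not measurements):
spe = gflops_per_watt(25.6, 1.0)       # one Cell SPE, as claimed in this thread
# A hypothetical high-end desktop GPU for contrast: ~2700 GFLOPS peak at ~250 W
gpu = gflops_per_watt(2700.0, 250.0)

print(f"SPE: ~{spe:.1f} GFLOPS/W vs desktop GPU: ~{gpu:.1f} GFLOPS/W")
```

The comparison is unfair in both directions (peak vs. sustained, core vs. whole board), but it illustrates why the poster sees power-efficient vector units as attractive for a fixed console TDP.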
That's a bit too enthusiastic. There can't be miracles; one should wait and see before assuming PowerVR will successfully compete with the big guys (Nvidia and AMD). So far PowerVR provides low-power, low-throughput cores that are orders of magnitude slower than NV/AMD parts. The power characteristics are impressive, as is the overall design, but I would bet their compute density is far, far away from AMD's, for example. They have room to scale, that's for sure, but it's not trivial.
It's the same as x86 vs. ARM. People happily believe that beating Intel will be a breeze, something that for me remains to be proved (actually, I believe analysts completely underestimate Intel, and at the same time the challenge in front of ARM's engineers).
Well, that's plainly mean and arrogant... and that's not what we're talking about, even though Heinrich went into some FLOPS madness, but it's assumed madness.
FLOPS in isolation are nothing, but within GPUs of the same family they give a rough idea of the power one has at hand per pixel/vertex/etc.
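The "power at hand per pixel" intuition above can be sketched numerically (illustrative numbers only; the 240 GFLOPS figure is just an example in the current-gen range):

```python
def flops_per_pixel_per_frame(gflops: float, width: int, height: int, fps: int) -> float:
    """Theoretical FLOPs available per pixel per frame at a given resolution and framerate."""
    pixels_per_second = width * height * fps
    return gflops * 1e9 / pixels_per_second

# Example: a 240 GFLOPS GPU rendering 720p at 30 fps
budget = flops_per_pixel_per_frame(240.0, 1280, 720, 30)
print(f"~{budget:.0f} FLOPs per pixel per frame")
```

Within one GPU family this kind of budget comparison is roughly meaningful; across architectures it says little, which is exactly the "FLOPS in isolation are nothing" caveat above.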
So the Wii U demonstration is over. Any enhanced predictions on innards?
Also, ideas on what may be powering the controller/2nd display/tablet? They said it does image processing independent of the console.
I have to say, for all the hype that it was on par with the current consoles graphically, the demo reel didn't really show it.
For what it's worth: IBM tells us that within the Wii U there's a 45nm custom chip with "a lot" of embedded DRAM. It's a silicon on insulator design and packs the same processor technology found in Watson. Clock speed not given. Nor GPU.
Really interesting. I don't know why IBM or Nintendo would hide who the GPU provider is. It got me wondering about the odds of Nintendo putting a super PICA in this thing, or an improved T&L-class GPU, woot?
I am hoping it is a decent ATI GPU, as many think; you don't have to go overboard with power to get a very nice GPU with good features. As long as it's powerful enough, I'm happy.
I fully agree with your assertion, but mature software can perhaps extract the maximum potential of this hardware and keep the execution units constantly "busy" (in FLOPS terms too). That's probably why we still see sites like top500.org, Tom's Hardware, AnandTech, TechReport, and institutions like Stanford, among others, relying on these kinds of reference tools (ShaderToyMark and others) that can measure sustained processing (execution-unit efficiency).