Predict: The Next Generation Console Tech

But if they put it off forever they'll be launching only moments before Nintendo's next-next generation console!!

Is it not fair to say that a console released in 2013 could simply be a more expensive version of what may be produced in 2014? If they're willing to take it on the chin a little, we could still have some great hardware in 2013, even if it only becomes practical (cost-wise) in 2014.
 
Depends on what tech appears when. Releasing a year later could have made the PS3 much, much cheaper and given it a version of Nvidia's DX10 architecture. However, as you said, you cannot wait forever.
 
Why DDR4? If we are talking about a UMA design, I don't think it makes sense. It probably has better latency than GDDR5, but it's going to be considerably slower for a UMA design. I also doubt that it's going to have mass-market availability by 2013.
GDDR5 seems a better choice.
 
I only mentioned DDR4 as an alternative for 2014+. There's power consumption and density (lower voltage, smaller process node) to consider with respect to the # of chips they'll use & final RAM count.

Also consider that with DDR4, manufacturers are targeting a much higher volume, as it's naturally the replacement for DDR3.

With respect to bandwidth, it's not as though we haven't seen the mitigation of higher bandwidth requirements via larger texture caches or TBDR or eDRAM.
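
For a back-of-the-envelope sense of the traffic in question (every figure below is my own illustrative guess, not a number from any real console):

    #include <cstdio>

    int main() {
        // Back-of-the-envelope only; all figures are illustrative guesses.
        const double pixels   = 1280.0 * 720.0;  // 720p render target
        const double bytes_pf = 4 + 4;           // 4B colour + 4B depth per fragment
        const double overdraw = 3.0;             // assumed average fragments per pixel
        const double rw       = 2.0;             // read-modify-write doubles the traffic
        const double fps      = 60.0;

        // Colour/Z traffic if every fragment access went to main RAM:
        double gbps = pixels * bytes_pf * overdraw * rw * fps / 1e9;
        std::printf("~%.1f GB/s of raw framebuffer traffic\n", gbps);
        // Keep that working set in eDRAM (Xenos-style) or bin the scene into
        // on-chip tiles (TBDR) and most of it never hits external memory.
        return 0;
    }

That's colour/Z for a single target; MSAA or multiple render targets multiply it quickly, and that's exactly the traffic eDRAM or a tile binner absorbs.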

Anyways, it's just an alternative - higher density GDDR5 (beyond4G! :p.... Gbit) isn't on the roadmap.
 

I agree with you. DDR4, at much higher volume and coming to the mainstream in 2012, maybe makes a very cheap alternative for a console targeting 2014.

And Samsung, Hynix and others now have samples: taped-out 4Gb modules at 2.13 and 2.4 GHz on 30nm... an 8GB RAM console (eight 8Gb modules...) - dream come true ;) (for Crytek and other devs too).

http://it-chuiko.com/computers/7915-samsung-sozdaet-pervyj-v-mire-modul-pamyati-ddr4.html

http://it-chuiko.com/tags/DDR4/

http://www.techpowerup.com/137663/S...st-DDR4-DRAM-Using-30nm-Class-Technology.html
 
So is DDR4 supposed to be purely a quad-pumped design à la GDDR5?

It seems that DDR4 is a different way of dealing with memory access (correct me if I'm wrong):


" DDR4 also anticipates a change in topology. It discards dual and triple channel approaches (used since the original first generation DDR[28]) in favor of point-to-point where each channel in the memory controller is connected to a single module.[2][3] This mirrors the trend also seen in the earlier transition from PCI to PCI Express, where parallelism was moved from the interface to the controller,[3] and is likely to simplify timing in modern high-speed data buses.[3] Switched memory banks are also an anticipated option for servers.[2][3] "


Do I understand right? They already have 2 GByte (testing 16Gb) DDR4 modules ( http://www.hynix.co.kr/gl/pr_room/news_data_readA.jsp?NEWS_DATE=2011-04-04:08:55:16 )?

" Three months later in April 2011, Hynix announced the production of 2 GB 2400 MHz clock speed DDR4 modules, also running at 1.2 V on a process between 30 and 39 nm (exact process unspecified),[8] adding that it anticipated commencing high volume production in the second half of 2012.[8] Semiconductor processes for DDR4 are expected to transition to sub-30 nm at some point between late 2012 and 2014.[2][23]
"

http://en.wikipedia.org/wiki/DDR4_SDRAM
 
Well, exactly. The longer they put it off, the better the chances of cheaper, higher density or faster DDR4. Didn't people want their 8GB consoles to actually be feasible??
;)
----------------
It's not so much a question of whether it'll exist, but whether it will be ready for mass production. Early adoption of the process won't be cheap - far from it, as capacities will be constrained and it'll take time to get good yields.

Speaking of mature processes, I wonder where IBM's 32nm process with embedded DRAM will be.
Some pages ago I was fantasizing about the odds of a many-core design; after reading more about the PowerPC A2 and about IBM's expectations for eDRAM at 32nm (they expect better results than at 45nm), it sounds a bit less like one's geeky/wacky speculation.
I don't expect Sony or MS to use anything other than 32/28nm processes for their next-gen designs, as I don't believe that either TSMC or GF will have properly transitioned to better processes by the time the next gens launch (especially if you consider that they may want to start production x months before release).

I think it's possible to come up with something competitive as long as IBM finds some interest in funding the project a bit. Is there a market for a more "number crunching" oriented PowerPC A2? I'm not sure.
Would Sony take the risk of moving to such an architecture? IMO, after the losses on the PS3 and their recent statements on the matter, I believe there is no way. For MS? It's possible, but I would put the odds pretty low. MS is tied to DirectX evolutions, but say DirectX 12 is still not completely defined, or there is no compliant part ready for launch: a many-core chip backed by tiny DirectX 11-class, "ROP/RBE-less" GPU(s) with support for tiling may do the trick.

The programming model could be:
everything from scene traversal to vertex handling is done on the throughput cores;
data is read on the fly by the on-board GPU(s) from L2 (spilling to RAM if necessary);
the GPU(s) do their thing - rasterize, texture, fill the render target - and write tiles somewhere in L2 (spilling to RAM if necessary);
the throughput cores read tiles of the render target and do deferred shading, post-process operations, etc.;
results are written back to RAM and sent to the display.
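
In rough code terms, the flow I have in mind would look something like this (purely a sketch of the above; every type and function name here is invented, not any real API):

    #include <vector>

    // All names below are invented, to illustrate the data flow only.
    struct DrawBatch {};                // output of traversal + vertex work
    struct Tile { int x = 0, y = 0; };  // a screen-space bin living in L2

    std::vector<DrawBatch> traverse_and_transform() { return {}; }        // throughput cores
    std::vector<Tile> rasterize_into_l2(const std::vector<DrawBatch>&) {  // tiny ROP-less GPU(s)
        return {};                      // spills to RAM if L2 overflows
    }
    void shade_and_postprocess(Tile&) {}   // throughput cores read tiles back
    void write_back_to_ram(const Tile&) {}

    void frame() {
        auto batches = traverse_and_transform();    // 1. cores: traversal/vertices
        auto tiles   = rasterize_into_l2(batches);  // 2. GPU: rasterize/texture into L2
        for (Tile& t : tiles) {
            shade_and_postprocess(t);               // 3. cores: deferred shading + post
            write_back_to_ram(t);                   // 4. final RT to RAM -> display
        }
    }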
 
Some "details"

“The multicore architecture of the console is a natural fit for our in-house HD engines, such as the Anvil engine,” Parenteau said during a developer roundtable at E3.
Parenteau, one of the few people in the industry that knows of the Wii U’s deepest hardware specifications, said the console’s “large memory capacity” would allow for enhancements to Anvil tech.

He said the extra processing power would allow for precalculating data and increasing cache sizes.

“All the graphical shaders used in Assassin's Creed are fully functional on Wii U,” Parenteau added.

He claimed that developers with Wii experience will find a familiar set of APIs in Wii U.
“New features, such as multicore processing, will extend the APIs in a natural way with low-level and straightforward calls,” he added.

http://www.develop-online.net/news/37999/Programming-guru-gives-insight-into-Wii-U-tech

Does anyone still think it's a lower performer than the PS360?
 
"..enhancements to Anvil tech.."

Is the Anvil tech already running on Wii?
"Large memory capacity" is not "larger" memory capacity. In other words, there is no comparison made. The "...enhancements to Anvil tech." could mean enhancements they already have planned for the other consoles. In that respect, he could be talking about the other HD consoles. There is just no way to be sure.

There is too much leaping to conclusions without the proper information. We should wait for a lot more information before making that leap of faith. Isn't that the usual prescribed medicine?

Right now, all we know is that three debuted games are running at 720p no AA on Wii U.
 
He said the extra processing power would allow for precalculating data and increasing cache sizes.
From my understanding, this may not be the greatest thing to brag about. >_> (That's all I'll say aboot that)

He said the extra processing power would allow for precalculating data and increasing cache sizes.

I'm not sure I understand the context of this or how processing power is relevant to precalculating data or the size of a cache.
 
Some "details"

http://www.develop-online.net/news/37999/Programming-guru-gives-insight-into-Wii-U-tech

Does anyone still think it's a lower performer than the PS360?

I was also assuming that he was talking relative to the performance of existing implementations of Anvil on the other HD consoles, but his lines about the API on the original Wii confuse these matters. So I have to agree with AlStrong for a moment: there is no clear indication yet that we're looking at an upgrade, even if it is incredibly hard to imagine there won't be one (but we should all know Nintendo by now, and wait for evidence).
 
"Large memory capacity" is not "larger" memory capacity. In other words, there is no comparison made. The "...enhancements to Anvil tech." could mean enhancements they already have planned for the other consoles. In that respect, he could be talking about the other HD consoles. There is just no way to be sure.

There is too much leaping to conclusions without the proper information. We should wait for a lot more information before making that leap of faith. Isn't that the usual prescribed medicine?

Right now, all we know is that three debuted games are running at 720p no AA on Wii U.

If it is permitted by the large memory, then it is an enhancement over the other consoles.

From my understanding, this may not be the greatest thing to brag about. >_> (That's all I'll say aboot that)



I'm not sure I understand the context of this or how processing power is relevant to precalculating data or the size of a cache.

Isn't that the way deferred lighting and such work?

calculate -> store in RAM -> calculate other stuff -> store in RAM -> combine those two (the "precalculated" stuff) to get something even better
 
Some "details"



http://www.develop-online.net/news/37999/Programming-guru-gives-insight-into-Wii-U-tech

Does anyone still think it's a lower performer than the PS360?

Also from that article:

The company has since admitted that, during its E3 reveal of the new tech, it should have put more emphasis on the system’s hardware.

And yet how hard is it to put a PowerPoint together? Let some devs talk about more specifics? Or loosen the reins on IBM and AMD to tout their designs?
 
Isn't that the way deferred lighting and such work?

"Precalculated" is generally used in the context of offline computation, e.g. baking lighting information into the map, as opposed to real-time calculation.
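
To make that distinction concrete, here is a minimal made-up sketch; bake_lightmap() and sample_lightmap() are invented names, and the "bake" would run offline in the build pipeline, not on the console:

    #include <vector>

    // Invented example: "precalculated" = computed once offline, only read at runtime.
    std::vector<float> bake_lightmap() {     // build-pipeline step, not on the console
        std::vector<float> lm(1024 * 1024);
        for (float& texel : lm)
            texel = 1.0f;                    // stand-in for expensive GI math
        return lm;
    }

    // Runtime cost is just a fetch - this is what extra RAM buys you.
    float sample_lightmap(const std::vector<float>& lm, int u, int v) {
        return lm[v * 1024 + u];
    }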

calculate->store in ram->calculate again other stuff->store in Ram-> calculated those two (precalculated stuff) to get even better

You might as well apply this to all games, because you have to store the render target information somewhere.
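
And for reference, the usual deferred flow that does go "calculate -> store -> calculate" looks roughly like this (schematic only; all names here are invented):

    #include <vector>

    // Schematic only; every type and function is invented for illustration.
    struct GBuffer {};                 // albedo/normal/depth render targets
    struct Mesh {};
    struct Light {};

    void write_gbuffer(GBuffer&, const Mesh&) {}     // pass 1: store attributes
    void accumulate_light(GBuffer&, const Light&) {} // pass 2: read back + shade

    void render(const std::vector<Mesh>& meshes, const std::vector<Light>& lights) {
        GBuffer gb;                                  // lives in RAM (or eDRAM)
        for (const Mesh& m : meshes) write_gbuffer(gb, m);      // "store in RAM"
        for (const Light& l : lights) accumulate_light(gb, l);  // "calculate again"
    }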
 