Predict: The Next Generation Console Tech

Status
Not open for further replies.
You know, with so many fruitful and game-changing technologies in development right now, let's all petition Sony and Microsoft to stop the console war for a year or two more.

I want:

HSA
big.small processing*
HMC
TSV
AVX
DX12
OCL
C++ AMP

And a sexy sleek design with a small footprint.

Heck I even want an SSD, Wireless-AC and a decent OS on these new consoles.
I want it all on 22nm or less and under 150 watts total system power consumption (here in Europe electricity prices are going through the roof!!)

What do you think, guys? Who wants a shiny new console in 2014 with all the above?

* A bit like big.LITTLE, but not necessarily ARM-based

Of course I want it all for £329/$499 or less!
 
You know, with so many fruitful and game-changing technologies in development right now, let's all petition Sony and Microsoft to stop the console war for a year or two more.

The list of crimes one would have to commit for this would probably mean the only place they could run to would be North Korea.
 
http://www.koreanewswire.co.kr/newsRead.php?no=622880&ected=

Maybe of interest to you guys; sorry if it's old. I don't know if it means anything for the Xbox 720, but it's still interesting that Microsoft is joining a team so close to the metal.
Interesting... the "specs" would be final by the end of the year, so I doubt there's any time left to get this into a console in 2013. Any chance?

There's not much info on their website. They give a bullet list of advantages which looks like what TSV naturally provides anyway, but I'm curious whether it differs from the planned JEDEC high-power WideIO... or are they simply a competitor?

EDIT: it seems to be practically compatible with the WideIO standard. Is it actually the higher-power version of WideIO they were talking about, rather than a competitor?

http://www.infoneedle.com/posting/100175?snc=20641
The cross-section is shown in Figure 2c and the logic/memory interface (LMI) is following the new JEDEC Wide I/O SDR (JESD229) standard (http://www.jedec.org/), which is shown in Figure 2d.
 
UE4 seems to be about dynamic lighting.

John Carmack's MegaTexture tech is pre-baked lighting.


When I look at a beautiful oil painting, the light is in a way "pre-baked" and comes from within the artist. UE4 is going to soak up all this processing just to have dynamic lighting. The stuff I've seen from id looks more compelling and dramatic.

I always thought that MegaTexture was unrelated to whether your lighting model is dynamic or baked, and that baked lighting was chosen in Rage due to the framerate target... I mean, I can see the benefits of using pre-baked lighting when the whole point of MegaTexture is to optimize texture loading, but I can't see how that would make it impractical to use dynamic lighting... Care to explain why?
 
Well, I have an architectural question.

Imagine that you have a multicore CPU that, instead of the typical SIMD unit, has a lower-speed MIMD configuration (like the ones in a GPU) in the same place. Is that possible, or just a waste of time? I suppose the MIMD unit would run at a lower clock speed than the rest of the CPU.
 
Well, I have an architectural question.

Imagine that you have a multicore CPU that, instead of the typical SIMD unit, has a lower-speed MIMD configuration (like the ones in a GPU) in the same place. Is that possible, or just a waste of time? I suppose the MIMD unit would run at a lower clock speed than the rest of the CPU.

Sounds like the integrated GPUs we have in current Intel and AMD processors. They are getting closer to being flexible and easy to program with each generation. While not MIMD, they run at lower clock speeds and excel at executing parallel tasks.

What is the closest GPU architecture to MIMD? Fermi seems a good candidate, or GCN...
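To make the SIMD/MIMD distinction concrete, here's a minimal Python simulation (purely illustrative, not tied to any real ISA; lane counts and operations are made up): a SIMD unit applies one instruction across all lanes in lockstep, while a MIMD unit lets each lane run its own instruction stream.

```python
# Illustrative only: contrasts SIMD lockstep execution with MIMD
# independent instruction streams. Nothing here models a real chip.

def simd_execute(op, lanes):
    """One instruction, applied to every lane in lockstep."""
    return [op(x) for x in lanes]

def mimd_execute(programs, lanes):
    """Each lane pairs with its own instruction stream."""
    return [prog(x) for prog, x in zip(programs, lanes)]

data = [1, 2, 3, 4]

# SIMD: all four lanes run the same "multiply by 2" instruction.
print(simd_execute(lambda x: x * 2, data))   # [2, 4, 6, 8]

# MIMD: each lane runs a different operation independently.
streams = [lambda x: x + 10, lambda x: x * x,
           lambda x: -x,     lambda x: x // 2]
print(mimd_execute(streams, data))           # [11, 4, -3, 2]
```

The divergence cost is the point: on real SIMD hardware the second case would force serialization, which is why GPUs only approximate MIMD at the granularity of independent wavefronts/warps.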
 
You know, with so many fruitful and game-changing technologies in development right now, let's all petition Sony and Microsoft to stop the console war for a year or two more.

It is a horrible shame that these companies won't wait an extra year or two. I know the current hardware is quite old, but the wait would be worth it IMO and it would also give me time to build a gaming PC to hold me over until then. :p
 
You know with so many fruitful and game changing technologies in development right now lets all petition Sony and Microsoft to stop the console war for a year or two more.
The thing with technology is that there's always a fruitful and game-changing technology just 12-18 months out. If you wait for the next big thing, you'll always be waiting, especially when the delivery of commercially viable new technologies is both unpredictable and frequently well behind expectations. A tech you are relying on for a year may not be available until a year or two after (65nm in the case of this gen of consoles). It's better to select from the technologies available and know they are there to use for your product's release.
 
We are fairly close to TSV and it may be used in some next-gen console. C++ AMP is being pushed in a big way, and whilst we may not be ready for prime time with a Hybrid Memory Cube, TSV negates the need for it somewhat.

All in all, hopefully we will see some of the traditional bottlenecks of current computing systems slowly chipped away at. No one minds a new console that is easy for developers to make good games for and has innovative features to make it more efficient.
 
The thing with technology is that there's always a fruitful and game-changing technology just 12-18 months out.

This is very true, though I still wish they would wait an extra year if it improves their chances of putting more than 2GB of memory in these machines.
 
Would it be stupid to use the mobile WideIO chips, which are supposed to be mass-produced in 2013, but with 32x 1Gb chips in parallel? That could reach 544GB/s and be ultra low power.
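For reference, that 544GB/s figure checks out as a back-of-the-envelope calculation if you assume each WideIO device exposes the JESD229-style 512-bit interface (4 channels x 128 bits) clocked at 266 MHz SDR; both per-device numbers are my assumptions, not confirmed for any console part:

```python
# Back-of-the-envelope check of the 544 GB/s claim.
# Assumed per-device parameters (JEDEC Wide I/O SDR-style):
bus_width_bits = 512       # 4 channels x 128 bits each
clock_hz = 266e6           # SDR: one transfer per clock edge
chips = 32                 # 32x 1Gb devices in parallel

bytes_per_transfer = bus_width_bits / 8              # 64 bytes
per_chip_gbps = bytes_per_transfer * clock_hz / 1e9  # ~17.0 GB/s
total_gbps = per_chip_gbps * chips                   # ~544.8 GB/s

print(f"{per_chip_gbps:.1f} GB/s per chip, {total_gbps:.1f} GB/s total")
```

So the quoted number is just 32 devices at the top-end ~17 GB/s each; at the slower 200 MHz grade it would drop to about 410 GB/s.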
 
Aren't mobile DRAMs meant to be soldered/stacked onto the processor/SoC?
Yes, I think so, and that would be impossible here considering the area required, so I was thinking 2.5D.

http://www.cadence.com/Community/bl...n-the-jedec-wide-i-o-standard-for-3d-ics.aspx
The spec doesn't mandate the interconnect method between memory and logic chips - it could be face-to-face, side-by-side with interposer, or stacked memory on top of logic. The exact mechanical placement of the interface on the logic or memory chip is up to the designer
I suppose there would be a need for logic and buffers inside the interposer, raising the cost. Would impedance and assembly be the same as soldering it directly onto the SoC?

Shoemaker also noted that discussions are underway about the next generation of wide I/O. There are proposals for significantly higher bandwidth, and explicit support for 2.5D assembly. There's also a JEDEC High Bandwidth Memory group that is leveraging some of the wide I/O work, but this group is targeting high-performance applications. Wide I/O is more focused on mobile devices and tablets and other power-sensitive applications.
I'm confused about what they mean by "explicit".
 
It is a horrible shame that these companies won't wait an extra year or two. I know the current hardware is quite old, but the wait would be worth it IMO and it would also give me time to build a gaming PC to hold me over until then. :p

Hey, they're still over a year out at least (excepting the Wii U half-node). Do it! Do it now! :LOL:

In all seriousness, though, the next couple of months should see quite a bit of price competition on PC components. Nvidia are finally pushing AMD on price, and Intel are basically competing with themselves, with Ivy Bridge being just a slight bump over Sandy Bridge. I've even seen 240GB SSDs for <$200.
 
Hey, they're still over a year out at least (excepting the Wii U half-node). Do it! Do it now! :LOL:

This is OT but my issue right now is money, my student loans are draining me dry.:cry:
 
Wow!:oops:

I very much understand what "playing devil's advocate" is. That doesn't change, however, the fact that the statement you made, hypothetical or not, was nonsense.

That if the PS3 had been more important in the Western markets where Unreal engine games were more successful, that Unreal engine games would be more important on the PS3? Well, I don't think that's necessarily nonsense, much less "plain ignorant fanboyish drivel".

Also on your notion of "PS3 not commanding half the HD market, because of PC", I clearly said half the HD console installed base.

... yes, you did say that. And I said that was arbitrarily ignoring the PC market, where Unreal-based games sold well (seeing as how the Unreal-based games on the PC are undeniably part of the market). Which was important because the conversation was about the importance of Unreal-based games. Re-iterating that you had decided to arbitrarily exclude PC games from the market is, frankly, a dumbfounding response ("but it doesn't count because *I* wasn't counting it!!").

And to repeat again - the comparison Mr Fox was making was about how much Sony should be influenced by Epic in the next generation compared to Sony first party studios, seeing as how first party games were bigger headliners on the PS3 this generation.

I'm just re-re-iterating now though, and this is still OT, so I won't re-re-re-iterate this and won't stay OT any longer. I promise. This probably belonged in the "Next-gen discussion" thread from the beginning.
 
I always thought that MegaTexture was unrelated to whether your lighting model is dynamic or baked, and that baked lighting was chosen in Rage due to the framerate target...

I haven't had the chance to play Trials Evolution yet (still dealing with RRoD), but doesn't that use dynamic lighting? I've got high hopes for virtual texturing next gen, just so long as consoles get decent access times to the texture data as standard.
 
I always thought that MegaTexture was unrelated to whether your lighting model is dynamic or baked, and that baked lighting was chosen in Rage due to the framerate target...

Lightmaps are a natural fit for unique virtual textures (since lightmaps are required to be unique across all surfaces), but they're certainly not a requirement.
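For anyone curious how the "unique across all surfaces" property plays out, here's a toy sketch of the basic virtual-texture address split (my own simplification, not id's actual implementation; the 128x128 page size is an assumption): a global texel coordinate resolves to a page index plus an offset within that page, and a baked lightmap texel lives at exactly one such address.

```python
# Toy virtual-texture addressing: a global texel coordinate maps to
# (page, offset-within-page). The 128x128 page size is an assumed
# value for illustration, not Rage's real configuration.
PAGE_SIZE = 128

def texel_to_page(u, v):
    """Split a global texel coordinate into page coords and in-page offset."""
    page = (u // PAGE_SIZE, v // PAGE_SIZE)
    offset = (u % PAGE_SIZE, v % PAGE_SIZE)
    return page, offset

# Every surface owns a unique region of the virtual texture, so a
# baked lightmap texel has exactly one address; e.g. texel (300, 70):
page, offset = texel_to_page(300, 70)
print(page, offset)   # (2, 0) (44, 70)
```

Because lightmaps must be unique per surface anyway, baking them straight into the unique texture pages costs nothing extra, which is why the two techniques pair so naturally even though neither requires the other.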
 
From what I gather, the next gen of consoles should be all about HSA and compute efficiency.

The high concept seems to be to move away from "brute force" dedicated CPU + GPU combos, and more in the direction of combining heterogeneous kinds of logic in a way that allows devs to easily run different parts of their code on exactly the kind of hardware that is most efficient at computing it.

Taking this to an extreme, a really nice PS4 should actually consist of x86 CPU + GPGPU + Cell-derived 1PPU/4SPU modules + programmable logic, all working together in a single, unified address space.

If done right, such a system would offer a lot of things at the same time: easy programmability on the surface, lots of room for deeper system-specific optimizations, exceptional perf/mm², and perfect backwards and cross-platform compatibility.

If you're too lazy to be an "early adopter" of HSA, just code for the traditional CPU + GPU combo. If you're keen to exploit all the power, use HSA to efficiently distribute your code: need to process voice recognition? Put it on the FPGA. Need to compute a lot of general instructions? The SPEs can't wait. The key isn't to have hundreds of cores and lots of GHz, but to have the right logic for the right code.
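As a toy illustration of that "right logic for the right code" idea (entirely hypothetical device and task names, no real HSA runtime involved), a scheduler at its simplest is just a table mapping task categories to the unit best suited for them, with the CPU as the safe fallback:

```python
# Hypothetical heterogeneous dispatch table. The device names are
# invented for illustration and do not correspond to any real API.
BEST_UNIT = {
    "branchy_general": "x86_cpu",      # unpredictable control flow
    "streaming_simd":  "spe_module",   # DMA-friendly streaming math
    "data_parallel":   "gpgpu",        # wide, regular parallelism
    "signal_fixed":    "fpga",         # e.g. voice recognition
}

def dispatch(task_kind):
    """Pick the most efficient unit, falling back to the CPU."""
    return BEST_UNIT.get(task_kind, "x86_cpu")

print(dispatch("signal_fixed"))    # fpga
print(dispatch("unknown_kind"))    # x86_cpu
```

The hard part HSA actually targets is everything this sketch omits: the shared virtual address space, coherency, and queueing that let those units consume the same pointers without explicit copies.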
 
Don't you also want to maximise utilisation though?

No use having silicon on a board that many devs won't use. If you have an x86 CPU, what's the need for a 1PPU/4SPE module? Why not have 4 x 1PPU/4SPE modules, with those "PPU"s being updated versions capable of performing whatever function an x86 CPU would perform, with comparable efficiency?

Hardware redundancy is also no good.

You want efficiency across the board.
 