Predict: The Next Generation Console Tech

I am very conservative about the Nintendo Café's possible CPU. I believe we are going to see a PowerPC 970-based configuration (single-core or dual-core) as the CPU and an AMD E4690 as the GPU, with a NUMA memory configuration like the PS3's but with 512MB for the E4690 and 256MB for the 970. The idea is to push a lot of the work that is being done on the multicore configurations of the PS3 and 360 onto the GPU using OpenCL.
A very interesting option, because maybe we could see a performance win over the PS3 and X360 at very low wattage*.

* At 28nm -> 10 watts?
http://www.anandtech.com/show/4307/amd-launches-radeon-e6760

Edited:

E4690 could deliver 384 GFLOPS (Xenos/R500/C1 = 240 GFLOPS)?
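
For what it's worth, the usual peak-rate counting checks out on both sides, assuming the E4690's 320 stream processors at 600MHz and counting a MAD as 2 FLOPs:

E4690: 320 SPs x 2 FLOPs x 0.6GHz = 384 GFLOPS
Xenos: 48 ALUs x 5 lanes (vec4 + scalar) x 2 FLOPs x 0.5GHz = 240 GFLOPS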

http://www.techsource.com/downloads/brochures/pdf/GPGPU_White_Paper.pdf

http://www.pietruszkiewicz.com/sites/default/files/upm_estii_gpu_lecture.pdf
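
On the OpenCL idea above, here is a minimal sketch (plain C against the standard OpenCL 1.1 host API) of pushing a per-entity update onto the GPU instead of SPUs/CPU cores; the kernel, buffer sizes, and names are made up for illustration, and error handling is omitted:

#include <CL/cl.h>

/* Toy kernel: one Euler integration step per particle/entity. */
static const char *src =
"__kernel void integrate(__global float4 *pos,\n"
"                        __global const float4 *vel,\n"
"                        const float dt)\n"
"{\n"
"    size_t i = get_global_id(0);\n"
"    pos[i] += vel[i] * dt;\n"
"}\n";

int main(void)
{
    enum { N = 4096 };
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "integrate", NULL);

    /* Positions/velocities would be filled from game state in practice. */
    cl_mem pos = clCreateBuffer(ctx, CL_MEM_READ_WRITE, N * sizeof(cl_float4), NULL, NULL);
    cl_mem vel = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  N * sizeof(cl_float4), NULL, NULL);
    float dt = 1.0f / 60.0f;

    clSetKernelArg(k, 0, sizeof pos, &pos);
    clSetKernelArg(k, 1, sizeof vel, &vel);
    clSetKernelArg(k, 2, sizeof dt, &dt);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clFinish(q);
    /* Results could stay resident on the GPU for rendering, which is
       exactly what makes this attractive versus a CPU round-trip. */
    return 0;
}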
 
I am very conservative about the Nintendo Café's possible CPU. I believe we are going to see a PowerPC 970-based configuration (single-core or dual-core) as the CPU and an AMD E4690 as the GPU, with a NUMA memory configuration like the PS3's but with 512MB for the E4690 and 256MB for the 970. The idea is to push a lot of the work that is being done on the multicore configurations of the PS3 and 360 onto the GPU using OpenCL.

What about the E6760? TDP is only 35 watts, and with 480 stream processors, 24 texture units and 8 ROPs, even in its standard configuration (600MHz core and 1GB of 800MHz GDDR5, 3.2GHz data rate) it should be fine. It can also output to six displays, which is something Nintendo is looking at with its controller.
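
By the usual peak-rate counting (and assuming the stock 600MHz clock), that works out to 480 SPs x 2 FLOPs x 0.6GHz = 576 GFLOPS of shader throughput, well clear of Xenos' 240.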

Pair the above with a dual core CPU with enough power to keep the above GPU ticking away without being a bottleneck and you have the makings of a very decent system.

In agreement with you, urian, I do think the 1GB in the E6760 would probably be overkill in Nintendo's eyes, and maybe they would reduce it to 512MB; another 256MB/512MB of reasonably quick DDR3 should do the trick. It would help reduce TDP too, on the GPU side at least.
 
How did they admit that? Because of Sony's security breach? Or because Nintendo didn't even try?
Nintendo or Sony would have to be braindead before sharing revenues with a 3rd party who provides nothing that they couldn't do themselves.

Go to IGN and search for the news.

I am also not sure Nintendo could develop on their own a system equivalent to at least Sony's online service; they are still just a gaming company. In the same way, they couldn't, almost alone, develop a CPU/GPU like Sony did with Cell.
 
Go to IGN and search for the news.

I am also not sure Nintendo could develop on their own a system equivalent to at least Sony's online service; they are still just a gaming company. In the same way, they couldn't, almost alone, develop a CPU/GPU like Sony did with Cell.

The Sony / Toshiba / IBM alliance was hardly Sony 'almost alone'. ;)
 
Go to IGN and search for the news.

What are "the news"?

I am also not sure Nintendo could develop on their own a system equivalent to at least Sony's online service; they are still just a gaming company. In the same way, they couldn't, almost alone, develop a CPU/GPU like Sony did with Cell.

As jonabbey already said, Sony didn't design the CPU alone. But it's all a matter of investment and its return. It's not as if Valve was a network company when they made Steam. They saw that there was a potential market and took the opportunity. What's stopping Nintendo from doing the same? If you don't have the knowledge for this, you can always hire people who have.
 
The Sony / Toshiba / IBM alliance was hardly Sony 'almost alone'. ;)

Sure, but from what I remember Sony was, by far, the main force behind Cell, just as it was one of the main forces behind Blu-ray.

Does anyone think that Nintendo could have done something like those, even as an equal partner, being just a gaming company?

What are "the news"?

Where Nintendo recognizes that they can't do everything alone. I am really not in the mood for link hunting ;).

As jonabbey already said, Sony didn't design the CPU alone. But it's all a matter of investment and its return. It's not as if Valve was a network company when they made Steam. They saw that there was a potential market and took the opportunity. What's stopping Nintendo from doing the same? If you don't have the knowledge for this, you can always hire people who have.

Well, for one, Steam/XBLive/PSN in their initial days were much less than they are today, so we would be demanding from an "inexperienced" company the same thing the others built up over 5+ years.

On the other side, outsourcing can be advantageous in many cases; it all depends on whether you want a horizontal or a vertical business plan.
 
I also think the article was somewhat disappointing on the technical side, but it was well grounded on the business side, because Nintendo moving first is forcing M$ and Sony to invest more in their next-gen consoles.

This link* has some good discussion of what Project Cafe could be.

In the end, on the GPU side: if Nintendo comes with a custom Radeon 4850/RV770, Microsoft with a Fusion II / Krishna / Radeon 5770 (Juniper) or even a 6850, and Sony with a GeForce GTX 460**, as a whole package (CPU+GPU, RAM, drive, etc.) in the 200 to 300+ watt range under DX10/11, I have the impression the gap will only change the number of frames per second at 1080p with 3D (despite maybe a little more texture/shading processing, tessellation, etc.).
Well, Sweeney was taken for a fool when he stated that Crysis on a high-end PC rig looks only marginally better than GeoW on the 360. It was classified as marketing bullshit.

The point is, keeping in mind how power consumption and thermal dissipation have skyrocketed since 2005/2006 (they hit the ceiling, by the way), we can't expect much more than the kind of specs you're speaking about.
Frostbite 2 looks really good on a good PC rig, and it may be a disappointment or not, but we can't indeed expect much more (the engine will mature, artists will do more with less, etc., but the engine is pretty much ready to push next-gen systems). Maybe we can expect more geometry through tessellation, but that's it; we already have a good picture of what next-gen systems could push via "DirectX 11 only" PC games.

Maybe I'm just a dreamer, but I think the only one who could provide something different is Sony... if they went with PowerVR Series 6 (maybe the best processing-power/wattage ratio on the market, even if only as licensed IP; could Imagination's TBDR in PowerVR compete with AMD and Nvidia?) customized for high clocks (600/800MHz?) and 16 cores (MP16), with a "more powerful" Cell (16 SPUs) helping with light interactivity: something like "extreme deferred shading" ("full global illumination") or even ray tracing at 720p (if the results are really better than the scan-line/rasterize/shader paradigm).


* http://www.bit-tech.net/news/gaming/2011/04/25/project-cafe-system-specs-leaked/1

** http://techreport.com/articles.x/17747/12
I'm not sure the problem is the hardware; I believe it is more on the software side.
Assuming manufacturers avoid big boxes, we're looking at ~1 TFLOPS of compute power. I believe such figures are within reach for throughput-core / manycore designs.
You may still need a GPU somewhere, but I don't believe the hardware is the problem; it's software. Sweeney speaks, but I see nothing ready. id is more advanced with their "mega geometry" research (not sure if that's the proper name). You have things like Atomontage, and that's it. Nothing is ready; even if hardware were to prove a compliant platform, we won't see the mix of various rendering techniques Sweeney talked about in some paper.
It's kind of depressing. I would happily trade some "graphic fidelity" for something that looks different, something fresher, but most won't. They want the most realistic war game possible.
As explained by the Atomontage developer, voxels would open a lot of possibilities for physics-based gameplay, but it won't happen any time soon. For some reason, I believe that really casual gamers and female gamers could be more receptive to the benefits those kinds of techniques bring than your average gamer.

Next gen is about a more beautiful CoD, that's it; nothing new is to be expected in regard to gameplay, leaving maybe some new forms of input aside. I'm not expecting major breakthroughs in animation or AI either. It's going round in circles gameplay-wise, but most are happy with that. Competitive online play, mostly shooters, is set to rule the battlefield for quite some time; new techniques don't bring anything tangible to those games and may come at the cost of graphic fidelity, so... it won't happen.
 
Well, Sweeney was taken for a fool when he stated that Crysis on a high-end PC rig looks only marginally better than GeoW on the 360. It was classified as marketing bullshit.

I guess that depends on who's looking. I could tell a big difference; could my mom, or even a "casual" PS360 player? Maybe not. I don't think there's an order of magnitude or even a generational gap there. Especially if we were talking about Gears 3...
 
I was thinking today, as I was pricing out a hypothetical PC rebuild: 8GB of RAM is pretty cheap. Newegg sent me a flyer with 8GB of name-brand stuff for 64.99. I also spotted some generic-type stuff at 4GB for 24.99 after mail-in rebate, or 50 bucks for 8GB, although I know DDR3 doesn't directly translate to what the consoles will use. It doesn't seem to me that 50 bucks is vastly more than what 512MB of GDDR3 went for in the PS360's day.

My pie-in-the-sky dream is to see 12GB in next gen. It doesn't seem cost-wise impossible at all. I'm tired of all these lowered-spec assumptions; I don't think they have to be reality at all.

I can't even imagine what devs would do with that kind of room. They wouldn't even know how to deal with it; they'd cry with joy. Games would look photoreal. Imagine the console standard having more RAM than the average gaming PC of the time.

Just a thought :p

I was trying to think: what amount of RAM did standard PCs have back in 2005? I'm guessing maybe 2GB? Maybe the consoles will go more with what is standard for video cards, as I suppose they did in 2005; I think in 2005 video cards only had 512MB at the top. Here in 2011, we still see 1GB almost as standard, with some 2GB cards (6950). VRAM on discrete GPUs hasn't really kept up with Moore's law; DDR/system RAM has, though. Looking it up, some articles show that top video cards in summer 2005 only had 256MB of RAM.
 
My pie-in-the-sky dream is to see 12GB in next gen. It doesn't seem cost-wise impossible at all. I'm tired of all these lowered-spec assumptions; I don't think they have to be reality at all.

12GB of 64-bit 1600MHz DDR3 is exactly what we need.
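
(Presumably tongue-in-cheek: a single 64-bit channel of DDR3-1600 moves 1600 MT/s x 8 bytes = 12.8 GB/s, i.e. enormous capacity behind far too little bandwidth to feed a console GPU.)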

I was trying to think: what amount of RAM did standard PCs have back in 2005? I'm guessing maybe 2GB?
2GB was about as good as it got with Windows XP 32-bit. Anything more was just overkill.

Looking it up, some articles show that top video cards in summer 2005 only had 256MB of RAM.
512MB cards did exist by then. The next page even mentions them.

Games would look photoreal.
I think you'd want to tackle lighting and shadowing before slapping 2k x 2k textures on everything, not to mention the implications for the art budget, the loading times, or streaming issues (texture popping, or blending in the high-res mip level)...
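
For scale, a quick back-of-envelope on texture footprints; a small sketch assuming RGBA8, a full mip chain (~1/3 extra), and DXT5 at 1 byte per texel, with purely illustrative numbers:

#include <stdio.h>

/* Rough memory footprint of one texture, assuming a full mip chain
   adds ~1/3 on top of the base level. Purely illustrative. */
static double texture_mib(int w, int h, double bytes_per_texel)
{
    double base = (double)w * h * bytes_per_texel;
    return base * (4.0 / 3.0) / (1024.0 * 1024.0);
}

int main(void)
{
    printf("2k x 2k RGBA8, mipped:   %5.1f MiB\n", texture_mib(2048, 2048, 4.0)); /* ~21.3 */
    printf("2k x 2k DXT5 (1B/texel): %5.1f MiB\n", texture_mib(2048, 2048, 1.0)); /* ~5.3  */
    /* With diffuse + normal + specular per material, multiply again by ~3. */
    return 0;
}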
 
I guess that depends on who's looking. I could tell a big difference; could my mom, or even a "casual" PS360 player? Maybe not. I don't think there's an order of magnitude or even a generational gap there. Especially if we were talking about Gears 3...
His talk was not really about who could tell the difference; it was more like "all games that are rendered with polygons only, using the same API, etc., tend to look 'close' to one another". The quote is from "The End of the GPU Roadmap" if you want to go through it again. By the way, I don't think we will see an implementation of REYES in real time anytime soon, but some points he's making are valid. Something as flexible as Larrabee would open a lot of possibilities to blend various forms of rendering.

I tend to agree with the Atomontage guys: voxels can add a lot to the gameplay, as they are really part of a "physical" world. I'm drooling for some H&S (like Diablo) where your spells would have effects on your environment (sand storms changing the landscape, turning dirt into mud, setting things on fire, bending or digging into the earth, real fine-grained destruction, "consistent physics effects" so at some point metal melts, some materials are attacked by acid, etc.). I would happily live with some trade-off in the graphical aspect, and using voxels the effect would be more believable than what we saw in Fracture or I don't remember which game.
It should vary per game and according to what serves the gameplay in the best way.
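
As a toy illustration of why voxels fit that kind of gameplay, here's a minimal sketch of a material grid that an effect can rewrite in place; the materials, grid size, and "rain" rule are all invented for the example:

#include <string.h>

/* Hypothetical voxel material grid: each cell carries a material that
   gameplay effects can rewrite (dirt + water -> mud, wood + fire -> burning). */
enum material { AIR, DIRT, MUD, WOOD, BURNING, METAL };

#define N 64
static unsigned char grid[N][N][N];

/* Apply a "rain" effect: any exposed dirt (only air above it) turns to mud. */
static void apply_rain(void)
{
    for (int x = 0; x < N; x++)
        for (int z = 0; z < N; z++)
            for (int y = N - 1; y > 0; y--) {
                if (grid[x][y][z] != AIR)
                    break;                    /* rain blocked below this point */
                if (grid[x][y - 1][z] == DIRT)
                    grid[x][y - 1][z] = MUD;  /* surface dirt gets wet */
            }
}

int main(void)
{
    memset(grid, AIR, sizeof grid);
    for (int x = 0; x < N; x++)        /* flat dirt floor at y = 0 */
        for (int z = 0; z < N; z++)
            grid[x][0][z] = DIRT;
    apply_rain();                      /* the whole exposed surface turns to mud */
    return 0;
}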
 
Early 2005 still had about 512MB of main memory as the standard; it was still the day of DDR1, the 3GHz Pentium 4, and a Radeon 9200SE, FX5200 64-bit or GeForce 6200 at the low end.

I know I held 512MB as the sweet usable spot for a while, but more was needed for Far Cry and that class of game.
For a console, I believe 4GB of DDR3 for the CPU (or DDR4, which might be in the timeframe) and 1GB for graphics would work well, with PCIe 3.0 plus a coherency extension, or HyperTransport, between them.
 
12GB of RAM :)
Say manufacturers are limited by power consumption and, more importantly, thermal dissipation, and invest more heavily in RAM: is there research on some form of rendering that would make use of this extra memory space?
 
After the PS3, are we really going to see a console next gen with split memory pools?

The Xbox 360 uses 512MB of GDDR3 and that's it, as far as I know. The PS3 has 256MB of XDR and 256MB of GDDR3.

So why would the next-gen consoles have any DDR3 memory?

Wouldn't it be more likely for the next-gen consoles to have a single unified pool of GDDR5 or XDR2 memory shared between GPU and CPU?
 
Would there be any point in having 12GB of RAM if there isn't enough horsepower to actually do something useful with it? Using high-res textures isn't going to make things look that much better if the lighting etc. isn't any good. For me, the best-looking games these days are the ones that make the best use of lighting, not so much the ones with the best-looking textures.
 
Well, for gameplay, I don't think you'd need a whole lot (compared to 12GB) just to get the player up and running. I mean, we're also looking at 720p to 1080p (consider the pixel:texel ratio) as well as streaming.

Even if they had uncompressed textures (normal map, spec map, etc.), it'd be a big waste of bandwidth.

That amount of RAM would allow for more variety in geometry too, but I think we'd be looking at a content-creation issue to fill up 12GB...
 
Well, for gameplay, I don't think you'd need a whole lot (compared to 12GB) just to get the player up and running. I mean, we're also looking at 720p to 1080p (consider the pixel:texel ratio) as well as streaming.

Even if they had uncompressed textures (normal map, spec map, etc.), it'd be a big waste of bandwidth.

That amount of RAM would allow for more variety in geometry too, but I think we'd be looking at a content-creation issue to fill up 12GB...
Well, 12GB is a lot; let's forget the exact number and just say a huge amount of RAM.
On PC, most gamers must have ~3GB of RAM available to a game (say 4GB of main RAM plus 1GB on the GPU, minus ~2GB for Windows and various software). 4GB for the game alone would already be a lot, and 8GB a hell of a lot more.
Is there really nothing more to do with it than wasting space on uncompressed textures, or just more assets and geometry?
I remember reading some presentations (for GeoW, for example) where devs did not compute AO every frame but used some form of "spatial convergence" (or something like that) to avoid recomputing the whole thing every frame. It's just an example, but assuming there is plenty of RAM available, could devs use this kind of approach to save more computational power than they do now?
Pushing further, say you have something like a Larrabee core done right: would it be an option to build, on the fly from your ROM, pretty complex and clever data structures, and pack more information than what is usually kept in RAM, a bit like a G-buffer but pushed to another level?
OK, the more data, the more bandwidth-constrained you are, but you may not want to access all the data every frame; just hit a sweet spot between alleviating a lot of calculations and not becoming bandwidth-constrained.
As the data transfer rate from the ROM is slow, how about procedurally generated details, but not generated on the fly? The result would be kept in RAM in some convenient format.

Loading time would then be more than just loading: a lot of computation would occur. Maybe it's not an option, but as GPUs hit the power wall some time ago, and CPUs too, I just wonder if there could be a meaningful way to trade memory space for computation.
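
Something like this minimal sketch of the "compute once during loading, keep resident" idea; the noise function, resolution, and names are made up for illustration:

#include <math.h>
#include <stdlib.h>

/* Hypothetical load-time baking: spend CPU/GPU time once behind the
   loading screen to synthesize detail data, then keep it resident in
   the big RAM pool instead of regenerating it every frame. */
#define DETAIL_DIM 2048

static float cheap_noise(float x, float y)  /* stand-in for real noise */
{
    return 0.5f + 0.5f * sinf(x * 12.9898f) * cosf(y * 78.233f);
}

static float *bake_detail_layer(void)
{
    float *layer = malloc(sizeof(float) * DETAIL_DIM * DETAIL_DIM);
    if (!layer) return NULL;
    for (int y = 0; y < DETAIL_DIM; y++)
        for (int x = 0; x < DETAIL_DIM; x++)
            layer[y * DETAIL_DIM + x] =
                cheap_noise(x / 64.0f, y / 64.0f);  /* expensive once, free later */
    return layer;  /* ~16 MiB resident; the frame loop only reads it */
}

int main(void)
{
    float *detail = bake_detail_layer();  /* done once, during loading */
    free(detail);
    return 0;
}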
 
Well, for gameplay, I don't think you'd need a whole lot (compared to 12GB) just to get the player up and running. I mean, we're also looking at 720p to 1080p (consider the pixel:texel ratio) as well as streaming.

Even if they had uncompressed textures (normal map, spec map, etc.), it'd be a big waste of bandwidth.

That amount of RAM would allow for more variety in geometry too, but I think we'd be looking at a content-creation issue to fill up 12GB...

I know that the PS3 has to scale down the resolution in order to play a 3D title; is that due to the memory amount or to missing fillrate?
 
Both. The bigger concern with rendering two frames is really the geometry cost.

We're also talking about 12GB of RAM. Unless you're rendering 2560x1600 with 8-32x MSAA and 8x FP16 MRTs, you're not going to put much of a dent in it.
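
Taking the 8x MSAA end of that as a worked example: 2560 x 1600 is about 4.1M pixels; 8 FP16 RGBA targets at 8 bytes each is 64 bytes per pixel, times 8 samples is 512 bytes per pixel, roughly 2GB in total, and that's already an absurd configuration.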

I just wonder if there could be a meaningful way to trade memory space for computation.

4k x 4k shadow maps with no filtering. :p (no soft penumbras or anything)
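
(For the arithmetic: a 4k x 4k map at a 32-bit depth format is 4096 x 4096 x 4 bytes = 64 MiB, so even a dozen of those only eats 768 MiB.)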
 