Predict: The Next Generation Console Tech

Sooner or later the jump to biological computing should occur. Hopefully we don't get a dark age of a few decades first.
Considering that smaller parts in modern chips are already just a handful of atoms I don't think that really helps. Now if someone would figure out how to scale the "wires" down, that would be something.
 

You know you're going to get bombarded by a bunch of links now, don't you?
 
3x?....I'm pretty sure PS4/Durango will be 10x, easy, as every prior gen has been.

3x would be the 6670, which doesn't seem very costly.

I don't see how it's possible to get a 10x computational increase in either CPU or GPU terms in the console space, outside of building a 250-300 W+ jet engine of a console.
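A rough way to frame that claim (purely illustrative numbers, not leaked specs): a 10x raw performance jump only fits inside a console-class power budget if perf/watt improves by nearly as much across the intervening process nodes. A minimal sketch, with all figures assumed:

# Hedged sketch: how much perf/W improvement a 10x jump needs to stay
# inside a console power budget. Every number here is an assumption.

XBOX360_GPU_TDP_W = 80.0   # assumed rough figure for Xenos + eDRAM at launch
TARGET_SPEEDUP    = 10.0   # the "10x every generation" claim
NEXTGEN_BUDGET_W  = 100.0  # assumed GPU slice of a ~200 W console

required_perf_per_watt_gain = TARGET_SPEEDUP * XBOX360_GPU_TDP_W / NEXTGEN_BUDGET_W
print(f"Needed perf/W improvement: {required_perf_per_watt_gain:.1f}x")
# -> 8.0x: without close to an order-of-magnitude efficiency gain,
#    10x compute lands in "250-300 W jet engine" territory.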
 

The context was putting the following into the WiiU:

french toast said:
Something like a quad-core OoO PPC CPU.
2 GB GDDR5, unified.
An AMD 6850-class GPU.

That is perfectly doable for 2012 and would provide plenty of power to hit a 3x real gaming increase.

The 3x figure is kind of an odd thing to say considering the GPU is going to be doing the brunt of the work, but that wasn't the point in terms of cost.

-------------

That Nintendo was using an underclocked 48xx-series part and still had some thermal issues in early kits makes it hard to believe that a "6850-class" GPU would be that much more feasible.

Edit2:
There's a cascade effect when raising the power: a better/more sophisticated cooling solution for such tiny real estate, shipping weight multiplied by millions of units, motherboard components...

We've already got some hint of the relative costs of GDDR5 and DDR3 (roughly triple). Now, I'm not saying that DDR3 is going to be used, but with eDRAM present to take care of bandwidth concerns, there's really not much of a case for picking GDDR5 over DDR3.
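To put rough numbers on that, here's a back-of-the-envelope comparison of what a 128-bit bus yields with each memory type; the bus width and data rates are assumptions for illustration, not anything confirmed for any console:

# Hedged sketch: main-memory bandwidth for an assumed 128-bit bus,
# DDR3 vs GDDR5. Data rates are example values, not confirmed specs.

def bandwidth_gbps(bus_bits, data_rate_mtps):
    """Peak bandwidth in GB/s = (bus width in bytes) * (transfers per second)."""
    return bus_bits / 8 * data_rate_mtps * 1e6 / 1e9

print(bandwidth_gbps(128, 1600))  # DDR3-1600      -> 25.6 GB/s
print(bandwidth_gbps(128, 4000))  # GDDR5 4.0 Gbps -> 64.0 GB/s
# GDDR5 wins on raw bandwidth, but if eDRAM soaks up the framebuffer
# traffic, the cheaper DDR3 figure may be good enough for everything else.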
 
They could just use a lower-clocked 6850. I suppose architectural advantages would still make it (much?) faster. A quick Google search suggests there is only a ~10% TDP difference between a 4850 and a 6850, so if they can get a downclocked 4850 to work, a 6850 should fit in there as well.
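For what it's worth, the usual first-order rule of thumb is that dynamic power scales roughly with frequency times voltage squared, so a modest downclock (plus the voltage drop it allows) buys a disproportionate power saving. A rough sketch, with the voltage figures assumed:

# Hedged sketch of first-order dynamic power scaling: P ~ f * V^2.
# The voltage values are assumptions; leakage and board losses are ignored.

BARTS_TDP_W  = 127.0  # desktop HD 6850 board power (approximate public figure)
BASE_CLOCK   = 775.0  # MHz, stock 6850
BASE_VOLTAGE = 1.10   # V, assumed

def scaled_power(tdp_w, f_ratio, v_ratio):
    """First-order dynamic power estimate after down-clocking/down-volting."""
    return tdp_w * f_ratio * v_ratio ** 2

# e.g. 600 MHz at an assumed ~1.0 V
print(scaled_power(BARTS_TDP_W, 600 / BASE_CLOCK, 1.00 / BASE_VOLTAGE))
# -> ~81 W, before static leakage, VRM losses, and the memory subsystem.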
 
If you underclocked it, then it wouldn't really be "6850-class", would it? ;)

Now, if he had mentioned Barts, then we would have had a set piece of hardware at whatever clock. That's a pretty general statement, and different from specifying a particular GPU; i.e. the 6850 is specifically not the 6870 due to a difference in clocks, even though they are the same chip.

The part they used in the dev kit at the time wasn't even an 800-ALU/10-SIMD part, by the way, so it not only had a lower clock but also fewer active units. There's a little more to it than that, but we can't know for sure (there was a 40nm Mobility 4830 chip, for instance, which would widen the power gap, but again, we can't know for sure what they used). The only reason I mention it is that you have to keep in mind what the availability of 55nm desktop parts would have been in 2011.
 
IdeaMan has posted again on NeoGAF

http://www.neogaf.com/forum/showpost.php?p=36382948&postcount=10178

Some news concerning the dev kits, as promised.

Third-party studios will receive the "V5" dev kits very soon (before next month). It's even possible that some have just gotten them.

- They are apparently the dev kits that Nintendo itself had been using lately (whereas third parties had the V4).
- There is a noticeable, though not huge, increase in performance with them. We don't know if this improvement is due to a change of components, tweaking of the existing ones (frequencies, etc.), or a general light optimization of the hardware, the dev kit itself, the software development kit that may be tied to it, etc.

Some details about the context and assumptions:

- Except maybe for big Japanese studios (like Capcom), I doubt that other third parties received these dev kits (much) earlier. It's likely that my sources are among the first to be granted the latest revisions. This means that all the statements and news from foreign developers that you've read until now were made in a V4 dev kit context, before the release of these newest ones.

- A previous post by lherre indicated that Nintendo's people were testing an engine and saw slightly improved results on the V5 compared to the V4.
Context: it was posted a month ago, when lherre decided to come out of his lair in answer to my very first message, to confirm that the Wii U was clearly closer to "2x than 5x Xbox 360". He basically added that the more advanced dev kit expected at Nintendo headquarters at the time, which I had talked about, would not dramatically change the Wii U's power status. So he surely knew of these engine benchmarks before my post, maybe by some days, maybe some weeks.
Assumption: considering the 1-month+ interval between those internal tests and the V5 delivery to third parties (minus dev kit assembly and shipping), there's a chance that the V5 dev kits in the hands of foreign studios have received even more hardware refinements and optimizations than the models used for those Nintendo tests some time before.

- They could have a different code name, but my sources called them V5, as in "the ones following the V4 that they currently develop on". Therefore, it appears they are a rather major revision.


[Now, a detail that could be important, but I'm not at all sure of it, as what my sources told me isn't clear and relies on subjective interpretation (maybe they didn't fully understand what Nintendo said). Take this with caution, and it would be great if a developer working on the Wii U could clarify it.

- It seems these kits may be "near final"

Maybe Nintendo still has more advanced dev kits, but apparently not to the point of constituting a real new future revision (perhaps just small increments in numbering, like 5.1/5.2). There may be new dev kit shipments to come, but it could indicate a progressive stabilization of the hardware. It could also be a word confusion on my sources' part between "last" and "latest", or maybe that's what Nintendo thought when I gathered this info (a few weeks ago) but there will in fact be new revisions in the works because they decided to modify the components again, etc. So, a big grain of salt for this part.]

I hope we'll hear more about these dev kits, what kind of modifications were made to them, and how they perform.
 
This'll make loss-leading strategies even worse if you can't get your costs down after die shrinks.

Then again, the Xbox is still sitting at really only a $100 price cut from 7 years ago. You can probably almost make a case that you don't need that much in the way of cuts throughout the generation anymore, provided you start at $399.

And increasing yields alone could probably help a lot.
 
If XB3 uses eDRAM again, do you think they'll go for at least 1 TB/sec of bandwidth (internally), which would "only" be 4x the current XB360?

1 TB/s of internal bandwidth (or even quite a bit more, depending on the size of the pool) would be easily reachable, but there would be little point. Even with the X360, nothing can really make use of all of the internal BW. The reason it's *so* high is not that it's useful for it to be that high, but that even ridiculously wide internal busses are cheap.

For a modern GPU and a pool of 20-40 MB, 256-512 GB/s would be the point of diminishing returns, beyond which there is little gain. Of course, they went way, way past that line last time, so why not. :)
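The "wide internal busses are cheap" point is easy to see with a little arithmetic. A simplified sketch; the bus widths and clocks are illustrative, with only the X360-like case matching the oft-quoted 256 GB/s figure:

# Hedged sketch: on-die bandwidth is just bus width times clock.
# Widths/clocks are illustrative assumptions for a simple single-data-rate bus.

def internal_bw_gbps(bus_bits, clock_mhz):
    """Peak internal bandwidth in GB/s."""
    return bus_bits / 8 * clock_mhz * 1e6 / 1e9

print(internal_bw_gbps(4096, 500))   # X360-like eDRAM: 256 GB/s
print(internal_bw_gbps(16384, 500))  # 4x wider at the same clock: ~1 TB/s
# Going wide on-die mostly costs routing, not pins or board traces,
# which is why the number can be ridiculous without being expensive.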
 
Yeah, I'd be more concerned about the amount of eDRAM so that devs don't have to deal with tiling; they'd be more likely to use MSAA and thus actually make use of such high bandwidth (aside from MRTs and alpha blending).
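On the capacity point, the reason devs had to tile on 360 falls straight out of the framebuffer math. A quick sketch, assuming 4 bytes of color plus 4 bytes of depth/stencil per sample (real formats vary):

# Hedged sketch: framebuffer footprint vs. eDRAM capacity.
# Assumes 8 bytes per sample (color + depth/stencil); actual formats differ.

def framebuffer_mb(width, height, msaa_samples, bytes_per_sample=8):
    return width * height * msaa_samples * bytes_per_sample / (1024 ** 2)

print(framebuffer_mb(1280, 720, 4))   # ~28 MB: tiling needed with a 10 MB pool
print(framebuffer_mb(1920, 1080, 1))  # ~16 MB: even no-AA 1080p overflows 10 MB
print(framebuffer_mb(1920, 1080, 4))  # ~63 MB: what 1080p + 4xMSAA untiled would take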
 
If eDRAM were to be used again, why would it need to be only a write-only framebuffer? Why not make it more of an L3-style GPU cache where you can read, write, and store in it? It seems devs could come up with some pretty intelligent tiling systems if they wished, if that were the case. I know it would be larger as a result, but it would also be less of a single-purpose piece of silicon.
 
That's the only way I could see eDRAM working next gen: developers being allowed to use it or bypass it as fits their needs best, unlike this gen, where you had to use it on the 360.
 
I remember an old, old quote by Nvidia's Tony Tamasi about high-end games needing 3 TB/sec.

"Parallelism" is the keyword underscoring several semiconductor architectures this decade. Now it's the turn of the memory: Rambus announced the availability of "micro-threading" for XDR2 memory, the company's next generation memory technology. Memory clock speeds will catapult to 8 GHz, up from a current maximum of 4.8 GHz of XDR1.

"The bandwidth requirements of game platforms and graphical applications have been growing exponentially," Steven Woo, Rambus' senior principal engineer at Rambus, told Tom's Hardware Guide. "About every five or six years, it goes up by a factor of 10. PlayStation 3, for example, will have a memory bandwidth capability of 50 GByte per second." If this trend continues, projected Woo, a theoretical 2010 model "PlayStation 4" could require ten times the memory bandwidth as next year's PlayStation 3. A statistical projection made in 2004 by NVIDIA's Vice President of Technical Marketing, Tony Tamasi - cited by Woo - anticipates that a top-of-the-line 3D game could conceivably require memory bandwidth of 3 TByte per second.

http://www.tomshardware.com/news/xdr2-quintuple-memory-data-transfer-speeds-2007,1152.html
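Woo's "10x every five or six years" rule is just a compound growth curve; extrapolating from the article's 50 GB/s PS3 figure (pure illustrative arithmetic, not a prediction):

# Hedged sketch of the "10x every 5-6 years" bandwidth trend cited by Woo.
# Extrapolates from the article's 50 GB/s PS3 figure (circa 2005); not a forecast.

PS3_BW_GBPS   = 50.0
GROWTH_FACTOR = 10.0
PERIOD_YEARS  = 5.5   # midpoint of "five or six years"

def projected_bw(years_after_ps3):
    return PS3_BW_GBPS * GROWTH_FACTOR ** (years_after_ps3 / PERIOD_YEARS)

print(projected_bw(5.5))   # ~500 GB/s around 2010-2011
print(projected_bw(11.0))  # ~5 TB/s by roughly 2016, if the trend held
# Tamasi's 3 TB/s figure sits on this curve somewhere in the mid-2010s.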


I believe 3 TB/sec bandwidth could only be achieved by eDRAM, which I am a fan of.


Well, there is the Rambus Terabyte Bandwidth Initiative, but that's only 1 TB/sec. Perhaps a future version could be multi-TB/sec.
 
Everything I've read so far points to wide memory on an interposer (DDR4?) being a near-ideal solution: the memory would be low cost, and massive bandwidth should be easily achieved. I suppose latency would be much worse than eDRAM, though...
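The appeal of an interposer is the same "go wide instead of fast" trick, just off-die: with the bus widths an interposer allows, even modest per-pin rates add up. A rough sketch with assumed figures only:

# Hedged sketch: "wide and slow" interposer-style memory bandwidth.
# Bus widths and per-pin rates are assumptions for illustration, not specs.

def wide_bus_bw_gbps(bus_bits, per_pin_gbps):
    return bus_bits / 8 * per_pin_gbps

print(wide_bus_bw_gbps(256, 4.0))   # conventional 256-bit GDDR5: 128 GB/s
print(wide_bus_bw_gbps(1024, 1.6))  # 1024-bit interposer bus at DDR3-ish rates: ~205 GB/s
print(wide_bus_bw_gbps(4096, 2.0))  # 4096-bit stacked/interposer bus: ~1 TB/s
# The trade-off alluded to above: interposer cost and latency, versus the
# very short, very wide, cheap-per-bit links it enables.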
 