Predict: The Next Generation Console Tech

The past few pages have been sadly hilarious indeed.
 
Is there any benefit to XDR in stacked form? I don't know how the bus format affects stacked access.

No, only downsides. With either a flip stack or a TSV stack you aren't limited in any way by the number of connections, for any reasonable number of connections. Therefore what you want to concentrate on is power and area. In both cases high-speed differential signaling is a detriment. Most of the stacked memory designs are looking at 512-1024-bit buses running at relatively slow frequencies in the range of 0.25-1 Gb/s per pin, with the outliers in the <= 2 Gb/s range, all single-ended.
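To put those bus widths and per-pin rates in perspective, here is a rough back-of-the-envelope sketch. The specific widths and data rates below are illustrative assumptions on my part, not figures from any particular stacked-memory spec:

# Peak bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte.
# All figures below are illustrative assumptions, not vendor specs.

def peak_bandwidth_gb_s(bus_width_bits, per_pin_gbps):
    return bus_width_bits * per_pin_gbps / 8

# Wide, slow, single-ended stacked bus: 1024 bits at 0.5 Gb/s per pin
stacked = peak_bandwidth_gb_s(1024, 0.5)   # 64.0 GB/s

# Narrow, fast, differential external bus: 64 bits at 3.2 Gb/s per pin (XDR-like)
external = peak_bandwidth_gb_s(64, 3.2)    # 25.6 GB/s

print(f"stacked:  {stacked:.1f} GB/s")
print(f"external: {external:.1f} GB/s")

The point being that once the stack gives you thousands of cheap connections, a wide, slow, single-ended bus delivers the bandwidth at far lower per-bit power than driving a few pins very fast.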
 
If they are going with stacked memory, isn't it completely irrelevant what is mass-produced and what isn't?

No, even for stacked memory, you would like to be using a commodity part. Ideally you would want a standard interface (in pins, signaling technology, and pin/ball geometry, with equal die size in the best of all possible worlds, but you'll settle for different die sizes) such that the product is available from multiple vendors and there is price competition.
 
1.6 Gb/s * 128 / 8 = 25.6 GB/s. Math, it's hard.

How did you come up with the 1.6 Gb/s number? It should be 800 Mb/s, resulting in 12.8 GB/s.
FYI, DDR3-1600 is also known as PC3-12800.

As for the 10 times lower latency; I know it's not actually 10 times lower, I was aiming to have you produce a calculation resulting in a smart
"look it's not 10 times lower; it's only 3 times lower!!"
-kind of post. I wanted you to realize the difference for yourself.

Cell would have been severely bottlenecked using DDR3 memory. Not to mention it did not exist for several years
 
How did you come up with the 1.6 Gb/s number? It should be 800 Mb/s, resulting in 12.8 GB/s.
FYI, DDR3-1600 is also known as PC3-12800.

I really cannot believe that you actually typed that! Where does the 1.6 Gb/s number come from with DDR3-1600? Gee, I wonder...

As for the 10 times lower latency; I know it's not actually 10 times lower, I was aiming to have you produce a calculation resulting in a smart
"look it's not 10 times lower; it's only 3 times lower!!"
-kind of post. I wanted you to realize the difference for yourself.

Rather unlikely at this point. Needless to say, you were wrong, very wrong. In fact you basically had the latency advantages reversed. But nice try at saving face.
 
Cell would have been severely bottlenecked using DDR3 memory. Not to mention it did not exist for several years
But GDDR3, with a similar clock rate and throughput, did. In fact, the GDDR3 on the PS3 is actually faster than the XDR, by a significant margin.
 
well that explains it!

Just for further clarification, dual-channel mode, which the majority of PCs have used for a long time now, effectively makes the bus 128 bits wide. This is where the 25.6 GB/s comes from. So yes, DDR3 is a much better solution than XDR in this day and age, and GDDR3 was a slightly better solution in 2005/6 because of slightly lower latency and cost. The 360 has an all-around better memory system than the PS3.
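For anyone following along at home, here's a quick sketch of where those figures come from (standard DDR3 naming; the 128-bit width is the dual-channel case mentioned above):

# DDR3-1600 transfers 1600 MT/s per pin; "PC3-12800" is the 64-bit module bandwidth in MB/s.

def peak_bandwidth_gb_s(transfers_per_s_millions, bus_width_bits):
    # transfers per second * bytes per transfer
    return transfers_per_s_millions * 1e6 * (bus_width_bits / 8) / 1e9

single_channel = peak_bandwidth_gb_s(1600, 64)    # 12.8 GB/s -> hence PC3-12800
dual_channel   = peak_bandwidth_gb_s(1600, 128)   # 25.6 GB/s
ps3_xdr        = peak_bandwidth_gb_s(3200, 64)    # 25.6 GB/s, the PS3's XDR for comparison

print(single_channel, dual_channel, ps3_xdr)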

As for the PS4, I personally think a 2 GB GDDR5 and 4 GB DDR3 setup would be perfect! Maybe not cost-effective, but more so than 4 GB/2 GB, and surely not much more costly than 2 GB/2 GB with densities where they are... Unless they are happy with a 64-bit bus; now that really would be a step back from XDR.
 
General Question: Did Epic, and possibly others (e.g. Square Enix), really miss the target with their new engines/demos?

The original Samaritan demo ran on three GTX 580s (Fermi) but was scaled back to a single GTX 680 (Kepler) and was said to target about 2.5 TFLOPs of performance. Unreal Engine 4 seems to be aiming equally high, although they are saying things get "interesting" above 1 TFLOPs. Square's Agni's Philosophy demo also seems to run on really beefy hardware. It seems the demos being shown require considerably more performance than the rumored "Xbox 3" at just over 1 TFLOPs, and even a not insignificant amount more than the rumored PS4 at 1.8 TFLOPs (a roughly 40% gap).
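As a rough sanity check on that gap (the TFLOPs figures are the rumored/claimed numbers above; the exact "over 1 TFLOPs" value for the Xbox 3 is my own assumption):

# Ratio of the demo performance target to the rumored console GPUs.
samaritan_target = 2.5   # TFLOPs, claimed single-GTX-680 target for Samaritan
ps4_rumor        = 1.8   # TFLOPs, rumored PS4
xbox3_rumor      = 1.2   # TFLOPs, "over 1 TFLOPs" rumor -- assumed value

print(samaritan_target / ps4_rumor)    # ~1.39, i.e. the target sits ~40% above the PS4 rumor
print(samaritan_target / xbox3_rumor)  # ~2.08, i.e. roughly double the Xbox 3 rumor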

So why are developers aiming so high, especially with middleware, when they must know the hardware is going to lag far behind (if the rumors are true)?
 
Maybe Microsoft is deliberately trying to keep the speculation low-key, i.e. it is better if people have low expectations and are pleasantly surprised than if they have high expectations and end up disappointed.
 
General Question: Did Epic, and possibly others (e.g. Square Enix), really miss the target with their new engines/demos?

The original Samaritan demo ran on three GTX 580s (Fermi) but was scaled back to a single GTX 680 (Kepler) and was said to target about 2.5 TFLOPs of performance. Unreal Engine 4 seems to be aiming equally high, although they are saying things get "interesting" above 1 TFLOPs. Square's Agni's Philosophy demo also seems to run on really beefy hardware. It seems the demos being shown require considerably more performance than the rumored "Xbox 3" at just over 1 TFLOPs, and even a not insignificant amount more than the rumored PS4 at 1.8 TFLOPs (a roughly 40% gap).

So why are developers aiming so high, especially with middleware, when they must know the hardware is going to lag far behind (if the rumors are true)?



It's probably because those companies wanted to showcase the tech going into their new engines. It might have something to do with being shown hardware that is powerful enough to handle their next-gen stuff, whether it be from Sony, MS, or Apple. It could also be that they plan on giving a big push to the PC side of games.
 
Wow, that post really did make some news. Anywho, kind of funny how I found it considering I went to the patent application page and did a search on Nintendo and this was in the list.
Wow, just wow. So that's the real story behind it... An amazing story if you ask me. You seem to be kind of peculiar judging from your searches.

Also very observant, because normally people skim over legal papers or interpret them quickly after reading a couple of lines.

They say necessity is the mother of invention, but your search didn't seem to have anything to do with inventiveness, nor with something people were desperate to find. Maybe sheer chance would be a better description.
 
Wow, just wow. So that's the real story behind it... An amazing story if you ask me. You seem to be kind of peculiar judging from your searches.

Also very observant, because normally people skim over legal papers or interpret them quickly after reading a couple of lines.

They say necessity is the mother of invention, but your search didn't seem to have anything to do with inventiveness, nor with something people were desperate to find. Maybe sheer chance would be a better description.

Probability may have been on my side, considering I had put in all different types of searches over the past couple of weeks. Nothing relevant came from things like Sony and GPU, or MS and AMD, or from specific terms like eDRAM and whatnot. Putting in Nintendo by itself did the trick. Chance, yes, but after so many searches one is bound to cross something sooner or later.
 
So why are developers aiming so high, especially with middleware, when they must know the hardware is going to lag far behind (if the rumors are true)?
I agree with Sonic. They are going to want to show off their tech - who wants to aim lower than their rivals and show weaker graphics, even if it's a more realistic target? And they can always scale down. And they can release to PC, meaning their engines aren't misleading if the consoles can't handle them. I don't see that aiming conservatively with their engines would have done them any favours no matter what hardware comes out.
 
Well... how likely is it that, all of a sudden, we see "massive improvements" in PC games again? I mean, it has been like this (i.e. badly ported console games with no added asset work) for quite a while now. This won't change. At least I don't believe it will, unless the new consoles are severely underpowered compared to a mid/low-range PC at the time the consoles come out.
 
I've read on another site that this pastebin info could be relatively accurate, since it was the first to reveal the codenames of both the PS4 and the Xbox 3.

http://pastebin.com/j4jVaUv0

Basically, the guy claimed the devs were not too happy, as the Xbox 3 was going to be considerably more powerful than the PS4, with an 8-core, 64-bit, 1.2 TFLOP CPU and 4 GB of RAM. He also said, way before vgleaks, that the PS4 was going to have a 32-bit 4-core processor and 2 GB of RAM.

Is 1.2 TFLOPs a good number for a CPU, guys?
 
1.2 TFLOP CPU
Top-performing 8-core Xeon server CPUs are a long way from that kind of peak performance (even when running pure AVX vectorized code). Not even Haswell is expected to reach that kind of peak performance. So I wouldn't hold out much hope for that rumor (no matter how the flops are calculated) :)

Also, IBM's 16-core (64-thread) supercomputer CPU (PPC A2) has a peak of 204.8 GFLOP/s. Having six times more CPU power in a console would surely be fun... but highly unlikely.
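For reference, here's how that 204.8 GFLOP/s figure falls out, and what an 8-core console CPU would need per cycle to hit the rumored 1.2 TFLOPs (the 3.2 GHz clock is a hypothetical assumption just to make the point):

# Peak FLOPS = cores * clock (GHz) * FLOPs per core per cycle.

def peak_gflops(cores, clock_ghz, flops_per_core_per_cycle):
    return cores * clock_ghz * flops_per_core_per_cycle

# BlueGene/Q PPC A2: 16 cores at 1.6 GHz with 4-wide FMA = 8 FLOPs per core per cycle
ppc_a2 = peak_gflops(16, 1.6, 8)   # 204.8 GFLOPS, matching the figure above

# FLOPs per core per cycle an 8-core CPU would need at a hypothetical 3.2 GHz to reach 1.2 TFLOPs
needed = 1200 / (8 * 3.2)          # ~47 FLOPs per core per cycle
print(ppc_a2, needed)

Roughly 47 FLOPs per core per cycle is far beyond anything shipping or announced in a console-class CPU, which is why the rumor looks implausible however the flops are counted.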
 