aaronspink
Veteran
Oh, they were only rumours. Obvious when one clicks links, but we haven't all got time to read every linked article.
Not just rumors, but old rumors about products that are currently shipping and do not match those rumors.
Is there any benefit to XDR in stacked form? I don't know how the bus format affects stacked access.
If they are going with stacked memory, isn't it completely irrelevant what is mass-produced and what isn't?
1.6 Gb/s * 128 / 8 = 25.6 GB/s. Math, it's hard.
How did you come up with the 1.6 Gb/s number? It should be 800 Mb/s, resulting in 12.8 GB/s.
FYI, DDR3-1600 is also known as PC3-12800.
PC memory is on a 64-bit bus.
1.6 Gb/s per bus pin * 64-bit bus / 8 bits/byte = 12.8 GB/s
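For reference, the arithmetic both posts are using is just per-pin data rate times bus width. A minimal sketch in Python; the 1.6 Gb/s per-pin figure for DDR3-1600 and the 64-bit vs. 128-bit bus widths come from the posts above, and this is theoretical peak only, ignoring efficiency and latency:

```python
# Theoretical peak DRAM bandwidth from per-pin data rate and bus width.
# Peak numbers only; real-world efficiency and latency are not modeled.

def peak_bandwidth_gb_s(per_pin_gbit_s: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte."""
    return per_pin_gbit_s * bus_width_bits / 8

# DDR3-1600 transfers 1.6 Gb/s per pin (800 MHz I/O clock, double data rate).
print(peak_bandwidth_gb_s(1.6, 64))   # typical PC channel, 64-bit bus -> 12.8 GB/s
print(peak_bandwidth_gb_s(1.6, 128))  # 128-bit bus (as in the post above) -> 25.6 GB/s
```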
As for the 10 times lower latency: I know it's not actually 10 times lower. I was aiming to have you produce a calculation resulting in a smart "look, it's not 10 times lower; it's only 3 times lower!" kind of post. I wanted you to realize the difference for yourself.
Cell would have been severely bottlenecked using DDR3 memory. Not to mention it did not exist for several years.

But GDDR3, with a similar clock rate and throughput, did. In fact, the GDDR3 on the PS3 is actually faster than the XDR, by a significant margin.
well that explains it!
General Question: Did Epic, and possibly others (e.g. Square Enix), really miss the target with their new engines/demos?
The original Samaritan demo ran on three GTX 580s (Fermi) but was scaled back to a single GTX 680 (Kepler) and was said to target about 2.5 TFLOPs of performance. Unreal Engine 4 seems to be aiming equally high, although they are saying things get "interesting" above 1 TFLOP. Square's Agni's Philosophy demo also seems to run on really beefy hardware. It seems the demos being shown require a good deal more performance than the rumored "Xbox 3" at over 1 TFLOP, and even a not-insignificant amount more than the rumored PS4 at 1.8 TFLOPs (a roughly 40% gap).
So why are developers aiming so high, especially with middleware, when they must know the hardware is going to lag far behind (if the rumors are true)?
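For what it's worth, the TFLOPs figures being thrown around here are just peak shader arithmetic: ALU count x clock x 2 FLOPs per fused multiply-add. A rough sketch using the publicly listed GTX 580/680 specs (real-world utilization is of course far lower), plus one hypothetical configuration for the rumored 1.8 TFLOPs figure:

```python
# Peak single-precision throughput behind the "X TFLOPs" numbers:
# shader (ALU) count x clock (GHz) x 2 FLOPs per fused multiply-add.
# Theoretical peak only; actual demo/game performance is well below this.

def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000.0

print(3 * peak_tflops(512, 1.544))   # 3x GTX 580 (Fermi, 1544 MHz shader clock): ~4.7 TFLOPs
print(peak_tflops(1536, 1.006))      # single GTX 680 (Kepler, 1006 MHz):         ~3.1 TFLOPs
# Hypothetical way to reach the rumored ~1.8 TFLOPs console figure
# (the ALU count and clock here are illustrative assumptions, not leaked specs):
print(peak_tflops(1152, 0.800))      # ~1.8 TFLOPs
```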
Wow, just wow. So that's the real story behind it... An amazing story if you ask me. You seem to be kind of peculiar judging from your searches.

Wow, that post really did make some news. Anywho, kind of funny how I found it considering I went to the patent application page and did a search on Nintendo and this was in the list.
Wow, just wow. So that's the real story behind it... An amazing story if you ask me. You seem to be kind of peculiar judging from your searches.
Also very observant, because normally people skim over legal papers or interpret them quickly after reading only a couple of lines.
They say necessity is the mother of invention, but your search didn't seem to have anything to do with inventiveness, nor with something people were desperately trying to find. Maybe sheer chance would be a better description.
So why are developers aiming so high, especially with middleware, when they must know the hardware is going to lag far behind (if the rumors are true)?

I agree with Sonic. They are going to want to show off their tech - who wants to aim lower than their rivals and show weaker graphics, even if the targets are more realistic? And they can always scale down. And they can release to PC, meaning their engines aren't misleading if the consoles can't handle them. I don't see that aiming conservatively with their engines would have done them any favours no matter what hardware comes out.
1.2 TFLOP CPU

Top-performing 8-core Xeon server CPUs are a long way from that kind of peak performance (even when running pure AVX-vectorized code). Not even Haswell is expected to reach that kind of peak performance. So I wouldn't hold out much hope for that rumor (no matter how the flops are calculated).
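For comparison, the same kind of back-of-envelope peak math on the CPU side. A sketch only: the 3 GHz clock is a round illustrative number, and this assumes perfectly packed AVX single-precision code:

```python
# Peak single-precision FLOPs for a CPU: cores x SP FLOPs per cycle x clock (GHz).
# Theoretical maximum only, assuming fully vectorized code and no throttling.

def cpu_peak_gflops(cores: int, flops_per_cycle: int, clock_ghz: float) -> float:
    return cores * flops_per_cycle * clock_ghz

# Sandy Bridge-class Xeon: separate 256-bit AVX add and multiply ports
# give 16 SP FLOPs per cycle per core.
print(cpu_peak_gflops(8, 16, 3.0))   # ~384 GFLOPs
# Haswell: two 256-bit FMA units give 32 SP FLOPs per cycle per core.
print(cpu_peak_gflops(8, 32, 3.0))   # ~768 GFLOPs -- still well short of 1.2 TFLOPs
```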