I don't really see how they plan to more than double performance using AVX3 unless they either double core count or double clock rates. I think that lends credibility to the rumor of a 72-core custom Atom-based part; a few more cores clocked much higher seems more doable (and desirable) than putting 120+...
This line from the Haswell article was interesting:
So, if Intel's 50GB/s is equal to 100-130GB/s because of its low latency and cache perks, and assuming Xbone's is a cache as well, does that mean Xbone's 102.4GB/s is equal to 205-270GB/s? :wink:
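Quick back-of-envelope on that scaling (hedged: it assumes the "effective bandwidth" multiplier claimed for Intel's eDRAM carries straight over to the Xbone's ESRAM, which is a big assumption):

```python
# Apply the 2x-2.6x "effective" factor claimed for Intel's 50GB/s eDRAM
# to the Xbone's 102.4GB/s ESRAM figure.
raw_intel = 50.0                        # GB/s, Intel's quoted raw bandwidth
factor_low = 100.0 / raw_intel          # 2.0
factor_high = 130.0 / raw_intel         # 2.6

raw_xbone = 102.4                       # GB/s, ESRAM raw bandwidth
print(round(raw_xbone * factor_low, 2))   # 204.8
print(round(raw_xbone * factor_high, 2))  # 266.24
```

Which lines up with the 205-270GB/s range, give or take rounding.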
If that happened they would rule the console world, most likely. However, the odds of Apple and Nintendo teaming up are worse than the odds of getting hit with lightning and a comet at the same time.
But how many developers have taken advantage of that in games? Pretty much none, IIRC. He has a valid point, it will be the first time games were made for a large pool of RAM from the beginning.
...And risk production in late 2013, just like risk production of 28nm was late 2010, two years from actual products, right? Wait, I've heard this song and dance before...
That's the reality of it. Moore's law is slowing, soon to stop. Each node is getting harder and more expensive than the last. Most programming doesn't even need more processing power. Software needs to catch up and find something interesting to do with more power, otherwise, why do you need...
Good AI isn't a processing-power problem. Some of the best AI was made in nearly decade-old games and can run on hardware of that era. Good AI is a software coding problem. Which really means it's a "how much time and money developers/publishers want to waste on AI" problem.
Your point is moot: you couldn't move fast enough to cover your eyes in time, so you broke your own argument. :wink:
Let's say you are moving at 300 feet per second, or 204.55 miles per hour, in your F1 car. Turn your head 90 degrees to the left and tell me how many details you can pick out in...
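For what it's worth, the conversion checks out (plain unit arithmetic, nothing assumed beyond 5280 feet to the mile):

```python
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

fps = 300                                     # feet per second
mph = fps * SECONDS_PER_HOUR / FEET_PER_MILE  # ft/s -> mi/h
print(round(mph, 2))                          # 204.55
```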
Been trying to figure this out as well; so far all I can come up with is that with GTX 680 cards, 4GB models use 5-8 watts more than similar 2GB models, so I suspect that power scales pretty close to linearly with the number of chips/density.
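One trivial way to read that observation (hedged: it assumes the whole 5-8W delta comes from the extra 2GB of memory and nothing else differs between the cards):

```python
# GTX 680: 4GB cards observed drawing ~5-8W more than similar 2GB cards.
extra_watts_low, extra_watts_high = 5.0, 8.0
extra_gb = 4 - 2                       # additional memory on the 4GB model

# Implied per-GB cost if the scaling really is linear:
print(extra_watts_low / extra_gb)      # 2.5 W/GB
print(extra_watts_high / extra_gb)     # 4.0 W/GB
```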
Exactly. Now, they might have a 30nm "class" shrink, which may account for 20-25% savings, but taking other factors into account I think that ends up back where you started.
*I say "class" because IIRC 40nm "class" was actually 46nm, and 30nm "class" was 39nm, just off the top of...
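If those geometries are right, the ideal area scaling between them is easy to check (a sketch only; real power/area savings depend on much more than linear dimension, so treat this as the upper bound on the geometric part):

```python
# "40nm class" = 46nm, "30nm class" = 39nm (per the recollection above).
old_nm, new_nm = 46.0, 39.0

area_ratio = (new_nm / old_nm) ** 2    # ideal area scales with (feature size)^2
savings = 1.0 - area_ratio
print(round(savings * 100, 1))         # 28.1 -> ~28% ideal area reduction
```

Same ballpark as the 20-25% savings estimate.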