anexanhume
Veteran
8GB along with Cape Verde seems so odd. They tend to skimp on RAM, yet 8GB is a hefty amount even by today's standards. Perhaps there is some truth to the dual-but-not-Crossfire GPUs rumor from back in April?
I ask: who thought it was a good idea to put a guy from the failure called Zune in a high position in Xbox planning...
AI can be parallelized very well. Let's say you have, for example, 100+ enemies (we are talking about a next-gen game, after all). The shared part of the game world can be considered static for the duration of the frame (except for the stages where updates are applied). All AI characters can do their pathfinding requests, visibility checks (ray casts) and decision making concurrently. These operations do not modify shared data, and thus require no synchronization whatsoever (multiple read accesses to the same data structures require no synchronization).
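The read-only-world pattern described above can be sketched like this. The `World` and `Agent` types here are hypothetical minimal stand-ins (a real engine would do navmesh queries and ray casts where this sketch just reads an array); the point is that every thread only reads the shared world and writes to its own agent's slot, so no locks are needed:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical stand-in for the shared game world. It is treated as
// immutable for the duration of the frame, so concurrent reads are safe.
struct World {
    std::vector<int> navCost;  // stand-in for navmesh / visibility data
};

struct Agent {
    std::size_t pos = 0;
    int decision = 0;  // written only by this agent's own update
};

// Update all agents in parallel. Each worker reads the shared world and
// writes only to its own disjoint range of agents, so no synchronization
// beyond the final join() is required.
void updateAgents(const World& world, std::vector<Agent>& agents) {
    const unsigned hw = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (agents.size() + hw - 1) / hw;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < hw; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end = std::min(agents.size(), begin + chunk);
        if (begin >= end) break;
        pool.emplace_back([&world, &agents, begin, end] {
            for (std::size_t i = begin; i < end; ++i) {
                // "Pathfinding"/"visibility" stand-in: pure reads of world.
                agents[i].decision =
                    world.navCost[agents[i].pos % world.navCost.size()];
            }
        });
    }
    for (auto& th : pool) th.join();
}
```

The decisions are then applied to the shared world in a separate, serial (or otherwise synchronized) update stage, matching the "except for the stages where updates are applied" caveat above.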
AI cooperating at 60 Hz time steps is plenty fast (which from a simulation point of view means they are completely independent), unless you are trying to simulate nearly instantaneously communicating hunter-killer robots. The edge-case instabilities of timestep simulation in things like routing are an opportunity to improve your AI model and get more realistic behaviour (anyone who has been in traffic knows that humans have edge-case instabilities in routing as well).
There are rather major limitations, though. The shared data has to be very small in order to fit into core-local memory. The AI characters can do independent decision making only as long as their decisions do not depend on what any other AI is doing. Not to mention that AI is fundamentally about decision making, i.e. branching, which generally wreaks havoc on very parallel architectures with their small local memories, long pipelines and typically lightweight branch prediction/handling hardware.
So AI parallelizes nicely as long as you want to do comparatively trivial stuff on small data sets. As usual.
That's not to say it's useless. I'm just pointing out some limitations for the benefit of those who do not have much personal experience with parallel codes.
Performance != TF. What I'm scared of is that the 4x-6x graphics performance figure takes efficiency gains into account, and thus means a lot fewer flops.
Hopefully, this document was early and they got scared into adding more compute power.
Yeah, assuming GCN, the scalar architecture will yield very high efficiency compared to the scalar+vector architecture in the XB360, especially for more complex pixel shaders. A lot of programmers will be surprised just how much more mileage they will get per flop.
I really wonder what the overhead for GPU compute / AMP will be, as that GPU is going to be taxed.
Obviously we are still working from early, vague details, but based on those details my non-fictional take looks like this.
Developers: "Give us more power."
MS: "We'll double the memory like last time."
Developers: "Not good enough."
MS: "Deal with it."
Consumer: MS, give me morz GPUz!
MS: We gave you morz RAMz.
Consumer: We needz more GPUz!
MS: Deal with it.
Consumer: Hmmm I wonderz what Sony has under the hood... morz GPUz!
8x the raw power of the GPU is ~2TF. However, 8x the raw power with a GCN-class architecture would be much, much more than 8x the real performance. Efficiency has *at least* doubled, if not much more. I'd expect "8x performance" to mean something like a 1TF chip.
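For reference, a back-of-the-envelope check of where that ~2TF figure comes from, assuming the commonly quoted ~240 GFLOPS peak for the Xbox 360's Xenos GPU (an assumption on my part, not stated in the thread):

```cpp
// Assumption: Xenos (Xbox 360 GPU) peak throughput is ~240 GFLOPS,
// the figure commonly cited for that chip.
constexpr double kXenosGflops = 240.0;

// 8x the raw Xenos throughput, in GFLOPS.
constexpr double kEightXGflops = 8.0 * kXenosGflops;  // 1920 GFLOPS, i.e. ~1.92 TF
```

So "8x raw power" lands at roughly 1.92 TF, which is where the ~2TF estimate comes from; the efficiency argument above is what pulls the required raw flops back down toward ~1TF.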