Predict: The Next Generation Console Tech

8GB along with Cape Verde seems so odd. They tend to skimp on RAM, yet 8GB is a hefty amount even by today's standards. Perhaps there is some truth to the dual-GPU-but-not-CrossFire rumor from back in April?
 
I ask: who thought it was a good idea to put a guy from the failure called Zune in a high position in Xbox planning...

Who said he was in a high position? Almost any senior PM at MS could have created/given that presentation. Presentations like this are a dime a dozen at MS; unless you know the context of why it was given, it's hard to weigh the value of the content.

FWIW, I also don't like to equate individual competence with project success; they are rarely related on large teams. There were probably a lot of smart people on the Zune project. I know there were a LOT of smart people on WinFS, and that more or less managed to kill Vista as it was back then.
 
AI can be parallelized very well. Let's say you have, for example, 100+ enemies (we are talking about a next-gen game, after all). The shared part of the game world can be considered static for the duration of the frame (except for the stages where updates are applied). All AI characters can do their pathfinding requests, visibility checks (ray casts) and decision making concurrently. These operations do not modify shared data, and thus require no synchronization whatsoever (multiple read accesses to the same data structures require no synchronization).

There are rather major limitations though. The shared data has to be very small in order to fit into core-local memory. The AI characters can do independent decision making only as long as their decisions do not depend on what any other AI is doing. Not to mention that AI is fundamentally about decision making, i.e. branching, which generally wreaks havoc on very parallel architectures with their small local memories, long pipelines and typically lightweight branch prediction/handling hardware.

So AI parallelizes nicely as long as you want to do comparatively trivial stuff on small data sets. As usual.
That's not to say it's useless. I'm just pointing out some limitations for the benefit of those who do not have much personal experience with parallel codes.
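
For those readers, a minimal sketch of the "read-only shared world, private writes" pattern described above might help (the Agent/World types and decide() below are hypothetical placeholders, not any engine's API): each worker thread only reads the shared world, which is treated as immutable for the frame, and only writes its own agents, so no synchronization is needed.

```cpp
// Minimal sketch, assuming a world that is immutable for the frame and agents
// that each own their output slot. All names here are hypothetical.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct World { /* nav mesh, static geometry, ... read-only during the frame */ };
struct Agent { float x = 0.0f, y = 0.0f; int decision = 0; };

// Stand-in for pathfinding / ray casts / decision logic: reads only, never
// writes to shared state.
int decide(const Agent& a, const World& /*w*/) { return a.x < 0.0f ? 1 : 2; }

void update_ai(std::vector<Agent>& agents, const World& world, unsigned workers)
{
    std::vector<std::thread> pool;
    const std::size_t chunk = (agents.size() + workers - 1) / workers;
    for (unsigned t = 0; t < workers; ++t) {
        pool.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end   = std::min(agents.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                agents[i].decision = decide(agents[i], world); // own slot only
        });
    }
    for (auto& th : pool) th.join();
}
```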
 
AI cooperating at 60 Hz time steps is plenty fast enough (which from a simulation point of view means they are completely independent). Unless you are trying to simulate nearly instantaneously communicating hunter killer robots. The edge case instabilities with timestep simulation in things like routing are an opportunity to improve your AI model and get more realistic behaviour (anyone who has been in traffic knows that humans have edge case instabilities in routing as well).
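
One way to picture the "independent within a 60 Hz tick" idea is double-buffered agent state; the sketch below uses hypothetical types (not any engine's API): every agent reads last tick's snapshot and writes the next one, so update order within a tick cannot matter.

```cpp
// Minimal double-buffering sketch: within a tick everyone reads the previous
// snapshot only, so agents are effectively independent for that time step.
#include <cstddef>
#include <utility>
#include <vector>

struct AgentState { float x = 0.0f, vx = 0.0f; };

// Decision/step function: may inspect *last tick's* state of every other agent.
AgentState step(const AgentState& self, const std::vector<AgentState>& /*others*/)
{
    AgentState next = self;
    next.x += self.vx * (1.0f / 60.0f); // fixed 60 Hz time step
    return next;
}

void tick(std::vector<AgentState>& prev, std::vector<AgentState>& next)
{
    for (std::size_t i = 0; i < prev.size(); ++i)
        next[i] = step(prev[i], prev); // reads prev only, writes next[i] only
    std::swap(prev, next);             // publish the new snapshot
}
```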
 
AI cooperating at 60 Hz time steps is plenty fast enough (which from a simulation point of view means they are completely independent). Unless you are trying to simulate nearly instantaneously communicating hunter killer robots. The edge case instabilities with timestep simulation in things like routing are an opportunity to improve your AI model and get more realistic behaviour (anyone who has been in traffic knows that humans have edge case instabilities in routing as well).


Pick a lane already dammit!!! :p:LOL:
 
There are rather major limitations though. The shared data has to be very small in order to fit into core-local memory. The AI characters can do independent decision making only as long as their decisions do not depend on what any other AI is doing. Not to mention that AI is fundamentally about decision making, i.e. branching, which generally wreaks havoc on very parallel architectures with their small local memories, long pipelines and typically lightweight branch prediction/handling hardware.

So AI parallelizes nicely as long as you want to do comparatively trivial stuff on small data sets. As usual.
That's not to say it's useless. I'm just pointing out some limitations for the benefit of those who do not have much personal experience with parallel codes.

Can you give me a real-life example where you get stuck? Large shared data is similar to a framebuffer, but that doesn't stop it being processed in parallel. You can have one step in the process isolate which AIs are close enough to which other AIs to group them together for chunking; you can bit-flag lots of state and decisions on AIs and run them past decision logic, and distribute the data according to the key flags that were set. There's not a lot you cannot do - most of us just aren't used to it.
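
As a rough sketch of the bit-flagging idea (all flag names and thresholds below are made up for illustration): one pass writes compact flags per AI, a second pass turns flag patterns into decisions, and both passes only touch their own agent's slot, so they chunk and parallelize trivially.

```cpp
// Two-pass flag/decision sketch with hypothetical flags and thresholds.
#include <cstdint>
#include <vector>

enum : std::uint32_t {
    ENEMY_VISIBLE = 1u << 0,
    LOW_HEALTH    = 1u << 1,
    HAS_AMMO      = 1u << 2,
};

struct Agent {
    float health    = 100.0f;
    bool  seesEnemy = false;
    int   ammo      = 0;
    std::uint32_t flags = 0;
    int   decision  = 0;
};

void flag_pass(std::vector<Agent>& agents)      // could run per-chunk in parallel
{
    for (auto& a : agents) {
        a.flags = 0;
        if (a.seesEnemy)      a.flags |= ENEMY_VISIBLE;
        if (a.health < 25.0f) a.flags |= LOW_HEALTH;
        if (a.ammo > 0)       a.flags |= HAS_AMMO;
    }
}

void decision_pass(std::vector<Agent>& agents)  // pure function of the flags
{
    for (auto& a : agents) {
        if ((a.flags & (ENEMY_VISIBLE | LOW_HEALTH)) == (ENEMY_VISIBLE | LOW_HEALTH))
            a.decision = 0;                                // retreat
        else if (a.flags & ENEMY_VISIBLE)
            a.decision = (a.flags & HAS_AMMO) ? 1 : 2;     // shoot or melee
        else
            a.decision = 3;                                // patrol
    }
}
```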
 
AI cooperating at 60 Hz time steps is plenty fast enough (which from a simulation point of view means they are completely independent). Unless you are trying to simulate nearly instantaneously communicating hunter killer robots. The edge case instabilities with timestep simulation in things like routing are an opportunity to improve your AI model and get more realistic behaviour (anyone who has been in traffic knows that humans have edge case instabilities in routing as well).

Wouldn't the O(n²) nature of round-robin communication through all pairs of AIs be the big show-stopper, rather than how often they need to communicate?
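
For scale, a quick back-of-envelope of that all-pairs cost (pure arithmetic, nothing system-specific assumed):

```cpp
// n agents talking pairwise is n*(n-1)/2 exchanges per tick: quadratic growth.
#include <cstdio>

int main()
{
    for (int n : {10, 100, 1000}) {
        const long long pairs = 1LL * n * (n - 1) / 2;
        std::printf("%5d agents -> %9lld pairs/tick, %12lld pairs/s at 60 Hz\n",
                    n, pairs, pairs * 60);
    }
    return 0;
}
```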
 
Performance != TF. What I'm scared of is that the 4x-6x graphics performance figure takes efficiency gains into account, and thus means a lot fewer flops.

Hopefully, this document was early and they got scared into adding more compute power.

Yeah, assuming GCN, the per-lane scalar architecture will yield very high efficiency, especially for more complex pixel shaders, compared to the vec4+scalar architecture in the XB360. A lot of programmers will be surprised just how much more mileage they will get per flop.

See, this is where I am very disturbed by the GPU rumors. It isn't so much the graphics alone, but it is the major computational gains GPUs have made and the breadth of problems they can solve.

The workloads a GCN-like GPU can handle compared to Xenos are night and day; by contrast, what more do you get from a ballooning CPU budget? Definitely not as much as you get from the GPU side!

The benefit of a GPU-centric design is that, up front, the next gen will have a "next gen look", and over the product life span there will be substantial performance in the reservoir to explore and exploit new techniques. A CPU-centric design throws a whole lot of transistors at very modest computational gains.

I thought MS was at the forefront of investing in GPGPU and saw it as a disruptive force for the middling CPU market. If they throw an 800 GFLOPS-1 TFLOPS GPU into a market full of True HD 720p and Full HD 1080p displays, with people looking for next-gen differentiation, I really wonder where the excess headroom for GPU compute / AMP will be, as that GPU is going to be taxed.

Of course MS and Sony also have CPU makers in their ears trying to get silicon budgets shifted their way as well so I am sure there is some politicking behind closed doors.
 
I really wonder where the excess headroom for GPU compute / AMP will be, as that GPU is going to be taxed.

I'll make an observation here: I would guess that even if you were 100% utilizing your GPU to render a scene, something like 20 to 60% of the ALU flops would be going unused.
ALUs aren't used at all when you're rendering shadows, and they are grossly underutilized when rendering post effects. If you do a deferred renderer, then they are underutilized when you lay down the initial pass.

It's hard to say how much compute you could do without impacting rendering, and a lot depends on the mix of texture units, ROPs, etc. relative to ALUs, but I would guess it's a non-trivial amount.
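
To put rough numbers on that, here is a back-of-envelope with assumed (illustrative, not measured) per-pass ALU utilization figures; even a GPU that is "busy" all frame can leave a big slice of its ALU throughput idle:

```cpp
// Back-of-envelope: frame split into passes with assumed ALU utilization each.
#include <cstdio>

int main()
{
    const double peak_gflops = 1000.0;  // assume a 1 TFLOP part
    // { fraction of frame time, assumed ALU utilization during that pass }
    const double passes[][2] = {
        { 0.25, 0.10 },  // shadow maps: mostly rasterizer/ROP bound
        { 0.50, 0.70 },  // main geometry + lighting
        { 0.25, 0.40 },  // post effects: mostly bandwidth bound
    };
    double used = 0.0;
    for (const auto& p : passes)
        used += p[0] * p[1] * peak_gflops;
    std::printf("ALU throughput actually used: ~%.0f of %.0f GFLOPS (~%.0f%% idle)\n",
                used, peak_gflops, 100.0 * (1.0 - used / peak_gflops));
    return 0;
}
```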
 
ERP, as a developer, what is your view of GPU Compute? Especially as we transition from DX9.x hardware like Xenos and RSX.

Do you see compute, in game development, taking on more and more tasks that were traditionally the domain of the CPU only; is it too limited to be worth the investment; or is it too early to say?

As a developer, would you like to see a shift in silicon resources toward compute, or do you think there are still a lot of things a proper many-core CPU could do if it were a development baseline (assuming we have seen the industry stall due to the low core count of the 360 and most PCs)?

What should we as fans, enthusiasts, and consumers be looking forward to, what should excite us, and what marketing pitfalls should we be wary of (e.g. macho flops)? I am sure in the next 12 months we will see the emergence of the "next PR war", so an educated heads-up on the 'next big number' (raw peak MIPS, MHz, polygons, flops, cores, etc.) that we need to avoid being enchanted by would be welcome.

As a developer, what are the two biggest issues you would like to see new platforms devote resources to and aid with?
 
Well I don't work on a game team anymore, so you should take my opinion with a huge grain of salt.

Compute certainly has its place, but it's actually hard to write performant, non-trivial compute jobs.
Part of that today is tools: without enough information to determine why your compute job is running slowly, there is a lot of guesswork involved.

I think next gen's challenges are going to be a lot like this gen's were; it's as much about dealing with team size growth and production issues as it is about technology. What's the right balance for geometry, how complex should your shaders be, etc., etc.
3D graphics algorithms will continue to move forwards, and I think you'll see a big step forwards as the generation progresses as a result. I think compute will be a big part of that.
 
8GB along with Cape Verde seems so odd. They tend to skimp on RAM, yet 8GB is a hefty amount even by today's standards. Perhaps there is some truth to the dual-GPU-but-not-CrossFire rumor from back in April?

Obviously we are still working on early, vague details, but based on those details my non-fictional take looks like this.

Developers: "Give us more power."

MS: "We'll double the memory like last time."

Developers: "Not good enough."

MS: "Deal with it."
 
Obviously we are still working on early, vague details, but based on those details my non-fictional take looks like this.

Developers: "Give us more power."

MS: "We'll double the memory like last time."

Developers: "Not good enough."

MS: "Deal with it."

Devs ask for more RAM; DICE want 8GB, for example.
 
Consumer: MS, give me morz GPUz!

MS: We gave you morz RAMz.

Consumer: We needz more GPUz!

MS: Deal with it.

Consumer: Hmmm I wonderz what Sony has under the hood... morz GPUz!

(attached image: gpuz.jpg)
 

I'll take fancy shaders and lighting over texture resolution and loading times any day of the week. Someone should contact that GAF guy who knew the Wii U specs and tell him to take a closer look, because a meltdown is about to happen in the near future :LOL:

In the end it could end up like this...

Developers : Give us more power MS!

MS : Here is 8 gigs of the cheapest RAM available.

Developers : Huh... It's cool, I guess. Give us more graphics processing power.

MS : NO! Here is 1 TFLOP and deal with it.

Developers : Sigh... OK. Here is a screen-tearing, frame-dropping, sub-HD multiplatform game for you. Deal with it!
 
Interesting the consumer doesn't seem to take part in those dialogues :)

That being said, Sony won't need more than 2GB of RAM if they're really going for cloud gaming in the long run.

All they need is a well-priced system that has enough power to sustain "traditional" console gaming for another few years - and is ready to be gradually integrated into next-gen cloud gaming stuff.

As far as I'm concerned, it's also time for more varied SKUs, like having a "base model" without a Blu-ray drive (but with a few USB 3.0 ports for later upgrades).
 
8x the raw power of the GPU is ~2TF. However, 8x the raw power with a GCN-class architecture would be much, much more than 8x the real performance. Efficiency has *at least* doubled, if not much more. I'd expect "8x performance" to mean something like a 1TF chip.

This is in line with my thinking, and something almost no one ever considers when totting up the specs: you don't need 10 times the units to get 10 times the power of an 8-year-old console.
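
A rough back-of-envelope for that, assuming Xenos at ~240 GFLOPS peak and a ~2x per-flop efficiency advantage for a GCN-class part (the 2x figure is an assumption for illustration):

```cpp
// "8x raw power" vs. flops once an assumed efficiency gain is factored in.
#include <cstdio>

int main()
{
    const double xenos_gflops    = 240.0; // Xenos peak
    const double target_factor   = 8.0;   // "8x the raw power"
    const double efficiency_gain = 2.0;   // assumed GCN per-flop advantage

    const double naive  = xenos_gflops * target_factor;                   // ~1.9 TF
    const double needed = xenos_gflops * target_factor / efficiency_gain; // ~1.0 TF
    std::printf("Naive 8x: ~%.0f GFLOPS; with ~2x efficiency: ~%.0f GFLOPS\n",
                naive, needed);
    return 0;
}
```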
 
If the HD 4770 is the GPU inside the Wii U, why does Nintendo talk about only 1.5 times the raw power of the current-gen consoles?
 