Is it likely that the next consoles will have 1 GB or more of RAM?

I'd guess 512 MB as a maximum. High-speed memory is still expensive, and no one would be willing to pay for 1 GB on a console.

I'm sure a future-gen console could do just fine with 256 MB. Considering that the "512 MB, 99% used" on a PC mostly comes from Windows XP taking up 300 MB of RAM, I don't think memory requirements on consoles are anywhere near as steep as on a PC.
 
I'd like to see what a console could accomplish with 256+ MB of RAM, especially DDR or 1T-SRAM (like on the GCN).

Considering how good games look today on, say, the GameCube (which has the least non-streaming memory among next-gen consoles), it'd be interesting to see what developers could do. Probably not much that we aren't expecting, though.

More polygons, more textures, higher detail everything, better sound/music, etc. :)
 
Heh, think of Metroid Prime, in its current 24MB state.

Now think of Metroid Prime designed to take advantage of 64MB. :eek:
 
Qroach said:
I agree - I was also thinking 512, down to 128, as possible configurations. I think going from 64 MB to 1 GB is a bit too much, and RAM still costs enough to make that too expensive for a console. Even a few years from now, I expect.

Agreed 100%

:D
 
I am not a programmer, but as I understand it, RAM is more of a programming-technique issue. Some games have terrible RAM usage, others have great usage. A game like Max Payne on PS2 loads in a new section every few minutes, and would obviously play a lot better with more RAM. But Jak & Daxter doesn't stop to load at all, and features levels that are huge. It's a question of how it's done.

I think streaming technology is more important - no matter how much RAM you give programmers, they will always run out sooner or later. Give a console a suitably efficient way of moving data in and out of memory without affecting processor performance, and you have as much RAM as you want - because you can use the RAM many times over in each frame!
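To make that concrete, here's a minimal sketch of one way to do it - a double-buffered background load, where the game renders out of one chunk while a worker thread fills the other. The file name, chunk size and loop are invented for illustration; this is not any console's actual streaming API:

```cpp
#include <chrono>
#include <cstddef>
#include <fstream>
#include <future>
#include <string>
#include <vector>

// Read one chunk of world data from disc ("world.dat" is a made-up file name).
std::vector<char> loadChunk(const std::string& path, std::size_t offset, std::size_t size)
{
    std::vector<char> chunk(size);
    std::ifstream file(path, std::ios::binary);
    file.seekg(static_cast<std::streamoff>(offset));
    file.read(chunk.data(), static_cast<std::streamsize>(size));
    return chunk;
}

int main()
{
    const std::size_t kChunkSize = 1 << 20;   // 1 MB per chunk (arbitrary)
    std::vector<char> buffers[2];             // render from one, fill the other
    int active = 0;
    std::size_t offset = 0;

    // Kick off the first background read.
    auto pending = std::async(std::launch::async, loadChunk, "world.dat", offset, kChunkSize);

    for (int frame = 0; frame < 600; ++frame)
    {
        // ... update and render the frame using buffers[active] here ...

        // If the background read finished, swap it in and immediately start the
        // next one, so the disc stays busy while the game loop never blocks on IO.
        if (pending.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
        {
            const int inactive = 1 - active;
            buffers[inactive] = pending.get();
            active = inactive;
            offset += kChunkSize;
            pending = std::async(std::launch::async, loadChunk, "world.dat", offset, kChunkSize);
        }
    }
}
```

The point is simply that the main loop only ever polls for completion; it never sits waiting on the disc.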
 
BenSkywalker said:
Not to mention 3D textures

<shudders at the thought of storing them and runs away>

Seriously though, has anyone seen a proposed way of efficiently storing these? Well, beyond nVidia's "S3TC+" scheme?

Also keep in mind that the more complex things get, the fewer low-level optimizations you will have (as a result of both budget constraints and simply not having enough time).

Actually, I was thinking of this the other day when I was reading an article in Popular Science about LOTR:TT and their proprietary system called Massive, which does the dynamic AI and animation for the massive battle sequences with possibly tens of thousands of on-screen combatants. Using this as a loose idea of what will eventually be possible, the time taken to create this media will be so expensive that I was wondering if we'd see the creation of a company that creates libraries of objects [different LODs, etc.] and then supports the common 3D engines.

Just an idea, more or less an extension of each development house creating a library of objects that get reused, as Sweeney talked about a while ago.
 
How likely is it that developers will even start using 3D textures rather than procedural ones?

By the time graphics hardware has enough RAM to make 3D textures feasible, even high-quality procedural textures should be quite fast.

So unless you need something to look a very specific way, procedural textures should be the better choice.
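For some rough numbers on why volume textures are such a memory problem in the first place - assuming a plain uncompressed RGBA8 volume, no mipmaps; the "S3TC+"-style compression mentioned above would shrink these figures:

```cpp
#include <cstdio>

// Rough arithmetic: a cube of N texels per side at 4 bytes per texel grows with N^3.
int main()
{
    const int sizes[] = {64, 128, 256};
    for (int n : sizes)
    {
        const double bytes = static_cast<double>(n) * n * n * 4;   // RGBA8, no mipmaps
        std::printf("%3d^3 RGBA volume: %6.1f MB\n", n, bytes / (1024.0 * 1024.0));
    }
    // 64^3 = 1 MB, 128^3 = 8 MB, 256^3 = 64 MB - a single 256^3 volume would eat
    // more RAM than an entire current console has.
}
```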
 
Don't procedural ones require memory as well? I still think bandwidth is far more important, though.
 
Vince said:
Using this as a loose idea of what will eventually be possible, the time taken to create this media will be so expensive that I was wondering if we'd see the creation of a company that creates libraries of objects [different LODs, etc.] and then supports the common 3D engines.

Just an idea, more or less an extension of each development house creating a library of objects that get reused, as Sweeney talked about a while ago.

I've been thinking about the same thing for some time now. Being a 3D artist myself, I'm quite familiar with production times, and I can tell you that it will be the next big problem to solve. People have seen the Dawn demo from Nvidia and will expect the same detail - but such a character takes 1-2 months to create, maybe even more!

I'm quite sure that the next big thing will be middleware 3D content for games. There are incredible possibilities here... However, one difficult case would be characters, as you cannot easily modify a purchased 3D model to create a totally different person (I'm also a longtime disbeliever in parametric or photo-based head modeling tools - especially at the quality levels that DX9 will offer, these applications just cannot produce models that are good enough). However, I don't think that similar-looking heroes will scare the industry - they might decide to make them digital actors, complete with textures, clothes and motion libraries that can be licensed to star in a game... ;)

Anyway, I'm pretty interested in how this issue will be solved. The first companies to jump into the business might be able to score a lot...
 
I'm still hopeful that the next round of consoles, especially PS3 and XB2, will have at least 1 GB of combined memory. Of course, I'd like to see more, like 2-4 GB, because there are so many things in the next gen of games that could use more memory: massively improved textures, dozens or hundreds of high-poly characters running around the screen, environments, AI, etc., as you guys have mentioned.

The next Nintendo machine will probably have more like 256-512 MB total, as Nintendo ALWAYS puts in the cheapest possible components, keeping total unit cost down. It is their design philosophy, from the Famicom to the Super Famicom to the N64 to even the GameCube.
 
megadrive0088 said:
The next Nintendo machine will probably have more like 256-512 MB total, as Nintendo ALWAYS puts in the cheapest possible components, keeping total unit cost down. It is their design philosophy, from the Famicom to the Super Famicom to the N64 to even the GameCube.

Disagree - the N64 was an attempt at pulling out all the stops and making one badass powerful machine. Too bad they forgot to account for BAAAAAD memory latency... :(
 
megadrive0088 said:
The next Nintendo machine will probably have more like 256-512 MB total, as Nintendo ALWAYS puts in the cheapest possible components, keeping total unit cost down. It is their design philosophy, from the Famicom to the Super Famicom to the N64 to even the GameCube.

Yes and the reasoning behind that is the fact that Nintendo wants to be breaking even on the hardware no more than 6 months after launch.

Disagree - the N64 was an attempt at pulling out all the stops and making one badass powerful machine. Too bad they forgot to account for BAAAAAD memory latency...

Actually, the reason they used Rambus was cost - not the cost of the memory itself, but the reduced PCB costs from being able to use fewer layers for the main PCB, which Rambus allowed. NEC manufactured the Rambus memory as well as the MIPS CPU, so to Nintendo the memory itself turned out to be pretty cheap. Just like Flipper and the 1T-SRAM inside GCN. If Nintendo had wanted performance memory for the N64, they would've used VRAM.

The cartridge format was more of a bad design choice than using Rambus, and again that boils down to having a cheap console. Adding a CD-ROM drive would've increased the cost significantly. However, the cartridge format did increase costs for developers/publishers, which was one of the main reasons why fewer developers supported the N64.
 
Tagrineth: Well, 24 MB of 1T-SRAM... yeah. Of course, I doubt that Metroid Prime would be possible in its current state (near-zero load times, near-constant 60fps) without the 16 MB of A-RAM that the GameCube has... in case you forgot. :)
 
We can go on and on about what if X console had Y amount of memory and whatnot, but at the end of the day more memory means higher costs - costs that the consumer might not be willing to pay, and costs that will take longer to recoup, neglecting software sales of course ;)
 
Blade said:
Tagrineth: Well, 24 MB of 1T-SRAM... yeah. Of course, I doubt that Metroid Prime would be possible in its current state (near-zero load times, near-constant 60fps) without the 16 MB of A-RAM that the GameCube has... in case you forgot. :)

Yeah, true, but I doubt most of the "current" rooms are stored in A-RAM... it just wouldn't really be feasible, considering the A-RAM's bandwidth.

Also, some of the load times I noticed (shoot a door and wait 20-40 seconds for it to open!) could be reduced further if the system could load all adjacent rooms into the 1T-SRAM rather than the A-RAM.
 
I've got to agree with Laa-Yosh... Art assets (visual and aural) represent one of the most critical problems in development today. As game worlds get larger and more complex, *somebody* needs to create the content. Like ERP mentioned in another thread, people license engines quite often so they can start content creation right away. I've seen too many occasions where minor changes to the game caused massive reworking of tons of art assets, which just made delays longer (or those changes/fixes were never implemented to save on art time).

A large portion (one could say the bulk) of our studio was populated with content creators, and the process of creating content pretty much consumed most of the development time...
 
Isn't it right that the faster the machine, the less RAM you need to do the same job? I mean procedural generation of geometry, subdivision surfaces and procedural textures (3D or 2D) all take up only a small piece of the bandwidth and memory pie, vs. the explicit description that is needed on slower machines.
I've seen some amazing 64 KB demos that of course needed a powerful machine and a customised engine to run, but where the actual graphics data only took up 64 KB!
I'm not saying that future consoles won't need more memory than the current ones; it's just a question of whether the budget wouldn't be better spent on more computing power rather than on a whole gigabyte of memory.
Personally I think 500 MB would be more than sufficient for anything on a television screen, even an HDTV one.
 
Squeak said:
Isn't it right that the faster the machine, the less RAM you need to do the same job? I mean procedural generation of geometry, subdivision surfaces and procedural textures (3D or 2D) all take up only a small piece of the bandwidth and memory pie, vs. the explicit description that is needed on slower machines.

There's a limit to just how far you can go with procedurals. Fractal textures tend to fit natural stuff quite well, so you can use such functions to model plants, generate displacement maps for water, rocks, and such things (see Bug's Life). You'd still need to use several levels of detail, each with a different frequency - one simple Perlin noise would do you no good. So procedurals will need even more computing power than you'd expect... And that's why artists always have to make compromises. Procedurals require computing resources, image maps and geometry require memory and bandwidth... you have to find a good balance.
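As a toy illustration of what "several levels of detail, each with a different frequency" means in code, here's a sketch that layers octaves of a cheap hash-based value noise (standing in for proper Perlin noise). Every extra octave is another full evaluation of the base function, which is exactly where the extra computing power goes:

```cpp
#include <cmath>
#include <cstdio>

// Cheap hash-based value noise, standing in for real Perlin noise.
float hashNoise(int x, int y)
{
    unsigned int n = static_cast<unsigned int>(x) + static_cast<unsigned int>(y) * 57u;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return 1.0f - static_cast<float>(n & 0x7fffffffu) / 1073741824.0f;
}

// Bilinearly interpolated noise at a continuous coordinate.
float smoothNoise(float x, float y)
{
    const int xi = static_cast<int>(std::floor(x));
    const int yi = static_cast<int>(std::floor(y));
    const float fx = x - xi, fy = y - yi;
    const float a = hashNoise(xi, yi),     b = hashNoise(xi + 1, yi);
    const float c = hashNoise(xi, yi + 1), d = hashNoise(xi + 1, yi + 1);
    const float top    = a + fx * (b - a);
    const float bottom = c + fx * (d - c);
    return top + fy * (bottom - top);
}

// "Several levels of detail, each with a different frequency": sum octaves of the
// base noise, doubling the frequency and halving the amplitude each time.
float fractalNoise(float x, float y, int octaves)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i)
    {
        sum  += amplitude * smoothNoise(x * frequency, y * frequency);
        norm += amplitude;
        amplitude *= 0.5f;   // each octave contributes less...
        frequency *= 2.0f;   // ...but adds finer detail
    }
    return sum / norm;       // roughly back into [-1, 1]
}

int main()
{
    // One octave is bland; several octaves give the "fractal" look - at several
    // times the ALU cost, which is the point being made above.
    std::printf("1 octave:  %f\n", fractalNoise(3.7f, 5.2f, 1));
    std::printf("6 octaves: %f\n", fractalNoise(3.7f, 5.2f, 6));
}
```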

However, more complex shapes and textures with less randomness and more figurative detail have to be stored explicitly - like creatures, machines and so on. Subdivision surfaces are a cool thing (cheers to the GeForce FX tech demos), but they still require relatively high-resolution meshes (like several thousand polygons for a character).

By the way, I've seen a (rendered) human model with completely procedural textures that looked as good as any other with bitmap textures; however, it required large images to serve as masks to blend between the skin, lip and eyebrow materials. Haven't heard about water with bitmaps though ;)
 
There's a limit to just how far you can go with procedurals. Fractal textures tend to fit natural stuff quite well, so you can use such functions to model plants, generate displacement maps for water, rocks, and such things (see Bug's Life). You'd still need to use several levels of detail, each with a different frequency - one simple Perlin noise would do you no good. So procedurals will need even more computing power than you'd expect... And that's why artists always have to make compromises. Procedurals require computing resources, image maps and geometry require memory and bandwidth... you have to find a good balance.

You're right, of course, that there is a limit to how far you can take procedural description on the current generation, and also, to a lesser extent (probably), on the next. But don't you think that the balance will shift gradually towards procedural only? It is, after all, the way the real world was/is made (unless you're religious :) ).
The next gen will most likely have enough power to fill every pixel on the screen with at least three polygons every frame. With LODing and early Z-checking, wouldn't that be sufficient to do all the splines and straight surfaces you could ever want?
 
Squeak said:
You're right, of course, that there is a limit to how far you can take procedural description on the current generation, and also, to a lesser extent (probably), on the next. But don't you think that the balance will shift gradually towards procedural only? It is, after all, the way the real world was/is made (unless you're religious :) ).

No, I don't think so. Complex forms and patterns contain very little randomness, and you'll eventually reach a point where the procedural approach takes too much programming time and too many cycles to execute. I've already mentioned layering procedurals as a common practice in offline rendering - but you have the time to wait for a frame there.

There are most likely ways to build a nose procedurally, or to generate the bump/displacement textures for the little wrinkles on an old face - but just the research to find these methods would take horrible amounts of time, not to mention how many instructions and parameters such a program would need. You will end up better off if you hire an artist to paint and model, and spend the bandwidth and memory on storing the models and textures.

Pixar is one of the current leaders in proceduralism, as RenderMan is quite good for this approach. Instead of taking a bunch of predefined procedural textures and objects, you can pretty much build up everything by coding a shader. Still, they have texture painters and use thousands of bitmaps in their movies - and most of the other companies are way behind them or simply have a different approach.


I'd say that the first applications of procedurals should be water and smoke, with plants following very closely. Outcast 2 was said to use many procedural maps and procedural geometry to build the game world, but I'm not sure we're ever gonna see that game. Detail maps are also generally some sort of fractal, and their grayscale nature should reduce the performance requirements. Most man-made objects and structures should stick to bitmaps for a long time, though.

The next gen will most likely have enough power to fill every pixel on the screen with at least three polygons every frame. With LODing and early Z-checking, wouldn't that be sufficient to do all the splines and straight surfaces you could ever want?

Personally, I'd rather call all kinds of HOS (from NURBS to subdivs) a form of geometry compression instead of procedural modeling. With them, you do not put any additional detail into the model; you only use a more compact format to describe a surface.
You can of course use a procedural map to displace a higher-order surface; water and many kinds of natural phenomena are good examples. But terrain, for example, needs more than a quadratic plane and a noise map - to have any kind of control, you need a bitmap texture to displace it. You can then go on and layer a procedural on top of it to generate additional detail, but anything you really need to control should be modelled and painted...
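A toy sketch of that split - an artist-painted heightmap (here just a hand-typed 4x4 array) giving control over the large shapes, with a small procedural layer (a couple of sines standing in for real fractal noise) added on top for the fine detail. All names and numbers are invented for illustration:

```cpp
#include <cmath>
#include <cstdio>

// A hand-authored 4x4 heightmap stands in for the painted displacement bitmap
// that gives the artist control over the large shapes of the terrain.
const float kHeightmap[4][4] = {
    {0.0f, 0.1f, 0.3f, 0.2f},
    {0.1f, 0.4f, 0.6f, 0.3f},
    {0.2f, 0.5f, 0.8f, 0.4f},
    {0.1f, 0.3f, 0.4f, 0.2f},
};

// Bilinear sample of the authored heightmap over u,v in [0,1).
float sampleHeightmap(float u, float v)
{
    const float x = u * 3.0f, y = v * 3.0f;
    const int xi = static_cast<int>(x), yi = static_cast<int>(y);
    const float fx = x - xi, fy = y - yi;
    const float a = kHeightmap[yi][xi],     b = kHeightmap[yi][xi + 1];
    const float c = kHeightmap[yi + 1][xi], d = kHeightmap[yi + 1][xi + 1];
    return (a + fx * (b - a)) * (1 - fy) + (c + fx * (d - c)) * fy;
}

// Cheap procedural detail layered on top - a couple of sines standing in for
// a proper fractal noise.
float proceduralDetail(float u, float v)
{
    return 0.02f * std::sin(u * 40.0f) * std::cos(v * 37.0f)
         + 0.01f * std::sin(u * 90.0f + v * 85.0f);
}

// Final displacement for a vertex of the terrain patch: artist-controlled bitmap
// for the big features, procedural layer for the small ones.
float terrainHeight(float u, float v)
{
    return sampleHeightmap(u, v) + proceduralDetail(u, v);
}

int main()
{
    for (float v = 0.0f; v < 1.0f; v += 0.25f)
        for (float u = 0.0f; u < 1.0f; u += 0.25f)
            std::printf("height(%.2f, %.2f) = %.3f\n", u, v, terrainHeight(u, v));
}
```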
 