Predict: The Next Generation Console Tech

Honestly, it's very annoying that you make assumptions and draw conclusions without even a superficial knowledge of the technology involved in these issues. You do not contribute much here beyond thread derailment and pointless debates; I'd really suggest doing some reading for a while.
There are many threads here on B3D as well, complete with links to white papers and presentations, posts from actual developers and so on, all of which you probably should be a lot more familiar with before getting into those debates...

I'm sorry. It's just that it's frustrating for me too. All of you seem to regard something that is basically the equivalent of virtual memory in Windows (a process otherwise known as swapping, a method sure to significantly slow down any application that gets part of its working set swapped to the HDD) as some kind of holy grail that will let games look good despite not really having the requisite amount of RAM?

Even John Carmack himself acknowledges that MegaTexture does not eliminate texture pop-in:

John Carmack discussed as recently as QuakeCon earlier this month some of the issues with optimising Rage on all platforms, and confirmed that tech limitations still result in such things as texture pop-in on versions of the game.

So please, one more time - is there actually such a thing as a virtual texturing solution that eliminates all the IQ problems that traditionally come with using it?
 
Understand that loading every texture with all its mip levels into VRAM is extremely inefficient. Carmack has been asking for virtual texturing for more than a decade now. Look up his .plan file on the subject.

The pop in issue is the consequence of a slow background storage (HDD is very slow on the X360, about as slow as a USB pen drive). Spending significantly more on RAM chips, wide buses, more complex motherboards (with higher defect rates!) and more cooling is a far, far worse investment than speeding up the storage to better support a virtual texturing system. It's also different from standard virtual memory because there's no dynamically modified data that you want to write back to the storage.

Also, Rage is only the first implementation of virtual texturing, and I'd wait for final release tests to see just how much of a problem there is during actual gameplay anyway. Your question does not make any sense without context, by the way.
 
Even John Carmack himself acknowledges that MegaTexture does not eliminate texture pop-in:
Nothing removes popup as long as you have more data than fits in RAM. As games will be using up much more of the 50G (+) disks in the future, you can be sure pop-up is here to stay. The difference between 4 vs 8G is quite insignificant if you have to stream in data anyway. It only matters if you are targeting a certain minimum level of HW and don't build in systems for handling streaming, a la your examples of 320M GPUs.
 
So this is the crux of the problem - you all have made up your minds that texture-pop in and the accompanying ugliness is all part and parcel of the console gaming experience. Instead of trying to tackle the underlying cause, you're content to limit your thinking to just alleviating the symptoms.

Basically, in my mind, it comes down to this - you have to compare the potential install size of a game to the amount of RAM you can put in a box with it still making financial sense.

Some here think that twice the RAM needs twice the bandwidth to be effective - it doesn't, compared to the alternatives such as streaming from SSD or HDD or the worst option, an optical drive. After all, if you can financially pull off double or quadruple the capacity on the same bandwidth, and maybe have to sacrifice an SSD and put in an HDD instead - so be it.

You will only be trading loading times for better IQ without any pop-in. And you will still have all the normal optimization techniques available: once the RAM can no longer hold enormous worlds of even very compressed textures, you can still divide them into levels or areas, or even cells like Oblivion did.

But you don't have to put up with watching texture detail slowly come in under your very eyes while you're already playing, destroying all feeling of immersion.
 
Stop with the pop-in silliness and READ what virtual texturing is about. You're absolutely unqualified for this discussion in any form until you finally do your homework.
 
Stop with the pop-in silliness and READ what virtual texturing is about. You're absolutely unqualified for this discussion in any form until you finally do your homework.

Do I have to quote hoho to you? "Nothing removes popup as long as you have more data than fits in RAM."

I'm pretty sure that the main bottleneck in virtual texturing is the speed at which textures are streamed into VRAM, am I correct?

Do you suggest that compressing the texture data streamed into memory will overcome this issue, speeding up the process by 100x-1000x (that's the approximate difference in data bandwidth between a mainstream $100 64GB SSD and ~$110 worth of 24GB DDR3 1333MHz, or the fastest GDDR5 available right now, probably at a similar price in 2GB quantities)? And if so, what's the evidence behind it?
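
For reference, here's a rough back-of-envelope of that gap; all the throughput figures below are assumptions for typical 2011-era parts, not measurements:

[code]
# Rough back-of-envelope for the storage-vs-RAM bandwidth gap.
# Throughput figures are assumed ballpark values, not measurements.
SSD_MB_S   = 250            # mainstream SATA SSD, sequential read
HDD_MB_S   = 60             # slow 2.5" laptop-class HDD
DDR3_MB_S  = 10_600 * 2     # dual-channel DDR3-1333, ~10.6 GB/s per channel
GDDR5_MB_S = 170_000        # high-end GDDR5 on a 256-bit bus, ~170 GB/s

print(f"DDR3 (dual channel) vs SSD : ~{DDR3_MB_S / SSD_MB_S:.0f}x")   # ~85x
print(f"GDDR5 vs SSD               : ~{GDDR5_MB_S / SSD_MB_S:.0f}x")  # ~680x
print(f"GDDR5 vs HDD               : ~{GDDR5_MB_S / HDD_MB_S:.0f}x")  # ~2800x
[/code]

So the 100x-1000x ballpark is about right for sequential transfers, at least under those assumptions.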
 
So this is the crux of the problem - you all have made up your minds that texture-pop in and the accompanying ugliness is all part and parcel of the console gaming experience. Instead of trying to tackle the underlying cause, you're content to limit your thinking to just alleviating the symptoms.
No. The rest of us just recognise that you can't throw endless amounts of money at solving a problem. So you add an extra 4 GB of RAM at considerable losses to solve a little extra texture pop-in. Great. What are you going to do about shader aliasing? Add in an extra load of GPU transistors to supersample everything? What are you going to do about non-realistic GI fakes? Throw in double the CPUs? And before you say "RAM is cheap", if your console is already going to have visible limitations, what's to be gained by losing hundreds of millions solving a bit of texture pop-in or fidelity? Especially when investing less money in better storage solutions can provide better net benefits. There'll always be shortcomings, until maybe some paradigm shift in processing. It's part and parcel of finite hardware.

At this point I'll say just leave the consoles alone and stick with PCs. I don't really understand why you started posting here, and I'm close to interpreting your relentless repetition and lack of understanding as trolling just to aggravate people.
 
So because e.g. Rage allows a huge amount of texture data to go through limited memory, memory limitation is now a thing of the past? Every game will use MegaTexture or texture streaming?

One obvious thing is that if you've got more memory, you will stream textures in earlier and discard them later, so less pop-in and less HDD or disc grinding in a given area.
I don't mean to dismiss your point; sure, there probably won't be 16GB of DDR3, and going to war over it is weird.



Supersampling shaders? Why not :) You can take multiple samples for shadows, supersample a specular lighting component (as in a demo made by Humus) - it's all the developer's choice.
You could go for a smooth 720p rendering or a noisy 1080p rendering, and possibly let the user choose between them.

And you may have games not so reliant on shader complexity (like Rage, again). I predict we'll see multiplatform games that want to run on Intel graphics and AMD APUs :), and other games that will go all-out on lighting, shadows and other things, so that lavish supersampling is out of the question.
 
Do I have to quote hoho to you? "Nothing removes popup as long as you have more data than fits in RAM."

I'm pretty sure that the main bottleneck in virtual texturing is the speed at which textures are streamed into VRAM, am I correct?

To be honest, I'm not sure what you are trying to argue.

In the case of Rage, the speed at which they can get texture data off the disk and decoded into RAM is certainly an issue - however, a cost-effective solution isn't adding more RAM.

I may be wrong, but I recall their main texture atlas - where texture tiles are decompressed - is 4096x2048. With three textures (albedo/specular/normalmap I believe) in DXT5 that's 24MB. The source data is all compressed using a variant of JPEG-XR (Microsoft's HDPhoto format), and I suspect they probably runtime encode the textures to YCbCr DXT5.
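
If that recollection is right, the arithmetic is straightforward (DXT5 stores one byte per texel); a quick sanity check:

[code]
# Memory footprint of the physical texture atlas described above,
# assuming a 4096x2048 atlas and three DXT5 layers (these sizes are my recollection).
atlas_w, atlas_h = 4096, 2048
layers           = 3        # albedo / specular / normal map
bytes_per_texel  = 1        # DXT5 (BC3): 16 bytes per 4x4 block = 1 byte/texel

atlas_bytes = atlas_w * atlas_h * layers * bytes_per_texel
print(f"{atlas_bytes / 2**20:.0f} MB")   # 24 MB
[/code]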

They likely use a lot of ram as a fast cache.

Consider:
With a faster CPU (or a more flexible GPU) they could decompress and decode the image data much faster, requiring a smaller texture atlas.
With lower latency storage they could drastically reduce cache sizes, and keep a lot of assets out of main memory.
With larger and faster storage they could store world data in larger chunks - with a lot of duplicate data.

Or... They could have 8GB of memory and use 6GB as a fast cache - where the majority of that data isn't used.

The trouble is, the latter option is a really bad one. It's extremely costly, and ultimately doesn't solve the problem. A balanced approach where all parts of the system are improved would result in a dramatically more efficient solution than simply maxing out one part of the system - it would also be a cheaper solution that would most likely require less RAM for efficient operation.

And please, don't say 'But 8GB for a PC is only $$$'. This kind of argument is ignorant of the wider implications of motherboard complexity, power usage, bandwidth considerations, component count and sourcing, packaging, heat etc - problems that are far from trivial when mass producing 50+ million consoles that need to have an average ~ $250 BOM.
 
Hoho isn't entirely right. You see delayed texture loading (pop-in) only when the polygon is visible, and there are many ways to deal with this issue, just as UE3 did in ME2, Gears 2 and other games. Just a few off the top of my head:
- include invisible boxes in the game level that initiate the loading process so that the player can't get to the given polygon faster than the required MIP level loads (see the sketch after this list); you can also cover the area from view with careful level design
- initiate a cutscene, conversation, special effect to stop the flow of the game while the data loads
- do not allow player teleportation, limit the maximum speed of movement etc.
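
To make the first trick a bit more concrete, here's a minimal sketch of such a streaming trigger volume; the names and the request_load() callback are hypothetical, not any particular engine's API:

[code]
# Minimal sketch of a streaming trigger volume: when the player enters the box,
# kick off asynchronous loads for the assets needed in the area beyond it.
# StreamingVolume and request_load() are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class StreamingVolume:
    min_xyz: tuple          # box corners in world space
    max_xyz: tuple
    asset_ids: list         # textures / meshes needed past this volume
    triggered: bool = False

    def contains(self, pos):
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.min_xyz, pos, self.max_xyz))

def update_streaming(volumes, player_pos, request_load):
    for v in volumes:
        if not v.triggered and v.contains(player_pos):
            v.triggered = True
            for asset in v.asset_ids:
                request_load(asset)   # async I/O, ideally done before the player arrives
[/code]

The level designer places these volumes far enough ahead of the expensive area that the load finishes before the player can possibly reach it.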

The reason you may see such delayed loads in Rage is also quite simple: id Software significantly overshot the hardware with Tech 5's requirements, and even they were not able to optimize the system enough to make it work flawlessly on consoles. Back in 2007 when they laid down the design for the engine, they probably underestimated the real demands it would place on the system. Tough luck, it can happen to anyone; the difference with id is that they could still make it work acceptably. We'll see how disturbing it is in the final game itself; so far we don't have completely reliable data, and the issue itself probably isn't present all the time anyway.


Now you clearly don't understand virtual texturing at all either, but I'm going to give it a try.
Common GPU based rendering requires you to load the entire MIP-pyramid of each texture you need for the final image into VRAM. So for a 2K texture map, you need to load its 1K version, the 512 version and so on.
Let's say you use this texture for a character. The GPU doesn't care how big this character is on screen, so you'll need the entire memory space of that 2K texture no matter if the character is only 100 pixels high or if only its head fills the screen. Most of the time it's a fair distance away from the camera and you only see one side, so about half of the 1Kx1K texture could be enough to draw it; but you still need to store the entire 2K map. This means that even in the average case 7/8 of that space is wasted.
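
A quick sanity check of that waste figure, assuming an uncompressed texture for simplicity (compression would scale both sides equally):

[code]
# Compare the storage of a full 2048^2 mip pyramid with the texels actually
# sampled in the "average case" above (roughly half of the 1024^2 mip level).
def mip_chain_texels(size):
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

full_chain = mip_chain_texels(2048)     # ~5.6M texels, the whole pyramid
top_level  = 2048 * 2048                # ~4.2M texels, the 2K map alone
needed     = (1024 * 1024) // 2         # ~0.5M texels actually sampled

print(f"used vs the 2K map alone : {needed / top_level:.3f}")   # 0.125 -> 7/8 wasted
print(f"used vs the full pyramid : {needed / full_chain:.3f}")  # ~0.094 -> >90% wasted
[/code]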

So games with this regular approach have two choices:
- Break up the game levels into chunks that can fit into memory at the same time, and pause the game and put up a loading screen whenever the player leaves the area. This usually means serious limitations especially on consoles, but midrange PCs also have to pay the price by reducing texture detail. It is also very annoying to constantly break up the gameplay experience, particularly when you need to load data from optical media which may take an entire minute or more on current console hardware.
- Break up the levels into even smaller chunks, but while the player is busy in one area, keep on loading data for the next in the background. Depending on the type of game, there are some other choices here: a completely open world game needs to cover several possible directions for where the player can go, whereas a linear game only needs to split up the memory in two pieces, the one the player's in and the one following it (a minimal sketch of this double-buffered idea follows below). This approach has the potential to completely eliminate loading within a level.
Also, you need to match level design and break-up in order to make sure that by the time the player enters an area, all of its data is present in memory, otherwise you'll experience the delayed loading and have to watch the higher MIP level of the textures (and sometimes objects) popping in. This takes time and experience which most devs only gained on their second generation titles.
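
To illustrate the second, background-loading approach, here's a minimal sketch of the "current chunk + next chunk" split for a linear game; load_chunk_async() and its wait() are hypothetical placeholders for whatever async I/O the engine actually uses:

[code]
# Minimal sketch of double-buffered level streaming for a linear game:
# one chunk is active, the next is loading in the background.
# load_chunk_async() and the future's wait() are hypothetical placeholders.
class LinearStreamer:
    def __init__(self, chunk_list, load_chunk_async):
        self.chunks = chunk_list
        self.load   = load_chunk_async
        self.index  = 0
        self.active  = self.load(self.chunks[0]).wait()   # block only for the first chunk
        self.pending = self.load(self.chunks[1]) if len(self.chunks) > 1 else None

    def on_player_crossed_boundary(self):
        # The player entered the next area: the pending chunk becomes active,
        # its data replaces the old chunk, and we start fetching the one after.
        self.index += 1
        self.active = self.pending.wait()                 # ideally already finished
        nxt = self.index + 1
        self.pending = self.load(self.chunks[nxt]) if nxt < len(self.chunks) else None
[/code]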

But both approaches are still wasting a lot of the memory most of the time by storing data that's unnecessary to render the current frame. So all these games also have to utilize the most primitive form of compression, which is repeating the same textures (but it can be covered up by using multiple texture layers on top of each other).


The main idea behind virtual texturing is to drop the old GPU tradition of loading the entire MIP pyramid of each texture, and only load and store the texels that you actually need to render the frame. So even for the above mentioned average case of a character you can get away with only ~1/8th of the memory requirements, and for objects further away the wins are even greater.
Theoretically you only need one texel per rendered pixel, so even if you use three RGBA channels (color, normal, spec, plus stuff like emissive stored in alpha channels) and an overdraw factor of 3, the maximum texture memory requirement for 1080p is still 1920x1080 pixels x 3 channels x 4 bytes x 3 overdraw = 72 MB. And we didn't even use texture compression. Anything more than that is a waste of resources because the actual data is not visible in the rendered frame.
Virtual texturing in practice also works not with single texels, but blocks of texels; Rage for example uses 128x128 texel sized tiles. The actual size depends on a lot of factors that are specific for the given implementation of the system, and of course it means another multiplier in most cases because no matter how small something gets you'll still need to keep at least one tile of its texture in memory; and even if only a 1-pixel wide slice is shown you also have to load all tiles involved. But combined with standard DXTC texture compression the memory requirements of a 1080p frame are still quite small.
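
Putting numbers on the paragraph above (the figures are the ones from the post, the arithmetic is just spelled out):

[code]
# Theoretical 1080p texture working set: one texel per pixel, three RGBA8
# channels per surface (colour/normal/spec), and a 3x overdraw factor.
width, height   = 1920, 1080
channels        = 3         # three RGBA textures per surface
bytes_per_texel = 4         # uncompressed RGBA8
overdraw        = 3

ideal = width * height * channels * bytes_per_texel * overdraw
print(f"{ideal / 2**20:.0f} MB")   # ~71 MB, i.e. the ~72 MB ballpark quoted above

# Tile granularity (e.g. 128x128 tiles) pushes this up somewhat, while DXT
# compression pulls it back down, so the practical number stays in the same range.
[/code]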

Of course you want to move around in the game world, and a 180 degree turn means you probably need to replace 80-90% of your textures (FPS games have a gun covering up a lot of the screen, TPS games have the player character, racing games have a car or a cockpit etc.). If you had to load all this data from optical disk, of course you'd see textures pop in everywhere. But this can be overcome by a properly designed caching system that'll keep previously loaded tiles around for a while, and even pre-load the ones that might be required in the near future, based on the player's traversal in the game level.
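
A minimal sketch of such a tile cache, assuming tiles keyed by (texture, mip, tile x, tile y) and a plain least-recently-used eviction policy; a real engine would add prefetching based on camera movement, as described above:

[code]
# Minimal LRU cache for virtual-texture tiles. load_tile() is a hypothetical
# placeholder for the slow path (HDD/optical/SSD read plus decode).
from collections import OrderedDict

class TileCache:
    def __init__(self, capacity_tiles, load_tile):
        self.capacity = capacity_tiles
        self.load     = load_tile
        self.tiles    = OrderedDict()      # (texture_id, mip, tx, ty) -> decoded tile

    def get(self, key):
        if key in self.tiles:
            self.tiles.move_to_end(key)    # recently used: keep it around
            return self.tiles[key]
        tile = self.load(key)              # cache miss: this is where pop-in comes from
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False) # evict the least recently used tile
        return tile
[/code]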


Now the main issue with Rage is that it actually moves beyond standard virtual texturing and adds yet another level of complexity by using completely unique virtual texturing. This means that there's absolutely no texture re-use in the game, so it has to load and cache even more data than any other game.
I don't even dare to try to estimate how much runtime memory Rage would need without virtual texturing, but I suppose it could easily fill 4 gigs, maybe even 8, and load times for the complete MIP pyramids would stress even an SSD, not to mention optical drives. That 100GB figure Carmack mentioned is far from accurate - the source data set is probably closer to an entire terabyte, and it gets compressed into ~21GB for the X360 version using a pretty lossy algorithm.
This means that even with DXTC compression (1:4 usually) the game would need 250GB of memory for all its levels, so for an 8 GB system (of which you probably can't use more than 7GB) you'd have to break it up into 30-35 discrete chunks and add loading screens between them - if you don't use any background loading.
If you do background loading, the number of chunks goes beyond 100, since it's an open world game and the player should be able to traverse the wastelands and cities in any direction. At this point it'd probably become problematic to drive a fast dune buggy, because the player might be able to leave an area before the next chunk finishes loading its data into memory.
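
Spelling out that chunk arithmetic (all the dataset sizes are the rough guesses above, not official id Software figures):

[code]
# Rough estimate of how many load-screen chunks a non-virtual-textured Rage
# would need. All sizes are the guesses from the post above, not official data.
source_tb     = 1.0      # estimated uncompressed source art
dxt_ratio     = 4        # typical DXTC compression
usable_ram_gb = 7        # of a hypothetical 8 GB console

compressed_gb = source_tb * 1024 / dxt_ratio           # ~256 GB of DXT data
chunks        = compressed_gb / usable_ram_gb
print(f"{compressed_gb:.0f} GB -> ~{chunks:.0f} chunks")  # ~37, i.e. the 30-35 ballpark
[/code]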

And this is for a game that actually runs on a current gen system with 1/16th of that 8GB memory you are so fixated on. Imagine how problematic it'd be to build a game with 4 times the texture resolution; it would never work, even with 16GB of memory.

So virtual texturing is a very promising technology and once its small weaknesses are ironed out, even the completely unique textured versions will be vastly superior to the standard load-everything-at-once approach, both in visuals and in loading times. And if you re-use textures, which most studios will do because of the lack of insane art budgets, it'll become even more efficient and free of any pop-ins, lossy compression or any other visual artifacts.

I hope now you can see why most of us believe this to be the way games should advance...
 
Or... They could have 8GB of memory and use 6GB as a fast cache - where the majority of that data isn't used.

The trouble is, the latter option is a really bad one. It's extremely costly, and ultimately doesn't solve the problem.

What's particularly important is that more RAM cannot help with the worst case scenario, when the data you require is not available in memory and you have to load it from disk. And it's always preferable to have the worst case and the average case as close to each other as possible.

In fact, if we assume that the next consoles have 8GB of memory, it'll more than likely mean that the background storage is even slower relative to the rest of the system than what we have today, so the penalty for a complete miss will be even more severe.

So the preferred trade-off is less memory for caching and far, far faster background storage that'll help the worst case a lot more.
 
So games with this regular approach have two choices:
- Break up the game levels into chunks that can fit into memory at the same time, and pause the game and put up a loading screen whenever the player leaves the area. This usually means serious limitations especially on consoles, but midrange PCs also have to pay the price by reducing texture detail. It is also very annoying to constantly break up the gameplay experience, particularly when you need to load data from optical media which may take an entire minute or more on current console hardware.
- Break up the levels into even smaller chunks, but while the player is busy in one area, keep on loading data for the next in the background. Depending on the type of game, there are some other choices here: a completely open world game needs to cover several possible directions for where the player can go, whereas a linear game only needs to split up the memory in two pieces, the one the player's in and the one following it. This approach has the potential to completely eliminate loading within a level.

While I agree with most of your points, especially one I didn't include in the quote (the one about how to intelligently predict/precache/cover upcoming pieces of assets), I ask you once again to please forget about the "one minute or more" loading on current consoles. It's perfectly possible to load about 240MB of textures in 14-15 seconds from a 2x Blu-ray, even faster from better optical drives (9 sec) or even a slow hard drive (4 sec). Actually, the second approach you mention is pretty much the most practical - you don't just get to load textures in the background, but other assets and game-related data as well. Of course it's not nearly as optimal as virtual texturing, but for the moment, it's just a more manageable way to go.

To be honest, I expect virtual texturing to become mainstream with the first hardware implementation, including I/O, de/recompression, filtering, etc.
 
The one minute+ loading times are not theoretical but practical for a lot of games I've played on an Xbox360, usually with HDD installs.

If that 14-15 seconds figure really is the case, and not just for your own game project, then I really have to wonder why we have abysmal load times in a lot of current games. Either everyone else is a lot less clever or you have a special situation that allows faster loading, but something here just doesn't seem to fit.

For example, if I understand you well enough you are not just simply loading stuff, but also decompressing it on the fly, right? That already means loading far less than 240MB, depending on the rate of compression; the optical drive itself certainly can't maintain loading speeds like that.
So it's okay if it works for you, but some games might not have enough CPU power left to do that (although if Rage can do that at 60fps, it's kinda hard to justify not doing it on most 30fps games), others might have data that doesn't compress as well or can't be delayed by the time it takes to decompress, and so on.

Then again, at least we're not talking about how 8GB would be infinitely better than 4GB + better storage ;)
 
I'm just as puzzled about some games' loading times. To be honest, people don't really talk about it, but it usually comes down to inefficient methods of serialising data and a lack of time to polish everything up. Completely understandable in most cases, but since we were comparing it to an alternative that comes from an AAAAAA studio with whenitsdone+ years in development, I thought it's only fair to look at an optimised version :))

Of course it's compressed. I just ran a test on a very real dataset; all textures are DXT1/5 compressed, containing diffuse, normal and "mask" textures as well. 242MB -> 155MB. This could be improved by a different arrangement of the source data, but it was good enough for me. CPU usage is really low, decompression is close to memcpy speeds, and resource creation cost is basically zero. I was experimenting with streaming, but to be honest my project is big enough for a downloadable shmup as it is :D
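
In other words, the drive only ever sees the ~155MB compressed payload, which squares reasonably well with the load times I quoted earlier; a quick check with assumed (nominal, not measured) drive throughputs:

[code]
# Sanity check of the claimed load times for a ~155 MB compressed texture payload.
# Drive throughputs are assumed nominal figures, not measurements.
payload_mb = 155
drives = {
    "2x Blu-ray (~9 MB/s)":       9,
    "faster optical (~17 MB/s)":  17,
    "slow laptop HDD (~40 MB/s)": 40,
}
for name, mb_s in drives.items():
    print(f"{name:28s} ~{payload_mb / mb_s:4.0f} s")
# ~17 s, ~9 s and ~4 s respectively - the same ballpark as the 14-15 s / 9 s / 4 s claims.
[/code]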

Hehe, well no, I completely agree with you on that 8/4GB issue :))
 
Can we get a 7200 rpm HDD as standard?
With the tremendous increase in areal density it would be really fast.

Regarding optical, something that already exists and is based on common, established tech may be preferable, so what about a triple-layer Blu-ray drive? Since it would sometimes be overkill, you could still have single and dual layer games.
 
Well, in that case it really is a shame whenever I sit idle with the controller in my hand. Maybe it'd be better if they made the load times a bit longer still, so that one could take a bathroom break? ;)


As far as I know, even the current hard drive in the X360 would be faster if it had a better connection to the system memory. DigitalFoundry's tests suggest it sits on something like a USB connection, since it's about as fast as a pen drive.
 
99% of platter-based HDDs have a hard limit of around 100MB/s data transfer speed even if they're using SATA3. The bottleneck is mechanical, i.e. the RPM, not the bus. The other 1% are 10K+ rpm HDDs and still only reach around 140MB/s. It makes sense that the Xbox HDD only gets around 60MB/s transfer speed, since it's a low-rpm "laptop" HDD.
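
Roughly speaking, a drive can't deliver more than one track of data per revolution, so sequential speed scales with RPM and track capacity; a small sketch with assumed track sizes for 2011-era drives:

[code]
# Why sequential HDD speed is bounded by mechanics, not the bus:
# each revolution delivers at most one track of data.
# The track capacities below are rough assumptions, not drive specs.
def seq_mb_s(rpm, track_kb):
    revs_per_sec = rpm / 60
    return revs_per_sec * track_kb / 1024        # MB/s

print(f"5400 rpm,  ~0.7 MB/track: ~{seq_mb_s(5400, 700):.0f} MB/s")    # ~62, laptop-class
print(f"7200 rpm,  ~1.0 MB/track: ~{seq_mb_s(7200, 1024):.0f} MB/s")   # ~120
print(f"10000 rpm, ~0.9 MB/track: ~{seq_mb_s(10000, 900):.0f} MB/s")   # ~146
[/code]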
 
Epic partly solved UE3's texture loading problems in Gears of War by introducing the "phone call sections". In these cleverly designed levels, the main character receives a phone call and begins to listen/speak to the other person. Because he is really concentrated on the call, he cannot run, only walk really slowly...

So while the player is still in control, and at the same time learning really valuable information about the plot, the game is actually loading textures from another part of the level :p

Of course not all developers can come up with something as clever, but this clearly shows the skill of Epic, which is among the best * on the 360
This was all accomplished without an HDD cache!
 
I think that was far more common in Gears 1, but not as much in Gears 2. Just as Mass Effect 1 was one of the worst offenders, whereas ME2 had absolutely no texture loading issues (all with an HDD install on my system).

Just as I've said earlier, it takes time and experience to deal with such issues...
 
Just as I've said earlier, it takes time and experience to deal with such issues...


This doesn't apply to everyone though. A developer once made a game, their first game that generation on that platform. After booting, not a single loading screen would be seen during the epic 12-hour campaign. You would expect this game to have medium or low quality textures, but that was not the case: the game had textures which would be regarded among the best (in resolution, variety and art), even almost 4 years after its release.

But how did they do it? Was it magic? Talent? Great hardware?
Actually a combination of all :cool:
The game used a really big distribution medium, so all textures could fit in great quality. Also, the hardware had an HDD which was constantly caching new parts of the level, but without any forced install - it all happened transparently.
The levels were cleverly designed around the actual story; for example, after visiting a submarine it would be triggered to blow up, and afterwards you could not revisit it because you were being chased by bad guys.
Really big transitions were dealt with even better: the developers pre-recorded the cutscenes and chose the bitrate so that new game content could be loaded on the side while the cutscene kept the player occupied!

So not really magic, but actually a really talented use of the great hardware the developer had available.

Note: sadly, this only works for games like Gears of War :(

ontopic:
I believe there could be slightly upgraded PS3s / 360s with better RAM, CPU power and storage capabilities. So better 3D, better 1080p support.
Games would ship on the same discs, which are backwards compatible, but! on the new console they would give better graphics/performance.

This would allow for the best transition ever; developers can compile the game for the two profiles automatically - a little bit of extra work, but everyone can keep buying the games.
Plus, if the hardware is profitable from day 1, people could invest in a new console some time later to get their 'old' games with better performance/graphics, and the manufacturers would have profits regardless.
Crazy? Maybe, but not totally, right? :)
 