Crysis could be on consoles from Cevart.

Status
Not open for further replies.
It slows down by 100% every time.
The disk is a shared I/O resource, which means that if somebody else is reading the disk right now, right this very moment, you won't get any data: you'll get it after half a second at 80MB/sec, and that's no use when you need to sync with a 1/60-second frame. Do you read me? :)

So in other words your previous statement about one moment getting 300MB/s and the next getting 2MB/s was complete rubbish?

The point is that regardless of how much it slows (and it's not really 100%, because from a human perspective the speed will simply be averaged over the two or more tasks), the OS isn't constantly hitting my HDD for non-game-related activities during that game. When I load a game on my PC, it can expect pretty much exclusive access to the HDD from a practical point of view, and that obviously includes OS calls to the HDD in connection with the game.

As I've said numerous times: Oblivion is a good example of how crippled you can get when you write a game for PC, with PC-specific decisions, and then get horrible performance on a console.

So says you. The bottom line is that at that level of detail, with that big a world, the game loads faster on the PC thanks mainly to more system RAM. Maybe you could have implemented it a different way on the consoles to work better, but those same optimisations would likely be of equal or greater use on the PC, since it pretty much has more of everything in this regard.

Tell me, how do you make a game like Shadow of the Colossus?
The media reads at 2.5MB/sec, you have 32MB of system RAM, and you need a seamless free-roaming world?

Extreme lack of detail, very low-res textures, very aggressive LOD, and great art direction to make it look good anyway.

Predictable patterns are better.

Sure, I'll just take your word for that, shall I.

EDIT: Sorry if this is starting to sound like a PC vs. console debate, but the high-level question being discussed here is how well Crysis will translate to a console given its smaller available memory.
 
Last edited by a moderator:
If only a single process could ever access the drive until it was finished, a modern operating system would take eons to load. Here's a newsflash: the OS is smart enough to share :) If you need to load an 80MB file, that read can be intermixed with other files that also need reading. Yes, it really does work that way!
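A toy model of that sharing (this is not any real OS scheduler; real ones also reorder requests to minimize seeks), just to show how one large read can be interleaved with a smaller one:

```python
# Toy round-robin disk "scheduler": each pending read gets serviced
# one chunk at a time instead of one read monopolizing the drive.
from collections import deque

def round_robin_reads(requests, chunk=4):
    """requests: dict of name -> units still to read. Returns service order."""
    queue = deque(requests.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                      # service one chunk
        if remaining > chunk:                   # more left: back of the queue
            queue.append((name, remaining - chunk))
    return order

# An 8-unit read and a 4-unit read share the drive:
print(round_robin_reads({"big_file": 8, "small_file": 4}))
# ['big_file', 'small_file', 'big_file']
```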

Amazing. Do you know how it works?
Please, enlighten me!

Even better, chances are you don't need the ENTIRE 80MB file; you likely only need a specific portion of it. Here's another great idea that someone figured out eons ago: you can read just the piece of the file you need into memory, instead of reading all 80MB and throwing the rest away!
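A sketch of that partial read using the standard library; the file layout and offsets here are invented for the demo, and a real engine would get them from a pack-file index:

```python
# Read only a slice of a file instead of the whole thing.
import os, tempfile

# Build a fake 2 KB pack file with a 13-byte "texture" buried in it.
path = os.path.join(tempfile.mkdtemp(), "level.pak")
with open(path, "wb") as f:
    f.write(b"A" * 1024 + b"TEXTURE_CHUNK" + b"B" * 1024)

# Suppose an index told us the texture lives at offset 1024, length 13.
with open(path, "rb") as f:
    f.seek(1024)            # jump straight to the piece we need
    data = f.read(13)       # read 13 bytes, not the whole file

print(data)  # b'TEXTURE_CHUNK'
```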

Just for the record: do you also think that one CPU runs all the threads in one time slice?

Here's how: You use very small, very low resolution, very compressed textures. Or better yet, you use one larger (relatively speaking) texture that actually contains multiple smaller textures that can be applied to various surfaces (think of a person -- one texture can be used for clothes, skin, face, hair on a polygon model by simply using different parts of the image).

I don't see how that helps me load data from a 2.5MB/sec device and display it at 30fps.

Then you use simple models, simple terrain, lower-quality mono-channel sounds, a MIDI music score, a VERY aggressive LOD system; you keep the number of visible characters very low, which keeps AI pathing down; and you stuff as much as physically possible into memory. And you continually run the CD-ROM, pulling anything and everything you can as the character moves or the scene changes.

That's how.

No, that doesn't come even close to a solution, because you haven't addressed the "continually run the CD-ROM pulling anything and everything you can" part.
My question is: how exactly do you do it, if the character is "unpredictable" and the device is so horribly slow?
 
Umm, because that's how Sony designed the PS3. If 50/50 wasn't a reasonable expectation of the average split (which obviously varies from game to game), then it would have been a pretty dumb choice, would it not?

And why is that? Don't you think there can be other considerations, like the obvious one: memory is produced in 128/256/512 etc. chunks?
And the bus to RSX is 256-bit?

And I'm sure Sony and the associated experts involved in the PS3's design know a hell of a lot more about it than either you or I.

Agreed. So they allowed fast access to both RAM pools.

If you want to get technical, try demonstrating to me why PS3's split is wrong and what it should have been.

The split is very good, but it has nothing to do with programming. Simple, eh?

No, that's a complete turnaround from what you were saying before. No one's saying it's close to impossible to implement a free-roaming game on consoles due to memory limitations. All that's being said is that because of the greater available memory, PCs have an advantage. That advantage can translate into larger worlds, higher detail for a given size of world, or fewer/shorter load times.

Good point. So I'm saying that it can be translated into larger worlds ONLY when the game targets a minimum of 512MB of VRAM.

And besides, the specific point we were addressing in this part of the post was where you said consoles have a VRAM advantage over most PCs, which, while partially true, is not relevant to how the games would compare at maximum settings, where they scale to take advantage of those (many) PCs that have a huge framebuffer advantage over consoles.

If you call higher resolution and more AA an advantage, I would not argue with that.
But I think the real advantage is more advanced lighting, shadows, effects, geometry, etc.

As far as I know you can't on the PS3 either, but I might be wrong about that. Even on the 360, though, I assume data would need to be loaded as system data first in order to be pre-processed and allocated to the GPU by the CPU.

You can allocate memory from both pools directly on PS3. You'll get a plain old void pointer. Regular malloc.

Surely that's blindingly obvious. If the data isn't cached in memory and it's then needed by the GPU, you will have to wait for it to be loaded from the HDD or, worse, the DVD.

I won't wait, I predict and load in advance.

If it's already in system memory then it will load much, much faster.

But still not fast enough to load in 1/60 sec.
So it's useless unless you predict and load in advance. You need to predict less, but you still do need to. :)
 
The point is that regardless of how much it slows (and it's not really 100%, because from a human perspective the speed will simply be averaged over the two or more tasks), the OS isn't constantly hitting my HDD for non-game-related activities during that game. When I load a game on my PC, it can expect pretty much exclusive access to the HDD from a practical point of view, and that obviously includes OS calls to the HDD in connection with the game.

Ok, you have an average of 60MB/sec and the console has 30MB/sec; what does that mean?
To me it means that you have to predict 2 sec in advance on PC and 4 sec on console.
Something else?
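The 2/4-second figures follow from lead time = region size / bandwidth, assuming some fixed amount of data to prefetch; the 120MB region below is an assumption for illustration, not a number from the post:

```python
# Prefetch lead time = amount of data to stream / sustained bandwidth.
region_mb = 120                        # assumed size of the region to prefetch
pc_rate, console_rate = 60, 30         # MB/s averages from the post

print(region_mb / pc_rate)             # 2.0 seconds of lookahead on PC
print(region_mb / console_rate)        # 4.0 seconds on the console
```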

So says you. The bottom line is that at that level of detail, with that big a world, the game loads faster on the PC thanks mainly to more system RAM. Maybe you could have implemented it a different way on the consoles to work better, but those same optimisations would likely be of equal or greater use on the PC, since it pretty much has more of everything in this regard.

Ok, let's see how you optimize a PC with these:
- load textures into VRAM from the HDD using DMA
- write into a texture once the semaphore has been lifted and the GPU has finished with that one and moved to the next.

Extreme lack of detail, very low-res textures, very aggressive LOD, and great art direction to make it look good anyway.

It reminds me of something... err... Gears of War!
 
Amazing. Do you know how it works?
Please, enlighten me!
Sure, do you want it in C++? Or do you just want the general overview? Here's the 30,000 ft view: the operating system's I/O scheduler submits requests to the disk queue, and the driver services them. If something needs to usurp control over the currently running I/O, the OS submits a separate request to stop the current read in place and start another. It's even better on multi-disk systems.

Just for the record: do you also think that one CPU runs all the threads in one time slice?
What the hell does that have to do with anything we're talking about? And no, that's utterly ridiculous. A single CPU runs a single thread at a time, but it changes threads so ridiculously fast that you really never know (or see) what's going on. What does that have to do with anything?
I don't see how that helps me load data from a 2.5MB/sec device and display it at 30fps.
Then you're not thinking about the big picture. 30fps has jack squat to do with memory capacity, so I don't even understand why you mention it. Reading from a 2.5MB/sec device is unavoidable, which is why you are FORCED to use low-res textures, low-poly models and terrain, very few objects, even fewer AI components, et al. This gives you the ability to have enough data in RAM to cover what the user can see, and what they might see if they do something unpredictable like turn around fast.

Why do you run the CD-ROM constantly? Because you have no choice. If the user is heading east, the viz data you have loaded (even at the low quality it already is) will run out soon, so you need to be loading the data that lies in their direction. But what if they turn? You need to immediately start loading the data in their new direction.

Another way that game made this easier? Movement speed. How fast the scenery around you can change is limited by how fast your character can move through the scene. If your maximum character speed only lets you traverse 1/16th of the visible scene in 15 seconds, you've got plenty of time to load up a few more object meshes (what, maybe a few kilobytes each) and a few more chunks of the terrain.
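The arithmetic behind "plenty of time": with a 2.5MB/sec drive and the 15-second traversal window above, the streaming budget dwarfs the cost of a few meshes and terrain chunks. All asset sizes below are made-up illustrations:

```python
# Back-of-envelope streaming budget (asset sizes are illustrative).
media_rate_kb = 2.5 * 1024       # KB/sec from the optical drive
traverse_time = 15.0             # sec to cross 1/16 of the scene

mesh_kb = 8                      # one small object mesh (assumed)
terrain_chunk_kb = 64            # one terrain tile (assumed)

budget_kb = media_rate_kb * traverse_time          # loadable in that window
needed_kb = 10 * mesh_kb + 20 * terrain_chunk_kb   # a batch of new assets

print(budget_kb, needed_kb, budget_kb >= needed_kb)
```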

As for the single large texture that can be applied to multiple things? Think about resource allocation. You can cram 64 individual 128x128 textures into a single 1024x1024 texture that can reside in RAM continually. You now get three benefits:

1. The compression on an image with more "data" is potentially better, as there are potentially more common data points to allow higher compression.

2. You don't have to issue a drive seek to load a paltry tiny image. Optical drive transfer may be several MB/sec, but an optical drive seek is between 70-150ms or worse. That's well beyond the 30fps frame budget you threw out, and certainly way too long to spend on a 128x128 texture or two.

3. Just a few of these "master textures" could cover an entire level, meaning you never have to go reading the drive for more texture data, period. Terrain data and object data are considerably "smaller" than texture data, especially when reading incrementally versus entire files.
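The 64-in-one layout above is just index math. A sketch of how a sub-texture's UV rectangle falls out of its index, assuming the 1024x1024 atlas of 128x128 tiles described (the helper name is made up):

```python
# UV sub-rectangle for tile i in an 8x8 grid of 128px tiles
# inside a 1024x1024 atlas (the 64-textures-in-one layout).
def tile_uv(i, atlas=1024, tile=128):
    per_row = atlas // tile                    # 8 tiles per row
    col, row = i % per_row, i // per_row
    u0, v0 = col * tile / atlas, row * tile / atlas
    size = tile / atlas                        # 0.125 in UV space
    return (u0, v0, u0 + size, v0 + size)

print(tile_uv(0))   # (0.0, 0.0, 0.125, 0.125) -- top-left tile
print(tile_uv(9))   # second row, second column
```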

Ok, let's see how you optimize a PC with these:
- load textures into VRAM from the HDD using DMA
- write into a texture once the semaphore has been lifted and the GPU has finished with that one and moved to the next.

What are you even talking about? Those are things that you MUST concern yourself with on a console to get even passable performance; they are NOT things you even have to think about on a PC to get exceptional performance.

Apples to oranges.

How about this: how do you optimize 200km^2 worth of visible data, tens of thousands of objects, several hundred unique sounds, several dozen AI characters, and an entire physics and input system into about 460MB of total system memory allocation, versus how you'd do it with 2.5GB of total system memory allocation?

There, apples to apples.
 
Here's the 30,000 ft view: the operating system's I/O scheduler submits requests to the disk queue, and the driver services them. If something needs to usurp control over the currently running I/O, the OS submits a separate request to stop the current read in place and start another. It's even better on multi-disk systems.

Good, so we do have a queue. And the queue holds requests, and requests are for certain amounts of data (read: a sector). And can you read half a sector? No. So any request further down the queue needs to wait until you read it. Furthermore, in most cases it also needs to wait for the seek.
And how much time do you have? 1/60 sec = ~17ms.
Can every atomic one-sector read operation finish in 17ms? I don't think so.
That's what I'm talking about.
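Putting numbers on the ~17ms point: at 60fps the frame budget is about 16.7ms, and the 70-150ms optical seek quoted earlier doesn't come close to fitting. The HDD seek figure below is an assumed typical value, not from the thread:

```python
# Frame budget vs. seek time (illustrative figures).
frame_ms = 1000 / 60                  # ~16.7 ms per frame at 60 fps
optical_seek_ms = 100.0               # mid-range of the 70-150 ms quoted above
hdd_seek_ms = 8.0                     # assumed typical HDD seek

frames_lost = optical_seek_ms / frame_ms   # whole frames one optical seek spans
print(round(frame_ms, 1))                  # 16.7
print(round(frames_lost))                  # 6
print(hdd_seek_ms < frame_ms)              # True: one HDD seek *can* fit...
# ...but only if nothing is queued ahead of it, which was exactly the point.
```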

What the hell does that have to do with anything we're talking about? And no, that's utterly ridiculous. A single CPU runs a single thread at a time, but it changes threads so ridiculously fast that you really never know (or see) what's going on. What does that have to do with anything?

See above.

Then you're not thinking about the big picture. 30fps has jack squat to do with memory capacity, so I don't even understand why you mention it. Reading from a 2.5MB/sec device is unavoidable, which is why you are FORCED to use low-res textures, low-poly models and terrain, very few objects, even fewer AI components, et al. This gives you the ability to have enough data in RAM to cover what the user can see, and what they might see if they do something unpredictable like turn around fast.

Good, when the user turns around on PC, can you render from system RAM?
No? So you're VRAM-limited here too?
So the way to do such a game on a console: hold the visible sphere in VRAM, and stream data from the HDD to VRAM when the user gets halfway to the sphere boundary.
On the PC: load it into system RAM from disk and DMA it to VRAM afterwards.
Where exactly is the advantage here?
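A minimal sketch of the "stream when the user gets halfway to the sphere boundary" rule; the function name and the 2D positions are made up for illustration:

```python
# Trigger streaming once the player has crossed half the cached
# sphere's radius (a sketch of the rule described above).
import math

def needs_prefetch(player_pos, sphere_center, radius):
    """True once the player is past the halfway point to the boundary."""
    dist = math.dist(player_pos, sphere_center)
    return dist >= radius / 2

center, radius = (0.0, 0.0), 100.0
print(needs_prefetch((10.0, 0.0), center, radius))   # False: still near center
print(needs_prefetch((60.0, 0.0), center, radius))   # True: start streaming
```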

Why do you run the CD-ROM constantly? Because you have no choice. If the user is heading east, the viz data you have loaded (even at the low quality it already is) will run out soon, so you need to be loading the data that lies in their direction. But what if they turn? You need to immediately start loading the data in their new direction.

You do not need to immediately load the other direction, because a turn doesn't mean anything; but if they run to the opposite side, you start loading.

Another way that game made this easier? Movement speed. How fast the scenery around you can change is limited by how fast your character can move through the scene. If your maximum character speed only lets you traverse 1/16th of the visible scene in 15 seconds, you've got plenty of time to load up a few more object meshes (what, maybe a few kilobytes each) and a few more chunks of the terrain.

Yeah, it only took 10 posts for you to start talking in my words... :)

As for the single large texture that can be applied to multiple things? Think about resource allocation. You can cram 64 individual 128x128 textures into a single 1024x1024 texture that can reside in RAM continually. You now get three benefits:

1. The compression on an image with more "data" is potentially better, as there are potentially more common data points to allow higher compression.

2. You don't have to issue a drive seek to load a paltry tiny image. Optical drive transfer may be several MB/sec, but an optical drive seek is between 70-150ms or worse. That's well beyond the 30fps frame budget you threw out, and certainly way too long to spend on a 128x128 texture or two.

3. Just a few of these "master textures" could cover an entire level, meaning you never have to go reading the drive for more texture data, period. Terrain data and object data are considerably "smaller" than texture data, especially when reading incrementally versus entire files.

Good points; you're not as hopeless as I thought earlier!

What are you even talking about? Those are things that you MUST concern yourself with on a console to get even passable performance; they are NOT things you even have to think about on a PC to get exceptional performance.

You don't need to get data to VRAM as fast as possible on PC? How's that?
You don't have huge performance problems and space wastage on PC because you really don't know whether you can alter resources in the frame being rendered this very moment?

How about this: how do you optimize 200km^2 worth of visible data, tens of thousands of objects, several hundred unique sounds, several dozen AI characters, and an entire physics and input system into about 460MB of total system memory allocation, versus how you'd do it with 2.5GB of total system memory allocation?

There, apples to apples.

It's simple: good LODs, good data layout, cut everything that's not visible.
You have 1000 DIPs (draw calls) per frame on PC, after all; how you'd render these "tens of thousands of objects" is beyond me.
 
And why is that? Don't you think there can be other considerations, like the obvious one: memory is produced in 128/256/512 etc. chunks?
And the bus to RSX is 256-bit?

There's some room for changing that if the 50/50 balance wasn't roughly OK: a different memory bus and/or different chip sizes, perhaps less system memory or graphics memory, etc.

However, more likely is that the split memory design wouldn't even have been feasible, given the limitations on memory sizes you mention above, if the 50/50 split wasn't good enough.

Agreed. So they allowed fast access to both RAM pools.

Fast, but not as fast or efficient as just using local memory, especially not for Cell. So the balance had to be decent in the first place.

Good point. So I'm saying that it can be translated into larger worlds ONLY when the game targets a minimum of 512MB of VRAM.

Which I'm saying is wrong, since additional system RAM allows you to keep your graphics RAM better fed, meaning the GPU doesn't have to wait for data once the world grows beyond the size/detail that streaming directly from an HDD or DVD could keep up with.

If you call higher resolution and more AA an advantage, I would not argue with that.
But I think the real advantage is more advanced lighting, shadows, effects, geometry, etc.

Exactly what do you think more VRAM would allow? We're not just talking about larger framebuffers. We are talking about higher-detail assets/textures and more of them. We are talking about greater view distance due to the availability of those assets, and we are talking about better quality due to improved LOD.

I won't wait, I predict and load in advance.

Err, load in advance to where? Your memory is already full with the data that's currently being processed; there is no space available to pre-cache anything that you predict will be required in the future. That's the whole point. Where a console would run out of space, the PC keeps going due to its larger available memory.

But still not fast enough to load in 1/60 sec.
So it's useless, unless you predict and load in advance, you need to predict less, but still - you do need. :)

That makes no sense at all. It completely depends on what you're loading and what the required framerate is. It's going to be a constant stream, not a one-off event, and having that constant stream running over 10x faster than it would if you only had an HDD or DVD drive to stream from is an obvious advantage.
 
Ok, you have an average of 60MB/sec and the console has 30MB/sec; what does that mean?
To me it means that you have to predict 2 sec in advance on PC and 4 sec on console.
Something else?

Huh? No, what it means is that you can only get data into memory at half the speed, and thus you either need twice as many/twice as long loading times, or you need the data to be half the size, i.e. lower detail or less of it. And that's assuming all other things are equal, which they aren't.

Ok, let's see how you optimize a PC with these:
- load textures into VRAM from the HDD using DMA
- write into a texture once the semaphore has been lifted and the GPU has finished with that one and moved to the next.

And your proof that's not already being done with Oblivion on the 360?

Besides, you're talking again about loading textures from the HDD. Explain to me how that's advantageous compared to loading them from system memory at 10x the speed?


It reminds me of something... err... Gears of War!

Oh come on now, that's just blatant flame bait. Gears is in no way comparable to God of War, well, aside from its initials ;)
 
Good, so we do have a queue. And the queue holds requests, and requests are for certain amounts of data (read: a sector). And can you read half a sector? No. So any request further down the queue needs to wait until you read it. Furthermore, in most cases it also needs to wait for the seek.
And how much time do you have? 1/60 sec = ~17ms.
Can every atomic one-sector read operation finish in 17ms? I don't think so.
That's what I'm talking about.
What? So now you're saying a hard drive load is too slow to update in time? Wow, guess what: you're right! You've just waved the white flag of surrender; your argument is completely finished. The end.

Good, when the user turns around on PC, can you render from system RAM?
Yes you can, albeit at reduced speed depending on how much data, but it's not a 100% invalidation of everything in VRAM when you do a 180-degree turn. Unless you programmed badly... If you need the extra data, it's in system RAM, and system RAM is accessible at a rate of several GB/sec with access times in the hundreds of ns...

So the way to do such a game on a console: hold the visible sphere in VRAM, and stream data from the HDD to VRAM when the user gets halfway to the sphere boundary. On the PC: load it into system RAM from disk and DMA it to VRAM afterwards. Where exactly is the advantage here?
The advantage is where you shot yourself in the foot in the first quote above... A drive seek takes too long to sustain a 60fps framerate, by your own admission; if you're waiting for drive data, you're too late. If you're only precaching data directly to VRAM and not using system RAM for anything but engine code and sounds, then what are you really doing? According to your previous posts, VRAM is the be-all and end-all for engine load. So since you're not even considering sounds, AI, physics and the entire input system, what ELSE would you be using the system RAM for?

You don't need to get data to VRAM as fast as possible on PC? How's that? You don't have huge performance problems and space wastage on PC because you really don't know whether you can alter resources in the frame being rendered this very moment?
You never alter resources in a frame at the same time as the render pass. I don't even know what you think you're talking about there; quit making up nonsensical scenarios that have no possibility in real life.

As for the rest of that nonsense post? You don't build a DMA transaction system on a PC, unless of course you're building the operating system and drivers. My engine only needs to ask my host OS to go fetch something and it does, at optimal speed. You aren't going to "optimize" something that your application doesn't even touch...

It's simple: good LODs, good data layout, cut everything that's not visible. You have 1000 DIPs (draw calls) per frame on PC, after all; how you'd render these "tens of thousands of objects" is beyond me.
You didn't answer the question, again. Just as before, you gave some overgeneralized mumbling that really has no actual MEANING.

How are you going to optimize all that data for 460MB of total allocation, versus how you'd optimize it for 2.5GB? Not "gee, I'll use LOD and culling and data layout." No, those are just buzzwords you picked up from a review somewhere. (By the way, you did finally figure out that lower-resolution textures are a form of LOD, right?)

Here's the cliff notes:

Hard drives are slow. Optical drives are even slower. If the only point of your "argument" is that you can stream things from a hard drive, then yay, you're right. But if you think that system RAM has zero use for visible data, then you're wrong on more than a few levels. No matter the platform, system RAM is multiple orders of magnitude faster for fetching data than the drive. Data that you keep there requires no 7ms seek times, no paltry double-digit MB/sec transfer rates. Not taking advantage of this RAM is ludicrous at best, and purely wasteful at worst.
 
You do not need to get data as fast as possible to VRAM on PC? How's that?

So now you're admitting it's important to get data into VRAM as fast as possible.

And yet you still won't acknowledge that having that data sitting in system RAM, waiting to transfer when it's needed, is an advantage? Despite system RAM being able to transfer the data many times faster than an HDD or DVD? :???:
 
There's some room for changing that if the 50/50 balance wasn't roughly OK: a different memory bus and/or different chip sizes, perhaps less system memory or graphics memory, etc.

It's so unrealistic.
A different memory bus = a different GPU.
An uneven bus distribution = slowdowns.
So from a hardware point of view this split is the most obvious and the cheapest.
And for programmers... who cares?

Fast, but not as fast or efficient as just using the local memory, especially not for Cell. So the balance had to be decent in the first place.

You don't need it for Cell; you need it when you run out of VRAM.
So that problem was addressed.

Which I'm saying is wrong, since additional system RAM allows you to keep your graphics RAM better fed, meaning the GPU doesn't have to wait for data once the world grows beyond the size/detail that streaming directly from an HDD or DVD could keep up with.

Yeah, good! Streaming is the right word. But there is one problem: streaming can only be as fast as the slowest link in the chain.
So once again, if you can fit the whole world in RAM, you get very fast streaming.
If you can't (huge worlds, anyone?), you get HDD-speed streaming, no matter what.

Exactly what do you think more VRAM would allow? We're not just talking about larger framebuffers. We are talking about higher-detail assets/textures and more of them. We are talking about greater view distance due to the availability of those assets, and we are talking about better quality due to improved LOD.

Good, when every PC game targets a minimum of 512MB of VRAM we will see all these marvels, but not now :)

Err, load in advance to where? Your memory is already full with the data that's currently being processed; there is no space available to pre-cache anything that you predict will be required in the future.

1. You can save some space.
2. You can also load resources while the GPU processes them: smart work with semaphores.
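The "smart work with semaphores" idea can be sketched with two threads standing in for the loader and the GPU. The buffer names are made up, and a real engine would use GPU fences rather than OS semaphores; this just shows the load-while-render handshake:

```python
# Load-while-render handshake: the loader may only fill a buffer
# after the "GPU" has signalled it is done with the previous one.
import threading

free = threading.Semaphore(1)        # a buffer slot the "GPU" isn't using
ready = threading.Semaphore(0)       # a buffer the loader has filled
buffers = []                         # log of chunks handed over, in order

def loader():
    for chunk in ("terrain_0", "terrain_1", "terrain_2"):
        free.acquire()               # wait until the GPU released a slot
        buffers.append(chunk)        # "DMA" the data in
        ready.release()              # tell the GPU a buffer is ready

def gpu():
    for _ in range(3):
        ready.acquire()              # wait for a filled buffer
        # ... render from buffers[-1] here ...
        free.release()               # done: loader may fill the next slot

t = threading.Thread(target=loader)
t.start()
gpu()
t.join()
print(buffers)
```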

Where a console would run out of space, the PC keeps going due to its larger available memory.

The PC has the same amount of VRAM. So you'll need to reserve some VRAM for loading from system RAM as well. But you cannot do the console tricks here; you'll need to have some spare VRAM that isn't being processed.
 
Huh? No, what it means is that you can only get data into memory at half the speed, and thus you either need twice as many/twice as long loading times, or you need the data to be half the size, i.e. lower detail or less of it. And that's assuming all other things are equal, which they aren't.

Or you can just start loading it not 2 sec before it's needed, but 4 sec.
No loading times, just pure readahead.
You seem to fail to understand that you don't program games like this: "oh, sh, player runs to the right, I need more data, let's load it from disk!", but rather: "the player starts running to the right, let's do some pre-loading, so that by the time he gets there the data will be in VRAM".

And your proof that's not already being done with Oblivion on the 360?

A complete rebuild of the engine? Are you kidding?
People who do 2000 DIPs per frame are not bothered with such low-level stuff.

Besides, you're talking again about loading textures from the HDD. Explain to me how that's advantageous compared to loading them from system memory at 10x the speed?

It's not advantageous; it's just as fast as loading it from the HDD through system RAM.

Oh come on now, that's just blatant flame bait. Gears is in no way comparable to God of War, well, aside from its initials ;)


You don't even know how well it describes Gears. ;)
 
What? So now you're saying a hard drive load is too slow to update in time? Wow, guess what: you're right! You've just waved the white flag of surrender; your argument is completely finished. The end.

Not that; what I mean is that the HDD is slow and cannot load data fast no matter how you do it (using system RAM or not).
So to use the HDD successfully you need to load data well before it's actually used.

Yes you can, albeit at reduced speed depending on how much data, but it's not a 100% invalidation of everything in VRAM when you do a 180-degree turn. Unless you programmed badly... If you need the extra data, it's in system RAM, and system RAM is accessible at a rate of several GB/sec with access times in the hundreds of ns...

Framerate drops? OK, if you think it's fine to live with them, you're welcome to.

The advantage is where you shot yourself in the foot in the first quote above... A drive seek takes too long to sustain a 60fps framerate, by your own admission; if you're waiting for drive data, you're too late.

Just remember: Shadow of the Colossus! ;)

You never alter resources in a frame at the same time as the render pass. I don't even know what you think you're talking about there; quit making up nonsensical scenarios that have no possibility in real life.

Do you have the papers from GDC 2007?
I don't think I can distribute them, so I can't show you.
But the main process is simple:
On the PPU you issue render calls using the API.
On the SPUs you prepare buffers/resources and pull/push RSX semaphores.
So you can know exactly where RSX is rendering at any moment and what you can change meanwhile.

As for the rest of that nonsense post? You don't build a DMA transaction system on a PC, unless of course you're building the operating system and drivers. My engine only needs to ask my host OS to go fetch something and it does, at optimal speed. You aren't going to "optimize" something that your application doesn't even touch...

Ok: slow framerates, framerate drops, GPU stalls, CPU stalls. No wonder they're so common on PC...

Here's the cliff notes:

Hard drives are slow. Optical drives are even slower. If the only point of your "argument" is that you can stream things from a hard drive, then yay, you're right. But if you think that system RAM has zero use for visible data, then you're wrong on more than a few levels. No matter the platform, system RAM is multiple orders of magnitude faster for fetching data than the drive. Data that you keep there requires no 7ms seek times, no paltry double-digit MB/sec transfer rates. Not taking advantage of this RAM is ludicrous at best, and purely wasteful at worst.

But how would you take advantage of it if your world does not fit into RAM?
To load 40MB of data from HDD to system RAM you need to spend ~690ms.
How can you reduce this number?
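Checking the figure: 40MB at a sustained ~58MB/sec is indeed about 690ms, which is roughly 41 frames of lookahead at 60fps; the 58MB/sec rate is an assumption chosen to be consistent with the ~690ms in the post:

```python
# Where the ~690 ms figure comes from, and what it means at 60 fps.
mb = 40
rate_mb_s = 58                          # assumed sustained HDD rate
load_ms = mb / rate_mb_s * 1000         # time to pull 40 MB into RAM
frames = load_ms / (1000 / 60)          # frames that load spans at 60 fps
print(round(load_ms), round(frames))    # 690 41
```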
 
So now you're admitting it's important to get data into VRAM as fast as possible.

And yet you still won't acknowledge that having that data sitting in system RAM, waiting to transfer when it's needed, is an advantage?

And who exactly made it "sit in system RAM waiting to transfer"?
Magic? :)
Didn't it get there by means of a "horribly slow" HDD read? :)
 
Not that; what I mean is that the HDD is slow and cannot load data fast no matter how you do it (using system RAM or not).
So to use the HDD successfully you need to load data well before it's actually used.
Done. You just nullified every argument you've made in this thread.

You need to load data well before it's actually used. And where are you going to load it to? VRAM? You need that for data that's being rendered right now, not data that might be used at some point in the future. This is where you load to system RAM, and you can only load as much as will fit. Which means a platform with more system memory can keep higher-resolution assets (all types of assets) available for use at all times.

You just conceded. The end. The entire rest of your nonsensical diatribe is moot.
 
Done. You just nullified every argument you've made in this thread.

You need to load data well before it's actually used. And where are you going to load it to? VRAM? You need that for data that's being rendered right now, not data that might be used at some point in the future. This is where you load to system RAM, and you can only load as much as will fit. Which means a platform with more system memory can keep higher-resolution assets (all types of assets) available for use at all times.

You just conceded. The end. The entire rest of your nonsensical diatribe is moot.

That pretty much settles it, and it's what has been said to Psorcerer all along: one has to prefetch the data. One cannot go from HDD to RAM direct to screen/processing, since the HDD transfer rate will be the bottleneck. :smile:
 
Given this is console games, pissing matches over PC stuff would seem to be OT. So please move along...

No offense, but I don't think this is a pissing match. Nobody in here is saying that PCs are inherently "better" (I think most would say it depends on the game); the conversation seems to be about how different it would be on consoles.
 
For 32MB of RAM I think it's quite an achievement anyway!
And where exactly did I say it runs at 60fps, anyway? :)

The advantage is where you shot yourself in the foot in the first quote above... A drive seek takes too long to sustain a 60fps framerate, by your own admission; if you're waiting for drive data, you're too late.

Just remember: Shadow of the Colossus! ;)

What else do you mean with your comment above? ;)
 