Xbox 360 Beta Kits in Action

Quote from megagames.com Crossfire/SLI article:

In Alternate Frame Rendering mode (AFR) one card works on the odd frames and the other computes the even frames of the scene, therefore offering a noticeable increase in performance in the game. Games that support this mode include Half-Life 2.

Half-Life 2:
[screenshot attachment]
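For reference, the AFR split the article describes is just round-robin by frame index. A minimal toy sketch of the idea (plain C++ that only prints the assignment; no real GPU API is involved, since the driver does this below the application):

```cpp
#include <cstdio>

// Toy model of Alternate Frame Rendering: frames alternate between two GPUs.
int main() {
    const int kNumGpus = 2;
    for (int frame = 0; frame < 8; ++frame) {
        // Even frames land on GPU 0, odd frames on GPU 1.
        int gpu = frame % kNumGpus;
        std::printf("frame %d -> GPU %d\n", frame, gpu);
    }
    return 0;
}
```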


Compatibility is more of an SLI thing than a Crossfire thing, but it does mean that games can be coded to work well with SLI, which is the method we're talking about here.
 
Yeah, but that gets fuxxored with stuff like motion blur (I imagine depth of field too), IIRC.
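The issue being hinted at: an effect like motion blur can sample the previous frame's render target, and under AFR that target was produced by the other card. A toy continuation of the sketch above (assuming two-GPU AFR; again it just prints where the dependency forces a cross-card copy):

```cpp
#include <cstdio>

// Toy model: frame N reads frame N-1's render target (e.g. for motion blur).
// Under two-GPU AFR, frame N-1 always lives on the *other* card, so every
// frame stalls on a cross-GPU copy -- which is what hurts AFR scaling here.
int main() {
    const int kNumGpus = 2;
    for (int frame = 1; frame < 6; ++frame) {
        int gpu      = frame % kNumGpus;
        int prev_gpu = (frame - 1) % kNumGpus;
        if (gpu != prev_gpu)
            std::printf("frame %d on GPU %d waits for frame %d's target from GPU %d\n",
                        frame, gpu, frame - 1, prev_gpu);
    }
    return 0;
}
```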

Basically SLI suxxors for REAL next-gen games.

The Ageia demos don't seem as impressive as I'd thought either, and it's likely not to catch on.

So given:
a.) PC CPUs are crippled for the foreseeable future compared to console ones with relation to game s/w.
b.) SLI suxxors for REAL next-gen games.
c.) CPU<->GPU b/w is crippled in the PC arena.
d.) Physics helper h/w is not likely to catch on, IMHO.
e.) Supah high budgets for art asset generation are necessary; few games are likely to deliver in the PC arena.
f.) VISTA, a h/w-memory hog, is on the horizon.

We get:
Consoles are gonna get supah exclusives with supah physics/particles/cloth-muscle simulations, and their graphics will stay up to par for quite some time. (It'll take more than 2x perf to offer any significant leap in graphics quality given similar features, and SLI seems a no-go... so that's several years down the road.)

That $9000+ rig ($900 sub-zero cooling solution) with an overclocked 5GHz P4, or an overclocked dual-core (not sure how high these overclock), overclocked SLI next-gen $600 GPUs, 4+GB of the latest RAM, a 2x4GB DDR-on-PCI HDD-esque solution, a Raptor, 1TB+ of HDD space, and a supah hi-rez extra-large CRT (or hi-rez projector)... is not gonna have any significant advantage over a $300 box, and whatevah advantage there is will only be seen in a couple of games several years down the road... :LOL:
 
^^ I agree. PCs are going to be a long way behind in the next generation because of the shift to multi-core CPUs in consoles. PC games will probably be programmed for single-core CPUs for the next 3-4 years, despite dual-core CPUs being on the market, which gives consoles a bigger edge than in any previous generation.
 
Powderkeg, you're an idiot. Have you ever even considered CPU limitations? SLI usually gains far more than ~17% if you have a CPU to feed it. Even in your Far Cry example, the increase given by SLI is greater with the 6600GT and the 6800 (both at around 45%). Obviously the CPU is holding back the faster 6800GT SLI platform.

Powderkeg said:
SLI is not significantly more powerful than a single card. 10-30% performance increases are the norm, and it is only when you go to extremely high resolutions with large amounts of FSAA and AF applied that you see any significant advantage.
Do you know why? Average CPU load per pixel goes down at higher res., thus reducing the CPU constriction.

Don't get me wrong, I still think SLI is stupid, not only because of the cost, but also because the software out there today isn't demanding enough to make much use of it, and it complicates render-to-texture.

But this does not diminish the power of SLI. Each card has fewer pixels to render. So what if each individual pixel pipeline isn't any stronger? With SLI, you have more of them, because you have two cards. Shader instruction processing IS increased. If you wanted, you CAN double the length of all shaders in a game, and the framerate would likely drop by only 10%, so long as you don't have render-to-texture data traveling between frames. The two cards are NOT doing the "exact same thing" - each has a different workload.
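The argument fits a one-line frame-time model: a frame takes as long as its slowest stage, and SLI divides only the GPU term. A rough sketch, where every millisecond figure is a made-up number purely for illustration:

```cpp
#include <algorithm>
#include <cstdio>

// Crude frame-time model: a frame takes as long as its slowest stage.
// cpu_ms is resolution-independent; gpu_ms scales with resolution and
// shader length, and is split across the cards.
double frame_ms(double cpu_ms, double gpu_ms, int num_gpus) {
    return std::max(cpu_ms, gpu_ms / num_gpus);
}

int main() {
    const double cpu_ms = 10.0;  // fixed per-frame CPU cost (invented)

    // Low res (GPU cost 12 ms): CPU-bound, so the second card barely helps.
    std::printf("low res:   1 GPU %4.1f ms, SLI %4.1f ms\n",
                frame_ms(cpu_ms, 12.0, 1), frame_ms(cpu_ms, 12.0, 2));

    // High res (GPU cost 40 ms): GPU-bound, so SLI approaches a 2x speedup.
    std::printf("high res:  1 GPU %4.1f ms, SLI %4.1f ms\n",
                frame_ms(cpu_ms, 40.0, 1), frame_ms(cpu_ms, 40.0, 2));

    // The "double all shaders, lose ~10%" scenario: the SLI rig is
    // CPU-bound (11 ms of GPU work split two ways), so doubling the
    // shader cost only moves the frame time from 10 ms to 11 ms.
    std::printf("2x shader: SLI %4.1f ms -> %4.1f ms\n",
                frame_ms(cpu_ms, 11.0, 2), frame_ms(cpu_ms, 22.0, 2));
    return 0;
}
```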

As for the G70's major increase in shading power, its pipes are only around 10-15% faster than NV40's. It's the number of pixel shading pipes that gives G70 its power, and even then, the 6800 SLI has more "shader instruction processing power".

Look here for example:
[benchmark chart attachment]

At 1280x1024 (no FSAA), SLI gives the 6800U a 79% boost, and beats the 7800GTX by 24%.
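Taking those two percentages at face value, you can back out the implied single-card ratio (simple arithmetic, normalizing a single 6800U to 1.0; nothing here comes from the chart itself):

```cpp
#include <cstdio>

// Implied relative throughput from the two quoted percentages,
// with a single 6800 Ultra normalized to 1.0.
int main() {
    const double u6800   = 1.0;
    const double sli6800 = u6800 * 1.79;    // "79% boost" from SLI
    const double gtx7800 = sli6800 / 1.24;  // SLI "beats the 7800GTX by 24%"
    std::printf("6800U %.2f | 6800U SLI %.2f | 7800GTX %.2f\n",
                u6800, sli6800, gtx7800);   // 7800GTX ~= 1.44x a single 6800U
    return 0;
}
```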
 
You guys really need to calm down.


1) Some games will perform much better on a single G70 than on an SLI NV40 rig, no matter the speeds. There will be effects where an SLI setup won't show any advantage.

2) Dual-core games will start coming out in 2006/2007, not to mention x86-64 games, PPU games, and games that require an Athlon 64 3000+ and a P4 3GHz as baselines, higher RAM amounts, and more video RAM. So PCs will do just fine, especially as AMD and Intel release tri- and quad-core processors at the high end.

3) I'm sure that in many ways the RSX will be faster than an SLI 6800 Ultra setup and a G70 setup. However, this is to be expected: the RSX won't be used in anything until March of 2006. The 6800 Ultra will already be two years old at that point, and the G70 almost a year old. The question is how it performs against the cards coming out around the same time and that summer. The RSX may keep up with an R520 and a G70, but how about an R600 and a G80, which will themselves have 512MB of RAM and 256-bit buses, and will most likely cost what the PS3 as a whole will cost, or more?
 
Pozer said:
SanGreal said:
I'm the first one to stand up for the 360 hardware, but you're just going to have to accept that most of the launch titles are going to look like crap.

I don't think all of them will. I think a few will stand out, but most will be above average. Which is OK, because the PS2 launch games were the same way. I remember playing the PS2 Unreal Tournament port and thinking how dreadful it looked, and being surprised the PS2 couldn't even handle a year-old game engine. I was just at the mall and played the Burnout Revenge demo on a PS2 kiosk, and was blown away.

Bottom line, it just means the X360 will have a big graphics arc over its life. Something I don't think the Xbox 1 had, due to its PC-based hardware.

So many excuses.

I find it amusing how some were quick to point out the difficulty of the PS2, and even of the PS3 before we were recently updated, which only allowed developers to utilize 50% of the console's power at launch, but then turn to it as an excuse to compensate for some of the things being shoved down our throats. I hate to be the one to point it out, but if the Xbox 360 will be as friendly as Microsoft has made it out to be with XNA, there is just no reason for the console to even be placed on the same scale as the PS2, unless everything said was smoke and mirrors to stir up a hype fest, which wouldn't speak well of developer support in comparison to what some are saying about Sony's console. Now, if it is easy to program for, why would they need any more time than anyone else before we finally see this miracle?

I think instead of deluding yourself into thinking we're going to get some miracle come TGS or any other time, your best bet is to shoot low and save yourself the disappointment.
 
At 1280x1024 (no FSAA), SLI gives the 6800U a 79% boost, and beats the 7800GTX by 24%.

Notice when you up the res to 1920, about double the pixels (I believe; don't want to do the math), the G70 catches up to the 6800 SLI rig?

That is because the single card is better suited to scaling up performance, as it's much easier to optimize and program for one card, both driver-wise and game-wise. You have a 2.6-frame difference. Most likely, if the G70 and NV40 were capable of FSAA at that res, the G70 would pull away from the 6800 Ultra SLI rig. The 7800 Ultra, when it comes out to go against the R520, will most likely be faster at the higher res than the 6800 Ultra SLI. It would most likely pass up the 512MB boards too. The G70 is only 24 pipelines vs. 16 on the NV40; a 32-pipeline G70 would destroy a 6800 Ultra SLI rig.


The problem is you will never get 100% out of the second card.
 
jvd said:
2) Dual-core games will start coming out in 2006/2007, not to mention x86-64 games, PPU games, and games that require an Athlon 64 3000+ and a P4 3GHz as baselines, higher RAM amounts, and more video RAM. So PCs will do just fine, especially as AMD and Intel release tri- and quad-core processors at the high end.

[...]

3) The RSX may keep up with an R520 and a G70, but how about an R600 and a G80, which will themselves have 512MB of RAM and 256-bit buses, and will most likely cost what the PS3 as a whole will cost, or more?

There is a thread about this in the games forum, but I think this shows that a lot of people (even from a pure HW view) think that consoles >>> PCs, plus zidane1strife's reasons.

BTW http://www.beyond3d.com/forum/viewtopic.php?t=25134
 
jvd said:
At 1280x1024 (no FSAA), SLI gives the 6800U a 79% boost, and beats the 7800GTX by 24%.

Notice when you up the res to 1920, about double the pixels (I believe; don't want to do the math), the G70 catches up to the 6800 SLI rig?

That could of course also be caused by the fact that the 6800 has generally had a hard time at higher than 1600x1200 resolutions. I think it was Dave who mentioned that this might be because hierarchical Z and such are not optimized for those resolutions on the NV40.
 
Mintmaster said:
Powderkeg, you're an idiot. Have you ever even considered CPU limitations? SLI usually gains far more than ~17% if you have a CPU to feed it. Even in your Far Cry example, the increase given by SLI is greater with the 6600GT and the 6800 (both at around 45%). Obviously the CPU is holding back the faster 6800GT SLI platform.

Which supports what I said.

SLI offers no improvement except in situations where increased memory bandwidth offers an improvement. If the card is CPU or GPU limited, there is no significant increase.




Do you know why? Average CPU load per pixel goes down at higher res., thus reducing the CPU constriction.

The same would be true on a single card. But you haven't shown why SLI is faster than a single card; all you are doing is giving examples of how both would be affected equally.

But this does not diminish the power of SLI. Each card has fewer pixels to render. So what if each individual pixel pipeline isn't any stronger? With SLI, you have more of them, because you have two cards.

Which again supports what I said. SLI offers an advantage similar to the one lowering the resolution gives. The exact same thing would happen if you doubled the memory bandwidth on a single card.

In the end, when a single card renders fewer pixels, that results in an increase in performance unless the game is CPU limited. SLI gives the exact same effect, just reversed. It only offers significant improvements over a single card at high resolutions, where single cards become memory-bandwidth limited.

Shader instruction processing IS increased. If you wanted, you CAN double the length of all shaders in a game, and the framerate would likely drop by only 10%, so long as you don't have render-to-texture data traveling between frames. The two cards are NOT doing the "exact same thing" - each has a different workload.

The only difference in workload is that one card does the odd lines of the frame while the other does the even lines. Aside from that, both cards have to run the same shader instructions, because both cards have to render the same end result to the same frame.
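For what it's worth, a toy sketch of that odd/even split (scan-line interleaving assumed; both cards carry the same shader programs, but each row of the frame is shaded by only one of them):

```cpp
#include <cstdio>

// Toy model of scan-line interleaved rendering: even rows go to GPU 0,
// odd rows to GPU 1, so each pixel is shaded exactly once.
int main() {
    const int kHeight = 8;  // tiny frame for illustration
    for (int row = 0; row < kHeight; ++row) {
        int gpu = row % 2;
        std::printf("row %d shaded by GPU %d\n", row, gpu);
    }
    return 0;
}
```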

As for the G70's major increase in shading power, its pipes are only around 10-15% faster than NV40's. It's the number of pixel shading pipes that gives G70 its power, and even then, the 6800 SLI has more "shader instruction processing power".

At 1280x1024 (no FSAA), SLI gives the 6800U a 79% boost, and beats the 7800GTX by 24%.

And again, this is a memory bandwidth limitation. Drop the resolution to 640x480 and you lose that 79% boost.
 