Concerning Console Graphics chipsets

I have a fairly good idea that:

PS3's RSX is: G70 (7800 GTX architecture) + TurboCache + lower memory bandwidth (compared to the 7800 GTX) + a higher clock speed, sharing the system's 512 MB of RAM with Cell

Xbox 360's Xenos is: unified architecture + 500 MHz clock speed + DirectX 9.0L (DirectX 9.5, as you call the API) + eDRAM (10 MB)

Now, I heard an interview about Xenos with a Call of Duty developer and an ATI developer, who said its features and performance (not necessarily raw speed) will be equaled by PC cards within 8 months to a year from now. If that's the case, then that card is the R600, but Nvidia will already have a G72, which will conceivably be more advanced than the RSX (G70+) chipset.

From talking in forums with people who follow consoles, why is there an impression that within a few years the GRAPHICS of the PS3 will surpass those of the Xbox 360, when Nvidia's PC counterpart to the PS3's chipset will already be more advanced by the time the PS3 is released in North America? Is there any logic to this? Shouldn't it be the other way around, i.e. that over time Xbox 360 graphics will look better than the PS3's (albeit with physics at a lower quantity)?
 
No, my question is why anyone would expect the graphics of a chipset to keep getting better if it's in nearly the same range of performance. I can understand physics getting better and the scale of cities getting bigger, but graphics are solely dependent on the GPU.
 
Taking advantage of the GPU/System might take a while.

Of course the longer you work with a given set of hardware the more little things you'll find out about it... and to a certain extent the more performance you'll be able to get out of it.

The PC realm doesn't really have this; the way you get more performance out of a game there is by upgrading the hardware.

On a closed box system the GPUs of both X2 and PS3 should stay pretty comparable with PCs for a bit (on the average case for consumers that is) -- of course games on PC will look better soon after launch, but unless you upgrade your comp every launch cycle you won't be seeing it.

Don't take what devs say as the holy grail of solid information either. What a dev says and what a dev means or is thinking in his/her head may be different... There is one thing you can count on this next generation of consoles -- we'll have consoles that look better than games on PCs for a good while, and slowly PC games (if you have a good enough PC) will start to look better, to the point where at the end of the generation PC games look a good deal better than their console counterparts. That is all you can count on. This generation might be slightly different in that the returns on improving graphics are diminishing (we'll have to wait and see what kind of crazy stuff happens in the future though).
 
Bobbler said:
Taking advantage of the GPU/System might take a while.

Of course the longer you work with a given set of hardware the more little things you'll find out about it... and to a certain extent the more performance you'll be able to get out of it.

The PC realm doesn't really have this; the way you get more performance out of a game there is by upgrading the hardware.

On a closed box system the GPUs of both X2 and PS3 should stay pretty comparable with PCs for a bit (on the average case for consumers that is) -- of course games on PC will look better soon after launch, but unless you upgrade your comp every launch cycle you won't be seeing it.

Don't take what devs say as the holy grail of solid information either. What a dev says and what a dev means or is thinking in his/her head may be different... There is one thing you can count on this next generation of consoles -- we'll have consoles that look better than games on PCs for a good while, and slowly PC games (if you have a good enough PC) will start to look better, to the point where at the end of the generation PC games look a good deal better than their console counterparts. That is all you can count on. This generation might be slightly different in that the returns on improving graphics are diminishing (we'll have to wait and see what kind of crazy stuff happens in the future though).


Yes, but with the advent of DirectX 10, won't that period of the consoles looking better be shortened significantly?
 
Doubtful -- DX10 seems more like a change in how things are done (unified shaders, etc.) than a pile of new features (compared to the difference between DX8 and DX9c, I don't think DX10 will be anywhere near as big a jump).

It is hard to say though... normal mapping sort of changed things drastically, and the same with heavy shaders -- can the DX10 era offer something as significant as those? Personally I think we're going to be hitting diminishing returns on generational improvements in graphics.

Also, look at the Xbox -- it still holds up pretty well (for its target resolution) against newer games. D3 on Xbox looks pretty good for hardware that's 3-4 years old. It certainly isn't DX9-era hardware in the Xbox, yet it is still able to do quite a bit.
 
Bobbler said:
It is hard to say though... normal mapping sort of changed things drastically, and the same with heavy shaders -- can the DX10 era offer something as significant as those? Personally I think we're going to be hitting diminishing returns on generational improvements in graphics.
Maybe displacement mapping? Unified shaders, with all 48 ALUs having vertex texturing ability, should open some doors. Ditto the hardware tessellation. And while Xenos does not have a geometry shader, it seems the cache-lock mechanism and dedicating an entire CPU core to streaming content (Xenos can read the CPU cache) should fill this role nicely.

And of course, with much more powerful shader setups, we will eventually shed the constraints of fixed-function legacy support for DX7/DX8 and get to explore some new things. Games like Far Cry, HL2, Doom 3, etc. all play on DX7 hardware. Basically the DX9 features are bells and whistles, not the baseline for their game design.

Eventually that will change, and when it does I think we will be surprised at what devs can do. Another big change, which the consoles can do, is to really push out high geometry. Obviously that takes memory and bandwidth (which both consoles work around), but one of the reasons we see so many games with low geometry is console porting and, again, legacy support.

My personal belief is that the next 2 years will see an acceleration in 3D graphics. I think GPUs have really been held back by legacy support and the older consoles, and that the new consoles will become the new baseline, with development transitioning to treat them as the standard. And that should mean some significant changes in how games are designed. Of course that is my opinion :D

Also, look at the Xbox -- it still holds up pretty well (for its target resolution) against newer games. D3 on Xbox looks pretty good for hardware that's 3-4 years old. It certainly isn't DX9-era hardware in the Xbox, yet it is still able to do quite a bit.
While I agree a lot of Xbox games look pretty decent compared to PC games (including PC ports like HL2, which so far seems to look nice; Far Cry looked good on the Xbox as well), overall I would not think Doom 3 is a good example, just because its target hardware is more DX8 class (GF3). Doom looks good, but it is mainly the dynamic lighting/shadowing plus normal-mapped characters. Doom 3 really is not doing anything, technically, that the Xbox should be incapable of. Obviously there is the performance gap, which was covered by shrinking down some things, reducing texture quality, and making more streaming level segments. Of course it is a miracle they could port it to a system with 64MB of TOTAL memory. :oops: I personally think D3 has a very rich and solid look (with AA it always made me feel like it was PS1-class CGI). So I am not knocking the game itself, as it is a good example of putting hardware to use. But the NV2A in the Xbox has the ability to do the things D3 asks.
 
Acert93 said:
While I agree a lot of Xbox games look pretty decent compared to PC games (including PC ports like HL2, which so far seems to look nice; Far Cry looked good on the Xbox as well), overall I would not think Doom 3 is a good example, just because its target hardware is more DX8 class (GF3). Doom looks good, but it is mainly the dynamic lighting/shadowing plus normal-mapped characters. Doom 3 really is not doing anything, technically, that the Xbox should be incapable of. Obviously there is the performance gap, which was covered by shrinking down some things, reducing texture quality, and making more streaming level segments. Of course it is a miracle they could port it to a system with 64MB of TOTAL memory. :oops: I personally think D3 has a very rich and solid look (with AA it always made me feel like it was PS1-class CGI). So I am not knocking the game itself, as it is a good example of putting hardware to use. But the NV2A in the Xbox has the ability to do the things D3 asks.

That is sort of my point though. We are only just now seeing DX9 games being made -- Xenos and RSX are made for DX9+. So like Doom 3 on the Xbox, games in the future will find their way to the consoles and still look somewhat comparable (near the end of the generation). We won't be seeing games made with DX10 as their target for quite a while, so the consoles should last for as long as any previous generation of consoles has (between the closed box and the rather beefy starting point, they can last the 5 years and be no worse off than the current-gen consoles are at this point in time).
 
Bobbler said:
That is sort of my point though. We are only just now seeing DX9 games being made -- Xenos and RSX are made for DX9+. So like Doom 3 on the Xbox, games in the future will find their way to the consoles and still look somewhat comparable (near the end of the generation). We won't be seeing games made with DX10 as their target for quite a while, so the consoles should last for as long as any previous generation of consoles has (between the closed box and the rather beefy starting point, they can last the 5 years and be no worse off than the current-gen consoles are at this point in time).

RSX = DX9 + OpenGL
Xenos = DX9 + DX9.0L
 
Acert93 said:
But the NV2A in the Xbox has the ability to do the things D3 asks.

How good was the NV2A GPU in the Xbox compared to what PCs had at the same time? I'm kind of curious to see if the next-gen consoles are faring better this time around than this gen did at the same point.
 
Many effects were missing.

Acert93 said:
Doom 3 really is not doing anything, technically, that the Xbox should be incapable of.

Anything can be done on most hardware; the question is whether it can be done at a playable frame rate. There were many things in the PC version that the Xbox could not do at a playable frame rate, so those features were removed. Even with the cuts, the frame rate was often still barely playable.

Doom 3 on Xbox had many differences from the PC version. Not only was the level format changed, but the textures were lower resolution, the geometry was simplified, the lighting was simplified (with many real-time lighting and shadow effects gone too), and the normal maps were lower resolution. Even the way damage shows on enemy models was changed. The Xbox version looks good for a console game but is not comparable to the PC version in technical terms. Obviously the developer did a good job with the Xbox version's art direction, though, because many think it is very similar to the PC version despite so many technical compromises. The best thing about the Xbox version is the extensive normal mapping.

Regarding NV2A versus other hardware of the time: IIRC, in simple performance specs it was roughly equal to a GeForce2 GTS. However, its architecture was of the newer, more flexible and programmable style, so it was better suited to new kinds of effects.
 
Yes, but with the advent of DirectX 10, won't that period of the consoles looking better be shortened significantly?

I'm a bit confused as to why people are treating DirectX as a sort of quality metric. It's a standard for graphics hardware, but it doesn't explicitly denote the presence or absence of features in a system per se. This is seen when a new DirectX comes out and you download new drivers to support it. If you haven't got a hardware feature on the GPU, it can be disabled or you can emulate it in software. You can use ATi's shader creation software without an SM3.0 card, in a software rasterizer (though of course it's dead slow!).

e.g. if DirectX 10 adds virtual bozotron rendering, and nVidia release a GPU with hardware virtual bozotron rendering, that doesn't instantly mean XB360 and PS3 will not have virtual bozotrons. If that technology can be rendered on the CPU, for example, the system as a whole can support it. MS can release an API for XB360 that includes virtual bozotron rendering and maps it onto the CPU. If the technology fits the VMX units well, XB360 could be considered a DX10 system (if virtual bozotron rendering were the only new feature), even though the GPU itself isn't a DX10 part supporting all DX10 features. Likewise PS2's GPU is far from a DX-anything part, not supporting DX at all, but as a system the PS2 can achieve a lot of what DXn can (DX7, I think).
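
Here's a minimal sketch of that idea in code. Everything in it is hypothetical -- there is no "bozotron" feature and these are not real DirectX types or calls -- it just shows an API exposing one entry point and quietly dispatching to a GPU path when the hardware supports the feature, or to a CPU fallback when it doesn't:

```cpp
#include <iostream>

// Hypothetical capability flag -- not a real DirectX cap bit.
struct DeviceCaps {
    bool hasHardwareBozotrons;
};

// The API exposes a single entry point; where the work actually runs
// is an implementation detail hidden from the game code.
void RenderBozotrons(const DeviceCaps& caps) {
    if (caps.hasHardwareBozotrons) {
        std::cout << "rendering bozotrons on the GPU\n";
    } else {
        // Fallback: the runtime maps the feature onto the CPU
        // (the VMX units, in the XB360 example above).
        std::cout << "rendering bozotrons on the CPU\n";
    }
}

int main() {
    RenderBozotrons(DeviceCaps{false});  // no hardware support: CPU path
    RenderBozotrons(DeviceCaps{true});   // hardware support: GPU path
}
```

Either way the game just calls RenderBozotrons and gets bozotrons; whether that makes the box "a DX10 system" is really a question about the whole system, not the GPU.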

Looking at XB360 and PS3 as systems, even though Xenos is a DX9+ part and RSX is DX9 (assuming), there's nothing DX9+ can do that DX9 can't on PS3, AFAIK. Unless XB360's DX API makes use of MEMEXPORT to achieve results that the PS3 as a system can't match, the DX compliance of a GPU has little relation to what's achievable on a system. And over time the graphics of a system aren't going to be determined solely by the GPU, but by the whole system.

On a related note, what are the differences in specs between DirectX 9.5 (or whatever Xenos is) and OpenGL 2.0 Embedded?
 
Shifty Geezer said:
On a related note, what are the differences in specs between DirectX 9.5 (or whatever Xenos is) and OpenGL 2.0 Embedded?

Who pays for developing OpenGL? Is it open source? With MS it's obvious where the money comes from, but is there some business model (licensing?) which funds continued OpenGL development? Can OpenGL keep up, or is WGF going to eclipse it?

Can anyone keep up with the graphics researchers that MS has on the payroll? Or are advances in graphics still mostly being made at universities? (But MS has labs at Cambridge and other elite institutions?).
 
mckmas8808 said:
How good was the NV2A GPU in the Xbox compared to what PCs had at the same time? I'm kind of curious to see if the next-gen consoles are faring better this time around than this gen did at the same point.

NV2A was technically superior to the best GPU available at the time - the GF3 Ti500. In effect it had the core of a Ti500 but with the addition of an extra vertex shader and some other minor core enhancements.

It did, however, run 7 MHz slower than the Ti500 and was connected to memory that was quite a bit slower: 64 MB @ 6.4 GB/s shared by the whole system vs 64 MB @ 8 GB/s dedicated to graphics.
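
For reference, those bandwidth figures fall out of the usual bus-width times effective-data-rate arithmetic (assuming the commonly quoted 128-bit buses, 400 MT/s for the Xbox's unified DDR and 500 MT/s for the Ti500's memory):

$$
\tfrac{128}{8}\,\text{B} \times 400\,\text{MT/s} = 6.4\ \text{GB/s}, \qquad \tfrac{128}{8}\,\text{B} \times 500\,\text{MT/s} = 8.0\ \text{GB/s}
$$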

Compare that to Xenos, which is clearly technically superior to the R520, while in terms of raw performance it's difficult to judge because of their radically different architectures.

However, consider that while NV2A was required to render graphics at considerably lower resolution and image quality settings than PCs of the time, Xenos is being asked to render at PC levels of quality, and thus what power it does have will be spread thinner than NV2A's was.
 
wco81 said:
Who pays for developing OpenGL? Is it open source? With MS it's obvious where the money comes from, but is there some business model (licensing?) which funds continued OpenGL development? Can OpenGL keep up, or is WGF going to eclipse it?

Can anyone keep up with the graphics researchers that MS has on the payroll? Or are advances in graphics still mostly being made at universities? (But MS has labs at Cambridge and other elite institutions?).
MS aren't inventing rendering hardware technologies. They work with the GPU manufacturers, AFAIK, to discuss features that can be added in hardware. MS then provide a software interface for the GPUs: DirectX.

OpenGL is likewise a software interface for the graphics cards. ATi and nVidia provide drivers so that when you issue a DirectX or OpenGL graphics command, it's mapped onto the hardware.

The development of OpenGL isn't down to any one company, but a collection of them (http://www.opengl.org/about/arb/overview.html). The GPU manufacturers are free to develop new render technologies and submit features to the OpenGL specification.

The way I understand it, if ATi invents virtual bozotron rendering on their GPU, they can submit it to become a supported standard in OpenGL, and talk to MS to see if MS will include an interface in DirectX.

I don't think there's any reason to think DirectX will develop advanced graphics features that won't appear in OpenGL. As it currently stands, I don't know what the differences between OpenGL v2.0 and the version of DX on Xenos are (hence I asked!). AFAIK OpenGL v2.0 supports more features, such as CSGs, but I'm very hazy on the differences and might be off the mark with this understanding.
 
pjbliverpool said:
However, consider that while NV2A was required to render graphics at considerably lower resolution and image quality settings than PCs of the time, Xenos is being asked to render at PC levels of quality, and thus what power it does have will be spread thinner than NV2A's was.
Pretty good summary PJB.

On this last part I have a couple of comments about MS's approach. While some Xbox 1 games supported 720p, in general the standard was 480p. So 720p is a good 3x jump in pixel count. Xenos is being asked to do 720p as standard, while NV2A was not, so it will be tasked with more and spread a little thinner.

This is one reason MS probably made 720p the standard. 1080p is roughly 6.75x as many pixels as 480p, and even top-end GPUs choke at that resolution in current PC software--so it is highly unlikely that the consoles would handle 1080p well on the really cutting-edge games of tomorrow.
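
For reference, the raw pixel counts behind those ratios (taking 480p as 640x480):

$$
\begin{aligned}
640 \times 480 &= 307{,}200 \ \text{(480p)} \\
1280 \times 720 &= 921{,}600 \ \text{(720p)} \approx 3\times 480\text{p} \\
1920 \times 1080 &= 2{,}073{,}600 \ \text{(1080p)} = 6.75\times 480\text{p}
\end{aligned}
$$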

Another important point is the eDRAM. This resolves one of the bigger framebuffer/backbuffer bottlenecks that you have with current GPUs. That shifts the bottleneck to shader logic, which Xenos has in spades compared to current flagship GPUs (in peak performance, not counting any efficiency savings from the unified architecture, threaded scheduler, or decoupled TMUs).
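
As a very rough, assumption-laden illustration of the kind of traffic the eDRAM absorbs: assume 720p with 4xAA, 32-bit colour plus 32-bit Z per sample, one read and one write per sample touched, an average overdraw of 3, and 60 fps (the overdraw and read/write figures are purely illustrative):

$$
1280 \times 720 \times 4 \times (4+4)\,\text{B} \times 2 \times 3 \times 60 \approx 10.6\ \text{GB/s}
$$

That would be a big chunk of the 360's 22.4 GB/s unified GDDR3 bus if it all had to go through main memory, but it stays between the ROPs and the eDRAM on the daughter die instead.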

1024x768 / 1280x1024 without AA seems to be a pretty sweet spot even for mainstream parts (e.g. the 6600 GT) and enthusiast parts (X800/6800, 7800/X1800) across everything on the market. They do take a larger hit at higher resolutions + AA/AF, so it appears to me, on the whole, that most current PC games are limited by resources other than shaders in general.

Obviously the PC is going to advance and at some point offer software beyond the *quality* the consoles can do. But, just like this generation, AAA titles like Far Cry, Half-Life 2, Doom 3, etc. will get ported. They will do what they have always done: turn down the details and features.

With the R580 coming soon we should get a small taste of what Xenos is like, in some ways. It will be lacking features and eDRAM, and won't have the USA (unified shader architecture) -- and probably won't have ANY games that really test the shader logic without running into other bottlenecks -- but 48 ALUs in the pixel shader array should, in pixel shader benchmarks, give a general idea of Xenos (especially with some of the vertex work on the 360 being offloaded to the CPU). R580 will be here in 6 months; by next fall, when we see DX10 cards, Xenos and RSX will be behind both IHVs' flagship models, and in 2007 they will be competing in the area of the performance-midrange products. Of course their advantage is that they are in a closed box and can be designed for directly.


@Shifty: Dave's chart may be a good place to start on Xenos' feature set compared to OpenGL. Below that he adds MRT, Alpha-to-Mask, and elsewhere in the article he mentions MEMEXPORT, hardware tessellation, HOS, FP10, MSAA+FP10/16, etc.

I am not sure how DX10 compares to OpenGL 2.0, but it may just be easier to compare the Xenos feature set to whatever chips/cards (X1800, 7800) you have in mind.

Like I mentioned earlier, the hardware tessellation and the vertex shader arrangement (i.e. being able to load-balance all 48 ALUs, all of which have vertex shading ability, toward vertex work) should allow some techniques that are possible on current GPUs but just too slow to use in general.

I think this gen will be similar to the last. It takes devs a little while to learn what works and what does not. There are a lot of techniques out there that work, but the hardware has been too slow to run them. Just like with normal maps, devs will find the ones that work, refine them, and learn a good balance between quality and features. The focus on programmable shaders, in general, has been with the goal of opening these doors. So it is more up to the developers now in how they choose to use the hardware.

Hopefully this means we get a lot more games with their own distinct look and feel. I think, at least for the next couple of years, the tension will be more about what the hardware can do at an acceptable speed (not whether it can do it at all). There will be different approaches between the consoles (e.g. getting the same effect as hardware tessellation may need a different approach on the PS3), but I think, at first, it is more an issue of doing things differently. Down the road, will their unique features and functions give an edge to either side?

Dunno.
 
Acert93 said:
@Shifty: Dave's chart may be a good place to start on Xenos' feature set compared to OpenGL. Below that he adds MRT, Alpha-to-Mask, and elsewhere in the article he mentions MEMEXPORT, hardware tessellation, HOS, FP10, MSAA+FP10/16, etc.

I am not sure how DX10 compares to OpenGL 2.0, but it may just be easier to compare the Xenos feature set to whatever chips/cards (X1800, 7800) you have in mind.
I was actually wanting to know what graphics options are being presented to the devs through the graphics API, e.g. tessellation. Assuming XB360's implementation of DX has functions for tessellation (I don't know how tessellation is used and accessed), does OGL 2.0 have an equivalent -- which Sony+nVidia would provide using the SPEs if it's not on RSX -- or would PS3 developers need to write their own functions? The feature set needn't be implemented solely on the GPU, so comparing Xenos to RSX doesn't show how many hoops devs will have to jump through to solve certain problems.
 
Is there a fill-rate spec for the Rev GPU to compare to Xenos, anyway? I'd think it is a bit premature to say which may find itself throttled, with one at SD and the other at HD, until we know at least that, no?
 
Shifty Geezer said:
I was actually wanting to know what graphics options are being presented to the devs through the graphics API, e.g. tessellation. Assuming XB360's implementation of DX has functions for tessellation (I don't know how tessellation is used and accessed), does OGL 2.0 have an equivalent -- which Sony+nVidia would provide using the SPEs if it's not on RSX -- or would PS3 developers need to write their own functions? The feature set needn't be implemented solely on the GPU, so comparing Xenos to RSX doesn't show how many hoops devs will have to jump through to solve certain problems.

PS3 has OGL 2.0? News to me...
 
pjbliverpool said:
However, consider that while NV2A was required to render graphics at considerably lower resolution and image quality settings than PCs of the time, Xenos is being asked to render at PC levels of quality, and thus what power it does have will be spread thinner than NV2A's was.


... WRONG.

/GameStar/dev: Are there performance issues with the multi-threaded CryEngine 2 running on single-core PCs?

Cevat Yerli: The code can run sequentially. You're losing a bit of efficiency, but what you gain from optimization is greater. So the price of sustaining a loss of frame rate when running on a single-threaded PC is so small that you can easily win it back. Most PC games are not optimized anyway; Far Cry isn't either.

/GameStar/dev: With consoles, developers are getting astounding performance out of average hardware, because they have to. If this were the case with PCs, you'd probably only need a GeForce4 Ti to run Doom 3.

Cevat Yerli: My point exactly. The evolution of hardware is running at such a fast rate that you don't get to work with it for long. It's the same with CPUs: you have to take your time to optimize. The biggest problem with that is cache misses. Also, you should avoid global memory shared between the individual threads. Simply put: if we are both reaching into the same pot, the pot must not change. If I reach for an element before you, you don't get it anymore - or at the very least not the one you expected. To get around this you ideally change something in one step, pass the result back out to the shared memory, and release it for the other CPUs (unlocking).

... optimization is the answer.
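
A minimal C++ sketch of the locking pattern Yerli is describing (purely illustrative, nothing to do with CryEngine's actual code): each worker thread takes an element out of the shared "pot" under a lock, works on a private copy, then prints the result -- if another thread grabbed the element first, you simply don't get it.

```cpp
#include <iostream>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

std::mutex potMutex;                 // guards the shared "pot"
std::vector<int> pot = {1, 2, 3, 4}; // shared memory both threads reach into

// Take one element out of the pot, or nothing if another thread got there first.
std::optional<int> takeFromPot() {
    std::lock_guard<std::mutex> lock(potMutex); // "locking"
    if (pot.empty()) return std::nullopt;
    int item = pot.back();
    pot.pop_back();
    return item;
}                                               // lock released here: "unlocking"

void worker(int id) {
    while (auto item = takeFromPot()) {
        int result = *item * *item;                 // work on a private copy only
        std::lock_guard<std::mutex> lock(potMutex); // reuse the lock just to keep output tidy
        std::cout << "thread " << id << " squared " << *item
                  << " -> " << result << "\n";
    }
}

int main() {
    std::thread a(worker, 1), b(worker, 2);
    a.join();
    b.join();
}
```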
 