Concerning Console Graphics Chipsets

joe75 said:
... WRONG.

/GameStar/dev: Are there performance issues with the multi-threaded CryEngine 2 running on single-core PCs?

Cevat Yerli: The code can run sequentially. You lose a bit of efficiency, but what you gain through optimization is greater. So the frame-rate penalty when running on a single-core PC is so small that you can easily win it back. Most PC games are not optimized anyway; Far Cry isn't either.

/GameStar/dev: With consoles, developers are getting astounding performance out of average hardware, because they have to. If this were the case with PCs, you'd probably only need a GeForce4 Ti to run Doom 3.

Cevat Yerli: My point exactly. The evolution of hardware runs at such a fast rate that you don't get to work with any of it for long. It's the same with CPUs: you have to take your time to optimize. The biggest problems there are cache misses. You should also avoid shared global memory between the individual threads. Simply put: if we are both reaching into the same pot, the pot must not change underneath us. If I reach for an element before you, you don't get it any more - or at least not the one you expected. To get around this, you ideally change something in one step, write the result back to the shared memory, and release it for the other CPUs (unlocking).
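(A minimal C++ sketch of the lock/modify/publish pattern Yerli is describing - purely an illustration, not CryEngine code, and every name in it is invented:)

// Illustration only. Each worker pulls an item out of a shared "pot",
// does its real work on private data, and publishes the result in one
// short locked step so other CPUs never see a half-changed state.
#include <mutex>
#include <optional>
#include <utility>
#include <vector>

class WorkPot {
public:
    explicit WorkPot(std::vector<int> items) : items_(std::move(items)) {}

    // Take one element out of the pot. The mutex guarantees two threads
    // reaching in at the same time cannot grab the same item.
    std::optional<int> take() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (items_.empty())
            return std::nullopt;          // someone else got there first
        int item = items_.back();
        items_.pop_back();
        return item;
    }

    // Publish a finished result: the expensive work happened outside the
    // lock, so the critical section stays tiny.
    void publish(int result) {
        std::lock_guard<std::mutex> lock(mutex_);
        results_.push_back(result);
    }

private:
    std::mutex mutex_;
    std::vector<int> items_;
    std::vector<int> results_;
};

The locked sections are so short that on a single-core machine they cost almost nothing, which is why the penalty he mentions stays small.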

... the optimisation is the answer.

Optimisation is the answer to what? Your response has nothing to do with my previous post.

Yes, software will be optimised for Xenos just as it was optimised for NV2A. That's unrelated to the point I was making: Xenos needs more power, proportionate to the PCs of its time, than NV2A did, because it's being asked to render at the same quality as contemporary PCs rather than at much lower quality.
 
Shifty Geezer said:
I was actually wanting to know what graphics options are being presented to the devs through the graphics API, e.g. tessellation. Assuming XB360's implementation of DX has functions for tessellation (I don't know how tessellation is used and accessed), does OGL 2.0 provide the equivalent - which Sony+nVidia would implement using the SPEs if it's not on RSX - or would PS3 developers need to write their own functions? The feature set needn't be implemented solely on the GPU, so comparing Xenos to RSX doesn't show how many hoops devs will have to jump through to solve certain problems.
That I would not know ;) I am sure much of that is either under NDA right now or devs are too busy to discuss it. From what I understand, the API layer on the 360 is very thin - a custom version of DX meant just for Xenos - and (of course) devs can write directly to the hardware.

Using your example of the SPEs for tessellation... well, nice idea, but I am sure no one who actually knows could give an answer ;)

Of course there are some fine lines and grey areas... using tessellation as an example:

• GPU+API supported
• GPU supported (not in API)
• API support using a software solution (no GPU support)
• Neither API nor hardware support; any solution would need to be designed by the developer

The first and second examples are not very difficult, e.g. ATI cards currently support geometry instancing even though the APIs they run under do not expose it. This just requires the developer to make the calls directly to the hardware.
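As far as I know that usually happens through a vendor extension smuggled through the existing API rather than truly raw hardware access. From memory, ATI's instancing exposure under D3D9 looked roughly like this (the FOURCC trick is recalled, not checked, so treat the details as an assumption):

// Rough sketch, from memory, of how ATI reportedly exposed instancing on
// parts whose caps did not advertise it. Verify against vendor docs before
// relying on it.
#include <d3d9.h>

const D3DFORMAT kInstancingHack =
    static_cast<D3DFORMAT>(MAKEFOURCC('I', 'N', 'S', 'T'));

bool HasVendorInstancing(IDirect3D9* d3d, D3DFORMAT adapterFmt)
{
    // The driver answers "yes" to this otherwise meaningless format query
    // if it understands the instancing extension.
    return SUCCEEDED(d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                            D3DDEVTYPE_HAL,
                                            adapterFmt, 0,
                                            D3DRTYPE_SURFACE,
                                            kInstancingHack));
}

void EnableVendorInstancing(IDirect3DDevice9* device)
{
    // Writing the FOURCC into an unrelated render state is the agreed
    // back door that switches the feature on in the driver.
    device->SetRenderState(D3DRS_POINTSIZE, kInstancingHack);
}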

The third example could be a great solution if designed well. Putting something like that in the API could be very nice for developers under a time constraint. How good it is would depend a lot on the hardware and the actual design of the API. It could be tricky, but using your example of RSX+SPEs, well, there is a fair bit of performance there and good bandwidth. It may not be as efficient as a natively integrated design, but then again the brute force and flexibility of the SPEs could be a huge win.
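To make case three concrete, here is what the "software solution" half could look like: plain CPU (or SPE) code that subdivides triangles and hands the GPU ordinary vertex data. Completely hypothetical - none of this reflects any real console API:

// One level of midpoint subdivision done entirely in software, producing a
// flat triangle list the GPU can draw as-is. Illustration only.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Midpoint(const Vec3& a, const Vec3& b)
{
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

// Splits every input triangle into four by inserting edge midpoints.
// Input and output are flat triangle lists (3 vertices per triangle).
std::vector<Vec3> SubdivideOnce(const std::vector<Vec3>& tris)
{
    std::vector<Vec3> out;
    out.reserve(tris.size() * 4);
    for (size_t i = 0; i + 2 < tris.size(); i += 3) {
        const Vec3& a = tris[i];
        const Vec3& b = tris[i + 1];
        const Vec3& c = tris[i + 2];
        const Vec3 ab = Midpoint(a, b);
        const Vec3 bc = Midpoint(b, c);
        const Vec3 ca = Midpoint(c, a);
        // Three corner triangles plus the centre triangle.
        out.insert(out.end(), { a, ab, ca,  ab, b, bc,  ca, bc, c,  ab, bc, ca });
    }
    return out;
}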

In general I think this is kind of what MS has done with the geometry shader. Since the Xenon CPU is designed to do a dot product per cycle and can stream content to the GPU directly without affecting memory bandwidth, the Xbox 360 basically has the functionality of the geometry shader without the overhead of the silicon. So if a developer needs the Xbox 360 to do that kind of task they can ask the CPU to do it; if not, the CPU can do something else.
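For illustration, the kind of job being described - geometry amplification that a geometry shader would normally do - might look like this on the CPU side (my own sketch, nothing to do with the actual Xbox 360 libraries):

// Expands each particle position into a camera-facing quad (two triangles)
// before the vertices are handed to the GPU. Illustration only.
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }

// 'right' and 'up' are the camera's basis vectors; 'size' is the half-extent.
std::vector<Vec3> ExpandPointsToQuads(const std::vector<Vec3>& points,
                                      Vec3 right, Vec3 up, float size)
{
    const Vec3 r = Scale(right, size);
    const Vec3 u = Scale(up, size);
    std::vector<Vec3> verts;
    verts.reserve(points.size() * 6);   // two triangles per point
    for (const Vec3& p : points) {
        const Vec3 bl = Sub(Sub(p, r), u);   // bottom-left
        const Vec3 br = Add(Sub(p, u), r);   // bottom-right
        const Vec3 tl = Add(Sub(p, r), u);   // top-left
        const Vec3 tr = Add(Add(p, r), u);   // top-right
        verts.insert(verts.end(), { bl, br, tr,  bl, tr, tl });
    }
    return verts;
}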

I am sure the SPEs will be able to do the same type of offloading. Whether this stuff makes it into an API... who knows. My guess would be no, only because the needs of each game are different. If you are going to do tessellation on the PS3, or use geometry shading, you might as well write an engine that is really efficient for what you are doing and has the exact features you need. But that is only a guess.

Of course this is just a wild jab in the dark ;)
 
Seeing as DeanoC is surprised about OGL 2.0 on RSX, the point seems moot anyway. Whatever features OGL 2.0 has won't be seen in PS3's API if OGL 2.0 isn't supported.

Still, I thought I'd heard rumours and a Google comes up trumps.
Los Angeles (CA) - A consortium of the world's leading embedded systems manufacturers and developers, meeting at SIGGRAPH 2005, today announced OpenGL ES 2.0, an improved derivative of OpenGL for use in cellphones, embedded systems, including Sony's Playstation 3.
http://www.tgdaily.com/2005/08/01/open_gl_es_2/

So unless Deano was making a major distinction between OpenGL and OpenGL ES as used in PS3, I'm confused.:???:
 
Well, there is also the issue of "support", e.g. DX7 cards can use the DX9 API and run DX9 games. The question, of course, is whether a DX7 GPU supports DX9 features, which it does not.

It is an odd bit of semantics, but in the GPU world there is a pretty big difference between "supporting an API" and "supporting all the features in an API".
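A small D3D9 illustration of the difference: the same API call works on old hardware, but the reported caps tell you which features you can actually use (the SM2.0 threshold is just an example):

// "Runs under the DX9 API" vs "supports DX9 features": the device is
// created through the same API either way, but the caps bits reveal what
// the hardware can actually do.
#include <d3d9.h>

bool SupportsShaderModel2(IDirect3DDevice9* device)
{
    D3DCAPS9 caps = {};
    if (FAILED(device->GetDeviceCaps(&caps)))
        return false;

    // A DX7-class part happily answers through the DX9 API, but reports
    // shader versions below 2.0 here, so the game falls back to a
    // fixed-function path instead of its SM2.0 renderer.
    return caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0) &&
           caps.VertexShaderVersion >= D3DVS_VERSION(2, 0);
}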

Hopefully we can learn more about RSX and the PS3 API soon, but I get the feeling we may not until after launch :(
 