RSX is a faster 7800GTX


sklaar

bit-tech Interview

For those that don't know, if you've been living under a rock for the last few months, the graphics processor in the PlayStation 3 is NVIDIA-designed and is called RSX. How does it compare next to the 7800?

"The two products share the same heritage, the same technology. But RSX is faster," said Kirk.

But for how much longer, we wonder? With the PlayStation 3 not due until March 2006, won't the next generation of PC graphics be here by then? "At the time consoles are announced, they are so far beyond what people are used to, it's unimaginable," David comments. "At the time they're shipped, there's a narrower window until the next PC architecture." In other words, RSX looks incredible now, but when it launches, there'll be a shorter wait before PC graphics look better.

"However, what consoles have is a huge price advantage." And 'huge' is the appropriate word: pricing is still to be announed by Sony, but Playstation 3 could debut at £399 - the price of a 7800GTX board, yet offering so much more.

Whilst their relationship with Microsoft has become publicly tenuous, what about NVIDIA's relationship with their new console partner?

"So far our relationship with Sony has been great. We have a much closer relationship and share a much broader vision for the future of computing and graphics.

"When we came together a few years ago, we found a vision and experience that we shared. It sounds cliched, but Japanese companies are often trying to create a vision and make the technology follow that, not the other way round. We believed in that."
 
sklaar said:
For those that don't know, if you've been living under a rock for the last few months, the graphics processor in the PlayStation 3 is NVIDIA-designed and is called RSX. How does it compare next to the 7800?

I know you didn't write that, but coming from someone whose post count is 1, I found this thread quite funny...
 
thanks for the heads up, sklaar :)



I will post a link to the article here: http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/1.html



BTW, David Kirk may have hinted at the solution to PS3's AA w/ HDR:
Using AA with HDR

For those of you with super-duper graphics cards, you will have come across a problem: you can't use Anti-Aliasing when using HDR lighting, for example in Far Cry. In these cases, it's a situation where you have to choose one or the other. Why is this, and when is the problem going to get solved?

"OK, so the problem is this. With a conventional rendering pipeline, you render straight into the final buffer - so the whole scene is rendered straight into the frame buffer and you can apply the AA to the scene right there."

"But with HDR, you render individual components from a scene and then composite them into a final buffer. It's more like the way films work, where objects on the screen are rendered separately and then composited together. Because they're rendered separately, it's hard to apply FSAA (note the full-screen prefix, not composited-image AA! -Ed) So traditional AA doesn't make sense here."

So if it can't be done in existing hardware, why not create a new hardware feature of the graphics card that will do both?

"It would be expensive for us to try and do it in hardware, and it wouldn't really make sense - it doesn't make sense, going into the future, for us to keep applying AA at the hardware level. What will happen is that as games are created for HDR, AA will be done in-engine according to the specification of the developer.

"Maybe at some point, that process will be accelerated in hardware, but that's not in the immediate future."

But if the problem is the size of the frame buffer, wouldn't the new range of 512MB cards help this?

"With more frame buffer size, yes, you could possibly get closer. But you're talking more like 2GB than 512MB."
[source: http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/3.html ]
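
To picture the difference Kirk is describing, here's a very rough sketch of the two paths. None of this is a real API - the RenderTarget type and the commented-out helper calls are made up purely to show the shape of the data flow: one resolvable buffer in the conventional case, several high-precision intermediate targets composited together in the HDR case.

Code:
// Hypothetical sketch - not a real graphics API, just the shape of the data flow.

struct RenderTarget { int width, height, bytesPerPixel; };

// Conventional path: the whole scene lands in one multisampled buffer,
// and the hardware resolves (averages) the samples at the end - that is
// where traditional full-screen AA gets applied.
void RenderConventional(RenderTarget& backbuffer)
{
    RenderTarget msaaTarget = { backbuffer.width, backbuffer.height, 4 };   // 8-bit RGBA
    // DrawScene(msaaTarget);                // every triangle goes into the same buffer
    // ResolveMSAA(msaaTarget, backbuffer);  // hardware AA is applied right here
}

// HDR path: the scene is rendered into high-precision (e.g. FP16) off-screen
// targets, bloom/exposure passes read those, and a final tone-mapping pass
// composites everything into the displayable backbuffer. No single buffer
// ever holds the whole scene in a form the hardware can simply resolve,
// which is why the traditional full-screen resolve no longer applies.
void RenderHDR(RenderTarget& backbuffer)
{
    RenderTarget hdrScene  = { backbuffer.width,     backbuffer.height,     8 };  // FP16 RGBA
    RenderTarget bloomPass = { backbuffer.width / 2, backbuffer.height / 2, 8 };
    // DrawScene(hdrScene);
    // ExtractAndBlurHighlights(hdrScene, bloomPass);
    // Tonemap(hdrScene, bloomPass, backbuffer);     // composite into the final 8-bit image
}

As a rough sanity check on the 2GB figure (my own back-of-the-envelope numbers, not Kirk's): a single 1600x1200 FP16 colour target is already about 15MB, brute-forcing AA by supersampling it 2x in each direction takes that to roughly 60MB, and once you add depth buffers, several intermediate targets and the compositing passes at the same resolution, the memory bill climbs very quickly.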
 
Any idea how that would work? Are these objects created along with, say, an AA'd mask for compositing? Can someone describe this in more detail? Like in a scene of 200 monsters in a field with trees, is every object rendered in full and composited? Sounds like overkill. Why not just render per pixel?

Obviously HDR rendering is different to normal rendering, and that means I haven't got a clue how it works!
 
David Kirk on Unified Shader architectures:
Unified Shader architectures

Architecturally, one of the ongoing debates in graphics is about whether or not a Unified Shader pipeline is desirable. Let's get some quick context.

Graphics scenes are made up of vertices and pixels. These are shaded by two different parts of the graphics chip: the vertex pipeline and the pixel pipeline. There are two further aspects to this: the hardware and software layers. It is possible to have a unified software layer for programming, but a separate hardware layer.

ATI are firmly of the belief that unifying these two pipelines will lead to more efficiency.

Microsoft have written a unified software layer into the next version of WGF. Does this signify that Microsoft and ATI are on the same track, and NVIDIA are not following the same path?

"Well, let's get something straight. Microsoft makes APIs (Application Programming Interfaces- Ed) not hardware. WGF is a specification for an API specification - it's software, not hardware."

"For them, implementing Unified Shaders means a unified programming model. Since they don't build hardware, they're not saying anything about hardware.

"Debating unified against separate shader architecture is not really the important question. The strategy is simply to make the vertex and pixel pipelines go fast. The tactic is how you build an architecture to execute that strategy. We're just trying to work out what is the most efficient way.

"It's far harder to design a unified processor - it has to do, by design, twice as much. Another word for 'unified' is 'shared', and another word for 'shared' is 'competing'. It's a challenge to create a chip that does load balancing and performance prediction. It's extremely important, especially in a console architecture, for the performance to be predicable. With all that balancing, it's difficult to make the performance predictable. I've even heard that some developers dislike the unified pipe, and will be handling vertex pipeline calculations on the Xbox 360's triple-core CPU." :cry:

"Right now, I think the 7800 is doing pretty well for a discrete architecture?

So what about the future?

"We will do a unified architecture in hardware when it makes sense. When it's possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in this architecture."
[source: http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/4.html ]

(:cry: smilie added by me)
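
To make Kirk's "shared means competing" point a bit more concrete, here's a toy sketch - all the numbers are invented and this is nothing like real hardware behaviour - of why a unified pool can keep more units busy, but only if the scheduler guesses the vertex/pixel split right for every frame:

Code:
#include <algorithm>
#include <cstdio>

int main()
{
    const int totalALUs = 24;
    const int fixedVertexALUs = 8, fixedPixelALUs = 16;

    // Hypothetical per-frame workloads, in "units of work one ALU can do per frame".
    struct Frame { int vertexWork, pixelWork; };
    const Frame frames[] = { { 4, 20 }, { 16, 8 }, { 8, 16 } };

    for (const Frame& f : frames)
    {
        // Fixed split: each side can only ever use its own units.
        int fixedBusy = std::min(f.vertexWork, fixedVertexALUs)
                      + std::min(f.pixelWork, fixedPixelALUs);

        // Unified pool: any unit can take either kind of work - assuming a
        // perfect (and perfectly predictable) load balancer, which is the hard part.
        int unifiedBusy = std::min(f.vertexWork + f.pixelWork, totalALUs);

        std::printf("vertex=%2d pixel=%2d | fixed split: %2d/%d busy, unified: %2d/%d busy\n",
                    f.vertexWork, f.pixelWork,
                    fixedBusy, totalALUs, unifiedBusy, totalALUs);
    }
    return 0;
}

The fixed split wastes units whenever a frame is lopsided one way or the other, which is the efficiency argument for unification; the unified numbers only hold if the load balancer actually finds that ideal split every frame, which is exactly the predictability problem Kirk is pointing at.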
 
Quote:
RSX is faster than the 7800GTX


Uhhhh, duh!?!?! Did anyone doubt this, especially in a closed platform like the PS3?

Quote:
and another word for 'shared' is 'competing'


Nice way to put a negative spin on something positive the competition is doing. Nice try.

Give me a break, since when do 'shared' and 'competing' have the same meaning? I think someone needs to brush up on their English skills. (Yeah, I know, me too, but this isn't getting published.)
Quote:
The strategy is simply to make the vertex and pixel pipelines go fast


More spin, and wrong.

Quote:
I've even heard that some developers dislike the unified pipe, and will be handling vertex pipeline calculations on the Xbox 360's triple-core CPU."



Yeah, and I heard there actually are cars that run on water, and it's a government conspiracy to cover it up.
 
Wunderchu said:
David Kirk on Unified Shader architectures:
Unified Shader architectures

Architecturally, one of the ongoing debates in graphics is about whether or not a Unified Shader pipeline is desirable. Let's get some quick context.

Graphics scenes are made up of vertices and pixels. These are shaded by two different parts of the graphics chip: the vertex pipeline and the pixel pipeline. There are two further aspects to this: the hardware and software layers. It is possible to have a unified software layer for programming, but a separate hardware layer.

ATI are firmly of the belief that unifying these two pipelines will lead to more efficiency.

Microsoft have written a unified software layer into the next version of WGF. Does this signify that Microsoft and ATI are on the same track, and NVIDIA are not following the same path?

"Well, let's get something straight. Microsoft makes APIs (Application Programming Interfaces- Ed) not hardware. WGF is a specification for an API specification - it's software, not hardware."

"For them, implementing Unified Shaders means a unified programming model. Since they don't build hardware, they're not saying anything about hardware.

"Debating unified against separate shader architecture is not really the important question. The strategy is simply to make the vertex and pixel pipelines go fast. The tactic is how you build an architecture to execute that strategy. We're just trying to work out what is the most efficient way.

"It's far harder to design a unified processor - it has to do, by design, twice as much. Another word for 'unified' is 'shared', and another word for 'shared' is 'competing'. It's a challenge to create a chip that does load balancing and performance prediction. It's extremely important, especially in a console architecture, for the performance to be predicable. With all that balancing, it's difficult to make the performance predictable. I've even heard that some developers dislike the unified pipe, and will be handling vertex pipeline calculations on the Xbox 360's triple-core CPU." :cry:

"Right now, I think the 7800 is doing pretty well for a discrete architecture?

So what about the future?

"We will do a unified architecture in hardware when it makes sense. When it's possible to make the hardware work faster unified, then of course we will. It will be easier to build in the future, but for the meantime, there's plenty of mileage left in this architecture."
[source: http://www.bit-tech.net/bits/2005/07/11/nvidia_rsx_interview/4.html ]

(:cry: smilie added by me)

nVidia says something negative about Unified shaders... what a shock :LOL:

They will downplay Unified shaders UNTIL they have a unified chipset of their own.... then they will spin it, and claim they were the ones who "did it right" :rolleyes:
 
Ultimately you want your vertices and pixels to go fast - how is that wrong?

Anyway, a less enlightening interview than one might have hoped for - RSX is faster than the 7800GTX? Wow, the clockspeeds don't indicate that at all! ;)

The unified shader comments are probably the more interesting part of the interview.
 
Quote:
nVidia says something negative about Unified shaders... what a shock


I know, how lame: 'Well, we're behind the competition on this, so we're going to downplay its importance. Oh, but yeah, we'll be building unified shaders in the future.'
 
Titanio said:
Ultimately you want your vertices and pixels to go fast - how is that wrong?

Anyway, a less enlightening interview than one might have hoped for - RSX is faster than the 7800GTX? Wow, the clockspeeds don't indicate that at all! ;)

The unified shader comments are probably the more interesting part of the interview.

The most biased too. Funny that you find it interesting :rolleyes:
 
I found the unified shader perspective interesting and not pure propaganda. nVidia have a different approach to unified shaders than ATi, but they haven't written them off. What they've said is that they need to get the balancing right, and at the moment they haven't got it. In no way are they saying unified shaders suck as a technology - just that it's hard to get right.

This isn't a PR opinion, but a technical observation. The Xenos article states that ATi's load-balancing algorithms can be overridden by programmers, showing that the load balancing is in essence the weak link in the Unified Shader chain.

So if nVidia's experiments with unified shaders have shown it to be a no-go thus far, while ATi's show it's a good thing (though not in the PC space, importantly), have ATi managed the secret recipe of balancing, or will Xenos fail to achieve the much-touted higher efficiency?

Also, why haven't ATi got unified shaders for PC? Perhaps because in a closed-box environment the load balancing can be tailored to the application, whereas on a PC the 3D pipeline should work transparently across varying architectures, rather than needing developers to write load-balancing code that most systems don't support.

As for devs writing vertex work on CPUs... well, large sack of salt, of course. But that would enable full pixel shading from the Xenos instead of sharing it with vertex work. In some situations this flexibility might be a good thing.
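
For what it's worth, here's a minimal sketch of what "doing the vertex work on the CPU" might look like - every type and function name here is hypothetical, the point is only the division of labour: a CPU core transforms the vertices into a buffer of already-processed data, the GPU's vertex stage becomes a near pass-through, and on a unified part the ALUs that would have run vertex shaders are free to do pixel work instead.

Code:
#include <cstddef>
#include <vector>

struct Vec4   { float x, y, z, w; };
struct Matrix { float m[4][4]; };

struct SourceVertex    { Vec4 position; /* plus bone weights, normals, ... */ };
struct ProcessedVertex { Vec4 clipPosition; /* plus whatever the pixel shader needs */ };

Vec4 Transform(const Matrix& m, const Vec4& v)
{
    Vec4 r;
    r.x = m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w;
    r.y = m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w;
    r.z = m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w;
    r.w = m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w;
    return r;
}

// Runs on a spare CPU core/thread; the output goes into a dynamic vertex
// buffer that the GPU then draws with a trivial pass-through vertex program.
void ProcessVerticesOnCPU(const std::vector<SourceVertex>& in,
                          const Matrix& worldViewProj,
                          std::vector<ProcessedVertex>& out)
{
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i].clipPosition = Transform(worldViewProj, in[i].position);
}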
 
Shifty Geezer said:
As for devs writing vertex work on CPUs... well, large sack of salt, of course. But that would enable full pixel shading from the Xenos instead of sharing it with vertex work. In some situations this flexibility might be a good thing.
true :)

(BTW, my :cry: smilie that I posted is directed at David Kirk's comment that "some developers dislike the unified pipe", not at the comment that they are deciding to use Xenon's CPU for vertex work)
 
Of course NVIDIA aren't going to praise Unified Shaders, just like ATI weren't praising SM3 when NVIDIA had it and they didn't.

It's still interesting to see why they think their solution is better.

Just like it's interesting to see why MS and Sony think their CPU approaches are better than each other.

Once you filter the PR crap, it's nice to hear why they went for the approach they did.
 
Wunderchu said:
Shifty Geezer said:
As for devs writing vertex work on CPUs... well, large sack of salt, of course. But that would enable full pixel shading from the Xenos instead of sharing it with vertex work. In some situations this flexibility might be a good thing.
(BTW, my :cry: smilie that I posted is directed at David Kirk's comment that "some developers dislike the unified pipe", not at the comment that they are deciding to use Xenon's CPU for vertex work)
Every new idea is gonna find someone who has to use it who'll dislike it! If only because people like working with what they already know. 'Some' devs means nothing of consequence to me. If he said 'most devs' then it might be something to think about.
 
Shifty Geezer said:
Wunderchu said:
Shifty Geezer said:
As for devs writing vertex work on CPUs... well, large sack of salt, of course. But that would enable full pixel shading from the Xenos instead of sharing it with vertex work. In some situations this flexibility might be a good thing.
(BTW, my :cry: smilie that I posted is directed at David Kirk's comment that "some developers dislike the unified pipe", not at the comment that they are deciding to use Xenon's CPU for vertex work)
Every new idea is gonna find someone who has to use it who'll dislike it! If only because people like working with what they already know. 'Some' devs means nothing of consequence to me. If he said 'most devs' then it might be something to think about.
heh.. I suppose so :)
 
I thought by competing, he meant that there would be a competition between pixel and vertex operations. He mentions load balancing in the following sentence.


Oh and I bet all devs hate multithreaded programming (compared to before). ;)
 
Alstrong said:
I thought by competing, he meant that there would be a competition between pixel and vertex operations. He mentions load balancing in the following sentence.

Vertex and pixel operations already are competing in current GPUs.

Either the pixel shaders are waiting because the vertex shaders can't keep up - or the vertex shaders can't push any more triangles into the rasteriser because the pixel shaders can't keep up.

Xenos provides an automatic load balancer between the two - so that no ALUs go unused - they're always fully assigned to the current workload, whatever the relative proportions of the vertex and pixel workloads.

The developers can fine-tune the workloads by providing compiler hints about the typical amount of work in, for example, a loop. If the scheduler knows that a loop typically runs up to four times, it can use that information to bias scheduling towards looping 1 to 4 times...
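
As a toy illustration of how that kind of hint might be used - this is not Xenos's actual scheduler or shader syntax, just the idea that an expected trip count becomes an estimated cost, which then biases how the shared ALUs get divided up:

Code:
#include <cstdio>

struct ShaderProfile
{
    int baseInstructions;     // straight-line instruction count
    int loopBodyInstructions; // instructions inside the data-dependent loop
    int expectedLoopTrips;    // the compiler hint: "this loop typically runs up to 4 times"
};

int EstimatedCost(const ShaderProfile& s)
{
    return s.baseInstructions + s.loopBodyInstructions * s.expectedLoopTrips;
}

int main()
{
    ShaderProfile vertexShader = { 40, 0, 0 };   // simple transform, no loop
    ShaderProfile pixelShader  = { 30, 12, 4 };  // e.g. a 4-iteration lighting loop

    const int totalALUs = 48;
    const int vertexCost = EstimatedCost(vertexShader);   // 40
    const int pixelCost  = EstimatedCost(pixelShader);    // 30 + 12*4 = 78

    // Bias the split toward the workload expected to be more expensive.
    int vertexALUs = totalALUs * vertexCost / (vertexCost + pixelCost);
    int pixelALUs  = totalALUs - vertexALUs;

    std::printf("estimated split: %d ALUs for vertex work, %d for pixel work\n",
                vertexALUs, pixelALUs);
    return 0;
}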

Jawed
 