RSX Related: Inquirer releases Pics of 7800 and Final Specs

Status
Not open for further replies.
Rockster said:
RSX: 8.8 Gigapixels (assuming same number of ROPs as G70)

Seems unlikely, as you would need over 70GB/sec to support that in plain-Jane 32-bit color.

How is it different from the X850XT's >8GPixels/s on a memory bandwidth of 38GB/s?

:?:
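As a sanity check on those figures, the arithmetic behind the "over 70GB/sec" claim can be sketched like this (the 16-ROP count and the 8-bytes-per-pixel cost are assumptions for illustration, not confirmed RSX specs):

```python
# Back-of-the-envelope fillrate-vs-bandwidth check.
# Assumption: each pixel written touches ~8 bytes of memory
# (32-bit color write + 32-bit Z); real traffic varies with
# compression, blending, and overdraw.

def fillrate_bandwidth_gb(gpixels_per_s, bytes_per_pixel=8):
    """GB/s of raw memory traffic implied by a peak fillrate."""
    return gpixels_per_s * bytes_per_pixel

rsx_fill = 16 * 0.550  # 16 ROPs (assumed, same as G70) x 550 MHz = 8.8 Gpix/s
print(fillrate_bandwidth_gb(rsx_fill))  # ~70.4 GB/s, the figure quoted above
```

The X850XT comparison in the reply works the same way: its >8 Gpix/s peak also implies more raw traffic than its 38GB/s bus supplies, which is one reason peak fillrate is rarely sustained from main memory.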
 
Titanio said:
Rockster said:
External main memory bandwidth:
Xenos: 22GB/s
G70: 38.4GB/s
RSX: 22GB/s + 25GB/s

Xenos: 22GB/s + 32GB/s* :?: I like my asterisk.

I was comparing "external main memory bandwidth". I'd consider the 32GB/s to the daughter die to be more internal bandwidth than external, and moreover not to main memory. I tried thinking of a fair way of putting that in there, but couldn't. The figure doesn't belong in a comparison to the others' main memory figures, I don't think, but the above may appear not to tell the whole story re. Xenos. I guess it's just another example of how difficult all these chips are to compare directly.

edit - hehe, yeah, I see your asterisk now. A qualified comparison is certainly possible, but still..

I don't think there is an "if" about it.

If part X does task A, B, and C--where C is the most intensive task

You cannot compare it directly to Y

If part Y does task A, B and isolates intensive task C elsewhere

PS3 uses its external main memory bandwidth differently than the Xbox 360.

I think the entire long series of debates has been an issue of semantics. "Bandwidth" is a general term that relates to a HW reality, used to discuss a real-world limitation. When someone comes up with a unique idea that skirts the limitations of this "bandwidth", it is no longer a feasible comparison, because while people may be talking about the same part, these parts do not do the same tasks or have the same limitations.

Imagine if the Rev has a fast vector coprocessor for particles and physics that did 500GFLOPs. Even if the CPU only did, say, 30GFLOPs, it would not be fair to compare Xenon or CELL directly because the Rev is offloading a lot of intensive tasks to a dedicated chip.

Basically that is the situation with the Xbox 360, but more so, since every game uses a lot of backbuffer bandwidth.

The comparison of main system bandwidth is an antiquated metric based on the assumption that both systems use main memory in a similar fashion. They don't, so in regard to GPUs and their limitations it is not a relevant comparison.

How the bandwidth savings from the eDRAM affect the end product, well, we will have to wait and see. We do know one benefit though: basically free 4x AA without sucking up main memory bandwidth. The PS3 will be using main memory bandwidth in such situations, so the question is: which is greater:

The extra system bandwidth on PS3

or

The bandwidth savings on Xenos from the eDRAM

I am sure most of us do not know the answer to that, and I suspect it will be a game-by-game issue. Games with HDR, lots of AA, etc. would tend to favor the savings approach, it would seem (although Xenos seems to be FP10-oriented, so that comes into play also).
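As a rough illustration of what "free 4x AA" means in bandwidth terms, here is a toy estimate of framebuffer traffic. Every parameter (720p target, 8 bytes per sample, one read plus one write per sample, 2x overdraw) is my assumption for the sketch, not a platform spec:

```python
# Toy framebuffer-traffic estimate: the kind of color/Z read-write
# load Xenos keeps on its eDRAM daughter die instead of main memory.

def framebuffer_gb_per_s(width, height, fps, samples, overdraw=2.0,
                         bytes_per_sample=8, accesses=2):
    """Estimated GB/s of color+Z traffic for a render target."""
    samples_touched = width * height * samples * overdraw
    return samples_touched * bytes_per_sample * accesses * fps / 1e9

no_aa = framebuffer_gb_per_s(1280, 720, 60, samples=1)
aa4x = framebuffer_gb_per_s(1280, 720, 60, samples=4)
print(no_aa, aa4x)  # roughly 1.8 vs 7.1 GB/s under these assumptions
```

The point is the multiplier: 4x AA quadruples the framebuffer traffic, and alpha blending or heavier overdraw scales it further, so moving it off the main bus matters most in exactly the HDR/AA-heavy cases mentioned above.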
 
Acert93 said:
I don't think there is an "if" about it.

If part X does task A, B, and C--where C is the most intensive task

You cannot compare it directly to Y

If part Y does task A, B and isolates intensive task C elsewhere

PS3 uses its external main memory bandwidth differently than the Xbox 360.

I think the entire long series of debates has been an issue of semantics. "Bandwidth" is a general term that relates to a HW reality, used to discuss a real-world limitation. When someone comes up with a unique idea that skirts the limitations of this "bandwidth", it is no longer a feasible comparison, because while people may be talking about the same part, these parts do not do the same tasks or have the same limitations.

Imagine if the Rev has a fast vector coprocessor for particles and physics that did 500GFLOPs. Even if the CPU only did, say, 30GFLOPs, it would not be fair to compare Xenon or CELL directly because the Rev is offloading a lot of intensive tasks to a dedicated chip.

Basically that is the situation with the Xbox 360, but more so, since every game uses a lot of backbuffer bandwidth.

The comparison of main system bandwidth is an antiquated metric based on the assumption that both systems use main memory in a similar fashion. They don't, so in regard to GPUs and their limitations it is not a relevant comparison.

How the bandwidth savings from the eDRAM affect the end product, well, we will have to wait and see. We do know one benefit though: basically free 4x AA without sucking up main memory bandwidth. The PS3 will be using main memory bandwidth in such situations, so the question is: which is greater:

The extra system bandwidth on PS3

or

The bandwidth savings on Xenos from the eDRAM

I am sure most of us do not know the answer to that, and I suspect it will be a game-by-game issue. Games with HDR, lots of AA, etc. would tend to favor the savings approach, it would seem (although Xenos seems to be FP10-oriented, so that comes into play also).

Of course, that's all exactly my point. Before any comparison could be made wrt bandwidth to the GPUs, we'd need to know two things - framebuffer op bandwidth usage and CPU bandwidth usage. And then work it out :) It's a complicated picture to be sure..(more complicated still if you consider the potential for framebuffer op bandwidth savings by using Cell too..there's potentially much to account for on both systems).
 
The G70 is not simply an NV40 with another 8 of the same pixel pipelines duplicated. There have actually been changes to the pixel shader ALUs and to the ROPs.
 
What are the chances that the RSX includes the hardware video decoder that was eating up ~20 million transistors in the 6800 series?

Chances are the RSX doesn't have it and the G70 does. So there is some extra budget there to add some tricks to the bag (albeit not huge tricks); it seems to me that the RSX is slightly modified beyond just being higher clocked.
 
therealskywolf said:
Well, we know that the RSX does 128-bit HDR and works at a higher clock; the G70 is supposed to only have 64-bit HDR, right?

Seeing how the transistor counts are basically the same (you know the number doesn't go up much when the company says "over 300M transistors"), I guess that the other MAIN difference is really all about Cell interaction.

The MAIN difference between the two is clockspeed: 550 MHz for RSX vs. 430 MHz for the 7800. Another big difference is that the PS3 will use 128-bit DDR memory, while PC video cards use 256-bit memory. Everything beyond that seems to be unremarkable differences in implementation details.
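For reference, the quoted bandwidth figures fall straight out of bus width times effective data rate (the 1.4GHz-effective GDDR3 rate for PS3 and 1.2GHz-effective for the 7800 GTX are the commonly cited numbers, taken here as assumptions):

```python
# Peak memory bandwidth = bus width (in bytes) x effective data rate.

def peak_bandwidth_gb(bus_bits, effective_mhz):
    """Theoretical peak bandwidth in GB/s."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(peak_bandwidth_gb(128, 1400))  # 22.4 GB/s -- the RSX GDDR3 figure
print(peak_bandwidth_gb(256, 1200))  # 38.4 GB/s -- the 7800 GTX figure
```

This is why the 128-bit vs. 256-bit distinction matters more than it might look: at equal memory clocks, halving the bus width halves the peak bandwidth.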
 
PC-Engine said:
digitalwanderer said:
PC-Engine said:
Yes and it dumps heat back into your PC case. :LOL:
Yes, like almost every other component in your PC case. That's why so many of us invest time and effort to ensure we have excellent airflow in our PC cases to vent the hot air that is produced.

It's quite a novel concept, I'm surprised you haven't heard of it.

Why even have a fan on your graphics card if it just dumps the heat back into the case?

That's like pumping your exhaust from your car back into your intake. :LOL:

I guess you've never heard or seen of this mysterious alien technology either. :LOL:

[image: card1.jpg]

You might have a point if you find a single slot card that dumps heat outside the case.
 
Acert93 said:
Which is greater:

The extra system bandwidth on PS3

or

The bandwidth savings on Xenos from the eDRAM

I am sure most of us do not know the answer to that, and I suspect it will be a game-by-game issue. Games with HDR, lots of AA, etc. would tend to favor the savings approach, it would seem (although Xenos seems to be FP10-oriented, so that comes into play also).
Yes, that is the question. Has anyone at ATI given a bandwidth-savings percentage?

The hardware article states:
The eDRAM is always going to be the primary location for any of the bandwidth intensive frame buffer operations and so it is specifically designed to remove the frame buffer memory bandwidth bottleneck - additionally, Z and colour access patterns tend not to be particularly optimal for traditional DRAM controllers where there are frequent read/write penalties, so by placing all of these operations in the eDRAM daughter die, aside from the system calls, this leaves the system memory bus free for texture and vertex data fetches which are both read only and are therefore highly efficient.

If my calcs are correct, the GPU<=>UMA bus can only move about 187MB per frame at 60fps given its 22.4GB/s bandwidth, so is this a potential bottleneck if you have to move the entire 512MB of RAM to the GPU for texture and vertex data fetches?
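The per-frame arithmetic works out as follows. The raw number is about 373MB per frame; the 187MB figure above would correspond to reserving roughly half the bus for other traffic, which is my guess at the calculation, not something stated:

```python
# Per-frame transfer budget implied by a fixed bus and frame rate.

def mb_per_frame(bus_gb_per_s, fps):
    """Decimal megabytes transferable per frame at a given frame rate."""
    return bus_gb_per_s * 1000 / fps

print(mb_per_frame(22.4, 60))  # ~373 MB per frame at 60fps
```

Either way, the conclusion stands: streaming anything close to the full 512MB through the GPU every frame is far beyond the budget, so the per-frame working set has to be much smaller.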
 
BOOMEXPLODE said:
therealskywolf said:
Well, we know that the RSX does 128-bit HDR and works at a higher clock; the G70 is supposed to only have 64-bit HDR, right?

Seeing how the transistor counts are basically the same (you know the number doesn't go up much when the company says "over 300M transistors"), I guess that the other MAIN difference is really all about Cell interaction.

The MAIN difference between the two is clockspeed: 550 MHz for RSX vs. 430 MHz for the 7800. Another big difference is that the PS3 will use 128-bit DDR memory, while PC video cards use 256-bit memory. Everything beyond that seems to be unremarkable differences in implementation details.

Uh, not saying you're wrong, but where does this sentence come from:

"Everything beyond that seems to be unremarkable differences in implementation details."

How can something 'seem to be' when we haven't seen it at all yet? Good points have been made as to the probable absence of PureVideo and the transistors thus freed up for other purposes, and the fact that at least some provisions will have to be made for Cell and the FlexIO. At this point I agree that the transistor count similarities have me thinking the difference isn't too great, but at the same time, there are extremes of thought in the other direction as well.
 
http://www.penstarsys.com/editor/company/nvidia/g70_spec/

Now for the confusion. Earlier this year at a conference call with Jen-Hsun and the gang, it was stated that the first 90 nm parts were going to be introduced this Fall. Now we are hearing something different. At the J.P. Morgan conference, Marv Burkett clearly stated that the first 90 nm part will be introduced this quarter (which definitely cannot be characterized as "Fall"), and that all "large" products will be 90 nm from here on out. This suggests, in very strong language, that the G70 will be 90 nm (as it has not been released as of yet, and it is a large part). So, was the leak last week legitimate? If Marv really meant what he said, then no, the G70 will not be a 110 nm part.

Saw this at another forum.
 
Now that IS confusing. Still, if the Inquirer were ever to pick up on something, you'd think it'd be a rumor that the G70 was a 90nm part. It just seems like there'd be at least one corroborating piece of information out there if this really were the case just six days before its unveiling.
 
I think another important point is whether any of the consoles use virtual memory; it would reduce the need for bandwidth.

Xenos looks to have some nice tricks in memory (as DeanoC hinted somewhere, and MemExport), but I don't know if it really has VM; on the other hand, the GameCube already has VM.

IMO, I don't see any reason for the ATI parts not to have this (even the GC has it), but for PS3/NV I doubt there is VM.
 
xbdestroya said:
Now that IS confusing. Still, if the Inquirer were ever to pick up on something, you'd think it'd be a rumor that the G70 was a 90nm part. It just seems like there'd be at least one corroborating piece of information out there if this really were the case just six days before its unveiling.

Sometimes they put middle/low-end parts on a newer process first; I think they did that with the 6600/6200.
 
pc999 said:
I think another important point is whether any of the consoles use virtual memory; it would reduce the need for bandwidth.

Xenos looks to have some nice tricks in memory (as DeanoC hinted somewhere, and MemExport), but I don't know if it really has VM; on the other hand, the GameCube already has VM.

IMO, I don't see any reason for the ATI parts not to have this (even the GC has it), but for PS3/NV I doubt there is VM.

OK, you've stumped me. What exactly do you perceive as the advantage of virtual memory in this context?
 
pc999 said:
Sometimes they put middle/low-end parts on a newer process first; I think they did that with the 6600/6200.

I think that is how a lot of companies do it (I believe the Celerons were some of the first to show up at 90nm, along with the lower clock bins for A64s). The lower-speed parts tend not to stress the fabrication process as much, it seems -- they are far more forgiving.

It's possible Marv Burkett was talking about a 7600/7200 part maybe?
 
pc999 said:
Sometimes they put middle/low-end parts on a newer process first; I think they did that with the 6600/6200.

This is true, and what I would normally expect - it's just that that news clip from the analysts' meeting seems to be in direct contravention of the usual NVidia mentality. They were talking about all 'big chips.' NV43- and NV44-derivative chips are exactly the kind one would normally consider NOT to be big.
 
xbdestroya said:
BOOMEXPLODE said:
The MAIN difference between the two is clockspeed: 550 MHz for RSX vs. 430 MHz for the 7800. Another big difference is that the PS3 will use 128-bit DDR memory, while PC video cards use 256-bit memory. Everything beyond that seems to be unremarkable differences in implementation details.

Uh, not saying you're wrong, but where does this sentence come from:

"Everything beyond that seems to be unremarkable differences in implementation details."

How can something 'seem to be' when we haven't seen it at all yet? Good points have been made as to the probable absence of PureVideo and the transistors thus freed up for other purposes, and the fact that at least some provisions will have to be made for Cell and the FlexIO. At this point I agree that the transistor count similarities have me thinking the difference isn't too great, but at the same time, there are extremes of thought in the other direction as well.

Because if you HAD seen it all, it would either 'be' or 'not be' rather than 'seem to be'. Based on what's been seen, RSX is a faster-clocked 7800 with its external interfaces redesigned to work in the PS3's design. There is a possibility that the 7800 is a 32-"pipeline" part with only 24 pipes enabled, and that RSX will have all 32 enabled. For me that's just speculation though. It's interesting that Sony didn't reveal fill-rate when they announced the PS3, like they did with the PS2. Maybe this is something that's still up in the air.
 
BOOMEXPLODE said:
Because if you HAD seen it all, it would either 'be' or 'not be' rather than 'seem to be'. Based on what's been seen, RSX is a faster-clocked 7800 with its external interfaces redesigned to work in the PS3's design. There is a possibility that the 7800 is a 32-"pipeline" part with only 24 pipes enabled, and that RSX will have all 32 enabled. For me that's just speculation though. It's interesting that Sony didn't reveal fill-rate when they announced the PS3, like they did with the PS2. Maybe this is something that's still up in the air.

Wait, the 24-to-32-pipe thing is all speculation to you, but the rest of it isn't? You say "based on what's been seen" - but nothing has been seen yet of RSX. I agree, as I have already stated, that the 300 million transistor figure is eerily similar between the two chips, but that being said, I still take issue with the ease with which you make sweeping statements based on little more than a transistor count.
 
The demos that were running at E3 were apparently mainly running on SLI machines, as well as G70 parts. Marv talked about how these demos were run on an upcoming product with many capabilities similar to the RSX chip. So, while the RSX will have more features aimed at the PS3, we can expect this next generation of cards to nearly match the overall performance and feature set of the RSX.
This is also interesting. The question still remains: what "special features" will the RSX have over the G70, since they are making it sound like the same card, higher clocked?
 
ralexand said:
The demos that were running at E3 were apparently mainly running on SLI machines, as well as G70 parts. Marv talked about how these demos were run on an upcoming product with many capabilities similar to the RSX chip. So, while the RSX will have more features aimed at the PS3, we can expect this next generation of cards to nearly match the overall performance and feature set of the RSX.
This is also interesting. The question still remains: what "special features" will the RSX have over the G70, since they are making it sound like the same card, higher clocked?

I just hope that the RSX isn't just some Ultra version of the G70 :?

I'd like the RSX to have at least some unique features, but I'll be happy with a great GPU that can hold its own.
 