Is the PS3 on track to deliver full specs?

If there's one thing that Mintmaster's banged into my head, it's that AF isn't "extremely bandwidth intensive," as the Wikipedia author states, but clock intensive--at least on consumer GPUs with a finite number of texture samplers. A Radeon X1900, for instance, can only give you 16 bilinear filtered samples per clock. 16x AF would require 16x as many samples, but you're not getting those in the same clock (which would indeed be bandwidth intensive: 16x more so), but rather over 16x more clocks. So AF doesn't require more bandwidth per clock, just more clocks to gather the desired samples. The time spent waiting for the extra AF samples can be offset by increasing pixel shader complexity, so the rest of the GPU is kept usefully busy in the meantime. Or, to think of it another way, more shader work makes crunching math the bottleneck, making AF close to "free" on otherwise idle texture units.
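To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The figures are X1900-ish but the whole thing is an illustration, not a statement of how the hardware actually schedules anything:

```python
# Sketch of why AF costs clocks, not per-clock bandwidth. Numbers are
# X1900-style but purely illustrative.
TMU_SAMPLES_PER_CLOCK = 16  # bilinear samples the GPU can filter per clock
PIXELS_PER_BATCH = 16       # one pixel per sampler, for simplicity

def clocks_for_filtering(aniso_level):
    """Each pixel needs `aniso_level` bilinear probes, so a 16-pixel batch
    ties up the samplers for `aniso_level` clocks instead of one."""
    samples_needed = PIXELS_PER_BATCH * aniso_level
    return samples_needed // TMU_SAMPLES_PER_CLOCK

for af in (1, 2, 4, 8, 16):
    print(f"{af:2d}x AF: {clocks_for_filtering(af):2d} clocks per 16-pixel batch")

# Per-clock texel traffic is unchanged; the cost shows up as extra clocks.
# If the shader also has ~16 clocks of ALU math per batch, the AF probes
# hide behind it and the filtering looks close to "free".
```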

And, yeah, realizing that 16x AF requires 16x more clocks helps explain why ATI and NV are so big on "adaptive" AF implementations: they speed things up by not applying full AF to every single texture when it's forced via the drivers (rather than specified per-texture by the game).
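To see where the adaptive part kicks in: the probe count per pixel is usually derived from the screen-space texture-coordinate derivatives, roughly in the spirit of the EXT_texture_filter_anisotropic formulation. Here's a little sketch of my own (not any IHV's actual algorithm) showing why a head-on wall is cheap while a grazing floor is expensive:

```python
import math

def aniso_probes(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Estimate bilinear probes per pixel from the pixel footprint in
    texture space: probes scale with the major/minor axis ratio,
    clamped to the requested maximum."""
    len_x = math.hypot(dudx, dvdx)  # footprint extent along screen x
    len_y = math.hypot(dudy, dvdy)  # footprint extent along screen y
    major = max(len_x, len_y)
    minor = max(min(len_x, len_y), 1e-6)  # avoid divide-by-zero
    return min(max_aniso, max(1, math.ceil(major / minor)))

print(aniso_probes(1.0, 0.0, 0.0, 1.0))  # wall seen head-on: 1 probe
print(aniso_probes(8.0, 0.0, 0.0, 1.0))  # floor at a grazing angle: 8 probes
```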

Well, that's an interesting perspective, tho I'd still wonder if we've just pushed the "bandwidth limitation" up a level of abstraction and only given a false appearance of taking it out of the equation by baking it into the design in the first place.
 
Got a question. If the RSX DOES lose 50MHz, will it affect real-world performance in any way?

Of course it affects the system performance. If you hand out some extra GFLOPS, the devs would surely take advantage of them. How much difference 50 MHz makes is probably almost impossible to say; you can afford slightly more advanced shader programs (say, 5% more instructions) while maintaining the same frame rate. In some cases those extra instructions may make a noticeable difference; in other cases you won't notice them at all.
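Quick arithmetic to frame it. The 550 vs. 500 MHz figures below stand in for the rumored 50 MHz delta; they're placeholders, not confirmed RSX specs:

```python
# Rough arithmetic on what a clock bump buys at a fixed frame rate.
TARGET_FPS = 30  # assumed frame-rate target

for clock_mhz in (500, 550):
    cycles_per_frame = clock_mhz * 1_000_000 / TARGET_FPS
    print(f"{clock_mhz} MHz -> {cycles_per_frame / 1e6:.2f}M cycles per frame")

# 550/500 = 1.10, so ~10% more cycles per frame at the same frame rate.
# How much of that turns into longer shader programs (5% or otherwise)
# depends on the base clock and on how much of the frame is shader-bound.
```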
 
I don't think it's going to be much of a big deal, really. However, it makes sense that they keep quiet about it (if it's true at all), because even though it might not bring a 5% overall performance difference, it could cause a big fuss over the net.
 
Well, that's an interesting perspective, tho I'd still wonder if we've just pushed the "bandwidth limitation" up a level of abstraction and only given a false appearance of taking it out of the equation by baking it into the design in the first place.
geo, you've stumped me. Could you dumb that down a bit? :smile:

If you're saying that there's still a bandwidth limitation thanks to AF, I may agree with you after rereading what I wrote. If the texture units share the memory bus with other clients, then AF causing them to hog bandwidth for many more cycles per frame may indeed starve other bandwidth-hungry parts of the GPU. Or, if the GPU is designed for typical workloads that leave enough bandwidth for both the texture units and, say, the ROPs averaged over a frame (even if one dominates in any given cycle), then maybe enabling AF deprives the ROPs of bandwidth because the texture units just don't let up over the course of the frame.
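A toy model of that frame-level contention, with every number invented purely for illustration:

```python
# Nothing here is a real GPU spec; it just shows the squeeze mechanism.
MEM_BW_GBPS = 48.0    # total memory bandwidth
TEX_BW_NO_AF = 20.0   # average GB/s the texture units draw without AF
ROP_BW_NEEDED = 25.0  # GB/s the ROPs want for color/z traffic

def rop_headroom(tmu_extra_busy):
    """If AF keeps the TMUs issuing fetches for a larger fraction of the
    frame, their average bandwidth draw rises (even though per-clock
    texel traffic is flat), and the ROPs get squeezed."""
    tex_bw = TEX_BW_NO_AF * (1.0 + tmu_extra_busy)
    return MEM_BW_GBPS - tex_bw - ROP_BW_NEEDED

print(rop_headroom(0.0))  # no AF: 3.0 GB/s of slack for everyone else
print(rop_headroom(0.5))  # TMUs busy 50% more of the frame: -7.0, ROPs starve
```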
 
Well, that's an interesting perspective, tho I'd still wonder if we've just pushed the "bandwidth limitation" up a level of abstraction and only given a false appearance of taking it out of the equation by baking it into the design in the first place.

I'm probably gonna sound stupid for saying this, but it reminds me a lot of Intel's whole net burst thing.
 
Of course it affects the system performance. If you hand out some extra GFLOPS, the devs would surely take advantage of them.
Considering - according to our sources - PS3 has never ACTUALLY been downgraded except on a spec sheet, it's not really appropriate to say it affects system performance. 650MHz is as fast as RSX's ever been in reality it seems, so we haven't really lost anything. :p
 
geo, you've stumped me. Could you dumb that down a bit? :smile:

Well, I'm just wondering out loud if the number of bilinearly filtered samples per clock is chosen based on bandwidth limitations in the first place, and then they "just make lemonade" with the limitation by getting some other work done while waiting... And, actually, now that I re-read what you wrote, you do in fact suggest the possibility... The question, maybe, is whether there's a comfy spot above today and before "single-cycle 16x AF" (WHEEEEE!)
 
Considering - according to our sources - PS3 has never ACTUALLY been downgraded except on a spec sheet, it's not really appropriate to say it affects system performance. 650MHz is as fast as RSX's ever been in reality it seems, so we haven't really lost anything. :p

?

I think you mean 430MHz is as fast as it's been so far in devkit land.
 
Bah, PS3 is sooooooooo old now. lol. I'm bored of it. Can we move on to PS4 specs?
What are STI up to, 5 and a half years after the announcement of CELL--actually, after 6+ years of development? What is Nvidia's furthest-out architecture now being devised? What is Rambus doing now? I want those early PS4 specs now please :)
 
geo, I dunno, single-cycle AF--even single-cycle trilinear--seems to be going in the opposite direction of recent GPUs (Xenos added 16 point-sampled texture units), and I keep reading here that IHVs are going to prefer lots of the most general-case units to a few specialized ones. Plus, there's the X1600XT with 600MHz 128-bit DDR and just 4 bilinear TMUs, and the X1800/1900XT with "just" 750MHz 256-bit DDR and a relatively whopping 16 TMUs (plus, R580's feeding 3x as many fragment shader units as R520). NV's G70/71, G73, and RSX are even further out on the TMU/bandwidth spectrum.
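Just to put rough numbers on that spectrum: TMU counts and memory clocks below are from the post, but the core clocks and the cache assumption (~1 new 4-byte texel per bilinear sample) are my own illustrative guesses, so take the ratios, not the absolutes:

```python
# Rough ratio check of texel-fetch demand vs. available memory bandwidth.
parts = {
    # name: (bilinear TMUs, assumed core MHz, mem MHz, bus width in bits)
    "X1600XT": (4, 590, 600, 128),
    "X1900XT": (16, 625, 750, 256),
}

for name, (tmus, core_mhz, mem_mhz, bus_bits) in parts.items():
    texel_demand = tmus * core_mhz * 1e6 * 4 / 1e9       # GB/s of new texels
    mem_bw = mem_mhz * 1e6 * 2 * (bus_bits // 8) / 1e9   # GB/s, DDR
    print(f"{name}: ~{texel_demand:.1f} GB/s texel demand vs "
          f"~{mem_bw:.1f} GB/s memory bandwidth")
```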

But, yeah, at some point you'd think better default filtering would become part of the enthusiast-class GPU feature set, and if including higher-level TMUs doesn't mess with whatever algorithms the IHVs have in place for the whole product line, I'm not sure B3D regulars would complain. :)
 
Bah, PS3 is sooooooooo old now. lol. I'm bored of it. Can we move on to PS4 specs?
What are STI up to, 5 and a half years after the announcement of CELL--actually, after 6+ years of development? What is Nvidia's furthest-out architecture now being devised? What is Rambus doing now? I want those early PS4 specs now please :)


Enjoy things while you can. You could be dead by the time PS4 arrives....
 
Hmmmm

Welp, didn't want to create a new thread, but I've heard from a very reliable source (the same guy who broke the 96MB OS story) that three whole SPEs are reserved for the OS. Couple that with 1 SPE being used for redundancy, and that leaves only 4 SPEs for games.
 
Welp, didn't want to create a new thread, but I've heard from a very reliable source (the same guy who broke the 96MB OS story)

I'm getting a weird sense of déjà vu.

http://www.beyond3d.com/forum/showpost.php?p=774357&postcount=103
http://www.beyond3d.com/forum/showpost.php?p=774399&postcount=106
http://www.beyond3d.com/forum/showpost.php?p=774520&postcount=111

His other crap was apparently pruned from that thread. IIRC he basically flamed the posters calling him out and swore up and down to the forum that he'd be proven right in the end. *ugh*

I guess we'll see.
 
I'm getting a weird sense of déjà vu.

http://www.beyond3d.com/forum/showpost.php?p=774357&postcount=103
http://www.beyond3d.com/forum/showpost.php?p=774399&postcount=106
http://www.beyond3d.com/forum/showpost.php?p=774520&postcount=111

His other crap was apparently pruned from that thread. IIRC he basically flamed the posters calling him out and swore up and down to the forum that he'd be proven right in the end. *ugh*

I guess we'll see.
Links posted have all been edited. They now show ...
 
Oh my god, you're right. He pruned them just now, at this very moment! That's just pathetic...

Anyway, I never saw them; Liver Kick, do you still have their text cached?
 