Is the PS3 on track to deliver full specs?

No, because it would prevent existing users from using the boost. The PSP's 'overdrive' mode can be applied to first-gen PSPs; it just gobbles the battery quickly. A PS3 overdrive would need a better cooling method. Or do you mean that in 3-4 years' time you could run a game on your first-gen PS3 and it would up the fan speed, giving better performance but with more noise?
Yes, I am sure the fans will be controlled by temperature anyhow, so they would just spin faster with some added noise on old consoles; the new die-shrunk ones may not need that.

EDIT: Another possibility is of course that the cooling system is dimensioned for the higher clock speed right from the start, so the higher clocks would not normally increase the noise level.
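Something like this purely illustrative sketch of a linear fan curve (my own toy function, nothing to do with Sony's actual firmware): the controller just maps die temperature to a fan duty cycle.

/* Purely illustrative sketch of a linear fan curve - not Sony's firmware. */
unsigned fan_duty_percent(int die_temp_c)
{
    const int temp_min = 45;        /* below this: fan at its floor speed */
    const int temp_max = 75;        /* at or above this: fan flat out     */
    const unsigned duty_min = 30;
    const unsigned duty_max = 100;

    if (die_temp_c <= temp_min) return duty_min;
    if (die_temp_c >= temp_max) return duty_max;

    /* linear interpolation between the two end points */
    return duty_min + (duty_max - duty_min) * (unsigned)(die_temp_c - temp_min)
                      / (unsigned)(temp_max - temp_min);
}

A higher clock just pushes the steady-state temperature further up the same curve, hence more noise on the older chips, while a die-shrunk revision may never leave the low end of it.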
 

If the fans/cooling system can already handle it, there is no need for it to be downclocked, though. That doesn't make much sense.
 
Well, it depends on what need you may have. Maybe Sony calculates that 500 MHz is enough to get a sufficient edge over the 360 for the first/second (edit: X:th) generation of games. But when they deem the time is right, perhaps when the visual increments between games start to decrease (independent of the 360), they let the developers have some more graphics horsepower to play with and thereby kind of artificially extend the console's life cycle. I mean, if Sony throttles down the console at the start to gain some control over how the games develop during the console's life cycle, they can give it some extra throttle when they see the need.

Whether that makes sense or not, I leave to more business-minded people to judge. ;)
 


To be honest that would be idiotic, but I welcome your idea :)
 
IIRC EE was boosted (+50 MHz), not GS

Well, if you compare it with its IEEE variant: the IEEE Cell was 4 GHz and most likely consumes twice the power of the 3.2 GHz variant in the PS3. So it already got a massive downgrade in clock :)

I am still puzzled, though, why they would decrease it by 50 MHz; it doesn't account for much heat, and 50 MHz is still within modest OC range. If they do decrease the frequency, I suspect it's more to do with yield than with thermal envelope. Looking at the massive size of the PS3 case, they could fit a Cell and 2 RSXes and still be alright.

BTW nAo, if Sony were to put a second RSX with its own memory pool in the PS3, what would most devs like yourself do? Upgrade to 1080 or stick with 720 and throw in even longer shaders?
 

No no, you're looking at it the wrong way; if anything, the very overclockability would suggest that the chips that yield can do 550. But it's the voltage that I feel must be the key: for the voltage they want to target, hitting 550 consistently must be too difficult at this time. So you either up the voltage and throw all the thermals off, or you ramp the speed down to the next lowest 'consistent' yield at that voltage.
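For the thermals side of that trade-off, the usual first-order rule of thumb (very rough, and not any RSX-specific figures) is

P_dyn ≈ a · C · V^2 · f    (a = activity factor, C = switched capacitance, V = supply voltage, f = clock)

so dropping the clock at a fixed voltage only buys a roughly linear saving (550 -> 500 MHz is about 9%), while reaching the higher clock by raising the voltage costs quadratically on top of the frequency term. The same rule is consistent with the 'twice the power' comparison above for the 4 GHz versus 3.2 GHz Cell once voltage scaling is folded in.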
 
This is way, way off topic, but still I need a little help here.

I have seen that after looking at some screenshots of a new game, you can almost instantly identify whether the game has AF or not. Well, I searched for AF on Wikipedia and the article there is not clear enough. Even though I saw the image comparison, I still can't tell when a game has it or not.

Could someone explain AF to me in a simpler fashion? A screen comparison would be greatly appreciated.

And sorry for the off-topic.
 
Let us look at history: the official clock frequency for PSP games was downclocked from the previously announced clock speed quite close to the launch of the PSP, in order to save the batteries.

I'm not sure about this ... what do you mean precisely? I mean, I know from experience that the PSP's processor speed can be changed to anything from 1 (yes, really) to something like 370 MHz and anything in between (though 333 is the 'official' limit, I think).

But as far as I know, all games and the system itself run at 222 by default, as said, to save batteries. In the homebrew scene, though, a lot more is possible, and certain text readers even have a mode setting it to 5, IIRC.

So if they announced the specs for the PSP as 333 but then said that games would generally run at 222, that's different from saying that games would run at 333 and then changing that to 222.

Oh well, too off topic anyway.
 
What I meant was that Sony set the 222 MHz restriction, with the batteries in mind, even though the hardware is capable of higher speeds. My point was that Sony may remove that restriction for games sometime in the future.
 
nAo said:
IIRC EE was boosted (+50 MHz), not GS
Actually, the initial specs were 250/125, so everything in the system received a 20% boost.
That said, from a developer perspective the boost was more like 50%, since launch software was developed on 200/100 hardware for a large part of the time.
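(For reference, the shipping clocks ended up at roughly 295/147 MHz, which is about 250 x 1.18 and 125 x 1.18 - the ~20% figure - while 295/200 ≈ 1.47 is where the 'more like 50%' comes from.)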

Fun thing - the launch EE was rated at 18 watts, and people thought that was a hot chip? :p

Arwin said:
In the homebrew scene though there is a lot more possible
There is no 222 MHz restriction in homebrew, but other than that there's nothing 'more'. Retail games are perfectly free to alter the frequency any way they like - even from one frame to the next. It can be a nice battery-saving feature to run your menus, for example, at a lower clock.
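In homebrew terms it's basically one call (this uses the pspsdk naming; the official SDK's API is presumably different, and the exact MHz values here are just illustrative):

#include <psppower.h>

/* Sketch only: homebrew pspsdk call, illustrative values. */
void on_enter_menu(void)
{
    /* Menus don't need much CPU, so drop the core clock to stretch the battery. */
    scePowerSetCpuClockFrequency(111);
}

void on_enter_gameplay(void)
{
    /* Back up to the usual 222 MHz default for the game loop. */
    scePowerSetCpuClockFrequency(222);
}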
 
Could someone explain AF to me in a simpler fashion? A screen comparison would be greatly appreciated.
Kristof could. You can also look back to B3D's GF FX or Radeon 9x00 reviews to see varying levels of AF in Serious Sam screenshots.

Here's how I understand it. (Note that my understanding may differ from the truth, but I bang these replies out once in a while to save others from replying and, more [self-]importantly, to leave myself open to correction. :oops: ) It's to enhance texture clarity (read: reduce blur) on surfaces that aren't perpendicular (Kristof's ideal "Type 1") to your in-game viewpoint (most noticeable on floors and walls that slope away from you, because they tend to be regular/flat surfaces that use regular/repeating textures). AFAIK, it means more texture samples taken in one axis (isotropic means the same in all axes, hence anisotropic means not the same).

That image in the Wikipedia article is about as clear as it gets. You can see that the ground texture is noticeably sharper with AF than without, and that's because AF takes more texture samples in the axis that needs it. Look at Kristof's pics and realize that in the x-axis, the runway texture is close to Type 1, whereas as you look further along the y-axis, the runway texture is Type 3. If you isotropically sample both axes--that is, take the same # of texture samples for both x and y--what's going to happen with the y-axis is that the further away from the camera you get, the lower the ratio of texture pixels to screen pixels. At some point, you cross under the Nyquist limit and the textures begin to look blurry onscreen. You use AF to counter this, to raise the texel-to-pixel ratio back up to the Nyquist limit.
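Roughly, the hardware estimates how stretched the pixel's footprint is in texture space and takes that many extra bilinear probes along the long axis. A toy sketch of the idea (my own function, not any actual GPU's method):

#include <math.h>

/* Toy approximation: look at how fast the texture coordinates (u,v) change per
   screen pixel in x and y, take the ratio of the long to the short footprint
   axis, and spend that many bilinear probes along the long axis, clamped to the
   max AF level. Real hardware uses cheaper approximations of the same idea. */
float aniso_probe_count(float dudx, float dvdx, float dudy, float dvdy, float max_aniso)
{
    float len_x = sqrtf(dudx * dudx + dvdx * dvdx);   /* footprint extent along screen x */
    float len_y = sqrtf(dudy * dudy + dvdy * dvdy);   /* footprint extent along screen y */
    float major = fmaxf(len_x, len_y);
    float minor = fminf(len_x, len_y);
    float ratio = (minor > 0.0f) ? (major / minor) : max_aniso;

    /* e.g. a floor texture seen at a grazing angle gives a big ratio -> many probes */
    return ceilf(fminf(ratio, max_aniso));
}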

If there's one thing that Mintmaster's banged into my head, it's that AF isn't "extremely bandwidth intensive," as the Wikipedia author states, but clock intensive--at least on consumer GPUs with a finite # of texture samplers. A Radeon X1900, for instance, can only give you 16 bilinear filtered samples per clock. 16x AF would require 16x as many samples, but you're not getting those in the same clock (which would indeed be bandwidth intensive: 16x more so), but rather over 16x more clocks. So AF doesn't require more bandwidth per clock, just more clocks to gather the desired samples. The time spent waiting for the extra AF samples can be offset by increasing pixel shader complexity, so the rest of the GPU is kept usefully busy in the meantime. Or, to think of it another way, more shader work makes crunching math the bottleneck, making AF close to "free" on otherwise idle texture units.

And, yeah, realizing that 16x AF requires 16x more clocks helps you see why ATI and NV are so big on "adaptive" AF implementations, to speed things up by not applying AF to every single texture when it's forced via the drivers (rather than specified per-texture by the game).
 
Retail games are perfectly free to alter the frequency any way they like - even from one frame to the next. It can be a nice battery-saving feature to run your menus, for example, at a lower clock.

Yes, good point. In that case I was referring more to the over-222, and particularly over-333, settings.
 
Well, if you compare it with its IEEE variant: the IEEE Cell was 4 GHz and most likely consumes twice the power of the 3.2 GHz variant in the PS3. So it already got a massive downgrade in clock :)

I am still puzzled, though, why they would decrease it by 50 MHz; it doesn't account for much heat, and 50 MHz is still within modest OC range. If they do decrease the frequency, I suspect it's more to do with yield than with thermal envelope. Looking at the massive size of the PS3 case, they could fit a Cell and 2 RSXes and still be alright.

BTW nAo, if Sony were to put a second RSX with its own memory pool in the PS3, what would most devs like yourself do? Upgrade to 1080 or stick with 720 and throw in even longer shaders?


I think you and several other people here are looking at the wrong people though.

RSX is Nvidia's baby, and Nvidia has a LONG history of having to reduce clock and memory speeds on products right before they ship, usually due to poor yields at the originally stated clock speed.

The Xbox GPU (Nvidia NV2A) took a 66 MHz decrease in clock speed before it finally shipped. In fact, I'm having a hard time remembering the last Nvidia GPU that actually shipped at the originally expected clock rate.
 