So again I ask: where do PS3-level graphics start?
Well, there's Angry Birds for PS3...
On another note, is there anything else out now that uses this type of CPU/GPU combination, so we can look at it to see the results of having games programmed for a quad-core CPU / quad-core GPU configuration?
Yep, I'd go as far as stating the NGP has more power per screen pixel for 3D graphics than the PS3 has for 1080p rendering. I don't get why so many people seem so shocked by the press statements.
But AFAIK most games are rendered at 720p (or lower) and then upscaled, so it's a tricky issue.
As for general processing for A.I., sound, physics and other stuff, that'd be a whole other story.. unless they eventually do GPGPU in the system (but then, that could hamper the 3D performance).
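To put rough numbers behind the per-pixel argument, here's a quick back-of-the-envelope pixel-count comparison (resolutions only - it deliberately says nothing about the actual GPU throughput of either machine):

```python
# Pixel counts behind the "power per pixel" argument.
# Resolutions only -- no assumptions about actual GPU throughput.

resolutions = {
    "NGP screen (960x544)": 960 * 544,
    "PS3 @ 720p (1280x720)": 1280 * 720,
    "PS3 @ 1080p (1920x1080)": 1920 * 1080,
}

ngp_pixels = resolutions["NGP screen (960x544)"]
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels ({pixels / ngp_pixels:.2f}x the NGP screen)")

# ~522k pixels per frame on the NGP vs ~922k at 720p and ~2,074k at
# 1080p, so whatever GPU power the NGP has gets spread over roughly
# half to a quarter as many pixels.
```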
High-end smartphones with quad A9s may never come out at all, since the A15 devices may come out before there's a market demand for quad A9s.
lol, funny - when I said that I kinda knew someone would jump through that loophole,
but really the games look just as good as the PS3 & 360 games that came out when those consoles were first released, & it's a handheld, so to me that's PS3-level graphics on a handheld
____________
people are saying that the 3DS has GameCube-level graphics (which is true), but I don't see anyone saying that it's not GameCube-level graphics because the 3DS games are 400x240 & not 640x480 like the GameCube games,
no one is saying that the NGP has the same power as the PS3; they are saying that the games look like PS3 games on a handheld,
& I think with the tech that's in this handheld, with more RAM & not needing to push as many pixels as the PS3 does on a big screen, they can make some games that look better than some PS3 games when viewed on a 960x544 5-inch screen
when you look at Uncharted NGP you're looking at the early work of Sony Bend on new hardware that's almost a year away from release, not the work of Naughty Dog on hardware that they've been working with for a few years.
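For what it's worth, a rough pixel-density estimate (assuming the 5-inch 960x544 screen mentioned above; the other diagonals are purely illustrative assumptions) shows why the same pixel budget looks sharper on the handheld:

```python
import math

# Pixel density (PPI) sketch: identical pixel counts look much sharper
# on a small screen. The 5" NGP figure is from the discussion above;
# the other screen diagonals are illustrative assumptions.

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the screen diagonal."""
    return math.hypot(width_px, height_px) / diagonal_in

screens = [
    ("NGP, 5.0\" 960x544", 960, 544, 5.0),
    ("3DS top screen, ~3.5\" 400x240", 400, 240, 3.5),
    ("40\" 1080p TV (illustrative)", 1920, 1080, 40.0),
]

for name, w, h, d in screens:
    print(f"{name}: ~{ppi(w, h, d):.0f} PPI")

# ~220 PPI on the NGP vs ~55 PPI on a 40" living-room set: per-pixel
# shortcuts are simply much harder to spot at that density.
```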
"Dynamic distribution is a given, but it does not help the fact the resources allocated to a given tile of work will remain bound until the tile is ready - i.e. there's an implied coarse granularity of work distribution, and thus resource allocation onto tasks."

Frankly I still don't understand what you're trying to say. A second, more simplified attempt to explain my chain of thought: I don't see where the actual difference is, as long as the workload gets distributed over the cores in a dynamic fashion.
"Well, so far I've been referring to the non-MP setup as a hypothetical 'pool of resources' - ALUs, TMUs, ROPs, what have you. The moment we introduce some sort of topology to that 'GPU soup' the situation stops being so ideal and becomes bound by its own set of resource-allocation limitations, thread affinities, etc."

What's even more confusing for me as a layman is what the difference would actually be between a block of MP cores within a SoC and a GPU block within another SoC with multiple processing clusters (where each cluster has its own TMU block), especially if, in the first case, workload distribution is handled with hw assistance (not at a pure sw level as with other multi-core GPU configs) and the latter multi-cluster block also uses TBR.
Being a bit pedantic, but surely they are dangling a carrot.

"with a 28nm handheld, the graphics stick that Sony is dangling won't really exist"
I tried to understand your example - do you mind if I attempt to rephrase it?

"Let's have a heterogeneous workload. Task A is absolutely ALU-bound, and Task B - ultra-light on ALU, TMU-bound instead. Those two run concurrently on the MP setup. ..."
At least if DX11-specific performance optimizations aren't being used.
Which would be (honest question)?
It should be able to use the PVRTC-I and PVRTC-II 4bpp and 2bpp formats. Also, the artificial restriction on texture sizes that exists on certain devices (for backwards compatibility) probably won't exist.

"AFAIK, tessellation, new texture compression algorithms and SM5.0..."
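To put those bpp figures in perspective, here's a rough footprint comparison for a single 1024x1024 texture (mipmaps ignored to keep it simple):

```python
# Memory footprint of one 1024x1024 texture (no mipmaps), comparing
# uncompressed RGBA8 with PVRTC at 4bpp and 2bpp.

width = height = 1024
pixels = width * height

bits_per_pixel = {
    "RGBA8 (uncompressed, 32bpp)": 32,
    "PVRTC 4bpp": 4,
    "PVRTC 2bpp": 2,
}

for name, bpp in bits_per_pixel.items():
    size_kib = pixels * bpp / 8 / 1024
    print(f"{name}: {size_kib:,.0f} KiB")

# 4096 KiB uncompressed vs 512 KiB at 4bpp and 256 KiB at 2bpp -- an
# 8x / 16x saving in texture memory and bandwidth.
```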
Dynamic distribution is a given, but it does not help the fact the resources allocated to a given tile of work will remain bound until the tile is ready - i.e. there's an implied coarse granularity of work distribution, and thus resource allocation onto tasks.
In the most basic of examples - where you have the exact same, homogeneous workload spread across (1) a tiled MP setup, and (2) a non-tiled setup, you're right to not see a difference - as long as all resources remain 100% utilized, the sheer quantitative ratio between the total computational resources of (1) and (2) will give us the performance picture. The difference creeps in when you introduce a heterogeneous workload to said setups. Here's a simple 'bad use case' for the tiled MP setup:
Let's have a heterogeneous workload. Task A is absolutely ALU-bound, and Task B - ultra-light on ALU, TMU-bound instead. Those two run concurrently on the MP setup. Now, the TMUs of the tiles processing task A are twiddling their thumbs, while the tiles processing task B could use some extra TMUs, alas, they cannot, as those vacant TMUs have a certain thread affinity effective on them. On the other side of the spectrum, the non-tiled, common-pool-of-resources setup (aka #2) will exhibit a better utilization of its TMUs (clearly here we assume no hard-wiring of TMUs to ALUs). In the long run, such utilization bubbles in the MP setup will hardly be much more than statistical noise, but we can assume it'd not be impossible to create a deliberate workload that makes the long-term picture quite grim on the MP setup.
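Here's a toy model of that 'bad use case', just to make the utilization point concrete - the core/TMU counts and demand numbers are arbitrary, and real SGX work distribution is far more sophisticated than this sketch:

```python
# Toy model of the 'bad use case' above: an ALU-bound task and a
# TMU-bound task running side by side. Numbers are arbitrary; real
# SGX work distribution is far more sophisticated than this sketch.

CORES = 4
TMUS_PER_CORE = 2
TOTAL_TMUS = CORES * TMUS_PER_CORE          # 8 TMUs overall

# Per-frame TMU demand of the two concurrent tasks (arbitrary units):
TMU_DEMAND = {"task A (ALU-bound)": 0, "task B (TMU-bound)": 6}

# Case 1: tiled MP setup, each task pinned to half the cores. Task B
# can only use the TMUs of its own cores; the rest idle until task A's
# tiles retire.
tmus_visible_to_b = (CORES // 2) * TMUS_PER_CORE
mp_busy = min(TMU_DEMAND["task B (TMU-bound)"], tmus_visible_to_b)

# Case 2: hypothetical common pool -- any TMU can service task B.
pool_busy = min(TMU_DEMAND["task B (TMU-bound)"], TOTAL_TMUS)

print(f"MP setup:     {mp_busy}/{TOTAL_TMUS} TMUs busy")
print(f"Shared pool:  {pool_busy}/{TOTAL_TMUS} TMUs busy")
```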
Well, so far I've been referring to the non-MP setup as a hypothetical 'pool of resources' - ALUs, TMUs, ROPs, what have you. The moment we introduce some sort of topology to that 'GPU soup' the situation stops being so ideal and becomes bound by its own set of resource-allocation limitations, thread affinities, etc.
I'll try to do a few...<attempt to catch shifty's attention>
MODS: Can we split the technical discussion from the business/gameplay discussion?
</attempt to catch shifty's attention>
Adding a heatsink wouldn't do any good if there's no ample air and even less circulation.
It would be much better to attach a thermal pad or heatpipe to the backside, but I dunno what it should be attached to - a metallic backplate?
I doubt anything above 2W total power is feasible for a PSP2 form factor.
Launching at 28nm is probably planned for that reason (die area isn't much of a concern at 45nm - I read 35 mm² for the GPU, and the CPU should be < 15 mm²). ARM designs are pretty much the testbed for 28/32nm at many foundries (clicky), so there might be enough confidence to get everything ready for 28nm.
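Rough numbers on the 45nm-to-28nm shrink, assuming ideal area scaling (which real processes never quite deliver), based on the die-area figures quoted above:

```python
# Naive ideal-shrink estimate for the quoted die areas. Real processes
# never scale perfectly, so treat this as a best case.

areas_45nm_mm2 = {"GPU": 35.0, "CPU": 15.0}    # figures quoted above

ideal_scale = (28.0 / 45.0) ** 2               # ~0.39x area for a full shrink

for name, area in areas_45nm_mm2.items():
    print(f"{name}: {area:.0f} mm^2 @ 45nm -> ~{area * ideal_scale:.0f} mm^2 @ 28nm (ideal)")
```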
The original PSP-1000 has a max power draw of around 1.5~2W at a minimum of 4 hours of gameplay; I think a larger NGP may be feasible at 2.5~3W?
Sony staff demoed a handful of upcoming first-party NGP titles, including Uncharted, Little Deviants and WipEout. The source said the latter was "the WipEout HD PS3 engine running on NGP with no changes to the art platform. That means full resolution, full 60 frames per second. It looks exactly the same as it does on PS3 – all the shader effects are in there".
With Sony urging developers to create releases that work across PS3 and NGP, the implications of this are significant. "They want us to do cross-platform," said the source, explaining that the submission process has been streamlined, with only a single submission required for a title on PSN and NGP.
The PSP-1000 chipset had (and still has) a max power way below 1W.

"The original PSP-1000 has max power around 1.5~2W @ min 4 hour gameplay; I think a larger NGP may be feasible for 2.5~3W?"
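A quick sanity check against the original PSP's battery backs that up - assuming the commonly quoted 1800 mAh / 3.7 V pack (an assumption here, not something from the thread):

```python
# Play time supported by the original PSP-1000 battery at various
# average whole-device power draws. 1800 mAh at 3.7 V is the commonly
# quoted pack spec -- treat it as an assumption.

battery_wh = 1.8 * 3.7    # ~6.7 Wh of stored energy

for avg_power_w in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{avg_power_w:.1f} W average draw -> ~{battery_wh / avg_power_w:.1f} h of play")

# A sustained 2 W would drain the pack in ~3.3 h, so the advertised
# 4-6 h figures imply a whole-device average closer to 1-1.5 W,
# consistent with the chipset itself staying well under 1 W.
```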
"the WipEout HD PS3 engine running on PS3 with no changes to the art platform. That means full resolution, full 60 frames per second. It looks exactly the same as it does on PS3 – all the shader effects are in there".