Sony's Next Generation Portable unveiling - PSP2 in disguise

My Zune HD is now 2 years old and the screen looks just as bright as it did when I first bought it.

I don't think you'd notice it if you look at it every day. If you compare your current Zune HD to a brand new one, you may notice some differences, though.
Nonetheless, if you use the Zune primarily to listen to music, the screen shouldn't have much more than a few hundred hours of use.



That's got to be an anti-piracy measure. The DS's standard SD card slot has been very costly in that regard.

AFAIK the SD slot in the DSi has never been successfully used for running DS ROMs.


TBDRs have a healthy advantage in bandwidth-constrained scenarios, for good reason. There's no bandwidth overhead for MP, by the way.

I don't understand that last sentence. Are you implying that an SGX543MP4 could really be ~16x faster than an SGX535 while using only a dual-channel 32-bit memory controller, shared with the CPU?
At some point, the MP GPU will choke on bandwidth, TBDR or not...


***edit: by the way, the C-50 is clocked at 280MHz and not 500MHz from what I can see: http://www.anandtech.com/show/4134/the-brazos-review-amds-e350-supplants-ion-for-miniitx
Yes, I mentioned that in my first comparison post. It's AMD's lowest-power APU, and I figured a 280MHz Robson should be closer to the SGX543MP4 (which rumours now put at 200MHz).
 
I don't think you'd notice it if you look at it every day. If you compare your current Zune HD to a brand new one, you may notice some differences, though.
Nonetheless, if you use the Zune primarily to listen to music, the screen shouldn't have much more than a few hundred hours of use.

I use it to watch about four one-hour video casts a week, and my nephew watches a bunch of Sesame Street video casts every week.

If I hold it up to my friend's new 64GB unit that is about 2 months old, the picture quality has remained the same.

Maybe I haven't used it enough, but I do have it with me every day and use it every day. I'm addicted to podcasts.

I don't think anyone will see a problem with the NGP either, unless they play it 8 hours a day, every day, for 2 years.



I will say this.


I bought my DS when it came out; I think it was $200. I got $100 in trade credit toward my DS Lite. I traded my DS Lite in and got $100 toward my DSi, and now I get $100 for my DSi toward the 3DS.

I'm sure by the time my NGP screen starts going bad I could just trade it in toward the NGP 2000 or 3000.
 
Is the PSP2 running at full clocks for those demos? The PSMeet demos are typical tech demos: a nicely rendered character in empty environments, and they still look "polygonal" and "flat" compared to their PS3 counterparts. IMO they're at about 85% of PS3 quality for the characters and 75% for the environments. I wonder how PSP2 graphics will hold up in a full game. This observation shouldn't befuddle the B3D techheads; I just thought I'd bring up the point.
 
I don't understand that last sentence. Are you implying that an SGX543MP4 could really be ~16x faster than an SGX535 while using only a dual-channel 32-bit memory controller, shared with the CPU?
At some point, the MP GPU will choke on bandwidth, TBDR or not...

Any SoC builder will use the bandwidth the SoC really needs. There are obviously quite different bandwidth requirements between a single-core A8 and a quad-core A9 SoC. I probably phrased it wrong, but there's no additional bandwidth needed just because the GPU block consists of multiple cores. Bandwidth consumption is the same as if it were one single core with similar performance characteristics.
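To put some rough numbers on that, here's a back-of-the-envelope Python sketch; the resolution, framerate and texture-traffic factor are my own assumptions, not anything announced:

# Back-of-the-envelope framebuffer + texture traffic for a TBDR.
# All figures below are assumptions for illustration, not announced NGP specs.
width, height  = 960, 544      # assumed display resolution
fps            = 30            # assumed target framerate
bytes_per_px   = 4             # one 32-bit colour write per visible pixel
texture_factor = 1.5           # assumed texture bytes fetched per pixel, relative to the colour write

pixels_per_sec = width * height * fps
bandwidth_mb_s = pixels_per_sec * bytes_per_px * (1 + texture_factor) / 1e6
print(f"~{bandwidth_mb_s:.0f} MB/s of external traffic for the frame")
# Note: the number of GPU cores never appears in this estimate.

The point is that the core count never shows up in the calculation: splitting the same tiles across one or four cores changes how quickly the frame finishes, not how many bytes have to cross the bus per frame.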

Yes, I mentioned that in my first comparison post. It's AMD's lowest-power APU, and I figured a 280MHz Robson should be closer to the SGX543MP4 (which rumours now put at 200MHz).

I don't know yet what the MP4 is clocked at, but assuming it's 200MHz, I'm not surprised at all if a 64SP/8TMU TBDR can beat an 80SP/8TMU IMR at 280MHz in selected cases.
 
....but by avoiding x86 like the plague we will get battery life in the order of 5-6 hours vs 1-2 hours.
Are you saying that you can reduce *system* power five-fold just by switching away from x86?

God, you must have programmed x86 in machine code at some point. :smile:
 
The best blue dyes currently have a half-life of around 50,000 hours, while the maximum for red and green is about 150,000 hours. The solution that Sony and many others use is to shorten the life of the red and green dyes to match the blue dye. Not only does that reduce production costs, since cheaper red and green dyes can be used, it also gives an even decay rate, so the PQ stays awesome and the problem you are talking about is avoided.
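A minimal sketch of the idea in Python; the half-life figures are the ones above, but the simple exponential decay model is my assumption:

# Simple exponential-decay model: L(t) = L0 * 0.5 ** (t / half_life).
# Half-life figures are the ones quoted above; the model itself is an assumption.
def relative_luminance(hours, half_life_hours):
    return 0.5 ** (hours / half_life_hours)

hours = 10_000
blue         = relative_luminance(hours, 50_000)   # best current blue dye
rg_unmatched = relative_luminance(hours, 150_000)  # long-lived red/green dyes
rg_matched   = relative_luminance(hours, 50_000)   # red/green shortened to match blue

print(f"after {hours} h: blue {blue:.2f}, unmatched R/G {rg_unmatched:.2f}, matched R/G {rg_matched:.2f}")

With unmatched dyes the white point drifts as blue fades faster; matching the half-lives keeps all three channels fading together, which is the "even decay rate" mentioned above.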



I can't reveal the clocks, but you can't count ATi ALUs as single units; you have to count them in blocks of 4 or 5 depending on the architecture used. At a comparable 800x600 resolution the C-50 performs very poorly in games like WoW and CoD4.

Power draw is probably the most important aspect of embedded devices; the A9+SGX543MP+ SoC consumes 50% less power than the C-50 while yielding more impressive results. There are downsides and trade-offs, but by avoiding x86 like the plague we will get battery life in the order of 5-6 hours vs 1-2 hours.
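The rough arithmetic behind that kind of gap looks something like this (Python; the battery capacity, display draw and SoC figures are placeholders I picked to illustrate the scaling, not real NGP or C-50 specs):

# Crude battery-life estimate: hours = battery energy / average system power.
# Every number here is a placeholder chosen to illustrate the scaling.
battery_wh = 8.0    # assumed battery capacity in watt-hours
display_w  = 1.0    # assumed screen + radios + misc draw, common to both designs
soc_arm_w  = 0.5    # assumed A9 + SGX543MP SoC average draw
soc_x86_w  = 4.0    # assumed C-50-class APU average draw

for name, soc_w in (("ARM SoC", soc_arm_w), ("x86 APU", soc_x86_w)):
    hours = battery_wh / (soc_w + display_w)
    print(f"{name}: ~{hours:.1f} h")
# The fixed display/radio draw is why halving SoC power doesn't halve battery
# life outright, but a multi-watt gap at the SoC still dominates the total.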

I still don't think you can compare the two directly; they offer different experiences, and Fusion isn't designed for embedded products. I'm sure if AMD made a Fusion product aimed at embedded devices, then we could make a proper comparison.

Do the Cortex-A9s come with their NEON SIMD unit, or is it another Tegra 2-like thing (a bit of a disappointment for me ;))?
 
Shifty Geezer said:
That's got to be an anti-piracy measure. The DS's standard SD card slot has been very costly in that regard. A lack of on-board flash is bad, though. Mem cards had better be competitively priced, or the cost of ownership will be notably higher than the NGP's list price.
Well, remember that the Memory Stick took 4-5 years after the PSP's launch to reach competitive pricing with SD.
Anyway, if it's true that ROM carts have their own save area (words can't express how backwards this part is), there's no reason for them to price the cards competitively, because the machine will be fully functional without added storage - i.e. see the 360 Arcade. The cost of ownership will only change for those who want extra functionality outside of games.

What irks me, though, is that there's a very good chance that on top of premium pricing we won't even get an option for high capacity - how much do you want to bet the largest writable media will be 2-4x smaller than what other flash media can offer at the same time?
 
See also Arun's post. Oversimplified theoretical example: assume you have a scene with evenly distributed data across it; once the master tiling unit of the MP splits it up into four (same-sized) macro tiles and assigns one to each core, why exactly would each core NOT make full use of its 2 TMUs? The same goes for any other unit.

There still seems to be some confusion about how things work here.

The master tiling unit has nothing to do with rasterisation; it only pulls together the disparate tiling lists generated by each core into a single set of presentation-order lists during the geometry processing phase of things.

Tiles are then rasterised in their entirety by a single core; note that that's tiles, not macro tiles. Tiles are distributed across cores on a demand basis. Tile granularity plus demand-driven distribution of the workload means that you get pretty much linear scaling of all performance parameters with the number of cores for pretty much any practical scenario (assuming enough memory bandwidth).
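As a toy illustration of the demand-driven part (a Python sketch of the general idea only, not the actual hardware scheduler): tiles go into one shared queue and each core pulls the next tile as soon as it finishes its current one, so faster-finishing cores naturally pick up more of the work.

# Toy model of demand-driven tile distribution across GPU cores.
from queue import Queue, Empty
from threading import Thread

def render_tile(tile_id):
    # Stand-in for rasterising and shading one whole tile on one core.
    return f"tile {tile_id} done"

def core_worker(core_id, tile_queue, results):
    while True:
        try:
            tile_id = tile_queue.get_nowait()   # pull the next tile on demand
        except Empty:
            return                              # no tiles left, this core is finished
        results.append((core_id, render_tile(tile_id)))

tiles = Queue()
for t in range(64):                             # e.g. an 8x8 grid of tiles for one frame
    tiles.put(t)

results = []
cores = [Thread(target=core_worker, args=(c, tiles, results)) for c in range(4)]
for c in cores:
    c.start()
for c in cores:
    c.join()
print(len(results), "tiles rendered across 4 cores")
# Because tiles are handed out one at a time on demand, an unevenly loaded
# scene doesn't leave cores idle the way a fixed macro-tile split could.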

John.
 
The stencil performance of this thing will be pretty nice.

IMG had also promised to provide some specifics about on-core MRTs and their performance advantage in SGX way back in 2005 but never did. Would be interesting to consider it now.
 
Anyway, if it's true that ROM carts have their own save area (words can't express how backwards this part is)
No one said that writeable save space on the carts was mutually exclusive with other forms of save management.
It's like you don't even let the tension build up a little before handing out the morsel of info. You could have let speculation run for weeks. Weeks! Just on that NEON VFPU/SIMD part.
 
Nikkei said that the NGP can support 3G online gaming, but SCEE said that synchronous 3G online gaming depends heavily on the amount of data transferred. So what kind of 3G online game can the NGP provide?

For example, is it possible that Call of Duty on the NGP could provide a simplified version of its multiplayer mode (fewer people in a game, simplified stages, etc.)?

Even if only 30~40% of 3G subscribers can get a stable connection, I think the subscriber base is still large enough to promote NGP 3G online gaming in the first one to two years.
 
The debug text I'm looking at right now looks pin-sharp.

Dean

Well, that's good to hear. Maybe the screen is an in-house product. I do hope so, because I think Sony has the better OLED on the market right now. Maybe theirs is more expensive to produce, but their Super Top Emission has a perfect viewing angle, as opposed to Samsung's OLED, which, though better than LCD, is still only near perfect, and its colors often look oversaturated. The Walkman X's screen color accuracy and viewing angle are better than the Galaxy S's, as far as I'm concerned. The Walkman X's screen has almost 180 degrees of viewing angle without any degradation in color or contrast. Sony will have the upper hand in OLED technology as long as they don't use the PenTile arrangement, though PenTile, I believe, is a lot cheaper to produce (at least for now, until printable OLEDs are possible).

Although I think it's still possible that Sony is outsourcing its OLED screen.
 
You do realise that PowerVR tiles are on the order of a thousand pixels, not hundreds of thousands? It wouldn't make sense to let multiple cores work on the same tile in the same frame.
And I thought the rhetorical nature of that last sentence was apparent enough. In retrospect, I think I should've used a smiley or such.

Ailuros said:
See also Arun's post. Oversimplified theoretical example: assume you have a scene with evenly distributed data across it; once the master tiling unit of the MP splits it up into four (same-sized) macro tiles and assigns one to each core, why exactly would each core NOT make full use of its 2 TMUs? The same goes for any other unit.
Ailuros, I have no doubts whatsoever that the 2 TMUs per core will be made good use of, statistically. What I was saying (and what I should've worded better, apparently) was that the metric 'M tiles * N TMUs/tile in a multi-tile setup = same number of TMUs in a non-tiled setup' is flawed. In a non-tiled setup you can have all TMUs working toward a single fragment at a time. Not so in a multi-tile setup - there the number of TMUs per tile puts a hard cap on how many TMUs can work toward a common fragment. Ergo my hapless mention of all units working on the same tile.

If we really meant to compare an MP, locale-division setup to something more canonical, we'd need to go to further lengths, looking at ALU/ROP, ALU/TMU, TMU/ROP, etc. ratios, not just summing things up.
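A tiny worked example of the distinction, with made-up unit counts (Python, purely illustrative):

# Purely illustrative: counting how many TMUs can cooperate on ONE fragment.
tiles_in_flight, tmus_per_core = 4, 2

total_tmus            = tiles_in_flight * tmus_per_core   # 8, the figure usually quoted
max_tmus_per_fragment = tmus_per_core                      # 2, since one tile is owned by one core

print(total_tmus, "TMUs in aggregate, but only", max_tmus_per_fragment, "can serve the same fragment")
# Aggregate texturing throughput across the frame matches a hypothetical
# non-tiled 8-TMU part, yet the per-fragment ceiling is lower - which is
# exactly why simply summing units across cores can mislead.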
 
Nikkei said that the NGP can support 3G online gaming, but SCEE said that synchronous 3G online gaming depends heavily on the amount of data transferred. So what kind of 3G online game can the NGP provide?

For example, is it possible that Call of Duty on the NGP could provide a simplified version of its multiplayer mode (fewer people in a game, simplified stages, etc.)?

Even if only 30~40% of 3G subscribers can get a stable connection, I think the subscriber base is still large enough to promote NGP 3G online gaming in the first one to two years.
3G networks have latency issues, though, which makes it hard to imagine proper action gaming over 3G as a viable solution.
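Some rough numbers to put that in context (Python; the RTT figures are ballpark values for 3G and home broadband, and the 30 fps target is an assumption):

# Rough latency budget for synchronous action gaming over 3G.
# The RTT ranges are ballpark figures for HSPA-era networks, not measurements.
frame_ms = 1000 / 30                 # a 30 fps action game draws a frame every ~33 ms
links = {"3G": (100, 300), "Wi-Fi/broadband": (20, 60)}   # round-trip times in ms

for name, (lo, hi) in links.items():
    print(f"{name}: an opponent's action arrives {lo / frame_ms:.0f}-{hi / frame_ms:.0f} frames late")
# At several frames of extra delay, twitch shooters need heavy prediction and
# lag compensation; turn-based or asynchronous modes care far less about RTT.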
 
The master tiling unit has nothing to do with rasterisation; it only pulls together the disparate tiling lists generated by each core into a single set of presentation-order lists during the geometry processing phase of things.
That assumes that the master tiling unit can scale with the number of cores as well; otherwise it would itself become the bottleneck sooner rather than later.

Tiles are then rasterised in their entirety by a single core; note that that's tiles, not macro tiles. Tiles are distributed across cores on a demand basis. Tile granularity plus demand-driven distribution of the workload means that you get pretty much linear scaling of all performance parameters with the number of cores for pretty much any practical scenario (assuming enough memory bandwidth).
This is probably true.
 
That assumes that the master tiling unit can scale with the number of cores as well; otherwise it would itself become the bottleneck sooner rather than later.
The master tiling unit only links lists together; it does very little actual work. You would need to instantiate a very large number of cores before it would become a bottleneck.
This is probably true.
Not just probably, it is true ;)
 
Am I crazy for thinking that if the 128MB of VRAM on the GPU is high-bandwidth eDRAM like what was in the PS2 (4MB) & Xbox 360 (10MB), then together with the 512MB of RAM the NGP could emulate the PS2 better than the PS3 does, and might even be able to put out games that look better than the Xbox 360 & PS3 at its lower res?

I mean, there has to be some reason that they were able to just port PS3 data over to a handheld with half the VRAM of the PS3.

Long story short: what's the bandwidth of the 128MB of VRAM, if it's true that it has 128MB of VRAM?
 