PowerVR 5?

Chalnoth said:
Scott C said:
Well, a PowerVR type GPU would do MSAA for free.
Perhaps 4x or even 8x at the same speed as no MSAA and ATI and NVidia can't keep up with that, bandwidth saving techniques or not.
No. You still get an increase in fillrate requirements at each triangle edge, and the required increase in depth resolution will put more strain on the depth-sorting part of the chip. The performance drop will be based upon different factors than with a traditional renderer, but it probably won't be much different than a modern video card that supports framebuffer compression and has enough video memory that you don't become texture limited once FSAA is turned on.

lol, here we go again...
I have to agree though, if the z tests are parallel like I've read, more finagling would probably be required in hardware to take good advantage of the lesser texturing requirements. Not sure how that would pan out. I still think the main advantage a tiler has for AA is knowing beforehand what the whole scene is going to be, or at least that tile's scene. This could blow traditional renderers out of the water in cases where good AA is the bottleneck.
 
Oh, in my experience PowerVR drivers have been the least arrogant of any video card I've ever owned. It's nice when you can actually uninstall something and it's gone. It's also nice not to be tempted with 17 updates a month that alternately fix and introduce bugs for the benefit of 1 fps in half the games you play.
 
Jerry Cornelius said:
lol, here we go again...
I have to agree though, if the z tests are parallel like I've read, more finagling would probably be required in hardware to take good advantage of the lesser texturing requirements. Not sure how that would pan out. I still think the main advantage a tiler has for AA is knowing beforehand what the whole scene is going to be, or at least that tile's scene. This could blow traditional renderers out of the water in cases where good AA is the bottleneck.
Personally, I feel that the main problem with tilers actually is AA, in this respect:

Tilers need to have a scene buffer that stores all of the vertex/triangle data after transform for the frame being rendered. The amount of memory that this buffer will require is unknowable before rendering, and thus any tiler must consider the possibility of a scene buffer overflow.

The simplest way to deal with a scene buffer overflow is to just no longer allow new triangles to be input into the scene buffer, and render as-is, outputting full-resolution depth and color buffers for the scene.

This has severe problems with MSAA, as you'd either have to turn off MSAA when doing this, or multiply the amount of required memory by two times the number of MSAA samples. This sudden, massive increase in required memory bandwidth and space would cause a significant hitch in performance.
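
A rough back-of-envelope sketch of what that full-resolution fallback costs (my own assumed numbers: 1600x1200, 32-bit colour and 32-bit depth per sample, no compression):

def fallback_buffer_mb(width=1600, height=1200, samples=4, bytes_per_value=4):
    # one colour value and one depth value stored for every sample of every pixel
    pixels = width * height
    return pixels * samples * 2 * bytes_per_value / 2**20

print(fallback_buffer_mb(samples=1))   # ~14.6 MB without MSAA
print(fallback_buffer_mb(samples=4))   # ~58.6 MB with 4x MSAA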

But the real issue here is simply that TBDRs are not solving the right issues for the desktop space. Put simply, long shaders are making memory bandwidth concerns less and less important. And modern architectures don't need to do any more processing than a TBDR, provided an initial z-pass is done before drawing the frame (something that is essentially required for rendering with most any robust shadowing technique anyway).
 
Long shaders are what they are giving us because they can't solve the memory bandwidth issues.

If you are overflowing the scene buffer you've got bigger problems than, well, overflowing the scene buffer. You probably won't care that you're getting 4 fps instead of 5 fps when this happens.

My thinking is simply that you can pretty much do whatever you want to AA the scene if you have all the data for a tile in your hand at one time.
 
TBDRs vs. IMRs aside, I think we generally need more sophisticated AA algorithms for higher sample densities, with framebuffer consumption and bandwidth requirements as low as possible.

I don't expect any major breakthroughs prior to WGF2.0 to be honest, yet it's still high time for some serious innovations.
 
I don't expect serious improvements in anything until we get TBDRs that use Cell chips.

I'm not even a PVR fanboy anymore. :(
 
Ailuros said:
I'm looking at the other segments they're dealing with, and in most if not all of them the current IP model is ideal. When it comes to licensing deals in the PDA/mobile market they've swept the floor thus far, and it's merely one example.

Oh please, let's see some volume before we talk about who is sweeping the floor. :rolleyes: Little early to be claiming some success in the mobile space...
 
Chalnoth said:
Dave B(TotalVR) said:
Normally I would agree Ailuros, but the potential advantage available with PowerVR outweighs that IMO. They hadn't released a card for ages, yet when they released the KYRO II it was beating much more expensive cards in many benchmarks - particularly the high-detail variety (at the time) like Serious Sam.
Okay, two things.
1. They released the Kyro, didn't they?
2. At that time, there were essentially zero memory bandwidth and overdraw reduction techniques in action.

There's no reason to expect that PowerVR can produce a video card that supports modern shaders, is performance-competitive with them, and can produce solid, stable drivers on release.

One day they might just release a card with raw specs comparable to ATI/NV offerings and wipe the floor with them.
This would be likely if ATI and nVidia currently had nothing more than higher-clocked GeForce 256's with more shader capabilities. Now, not so much.

1) Whoops, yeah, the KYRO I was a good performer for its price too; remember these were both budget cards.

2) Indeed there were not, but overdraw was not as significant back then either.

3) Series 5 is going into the arcades and has 'beyond VS/PS 3 functionality', so I'd say there most certainly is.
 
Ailuros said:
DaveB(TotalVR),

I'm looking at the other segments they're dealing with, and in most if not all of them the current IP model is ideal. When it comes to licensing deals in the PDA/mobile market they've swept the floor thus far, and it's merely one example.

Not only is the PC graphics market too demanding - an IP business scheme there can be called "weird" at best - but I can also see much lower potential when it comes to royalty income from it. Compared to the number of sold units of the MBX family it would be essentially peanuts. Granted, the royalty income from one MBX should be minuscule compared to a highly complex PC graphics chip, yet development costs are also higher and thus the margins fall rather in favour of anything MBX.

Somehow I'd say it would be a strange business decision at best not to continue the struggle to keep and further strengthen their position in the PDA/mobile market, and to divert valuable resources instead to a highly risky and cut-throat market where they'd not only start from scratch again when it comes to building a brand name, but would also compete against giants like ATI/NVIDIA.

Last spring I was almost certain it would be ready to roll; I haven't the vaguest idea what happened or why it never did. I'm just trying to think what the wisest and safest business decision would be from their perspective. Many have proposed in the past that IMG switch to being a fabless semiconductor company; I still think it's too risky, and it would mean they'd play Russian roulette with the existence of the company.

One can always hope of course, but with the market penetration and size of ATI/NVIDIA (and yes, that includes resources and whatnot) I find it extremely hard to believe that any second tier vendor will ever find it that easy to seriously compete with them. Win impressions, most likely yes, if a product is fast enough; they did win some with the KYRO2, but sadly it was "only" a budget/mainstream offering.

A potential third competitor - insert any name you please - would have to have the resources to write red numbers for a significant amount of time and to release, in a timely fashion, a full top-to-bottom line of products with at least 6 official driver sets per year. There was one recent attempt last year, which I'm still expecting to become a "market leader".

IMHO I'd rather see a very potent high end offering from IMG that wins as many impressions as possible, or nothing. We've had mainstream/budget offerings from them before.


Well, I've always said the best move they can make is to release a spanking card which is faster than everything else and use that as a springboard to cover all areas of the market. In particular an integrated video chipset, because that is where PVR will really shine. 1/3 of the required bandwidth in a completely bandwidth limited situation (on a motherboard with DDR400 RAM shared with the processor, instead of 1GHz dedicated DDR) will equate to around 3x the performance, put simply. Might actually make integrated graphics worth using. If they can cover all areas of the PC market - high end (for the identity), mid range, budget and integrated - then they would stand to make a hell of a lot of profit.
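
Just to put toy numbers on that (all figures below are my own assumptions, not PowerVR's): in a purely bandwidth-limited case the frame rate is roughly the bandwidth left for graphics divided by the traffic each frame generates, so needing ~1/3 of the traffic works out to roughly 3x the frame rate.

def bandwidth_limited_fps(available_gb_per_s, traffic_gb_per_frame):
    return available_gb_per_s / traffic_gb_per_frame

graphics_share = 6.4 * 0.5   # assume half of dual-channel DDR400 (6.4 GB/s) is left for graphics
print(bandwidth_limited_fps(graphics_share, 0.09))   # IMR needing 0.09 GB/frame: ~36 fps
print(bandwidth_limited_fps(graphics_share, 0.03))   # tiler needing 0.03 GB/frame: ~107 fps, ~3x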

There were talks a while back with VIA, but they were kinda nuked by STM from what I heard, which was why Series 4 was shelved. I had been kinda hoping for some time they would restart the deal with Series 5 in mind, but I've heard nothing since. A licence with VIA, for cards and chipsets, could really propel PowerVR back into the PC AIB business.
 
Chalnoth said:
Scott C said:
Well, a PowerVR type GPU would do MSAA for free.
Perhaps 4x or even 8x at the same speed as no MSAA and ATI and NVidia can't keep up with that, bandwidth saving techniques or not.
No. You still get an increase in fillrate requirements at each triangle edge, and the required increase in depth resolution will put more strain on the depth-sorting part of the chip. The performance drop will be based upon different factors than with a traditional renderer, but it probably won't be much different than a modern video card that supports framebuffer compression and has enough video memory that you don't become texture limited once FSAA is turned on.

MSAA requires a 4-fold increase in z-buffer memory access and a 4-fold increase in framebuffer access (both of which don't affect a tiler). It also requires a much smaller increase in texture bandwidth and fillrate. There will be an ocean of difference in performance drop between a tiler and an IMR for MSAA.
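
A toy per-pixel traffic comparison to illustrate that scaling (assumed 4-byte colour and 4-byte depth, no compression, and ignoring the early-Z and colour/z compression real IMRs use, so this only shows the raw 4x factor):

def imr_bytes_per_pixel(samples, overdraw=1):
    # an IMR reads and writes colour + depth for every sample, per layer drawn
    return overdraw * samples * (4 + 4) * 2

def tbdr_bytes_per_pixel(samples):
    # per-sample colour/depth stay in on-chip tile memory;
    # only the resolved colour is written out once
    return 4

for s in (1, 4):
    print(s, imr_bytes_per_pixel(s), tbdr_bytes_per_pixel(s))
# 1 sample: 16 vs 4 bytes; 4 samples: 64 vs 4 bytes per pixel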

As for putting more strain on the depth sorting part of the chip...

With the ability to do 32 operations per clock on depth sorting, this will not be an issue. The HSR section has never been a limiting factor in a PowerVR based card's performance; this is evidenced by the fact that stencil shadows can be done more or less for free, because they use the same part of the chip.

As for framebuffer compression, is there any reason a PVR card couldn't implement this?

Plus, it's not about free memory, it's about memory bandwidth. The only thing that doesn't increase 4x is the texture access, which is why MSAA is faster and preferable to SSAA.
 
Jerry Cornelius said:
Chalnoth said:
Scott C said:
Well, a PowerVR type GPU would do MSAA for free.
Perhaps 4x or even 8x at the same speed as no MSAA and ATI and NVidia can't keep up with that, bandwidth saving techniques or not.
No. You still get an increase in fillrate requirements at each triangle edge, and the required increase in depth resolution will put more strain on the depth-sorting part of the chip. The performance drop will be based upon different factors than with a traditional renderer, but it probably won't be much different than a modern video card that supports framebuffer compression and has enough video memory that you don't become texture limited once FSAA is turned on.

lol, here we go again...
I have to agree though, if the z tests are parallel like I've read, more finagling would probably be required in hardware to take good advantage of the lesser texturing requirements. Not sure how that would pan out. I still think the main advantage a tiler has for AA is knowing beforehand what the whole scene is going to be, or at least that tile's scene. This could blow traditional renderers out of the water in cases where good AA is the bottleneck.

Yeah, I always figured they could screw MSAA and modify the rasteriser to produce a bitmap of pixels which require more than one sample (alpha textures could be included here), and perhaps even work out the optimum number of samples per pixel - and their positions. That would require quite a bit more math though.
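
Something like this, as a very loose sketch (the names and structure are just mine):

import numpy as np

def sample_count_map(width, height, edge_pixels, alpha_pixels, extra_samples=4):
    # map of how many samples each pixel should get: 1 by default,
    # more only where a triangle edge or an alpha-tested texel lands
    counts = np.ones((height, width), dtype=np.uint8)
    for x, y in edge_pixels:
        counts[y, x] = extra_samples
    for x, y in alpha_pixels:
        counts[y, x] = max(int(counts[y, x]), extra_samples)
    return counts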
 
Jerry Cornelius said:
Oh, in my experience PowerVR drivers have been the least arrogant of any video card I've ever owned. It's nice when you can actually uninstall something and it's gone. It's also nice not to be tempted with 17 updates a month that alternately fix and introduce bugs for the benefit of 1 fps in half the games you play.

What you said...
 
"Personally, I feel that the main problem with tilers actually is AA, in this respect:
Tilers need to have a scene buffer that stores all of the vertex/triangle data after transform for the frame being rendered. The amount of memory that this buffer will require is unknowable before rendering, and thus any tiler must consider the possibility of a scene buffer overflow."


Dude, the size of the scene buffer is not a problem; it never has been and it never will be. Also, it is knowable, because chances are it's going to be very similar in size to what it was in the last frame, no?
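
A minimal sketch of that frame-to-frame heuristic (my own naming, nothing official): keep the buffer at least as big as the last frame's usage plus some headroom, and grow it further if an overflow ever does happen.

class SceneBufferSizer:
    def __init__(self, initial_bytes=8 << 20, headroom=1.25):
        self.capacity = initial_bytes
        self.headroom = headroom

    def after_frame(self, bytes_used, overflowed):
        if overflowed:
            self.capacity *= 2   # overflow: grow aggressively
        else:
            # never smaller than what the last frame needed, plus a margin
            self.capacity = max(self.capacity, int(bytes_used * self.headroom))
        return self.capacity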


"The simplest way to deal with a scene buffer overflow is to just no longer allow new triangles to be input into the scene buffer, and render as-is, outputting full-resolution depth and color buffers for the scene."

Or dynamically increase it in size?

"This has sever problems with MSAA, as you'd either have to turn off MSAA when doing this, or multiply the amount of required memory by two times the number of MSAA samples. This sudden, massive increase in required memory bandwidth and space would cause a significant hitch in performance."

That would do it, yes, but plenty of work has gone into this, and there are relevant patents that have been filed since the Kyro II, which just had the simple answer of splitting the scene in two.

"But the real issue here is simply that TBDR's are not solving the right issues for the desktop space. Put simply, long shaders are making memory bandwidth concerns less and less important."

Especially when they are removed as hidden surfaces.


"And modern architectures don't need to do any more processing than a TBDR provided an initial z-pass is done before drawing the frame (something that is essentially required for rendering with most any robust shadowing technique anyway)."

Yes they do; for a start, they have to do an initial z-pass, and there are zero 100% effective hidden surface removal techniques on IMRs.

A tiler doesn't require this pass for shadowing either; I still think modifier volumes are the best answer to shadows there is. Imagine placing a procedural 3D texture in that shadow volume to generate soft edges...
 
SiBoy said:
Ailuros said:
I'm looking at the other segments they're dealing with, and in most if not all of them the current IP model is ideal. When it comes to licensing deals in the PDA/mobile market they've swept the floor thus far, and it's merely one example.

Oh please, let's see some volume before we talk about who is sweeping the floor. :rolleyes: Little early to be claiming some success in the mobile space...

I said licensing deals and nothing more. 5 out of the Top10 semiconductors and others. As a matter of fact I'm pretty certain that those have spent millions for licenses just to leave the IP rotting on shelves. <double> :rolleyes:
 
MSAA requires a 4-fold increase in z-buffer memory access and a 4-fold increase in framebuffer access (both of which don't affect a tiler). It also requires a much smaller increase in texture bandwidth and fillrate. There will be an ocean of difference in performance drop between a tiler and an IMR for MSAA.

For 2x/4x samples per pixel and today's standards I do have some doubts. Today's architectures get 2xMSAA virtually "for free", and that's because they're only capable of 2x samples per cycle; 4x or higher is achieved via sample loops. For very high sample densities the above would be more in line IMHO, something like 16x or higher.
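
Put another way (assuming a back end that produces 2 MSAA samples per pixel per cycle, which is my reading of the above rather than any vendor spec), the loop count just scales with the sample count:

import math

def backend_cycles(samples, samples_per_cycle=2):
    return math.ceil(samples / samples_per_cycle)

for s in (2, 4, 8, 16):
    print(s, backend_cycles(s))   # 2x -> 1 cycle, 4x -> 2, 8x -> 4, 16x -> 8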


With the ability to do 32 operations per clock on depth sorting, this will not be an issue. The HSR section has never been a limiting factor in a PowerVR based card's performance; this is evidenced by the fact that stencil shadows can be done more or less for free, because they use the same part of the chip.

That's a KYRO :p

As for framebuffer compression, is there any reason a PVR card couldn't implement this?

I don't think there are any reasons that speak against it; I just don't think a TBDR needs it, especially for MSAA. When it comes to antialiasing and the future, I'd prefer IHVs to flip to more "exotic" algorithms; irrespective of architecture it could eventually be possible to get to high sample densities with minimal framebuffer and bandwidth requirements. If the implementation were a piece of cake we'd already be there, obviously. With all the hardware space complex shaders take up, though, and the resources required for R&D, I'm afraid IHVs have set other priorities for the time being.

Plus, it's not about free memory, it's about memory bandwidth. The only thing that doesn't increase 4x is the texture access, which is why MSAA is faster and preferable to SSAA.

I don't see why memory footprint isn't a consideration. Let's suppose NV40 were capable of combining float blending HDR with MSAA; do you really think that a 256MB framebuffer would be sufficient for, let's say, 16xMSAA with 64bpp HDR at a high resolution, even if the bandwidth were sufficient?
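
Rough footprint math for that scenario (my assumptions: 1600x1200, FP16 64bpp colour = 8 bytes plus 32-bit depth = 4 bytes per sample, no compression):

def msaa_buffer_mb(width=1600, height=1200, samples=16, color_bytes=8, depth_bytes=4):
    return width * height * samples * (color_bytes + depth_bytes) / 2**20

print(msaa_buffer_mb())   # ~352 MB - already past a 256 MB card before any textures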
 
Loewe said:
Nappe1 said:
PowerVR, where's the MBX2?

That's not the right question.

PowerVR, where's the MBX? That is the right question.

All this effort for the Dell Axim X50v? :rolleyes:
*patting my X50v* Yeah, it would be great if there were more of these chips in circulation, so I'd get some more games and applications that could take advantage of it. Enigmo, Stunt Car Extreme, and Beta Player with the (extremely hard to find) plugin aren't enough, but they are decent showcases of its potential. People are blown away at the smoothness and clarity of MPEG4 played off this thing, even with my CPU at only 208MHz.
 
Ailuros said:
I said licensing deals and nothing more. 5 out of the Top10 semiconductors and others. As a matter of fact I'm pretty certain that those have spent millions for licenses just to leave the IP rotting on shelves. <double> :rolleyes:

Welcome to the real world, it happens all the time. When the MPEG2 standard first came out CompCore licensed their MPEG2 decoder IP core more than a dozen times to all the major Japanese CE companies. One actually took it to silicon (Hitachi) and not a single company ever shipped product using it.

Same situation here - everyone thinks they need a 3D core, and the only one out there is MBX (nice alternatives...BitBoys?). Once the 3D mobile market matures a bit these same companies will make their REAL selections.
 
Mali100 (Falanx) seems like a very nice competing solution too. Irrespective of that, Wavey pointed out here on the boards recently that there have been some analyst proposals for ATI to consider buying out IMG. Not that there seems to be any merit to that one thus far, but one of the purposes in such a theoretical scenario would be to buy out that very success and not hypothetical thin air.

Same situation here - everyone thinks they need a 3D core, and the only one out there is MBX (nice alternatives...BitBoys?). Once the 3D mobile market matures a bit these same companies will make their REAL selections.

More than one large semiconductor company has announced real chips integrating MBX and further IMG IP cores. Amongst them, Intel (2700G), TI (OMAP2) and Renesas (SH3/4) were early adopters.

Renesas SH4 in Pioneer's Cybernavi HDD:

http://download.renesas.com/eng/edge/07/customer.html

As far as the X50v goes, performance is so "bad" that people actually think that the 2700G has a VGP:

http://pocketmatrix.com/forums/viewtopic.php?t=19785&postdays=0&postorder=asc&start=15

Just two days ago at ETS2005:

Smart Chips for Smart Phones - Media & Image Processing for Cellular

Wireless Media Processors provide incredible benefits to handsets including improved image processing, power efficiency and amazing 2D and 3D graphics. 2005 will be a banner year for multimedia handsets and the software that will drive consumer adoption.

Moderator: Dr. Jon Peddie, President, Jon Peddie Research

Panelist: Ward Pitkin, WTBU Cellular Systems SW Product Line Manager, Texas Instruments

Panelist: James Bruce, North American Segment Manager, ARM Inc.

Panelist: David Cooper, Product Manager, Imagination Technologies

Panelist: Thomas "Rick" Tewell, Senior Marketing Manager, Fujitsu

Panelist: Ville Miettinen, Chief Technology Officer, Hybrid Graphics

Panelist: Brian Bruning, Director of Handheld Content, Nvidia

Location: Riviera Royale Pavilion - Skybox 209-210

Time: 9:00AM, Thursday, January 6

----------------------------------------------

Completely OT: ex-3dfx DevRel and former Fathammer CEO Brian Bruning is now the Director of Handheld Content at NVIDIA.
 