PowerVR Series5

If PVR had a design that could easily deliver NV40-league performance and features with fewer transistors and lower bandwidth requirements (i.e. lower manufacturing costs), then they would have no problem finding a manufacturing partner / IP licensee to build the damn thing.

Seems kinda obvious to me that they don't.
 
They would have had to license it a year ago, when no one knew what the NV40 could do. You cannot take the risk out of PC graphics development.
 
If PVR had a design that could easily deliver NV40-league performance and features with fewer transistors and lower bandwidth requirements (i.e. lower manufacturing costs), then they would have no problem finding a manufacturing partner / IP licensee to build the damn thing.

Who says they have had a problem getting a partner? Why do some people seem to think that any partnership has to be officially announced immediately?
 
Here's hoping that Series 5 is being pursued by an AIB (perhaps PowerColor / TUL Corporation? -- IIRC their product portfolio already includes NVIDIA, ATI, S3 and XGI chipsets -- why not round it off with PowerVR?). At least the current chatter regarding a PowerVR Series 5 part seems more promising than in previous times! I would be very interested in a Series 5 part, even though I have just upgraded my Kyro II to a Radeon 9800 Pro!

If it is an 8x1 design with a core speed in the 300-400MHz range and a synchronous memory clock, would 128-bit DDR-1 RAM be sufficient to ensure that it is not bandwidth limited? With the demand for high-speed 256-bit DDR-1 (300MHz+) and GDDR-3 RAM (600-800MHz) being driven by ATI and NVIDIA, will there still be sufficient supply of moderately clocked 128-bit DDR-1 RAM available for it to be price-competitive, should it be required for Series 5?
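
As a rough sanity check on those numbers, here is a back-of-envelope sketch; the 350MHz clocks below are just illustrative picks from the ranges mentioned above, not actual Series 5 specs.

```cpp
// Back-of-envelope bandwidth math for the hypothetical 8x1 part discussed above.
// All figures are illustrative assumptions, not real specs.
#include <cstdio>

int main() {
    const double core_mhz  = 350.0;        // assumed core clock (middle of 300-400MHz)
    const double mem_mhz   = 350.0;        // synchronous DDR-1 memory clock
    const double bus_bytes = 128.0 / 8.0;  // 128-bit bus = 16 bytes per transfer

    // DDR transfers twice per clock, so peak external bandwidth is:
    const double ext_bw_gbs = mem_mhz * 2.0 * bus_bytes / 1000.0;   // ~11.2 GB/s

    // Peak colour-write traffic alone for 8 pipes at 32bpp
    // (ignoring Z, texture reads and overdraw entirely):
    const double colour_gbs = core_mhz * 8.0 * 4.0 / 1000.0;        // ~11.2 GB/s

    std::printf("External bandwidth: %.1f GB/s\n", ext_bw_gbs);
    std::printf("Peak colour writes: %.1f GB/s\n", colour_gbs);
    return 0;
}
```

An immediate-mode part would still need Z and texture traffic on top of those colour writes, so 128-bit DDR-1 looks tight on paper; the usual TBDR argument is that colour and Z stay in the on-chip tile buffer and largely only the final colour per pixel goes to external memory, which is why a narrower bus could plausibly be enough.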

Is it also likely that the Shader 3.0 engine is on-chip in Series 5, or is it instead catered for via an additional/separate chip, a la PowerVR's Elan T&L processor?

Anyway, since this is my first post -- greetings to one and all! I've been a reader of Beyond3D for ages, but I've always felt too intimidated to post before :D

Cheers,

BrynS
 
L233 said:
If PVR had a design that could easily deliver NV40-league performance and features with fewer transistors and lower bandwidth requirements (i.e. lower manufacturing costs), then they would have no problem finding a manufacturing partner / IP licensee to build the damn thing.

Seems kinda obvious to me that they don't.

One only has to point to the Kyro.

Basically, it had the specs of a TNT2/GeForce card, yet it could deliver performance close to a GeForce2 (in some circumstances).

It's the power of the TBDR.
 
MfA said:
They would have had to license it a year ago, when no one knew what the NV40 could do. You cannot take the risk out of PC graphics development.

Um, if a company saw an NV40-beating chip a YEAR ago, even without knowing anything about the actual NV40, you don't think their eyes would have popped out of their sockets???

Man, the NV40 is a stinkin' fast chip. Anything even faster than that, with at minimum partially free FSAA on top, would just be plain ridiculous. It should be obvious to anyone that it would have been a record-setting chip design; who in their right mind would say no to such a thing???

If it existed, it would have been licensed, period. That it hasn't been licensed must therefore mean it doesn't exist...
 
Large semiconductor companies apparently are not willing to license paper specs anymore; licenses for MBX started to roll in ONLY after IMG presented it in mid-2002 in FPGA form.

Now, considering the development cycles for PS/VS3.0 products such as NV40, I'd love to hear how someone would have been able to present operating silicon a year before today, unless someone is naive enough to believe that any IHV could have had working SM3.0-capable silicon in early 2003.

That it hasn't been licensed must therefore mean it doesn't exist...

Obviously SEGA licensed either paper specs or a ghost chip then :LOL:
 
Guden Oden said:
Um, if a company saw a NV40-beating chip a YEAR ago, even without knowing anything about the actual NV40, you don't think their eyes would have popped out of their sockets???

There would have been no chip, only a design.

With the possible exception of Matrox, any company looking to license wouldn't have the expertise to accurately assess how well it could stack up in the future, or how big the risks were for manufacturing problems, so no, I wouldn't have expected any eyes to pop (and Matrox was burned with their own design a little too recently to make that kind of investment, IMO). That is the whole problem with licensing: the only companies who can assess the risks well enough to be willing to spend the necessary capital will usually choose to go with their own designs.
 
I'm surprised everyone here is so pro TBDR. As time progresses I believe its advantages become less and less.

Here are the problems I see with it:
+ Both NVIDIA and ATI are convincing developers to use a Z-fill pass before doing any complex pixels (see the sketch below). For games that do this, a TBDR will have almost no advantage over a traditional rasterizer with fast early Z / hierarchical Z.
+ As geometry complexity increases, creating an efficient TBDR becomes increasingly difficult, if not impossible.
+ Pixel shaders and vertex shaders that modify depth make a TBDR more difficult.

Now, to be fair, I could list off some positives, but since everyone seems so supportive of TBDR you must already know about those, so I won't go there.
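
For anyone not familiar with the Z-fill trick in the first point, a minimal sketch of the idea in plain OpenGL might look like this; drawGeometry() and drawShaded() are hypothetical application callbacks, and a working GL context is assumed.

```cpp
// Minimal sketch of the "Z-fill first" approach on an immediate-mode renderer.
// drawGeometry() submits raw geometry; drawShaded() submits the same geometry
// with the expensive pixel shaders bound.
#include <GL/gl.h>

void renderWithDepthPrepass(void (*drawGeometry)(), void (*drawShaded)())
{
    // Pass 1: lay down depth only -- colour writes off, no costly shading.
    glEnable(GL_DEPTH_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    drawGeometry();

    // Pass 2: full shading. With depth writes off and a LEQUAL test, only the
    // nearest surface laid down in pass 1 passes the test, so hidden pixels
    // never run the pixel shader (early/hierarchical Z rejects them first).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);
    glDepthFunc(GL_LEQUAL);
    drawShaded();
}
```

With early/hierarchical Z, the second pass rejects hidden fragments before the pixel shader runs, so an IMR gets most of the overdraw savings a TBDR provides, at the cost of a second geometry pass and Z traffic that still goes over the external bus rather than staying in an on-chip tile buffer.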
 
That is the whole problem with licensing: the only companies who can assess the risks well enough to be willing to spend the necessary capital will usually choose to go with their own designs.

If I look in Intel's direction, with their upcoming low-end DX9.0 integrated chipsets as a simple example, I can only agree with the above.
 

Imagination Technologies Trading Update

...

Extensions signed with three existing partners, including new market areas

...

Alongside this broadening partnership base, the number of chips committed by partners utilising our technology has increased from 14 in November to 20 today.

...


What do you think about this?
 
That the number of cores/SoCs under development is constantly increasing for all three subdivisions (Ensigma/Metagence/PowerVR) at IMG.

I can see three different MBX variations at PowerVR's site, each of which can come with or without a VGP; simple reasoning tells me that could already account for six.

Can't be sure, though, how they really count chips after all.
 
Are we gonna use the Series 5 or a custom PVR chip in our next board?

That's for me to know and for you to find out.

I'm betting that SEGA will be using multiple custom variants of Series 5 in their upcoming arcade board. Probably 2 custom Series 5 GPUs, but possibly as many as 4 GPUs
(hey, this is the arcade sector we're talking about, so price is not quite as sensitive an issue as in the console space, although price is still important even for arcade boards).

That said, the Xbox 2 will, at the very least, rival Sega's new PowerVR-based arcade board, if not surpass it. Minimum specs for Xbox 2 would probably be 16 if not 32 pixel pipes, at least 8 vertex shaders, VS/PS 3.0+, a 500~700 MHz core, GDDR3 memory, and 1.5 to 2 billion verts/sec peak
(NV40 is at 600M, R420 will almost certainly beat NV40's vertex performance, and Xbox 2 will blow both out of the water).

So in a sense, the new Sega board will likely be a glimpse of the type of power Xbox 2 will have, depending on how many custom Series 5 chips are in that board.


NAOMI 2 did 10 million polys/sec conservatively, with lots of lighting. Probably closer to 20-40 million if we go by the standards most other companies use.

If the next Sega arcade board were announced as capable of 100~200 million polygons/sec, that would be a VERY nice improvement. Whatever performance figure is given, it too would probably be conservative.
 
Enbar said:
I'm surprised everyone here is so pro TBDR. As time progresses I believe its advantages become less and less.

Future "problems" assume the rendering pipeline will remain basically unchanged ... but things change. What is important is performance today.

If I had to implement a parallel rendering engine on a massively parallel programmable platform, sort-first rendering with tiling would be my first choice. With a sane rendering pipeline, tiling is a fine solution for tomorrow's needs (look at Sony's "raytracing" patent to see what I mean).
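
To make "sort-first rendering with tiling" a bit more concrete, a toy version of the binning step might look like the sketch below; the tile size, resolution and names are my own illustrative choices, not taken from any PowerVR or Sony design.

```cpp
// Toy illustration of the binning step behind sort-first / tiled rendering:
// each triangle's screen-space bounding box decides which per-tile lists it joins.
#include <algorithm>
#include <vector>

struct Tri { float x[3], y[3]; };   // screen-space vertex positions

const int TILE = 32;                // assumed tile size in pixels
const int W = 640, H = 480;         // assumed framebuffer size

std::vector<std::vector<int> > binTriangles(const std::vector<Tri>& tris)
{
    const int tilesX = (W + TILE - 1) / TILE;
    const int tilesY = (H + TILE - 1) / TILE;
    std::vector<std::vector<int> > bins(tilesX * tilesY);

    for (int i = 0; i < (int)tris.size(); ++i) {
        const Tri& t = tris[i];

        // Screen-space bounding box of the triangle.
        float minX = std::min(std::min(t.x[0], t.x[1]), t.x[2]);
        float maxX = std::max(std::max(t.x[0], t.x[1]), t.x[2]);
        float minY = std::min(std::min(t.y[0], t.y[1]), t.y[2]);
        float maxY = std::max(std::max(t.y[0], t.y[1]), t.y[2]);

        // Clamp the tile range to the screen.
        int tx0 = std::max(0, (int)minX / TILE);
        int ty0 = std::max(0, (int)minY / TILE);
        int tx1 = std::min(tilesX - 1, (int)maxX / TILE);
        int ty1 = std::min(tilesY - 1, (int)maxY / TILE);

        // Append the triangle index to every tile its box overlaps.
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                bins[ty * tilesX + tx].push_back(i);
    }
    return bins;   // each tile can then be rendered independently
}
```

Each tile list can then be rasterized and shaded independently against a small on-chip colour/Z buffer, which is why tiling maps so naturally onto a massively parallel programmable platform.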
 
Megadrive1988 said:
Are we gonna use the Series 5 or a custom PVR chip in our next board?

That's for me to know and for you to find out.

I'm betting that SEGA will be using multiple custom variants of Series 5 in their upcoming arcade board. Probably 2 custom Series 5 GPUs, but possibly as many as 4 GPUs
(hey, this is the arcade sector we're talking about, so price is not quite as sensitive an issue as in the console space, although price is still important even for arcade boards).

That said, the Xbox 2 will, at the very least, rival Sega's new PowerVR-based arcade board, if not surpass it. Minimum specs for Xbox 2 would probably be 16 if not 32 pixel pipes, at least 8 vertex shaders, VS/PS 3.0+, a 500~700 MHz core, GDDR3 memory, and 1.5 to 2 billion verts/sec peak
(NV40 is at 600M, R420 will almost certainly beat NV40's vertex performance, and Xbox 2 will blow both out of the water).

So in a sense, the new Sega board will likely be a glimpse of the type of power Xbox 2 will have, depending on how many custom Series 5 chips are in that board.


NAOMI 2 did 10 million polys/sec conservatively, with lots of lighting. Probably closer to 20-40 million if we go by the standards most other companies use.

If the next Sega arcade board were announced as capable of 100~200 million polygons/sec, that would be a VERY nice improvement. Whatever performance figure is given, it too would probably be conservative.

I don't see much need for a multi-chip config for Arcade just yet. Later down the road maybe, just like NAOMI 2 followed NAOMI 1.

Why are you so certain that a dual-chip S5 config (assuming load balancing between chips would be ideal) would be inferior to Xbox 2 (OK, you could cripple the arcade system with a vastly inferior CPU, but that's a different chapter)?

Naomi2 is extremely CPU-bound AFAIK, and that's why the Elan polygon throughput is lower than its true capabilities (10M polys with 6 lights).
 
MfA said:
Guden Oden said:
Um, if a company saw an NV40-beating chip a YEAR ago, even without knowing anything about the actual NV40, you don't think their eyes would have popped out of their sockets???

There would have been no chip, but a design.

With the possible exception of Matrox, any company looking to license wouldn't have the expertise to accurately assess how well it could stack up in the future, or how big the risks were for manufacturing problems, so no, I wouldn't have expected any eyes to pop (and Matrox was burned with their own design a little too recently to make that kind of investment, IMO). That is the whole problem with licensing: the only companies who can assess the risks well enough to be willing to spend the necessary capital will usually choose to go with their own designs.
No exception, since Matrox really wouldn't be looking to license; they've got a bit too much of an NIH syndrome going on there.

Anyway, they got burned by their last licensing of a PVR design, and they still have the mountains of m3Ds to prove it! :LOL:
 
I don't see much need for a multi-chip config for Arcade just yet. Later down the road maybe, just like NAOMI 2 followed NAOMI 1.

Why are you so certain that a dual-chip S5 config (assuming load balancing between chips would be ideal) would be inferior to Xbox 2 (OK, you could cripple the arcade system with a vastly inferior CPU, but that's a different chapter)?

Naomi2 is extremely CPU-bound AFAIK, and that's why the Elan polygon throughput is lower than its true capabilities (10M polys with 6 lights).

I think a dual Series 5 config would still be, while I don't like the word inferior, let's just say less powerful, because of all that is probably going into Xbox 2: large resources, large amounts of transistors, several CPU cores, be it 2, 4 or 6. The ATI VPU is likely to be several times more powerful than R420. At this point, PowerVR would be doing well to rival or slightly surpass R420 and NV40. So say NAOMI 3 has 1 or 2 CPUs and 1 or 2 Series 5s, plus 256~512 MB of memory. The best-case scenario is that they roughly match Xbox 2. But I still think ATI/MS's large investment in Xbox 2 will amount to more than what Sega/PowerVR can come up with, though not favoring Xbox 2 by leaps and bounds.

One thing is clear, though: Sony's and Nintendo's next consoles should significantly outperform the new Sega board.

Now, maybe Sega plans to have a new board out every 2 years or so, as they did with the Model series: NAOMI 3 in 2004-2005, NAOMI 4 (PowerVR Series 6) in 2006-2007. Then Sega could once again wipe the floor with home consoles, as they did in the 1980s and early-to-mid 1990s.
 
I don't know, but I am very sceptical that any new console could be as powerful as a N@omi 3 with more than 2 GPUs, especially if they have at least the same raw specs as NV40.

Arcade boards can be made to be expensive, unlike consoles. It does not matter much if Microsoft spends 2 billion (for example) making a console if the parts are basically all off the shelf or slightly modified. To stand out you need something that the others don't/won't have. And right now PVR is something that the others can't match with the same raw specs.

The same thing happened when the DC was launched. Look at how well it compared to the PS2 graphics-wise. It even had more features. That was with a ridiculous 100 megapixels/sec, on a console that cost maybe half the price to produce and had a very low development budget compared to all the new consoles.

The new consoles will be awesome, but I think that if the N@omi 3 is not as powerful by that time, they will release a new board.

I also think that N@omi 3 will most probably not have multiple GPUs. If it does have multiple GPUs, it will probably be like N@omi 2.

Anyway, if an arcade vendor really wants to, it can make an arcade board that takes at least 2 generations to match. That is what happened with the expensive Model 3 back in 1996.
 