NVIDIA's Acquisition of MediaQ Discussion

Hey everyone,

I thought I'd discuss the implications of this deal here, and I'm sure many of you will want to do the same.

First of all, let me remind you that a PDA chip *is* in the works at nVidia ( and no, that isn't the NV33 AFAIK - or it might be, who knows, hehe ) - it's fairly obvious considering this acquisition, but I posted it at GPU:RW before that press release ;)

So, obviously, nVidia needs their low-power technology. But... Is the need for that low-power technology limited to PDAs and other devices?

It's fairly obvious nVidia is limited by heat in the NV35 ( and the NV30, since they needed Flow FX ) - and heat is very closely related to power, as you hopefully all know...

So, let's look at this in an IP perspective: what patents does MediaQ currently own?
1. Method and apparatus to power up an integrated device from a low power state: http://patft.uspto.gov/netacgi/nph-...&co1=AND&d=ptxt&s1=MediaQ&OS=MediaQ&RS=MediaQ
2. Graphics engine FIFO interface architecture: http://patft.uspto.gov/netacgi/nph-...&co1=AND&d=ptxt&s1=MediaQ&OS=MediaQ&RS=MediaQ
3. Parsing graphics data structure into command and data queues: http://patft.uspto.gov/netacgi/nph-...&co1=AND&d=ptxt&s1=MediaQ&OS=MediaQ&RS=MediaQ
4. Programmable and flexible power management unit: http://patft.uspto.gov/netacgi/nph-...&co1=AND&d=ptxt&s1=MediaQ&OS=MediaQ&RS=MediaQ

Okay, so what do we see here?
2 patents related to general GPU stuff, things nVidia doesn't need *at all*.
2 patents related to power consumption, not limited to GPUs.

I believe the most interesting of these patents is the last one. Here's the abstract:
A programmable Power Management Unit (PMU) is provided. The Power Management Unit (PMU) supports a number of different power states, namely a normal power state, a software-controlled sleep power state, a hardware-controlled sleep power state, and two register programmable power states. In the normal power state, all circuits in the integrated circuit (e.g., graphics/display controller) are enabled. In the software-controlled sleep power state, all circuits in the integrated circuit are disabled except for frame buffer memory refresh logic and part of the bus interface. In the hardware-controlled sleep power state, all circuits in the integrated circuit are disabled except for the memory interface logic. In the two register programmable power states, circuits can be selectively powered up or down as desired in a single power sequencing. Moreover, under the present invention, the interval between circuits that are being disabled or enabled in a power sequencing is also programmable.

What I consider particularly interesting are the register-programmable power states: this could actually be used in a much more general way, to disable circuits in the GPU which aren't currently in use.
In current architectures, the use is obvious: if a part of the pipeline ( VS, PS or Triangle Setup ) is stalled, do something like disabling half of its units. Of course, that's the brute-force approach, and smarter techniques would work, well, better.
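
To make that concrete, here's a rough sketch of how such a register-programmable PMU could be modelled. Purely illustrative Python: only the five power states come from the abstract above; the block names and the API are my own invention.

```python
# Toy model of a programmable Power Management Unit (PMU).
# Only the five power states come from the patent abstract; the block
# names and this API are invented for illustration.

BLOCKS = {"vertex_shader", "pixel_shader", "triangle_setup",
          "memory_interface", "refresh_logic", "bus_interface"}

class PMU:
    def __init__(self):
        # The two programmable states are just bitmask-like "registers":
        # sets of blocks each state keeps powered.
        self.prog = {"prog_1": set(BLOCKS), "prog_2": set(BLOCKS)}
        self.enabled = set(BLOCKS)

    def program(self, state, keep_powered):
        """Write which blocks a programmable state leaves enabled."""
        self.prog[state] = set(keep_powered)

    def enter(self, state):
        if state == "normal":            # everything on
            self.enabled = set(BLOCKS)
        elif state == "sw_sleep":        # only refresh + part of bus i/f
            self.enabled = {"refresh_logic", "bus_interface"}
        elif state == "hw_sleep":        # only the memory interface
            self.enabled = {"memory_interface"}
        else:                            # register-programmable states
            self.enabled = set(self.prog[state])

# E.g. a driver that notices the vertex shader is stalled could program
# a state that powers it down while the rest of the chip keeps running:
pmu = PMU()
pmu.program("prog_1", BLOCKS - {"vertex_shader"})
pmu.enter("prog_1")
print(sorted(pmu.enabled))
```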

In the future, however, we're going to unite Pixel Shader, Vertex Shader, and so on. So, will this remain useful? Absolutely!

Moving to instruction-level parallelism does not prevent you from having unused circuits. For efficiency reasons, I believe we're also unlikely to be able to run more than 3 or 4 programs at once on the GPU ( VS, PS and PPP programs, mostly, I suppose ) - so, good luck always keeping ALL units busy!

And there are probably other uses I'm failing to see, too. Which is obviously why feedback is appreciated! :)

Also, the first patent I linked above makes it possible to reduce/increase clock speed. That might be used to automatically increase/decrease the clock based on a goal FPS value. Such an implementation might feel strange in practice, so it might be good to be able to deactivate it in the driver panel. But sometimes, I believe it could be quite a nifty feature, particularly for old games I guess...
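
A minimal sketch of what such an FPS-targeted clock governor might look like - entirely hypothetical, just a proportional controller with some headroom; a real driver would also want hysteresis so the clock doesn't oscillate, plus the off switch mentioned above:

```python
# Toy FPS-targeted clock governor: scale the GPU clock toward the goal
# frame rate. All numbers are hypothetical, and this assumes rendering
# is fill-limited, i.e. FPS scales roughly linearly with the clock.

MIN_MHZ, MAX_MHZ = 100, 500

def next_clock(current_mhz, measured_fps, target_fps):
    if measured_fps <= 0:
        return MAX_MHZ
    wanted = current_mhz * (target_fps / measured_fps)
    wanted *= 1.10  # ~10% headroom so small hiccups don't drop frames
    return int(min(MAX_MHZ, max(MIN_MHZ, wanted)))

# An old game running at 200 FPS on a 400MHz chip, with a 60 FPS goal:
print(next_clock(400, 200.0, 60.0))  # -> 132 MHz, big power savings
```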

Obviously, MediaQ is also very important for PDAs and other small devices, and that's the primary purpose of the acquisition. They've got quite a bit of very nifty technology IMO, and I must admit that in the long term, I think nVidia made a worthwhile acquisition here. But then, who am I to say that - an analyst? :oops: :LOL:


Feedback, comments, flames, whatever?


Uttar
 
Just my generic, random blatherings on the subject.

It seems apparent to me that nVidia just didn't have the engineering expertise (and/or, couldn't hire it directly), to create power-friendly designs.

This is not a put-down for nVidia. You can't expect to excel at everything. (It can be very persuasively argued that ATI didn't have the engineering talent to create uber-competitive high-performance chips until the acquisition of ArtX...)

nVidia sees this talent as a key driver for growing their business. That at least seems to be the general industry thinking... as both ATI and nVidia have been talking about such markets for a while now.

The question of whether or not it's good for the long run is tough. Certainly, the technology and immediate customer base is a good thing. However, as with most acquisitions, the trick is not in just selecting the right technology, it's how well and how quickly the two companies integrate their staff to leverage their talents off of one another.
 
I've dabbled in PDAs recently (hint: avoid the Dell Axim like the plague), and ATI has a very capable chip in Toshiba's high-end designs (I don't know what chip is inside Asus' A620, but that one has incredible graphics benchmarks). The MediaQ processor is also interesting. All those chips are 2D-only, but judging from recent software developments (both games and serious apps, like math graphing software), the need for 3D in PDAs is becoming apparent very quickly. The XScale processor doesn't have much FPU horsepower, so the key is obviously to include decent 3D functionality in a separate graphics unit in future PDAs. What power (as in 3D power, not power consumption) would be required?

1) The baseline for PocketPC 2002 and 2003 (PPC2K2 is extremely crappy software, BTW; I haven't tried 2K3 yet) is a 240x320 screen. That's 76,800 pixels.
2) Due to the extremely high color persistence (remanence?) of the LCD screen, anything over 30FPS is wasted, but let's assume a constant 60FPS for bragging rights. That's 4.6MP/s for an overdraw of 1 (tiler).
3) Make that an immediate-mode renderer without fillrate-saving features (to save gate count), but targeted at an overdraw of 5 (absolutely huge considering the type of games that will be run). That's 23MP/s... In other words, a chip with 1 pipeline running at 23MHz could potentially be enough, fillrate-wise, for an app with an average overdraw of 5 at 60FPS...
4) For bandwidth, the PPC LCD screen is 65K colors, i.e. 16-bit... so bandwidth probably wouldn't be an issue with decent RAM. The framebuffer would weigh only 150K - would that make an embedded framebuffer possible? (The numbers are sanity-checked in the sketch below.)
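
A quick sanity check of those numbers (the 60FPS, overdraw-of-5 and 16-bit figures are the assumptions from the list above):

```python
# Sanity check of the fillrate / framebuffer arithmetic above.
W, H = 240, 320                  # PocketPC 2002/2003 baseline screen
PIXELS = W * H                   # 76,800 pixels
FPS = 60                         # generous; the LCD barely shows >30

tiler_fill = PIXELS * FPS                     # overdraw of 1
imr_fill = tiler_fill * 5                     # IMR at overdraw of 5
print(f"tiler: {tiler_fill / 1e6:.1f} MP/s")  # -> 4.6 MP/s
print(f"IMR@5: {imr_fill / 1e6:.1f} MP/s")    # -> 23.0 MP/s

fb_bytes = PIXELS * 2            # 16-bit (65K color) framebuffer
print(f"framebuffer: {fb_bytes / 1024:.0f} KB")  # -> 150 KB
```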
 
Joe: Very true... nVidia always says "Notebooks are an exciting segment, because you've got power consumption playing into it too." - yet every time they release a new notebook chip, it takes even more juice!
Clearly, that MediaQ IP also shows they've got the personnel - they created that IP in barely two or three years ( the company was founded in 1997, and most of that IP is from before 2000 ) - so it's not like they couldn't create new IP, either. It's really mostly expertise they acquired, IMO :)

Corwin: Interesting calculations. Although if a company wanted to be really competitive, it'd have to put in at least dual texturing, so you'd need a 1x2; and considering you may want 2x SSAA because of the low resolution, an effective overdraw of 10 would be required.
As for how much the framebuffer would weigh... well, that's if you've got no Z-buffer. Of course, you can do it without a Z-buffer - but having one is obviously better, and that'd bring the cost to 300K.
While that could still be included on-chip, what if we want antialiasing? It'd immediately double! 600K is already much more serious...
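
The same back-of-the-envelope math, extended (assuming a 16-bit Z-buffer and 2x SSAA, as above):

```python
# On-chip memory budget once a Z-buffer and 2x supersampling are added.
PIXELS = 240 * 320
color = PIXELS * 2               # 16-bit color: ~150 KB
z = PIXELS * 2                   # 16-bit Z:     ~150 KB
base = color + z                 # -> ~300 KB
ssaa2x = base * 2                # 2x SSAA doubles both: -> ~600 KB
print(f"{base / 1024:.0f} KB without AA, "
      f"{ssaa2x / 1024:.0f} KB with 2x SSAA")
```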

Oh, and where are the textures? Do you send them over every time you need them? Icky...

Then again, the first generations of 3D GPUs for PDAs might not have AA or dual texturing if we aren't too lucky ( *sigh* ), so we might get away with putting the framebuffer on-chip. In the long term, though, I just don't see that as a solution.

Thanks for the feedback! :)


Uttar
 
Uttar said:
Then again, the first generations of 3D GPUs for PDAs might not have AA or dual texturing if we aren't too lucky ( *sigh* ), so we might get away with putting the framebuffer on-chip. In the long term, though, I just don't see that as a solution.

No AA :oops:
No Dual Texturing :oops:

;)

K-
 
Kristof said:
Uttar said:
Then again, the first generations of 3D GPUs for PDAs might not have AA or dual texturing if we aren't too lucky ( *sigh* ), so we might get away with putting the framebuffer on-chip. In the long term, though, I just don't see that as a solution.

No AA :oops:
No Dual Texturing :oops:

;)

K-
Well, as long as my Palm can do 1600x1200 with 16xAF I think I can live without AA for a while... ;)
 
Kristof said:
Uttar said:
Then again, the first generations of 3D GPUs for PDAs might not have AA or dual texturing if we aren't too lucky ( *sigh* ), so we might get away with putting the framebuffer on-chip. In the long term, though, I just don't see that as a solution.

No AA :oops:
No Dual Texturing :oops:

;)

K-

Oh, but you well know you guys are geniuses and all the other GPU manufacturers are morons who'll make gigantic mistakes and try to introduce quads for their PDA solutions, right? Right?

So, well, here's my speculative ( read: joke ) spec of the upcoming NV???:
- Supports 8-bit color at half-speed!
- Capable of running FP32 internally ( if they feel like it )
- Passion for Excellence ( how did this thing get here anyway? )
- Amazing 'no pipelines, no output!' technology
- Supports high-speed Vertex Shading when done on companion chips ( read: The CPU )
- Takes 0.01W, and getting twice that in Solar Energy - the first PDA solution which gives you the power to create!
- Originally released in an amazing 1MHz & 5kg form, to be put in silicon "Later ( TM )"

Personally, I'm betting on something fully DX8 or DX9 compliant for the first nVidia PDA GPU. Something already close to their current standards.
So, the question is: is it based on the NV2x, the NV3x or the NV4x?
I guess that all depends on when it'll be available...


Uttar
 
Joe DeFuria said:
It seems apparent to me that nVidia just didn't have the engineering expertise (and/or, couldn't hire it directly), to create power-friendly designs.

From everything I've been able to read thus far on the topic, this is the main reason for the purchase. They proved they weren't doing a very good job on cooling with their most recent deliverable, hence the dustbuster solution. Must be one hell of a cooling or heat dissipation solution to pay 70 million for it. :p On the other hand, media display on cell phones isn't a business opportunity they want to miss out on. Selling graphics cards? Good business. Selling the technology for graphical cell phones? Potentially bigger business. Just about EVERYONE has a cell phone.
 
I'd say the deal was just as much about the contacts the company has as the engineering/technology they would gain.
 
DaveBaumann said:
I'd say the deal was just as much about the contacts the company has as the engineering/technology they would gain.

Agreed.

The problem is, though, that the contacts move on quickly if they don't have products the customers want. In other words, this immediately gets them "in the door", which is a good business move... but that door can shut rather quickly if they don't leverage it soon.

(See the 3dfx+STB merger: huge amounts of STB contacts in the OEM industry couldn't save the merger, due to the lack of a product that was in demand.)
 
Personally, I'm betting on something fully DX8 or DX9 compliant for the first nVidia PDA GPU. Something already close to their current standards.

Their current standards have included MT and AA for years now, though.

Oh, but you well know you guys are geniuses and all the other GPU manufacturers are morons who'll make gigantic mistakes and try to introduce quads for their PDA solutions, right? Right?

Not necessarily. Yet they do have extensive experience with that kind of embedded device already, and with a high-end, full-blown dx9.0 part rolling out somewhere before the end of this year, they're able to sustain their lead in that market if they decide on aggressive future development cycles. MBX is based on Series3 and 4 - care to guesstimate what the next one might look like?
 
Personally, I'm betting on something fully DX8 or DX9 compliant for the first nVidia PDA GPU. Something already close to their current standards.

I may be wrong, but that sounds way too ambitious... A shader-enabled design would raise the gate count dramatically. One of the problems plaguing PDAs (especially PocketPCs) is their very short battery life, "power-friendly" chips or not... IMHO, a DX6 or DX7 design (perhaps with AA thrown in) that wouldn't drain the battery too much would already be quite an engineering feat...
 
Uttar said:
Personally, I'm betting on something fully DX8 or DX9 compliant for the first nVidia PDA GPU. Something already close to their current standards.
So, the question is: is it based on the NV2x, the NV3x or the NV4x?
I guess that all depends on when it'll be available...
You forgot NV10 ;)

Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??
 
OpenGL guy said:
Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??

You mean 8-bit per channel, or plain 8-bit? Most PDA LCDs on the market these days are RGB565 or even RGB666, so 16-bit or 18-bit color.

Actually, the bulk of the so-called fast 17" flat panels for desktop usage, with 16ms rise/fall times, are also 18-bit color (RGB666), so it can't all be that bad...

K-
 
OpenGL guy said:
Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??

I think you just summarized why most of what I've heard about ATI's PDA GPU is negative ;) Joking - I know you aren't in a part of ATI anywhere near that stuff, so I can't blame you for that quote, eh.

The problem with nVidia putting the NV3x in such a market, though, IMO, is that even if it's very flexible, they'll never manage to get their FP32 units below 3M transistors. And if they do, well, I refuse to even imagine the performance.
Which makes either the NV20 ( or NV10, although that'd surprise me - heck, even the MBX has optional VS capabilities, so they'd look really lame introducing it later than the MBX... ) or the NV40 possible.
So they might actually continue MediaQ's "traditional" products until H1 2004, when all of their NV4x derivatives are coming out ( NV41, NV42, NV43 and maybe NV45 ) - although we'll see...


Uttar
 
Kristof said:
OpenGL guy said:
Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??

You mean 8-bit per channel, or plain 8-bit? Most PDA LCDs on the market these days are RGB565 or even RGB666, so 16-bit or 18-bit color.
I meant 8-bit as in 256 colors. 16-bit or 18-bit color really seems like overkill to me for something as small as a PDA.
 
Uttar said:
OpenGL guy said:
Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??
I think you just summarized why most of what I've heard about ATI's PDA GPU is negative ;) Joking - I know you aren't in a part of ATI anywhere near that stuff, so I can't blame you for that quote, eh.
And you just summarized why I don't post much. Thanks.
 
OpenGL guy said:
Kristof said:
OpenGL guy said:
Seriously though: What PDA is going to benefit from shader capabilities at this point in time? Do any even have more than 8-bit color displays??

You mean 8-bit per channel, or plain 8-bit? Most PDA LCDs on the market these days are RGB565 or even RGB666, so 16-bit or 18-bit color.
I meant 8-bit as in 256 colors. 16-bit or 18-bit color really seems like overkill to me for something as small as a PDA.

Budget models with a color display support 4096 colors AFAIK... it's a marketing checkbox thing... and once you start doing 3D graphics, it makes sense to have 666 or 565 formats.
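
For reference, here's what packing a color into those formats looks like - a sketch for RGB565; RGB666 works the same way, just with 6-bit fields:

```python
# Packing/unpacking RGB565: 5 bits red, 6 green, 5 blue = 16 bits total.
def pack565(r, g, b):            # r, g, b in 0..255
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack565(p):
    r, g, b = (p >> 11) & 0x1F, (p >> 5) & 0x3F, p & 0x1F
    # Replicate the top bits into the bottom to re-expand to 0..255.
    return (r << 3 | r >> 2, g << 2 | g >> 4, b << 3 | b >> 2)

print(hex(pack565(255, 128, 0)))        # -> 0xfc00
print(unpack565(pack565(255, 128, 0)))  # -> (255, 130, 0)
```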

K-
 
Kristof said:
Budget models with a color display AFAIK support 4096 colors... it's a marketing checkbox thing... and once you start doing 3D Graphics it makes sense to have 666 or 565 formats.
Yeah, I can see how it'd be useful for 3D. Anyway, I've never seen a PDA that made me want to run out and buy one. Although someone here at work has a Pocket PC type thing that takes both CompactFlash and SD media: you can hold a lot of mp3s with all that storage :) but the unit's a bit bulky for use as an mp3 player while walking/hiking/running.
 