Article in WSJ on Nalu and Ruby

ZenThought

Newcomer
http://online.wsj.com/article/0,,SB108190075599182040,00.html?mod=yahoo_hs&ru=yahoo

If you don't have access to WSJ, here is the content:

----------------------------------------------


SANTA CLARA, Calif. -- Companies in Silicon Valley often say their technology is sexy. Jen-Hsun Huang, chief executive of Nvidia Corp. here, is showing off something that really is.

It is a computer-generated mermaid named Nalu, with a cloud of golden tresses that realistically seem to reflect dappled light and flow with the water. Nalu has rosy, unusually lifelike skin, and she is displaying generous quantities of it with a flirtatious wiggle.

More than 2,000 miles away, in a suburb of Toronto, executives at rival ATI Technologies Inc. are preparing a coming-out party for Ruby, another computerized creation who also has a skin texture unusually detailed for a videogame character, along with a shock of red hair and pneumatic chest.

The two characters are unlikely soldiers in a fast-moving technology battle helping to shape the evolution of digital entertainment. Nvidia and ATI, the two leading providers of chips that control graphics on personal computers and other gadgets, developed the animated figures to demonstrate the power of the improved technology each company is unveiling this month. The realism of animation increases as images use a greater number of geometric building blocks, called polygons, to create them. Nalu is composed of 300,000 polygons; Ruby has 80,000 -- both far in excess of what most video images today have.

This is why voluptuous Nalu and Ruby are more than just a form of eye candy: Human skin and hair have been among the hardest textures to simulate convincingly, so their technology presents a breakthrough.

Nvidia, which had a prior computerized model dubbed Dawn, says its creations are realistic enough to draw offers to appear in TV commercials. "We are getting calls from Hollywood agencies," says Mark Daly, a vice president in charge of digital content. "It's pretty weird."

Nevertheless, both companies still are a long way from the industry's ultimate goal -- artificial worlds that are indistinguishable from reality. But in the hands of skilled programmers, the chips will help bring a new level of realism and emotional force in games by creating characters that are more convincing when they move, talk, laugh and cry. Over time, such chips are likely to inspire richer forms of entertainment, where story lines and character development are as important as action, that will appeal to broader audiences.

"The videogame market is now mainly targeted at young men," Mr. Huang says. Soon, he adds, "you could imagine interactive soap operas."

Nvidia and ATI, which both have sales of about $2 billion and market capitalizations topping $4 billion, are longtime rivals in the graphics-chips business. Although ATI was a graphics pioneer, Nvidia outstripped it several years ago to dominate the sector. Lately ATI has been regaining market share. Microsoft Corp. picked ATI to supply chips for its next videogame system, succeeding Nvidia, which makes chips for the current Xbox system.


Dueling demos: Nvidia's Nalu and ATI's Ruby, right, illustrate the lifelike imagery made possible by the new graphics chips.


Still, Nvidia accounted for 58% of the 23 million graphics chips sold for desktop PCs in the fourth quarter of 2003, says industry watcher Mercury Research, compared with 38% for ATI.

Mr. Huang vows to take the lead with the new chip the company is introducing today at a company-sponsored event expected to draw hundreds of gamers to San Francisco. ATI is also confident about Ruby's prospects in the graphics beauty pageant. "I don't think there is any doubt that we will win this round," says Rick Bergman, an ATI senior vice president in charge of its desktop products.

The competition between the two graphics-chips makers has caused the power of graphics chips to double every year. By comparison, Intel Corp.'s microprocessors typically double in performance every 18 months or so. Nvidia's new GeForce 6 chips, for example, have a whopping 222 million transistors -- nearly twice the number in Intel's most-powerful Pentium 4 microprocessor.

And while Intel's chips have circuitry for one electronic brain handling calculations, graphics chips have multiple processors for specialized jobs performed in parallel. Nvidia's latest chip has the equivalent of 32 specialized brains; ATI will disclose details of its new chip at its own event this month.

Such chips, built into PCs or sold on accessory graphics cards, are the source of most of the images on a PC screen. As such, they are increasingly important to the design decisions of engineers, movie studios, advertising agencies and Web developers, says Jon Peddie, an industry analyst in Tiburon, Calif. The standard microprocessor, in many instances, acts as a mere "coprocessor" to the graphics chips, he says.

When creating a game, designers define geometric models of objects and characters and then determine how they will move. They later define visual textures, such as simulated skin, cloth, wood or metal, and arrange artificial light sources that determine colors and shadows. A computing step called rendering creates the images that users see. The more powerful the graphics chip, the greater the degree of detail and range of choices a designer has at his disposal.

In animated movies, with a set number of scripted scenes, studios can throw hundreds of computers into a final rendering process that creates ultrarealistic images. Computer games typically have lacked the realism of movies because they offer nearly infinite possibilities for action and have to render scenes as users play them.

These latest Nvidia and ATI chips are narrowing the gap in image quality between games and movies. A key reason is a set of programming conventions, defined by Microsoft, that assigns a tiny piece of software to define the light and shade on each of the thousands of picture elements, or pixels, that make up a display screen.

As graphics chips become more powerful, the hardware movie studios and game makers use eventually could become the same, allowing them to swap scenes and characters. "It's going to become completely possible to have the graphics engines used for gaming also used in film rendering," says John Carmack, co-founder and technical director of id Software, which is finishing a long-awaited game called Doom III that took four years to produce.

Game makers already are moving toward film-quality images. Another high-profile game sequel called Half-Life 2 is expected to produce unusually realistic people, even with existing graphics. Valve, a closely held software company in Kirkland, Wash., has developed a system that simulates 40 muscle movements in human speech -- one of the most difficult actions to mimic. The movements will be used by protagonist Gordon Freeman, a virtual character who already is a star from the first Half-Life game.

Graphic chips also have helped games become a spectator sport, attracting onlookers who watch the action at tournaments on big display screens. "The nicer the games look, and the more realistic they look, the more appealing it is for people to watch," says Craig Levine, manager of Team 3D, a gaming team whose sponsors include Nvidia.

 
I saw Nalu a lot tonight (swimming, no smiling tho :( ), even with some close-ups of the skin near the face, but I haven't seen Ruby yet. I wonder which one is cuter and more appealing.

Fanatics of either company now have an idol to represent their favourite. :LOL:
 
Chalnoth said:
Well, at the very least, Nalu, with her hair, is technically amazing-looking....

Oooh Chalnoth, I bet you're a demon with the ladies given your smooth lines.
"Nalu, you look amazing! I mean, er, technically. With your hair."
 
I saw the WSJ article (copy in the break room). Both look very good. However, I was wondering how ATi did that with just 80K polygons while nV needed 300K polygons. I haven't seen full pics of both yet, just the ones in the articles. However, I don't see a large difference in realism - well not a 220K polygon difference. :)
 
ATI and NVIDIA are trying to achieve very different things with their demos. NVIDIA is using many different technical features and putting them all into a single character for a tech demonstration; however, there is nothing else in the demo. "Ruby" is not a tech demonstration of any one thing, but rather shows what can be achieved on this level of hardware in a more involved scene.
 
DaveBaumann said:
ATI and NVIDIA are trying to achieve very different things with their demos. NVIDIA is using many different technical features and putting them all into a single character for a tech demonstration; however, there is nothing else in the demo. "Ruby" is not a tech demonstration of any one thing, but rather shows what can be achieved on this level of hardware in a more involved scene.

anything else you want to share? :D
 
well not a 220K polygon difference.

Just look at the hair, I am pretty sure Nalu's hair is using over 100k polygons.

I am really interested in how that's implemented. I hope NV is going to share some insight into it soon enough.

Now, I think all they need is to put in 16-32 vertex shaders for some more underwater scenery, like in Nemo, to go with Nalu :D
 
Nalu's polygons are in her hair, which is frankly *amazing* if you've played with it hands on. NVidia's trying to show just how good the hair simulation, skin shaders, water, and lighting can get if you pour everything into it. Nalu could be scaled back, still look almost as good, and be in a detailed environment.


The Timbury Nvidia demo is more similar to what ATI is doing with Ruby. Less tech complexity, but more fun and artistic.
 
I have a pretty good idea how the hair is done.

They would have a single vertex buffer that describes a single hair. They would then use geometry instancing to replicate this single hair for all of them. The position of each hair on her head is most likely procedurally generated in the vertex shader, or they might have a second vertex buffer of positions (and lengths) for each hair. The hairs would be animated entirely procedurally in the vertex shader. As such there is pretty much no chance for collision detection, but the hairs are animated in such a way that it doesn't matter. Really, you'd probably have no idea if any hairs were intersecting anyway with the way her hair flows.
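The instancing idea above can be sketched in plain code. This is just a CPU-side illustration in Python, not anything from the actual demo; the function names, the hash, and the sway formula are all my own guesses at how a vertex shader might do it procedurally:

```python
import math

# One base strand: a vertex buffer of 10 points along a single hair,
# parameterized 0..1 from root to tip (stand-in for the real mesh).
BASE_STRAND = [i / 9.0 for i in range(10)]

def strand_root(instance_id):
    """Hypothetical per-instance root placement: derive a pseudo-random
    scalp position from the instance id alone, as a shader might do
    instead of reading a second vertex buffer of positions."""
    a = math.sin(instance_id * 12.9898) * 43758.5453
    frac = a - math.floor(a)          # cheap hash in [0, 1)
    return (frac, math.cos(instance_id) * 0.5)

def animate_strand(instance_id, time):
    """Animate one instanced strand entirely 'in the shader': sway each
    vertex by a sine wave whose amplitude grows toward the tip, so the
    root stays anchored to the head and no collision handling is needed."""
    root_x, root_y = strand_root(instance_id)
    phase = instance_id * 0.37        # de-synchronize the strands
    verts = []
    for t in BASE_STRAND:             # t = 0 at root, 1 at tip
        sway = math.sin(time * 2.0 + phase + t * 3.0) * 0.2 * t
        verts.append((root_x + sway, root_y - t))
    return verts

# Every strand reads the same base vertex buffer; only the instance id
# (and time) differ, which is what geometry instancing gives you cheaply.
hair = [animate_strand(i, time=1.0) for i in range(1000)]
```

The key point is that a single small vertex buffer plus a per-instance id is enough to place and animate thousands of distinct-looking strands.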
 
But, I think, more impressive is the shading/shadowing on the hair, no?

It looks as if they are using stuff similar to volumetric fog rendering for this. The hair is dense enough that the method might be adapted to it.

How about creating a shadowmap for the hair, storing the distance of the closest hair to the light? Then get the distance from each hair fragment to the corresponding point in the shadowmap, attenuate the light assuming some constant density of hair in between, and use this attenuated light to illuminate that hair fragment.
Add some slight variation in the base hair color.
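That attenuation idea can be sketched numerically. A minimal illustration in Python (the density constant and the names are my own assumptions, not from either demo): assuming constant hair density, light falls off exponentially with the thickness of hair between the shadowmap's nearest-hair depth and the fragment's depth, Beer-Lambert style:

```python
import math

def hair_shadow_attenuation(fragment_depth, shadow_map_depth, density=40.0):
    """Light attenuation for a hair fragment: the shadowmap stores the
    depth of the hair closest to the light; everything between that
    surface and this fragment is treated as hair of constant density."""
    thickness = max(0.0, fragment_depth - shadow_map_depth)
    return math.exp(-density * thickness)   # exponential falloff

def shade_hair_fragment(base_color, fragment_depth, shadow_map_depth,
                        variation=1.0):
    """Scale the base hair color by the shadow term, plus a per-strand
    variation factor for the slight color differences mentioned above."""
    a = hair_shadow_attenuation(fragment_depth, shadow_map_depth)
    return tuple(c * a * variation for c in base_color)
```

A fragment at the lit surface (depth equal to the shadowmap value) gets full light; a fragment buried deeper in the hair volume gets exponentially less.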
 
Entropy said:
Chalnoth said:
Well, at the very least, Nalu, with her hair, is technically amazing-looking....

Oooh Chalnoth, I bet you're a demon with the ladies given your smooth lines.
"Nalu, you look amazing! I mean, er, technically. With your hair."

Have you been stalking me????

:\


;)
 
DemoCoder and DaveBaumann, thanks for the clarifications on the ATi/nV demos. I haven't viewed the NV40 demo stuff wrt Nalu, just the tech stuff on the B3D front page. I have not had time to d/l anything from DC's extensive NV40 thread so I haven't had a chance to be wowed just yet. :)

The fact that so many polygons are in Nalu's hair intrigues me greatly. I didn't realize that so many just went to the coiffure. :) That vertex shader theory of Colourless sounds very plausible. FWIW, I would like to see a fully rendered scene with a Nalu-like model - like her swimming underwater near a reef, polygonal hair flowing in the water and all. (Note: If Nalu's demo does actually do something like this, please remember that I haven't seen a full running demo, just a few pics so far, so be gentle and provide a link. :) ) In this regard, I can appreciate ATi's approach with Ruby and some of their previous demos - Animusic and some of the older Island demos were cool examples of this. The true test for me is to see what effect all the new technology has on a complex scene, as opposed to a model or single entity in (somewhat empty) space.

Anyway, I'm just blabbing...
 