Imagination Technologies announces Wizard line of GPUs, focused on ray tracing.

Yes, but what does it actually mean:

Unmatched real-world ray tracing performance: Up to 300 MRPS (million rays per second), 24 billion node tests per second and 100 million dynamic triangles per second at 600 MHz

How does this compare to a GPGPU solution or a CPU one? Are we talking about an order of magnitude improvement over existing solutions? Less? More?
 
That number must be special, because NVIDIA claims "You hand OptiX Prime a list of triangles and rays and it returns the intersections at over 300 million rays per second (on a single GPU)". :D. Anyway, a hybrid ray tracing / rasterization approach is probably a good way to incorporate ray traced elements, but I suspect it will realistically be some years before most game developers make significant use of this technology (in fact, the hardware alone is probably 1-2 years away from now).
 

Well, assuming one ray per pixel, it's enough for 1920×1200 at 120 FPS. Maybe that's why both companies emphasize it.
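A quick back-of-the-envelope check of that budget, in Python (using the peak 300 MRPS figure from the announcement; sustained rates will presumably be lower):

```python
# Rough ray-budget check: does 300 MRPS cover 1920x1200 at 120 FPS
# with a single ray per pixel? (Peak figure; real workloads won't sustain it.)
width, height, fps, rays_per_pixel = 1920, 1200, 120, 1

rays_per_second = width * height * fps * rays_per_pixel
print(f"needed: {rays_per_second / 1e6:.1f} M rays/s")       # ~276.5 M rays/s
print(f"fits in 300 MRPS peak: {rays_per_second <= 300e6}")  # True
```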
 

I suppose the crucial aspect of that is which "single GPU" from NVIDIA they are talking about.
 
I imagine the number of rays is dependent on the number of light sources as well as reflections.
 
That number must be special
Presumably, that number corresponds to the phattest GPU NV can muster, at ≈225W power draw... Rogue (at least in its mobile SoC incarnation) is what, a hundredth of that...? :)

Well, assuming one ray per pixel, it's enough for 1920×1200 at 120 FPS.
Doesn't 1 RPP RT look like total and utter crap (i.e., the noisiest, most aliased mess imaginable)? You'd want a bunch of these chips working in tandem (at a lower target framerate) for some decently nice-looking imagery, I'd think. :)

How does this new device compare to Imgtech's stand-alone RT accelerator cards?
 
The new IP has higher peak numbers than the old Series2 RT hardware, at much lower power. The thing to remember with rays/frame budgets is that you really want to dish the rays out where they're needed, rather than generate a constant ray flow per pixel.

So maybe that's actually, for example, an average of 10 RPP at your target resolution and framerate if you just limit it to shadow pixels and those with certain materials.

Constant ray flow per pixel is rarely a useful metric, in my experience anyway.
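A minimal sketch of what "dish the rays out where they're needed" could look like; the pixel categories, percentages, and ray counts below are purely illustrative assumptions, not Imagination's actual scheme:

```python
# Illustrative adaptive ray budget: spend rays only on pixels that need them,
# instead of a constant ray count per pixel. Categories and counts are made up.
def frame_ray_budget(pixel_counts, rays_per_category):
    """pixel_counts / rays_per_category: dicts keyed by pixel category."""
    return sum(pixel_counts[c] * rays_per_category[c] for c in pixel_counts)

pixels_720p = 1280 * 720
pixel_counts = {
    "plain":      int(pixels_720p * 0.70),  # rasterized only, no rays
    "shadowed":   int(pixels_720p * 0.20),  # a few shadow rays each
    "reflective": int(pixels_720p * 0.10),  # more rays for glossy materials
}
rays_per_category = {"plain": 0, "shadowed": 4, "reflective": 16}

rays_per_frame = frame_ray_budget(pixel_counts, rays_per_category)
print(f"{rays_per_frame / 1e6:.2f} M rays/frame, "
      f"avg {rays_per_frame / pixels_720p:.1f} rays/pixel")
```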
 
Doesn't 1 RPP RT look like total and utter crap

You could make an argument that 1 ray per pixel is a good description of regular rasterization. ;-) But yes, in a typical full blown ray tracer for movie quality etc you'd usually use more than 1 ray per pixel to converge on a noise free image and/or use decent filtering. You can be a lot smarter than that in a hybrid system though when you're not trying to completely ray trace everything.

[Off topic but you can actually do a surprisingly large amount with even a noisy image and filtering when you know the underlying geometry]
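A toy illustration of why more rays per pixel give a cleaner result: averaging N noisy samples shrinks the error roughly as 1/sqrt(N). This is generic Monte Carlo behaviour, nothing PowerVR-specific:

```python
# Toy Monte Carlo convergence: the error of an averaged pixel value falls
# roughly as 1/sqrt(N) with N samples (rays) per pixel.
import random
import statistics

def noisy_sample(true_value=0.5, noise=0.3):
    # Stand-in for one ray's contribution to a pixel value.
    return true_value + random.uniform(-noise, noise)

def pixel_estimate(n_rays):
    return sum(noisy_sample() for _ in range(n_rays)) / n_rays

for n in (1, 4, 16, 64):
    errors = [abs(pixel_estimate(n) - 0.5) for _ in range(2000)]
    print(f"{n:3d} rays/pixel -> mean abs error {statistics.mean(errors):.3f}")
```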


How does this new device compare to Imgtech's stand-alone RT accelerator cards?

From the launch details, the plug-in cards:

R2100 = up to 50 million incoherent rays/sec
R2500 = up to 100 million incoherent rays/sec

http://www.imgtec.com/news/release/index.asp?NewsID=722

So, 300M in Wizard in a mobile power envelope represents a serious amount of horsepower.
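Purely on the quoted peak numbers (which aren't necessarily measured the same way):

```python
# Peak-number comparison: Wizard vs. the Caustic plug-in cards.
wizard_mrps = 300
for card, mrps in (("R2100", 50), ("R2500", 100)):
    print(f"Wizard vs {card}: {wizard_mrps / mrps:.0f}x the peak rays/sec")
```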
 
Nice tweet from John Carmack:

"I am very happy with the advent of the PVR Wizard ray tracing tech. RTRT HW from people with a clue!"

Looking at the R2100 board, that GPU requires 4GB of memory. Assuming that the implementation in an SoC still requires a memory-rich environment, are memory requirements going to be the determining factor in when Wizard appears in various form factors? Even allowing for the fact that this is a hybrid system, and not a pure RT renderer.

One could start jumping to conclusions: Apple being traditionally the prime IMG partner, Apple leading the way with a 64-bit CPU and its larger-than-4GB address space, and IMG's Wizard thriving in a big address space.
 
CAD professionals work on some pretty big models / data sets. The amount of memory you put in any system just comes down to what you want to do with that system. My workstation has lots more RAM in it than my XBox - but they do different things.
 
I look forward to reading up on this and also comparing it to SiliconArts RayCore IP.

Assuming the heat dissipation, power, die area, and memory capacity and bandwidth usage of this are at least somewhat comparable to a six-cluster Series6XT part without the ray tracing logic, like a GX6650, the desirability of the IP might be a function of how much the ray tracing enables better/easier lighting and effects versus 50% more conventional shader compute.
 
Imagination is trying to mess with the minds of some of its long-time fans with the last paragraph of its press release:

"Says Yasuhiro Kondo, general manager, Arcade Machine Development, SEGA Co Ltd.: “SEGA welcomes the announcement of PowerVR Ray Tracing IP from Imagination Technologies. We expect that the Wizard cores will create a great revolution in the graphics experience of the gaming market.”"

About ten years ago, Imagination had a high-end Series5 implementation developed and finished for a SEGA arcade board that was never brought to market. With Imagination's repeated mention that high-end gaming markets like consoles are a target for their ray tracing IP, they're definitely inviting speculation that SEGA may bring them back to a future arcade platform with Wizard.
 
Well, assuming one ray per pixel, it's enough for 1920×1200 at 120 FPS. Maybe that's why both companies emphasize it.
If you're aiming for a Whitted ray tracer with a single ray per pixel and few bounces, then yes. But that's hardly what ray tracing is about these days. ;)
 
The TechReport article adds some info to the rays-per-pixel discussion, from James McCombe, the founder of Caustic.

http://techreport.com/news/26178/powervr-wizard-brings-ray-tracing-to-real-time-graphics

"If we were to assume an average of a single light ray per pixel, the hardware would be capable of driving a five-megapixel display at 60Hz. Realistically, though, you'll need more rays than that. Imagination Technologies' James McCombe estimates that movies average 16-32 rays per pixel in order to get a "clean" result. He notes that this first Wizard GPU is capable of tracing an average of 7-10 rays per pixel at 720p and 30Hz or 3-5 rays/pixel at 1080p and 30Hz.

As a result, Imagination Tech isn't advocating for a complete transition to the exclusive use of ray tracing any time soon. Instead, it's pushing a hybrid approach that combines traditional rendering with ray tracing. For example, many games today use a rendering technique called deferred shading to make multi-source lighting of complex geometry more efficient. Wizard's ray-tracing capability can be deployed to take care of the final, deferred shading step. Firing rays from the light sources in a scene also offers a means of generating dynamic shadows. McCombe says incorporating ray tracing in this way can reduce complexity while producing higher-quality images."

From the above, assuming IMG refines and enhances the technique, and when the combo of memory/die area/power allows, one could see a path (!) where each iteration of the technology might allow the balance in the hybrid to shift more towards RT and away from traditional rendering.
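As a quick sanity check, McCombe's figures line up with the 300 MRPS peak number; this is just arithmetic on the numbers quoted in the article:

```python
# Sanity check of the quoted figures against the 300 MRPS peak number.
def mrays_per_second(pixels, fps, rays_per_pixel):
    return pixels * fps * rays_per_pixel / 1e6

print(mrays_per_second(5e6, 60, 1))          # 5 MP @ 60 Hz, 1 RPP  -> 300.0
print(mrays_per_second(1280 * 720, 30, 10))  # 720p @ 30 Hz, 10 RPP -> ~276.5
print(mrays_per_second(1920 * 1080, 30, 5))  # 1080p @ 30 Hz, 5 RPP -> ~311.0
```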
 
Cool.

In a desktop power envelope, this should be close to being able to hit the 16-32 rays per pixel at 1080p/60.

I find this incredibly exciting.
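For scale, a rough estimate of what that target would require relative to the announced part's peak figure (again, just arithmetic on the quoted numbers):

```python
# What 16-32 rays/pixel at 1080p/60 would take, relative to the 300 MRPS figure.
pixels_1080p = 1920 * 1080
for rpp in (16, 32):
    rays_per_second = pixels_1080p * 60 * rpp
    print(f"{rpp} RPP at 1080p/60 -> {rays_per_second / 1e9:.1f} G rays/s "
          f"(~{rays_per_second / 300e6:.0f}x the announced peak)")
```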
 
I posted a question on the IMG blog regarding the power requirements of Wizard versus the Caustic boards (and remember, the specs appear higher for Wizard than for the boards). Alex replied:

The Caustic cards were built on a 90nm TSMC process node and needed about 30W of power per core. So even in an older process and without being integrated or designed for mobile, they were orders of magnitude more efficient than GPU compute-based solutions from the desktop guys.

Obviously, the dedicated ray tracing hardware has now been integrated directly into the Rogue graphics pipeline. We've lowered the power consumption dramatically so it can easily fit inside an SoC designed for a tablet or ultrabook if built on a 28nm or lower node.


Impressive that they have the tech at the mobile (but not handset) level already. So it appears that it won't be held back by a need for a sufficiently low process node before it can be practically implemented.
 
Now all they need are licensees.

If Apple weren't developing their own GPUs, they might be interested. Sigh...
 