SEGA Developing MODEL 4 In Conjunction With Saarland University?

That's what I was trying to get at.

It really isn't decades into the future at all. If somebody wanted to, they could do it within the next 2 years.

If the FGN article is correct, then that somebody may just be SEGA.

Am I reading a different version of the internet than you are? Why are you continually ignoring ShiftyGeezer's posts? The tech that you/"FGN" are referring to cannot be deployed in the way you think it can be.

And the whole reason that SEGA is able to make money from the arcades is precisely down to using consumer-level, off-the-shelf tech - peanuts to make, cheap to repair. The days of bespoke hardware are long, long gone.
 
Furthermore, compare what this DRPU is doing in its (theoretical) current configuration to what a GPU of the same size manages in terms of visual output. Now imagine that raytracing monster alongside a GPU of similar size. Whatever the RT processor is outputting, the scanline GPU (really a generic vector maths core going forward, which ought to be pretty good at raytracing itself) will achieve better results, because scanline is a more efficient use of transistors.

You can't compare an FPGA with an ASIC.

The major difference with this hardware will be the replacement of the FPGAs with ASICs (application-specific integrated circuits - silicon chips like a CPU or GPU). This will enable an estimated additional 14x performance improvement, as ASICs can run much, much faster than FPGAs.
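
Just to put a hedged number on what a 14x multiplier would buy, here's a trivial back-of-envelope calc (the frame rate below is a placeholder I made up, not a figure from the thesis):

```python
# Illustrative FPGA -> ASIC scaling arithmetic. All inputs are made-up
# placeholders, NOT measurements from the DRPU thesis.
fpga_fps = 4.0        # hypothetical frame rate of the FPGA prototype
asic_speedup = 14     # the estimated ASIC-over-FPGA multiplier quoted above

asic_fps = fpga_fps * asic_speedup   # assumes perfectly linear scaling
print(f"Estimated ASIC frame rate: {asic_fps:.0f} fps")  # -> 56 fps
```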

http://www.anandtech.com/video/showdoc.aspx?i=3549&p=4

Now, if the 130nm DRPU were an ASIC with 14 times the performance, then we could do a fair comparison.
 
So you're saying that when the Saarland University PhD student wrote that thesis, in particular Chapter 8 - the DRPU ASIC, just after Chapter 7 - the DRPU FPGA, and stated in the chapter introduction...

Low-level checks like cross talk and DFT have been omitted, thus the ASIC design presented here cannot be manufactured as it is, but gives very precise performance estimates for an ASIC implementation of the DRPU architecture.
...and calculated his predictions for the DRPU ASIC, he made a complete hash of it? You might want to write him a friendly email to point out that his Chapter 8 is a complete waste of time, that he doesn't know what he's talking about, and that you can safely say the performance of an ASIC DRPU will actually be much, much better than his forecasts. :yep2:

Alternatively, you could try listening to the arguments and reading the links...
 
C'mon Shifty, TEXAN is that little voice of hope that lives in all of our Sega-colored memories... While the Saarland University team's earlier work concludes that decent realtime raytracing is still far off, we can always hope that in some hidden enclave they have had some miraculous insight that overturns their previous predictions... Since you seem to have a fair handle on the practical hardware specifications, could you post the minimum hardware requirements for realtime raytracing with enough ray bounces to approximate the look of precomputed radiance maps on DirectX 10.1 or DirectX 11 class hardware? (Yeah, probably too many factors there, but it would put a great deal of this type of speculation to rest...)
 
http://www.alastra.com/~peter/io/raytracing-vs-rasterization.html

David Kirk (ex-NVIDIA dude) vs Professor Philipp Slusallek of the University of Saarbruecken

Let the fight commence.....

Raytracing vs Rasterization
There used to be an interesting debate between Professor Philipp Slusallek of the University of Saarbruecken and chief scientist David Kirk of nVidia at GameStar.de (dead link). The original article has been taken down, but I found a slightly mangled version on the Wayback Machine and I've cleaned it up a bit....
[snip]

Ray tracing or rasterization?
GameStar: Current games are rendered via the well-known rasterization technique. Ray tracing is an old technology, but for the first time we have the power to calculate it in real time. Why is ray tracing the better (or the worse) technique to render a PC game?
Kirk: Rasterization (Painter's Algorithm or Depth Buffering) has a rendering time that is typically linear in the number of triangles that are drawn, because each polygon must be processed. Since there is specialized hardware for rasterization in modern GPUs, this time is very small per triangle, and modern GPUs can draw 600M polygons per second, or so. Also, Z or depth culling and hierarchical culling can allow the game to not even draw large numbers of polygons, making the complexity even less than linear.
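
To make Kirk's complexity point concrete, here's a toy depth-buffered rasterizer (my own sketch, nothing to do with any actual GPU): each triangle is visited exactly once, so the cost is linear in triangle count, and the z-buffer resolves visibility without ever looking at two triangles at the same time.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed edge function: which side of edge (a -> b) point p lies on."""
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

def rasterize(triangles, width, height):
    depth = [[float("inf")] * width for _ in range(height)]
    frame = [[(0, 0, 0)] * width for _ in range(height)]
    for tri in triangles:                       # one pass per triangle
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2), rgb = tri
        area = edge(x0, y0, x1, y1, x2, y2)
        if area == 0:
            continue                            # degenerate triangle
        for y in range(height):                 # real HW scans a bounding box
            for x in range(width):
                w0 = edge(x1, y1, x2, y2, x, y) / area
                w1 = edge(x2, y2, x0, y0, x, y) / area
                w2 = edge(x0, y0, x1, y1, x, y) / area
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue                    # pixel outside this triangle
                z = w0 * z0 + w1 * z1 + w2 * z2
                if z < depth[y][x]:             # depth test keeps the nearest
                    depth[y][x] = z
                    frame[y][x] = rgb
    return frame

# Two overlapping triangles: the nearer one (z = 0.3) wins where they overlap.
tris = [((1, 1, 0.5), (8, 1, 0.5), (4, 8, 0.5), (255, 0, 0)),
        ((2, 2, 0.3), (9, 2, 0.3), (5, 9, 0.3), (0, 255, 0))]
image = rasterize(tris, 10, 10)
```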

Although it is tempting to think of GPUs as only rasterization engines, modern GPUs are now highly programmable parallel floating point processors. The rasterization part of the GPU is only a small part; most of the hardware is devoted to 32-bit floating point shader processors. So, it is now possible to demonstrate real time ray tracing running on GPUs. It is not yet faster than rasterization, but often ray tracers are doing more calculation for the global illumination - shadows, reflections, etc.

I don't think of this as "ray tracing vs. rasterization". I think of it as ray tracing and rasterization. Rasterization hardware can be used to accelerate the intersection and lighting calculations that are part of ray tracing. In particular, the initial visibility calculations - what's in front - are best done with rasterization. Reflections, transparency, etc, can be done with ray tracing shader programs.
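
A structural sketch of the hybrid Kirk describes (every name and value here is a hypothetical stand-in; only the division of labour matters): rasterize primary visibility, then ray trace only the secondary effects.

```python
def rasterize_visibility(pixels):
    # Stage 1 (rasterizer): nearest surface per pixel, plus a flag saying
    # whether its material needs a secondary ray (e.g. a mirror).
    return {p: {"base": (0.5, 0.5, 0.5), "mirror": p % 2 == 0}
            for p in pixels}

def trace_reflection(pixel):
    # Stage 2 (ray tracer): follow a reflection ray from the hit point.
    return (0.2, 0.4, 0.9)          # stub: pretend the ray hit the sky

def render(pixels):
    image = {}
    for p, hit in rasterize_visibility(pixels).items():
        colour = hit["base"]
        if hit["mirror"]:           # only mirror pixels pay for ray tracing
            refl = trace_reflection(p)
            colour = tuple(0.5 * a + 0.5 * b for a, b in zip(colour, refl))
        image[p] = colour
    return image

print(render(range(4)))             # pixels 0 and 2 get the reflected tint
```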

Slusallek: Since I am not limited by company politics I will try to be a bit controversial in my responses :).

Ray tracing offers a fairly long list of advantages over rasterization. The fundamental difference is that rasterization can only look at a single triangle at a time. However, most effects require access to at least two triangles: e.g. casting a shadow from one triangle to the other, computing the reflection of one triangle off of another, or simulating the indirect illumination due to light bouncing between all triangles in the scene.
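
To illustrate the point with a toy sketch (mine, with spheres standing in for triangles to keep the intersection maths short): shading one surface point means testing the ray towards the light against *other* geometry, which a one-primitive-at-a-time rasterizer cannot do directly.

```python
import math

def hit_sphere(center, radius, origin, direction):
    """Nearest positive ray parameter t, or None. Direction is unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None

def blocked(scene, origin, direction, max_t):
    """Shadow test: does ANY object sit between the point and the light?"""
    for center, radius in scene:
        t = hit_sphere(center, radius, origin, direction)
        if t is not None and t < max_t:
            return True
    return False

scene = [((0.0, 2.0, 5.0), 1.0),        # occluder hanging above the ground
         ((0.0, -101.0, 5.0), 100.0)]   # big "ground" sphere being shaded

point = (0.0, -1.0, 5.0)                # ground point beneath the occluder
light = (0.0, 10.0, 5.0)
to_light = [l - p for l, p in zip(light, point)]
dist = math.sqrt(sum(v * v for v in to_light))
dir_n = [v / dist for v in to_light]

print("in shadow" if blocked(scene, point, dir_n, dist) else "lit")  # in shadow
```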

Rasterization must do various tricks to even approximate these effects, e.g. using reflection maps instead of real reflections. For changing environments these maps must be re-computed for every frame using costly additional rendering passes. But even worse, they are simply incorrect for almost all geometry, in particular for close-by or curved objects. Try rendering a car that reflects the street correctly -- games don't, because they can't.

Another big advantage of ray tracing is the ability to render huge data sets very efficiently. Recently we implemented a very simple addition to ray tracing that allows for rendering a Boeing 777 model consisting of ~350 million polygons (roughly 30 GB on disk) at 2-3 frames per second on a single dual Opteron system with just 3-4 GB of memory. Since it uses ray tracing you can automatically also render it with shadows, reflections, and complex shading even in such a large model.
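
The reason this scales is the spatial hierarchy: with a kd-tree or BVH, each ray visits roughly O(log N) nodes instead of testing all N triangles. Idealised arithmetic for the Boeing figure quoted above:

```python
import math

# Idealised: a perfectly balanced hierarchy gives ~log2(N) steps per ray.
n_triangles = 350_000_000
per_ray = math.log2(n_triangles)
print(f"~{per_ray:.0f} hierarchy steps per ray vs {n_triangles:,} brute-force tests")
```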

You see, ray tracing is a fundamentally new way of doing interactive graphics that opens up many new opportunities for doing things that were impossible before. Many researchers are picking up realtime ray tracing now that we have demonstrated it running with realtime performance.

Kirk: You missed my point, perhaps intentionally. Rasterization-specific hardware is now <5% of GPU core area. Most of the silicon is devoted to instruction processing, memory access, and floating point computation. Given that a GeForce 6800 has 10-20x the floating point of the Opteron system you describe, you are a poor programmer if you cannot program it to run a ray tracer at least twice as fast on a GPU as on a CPU.

There are no barriers to writing a ray tracer on a GPU, except perhaps in your mind. The triangle database can be kept in the GPU memory buffers as texture information (textures are simply structured arrays). Multiple triangles can be accessed through longer shader programs. Although current GPU memory is limited to 256MB-512MB, the root of the geometry hierarchy can be kept resident, and the detail (leaf nodes) kept in the system memory and disk. In your example of ray tracing 30GB of triangle data, you are clearly using hierarchy or instancing to create a 350M polygon database, since in your 2-3 seconds you do not have time to read that volume of data from disk.
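
A sketch of the "textures are simply structured arrays" remark: a triangle database packed into one flat float buffer and fetched by index, the way a shader would sample it. The layout and names are my own assumptions, not anything from a real GPU API.

```python
from array import array

STRIDE = 9  # 3 vertices x (x, y, z) per triangle

def pack(triangles):
    flat = array("f")
    for tri in triangles:
        for vertex in tri:
            flat.extend(vertex)
    return flat

def fetch(flat, i):
    """Fetch triangle i from the flat buffer, shader-style."""
    base = i * STRIDE
    return [tuple(flat[base + 3 * v: base + 3 * v + 3]) for v in range(3)]

db = pack([((0, 0, 0), (1, 0, 0), (0, 1, 0)),
           ((0, 0, 1), (1, 0, 1), (0, 1, 1))])
print(fetch(db, 1))  # -> the second triangle's three vertices
```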

By the way, ray tracing is not a new idea. Turner Whitted's original ray tracing research paper was written in 1980. Most of the algorithmic innovation in the technique happened in the late 80s and early 90s. The most interesting recent advance is path tracing, which casts many more rays to get a more global illumination (light inter-reflection) result. Several universities have written path tracers for GPUs that run on extremely large databases.
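
The core idea behind path tracing, in one loop: estimate a pixel's radiance by averaging many random light paths. A toy one-bounce estimator with a made-up scene model (my sketch, not any real renderer) - more rays, less noise:

```python
import random

def sample_path(albedo=0.5, sky=1.0):
    # One random path: the bounce either escapes to the sky (carrying
    # light back) or is absorbed. Real tracers recurse over many bounces.
    return albedo * sky if random.random() < 0.5 else 0.0

def radiance(n_samples):
    return sum(sample_path() for _ in range(n_samples)) / n_samples

for n in (4, 64, 1024):
    print(n, radiance(n))   # converges towards the true value of 0.25
```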
 
http://www.pcper.com/article.php?aid=546

An Interview with Crytek CEO, Cevat Yerli

PC Perspective: How much have you experimented with ray tracing in the past? Any interesting background on it?

Cevat Yerli, Crytek: In our company, most of our developers do have experience with ray tracing, more as background knowledge though. We have not investigated enough or made a decision about ray tracing being an option, part or key to our strategy, so at this stage it's not a serious option.

PCPER: Current ray tracing theory seems to indicate that as geometry increases, ray tracing is a more and more attractive option for a rendering engine. Do you see that as the case or has rasterization improved with development to subdue that advantage?

Yerli: So far I haven’t seen a compelling example for using pure classical ray tracing. Part of the problem is that the theoretical argument is derived from looking at the performance of static geometry under certain assumptions for what sort of lighting environment and material types you have in the scene. These examples often compare a ray tracer using a sophisticated acceleration structure for storing static polygon data against a trivial implementation of a rasterization engine which draws every polygon in the scene, which produces an unfair advantage to the ray tracer.
 
There's really not much point trying to debate the pros and cons of raytracing here. For one, the discussion has already been had, and for another, those on the pro-raytracing side appear incapable of providing any technical arguments in their favour in this thread.
 
Sorry to drive things further into OT... I think that my above request for a concrete hardware requirement for RT to equal DX10.1 radiance maps should end many of these debates, as it will properly frame how far we are from anything remotely practical... I'd say that until it takes less than 10x the RT hardware to get the look of a precomputed rasterizer, the debate is over... The only way that this general condition would be side-stepped would be if no further screen resolution increases were allowed. Then you could ferret out the Nyquist limits for the required textures and render targets. Once those values had been exceeded, the remaining transistors could be used for RT regardless of efficiency, because further rasterization would be overkill... The problem is, how many render-target layers is too many? (I don't suspect that even Hollywood has an answer there.) Then we return to reality and see that Panasonic has a 48 degree parallax auto-stereoscopic HDTV and there will probably be even more resolution demands as this type of display becomes mainstream... So we would have to calculate the Nyquist limits for 1) the largest practical screen, 2) multiplied by the largest practical Nyquist rate for 3D parallax (100 degrees? 300?), and 3) the texture resolution per triangle on said display... Then we would have the end of rasterization... quite a while off, all things considered... The only reason RT would appear before this scenario would be that it lends something to a specific art style... even then it would probably be a hybrid RT approach...
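
For scale, here's the raw sample arithmetic for the 48-view case (per-view resolution and refresh rate are my assumptions, not Panasonic specs):

```python
# Back-of-envelope throughput for a 48-view autostereoscopic display,
# assuming 1080p per view at 60 Hz.
views, width, height, hz = 48, 1920, 1080, 60
samples_per_second = views * width * height * hz
print(f"{samples_per_second / 1e9:.1f}G shaded samples/s")  # ~6.0G/s
```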
 
Arcades will be good business moves soon, with standardized input devices and the ability to change or update content remotely from a server, which means there will be no need to physically touch the hardware to make it play a new game. It will also be possible to display all sorts of advertisements updated in real time too.

Most arcades will be advertisement-driven; the games will be made to promote other products and will be very short, and it will be easy to display high-quality graphics because the games are only meant to be played for a short period of time in a very scripted fashion, so a lot can be done without running into development bottlenecks related to complex technologies.

Heck, they won't even really be "arcades", but interactive displays with great computational power, or streaming content only, a la Onlive.
 
Arcades will be good business moves soon, with standardized input devices and the ability to change or update content remotely from a server, which means there will be no need to physically touch the hardware to make it play a new game. It will also be possible to display all sorts of advertisements updated in real time too.

Most arcades will be advertisement-driven; the games will be made to promote other products and will be very short, and it will be easy to display high-quality graphics because the games are only meant to be played for a short period of time in a very scripted fashion, so a lot can be done without running into development bottlenecks related to complex technologies.

Heck, they won't even really be "arcades", but interactive displays with great computational power, or streaming content only, a la Onlive.

Ether_Snake where did you get your information from?

Are you giving opinion or is this Onlive type of system for the arcades actually in development?
 
Panasonic has a 48 degree parallax auto-stereoscopic HDTV and there will probably be even more resolution demands as this type of display becomes mainstream...

I doubt anything other than 60" 1080p will ever become mainstream in the near future. Most people with 1080p sets now sit too far away to actually get the full benefit of it, and that's in the small living rooms that the majority have. As for 3D panels?
 
That actually gives arcades a possible second chance. Most people aren't going to be in a position to (or want to) replace their new HD sets with newer 3D sets, but the arcades could go 3D and attract a lot of attention. Still, if games cost as much to play as they do now, I still don't see arcades becoming popular again.
 
This example pic works well. I had to zoom the browser out to 75% to get the stereoscopic effect to work. The requirement of glasses could be detrimental though. To onlookers I guess it looks like two superimposed images, right? If the 3D effect isn't obvious to people passing by, that'll lose the effect. Word of mouth may attract more interest, but there won't be much of a spectator aspect, which I thought was one of the USPs of the arcades versus consoles.

What hardware does it run on?
 
Well I figured PS3-based ;). Just didn't know if there was a multi-PS3 combo thing or overclocking or something going on. They're rendering MGSO with twice the number of screens to render. Either the framerate will be atrocious, or they have networked PS3 rendering or some such. One for each display makes sense and echoes the 3D and super-HD demos already seen.
 
HDTV vs 3D HDTV

I doubt anything other than 60" 1080p will ever become mainstream in the near future. Most people with 1080p sets now sit too far away to actually get the full benefit of it, and that's in the small living rooms that the majority have. As for 3D panels?

I even doubt 60" 1080p as a consumer standard. The improvement over SDTV is hard to justify for many Joe Consumers... But 3D is definitely a perceptible improvement... The problem is that the glasses are almost universally reviled by anyone but stereoscopic enthusiasts. Panasonic's latest display (sorry to keep using this example) seems to have overcome most of the viewing-comfort issues associated with auto-stereoscopic displays (such as parallax barrier or integral imaging), but at the cost of having to render 48 views... (some in the 3D field think anything over 6 views should be sufficient...) I have no doubt that this kind of display will become commoditized at some juncture. It would be great if arcades were used as the first wave of this commoditization process, but I suspect that's a pipe-dream... 3D at this point seems as inevitable as the move to color (to me); consumers will definitely perceive the upgrade value. The only problem is that the consumer electronics industry will likely be afraid of this move due to the low HDTV adoption rates...

Hey if we are stuck at 1080p for a long long time maybe we'll get some kind of Ray-Tracing hardware as a novelty use for the "excess" :LOL: GPU transistors...
 
Well I figured PS3-based ;). Just didn't know if there was a multi-PS3 combo thing or overclocking or something going on. They're rendering MGSO with twice the number of screens to render. Either the framerate will be atrocious, or they have networked PS3 rendering or some such. One for each display makes sense and echoes the 3D and super-HD demos already seen.

There is a "multi-PS3 combo thing"; there were some articles about Gran Turismo running at 120 fps a while back.
 
Ether_Snake where did you get your information from?

Are you giving opinion or is this Onlive type of system for the arcades actually in development?


Oh my, I just re-read my post, and it makes it sound like I'm talking about a specific upcoming tech :LOL: Sorry about that; it is my presumption that an Onlive-type of service is perfectly suited to arcade-style gaming, since the key problem with Onlive is online play over great distances with other players, something not necessary for arcade games, and the key problems with arcades are the associated hardware, maintenance, and transportation costs.

What I meant was that making game-specific hardware is going to be 100% unjustifiable in the near future thanks to such technology, so "arcades" can become viable again, since an arcade "booth" could be updated remotely to play different content without having to send someone out to replace or modify the hardware.

Motion controls also help in making such "booths" highly versatile in what content they can offer, even outside gaming.

But I don't see these as arcades really; I've just had this idea that soon we will see "interactive displays" that can be anything you want them to be, as content is server-driven instead of hardware-driven. That makes them very attractive from a return-on-investment perspective.
 