I don't even know what this means (and I hope it doesn't involve Tim Sweeney).
And please don't call it a software abstraction layer. Implementing a graphics API on top of GPU hardware requires a lot of software layers. Likewise, implementing it on top of CPU hardware is just that: an implementation. You're not using the same software layers as the GPU implementation and then emulating a GPU. In fact, for many graphics operations there can be a much shorter 'distance' between the application and the (CPU) hardware. There is broad agreement in the industry that forcing everything through an API is a limitation that comes with overhead. Getting more direct access to the hardware, as is already common with a CPU, will open up a new era of possibilities.
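To make that 'shorter distance' concrete, here's a minimal sketch (the buffer layout and function name are just illustrative): the application writes pixels straight into memory it owns, with no driver, command buffer, or API state machine in between.

    #include <cstdint>
    #include <vector>

    // Fill a 32-bit RGBA framebuffer with a horizontal gradient.
    // On a CPU this is just a loop over memory the application owns;
    // no software layers sit between the code and the hardware.
    void fill_gradient(std::vector<uint32_t>& fb, int width, int height) {
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                uint8_t shade = static_cast<uint8_t>((x * 255) / width);
                fb[y * width + x] = 0xFF000000u | (shade << 16) | (shade << 8) | shade;
            }
        }
    }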
> Unlimited Detail, involving a certain much derided Bruce Dell.

SVO rendering wasn't invented by Bruce Dell. There have been many SVO renderers before it.
> There's no animation either. Not a single moving object is present (not even simple rigid bodies, let alone complex skinned models).

In fact, already in its early stages, UD had animated characters (quite a feat, actually).
> completely static (it doesn't feel realistic at all)

More detailed volumetric data + high frame rates completely floor the unjustly fashionable (and untrue) eye candy + destructible environments: the kinetic depth effect.
> I'm sorry to argue with you, sebbbi: you're a very intelligent person. Still, if you have followed Mr. Dell you will agree that he hardly read anything about octrees. He discovered them himself, maybe while trying to cull/distribute points among the view frustum's subfrusta (corresponding to the viewplane quadrants) faster (perhaps the primeval rendering scheme: a bucket sort where the frusta are the buckets).

Are you saying we should congratulate him because he didn't even do his homework and reinvented the wheel? Seriously?
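For what it's worth, a minimal sketch of the 'frusta as buckets' scheme alluded to above (the camera model and names are my own guesses for illustration, not anything Euclideon has published): project each camera-space point and drop it into the bucket for the viewplane quadrant it lands in, then refine each bucket recursively.

    #include <array>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Distribute camera-space points among the four viewplane quadrants.
    // Each quadrant corresponds to one subfrustum of the view frustum,
    // so this is effectively a bucket sort with the frusta as buckets.
    std::array<std::vector<Vec3>, 4> bucket_by_quadrant(const std::vector<Vec3>& pts) {
        std::array<std::vector<Vec3>, 4> buckets;
        for (const Vec3& p : pts) {
            if (p.z <= 0.0f) continue;          // behind the eye: cull
            float sx = p.x / p.z;               // perspective projection
            float sy = p.y / p.z;
            int q = (sx >= 0.0f ? 1 : 0) | (sy >= 0.0f ? 2 : 0);
            buckets[q].push_back(p);            // recurse per bucket to refine
        }
        return buckets;
    }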
> In fact, already in its early stages, UD had animated characters (quite a feat, actually).

I believe there's a video online showing a few animated things in isolation. (Likely because it can't run fast in a landscape.)
> More detailed volumetric data + high frame rates completely floor the unjustly fashionable (and untrue) eye candy + destructible environments: the kinetic depth effect.

That's an opinion; other people will disagree. Destructible environments in Battlefield change the dynamics of the game a lot.
> Are you saying we should congratulate him because he didn't even do his homework and reinvented the wheel? Seriously?

Of course.
Let me therefore beg of thee not to trust to ye opinion of any man concerning these things, for so it is great odds but thou shalt be deceived. Much less oughtest thou to rely upon the judgment of ye multitude, for so thou shalt certainly be deceived. (I. Newton)
> I have often noticed that persons who do not quite practice the trade professionally are wont to supply more singular thoughts, more charming and more uncommon conceits, where one does not expect them... a person who was no geometer at all, and who had something on geometry printed, gave some occasion to my arithmetical quadrature, not to mention other examples. (G. W. Leibniz)

Beginning in the late '90s, the focus shifted from HSR (hidden surface removal) – the noblest part of CG – to a multitude of texturing schemes (poudre aux yeux: smoke and mirrors), and everybody complied with the SGI style of doing 3D. The result is what we have today: an ugly body (texture-mapped triangles) in pompous robes (effeminate eye candy). This is an example of the bad effects of GPUs.
> Anyway, if he wants to be taken seriously he must get a patent.

http://www.ipaustralia.com.au/applicant/euclideon-pty-ltd/patents/AU2012903094/
Is there any place where we might actually get the proper information about the algorithm?
> Is there any place where we might actually get the proper information about the algorithm?

Not that I know of. But there are bits here (literally) and there, enough for an unprejudiced programmer.
> So why have the iGPU?

Rome wasn't built in a day.
> Will it have more GFLOPS than a dedicated GPU of 2013?

That depends on which CPU and GPU you're comparing exactly.
> Nick is Mr SmartShader.

That's ATI's marketing term for their Shader Model 1.4 hardware. I'm not sure if I should feel flattered or a little insulted.
> Honestly I think you'd see more games that looked the same with CPU-based rendering. You'd reduce the pool of programmers capable of writing a good renderer even further (and it's not a deep pool now); most people would just defer to middleware. Those that didn't would probably write once and use many times.

I'm sorry but that is clearly false. Applications that don't use/need the GPU don't look/feel very alike overall, despite having much more low-level access to the CPU. There's really no reduction of the pool of programmers capable of creating something valuable. There's just more of a split between low-level, middle-level and high-level development. You don't have to know x86's encoding format to be part of innovative application development. Only a handful of compiler programmers do.
> I think you have to ask what it is you want to do with your CPU renderer that you can't do on a GPU?

It's futile to ask that. It reminds me of discussions on forums like these, back in the GeForce 3 days, about what people would do with floating-point pixel processing. John Carmack had a few ideas, but many considered that not to be worth the transistors. We now know that what Carmack had in mind was just the tip of the iceberg of what can actually be done with the technology. It's utterly unthinkable to have a GPU without floating-point pixel processing today. So don't worry too much about what the unification of CPU and GPU can result in specifically. There will be a revolutionary application for everyone.
> Oh, I think it's a lot more than that, mainly that GPUs aren't just very wide...

Yes, there's more involved than gather support. I just said that it's currently the main reason why there's still a large gap between CPUs and GPUs for graphics workloads. AVX2's gather support won't close that gap entirely, but it will make software rendering adequate for far more applications. And like I also said, AVX can be extended up to 1024-bit. There really is no reason the CPU can't become as "wide" as the GPU.
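To illustrate, this is what AVX2's gather looks like from the programmer's side; a minimal sketch (the lookup-table use case is just an example):

    #include <immintrin.h>

    // Fetch eight floats from non-contiguous addresses with one instruction.
    // Before AVX2, a software renderer had to do eight scalar loads and
    // reassemble the vector; gather keeps texel-style fetches in SIMD registers.
    __m256 lookup8(const float* table, __m256i indices) {
        return _mm256_i32gather_ps(table, indices, 4);  // scale = 4 bytes per float
    }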
> ...but also that they pipeline everything extremely effectively. There's no way in hell you're going to be able to hide memory latency effectively enough on a CPU to get near the performance of a GPU...

CPUs are doing absolutely fine at dealing with memory latency, thanks to out-of-order execution, large caches, prefetching, and 2-way SMT. Especially for something as regular as graphics, it's a non-issue. In fact it's really the GPU you should be worried about. A modest amount of memory access irregularity can cause the GPU's cache hit ratio to drop to zero, leaving it bottlenecked by bandwidth, register space, or work size. The CPU deals with increasing complexity much more gracefully.
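And the prefetching mentioned above isn't only a hardware affair; the programmer can chip in too. A small sketch (the loop and the prefetch distance are made up for illustration):

    #include <immintrin.h>
    #include <cstddef>

    // Sum a large array while prefetching a few cache lines ahead.
    // The out-of-order core already hides most latency on a regular
    // stream like this; the hint just keeps the data flowing.
    float sum_with_prefetch(const float* data, size_t n) {
        const size_t dist = 64;  // elements ahead: four cache lines of floats
        float total = 0.0f;
        for (size_t i = 0; i < n; ++i) {
            if (i + dist < n)
                _mm_prefetch(reinterpret_cast<const char*>(&data[i + dist]), _MM_HINT_T0);
            total += data[i];
        }
        return total;
    }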
> ...even if latency wasn't a problem - well, you're still only doing one stage of work at a time on a CPU, while a GPU works on many stages at once. Texture address generation, geometry setup, clipping, texture filtering, shading and so on... All parallel. Serial, on the CPU.

By that reasoning GPU manufacturers should bring back separate vertex and pixel pipelines, because then more things would run in parallel, right?
> I don't see how you could possibly get remotely similar performance at the same quality level.

I feel sorry for you.
> I'm not into arguing semantics.

The point was that, based on your wording, you probably have a wrong idea about software rendering. It's not an emulation; it's an implementation. In fact it's really hardware rendering too, since the CPU is hardware. To avoid confusion, we should call it GPU rendering and CPU rendering instead.
> Because despite what certain individuals tell you, AVX2 will not make CPUs adequately equipped to handle complex 3D rendering.

I assume you're talking about me? I never actually said that AVX2 will make the CPU adequately equipped, though. Haswell is a monumental step forward, bringing several important pieces of programmable GPU technology into the CPU cores, but it's only the beginning.
> That's if it amounts to anything.

It doesn't necessarily have to amount to anything.
> But what is this "CPU" that is worth using? Let's say NVIDIA Maxwell, the Haswell/Broadwell successor, and the Steamroller successor meet a loose definition of a heterogeneous CPU. They are CPU+GPU, but the GPU gets a lot more CPU-like (even Tesla K20 seems meant to run "software" and not "shaders"). You have to write your program three times, each time catering to the architecture to get high performance (given real-time constraints and big workloads). For added fun, one has ARM and CUDA libraries, the others have x86-64; AMD has an "HSA" ecosystem, and Intel, I don't know what they will be doing.

No, you don't have to write your program three times. You just compile it for a different ISA and let the compiler figure out the details of optimizing for the architecture. AMD is thinking of using HSA as a virtual homogeneous ISA, but that will fail miserably if they keep the CPU and GPU heterogeneous, since compilers can't deal with that and application developers don't want to deal with that. What is needed instead is cores that combine a legacy scalar CPU core and a very wide SIMD engine, both feeding off the same physically homogeneous instruction stream. Intel announced from the start that the VEX encoding is extendable to 1024-bit, and they seem to be well en route with Haswell.
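To make the 'just compile it for a different ISA' point concrete, a sketch (the function and compiler flags are chosen for illustration; any vectorizing compiler will do): one portable loop, and the target flags alone decide the vector width.

    // One source, many ISAs: the compiler vectorizes this loop with SSE,
    // AVX2 or AVX-512 depending solely on the target flags, e.g.
    //   g++ -O3 -msse2    saxpy.cpp
    //   g++ -O3 -mavx2    saxpy.cpp
    //   g++ -O3 -mavx512f saxpy.cpp
    void saxpy(float a, const float* x, float* y, int n) {
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }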