Is the writing on the wall? ATI and NV doomed!

Poll: What is going to happen to ATI and NV over the next 5-10 years?

  • No, they will figure out a way to survive!
  • They will probably merge with larger companies like AMD and Intel.
  • ATI and NV will merge together to stay competitive!
  • ATI and NV are the future!
  • Power VR is the future!
  • I don't care as long as more episodes of RED DWARF get made!

  Total voters: 209
Ailuros said:
ROFL :LOL: Last time I checked he was still insisting on much the same point, with a few minor modifications. I'd love to hear a prediction from Tim himself, though: when does he think his future U3 engine will run adequately on mainstream consumer CPUs? And no, I obviously don't mean 640*480 with point sampling. If by, say, 2010 there are CPUs out there that can render that specific game at full detail with trilinear filtering in 1024*768*32 @ 60 fps, I'll tip my hat.

If Tim Sweeney honestly believes that... I find that scary, really. Unless there is something he knows that we don't, I don't see CPUs going anywhere in the next few years. A few years ago we had the golden age of CPU improvements, I'd say. CPUs went from 500 MHz to 2 GHz in only a couple of years, and we got all the nice goodies we need for software rendering, like MMX and SSE1/2. But since then things have been slow. The most notable improvements lately have been Hyper-Threading and 64-bit, and I don't think either will have much effect on the speed of software rendering.
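As a rough illustration of why those SIMD goodies matter for software rendering (my own toy example, not from the original post, with made-up buffer sizes and NumPy vectorisation standing in for hand-written MMX/SSE):

```python
import numpy as np

# Toy example: blend two RGBA framebuffers (sizes are made up for illustration).
H, W = 768, 1024
src = np.random.randint(0, 256, (H, W, 4), dtype=np.uint8)
dst = np.random.randint(0, 256, (H, W, 4), dtype=np.uint8)

def blend_scalar(src, dst, alpha=0.5):
    # What a plain per-pixel loop does: one small multiply-add per channel.
    out = np.empty_like(dst)
    for y in range(H):
        for x in range(W):
            out[y, x] = (alpha * src[y, x] + (1 - alpha) * dst[y, x]).astype(np.uint8)
    return out

def blend_vectorised(src, dst, alpha=0.5):
    # The same arithmetic applied to whole rows of pixels at once -- the shape
    # of work that MMX/SSE-style SIMD units (and GPUs) are built to chew through.
    return (alpha * src + (1 - alpha) * dst).astype(np.uint8)
```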

What we need is either drastic clock-speed ramping or a revolutionary new way of processing (which most probably can't be retrofitted onto the x86 instruction set anyway).
The first seems unlikely, since CPU speeds have more or less stagnated over the past year or so; even the move to the 0.09 micron process didn't improve things much.
The second seems unlikely because the x86 instruction set is considered holy...
The main improvement for the near future seems to be multi-core processing. Nice, but not that impressive. As we know, 2 CPUs aren't twice as fast as 1, but more like 166% in most cases, and as you add more CPUs (or cores, in this case) the efficiency only drops further.
What makes the difference with a GPU is that its pipelines are completely independent, each with its own access to memory and everything, which gets them much closer to 100% added efficiency per extra unit.
And then of course there's the issue of GPU pipelines being smaller and cheaper, so adding 16 pipelines to a GPU is a reality, while a 16-core Pentium or Athlon is not going to happen anytime soon.
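A quick way to see the "efficiency drops as you add cores" point is Amdahl's law. The sketch below is my addition; the 80% parallel fraction is assumed, chosen only because it reproduces the 166% figure above.

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Assuming (hypothetically) that ~80% of the workload parallelises:
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> {amdahl_speedup(n, 0.80):.2f}x speedup")

# 2 cores give about 1.67x (the '166%' figure above), while 16 cores
# give only about 4x -- nowhere near 16x.
```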
 
I'm not sure exactly what kind of timeframe he was targeting with his more recent prediction, but it was along the lines that GPUs will become redundant and will continue to exist only in the high end, for, let's say, antialiasing functions; like some sort of luxury item.

There have been relevant discussions over and over again here on these boards; while in the theoretical realm I could imagine that CPUs could catch up to some point in the future, I can't imagine that it'll involve advanced texture filtering at all, to name just one example.

Or from a different perspective: the XBox2 is rumoured to have a multi-core CPU capable of 6 threads in total. I wonder, then, why it really needs the Xenon VPU and not something more modest instead.
 
I think people are seriously misrepresenting what Tim has said. He never said that UE3 will run on mainstream computer CPUs; instead, he was saying that there will be a convergence between the two.

Tim Sweeney said:
I think CPU's and GPU's are actually going to converge 10 years or so down the road. On the GPU side, you're seeing a slow march towards computational completeness. Once they achieve that, you'll see certain CPU algorithms that are amenable to highly parallel operations on largely constant datasets move to the GPU. On the other hand, the trend in CPU's is towards SMT/Hyperthreading and multi-core. The real difference then isn't in their capabilities, but their performance characteristics.

When a typical consumer CPU can run a large number of threads simultaneously, and a GPU can perform general computing work, will you really need both? A day will come when GPU's can compile and run C code, and CPU's can compile and run HLSL code -- though perhaps with significant performance disadvantages in each case. At that point, both the CPU guys and the GPU guys will need to do some soul searching!

Tim Sweeney said:
Now is a great time because 3D hardware is coming out of the dark ages of being toy game acceleration technology, and morphing into highly-parallel general computing technology in its own right. The last hurdle is that the GPU vendors need to get out of the mindset of "how many shader instructions should we limit our card to?" and aim to create true Turing-complete computing devices.

We're already almost there. You just need to stop treating your 1024 shader instruction limit as a hardcoded limit, and redefine it as a 1024-instruction cache of instructions stored in main memory. Then my 1023-instruction shaders will run at full performance, and my 5000-instruction shaders might run much more slowly, but at least they will run and not give you an error or corrupt rendering data. You need to stop looking at video memory as a fixed-size resource, and integrate it seamlessly into the virtual memory page hierarchy that's existed in the computing world for more than 30 years. The GPU vendors need to overcome some hard technical problems and also some mental blocks.

In the long run, what will define a GPU -- as distinct from a CPU -- is its ability to process a large number of independent data streams (be they pixels or vertices or something completely arbitrary) in parallel, given guarantees that all input data (such as textures or vertex streams) are constant for the duration of their processing, and thus free of the kind of data hazards that force CPU algorithms to single-thread. There will also be a very different set of assumptions about GPU performance -- that floating point is probably much faster than a CPU, that mispredicted branches are probably much slower, and that cache misses are probably much more expensive.

Read the whole of Sweeney's B3D interview here.
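To make the "independent data streams with constant inputs" idea concrete, here's a small sketch of my own (not from the interview): a pure per-pixel function mapped over a toy framebuffer. Because no pixel depends on another's result, the work splits cleanly across however many workers (cores or pipelines) exist.

```python
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 256, 256  # hypothetical framebuffer size

def shade(pixel_index: int) -> tuple:
    """A toy 'pixel shader': it depends only on its own coordinates and
    constant inputs, never on other pixels' results (no data hazards)."""
    x, y = pixel_index % WIDTH, pixel_index // WIDTH
    r = x / (WIDTH - 1)
    g = y / (HEIGHT - 1)
    b = 0.25
    return (r, g, b)

if __name__ == "__main__":
    # Because every pixel is independent, the work can be split across
    # however many workers (pipelines/cores) are available.
    with ProcessPoolExecutor() as pool:
        framebuffer = list(pool.map(shade, range(WIDTH * HEIGHT), chunksize=4096))
```

A CPU algorithm with data hazards between iterations can't be split up this way, which is exactly the distinction Sweeney draws.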
 
This talk about CPUs becoming GPUs and vice versa is a little weird, I think. Graphics is not a general programming method; it is about vectors and colors and matrices, something that a CPU doesn't offer, nor will offer in the near future.

But I do think that the limit for scanline graphics is almost reached and the future will perhaps be raytracing. There was an NVIDIA paper saying something like: "GPUs cannot trace rays. Yet..."
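For a sense of what "tracing rays" actually means per pixel, here's a minimal ray-sphere intersection test (my own sketch, not from the NVIDIA paper); a raytracer evaluates something like this, plus scene traversal, at least once per pixel.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Solve |o + t*d - c|^2 = r^2 for t >= 0 (direction assumed normalised)."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (direction[0] * ox + direction[1] * oy + direction[2] * oz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False                      # ray misses the sphere entirely
    t_far = (-b + math.sqrt(disc)) / 2.0
    return t_far >= 0.0                   # at least one hit in front of the origin

# Example: a ray from the origin straight down -z towards a sphere at z = -5.
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # True
```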
 
Not directly linked to that, but reading the Hexus article about the RS480/RS400 and keeping the nForce series in mind, I was wondering: what makes GPU makers so good with chipsets?
Are there so many things in common between a GPU and a chipset, or is it just that they decided to invest a lot in R&D for chipsets?
 
But I do think that the limit for scanline graphics is almost reached and the future will perhaps be raytracing. There was an NVIDIA paper saying something like: "GPUs cannot trace rays. Yet..."

I think there is still plenty of room for improvement in scanline graphics. First, shadowmapping can still be improved a lot.
Secondly, triangle rasterization could be replaced by a micropolygon system like REYES.
Thirdly, raytracing will never be faster than rasterizing or REYES, and doesn't solve all problems either. Raytracing will be an added feature, if anything, I suppose. I don't think it will ever replace rasterizing.
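On the first point, for anyone unfamiliar with it, here's a minimal sketch of the basic shadow-map test (my own illustration, with a made-up map size); the single hard-edged depth comparison is what techniques like percentage-closer filtering try to improve on.

```python
import numpy as np

# A made-up 512x512 depth map rendered from the light's point of view:
# each texel holds the depth of the closest surface seen by the light.
shadow_map = np.full((512, 512), np.inf, dtype=np.float32)

def in_shadow(light_u, light_v, depth_from_light, bias=1e-3):
    """Classic shadow-map test: a point is shadowed if something else
    sits closer to the light along the same direction."""
    x = int(light_u * (shadow_map.shape[1] - 1))
    y = int(light_v * (shadow_map.shape[0] - 1))
    return depth_from_light > shadow_map[y, x] + bias
```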
 
Are there so many things in common between a GPU and a chipset, or is it just that they decided to invest a lot in R&D for chipsets?

Well, I suppose they need to know pretty much everything about PCI, AGP and PCI-e for their GPUs... and since memory controllers are crucial to the performance of a GPU, I suppose they are experts in that field as well...
So I suppose some of their knowledge is quite useful in designing an efficient chipset.
Then again, there are a lot of things in a chipset that are completely unrelated to GPUs, I suppose. Think of hard disk controllers or USB ports.
 
Diplo said:
I think people are seriously misrepresenting what Tim has said. He never said that UE3 will run on mainstream computer CPUs; instead, he was saying that there will be a convergence between the two.

Read the whole of Sweeney's B3D interview here.

Great, then I'd like an interpretation of the following comment:

Question: In the past you have suggested rendering would move back to general-purpose processors, but on the other hand you have admitted reservations about massively parallel processors like ‘Cell’. How do you unite these views?

Answer: Well, that's a big question there. CPUs are becoming parallel. With Intel you have this hyper-threading technology, which means you can execute two threads at once almost for free. And Intel and AMD are talking about having multiple cores on their chips. So you'll get to see a CPU in a few years from now that can have 16 cores. This is not that much different from a GPU with 16 pixel pipelines if you think about it. The big difference between CPU and GPU technology is that on the graphics side you can do everything in parallel without dependencies on other stuff. So for rendering, the GPU is going to have a serious advantage over a CPU, maybe a factor of five to ten. Once you get down to every pixel shader computation in full floating point, at some point computation comes back and dominates again. So CPUs are going to become more efficient relative to GPUs in the future.
I can see a point, seven to ten years out, where a lot of computers don't ship with any graphics accelerator at all because the CPUs are fast enough to do that. I can still see the really high-end machines coming out with graphics acceleration.


http://www.beyond3d.com/forum/viewtopic.php?p=237970&highlight=sweeney#237970

....especially the highlighted part.

Tim still holds the view that CPUs will make GPUs redundant in the future, but that high-end machines will still ship with graphics accelerators. I mentioned that bilinear filtering is much faster on a Voodoo than on a P4. He says some may choose to do pixel-shader-based filtering once the hardware is fast enough, and that CPUs will become more efficient once the operations all become floating point.

http://www.beyond3d.com/forum/viewtopic.php?p=197879&highlight=sweeney#197879

Uhmmm yeahrightsureok..... :rolleyes:
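On the bilinear-filtering aside: here is roughly what one bilinear texture fetch costs when done in software (my own sketch, not from either linked thread). Dedicated texture units do this, per pixel, essentially for free.

```python
import numpy as np

# Hypothetical 256x256 RGB texture with values in [0, 1].
texture = np.random.rand(256, 256, 3).astype(np.float32)

def sample_bilinear(u: float, v: float) -> np.ndarray:
    """One bilinear fetch: 4 texel reads and 3 lerps (roughly 8 multiplies
    plus a handful of adds) per pixel, per texture layer."""
    h, w, _ = texture.shape
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0, x0] * (1 - fx) + texture[y0, x1] * fx
    bot = texture[y1, x0] * (1 - fx) + texture[y1, x1] * fx
    return top * (1 - fy) + bot * fy
```

Multiply that by every textured pixel of every frame, plus perspective correction and blending, and the gap between a software renderer and even a 1997-era 3D card becomes obvious.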
 
While I have great respect for Sweeney for other things, I don't think he is the best person to divine where the industry is going. This was particularly obvious, for example, with the original Unreal game, which was designed around software rendering, and had some horrible inefficiencies with hardware rendering because he didn't expect 3D hardware to take off.

I, for one, would not like to count on some currently unknown new technology coming to the forefront and drastically increasing processing power. I rather expect that what we'll see instead is silicon-based designs pushed to their limits over the next 20 years, over which time progress in computing power will become slower and slower. Only when it's painfully obvious that the company which comes out with an entirely new computing technology with more headroom will become the next IBM or Intel will companies start earnestly developing competing technologies.
 
Ten years is a long time in computer gaming.

It takes you from:
[image: 01.gif]

To:
[image: doom37.jpg]
 
Yep, but what I'm saying is that computing power just can't increase by that amount again, not until we move away from silicon transistor-based technologies.
 
Chalnoth said:
Yep, but what I'm saying is that computing power just can't increase by that amount again, not until we move away from silicon transistor-based technologies.
Sure it can. But it won't move in the direction of a single Uberchip.

I mean, Crikey, have you not seen Intel's heatsink for its first dual-core chip?

http://www.theinquirer.net/?article=18350
 
And Intel and AMD are talking about having multiple cores on their chips. So you’ll get to see a CPU in a few years from now that can have 16 cores. This is not that much different from a GPU with 16 pixel pipelines if you think about it.

That's where Sweeney goes wrong. Yes, Intel and AMD are going multi-core. No, they are not going to get 16 cores on one chip anytime soon.
I will agree that if they can get 16 cores onto a chip, and feed them with enough bandwidth and cache, then they have a chance of beating today's GPUs, since you will indeed have a lot more processing power to burn; even if the generic design is less efficient, the extra power will easily compensate.
The problem is that 16 cores requires almost 16 times the transistor count, and that is not going to happen anytime soon; we are already approaching the limits of manufacturing with silicon. If we take Moore's law of doubling the transistor count every 18 months, then 16x means four doublings, or 4*18 = 72 months, which is, well... not in the timespan that Sweeney is thinking about with UE3, I guess. And I doubt that Moore's law will still hold by then.
And even then you'd only get the performance of TODAY's cards.
If you can put that many transistors on a chip, how many dedicated GPU pipelines can you put on it instead? 128?
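The arithmetic behind that estimate, spelled out as a tiny sketch (my addition):

```python
import math

cores_wanted = 16                      # 16 cores vs. one core of the same design
doublings = math.log2(cores_wanted)    # 16x the transistors = 4 doublings
months_per_doubling = 18               # Moore's law, as quoted above
years = doublings * months_per_doubling / 12
print(f"{doublings:.0f} doublings -> roughly {years:.0f} years at Moore's-law pace")
# -> 4 doublings -> roughly 6 years, and even then it only matches today's GPUs.
```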
 
The Baron said:
Sure it can. But it won't move in the direction of a single Uberchip.
Ah, but unless it is a single Uberchip, the cost will become prohibitive for most people. Obviously you can make a massively parallel PC, but you won't be able to sell it to most people, so it won't be made.

I mean, Crikey, have you not seen Intel's heatsink for its first dual-core chip?
No, but I can believe it, as heat is most likely going to become the primary limiting factor for these designs before they run into the fundamental limits of the physics (around a 10 nm process size and ~30 GHz frequency).
 
Scali said:
I will agree that if they can get 16 cores onto a chip, and get enough bandwidth and cache to them, then they have a chance of beating today's GPUs, since you will indeed have a lot more processing power to burn, so even if the generic design is less efficient, the extra power will easily compensate.
I don't know. You'd have about 6x the frequency, obviously, but I don't know if you'd have that much more processing power. With all the specialized hardware (triangle setup, texture filtering) that a GPU can make use of in addition to the multiple FP units in a modern architecture, I doubt even a 16-core CPU would be as fast at doing 3D graphics. And this is even before considering all of the memory bandwidth savings and whatnot that current GPUs employ.
 
I predict that by 2010 we'll have hit 10 GHz CPUs with lord-knows-what tech, and we'll have 1 GB of memory on video cards. Yes, they (3D accelerators) will still exist even then. They have to, otherwise Beyond3D would be boring :) And I haven't even considered Moore's Law (which I've forgotten exactly wtf it means!).

Oh wait, I'm actually not contributing anything important.

:rolleyes:
 
Well, personally, I think the primary advancements in processors beyond the next three years or so will not come from manufacturing, but rather from design and architecture.

That is, we won't necessarily get processors with much higher clocks and higher transistor counts (not by much compared to today's standards, anyway), but we'll see a big move toward more efficient designs instead. This will have to happen on both the software and hardware sides of things.
 
Excuse me for not being as computer-smart as 95% of you guys on the forum, but how much of a difference are dual-core CPUs going to make over single-core, performance-wise, in the future?

The only advantage I have heard and read about is that it will cut down on the heat issue, helping squeeze out more performance, and that it will take time before we see the difference game-wise.

I don't see CPUs and GPUs merging anytime soon within this decade, so if I am wrong please let me know and explain it to us technologically challenged people... :)
 
I'm not qualified to answer, but that hasn't stopped me before. :)

In terms of 3D rendering, or in general? Logically, dual-core can at most offer a 2x performance increase, but that's assuming you're not bandwidth-limited. Considering the laughably greater bandwidth available to GPUs, I don't see CPUs catching up in rendering power anytime soon.

Yes, dual-core can sidestep heat/power issues for a time (you just put two relatively efficient 2-3GHz CPUs together, rather than building a very inefficient 4-5GHz one), but Intel and AMD will still have to solve the energy efficiency problems at smaller processes and higher speeds if they want to maintain small die sizes. Otherwise, they'll hit clock speed limits, at which point they'll be forced to go multi-core. That, in turn, will necessitate a change in programming--a focus on multi-threaded, rather than single-threaded, apps--to capitalize on those multi-core CPUs. Otherwise, that second core will be wasted on most office users (and humans), who do one task at a time.

So dual cores aren't a panacea to gamers or rendering, at least not yet. They're still eminently desirable, though. :)
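To illustrate the kind of programming change being described, here's a trivial sketch of my own (nothing from the post): the same CPU-bound jobs run one after another and then spread across processes. Only the explicitly parallel version can use the second core.

```python
from multiprocessing import Pool

def heavy_task(n: int) -> int:
    # Stand-in for a CPU-bound chunk of work (e.g. part of a frame or a batch job).
    return sum(i * i for i in range(n))

JOBS = [2_000_000] * 8

def run_single_threaded():
    return [heavy_task(n) for n in JOBS]     # uses one core, one job at a time

def run_multi_core():
    with Pool() as pool:                     # uses as many cores as the machine has
        return pool.map(heavy_task, JOBS)

if __name__ == "__main__":
    assert run_single_threaded() == run_multi_core()
```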
 
Ailuros said:
Oh no not again :LOL:

We are developing a prototype processor, SCALE, which is an instantiation of the vector-thread architectural paradigm designed for low-power and high-performance embedded systems. As transistors have become cheaper and faster, embedded applications have evolved from simple control functions to cellphones that run multitasking networked operating systems with realtime video, three-dimensional graphics, and dynamic compilation of garbage-collected languages. Many other embedded applications require sophisticated high-performance information processing, including streaming media devices, network routers, and wireless base stations. The SCALE architecture is intended to replace ad-hoc collections of microprocessors, DSPs, FPGAs, and ASICs, with a single hardware substrate that supports a unified high-level programming environment and provides performance and energy-efficiency competitive with custom silicon.

Considering the embedded/cellphone-related stuff, doesn't it puzzle you one bit that Intel itself is using a dedicated 3D chip, in the form of the 2700G, for that market?

There is an attempt by NeoMagic's MiMagic, I think, with their APA (associative processor array). It doesn't look like a big success so far.

But you have to ask the question: how much faster, and at what cost?

In the case above? Something like night and day, perhaps? And we're not talking about excessive gate counts, high power consumption, or anything close to current PDA/mobile CPU core frequencies; more like something between 50-150 MHz for the 3D cores versus 300-500 MHz for the current CPUs.

Furthermore, I can see WGF having as one of its primary targets offloading the CPU even more than ever before; we're looking at a possible 2006-or-later launch for that one, and a more or less four-year lifetime for that API, just like DX9.0.


Intel isn't looking too smart these days, given their recent track record. Much of Intel's recent hardship stems from the complexity of their architectures. Their CISC/RISC superscalar and VLIW-style CPUs are just too complex to be cost-efficient. Clearly they are running into problems with heat, and the memory wall isn't going away anytime soon.

There is one processing style that does address these problems, though. Vector processors are a perfect fit for next-gen architectures. They naturally exploit parallelism, are scalable, and are relatively simple to design. They don't rely on expensive cache to be effective; data streaming is the future, and you don't need much cache for that. VLIW and superscalar designs are wasteful with cache for multimedia applications, and multimedia is probably one of the few things that taxes a computer these days for the majority of users. A person is physically limited in how fast they can type a document and read email.

A vector processor is going to need lots of bandwidth, however, so instead of cache, eDRAM gets used. The density of eDRAM is much higher than that of SRAM, eDRAM consumes less power, and the bandwidth provided by eDRAM clobbers SRAM.


Intel is slow to change, but I think a change will come soon. IBM, Sony, and Toshiba have been working on CELL for a few years now. To me, CELL comes across as a massive vector processor. The patents show a large amount of eDRAM and talk about using REYES for rendering; both of these are big hints at a vector processor.
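As a rough illustration of the "streaming data needs bandwidth, not cache" argument, here's a small sketch (my own, with made-up numbers) of a media-style kernel processed in fixed-size chunks; each element is touched exactly once, so wide vector units and raw memory bandwidth matter far more than a big cache.

```python
import numpy as np

CHUNK = 4096  # hypothetical vector length / streaming chunk size

def stream_gain(samples: np.ndarray, gain: float) -> np.ndarray:
    """Apply a gain to an audio/pixel stream chunk by chunk. Each element is
    read once, transformed, and written once -- no reuse, so a large cache
    buys little; vector width and bandwidth set the pace."""
    out = np.empty_like(samples)
    for start in range(0, len(samples), CHUNK):
        chunk = samples[start:start + CHUNK]
        out[start:start + CHUNK] = np.clip(chunk * gain, -1.0, 1.0)
    return out

signal = np.random.uniform(-1.0, 1.0, 1_000_000).astype(np.float32)
louder = stream_gain(signal, 1.5)
```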
 