Is the writing on the wall? ATI and NV doomed!

Discussion in 'Architecture and Products' started by Brimstone, Sep 10, 2004.


What is going to happen to ATI and NV over the next 5-10 years?

  1. Yes, they are doomed! The Playstation 3 is the future!

    100.0%
  2. No, they will figure out a way to survive!

    0 vote(s)
    0.0%
  3. They will probably merge with larger companies like AMD and Intel.

    0 vote(s)
    0.0%
  4. ATI and NV will merge together to stay competitive!

    0 vote(s)
    0.0%
  5. ATI and NV are the future!

    0 vote(s)
    0.0%
  6. Power VR is the future!

    0 vote(s)
    0.0%
  7. I don't care as long as more episodes of RED DWARF get made!

    0 vote(s)
    0.0%
  1. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    If Tim Sweeney honestly believes that... I find that scary, really. Unless there is something he knows that we don't, I don't see CPUs going anywhere in the next few years. A few years ago we had the golden age of CPU improvements, I'd say: CPUs went from 500 MHz to 2 GHz in only a couple of years, and we got all the nice goodies we need for software rendering, like MMX and SSE1/2. But since then, things have been slow. The most notable improvements lately have been HTT and 64-bit. I don't think either will have much effect on the speed of software rendering.

    What we need is a drastic clock-speed ramp or a revolutionary new way of processing (which most probably can't be retrofitted onto the x86 instruction set anyway).
    The first seems unlikely, since CPU speeds have more or less stagnated in the past year or so; even going to the 90 nm (0.09 micron) process didn't improve things a lot.
    The second seems unlikely because the x86 instruction set is considered holy...
    The main improvement for the near future seems to be multi-core processing. Nice, but not that impressive. As we know, 2 CPUs aren't twice as fast as 1, but more like 166% in most cases. As you add more CPUs (or cores, in this case), efficiency will just drop further.
    What makes the difference with a GPU is that its pipelines are completely independent, each with its own memory controller and everything, which gets them much closer to 100% added efficiency per extra unit.
    And then of course there's the issue of GPU pipelines being smaller and cheaper, so adding 16 pipelines to a GPU is a reality, while a 16-core Pentium or Athlon is not going to happen anytime soon.
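    That "166% for 2 CPUs" figure is roughly what Amdahl's law predicts when part of the workload stays serial. A minimal sketch; the 20% serial fraction is an illustrative assumption chosen to reproduce the figure, not a measurement:

```python
def amdahl_speedup(cores, serial_fraction):
    """Amdahl's law: overall speedup with `cores` units when a fixed
    fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With ~20% of the work serial, two cores give about 1.67x ("166%"),
# and per-core efficiency keeps dropping as more cores are added.
for n in (1, 2, 4, 16):
    s = amdahl_speedup(n, 0.20)
    print(f"{n:2d} cores: {s:.2f}x speedup, {100 * s / n:.1f}% efficiency")
```

    This is also why Scali's GPU comparison matters: independent pipelines push the parallel fraction close to 1, where the formula approaches linear scaling.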
     
  2. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    I'm not sure exactly what kind of timeframe he was targeting with his more recent prediction, but it was along the lines that GPUs will become redundant and will continue to exist only at the high end, for, let's say, antialiasing functions; like some sort of luxury items.

    There have been relevant discussions over and over again here on these boards; while in the theoretical realm I could imagine CPUs catching up to some point in the future, I can't imagine that it'll involve advanced texture filtering at all, to take just one example.

    Or from a different perspective: the XBox2 is rumoured to have a multi-core CPU capable of 6 threads in total. I wonder, on the other hand, why it really needs the Xenon VPU and not something more modest instead.
     
  3. Diplo

    Veteran

    Joined:
    Apr 17, 2004
    Messages:
    1,474
    Likes Received:
    64
    Location:
    UK
    I think people are seriously misrepresenting what Tim has said. He never said that UE3 will run on mainstream computer CPUs; instead he was saying that there will be a convergence between the two.

    Read the whole of Sweeney's B3D interview here.
     
  4. Sigma

    Newcomer

    Joined:
    Jul 2, 2004
    Messages:
    88
    Likes Received:
    0
    Location:
    Portugal
    This talk about CPUs becoming GPUs and vice versa is a little weird, I think. Graphics is not a general programming method; it is about vectors and colors and matrices, something that a CPU doesn't offer, nor will offer in the near future.

    But I do think the limit for scanline graphics is almost reached, and the future will perhaps be raytracing. There was an NVIDIA paper saying something like: "GPUs can not trace rays. Yet..."
     
  5. PatrickL

    Veteran

    Joined:
    Mar 3, 2003
    Messages:
    1,315
    Likes Received:
    13
    Not directly linked to that, but reading the Hexus article about the RS480/RS400 and keeping the nForce series in mind, I was wondering: what makes GPU makers so good at chipsets?
    Are there that many things in common between a GPU and a chipset, or is it just that they decided to invest a lot in chipset R&D?
     
  6. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    I think there is still plenty of room for improvement in scanline graphics. First, shadowmapping can still be improved a lot.
    Secondly, triangle rasterization could be replaced by a micropolygon system like REYES.
    Thirdly, raytracing will never be faster than rasterizing or REYES, and doesn't solve all problems either. Raytracing will be an added feature, if anything, I suppose. I don't think it will ever replace rasterizing.
     
  7. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Well, I suppose they need to know pretty much everything about PCI, AGP and PCI-e for their GPUs... and since memory controllers are crucial to the performance of a GPU, I suppose they are experts in that field as well...
    So I suppose some of their knowledge is quite useful in designing an efficient chipset.
    Then again, there are a lot of things in a chipset that are completely unrelated to GPUs, I suppose. Think of hard disk controllers or USB ports.
     
  8. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Great, then I'd like an interpretation of the following comment:



    http://www.beyond3d.com/forum/viewtopic.php?p=237970&highlight=sweeney#237970

    ... especially the highlighted part.

    http://www.beyond3d.com/forum/viewtopic.php?p=197879&highlight=sweeney#197879

    Uhmmm yeahrightsureok..... :roll:
     
  9. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    While I have great respect for Sweeney for other things, I don't think he is the best person to divine where the industry is going. This was particularly obvious, for example, with the original Unreal game, which was designed around software rendering and had some horrible inefficiencies with hardware rendering, because he didn't expect 3D hardware to take off.

    I, for one, would not like to count on some currently unknown technology coming to the forefront and drastically increasing processing power. I rather expect that what we'll see instead is silicon-based designs pushed to their limit over the next 20 years, over which time progress in computing power will become slower and slower. Companies will only start earnestly developing competing technologies once it's painfully obvious that whoever comes out with an entirely new computing technology with more headroom will become the next IBM or Intel.
     
  10. Diplo

    Veteran

    Joined:
    Apr 17, 2004
    Messages:
    1,474
    Likes Received:
    64
    Location:
    UK
    Ten years is a long time in computer gaming.

    It takes you from:
    [image]

    To:
    [image]
     
  11. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Yep, but what I'm saying is that computing power just can't increase by that amount again, not until we move away from silicon transistor-based technologies.
     
  12. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Sure it can. But it won't move in the direction of a single Uberchip.

    I mean, Crikey, have you not seen Intel's heatsink for its first dual-core chip?

    http://www.theinquirer.net/?article=18350
     
  13. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    That's where Sweeney goes wrong. Yes, Intel and AMD are going multi-core. No, they are not going to get 16 cores on one chip anytime soon.
    I will agree that if they can get 16 cores onto a chip, and get enough bandwidth and cache to them, then they have a chance of beating today's GPUs, since you will indeed have a lot more processing power to burn; even if the generic design is less efficient, the extra power will easily compensate.
    Problem is, 16 cores requires almost 16 times the transistor count, and that is not going to happen anytime soon; we are already approaching the limits of manufacturing with silicon. If we take Moore's law of doubling the transistor count every 18 months, then we'd need 4 doublings, or 4*18 months, which is... well, not in the timespan that Sweeney is thinking about with UE3, I guess. And I doubt that Moore's law will still hold by then.
    And even then you'd only get the performance of TODAY's cards.
    If you can put that many transistors on a chip, how many dedicated GPU pipelines could you put on it instead? 128?
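    The back-of-the-envelope arithmetic above can be written out explicitly; a quick sketch under Scali's stated assumption of one transistor-count doubling every 18 months:

```python
import math

def months_to_scale(factor, months_per_doubling=18):
    """Months until the transistor budget grows by `factor`,
    assuming a Moore's-law doubling at the given cadence."""
    return math.log2(factor) * months_per_doubling

# 16x the transistors = log2(16) = 4 doublings = 72 months, i.e. 6 years.
print(months_to_scale(16))
```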
     
  14. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Ah, but unless it is a single Uberchip, the cost will become prohibitive for most people. Obviously you can make a massively parallel PC, but you won't be able to sell it to most people, so it won't be made.

    No, but I can believe it, as heat is most likely going to become the primary limiting factor for these designs before they hit the fundamental limits of the physics (about a 10 nm process size and ~30 GHz frequency).
     
  15. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    I don't know. You'd have about 6x the frequency, obviously, but I don't know if you'd have that much more processing power. With all the specialized hardware (triangle setup, texture filtering) that can be made use of in addition to the multiple FP units in a modern architecture, I doubt even a 16-core CPU would be as fast at 3D graphics. And this is before even considering all of the memory bandwidth savings and whatnot that current GPUs employ.
     
  16. Reverend

    Banned

    Joined:
    Jan 31, 2002
    Messages:
    3,266
    Likes Received:
    24
    I predict that by 2010 we'll have hit 10 GHz CPUs with lord-knows-what tech. And we'll have 1 GB of memory on video cards. Yes, they (3D accelerators) will still exist even then. They have to, otherwise Beyond3D would be boring :) And I haven't even considered Moore's Law (which I have forgotten exactly wtf it means!).

    Oh wait, I'm actually not contributing anything important.

    :roll:
     
  17. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, personally, I think the primary advancements in processors beyond the next three years or so will not be in manufacturing advancements, but rather in technology advancements.

    That is, we won't necessarily get processors with higher clocks and higher transistor counts (well, not by much compared to today's standards, anyway), but we'll see a big move toward companies shooting for more efficient designs instead. This will have to be a move on both the software and hardware sides of things.
     
  18. kenneth9265_3

    Newcomer

    Joined:
    Mar 15, 2004
    Messages:
    89
    Likes Received:
    0
    Location:
    louisville
    Excuse me for not being as computer-smart as 95% of you guys on the forum, but how much of a difference are dual-core CPUs going to make over single-core, performance-wise, in the future?

    The only advantage I have heard and read about is that it will cut down on the heat issue to help squeeze out more performance, and that will take time before we see the difference game-wise.

    I don't see CPUs and GPUs merging anytime soon within this decade, so if I am wrong please let me know and explain to us technologically-challenged people please... :)
     
  19. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    I'm not qualified to answer, but that hasn't stopped me before. :)

    In terms of 3D rendering, or in general? Logically, dual-core can offer at most a 2x performance increase, and that's assuming you're not bandwidth-limited. Considering the laughably greater bandwidth available to GPUs, I don't see CPUs catching up in rendering power anytime soon.

    Yes, dual-core can sidestep heat production issues for a time (you just put two relatively efficient 2-3 GHz CPUs together, rather than building a very inefficient 4-5 GHz one), but Intel and AMD will still have to solve the energy efficiency problems at smaller processes and higher speeds if they want to maintain small die sizes. Otherwise, they'll hit clock speed limits, at which point they'll be forced to go multi-core. That, in turn, will necessitate a change in programming--a focus on multi-threaded, rather than single-threaded, apps--to capitalize on those multi-core CPUs. Otherwise, that second core will be wasted on most office users (and humans), who do one task at a time.

    So dual cores aren't a panacea for gamers or rendering, at least not yet. They're still eminently desirable, though. :)
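    The programming shift Pete describes, splitting work into independent tasks so a second core isn't idle, can be sketched in miniature. This is purely illustrative (the function names are made up), and in practice CPU-bound Python code would need processes rather than threads to see a real speedup:

```python
import threading

def threaded_sum(values, n_workers=2):
    """Partition `values` across worker threads, sum each partition
    independently, then combine the partial results."""
    chunks = [values[i::n_workers] for i in range(n_workers)]
    partials = [0] * n_workers

    def work(idx):
        # Each worker touches only its own slot: no shared mutable state.
        partials[idx] = sum(chunks[idx])

    threads = [threading.Thread(target=work, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

print(threaded_sum(list(range(101))))  # 5050
```

    The key point is the restructuring, not the library: a single-threaded office app gains nothing from a second core until its work is decomposed like this.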
     
  20. Brimstone

    Brimstone B3D Shockwave Rider
    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    1,835
    Likes Received:
    11

    Intel isn't looking too smart these days with their recent track record. Much of Intel's recent hardship stems from the complexity of their architectures. Their CISC/RISC superscalar and VLIW-style CPUs are just too complex to be cost-efficient. Clearly they are running into problems with heat, and the memory wall isn't going away anytime soon.

    There is one processing style that does solve these problems, though. Vector processors are a perfect fit for next-gen architectures. They naturally exploit parallelism, are scalable, and are relatively simple to design. They don't rely on expensive cache to be effective. Data streaming is the future, and you don't need much cache for that. VLIW and superscalar designs are wasteful with cache for multimedia applications. Multimedia applications are probably one of the few things that tax a computer these days for the majority of users; a person is physically limited in how fast they can type a document and read email.

    A vector processor is going to need lots of bandwidth, however, so instead of cache, eDRAM gets used. The density of eDRAM compared to SRAM is much higher, eDRAM consumes less power, and the bandwidth provided by eDRAM clobbers SRAM.

    Intel is slow to change, but I think a change will come soon. IBM, Sony, and Toshiba have been working on CELL for a few years now. To me, CELL comes across as a massive vector processor. The patents show a large amount of eDRAM and talk about using REYES for rendering. Both of these are big hints at a vector processor.
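    The claim that vector processors "naturally exploit parallelism" comes down to amortizing per-instruction overhead across many data elements. A toy model of the instruction-count argument; the 8-lane width and function names are illustrative assumptions, not any real architecture:

```python
VECTOR_WIDTH = 8  # lanes per vector register (illustrative)

def scalar_instruction_count(n_elements):
    """Scalar loop: one instruction issued per element."""
    return n_elements

def vector_instruction_count(n_elements, width=VECTOR_WIDTH):
    """Vector-style execution: one instruction per full or partial
    register of `width` lanes (ceiling division)."""
    return -(-n_elements // width)

n = 1000
print(scalar_instruction_count(n))  # 1000 instructions
print(vector_instruction_count(n))  # 125 instructions
```

    Fewer instructions to fetch and decode per element is what lets a streaming vector design get by with modest control logic and, as Brimstone notes, less reliance on cache.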
     
  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.