Is the writing on the wall? ATI and NV doomed!

Discussion in 'Architecture and Products' started by Brimstone, Sep 10, 2004.

?

What is going to happen to ATI and NV over the next 5-10 years?

  1. Yes, they are doomed! The Playstation 3 is the future!

    100.0%
  2. No, they will figure out a way to survive!

    0 vote(s)
    0.0%
  3. They will probably merge with larger companies like AMD and Intel.

    0 vote(s)
    0.0%
  4. ATI and NV will merge together to stay competitive!

    0 vote(s)
    0.0%
  5. ATI and NV are the future!

    0 vote(s)
    0.0%
  6. Power VR is the future!

    0 vote(s)
    0.0%
  7. I don't care as long as more episodes of RED DWARF get made!

    0 vote(s)
    0.0%
  1. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    The VGP for MBX is optional; it's a 4-way VS1.1-compliant SIMD. PowerVR will be using proprietary extensions for OGL temporarily, I think; no idea whether it's supported in D3DM so far, though.

    Bitboys have already announced pixel and vertex shader support in their next-generation mobile chips (no idea yet which versions).

    Merely a guess, but I'd say that the GPU path will be followed, and it seems like the embedded space is catching up at a very fast rhythm, especially looking at some recent presentations concerning OGL-ES 1.1 and D3DM, and of course the plans for both in the immediate future.

    <shrugs>
     
  2. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    Thanks for the links, Brimstone. I'll look into them.
     
  3. Himself

    Regular

    Joined:
    Sep 29, 2002
    Messages:
    381
    Likes Received:
    2
    We don't need super-fast CPUs, we need super-fast memory. If that ever becomes an option, you can drop the cache from CPUs and solve a lot of problems at once: the non-cache parts of the CPU die are tiny relative to the cache, so more than two cores in a CPU would be a given, and you could fit more dies on a board and spread out the features for better heat dissipation. If you had some type of RAM that could run at 5-10 GHz, that is... lol... that's the biggest bottleneck in PCs today, the RAM. I think you need a new RAM cell technology to really break the bottleneck, not hacks and funky designs.

    Unless there is some solution for RAM, I doubt the number of cores on a die will matter much for games.
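    A back-of-envelope sketch of the RAM bottleneck described above; all figures are assumptions chosen to resemble a 2004-era desktop, not measurements:

```python
# Illustrative "memory wall" arithmetic for a 2004-era desktop.
# Every figure below is an assumption for the sketch, not a measurement.

cpu_clock_hz = 3.2e9          # e.g. a 3.2 GHz Pentium 4
flops_per_cycle = 2           # assumed: two SSE2 double ops per cycle
peak_flops = cpu_clock_hz * flops_per_cycle          # 6.4 GFLOP/s

ram_bandwidth = 6.4e9         # dual-channel DDR400: ~6.4 GB/s peak

# Machine balance: bytes the memory system can supply per FLOP.
bytes_per_flop = ram_bandwidth / peak_flops
print(bytes_per_flop)         # 1.0 byte/FLOP

# A streaming double-precision kernel needs ~8 bytes loaded per FLOP,
# so RAM can only feed about 1/8 of the CPU's peak rate here.
utilization = bytes_per_flop / 8
print(utilization)            # 0.125
```

    With these assumed numbers the CPU spends most of its peak capacity waiting on memory, which is the post's point in miniature.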
     
  4. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    We already have a CPU like that: the Itanium. A 1.5 GHz Itanium is said to perform like a 1.5 GHz Xeon when running x86 emulation under Windows XP.
    And Transmeta itself is similar... it runs emulated x86 at 'usable' speeds, but not quite as fast as 'native' x86 CPUs like the Athlon and Pentium.

    The problem is extremely simple: how do you sell a CPU that is slower, but not significantly cheaper?
    The answer is equally simple: you don't.

    Technically, the transition to Itanium is possible, and it would be a better choice for the future than x86-64. But for the simple reason that x86-64 is slightly cheaper and gives more performance in native x86 programs, it is not going to happen. People invest in the now, not in the future. My hopes are on .NET to cut us loose from the x86 legacy and finally give us free choice of hardware (something I consider far more important than all that free-software nonsense). Thankfully, GPUs never got stuck on a standard ISA and have used a virtual machine from the start; I think that is part of the reason why GPUs have evolved this fast. I think CPUs will evolve much faster as well, as soon as the x86 legacy is over. There will be a lot more freedom to experiment with new technology.
     
  5. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Wouldn't we also need bus protocols that deliver substantially higher bandwidth than PCI-E? Otherwise the bus itself becomes the bottleneck if host RAM has a much higher bandwidth than the bus can deliver to the host CPU. Needless to say, texturing over the bus should usually be avoided.

    I wonder whether, and to what degree, Virtual Memory will make a difference in that department for future WGF-compliant GPUs.
     
  6. Himself

    Regular

    Joined:
    Sep 29, 2002
    Messages:
    381
    Likes Received:
    2
    Bus speed isn't a huge problem compared to something fundamental like RAM cells. A memory controller on the CPU would probably be the way to go, to avoid having two GHz-class parts on the board at once. Once you have that, the low-bandwidth stuff can be done with an off-board chip. It's a given that if you remove the cache from the CPU, it's not going to be the same part you have today, so a fast interconnect to some hypothetical super-RAM is certainly doable. Hey, if we're dreaming here, we could have the RAM on the CPU... lol

    WGF GPUs and virtual memory? Not a clue. By the time you get any kind of super-RAM going, I would hope Windows will be a lot different anyway... lol
     
  7. HolySmoke

    Newcomer

    Joined:
    May 20, 2004
    Messages:
    84
    Likes Received:
    61
    Speaking of CPUs...

    It's been a year and a half since the 3.2 GHz P4 was introduced, and we're up to, what, 3.6 GHz now? Even taking into account faster bus speeds, larger caches, and on-die memory controllers, raw MHz has hit a wall, and I'm willing to bet these small "progresses" don't come close to a doubling in performance. So ATi and Nvidia aren't going anywhere soon.

    Just a little factoid many people seem to forget... I voted for Red Dwarf, btw.
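    The clock-scaling figures above, worked through (the 18-month doubling cadence is the usual rule-of-thumb assumption, not a claim from the post):

```python
# How far clock speeds fell behind an 18-month doubling cadence.
# 3.2 GHz and 3.6 GHz are the figures from the post above.

months = 18
start_ghz, end_ghz = 3.2, 3.6

actual_growth = end_ghz / start_ghz
print(actual_growth)                   # 1.125 -> only +12.5%

# A naive "doubles every 18 months" expectation for the same period:
expected_growth = 2 ** (months / 18)
print(expected_growth)                 # 2.0

# Annualized clock growth rate implied by the actual numbers:
annual_rate = actual_growth ** (12 / months) - 1
print(round(annual_rate, 4))           # ~0.08, i.e. roughly 8% per year
```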
     
  8. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Hmmm, I still seem to be missing your point, obviously, since you mentioned games. If you meant onboard GPU RAM with ultra-high frequencies and/or amounts, then I guess I'd be closer. Graphics boards (especially at the high end) get packed with as much of the fastest possible memory as they can, precisely to avoid passing any data over the bus to host RAM.

    Onboard graphics RAM nowadays has, in its highest-end incarnations, a maximum bandwidth of 35+ GB/s, and it'll most likely rise over the 40-45 GB/s barrier with the next generation. Unless we're talking about a very unlikely scenario of an SoC with ultra-fast UMA, I can't imagine how anything off-board would be more efficient than trying to keep as much as possible in onboard RAM.

    My crystal ball can't reach beyond the next 5-6 years. The WGF timeframe is the longest period into the future that I can speculate on with any safety.


    ***edit: entirely OT, but what I currently consider one of the slowest parts of mainstream systems is the hard drive.
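    Bandwidth figures like the "35+ GB/s" above fall out of effective memory clock times bus width; the clock and width below are assumptions roughly matching 2004 high-end boards:

```python
# Where a "35+ GB/s" onboard figure comes from:
#   bandwidth = effective memory clock x bus width.
# The numbers are assumed, roughly matching 2004 high-end GDDR3 boards.

bus_width_bits = 256
effective_clock_hz = 1.1e9        # ~550 MHz GDDR3, double data rate

bandwidth = effective_clock_hz * bus_width_bits / 8   # bytes/s
print(bandwidth / 1e9)            # 35.2 GB/s

# Pushing the effective clock to ~1.4 GHz clears the 40 GB/s mark:
print(1.4e9 * 256 / 8 / 1e9)      # 44.8 GB/s
```

    So the next-generation jump past 40 GB/s needs only a faster memory clock, not a wider bus.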
     
  9. Martin Eddy

    Regular

    Joined:
    Oct 5, 2003
    Messages:
    491
    Likes Received:
    4
    Location:
    Australia,Brisbane
    I'd have to agree with you on that one. They keep upping the theoretical transfer speed (e.g. SATA, SATA II), but the drives can't come anywhere near that speed in practice. Maybe they need a 4-way RAID 0 setup built into the drive? :?
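    The in-drive striping idea can be sketched numerically; the per-platter sustained rate below is an assumed round number, not a spec:

```python
# Why a hypothetical in-drive 4-way RAID 0 would close the gap to the
# interface's theoretical rate. All figures are assumptions for the sketch.

platter_rate = 60e6            # ~60 MB/s sustained from one set of platters
stripes = 4
sata_limit = 150e6             # SATA 1.0: 150 MB/s

striped_rate = platter_rate * stripes
print(striped_rate / 1e6)      # 240.0 MB/s raw off the media...

# ...but the interface caps what the host actually sees:
delivered = min(striped_rate, sata_limit)
print(delivered / 1e6)         # 150.0 MB/s -- now the cable is the limit
```

    In other words, with enough internal striping the "theoretical" interface speed would finally be the real bottleneck.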
     
  10. borntosoul

    Regular

    Joined:
    Oct 9, 2002
    Messages:
    319
    Likes Received:
    46
    Location:
    Au
    GPUs have been growing much faster than CPUs in performance over the last 6 years. I don't see this trend changing for a long time to come.
     
  11. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Um, I doubt it. .NET doesn't offer anything fundamentally new that Java doesn't, and it's not multiplatform like Java. So performance-sensitive apps will still always use a "normal" programming language (C/C++, etc.), and .NET will be relegated to mostly web applications (if it catches on much at all....).
     
  12. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    It does, actually: it is backed by the developer of the most popular OS in the world.
    Also, I doubt MS will make the same mistakes Sun made. Sun practically shot Java in the foot with all the API rehashes and by ignoring multimedia.

    Being multiplatform is not an issue at all. I said free choice of hardware. If all hardware runs Windows, so be it, as long as I can choose what hardware that is. .NET makes that a reality. The hardware will have a much larger impact on the performance of the complete system than the OS, in most cases. Running linux doesn't suddenly make my CPU twice as fast as it is when running Windows, so who cares?

    You're missing a major feature of .NET over Java: seamless integration with native code. Even 'performance-sensitive' apps will generally not be entirely performance-sensitive. For example, if it has a GUI, that part doesn't have to be coded natively at all. Just like C/C++ and inline-asm back in the day, .NET could be used for most of the code, and native code can be used for the performance-critical parts. And eventually, just like C/C++, the combination of hardware and compiler will become efficient enough to write even most 'performance-sensitive' apps entirely without native optimizations.
    And, just like C/C++, you get the advantages of .NET for all parts of an application that you write in .NET. Only the remaining native portions will have to be ported.

    In short, saying that you will always have C/C++ code is like saying you will always have asm-code. As we know by now, a few exceptions aside, nothing uses asm-code anymore.
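    The managed-glue/native-hot-path split described above can be illustrated by analogy in Python, with the interpreter playing the managed runtime and a C-implemented builtin playing the native code:

```python
# Analogy to the managed/native split: keep the "glue" in the high-level
# language, and let the hot loop run as native code (here, the C
# implementation behind Python's built-in sum()).
import timeit

data = list(range(100_000))

def slow_sum(xs):
    # The hot loop written in the interpreted language itself.
    total = 0
    for x in xs:
        total += x
    return total

# Both paths compute the same result; only the hot loop's home differs.
assert slow_sum(data) == sum(data)

t_managed = timeit.timeit(lambda: slow_sum(data), number=50)
t_native = timeit.timeit(lambda: sum(data), number=50)
print(t_managed > t_native)          # the native-backed path wins
```

    The design point is the post's: most of an application tolerates the managed path, and only the measured hot spots need the native one.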
     
  13. Diplo

    Veteran

    Joined:
    Apr 17, 2004
    Messages:
    1,474
    Likes Received:
    64
    Location:
    UK
    I think this is one of the major factors, if not THE major factor, that is stopping the evolution of CPUs into true parallel processing. Sure, multi-core is an option in the short term, and might help bridge the gap, but it seems to me the way IBM/Sony/Toshiba are heading is the right way for the long term.

    Really, what we need is some kind of paradigm shift in processing that allows future processors to emulate the x86 architecture at speeds at least as fast as the current generation. Only then can we break free of the dead end that x86 is increasingly becoming. Is this possible in ten years? I think so, if everyone involved puts their minds to it. Will it happen? I somehow doubt it. Perhaps what it really needs is one of the big chip manufacturers working with Microsoft, because I believe an OS written around the new technology could be the catalyst that's needed.
     
  14. Brimstone

    Brimstone B3D Shockwave Rider
    Veteran

    Joined:
    Feb 8, 2002
    Messages:
    1,835
    Likes Received:
    11
    One of the greatest challenges facing the CELL project is the lack of computer languages able to exploit parallelism. All the time and effort poured into C and C++ doesn't help a parallel processor, because those languages are sequentially based. IBM's Blue Gene research is focusing on this area, if I'm not mistaken.
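    The language problem above in miniature, sketched in Python: the same computation written as a sequential loop and as a map over a thread pool.

```python
# Sequential languages express work as ordered loops; recasting the same
# work as a map over a pure function is what lets a runtime spread it
# across cores at all.
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # A pure function: no shared state, so the calls are independent.
    return n * n

inputs = list(range(8))

# Sequential formulation: an ordinary loop.
sequential = [work(n) for n in inputs]

# Parallel formulation: the identical computation as a map over a pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, inputs))

print(parallel == sequential)        # True -- results and order match

# Caveat: CPython's GIL means CPU-bound threads gain no real speedup
# here -- the language/runtime, not the hardware, is the limit, which
# is the post's complaint in miniature.
```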
     
  15. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    I'm very sure of one thing: I will never, EVER buy myself any kind of console (at least the way they are today).

    I hate gamepads :D
     
  16. Ichneumon

    Regular

    Joined:
    Feb 3, 2002
    Messages:
    414
    Likes Received:
    1
    Should probably take this to the hardware forum but...

    Why can't they do something like that built into the drive? Something like the Kenwood TrueX design did for fast CD access times with slower rotational speeds... but applied to hard drives. Several read/write heads built into the system somehow.

    I remember, quite a long time ago, a drive that had 2 separate read/write heads, and IIRC it was kind of a mess... but I have to believe there's a better way than the single moving read/write head of today...
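    The multi-head idea can be given a back-of-envelope model, treating head travel over a unit-length stroke as the cost (a toy model, not drive physics):

```python
# Expected seek distance with one head vs. two independent heads,
# modeling the head stroke as the unit interval [0, 1].
import random

random.seed(42)
N = 100_000
targets = [random.random() for _ in range(N)]

# One head parked mid-stroke at 0.5: mean distance tends to 0.25.
one_head = sum(abs(x - 0.5) for x in targets) / N
print(one_head)                      # close to 0.25

# Two heads parked at 0.25 and 0.75, nearest head serves each request:
# the mean distance tends to 0.125.
two_heads = sum(min(abs(x - 0.25), abs(x - 0.75)) for x in targets) / N
print(two_heads)                     # close to 0.125

# A second independent head roughly halves the average stroke length.
```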
     
  17. dominikbehr

    Newcomer

    Joined:
    Apr 19, 2002
    Messages:
    72
    Likes Received:
    0
    Location:
    Sunnyvale, CA
    From another point of view, the area used for the translation unit isn't that big overall compared to the size of the chip, and the x86 instruction set happens to be good for producing quite compact and cache-friendly code.
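    The code-density point can be illustrated with made-up but plausible encoding sizes (the x86 byte counts below are assumptions for the sketch, not measured from real code):

```python
# Code density: x86's variable-length encoding lets common instructions
# be 1-3 bytes, while a fixed 32-bit RISC encoding pays 4 bytes for
# everything. Instruction mix and sizes below are illustrative only.

loop = [
    ("push reg", 1),
    ("mov reg, reg", 2),
    ("add reg, imm8", 3),
    ("load reg, [reg+disp8]", 3),
    ("jnz short", 2),
]

x86_bytes = sum(size for _, size in loop)
risc_bytes = 4 * len(loop)            # fixed-width encoding

print(x86_bytes, risc_bytes)          # 11 vs 20
print(risc_bytes / x86_bytes)         # fixed width needs ~1.8x the i-cache
```

    Denser code means more instructions per cache line, which is the cache-friendliness the post refers to.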
     
  18. Scali

    Regular

    Joined:
    Nov 19, 2003
    Messages:
    2,127
    Likes Received:
    0
    Ironic, then, that Intel chose to no longer cache x86 code, but to store the decoded micro-ops instead.
    Apparently the decoding overhead outweighs the cache advantage of compact x86 code.
    I wouldn't be surprised if AMD chooses this path as well, if they ever intend to scale their clock speeds up.
     
  19. pahcman

    Regular

    Joined:
    Jul 1, 2004
    Messages:
    252
    Likes Received:
    0
    There's no writing on the wall, just the same old predictions that the PC is doomed. Same old... http://www.nikkeibp.com/nea/dec99/specrep/ ... yawn.
    I suggest you lay off the hype.

    Technology-wise, there is no way a console can compete head to head over its lifetime. If anything, it boils down to diminishing perceptible visual returns and the cost of software development.
     
  20. Mulciber

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    413
    Likes Received:
    0
    Location:
    Houston
    Which is quite obviously not their intent, given the design targets of the Athlon 64 and their intention to migrate to multi-core processors.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.