Is the writing on the wall? ATI and NV doomed!

What is going to happen to ATI and NV over the next 5-10 years?

  • No, they will figure out a way to survive!
  • They will probably merge with larger companies like AMD and Intel.
  • ATI and NV will merge together to stay competitive!
  • ATI and NV are the future!
  • Power VR is the future!
  • I don't care as long as more episodes of RED DWARF get made!

  • Total voters: 209
SiBoy said:
I think a more interesting question in the mobile space is whether programmable shaders will catch on (like the OpenGL ES 2.0 work is hoping for), and if so, what will the implementation path be? Vertex shaders running on DSPs? Or following the GPU path?

The VGP for MBX is optional; it's a 4-way VS1.1-compliant SIMD. PowerVR will be using proprietary extensions for OGL temporarily, I think; no idea if it's supported in D3DM so far, though.

Bitboys have already announced pixel and vertex shader support in their next-generation mobile chips (no idea yet what versions).

Merely a guess, but I'd say that the GPU path will get followed, and it seems like the embedded space is catching up at a very fast rhythm, especially when looking at some recent presentations concerning OGL-ES 1.1 and D3DM, and of course the plans for both in the immediate future.

Silicon Valley is littered with bad media-processor ideas...

<shrugs>
 
We don't need super fast CPUs, we need super fast memory. If that ever becomes an option you could drop the cache from CPUs and solve a lot of problems at once: the actual non-cache parts of a CPU die are tiny relative to the cache, so more than two cores per CPU would be a given, you could fit more dies on a board, and you could spread out the features for better heat dissipation. If you had some type of RAM that could run at 5-10GHz, that is.. lol.. RAM is the biggest bottleneck in PCs today. I think you need a new RAM cell technology to really break the bottleneck, not hacks and funky designs.
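
A quick way to see that wall, as a minimal C sketch (buffer size, stride, and hop count are arbitrary assumptions): chase dependent pointers through a buffer far bigger than any cache, so every load is a full round trip to RAM.

```c
/* Minimal sketch: dependent loads through a 64 MB buffer (assumed to be
 * far larger than any cache), linked with a large odd stride so the
 * prefetcher can't hide the RAM latency. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t n = 64UL * 1024 * 1024 / sizeof(size_t);
    size_t *buf = malloc(n * sizeof(size_t));
    if (!buf) return 1;

    size_t stride = 4099;                  /* odd, so it cycles through all of n */
    for (size_t i = 0; i < n; i++)
        buf[i] = (i + stride) % n;

    long hops = 10000000;
    clock_t t0 = clock();
    size_t idx = 0;
    for (long h = 0; h < hops; h++)
        idx = buf[idx];                    /* each load depends on the previous */
    double sec = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%.1f ns per dependent load (idx=%zu)\n", sec * 1e9 / hops, idx);
    free(buf);
    return 0;
}
```

The cache-resident version of this loop would run an order of magnitude faster on typical hardware, which is the whole case for "super fast memory" in a nutshell.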

Unless there is some solution for ram, I doubt that the number of cores on a die will matter much for games.
 
If a processor like Cell is sufficiently fast, why can't it emulate the x86 instruction set like Transmeta does and allow the industry to shift? If Cell can run Windows at half the speed of a 3GHz P4, then it's still pretty usable.

We already have a CPU like that: the Itanium. A 1.5 GHz Itanium is said to perform like a 1.5 GHz Xeon when emulating x86 in Windows XP.
And Transmeta itself is similar... it runs x86 emulated at 'usable' speeds, but not quite as fast as 'native' x86 CPUs like the Athlon and Pentium.
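
For what it's worth, the heart of any software emulator is a fetch-decode-dispatch loop like the toy below (the mini-ISA is invented for illustration and looks nothing like real x86 encoding). The dispatch bookkeeping costs several host instructions, and often a branch mispredict, per emulated instruction, which is roughly where "half the speed" figures come from.

```c
/* Toy fetch-decode-dispatch loop; the mini-ISA is invented and far
 * simpler than real x86, but the per-instruction overhead is the point. */
#include <stdint.h>
#include <stdio.h>

enum { OP_MOV_IMM, OP_ADD, OP_HALT };              /* hypothetical opcodes */

typedef struct { uint8_t op, dst, src; int32_t imm; } Insn;

int main(void)
{
    int32_t reg[8] = {0};                          /* emulated register file */
    Insn prog[] = {
        { OP_MOV_IMM, 0, 0, 40 },                  /* r0 = 40  */
        { OP_MOV_IMM, 1, 0, 2  },                  /* r1 = 2   */
        { OP_ADD,     0, 1, 0  },                  /* r0 += r1 */
        { OP_HALT,    0, 0, 0  },
    };

    /* every emulated instruction pays for this loop and switch */
    for (Insn *pc = prog;; pc++) {
        switch (pc->op) {
        case OP_MOV_IMM: reg[pc->dst] = pc->imm;       break;
        case OP_ADD:     reg[pc->dst] += reg[pc->src]; break;
        case OP_HALT:    printf("r0 = %d\n", reg[0]);  return 0;
        }
    }
}
```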

The problem is extremely simple: How do you sell a CPU that is slower, but not significantly cheaper?
The answer is equally simple: You don't.

Technically the transition to Itanium is possible, and it would be a better choice for the future than x86-64. But for the simple reason that x86-64 is slightly cheaper and gives more performance in native x86 programs, it is not going to happen. People invest in the now, not in the future.

My hopes are on .NET to cut us loose from the x86 legacy and finally give us free choice of hardware (something I consider far more important than all that free software nonsense). Thankfully the GPUs never got stuck on a standard ISA, and have used the virtual-machine approach from the start. I think that is part of the reason why GPUs have evolved this fast. I think that CPUs will evolve much faster as well, as soon as the x86 legacy is over. There will be a lot more freedom to experiment with new technology.
 
Unless there is some solution for ram, I doubt that the number of cores on a die will matter much for games.

Wouldn't we also need bus protocols that deliver substantially higher bandwidth than PCI-E? Otherwise the bus itself will become the bottleneck if host RAM ends up with a much higher bandwidth than the bus can deliver to the host CPU. Needless to say, texturing over the bus should usually be avoided.

I wonder if, and to what degree, Virtual Memory will make a difference in future WGF-compliant GPUs in that department.
 
Wouldn't we also need bus protocols that deliver substantially higher bandwidth than PCI-E? Otherwise the bus itself will become the bottleneck if host RAM ends up with a much higher bandwidth than the bus can deliver to the host CPU. Needless to say, texturing over the bus should usually be avoided.

I wonder if, and to what degree, Virtual Memory will make a difference in future WGF-compliant GPUs in that department.

Bus speed isn't a huge problem compared to something fundamental like RAM cells. A memory controller on the CPU would probably be the way to go, to avoid having two GHz parts on the board at once. Once you've got that, the low-bandwidth stuff can be done with an offboard chip. It's a given that if you remove the cache from the CPU it's not going to be the same parts you've got today, so a fast interconnect to some hypothetical super RAM is certainly doable. Hey, if we're dreaming here, we could have RAM on the CPU.. lol

WGF GPUs and virtual memory: not a clue. By the time you've got any kind of super RAM on the go, I would hope that Windows would be a lot different anyway.. lol
 
Speaking of CPU's...

It's been a year and a half since the 3.2GHz P4 was introduced. We're up to, what, 3.6GHz now? Even taking into account faster bus speeds, larger caches, and on-die memory controllers, raw MHz have hit a wall, and I'm willing to bet that these small "progresses" don't come close to a doubling in performance. So ATi and Nvidia aren't going anywhere soon.

Just a little factoid many people seem to forget... I voted for Red Dwarf, btw.
 
Hmmm, I obviously still seem to be missing your point, since you mentioned games. If you mean onboard GPU RAM with ultra-high frequencies and/or amounts, then I guess I'd be closer. Graphics boards (especially the high end) get packed with as much, and as fast, memory as possible, in order to avoid passing any data over the bus to host RAM anyway.

Onboard graphics RAM nowadays has, in its highest-end incarnations, a maximum bandwidth of 35+GB/sec, and it'll most likely rise over the 40-45GB/sec barrier with the next generation. Unless we're talking about a very unlikely scenario of a SoC with ultra-fast UMA, I can't imagine how anything offboard would be a more efficient way than trying to keep as much as possible in onboard RAM.
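
To put rough numbers on that gap, a back-of-envelope comparison, assuming the PCI-E 1.0 figure of 250 MB/s per lane per direction and the onboard figure from above:

```c
/* Back-of-envelope figures only; PCI-E 1.0 is 250 MB/s per lane per
 * direction, onboard number taken from the post above. */
#include <stdio.h>

int main(void)
{
    double pcie_x16 = 16 * 250e6;          /* ~4 GB/s each way   */
    double onboard  = 35e9;                /* ~35 GB/s high end  */

    printf("PCI-E x16 : %4.1f GB/s\n", pcie_x16 / 1e9);
    printf("onboard   : %4.1f GB/s (%.1fx the bus)\n",
           onboard / 1e9, onboard / pcie_x16);
    return 0;
}
```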

WGF GPUs and virtual memory: not a clue. By the time you've got any kind of super RAM on the go, I would hope that Windows would be a lot different anyway.. lol

My crystal ball can't reach beyond the next 5-6 years. The WGF timeframe is the longest period into the future I can speculate on with any safety.


***edit: entirely OT, but what I currently consider one of the slowest parts of mainstream systems is the hard drive.
 
Ailuros said:
***edit: entirely OT, but what I currently consider one of the slowest parts of mainstream systems is the hard drive.

I'd have to agree with you on that one. They keep upping the theoretical transfer speed, e.g. SATA, SATA II, but the drives can't come anywhere near those speeds in practice. Maybe they need to have a 4-way RAID 0 setup built into the drive? :?
 
GPUs have been growing much faster than CPUs in performance over the last 6 years. I don't see this trend changing for a long time to come.
 
Scali said:
My hopes are on .NET to cut us loose from the x86 legacy and finally give us free choice of hardware (something I consider far more important than all that free software nonsense).
Um, I doubt it. .NET doesn't offer anything fundamentally new that Java doesn't, and it's not multiplatform like Java. So performance-sensitive apps will still always use a "normal" programming language (C/C++, etc.), and .NET will be relegated mostly to web applications (if it catches on much at all...).
 
Chalnoth said:
Scali said:
My hopes are on .NET to cut us loose from the x86 legacy and finally give us free choice of hardware (something I consider far more important than all that free software nonsense).
Um, I doubt it. .NET doesn't offer anything fundamentally new that Java doesn't, and it's not multiplatform like Java.

It does actually: it is being supported by the developer of the most popular OS in the world.
Also, I doubt that MS will make the same mistakes Sun made. Sun practically shot Java in the foot with all the API rehashes and by ignoring multimedia.

Being multiplatform is not an issue at all. I said free choice of hardware. If all hardware runs Windows, so be it, as long as I can choose what hardware that is. .NET makes that a reality. The hardware will have a much larger impact on the performance of the complete system than the OS, in most cases. Running linux doesn't suddenly make my CPU twice as fast as it is when running Windows, so who cares?

So performance-sensitive apps will still always use a "normal" programming language (C/C++, etc.), and .NET will be relegated mostly to web applications (if it catches on much at all...).

You're missing a major feature of .NET over Java: seamless integration with native code. Even 'performance-sensitive' apps will generally not be entirely performance-sensitive. For example, if it has a GUI, that part doesn't have to be coded natively at all. Just like C/C++ and inline-asm back in the day, .NET could be used for most of the code, and native code can be used for the performance-critical parts. And eventually, just like C/C++, the combination of hardware and compiler will become efficient enough to write even most 'performance-sensitive' apps entirely without native optimizations.
And, just like C/C++, you get the advantages of .NET for all parts of an application that you write in .NET. Only the remaining native portions will have to be ported.
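
To make that split concrete: the native half might be nothing more than the sketch below (function name and signature invented for illustration), with the managed side binding to it (in .NET, via DllImport) and keeping all the GUI and glue code managed.

```c
/* Hypothetical native half of a mixed managed/native app: only the hot
 * loop lives here, everything else would be managed code calling in. */
#include <stddef.h>
#include <stdio.h>

double dot_product(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];                /* the performance-critical part */
    return sum;
}

int main(void)                             /* stand-in for the managed caller */
{
    double a[] = { 1, 2, 3 }, b[] = { 4, 5, 6 };
    printf("%.1f\n", dot_product(a, b, 3));   /* prints 32.0 */
    return 0;
}
```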

In short, saying that you will always have C/C++ code is like saying you will always have asm-code. As we know by now, a few exceptions aside, nothing uses asm-code anymore.
 
Chalnoth said:
With the tremendous amount of software that's invested in x86 designs, I have a hard time believing that a major architectural change like that is in the works for CPUs. Such a change won't happen until it needs to.

I think this is one of the major factors, if not THE major factor, that is stopping the evolution of CPUs into true parallel processing. Sure, multi-core is an option in the short term, and might help bridge the gap, but it seems to me the way IBM/Sony/Toshiba are heading is the right way for the long term.

Really, what we need is some kind of paradigm shift in processing to allow future processors to emulate the x86 architecture at speeds at least as fast as the current generation. Only then can we break free of the dead-end that x86 is increasingly becoming. Is this possible in ten years? I think so, if everyone involved put their minds to it. Will it happen? I somehow doubt it. Perhaps what it really needs is for one of the big chip manufacturers to work with Microsoft, because I believe an OS written around the new technology could be the catalyst that is needed.
 
Diplo said:
Chalnoth said:
With the tremendous amount of software that's invested in x86 designs, I have a hard time believing that a major architectural change like that is in the works for CPUs. Such a change won't happen until it needs to.

I think this is one of the major factors, if not THE major factor, that is stopping the evolution of CPUs into true parallel processing. Sure, multi-core is an option in the short term, and might help bridge the gap, but it seems to me the way IBM/Sony/Toshiba are heading is the right way for the long term.

Really, what we need is some kind of paradigm shift in processing to allow future processors to emulate the x86 architecture at speeds at least as fast as the current generation. Only then can we break free of the dead-end that x86 is increasingly becoming. Is this possible in ten years? I think so, if everyone involved put their minds to it. Will it happen? I somehow doubt it. Perhaps what it really needs is for one of the big chip manufacturers to work with Microsoft, because I believe an OS written around the new technology could be the catalyst that is needed.

One of the greatest challenges facing the CELL project is the lack of development of computer languages able to exploit parallelism. All the time and effort poured into C and C++ doesn't help a parallel processor, because those languages are sequential by design. The IBM Blue Gene research is focusing on this area, if I'm not mistaken.
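
As an illustration, even a trivially parallel job has to be decomposed by hand in C; none of the parallelism below is expressed in the language itself, it's all explicit thread plumbing (array size and thread count are arbitrary; build with -lpthread).

```c
/* All the parallelism here is hand-rolled thread plumbing, not part of
 * the C language itself. Sizes are arbitrary; build with -lpthread. */
#include <pthread.h>
#include <stdio.h>

#define N       1000000
#define THREADS 4

static double data[N];

typedef struct { int lo, hi; double sum; } Chunk;

static void *partial_sum(void *arg)
{
    Chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->lo; i < c->hi; i++)
        c->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++) data[i] = 1.0;

    pthread_t tid[THREADS];
    Chunk chunk[THREADS];
    for (int t = 0; t < THREADS; t++) {    /* decompose the work by hand */
        chunk[t].lo = t * (N / THREADS);
        chunk[t].hi = (t + 1) * (N / THREADS);
        pthread_create(&tid[t], NULL, partial_sum, &chunk[t]);
    }

    double total = 0.0;
    for (int t = 0; t < THREADS; t++) {    /* join and combine by hand too */
        pthread_join(tid[t], NULL);
        total += chunk[t].sum;
    }
    printf("sum = %.0f\n", total);         /* 1000000 */
    return 0;
}
```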
 
I'm very sure of one thing: I will never, EVER buy me any kind of console (at least the way these are today).

I hate gamepads :D
 
madmartyau said:
Ailuros said:
***edit: entirely OT, but what I currently consider one of the slowest parts of mainstream systems is the hard drive.

I'd have to agree with you on that one. They keep upping the theoretical transfer speed, e.g. SATA, SATA II, but the drives can't come anywhere near those speeds in practice. Maybe they need to have a 4-way RAID 0 setup built into the drive? :?

Should probably take this to the hardware forum but...

Why can't they do something like that built into the drive? Something like what the Kenwood TrueX design did for fast CD access times/performance at slower rotational speeds... but applied to hard drives. Several read/write heads built into the system somehow.

I remember, quite a long time ago, a drive that had 2 separate read/write heads, and IIRC it was kind of a mess... but I have to believe there's a better way than the single, moving read/write head of today...
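
The "RAID 0 inside one drive" idea boils down to simple striping math; a minimal sketch, with made-up geometry (4 hypothetical independent head assemblies, 128-block stripes):

```c
/* Made-up geometry: 4 independent head assemblies, 128-block stripes.
 * Same math as RAID 0, just inside a single drive. */
#include <stdio.h>

#define HEADS       4
#define STRIPE_LBAS 128

static void map_lba(unsigned long lba, unsigned *head, unsigned long *local)
{
    unsigned long stripe = lba / STRIPE_LBAS;
    *head  = stripe % HEADS;                            /* round-robin */
    *local = (stripe / HEADS) * STRIPE_LBAS + lba % STRIPE_LBAS;
}

int main(void)
{
    unsigned head;
    unsigned long local;
    for (unsigned long lba = 0; lba < 1024; lba += STRIPE_LBAS) {
        map_lba(lba, &head, &local);
        printf("LBA %4lu -> head %u, local block %3lu\n", lba, head, local);
    }
    return 0;
}
```

Sequential reads then spread across all four heads at once, which is exactly what the striping buys you.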
 
Chalnoth said:
The underlying architecture being different from the instruction set is the primary reason why x86 processors waste so many transistors (in the translation layer). And, all other things being the same, they also can't be as fast as a more efficient architecture, since if the compiler has more information about the underlying architecture, it can obviously optimize for it more.

From another point of view, the area used for the translation unit isn't that big overall compared to the size of the chip, and the x86 instruction set happens to be good for producing quite compact and cache-friendly code.
 
dominikbehr said:
From another point of view, the area used for the translation unit isn't that big overall compared to the size of the chip, and the x86 instruction set happens to be good for producing quite compact and cache-friendly code.

Ironic, then, that Intel chose to no longer cache x86 code, but to store the decoded micro-ops instead.
Apparently the decoding overhead matters more than the cache advantage of compact x86 code.
I wouldn't be surprised if AMD chooses this path as well, if they ever intend to scale their clock speed up too.
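
A toy version of what that trace-cache move looks like (all formats invented; real micro-ops are nothing this simple): decode once, cache the fixed-width micro-ops keyed by instruction address, and skip the x86 decoder entirely on a hit.

```c
/* Invented formats throughout; the point is only the hit/miss flow:
 * decode once (slow), then serve fixed-width micro-ops from the cache. */
#include <stdint.h>
#include <stdio.h>

typedef struct { uint8_t op, dst, src; } Uop;      /* fixed-width micro-op */

typedef struct {
    uint64_t tag;        /* address of the original x86 instruction */
    int      n;          /* number of micro-ops it decoded into     */
    Uop      uops[4];
    int      valid;
} Line;

static Line cache[256];

static int fake_decode(uint64_t addr, Uop *out)    /* stand-in slow decoder */
{
    out[0] = (Uop){ 1, (uint8_t)addr, 0 };
    return 1;
}

static const Uop *lookup(uint64_t addr, int *n)
{
    Line *line = &cache[addr & 255];
    if (!line->valid || line->tag != addr) {       /* miss: pay for decode */
        line->n = fake_decode(addr, line->uops);
        line->tag = addr;
        line->valid = 1;
    }
    *n = line->n;                                  /* hit: decoder skipped */
    return line->uops;
}

int main(void)
{
    int n;
    lookup(0x1000, &n);                            /* miss: decodes        */
    lookup(0x1000, &n);                            /* hit: no decode       */
    printf("cached %d micro-op(s) for 0x1000\n", n);
    return 0;
}
```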
 
There is no writing on the wall, just the same old predictions that the PC is doomed. Same old... http://www.nikkeibp.com/nea/dec99/specrep/ ... yawn.
I suggest you lay off the hype.

Technology-wise, there is no way a console can compete head to head over its lifetime. If anything, it boils down to diminishing perceptible visual returns and the cost of software development.
 
Scali said:
I wouldn't be surprised if AMD chooses this path as well, if they ever intend to scale their clock speed up too.

Which is quite obviously not their intent, given the design targets of the Athlon 64 and their intention to migrate to multi-core processors.
 