What other hardware/technology is on the horizon?

Fuz said:
It's a bit worrying when 90% of the threads on this board somehow end up being about the R300 or the NV30. One would think that there are no other companies out there in this industry.
The way things are going, it seems to me that we may eventually end up with just two players, just like AMD and Intel. In fact, I am willing to bet on it... it won't happen overnight, but it will happen.

Hasn't it already happened? By the way, the CPU market is on its way to having only a single player... :-?
 
Chalnoth said:
3dcgi said:
In an article in Wired, Nvidia's CEO said he thought it would be the other way around, with the CPU being integrated into the GPU. For the low end the quoted developer might be correct, but I wonder if that is really any cheaper than having graphics in the chipset.

I think that eventually, this will be the case. A few things:

If something like this happens (and it may or may not; there are different scenarios possible), it is much more likely to be the other way around - Intel will do the integrating, and nVidia and ATI will fall by the wayside.

Intel has x86, they have the CPUs, they have the money, they have the fabs and process technology, and they have the industry clout to change the paradigms. And they are doing it already. As Humus pointed out, they are the second-biggest graphics supplier as of now.

For a thought experiment, consider that Intel will introduce their 6.4GB/s chipsets this spring. Probably sooner rather than later they will produce an integrated graphics variant, which will then actually have quite decent bandwidth to work with. If Intel utilizes some of the bandwidth-conserving technology they have in house, they are set to produce an integrated graphics solution that in one fell swoop kills off ATI's and nVidia's low-end and low-mid-end markets completely.
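For reference, here is the back-of-the-envelope arithmetic behind that 6.4GB/s figure, assuming it refers to a dual-channel DDR400 memory controller (my assumption for illustration, not something Intel has spelled out):

```python
# Rough sanity check of the 6.4 GB/s chipset figure, assuming it comes from
# a dual-channel DDR400 memory controller (an assumption for illustration).
channels = 2               # dual-channel configuration
bus_width_bits = 64        # bits per channel
transfers_per_sec = 400e6  # DDR400: 400 million transfers per second

bandwidth_bytes = channels * (bus_width_bits / 8) * transfers_per_sec
print(f"{bandwidth_bytes / 1e9:.1f} GB/s")  # -> 6.4 GB/s
```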

It may not happen this generation, but it is only a matter of (short) time, since we are already seeing the process in motion. So do ATI and nVidia, of course; it is not by chance that they have both produced north-bridge designs. But I can't see either of them really standing a chance at this, both for technology reasons and due to Intel's control of the platform.

And at some point after they dominate in volume, Intel may well introduce more radical changes to PC architecture if they deem it profitable.
Entropy
 
The developer in question was Tim Sweeney, IIRC.

As for the Parhelia review, that was really a throwaway comment. Of course I will complete it.
 
Chalnoth said:
3dcgi said:
In an article in Wired, Nvidia's CEO said he thought it would be the other way around, with the CPU being integrated into the GPU. For the low end the quoted developer might be correct, but I wonder if that is really any cheaper than having graphics in the chipset.

I think that eventually, this will be the case. A few things:

1. It appears that, currently, quantum technologies probably won't be useful for general processing. At least, not for quite a while (they will be infinitely useful for specialized processing scenarios...).

2. Processor speeds have been outpacing bus speeds for a long time. This is only natural, and will continue into the future. Electrodynamics essentially forces this to be true. As this continues, the bus speeds will become the limiting factor.

3. When the bus speeds become enough of a limiting factor, then it will be beneficial, in terms of performance, to integrate more and more onto a single die. Eventually the fastest desktop computers, for real-time applications, will all use system-on-a-chip designs. It'll be a while, though.

What will be really interesting is how this is dealt with in the future. I really, really doubt that either nVidia or ATI will merge with Intel or AMD. Since ATI and nVidia are both fabless, we may instead just see "strategic partnerships" between the companies, where, at first, nVidia and ATI will still sell their own graphics chips, but eventually their individual graphics chip sales will be relegated to niche markets, with most of their money coming from royalties. At least, that's what I see happening.

1. Companies need to be able to manufacture and mass-produce a quantum transistor model before any practical quantum computing can take place.

2. Very easy solution: fibre optics. Zero resistance, zero EMI, zero RFI, almost zero heat generation. Also, photons travel faster than electrons.

3. Agreed, but chip-to-chip interconnects using fibre optics may also be used.

4. Photonic processors might be the next step.
 
PC-Engine said:
2. Very easy solution: fibre optics. Zero resistance, zero EMI, zero RFI, almost zero heat generation. Also, photons travel faster than electrons.

3. Agreed, but chip-to-chip interconnects using fibre optics may also be used.

4. Photonic processors might be the next step.
Fibre optics isn't *that* easy. It will probably come, but as of now it requires gallium arsenide (which is difficult to integrate into a silicon CMOS chip) and rather expensive connectors. I haven't heard about any prototypes of fibre chip-to-chip interconnects yet, even with all the research going on around the subject.

Photonic processors? An OR, an XOR and a NOT(!!) gate have actually been built - which are all the building blocks really needed to make processors. I would give it 10-15 years until mass-market introduction, though.
 
Perhaps I'm wrong, but the problem I see with optical electronics is that the wavelength of a typical photon that would be used is much larger than the feature size in today's electronic circuits. Take something on the scale of 100nm-700nm and you're talking about wavelengths that correspond to terahertz-to-petahertz-frequency light. Perhaps I'm missing something, but I don't see how you could design a photonic processor with 100M optical logic gates on it comparable to today's chip sizes. Moreover, it seems like you'd be using X-rays as your photons.
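To put rough numbers on that (a quick back-of-the-envelope check of f = c / wavelength, nothing more):

```python
# Frequencies corresponding to feature-scale wavelengths, via f = c / lambda.
c = 3.0e8  # speed of light, m/s (approximate)

for wavelength_nm in (700, 100):
    f_hz = c / (wavelength_nm * 1e-9)
    print(f"{wavelength_nm} nm -> {f_hz:.2e} Hz")

# 700 nm -> ~4.3e14 Hz (hundreds of terahertz, visible red light)
# 100 nm -> ~3.0e15 Hz (a few petahertz, extreme ultraviolet)
# Wavelengths much shorter than 100 nm head into the X-ray regime.
```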

I think the future lies in three areas: 1) reducing feature size even further (possibly with a shift towards molecular logic), 2) taking advantage of quantum effects, and 3) utilizing reversible logic (for power reduction).

Pure optical sounds great for broadband and high-speed switching and routing, but I don't know if it will work if you try to use it as a replacement for today's silicon-based circuits.

I like the idea of a switch that can be triggered by a single electron or atom with little more than kT ln 2 of energy.
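For scale, kT ln 2 at room temperature (the Landauer limit for erasing one bit) works out to a vanishingly small amount of energy; a rough calculation:

```python
import math

# Landauer limit: minimum energy to erase one bit, k * T * ln(2),
# evaluated at roughly room temperature.
k = 1.38e-23  # Boltzmann constant, J/K
T = 300.0     # temperature, K

e_joules = k * T * math.log(2)
e_ev = e_joules / 1.602e-19
print(f"{e_joules:.2e} J (~{e_ev:.3f} eV) per bit")  # ~2.9e-21 J, ~0.018 eV
```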
 
The only problem I see for an integrated GPU-CPU (or GPU-chipset) solution is bandwidth. If you have two separate buses (one of them with very large bandwidth), it is hard for a single bus with reduced bandwidth (which must be shared between the CPU and the GPU) to compete. That is what happens with current chipsets with integrated graphics. If the bandwidth problem disappears, or an additional bus (GPU to memory) is no longer cost-effective, there is a good chance that standalone GPUs will disappear.
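As a toy illustration of why the shared bus loses out (the figures below are my own rough assumptions for a current platform, not measurements):

```python
# Toy comparison: integrated GPU on a shared system bus vs. a discrete card
# with its own local memory. All figures are illustrative assumptions.
shared_bus_gbs = 6.4       # single system bus, e.g. dual-channel DDR400
cpu_share = 0.5            # assume the CPU eats about half of the shared bus
discrete_local_gbs = 19.8  # local memory bandwidth of a high-end discrete card
agp8x_gbs = 2.1            # AGP 8x link for uploads on the discrete card

integrated_gpu = shared_bus_gbs * (1 - cpu_share)
discrete_gpu = discrete_local_gbs + agp8x_gbs

print(f"integrated GPU: ~{integrated_gpu:.1f} GB/s")  # ~3.2 GB/s
print(f"discrete GPU:   ~{discrete_gpu:.1f} GB/s")    # ~21.9 GB/s
```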

The model then could be, as Arjan said, a normal CPU with multiple programmable shader units (stream processors) and perhaps some fixed-function hardware for better performance (for example, rasterization). In fact, that is similar to the Emotion Engine architecture, where a general-purpose CPU (MIPS) is integrated on the same chip with two vector (SIMD) units (VU0, VU1).

If this happens, Intel or AMD have the advantage that they own the more complex part, the general-purpose CPU, and the more expensive one, the fabs. Both NVidia and ATI could go the way of Transmeta (a fabless processor company), but I just don't see them designing CPUs... In any case, this will only happen when bandwidth either stops being a problem or becomes impossible to solve (more likely), and integrated solutions are able to close the gap enough.
 
Well, I'd suggest a freeze on the level of programmability for a couple of years so that:

a) ISVs can catch up and make real use of DX9 (ps/vs 2.0-3.0) level functionality in something other than shiny tech demos.
b) IHVs get the chance to spend area on making the HW sufficiently fast that all this shader instruction space and functionality that everyone keeps insisting is essential can be used in a useful manner, rather than just being a tick box.
c) I can give my brain a rest!

John.
 
JohnH said:
c) I can give my brain a rest!

Naaahhh... you'd be bored silly, John... you need this, it's what keeps you going in life!

I agree on the other points though 8)
 
Fuz said:
...
P10 > Creative decide to release a consumer level P10 board after all, released early 2003.

Do you have any link to this statement from Creative?

It would be nice to see another "true" deferred renderer on the market: I bought a VP560 and was *very surprised* about "slipstream technology" -> it works great! In spite of the lower clock rates of the P9 and the 128-bit memory on the card, it easily outperforms my VP760 in most games and benchmarks (e.g. Villagemark: 51 fps/81.8 fps, Templemark: 37.8 fps/94 fps, and RtCW: 65.8 fps/77.7 fps). And the price is less than half :D!
3DLabs definitely has a nice piece of silicon. So let's wait for the P20 or S63 (I found these strings in the latest drivers)!

And Kristof: I recommend buying a VP560 to see what the competitor has to offer ;)...
 
Tim Sweeney said CPUs would become faster so much more quickly than graphics processors that all graphics calculations would be done on the CPU, making graphics processors obsolete (not that actual GPUs would be integrated into the CPU). He later retracted that statement, saying the growth of GPU power/speed had outstripped his expectations.
 
Fuz said:
DaveBaumann said:
That's an understatement. As our news post stated, in Q2 Intel held 62% of the integrated market, which equated to 27% of the overall market, making them the second largest in terms of graphics market share and pushing ATI into third.

How long do you ppl think it will be before Intel buys either Nvidia or ATI?
Call it a merger if you will. My guess is that Intel and ATI merge, and AMD and Nvidia merge. If any merger did take place between these companies, that would definitely mean the end for the smaller players out there.

I can't recall who said it, but one of the big-shot developers was asked where he thought the graphics industry was headed, and he thought that within 10 years the graphics processor would be integrated with the CPU.

This seems like a logical step to me, given that we are seeing integration all around us.

IMO there will be two slots on the motherboard, one for a processor and one for a graphics chip. Graphics card makers will somehow integrate their new chip onto a motherboard - maybe a company that has nothing to lose trying to break into the high-end graphics market. All they would have to do is get together with a motherboard maker and design a slot right on the board for a GPU chip, just like there is now for a CPU chip. The motherboard maker would make the board upgradable so future chips could just clamp in as well.

Think of the speed advantages of a chip like this. It wouldn't need to be as "fast" or as "advanced" in tech, because its advantage would be solid integration right into the motherboard. The cache would be lightning fast. Remove the AGP port for another DDR slot... or two. No need to worry about AGP bandwidth, etc. Maybe I'm way off base, but who knows? ;)

Eventually the chip makers will integrate right onto the CPU chip - this is the final stage. With the process going down to 9nm this looks more and more possible. They even have new technology (damn, I wish I had the link still) that uses carbon nanotubes a few atoms wide. These tubes are stable and, doped with conductive material, will yield chips with billions of transistors, not millions. We are in for a whole new era, my friends... bring on Doom XVIII. :LOL:
 
Modulor,

Highly interesting. Do you or anyone else have links or whatever to white papers or PDFs concerning Slipstream?
 
1) Programmable graphics are going to be unified very shortly and, for the most part, feature-complete. The new emphasis will be on new features such as better filtering and more accurate colour representation/calculation.

2) BitBoys licensed their tech to someone... Someone with their own fab who can make eDRAM? Someone who wants a solution to integrated graphics that won't suck up the memory bus their 800MHz+ FSB devours? Someone who has the resources to license and develop the tech? Someone who wants total domination in ALL computer hardware sectors? Could this describe INTEL?

3) You need: CPU, GPU, sound DASP, networking CODEC, memory controller, RAM, and various peripheral controllers. The K8 already has the memory controller integrated; the only other thing limited by bandwidth is the GPU. Multi-core processors? Been done. A multi-core CPU/GPU processor with integrated memory controller and eDRAM (refer to #2)? A northbridge with integrated sound DASP or networking? The Crush K8 (name? you know which one I mean) with the MCP, and Granite Bay (or its big brother? I forget more than most people will ever know :p) with integrated gigabit ethernet. Integrate a storage and external I/O device controller, put it on a high-speed bus to the MPU, or "main processing unit," coupled with a nice wide bus to the RAM, and there you go. You don't need a lot of long traces with only three chips; most of it will be inside the packaging, which could use BBUL to go ultra-fast. And it would all fit inside a tiny little box. Of course, cooling and power supply would be a different matter...

4)
Perhaps I'm wrong, but the problem I see with optical electronics is that the wavelength of a typical photon that would be used is much larger than the feature size in today's electronic circuits. Take something on the scale of 100nm-700nm and you're talking about wavelengths that correspond to terahertz-to-petahertz-frequency light. Perhaps I'm missing something, but I don't see how you could design a photonic processor with 100M optical logic gates on it comparable to today's chip sizes. Moreover, it seems like you'd be using X-rays as your photons.
One word (or acronym, rather): RISC. A small, simple instruction set that executes REALLY fast, especially when paired with photons. But let me also mention magnetic fields: changes in magnetic fields move just as fast as photons. And as for quantum computers, let's replace the system RAM with quantum-based RAM. Now there's your infinite bandwidth, as long as the bus can handle it... Even better, let's integrate a few TB into the "MPU" (if economically possible) (see #2). Would that be fast enough for you? I know I'd be phase-change cooling and volt-modding mine :LOL:

[edit]

5) "CMSOA," or "Computational Synthetic MicroOrganism Arrays." With the recent development of partially-synthetic microorganisms, it riases the question of just how far we can go with these. it would be possible, in theory, to create microorganisms capable of performing basic logic functions. with a properly coded gene for specific reproduction paramiters it would be possible to actually grow data processors. these processors would be limited only by the speed at which they interfaced with I/O devices and the physical volume of their container. it would, (once again) in theory, be possible to grow a data processor array capable or more complex operations than even the human brain! now, the ability to emmulate the human brain would not be possible for quite some time as it would take an unimagineable amount of processing power to analyze and recreate the incomprehensible number and the exact structure of the operations our brains process. Perhaps we will end up in a situation similar to that of "The Hitchhikers Guide to the Universe" where we have to create a computer just to itself design an even more powerful computer capable of performing that which we desire. Some people may raise the question of whether its possible for a computer to design another computer that is more powerful than it. That is irrelevant as all we need to do is synthesise evolution. A small, simple data processor that is capable of evolving and reproducing at a very fast rate could realize a logarithmic curve in processing power.



6) Or maybe this is all a bunch of crazy crap dreamed up by a schizotypal teenager... :rolleyes:
 
I'd just like to make a little comment on the quantum computing. I'm not absolutely certain that the bandwidth would indeed be infinite. I would rather expect that in first-generation quantum memories the bandwidth would be quite limited due to the manufacturing. Yes, it will increase, but I think that quantum memories' main pull will be size.

As another note, I think quantum computing as a whole is even more exciting. It's really going to revolutionize the way programmers think about producing algorithms for computers. That is, with a quantum computer, if you have multiple data sets that will follow the same processing for a final output, the quantum computer can do all of that processing at once. Current designs are looking at on the order of 100 qubits for processing, and to date one of the main things a quantum computer would be able to do very well is code-breaking. You know all of those 128-bit and higher encryption keys that are used on the 'net? The keys that take upwards of many days to a few weeks to crack on today's computers? A quantum computer of sufficient size could do it in less than a second. With this kind of possibility to increase processing power, it will only be a matter of time before software engineers find far more uses for the technology (radiosity lighting may be able to make use of a quantum computer...).
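To give a feel for the scale of the speedup being talked about, here is a rough sketch assuming a Grover-style quantum search is what gets applied to a 128-bit symmetric key (a scaling comparison only, not a runtime estimate):

```python
import math

# Classical brute force needs ~N/2 guesses on average over an N-key keyspace;
# Grover's quantum search needs on the order of sqrt(N) oracle queries.
key_bits = 128
keyspace = 2 ** key_bits

classical_guesses = keyspace // 2      # expected classical work: ~2^127
grover_queries = math.isqrt(keyspace)  # quantum oracle queries: ~2^64

print(f"classical: ~2^{classical_guesses.bit_length() - 1} guesses")
print(f"Grover:    ~2^{grover_queries.bit_length() - 1} oracle queries")
```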
 
Highly interesting. Do you or anyone else have links or whatever to white papers or PDFs concerning Slipstream?

There aren't any yet, as 3Dlabs are not putting the exact details in the public domain right now.

I will say that this is the first implementation and there will be further enhancements/revisions yet.
 
Good god, one moment you're about to apply the barrel of some vast pain device to a competitor in some vast frag fest, the next your quantum vertex shader finds a different probability path and bang, you've turned into a cow chewing cud, watching Val Doonican on a '50s B&W telly while Homer Simpson measures you up for a nice steak dinner... Hmmm, steak, garrrr...
 