Who will get there first?

What we really need is a very strong new competitor to ATI and Nvidia. Maybe all the remaining graphics companies could combine their efforts.

PowerVR + Bitboys (lol) + S3 + XGI, and maybe 3Dlabs and/or E&S. We need a breakthrough graphics processor, and not 10 different iterations of the same processor, but maybe 2 or 3: one for desktop, console, workstation and arcade; one for mobile products; and one for set-top boxes or whatever. Something that is a significantly different processor that processes graphics more efficiently, maybe even something that moves away from scan conversion / rasterization / z-buffer type rendering. I don't know, but something that breaks through some of the limitations that both ATI and Nvidia have. Of course, we ALL want this: the next Voodoo or GeForce product that blows everything else away for one generation (say 2-3 years).
 
i think we need to consider the sw side of it just as well. and it's not just about devs learning new architectures and getting used to this or that. remember, the pc is the one 'versatile' platform where devs tend to shoot for the lowest common denominator. otoh, consoles are the one territory where you either exploit the hw well or you go, well, nowhere. so despite the advent of PCIe, don't expect to start seeing pc devs exploiting the '[cpu]<=>[gpu]' model anytime soon. in contrast, you can bet that the first console to offer a more symmetrical cpu-gpu model will be put to good use : )
 
darkblu said:
i think we need to consider the sw side of it just as well. and it's not just about devs learning new architectures and getting used to this or that. remember, the pc is the one 'versatile' platform where devs tend to shoot for the lowest common denominator. otoh, consoles are the one territory where you either exploit the hw well or you go, well, nowhere. so despite the advent of PCIe, don't expect to start seeing pc devs exploiting the '[cpu]<=>[gpu]' model anytime soon. in contrast, you can bet that the first console to offer a more symmetrical cpu-gpu model will be put to good use : )

But what does this actually have to do with the discussion at hand? The 3D IHVs themselves are now not purely designing for PCs...
 
Megadrive1988 said:
I see CELL as being good to take over 2 of those stages, the CPU stage and the Geometry Engine stage. But graphics companies like Nvidia and ATI are still the best at the last 2 stages, the Raster Manager and Image Generator stages. Modern PC hardware is still modeled after the SGI pipeline for the most part, aside from newer things like pixel shading.

But with the hundreds of millions of dollars that go into architecting a graphics processor, do you really believe that ATI and NVIDIA would be content with just the basic pixel raster functions in the future? Their primary development costs are not going into those areas at the moment; apart from elements such as new AA algorithms, all their development costs are going into the programmable pipeline. Do you not think they are going to look towards protecting that investment in the future? Do you not think they are going to try and extend it?

The raster functions are already fairly well known; as I've said numerous times, Sony already has the tech for this with PS2, so they don't need it from someone else. It would seem clear they have chosen NVIDIA for their programmable pixel pipeline capabilities as much as anything else, so who's to say this won't continue in the future?
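To make the stage split being argued about here concrete, here is a rough C sketch of the classic SGI-style pipeline - the type and function names are purely illustrative, not any real API, and the comments mark where each stage would live under the "CELL does the front end, the GPU does the back end" proposal:

Code:
/* Opaque handles standing in for real data structures - names are illustrative. */
typedef struct Scene Scene;
typedef struct Framebuffer Framebuffer;
typedef struct DrawList DrawList;
typedef struct TransformedList TransformedList;
typedef struct Fragments Fragments;

/* Hypothetical stage functions, one per classic pipeline stage. */
DrawList        *build_draw_list(Scene *scene);
TransformedList *transform_and_light(DrawList *list);
Fragments       *rasterise_and_shade(TransformedList *xformed);
void             resolve_to_display(Fragments *frags, Framebuffer *fb);

void render_frame(Scene *scene, Framebuffer *fb)
{
    /* CPU stage: traversal, animation, visibility - proposed home: CELL */
    DrawList *list = build_draw_list(scene);

    /* Geometry Engine: transform, lighting, clipping - proposed home: CELL */
    TransformedList *xformed = transform_and_light(list);

    /* Raster Manager: triangle setup, rasterisation, pixel shading - GPU */
    Fragments *frags = rasterise_and_shade(xformed);

    /* Image Generator: blending, AA resolve, scan-out - GPU */
    resolve_to_display(frags, fb);
}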
 
DaveBaumann said:
darkblu said:
i think we need to consider the sw side of it just as well. and it's not just about devs learning new architectures and getting used to this or that. remember, the pc is the one 'versatile' platform where devs tend to shoot for the lowest common denominator. otoh, consoles are the one territory where you either exploit the hw well or you go, well, nowhere. so despite the advent of PCIe, don't expect to start seeing pc devs exploiting the '[cpu]<=>[gpu]' model anytime soon. in contrast, you can bet that the first console to offer a more symmetrical cpu-gpu model will be put to good use : )

But what does this actually have to do with the discussion at hand? The 3D IHVs themselves are now not purely designing for PCs...

yes, of course you're right. but still, in the grand scheme of things in the universe, i think it's way more interesting to speculate who will be the first to move these things into the plane of the computing experience of us all. say, in the way 3dfx did it with the classic sgi imr architecture. ...of course, we'd need to step out of the 'longhorn' mindset for that.
 
Actually, one of the points of Longhorn seems to be to effectively create an entirely new platform, in an effort to make it easier for software devs to target a higher base feature-set by making the software "Designed for Longhorn". At present Longhorn's minimum graphics-to-host interface requirement is AGP 8X, but we'll see if that gets bumped up to PCIe or not.

[Edit] Arguably this is already a non-issue in some of the platforms the 3D IHVs are designing for, mobile phones being the main one. ATI's 3D graphics architecture will soon be integrated on a single die by Qualcomm (as ARM already do with PowerVR), meaning the graphics-to-CPU interaction is much quicker. Obviously, this is low on the performance scale though.
 
I don't know who'll do it first, but the sooner the better I say! I'm still hoping Cell pans out and in the future I'll be able to buy a Cell PC without a soundcard, graphics card, or any other hardware to cause conflicts. I'll be able to buy software such as Cakewalk without it having bizarre and untraceable conflicts. I'm also sick of games that won't run on some cards, and audio drivers that don't work with some software. Personally I'd like to see Sony/IBM/Toshiba and others oust MS and see a new baseline OS without MS's painful faults. Maybe AmigaDOS and Workbench will reappear... :D
 
DaveBaumann said:
...
but I see Panajev is already looking towards Cell 2 in order to provide this
...

Don't discount the NV GPU being CELL-based so quickly, as I mentioned in the other thread:

http://www.beyond3d.com/forum/viewtopic.php?p=437597#437597

It would be natural to do VS work on the CELL CPU. If you follow the above post/thread it doesn't make sense to make a non-CELL GPU. But I'll continue that discussion in the other thread if this is off-topic here...

DaveBaumann said:
...
If we are ultimately going towards a general purpose processing model, who will get there first? Will it be Intel? Will it be Sony / CE electronics vendors? Or will it be the 3D IHVs?

...

We need stricter definitions to answer that question. And you'll get different answers based on those definitions.

DeanoC was in one of his 'Obi-Wan-Kenobi' moods and asked me the following question,

"What makes a CPU a CPU and a GPU a GPU?"

http://www.beyond3d.com/forum/viewtopic.php?p=345790#345790

I eventually answered it was the SAME THING! :D

The above definition would suggest the answer to your question is Intel.

If you use another definition, say the one from that link nAo provided:

3Dlabs said:
CPUs and GPUs exist because of their different design goals

CPUs – maximize performance and minimize cost of executing SCALAR code

GPUs – exploit parallelism to beat CPUs at executing VECTOR code

If your definition of a GPU is the above, then a CPU that executes vector code efficiently (vector supercomputers have been doing this for decades) and was available in the consumer market would again be another CPU. IIRC (but please correct me on this), consumer-class CPUs gained vector (SIMD) units with IBM's/Motorola's PowerPC and AltiVec, Intel's x86 equivalents MMX and later SSE, and AMD's 3DNow!. The order is from memory, but the point is that they were all CPUs.
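To make the "CPU executing vector code" point concrete, here's a minimal sketch using x86 SSE intrinsics - a scalar loop versus a four-wide SIMD loop. It assumes the element count is a multiple of four:

Code:
#include <xmmintrin.h>   /* SSE intrinsics */

/* Scalar version: one float add per loop iteration. */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Vector version: four float adds per instruction using 128-bit SSE registers.
   Assumes n is a multiple of 4. */
void add_sse(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}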

Notice they've all been CPUs so far, as a GPU isn't general enough to run any commercial OS. Therefore a GPU is only a co-processor to a CPU. There are no GPUs on any roadmap, AFAIK, that are general enough to run any OS.

Another definition would be that modern GPUs are programmable stream processors... but then there are no stream processors available, CPU or GPU, that will run an operating system. And multi-core CPUs aren't classed as stream processors: IBM's Power/PowerPCs, Intel/AMD's x86s, Intel's Itanium, Sun's MAJC or the forthcoming Niagara, etc.

However, the CELL processor has been described as a stream processor and can act as a CPU to sustain an OS. So that definition would suggest Sony/Toshiba/IBM would be the first, and the first product would be the Sony/IBM CELL workstation.

Bottom line: unless GPUs are general enough to sustain an OS, they're just classed as another co-processor, just like a programmable DSP co-processor that can be classed as a media engine/processor or a sound processor etc., and they're all converging towards CPU-like functions as GPUs are. :D

Just my 2 cents...
 
Jaws said:
It would be natural to do VS work on the CELL CPU. If you follow the above post/thread it doesn't make sense to make a non-CELL GPU. But I'll continue that discussion in the other thread if this is off-topic here...

Personally I don't discount the idea of Cell doing the vertex processing, as I've already suggested in posts like these. At present I have no definite information one way or another yet.

Jaws said:
Bottom line: unless GPUs are general enough to sustain an OS, they're just classed as another co-processor, just like a programmable DSP co-processor that can be classed as a media engine/processor or a sound processor etc., and they're all converging towards CPU-like functions as GPUs are.

Obviously this has to take performance into account. CPUs were traditionally used to do the vertex processing but were quickly outclassed by on-board graphics processing; that work might shift back more towards the newer CPUs soon (this year), but they are still not tuned for fragment processing to any degree of performance close to a graphics processor.
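For anyone wondering what "the CPU doing the vertex processing" boils down to, here's a bare-bones sketch: every vertex gets multiplied by a 4x4 matrix before the GPU ever sees it. Plain scalar C for clarity (a real implementation would use SIMD, and a real vertex shader also does lighting, skinning, etc.); a column-major matrix layout is assumed:

Code:
typedef struct { float x, y, z, w; } Vec4;

/* Transform 'count' vertices by a column-major 4x4 matrix m. */
void transform_vertices(const float m[16], const Vec4 *in, Vec4 *out, int count)
{
    int i;
    for (i = 0; i < count; i++) {
        out[i].x = m[0]*in[i].x + m[4]*in[i].y + m[8]*in[i].z  + m[12]*in[i].w;
        out[i].y = m[1]*in[i].x + m[5]*in[i].y + m[9]*in[i].z  + m[13]*in[i].w;
        out[i].z = m[2]*in[i].x + m[6]*in[i].y + m[10]*in[i].z + m[14]*in[i].w;
        out[i].w = m[3]*in[i].x + m[7]*in[i].y + m[11]*in[i].z + m[15]*in[i].w;
    }
}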
 
Jaws said:
However, the CELL processor has been described as a stream processor and can act as a CPU to sustain an OS. So that definition would suggest Sony/Toshiba/IBM would be the first, and the first product would be the Sony/IBM CELL workstation.

A Cell processor is a host CPU (the PU) coupled to a stream processor (the SPUs), to use the original Stanford terminology. So it's not that Sony somehow figured out how to run an OS on a stream processor - they have a conventional host CPU stuck inside.

If anything, it reinforces the split between the CPU-like part (PU) and the GPU-like part (SPUs). The GPU-like part (the stream processor) currently isn't up to the task of anything more than perhaps the VS functionality, so enter Nvidia to fill in the rest.
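As a very rough sketch of that PU/SPU split - this is a generic "host dispatches kernels over batches of data" model, not the real Cell SDK API, and on the real chip each batch would be DMA'd into a stream unit's local store and run in parallel:

Code:
#include <stddef.h>

typedef struct { const float *input; float *output; size_t count; } Batch;

/* SPU-like side: a small kernel that streams over one local batch of data. */
void stream_kernel(Batch *b)
{
    size_t i;
    for (i = 0; i < b->count; i++)
        b->output[i] = b->input[i] * 2.0f;   /* placeholder data-parallel work */
}

/* PU-like side: general control flow, OS services, and kernel dispatch. */
void host_dispatch(Batch *batches, size_t num_batches)
{
    size_t i;
    for (i = 0; i < num_batches; i++)
        stream_kernel(&batches[i]);   /* sequential here; parallel on real hardware */
}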
 
I know less about consoles than I do PCs, but does Cell + NV TurboCache + NV GPU tie in together?

Is TurboCache limited to borrowing memory, or can it (or something like it) borrow processor power to help out the GPU in certain situations?

Ignore this if consoles do this anyway or if I'm too PC-centric in my thinking.
:)
 
Doing VS work on the CELL CPU is not a bad idea: it is the idea they had with the GPU they had chosen before nVIDIA's, so they have been thinking about it for quite a while.

Will nVIDIA's GPU do only Pixel Shading work, or will it also include Vertex Shader engines and the related Clipping and Culling steps of the Graphics Pipeline?

We can make arguments for both: ease of programming would have Vertex Shading done on the GPU as well as Pixel Shading, while engineers might push for doing Vertex Shading on the CPU, even if having hardware clipping would be preferable for developers (just one more annoyance ;)).

I think e-DRAM is a possibility in PlayStation 3's GPU, more so than having Vertex Shaders, but we will see what SCE and nVIDIA will be doing and how much space they have.

With a GPU that at worst ships on a 90 nm process (I do not think they consider the 90 nm implementation a very cost-effective one: the 65 nm node for SCE and Toshiba is not very far away) and is quite probably targeted at 65 nm, they should have space for e-DRAM as well as a nice amount of Shader ALUs. It is a bit too early to make very accurate guesses; we do not even have much information regarding what NV5X actually is, so guessing what a custom GPU based on that architecture would look like is quite difficult.
 
Pana, clipping doesn't require having vertex shaders on the chip. Although you never know, Sony might insist on removing it just to uphold the tradition :?
 
Fafalada said:
Pana, clipping doesn't require having vertex shaders on the chip. Although you never know, Sony might insist on removing it just to uphold the tradition :?

It will be called Fafalada transform mode in your honour ;)
 
Fafalada said:
Pana, clipping doesn't require having vertex shaders on the chip. Although you never know, Sony might insist on removing it just to uphold the tradition :?

Well, I know it does not require them, but it would be simpler for nVIDIA to keep the Vertex Pipeline and the Clipping and Culling stages intact, as well as the interface these blocks have with the Triangle Set-up and the rest of the GPU.

Engineering-wise it seems simpler, :(, to have only Triangle Set-up on the GPU and have the developer take care of clipping in software on the SPUs/APUs, but that would be sad and quite annoying IMHO.

Still, the nVIDIA blocks doing culling and clipping have already been designed and I am sure they can handle very high vertex rates: it is not like Sony/SCE would have to come up with those pieces (seeing how they dealt with it on the PSP... "only the front plane is necessary, let them handle the rest"... what is this? Do they add only one clipping plane per GPU generation?!?).
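For reference, "taking care of clipping in software" isn't rocket science per polygon - a minimal Sutherland-Hodgman style clip against just the near plane looks roughly like the sketch below. It assumes the z >= -w clip-space convention and ignores per-vertex attributes, which would also need interpolating:

Code:
typedef struct { float x, y, z, w; } Vertex;

/* Signed distance to the near plane in clip space (z >= -w convention assumed). */
static float near_dist(const Vertex *v) { return v->z + v->w; }

static Vertex lerp_vertex(const Vertex *a, const Vertex *b, float t)
{
    Vertex r;
    r.x = a->x + t * (b->x - a->x);
    r.y = a->y + t * (b->y - a->y);
    r.z = a->z + t * (b->z - a->z);
    r.w = a->w + t * (b->w - a->w);
    return r;
}

/* Clip a convex polygon (e.g. a triangle) against the near plane.
   Writes the clipped polygon to 'out' and returns its vertex count (0..n+1). */
int clip_near(const Vertex *in, int n, Vertex *out)
{
    int i, count = 0;
    for (i = 0; i < n; i++) {
        const Vertex *a = &in[i];
        const Vertex *b = &in[(i + 1) % n];
        float da = near_dist(a), db = near_dist(b);
        if (da >= 0.0f)
            out[count++] = *a;                 /* a is on the visible side: keep it */
        if ((da >= 0.0f) != (db >= 0.0f))      /* edge crosses the plane: emit intersection */
            out[count++] = lerp_vertex(a, b, da / (da - db));
    }
    return count;
}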
 
DaveBaumann said:
Now, it’s always been my belief that if 3D was ready for a fundamental shift in processing to a more general processing architecture then the 3D vendors would be looking at it or doing it already. From the likes of ATI and NVIDIA we will not see a large development shift from one processing construct to another, but an evolution, and one that is happening already (more so, it appears, from ATI’s front right now).
I have a question, Dave: can such a shift happen without any sort of trade-off at the performance level? In other words, do you think a more general processing IC could be as fast as dedicated hardware?

I'm asking this question because I think the answer might help to find an answer to your question.

What I mean is that ATI is not alone in this market, and therefore any of their strategic moves has to take the competition's own choices into consideration.
Now, seeing how important benchmarks are in the 3D accelerator market, would a big and important shift made by only one of the competitors, a shift that could produce "worse" benchmark results, be a good thing for that competitor?

Simply put, can ATI afford to make such a big change, and therefore let Nvidia have a lot more "bungholiomarks", in the market we know today?
Having the Aero Glass GUI in Longhorn running at 160fps on an ATI card will be meager compensation for users if 3D apps, such as games and modelling software, run faster on the Nvidia counterpart, no?

So, unless such a shift can be done with little to no impact on performance, I would consider any attempt from any of the 3D IHVs a risky move. But that's just me...
 
Vysez said:
I have a question, Dave: can such a shift happen without any sort of trade-off at the performance level? In other words, do you think a more general processing IC could be as fast as dedicated hardware?

My point of view on that has always been perfectly clear. The fact that Sony have now adopted NVIDIA is also a clear indicator of a similar point of view, certainly at this time. In truth I'm not convinced we are heading down a path of generalised processing, but there appear to be people in this forum who think we are – the point of this thread is to potentially challenge those perceptions and also to challenge where we should be looking closer to see signs of such things happening.

Vysez said:
So, unless such a shift can be done with little to no impact on performance, I would consider any attempt from any of the 3D IHVs a risky move. But that's just me...

And this equally applies to the console market. It is risky for one vendor to utilise a generalised processing scheme when potentially another could throw together two lumps of dedicated silicon and have a higher performance system – it seems as though this risk may have been one of the factors for the market mix that has been announced right now. (One of the ironic elements of all of this is the fact that the player that looked to have had the most "generalised" scheme, from their patents at least, has adopted a graphics vendor that is currently touting more dedicated processing as opposed to a more general [in graphics shader terms] unified shader architecture.)
 
CrazyAce said:
It will be called Fafalada transform mode in your honour
Oh that's just dandy, make my name the source of abject hatred of developers across the world :D

Panajev said:
Well, I know it does not require them, but it would be simpler for nVIDIA to keep the Vertex Pipeline and the Clipping and Culling stages intact, as well as the interface these blocks have with the Triangle Set-up and the rest of the GPU.
That I have no idea about - anyway, that's one reason why I babbled about H-space rasterizing the other day, though I'm not really sure whether any current graphics chips implement that or not.
 