What's the possibility that.....

Fuz

the R350 is a dual-chip R300 with 256MB?

This isn't as far-fetched as it sounds. A lot of people believe that the R300 has pushed 0.15µm technology to its limits. What better way to increase performance, without adding exotic high-speed RAM or redesigning the chip, than to add a second VPU?

Sure, it might cost an arm and a leg, but at least it would most probably reclaim the performance crown from the GFFX.

Just a thought. What do you guys think?

Edit: Spelling
 
I don't think that much effort is necessary to reclaim (or should it be "keep"? Depends on launch schedules) the performance crown.

Doesn't sound very useful for the consumer space and profitability in any case.
 
I don't think so.

The reason is that ATI simply doesn't need to do this (development costs, etc.).

The R350 will be a better R300, with (maybe) DDR2.

oh...............and a new driver with a 60% increase in Quake 3 :LOL:
 
What do I think...well..since my usual rumor snips from ATI mean nothing..

http://slashdot.org/comments.pl?sid=34863&cid=3784210



Multi chip and multi card solutions are also coming, meaning that you will be able to fit more frame rendering power in a single tower case than Pixar's entire rendering farm. Next year.

I had originally estimated that it would take a few years for the tools to mature to the point that they would actually be used in production work, but some companies have done some very smart things, and I expect that production frames will be rendered on PC graphics cards before the end of next year. It will be for TV first, but it will show up in film eventually.

John Carmack


Things that make you hmmmm...

Estimated price....

[image: evilpinky_thumbnail.jpg] <-- 1 Million Dollars

:LOL:
 
Well, maybe use a slower-clocked R300, like 275MHz, as the dual chip. How much of a gain do you think there would be? Too much and the R400 won't be fast enough; too slow and what's the point?
 
I guess it wouldn't make much economic sense for ATI to release such a product. Still, ATI has just about surprised everyone this year.... I am not saying I expect such a move from ATI, but could you imagine what levels of performance such a beast would produce!

I sure hope someone reviews one of those Evans & Sutherland systems, with some gaming benchmarks, when they become available.
 
What do I think...well..since my usual rumor snips from ATI mean nothing..

http://slashdot.org/comments.pl?sid=34863&cid=3784210



Multi chip and multi card solutions are also coming, meaning that you will be able to fit more frame rendering power in a single tower case than Pixar's entire rendering farm. Next year.

I had originally estimated that it would take a few years for the tools to mature to the point that they would actually be used in production work, but some companies have done some very smart things, and I expect that production frames will be rendered on PC graphics cards before the end of next year. It will be for TV first, but it will show up in film eventually.

John Carmack


*in Keanu Reeves' voice*

F***ME!

Hell, I'd buy an ATi R300, R350 or R400-based multichip card in a heartbeat :D :eek: :D


EDIT: it's probably for the workstation or high-end professional PC market (if there is such a thing) and other high-end apps, like Quantum3D's market. Not the $500 ultra-high-end desktop/gaming market. :/ But I'd love to be wrong.
 
To swing the regular consumer over to ATI, they need to beat nVidia for two generations of products.

So they need to beat the GeForce FX at nearly any cost, even if it initially means some loss.

Right now every regular person recognizes the GeForce brand name and says they're waiting for the GFFX because it'll be better than the 9700 Pro. If ATI can release something around that time frame to cripple the GeForce FX, and market it strongly as a GeForce killer, it could turn the tide.

The 9700 Pro was just a wake-up call. To gain market share and keep revenue streams coming through the end of next year, they need to put a spanking on the FX, and market that card.

Speng.
 
Yeah, that quote from JC was definitely referring to the high-end professional market. We're probably talking around the $2000-$5000 range for these puppies. Multichip boards in the consumer market just don't make a whole lot of sense right now. Costs too much for the gain. Besides, multichip boards make even more sense for offline rendering, since syncing problems that cause a few frames to render more slowly aren't an issue for offline rendering, but are for realtime rendering. In other words: It's just easier if realtime isn't an issue.

As a side note, one company has already announced 2- and 4-chip Radeon 9700-based boards, while nVidia has previously had boards with multiple Quadros (though only for multi-display functionality), so the time is now. Given that the GeForce FX architecture has some definite advantages when it comes to offline-type rendering (branches in VP, longer program lengths), I'd be very interested to see what nVidia (or nVidia's partners) are coming up with for the GeForce FX in the high-end 3D market.
 
Chalnoth said:
Yeah, that quote from JC was definitely referring to the high-end professional market. We're probably talking around the $2000-$5000 range for these puppies. Multichip boards in the consumer market just don't make a whole lot of sense right now. Costs too much for the gain. Besides, multichip boards make even more sense for offline rendering, since syncing problems that cause a few frames to render more slowly aren't an issue for offline rendering, but are for realtime rendering. In other words: It's just easier if realtime isn't an issue.

As a side note, one company has already announced 2- and 4-chip Radeon 9700-based boards, while nVidia has previously had boards with multiple Quadros (though only for multi-display functionality), so the time is now. Given that the GeForce FX architecture has some definite advantages when it comes to offline-type rendering (branches in VP, longer program lengths), I'd be very interested to see what nVidia (or nVidia's partners) are coming up with for the GeForce FX in the high-end 3D market.

ATI had multichip (2-way and 4-way) Radeon 8500 boards being built by that same company, as well as a few others, for simulations.

How many GPUs/VPUs can the NV30 address?
We know the Radeon 9700 can scale to 256.
 
jandar said:
How many GPUs/VPUs can the NV30 address?
We know the Radeon 9700 can scale to 256.
I don't think they made that info public yet, but lemme guess: either 2, 4, 8, 16, ..., 128 or 256? ;)

Anything from 32 upwards probably won't make much of a difference right now, but I guess the FX should be similarly scalable to R300 in theory.
 
jandar said:
ATI had multichip (2-way and 4-way) Radeon 8500 boards being built by that same company, as well as a few others, for simulations.

How many GPUs/VPUs can the NV30 address?
We know the Radeon 9700 can scale to 256.

I don't know why there has to be a limit. With the proper hardware and software, you could put as many together as you damned well please. The easiest way to do this would just be to divide the screen into blocks, having each chip render its own section of the viewport. This wouldn't be terribly efficient, but it would get the job done.
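
To make that concrete, here's a rough sketch of that kind of screen-space split (Python, purely illustrative; the chip count, resolution and horizontal-band scheme are my own assumptions, not anything ATI or nVidia have described):

Code:
# Rough sketch only: split the framebuffer into horizontal bands,
# one per chip, the way a naive screen-space multichip scheme might.
def split_viewport(width, height, num_chips):
    """Return one (x, y, w, h) rectangle per chip."""
    band = height // num_chips
    rects = []
    for i in range(num_chips):
        y = i * band
        # the last chip picks up any leftover rows
        h = height - y if i == num_chips - 1 else band
        rects.append((0, y, width, h))
    return rects

# e.g. four hypothetical chips sharing a 1024x768 frame
for chip, rect in enumerate(split_viewport(1024, 768, 4)):
    print("chip %d renders %s" % (chip, rect))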

Anyway, I don't think nVidia has released any official info on this, so we just really don't know. I suppose we should find out within six months, though. If nothing's announced by then, you can pretty much expect that it's not going to exist on any NV3x.
 
Fuz said:
What do you guys think?

I don't think they would go to the trouble of developing a new ASIC for such a product when the current one is already multiple-chip capable.

The PCB design would be a nightmare: 10 layers minimum, and very big.

MuFu.
 
There is no need to utilize multi-chip for offline rendering. Multi-board is better. One of the things you must consider when building a renderfarm (or any server cluster for that matter) is heat dissipation and power density.

Why try to cram multiple GPUs onto a single PCB, and deal with getting power and heat in and out (plus a separate RAM bus), when you can just stick each one in a blade box, or stuff a bunch into one mainboard?

Think SLI, not V5 and MAXX.
 
DemoCoder said:
There is no need to utilize multi-chip for offline rendering. Multi-board is better. One of the things you must consider when building a renderfarm (or any server cluster for that matter) is heat dissipation and power density.

Why try to cram multiple GPUs onto a single PCB, and deal with getting power and heat in and out (plus a separate RAM bus), when you can just stick each one in a blade box, or stuff a bunch into one mainboard?

Think SLI, not V5 and MAXX.

How about massively-SLI-ed dual-chip boards? ;)
 
Still isn't a big win. These systems are still going to be CPU and I/O limited. I'd rather have horizontal scalability. Buy a few hundred top-end Linux boxes and stick only a few GPU boards in each (1 or more). I don't really want that high of a GPU-to-CPU ratio, since I expect that I need significant CPU to drive the GPUs in the first place. It's not like you will run RenderMan on the GPUs directly; you will, in fact, need to process significant amounts of RIB data to drive the GPUs. If your CPU or bus gets bottlenecked, then your nice little 4-way or 8-way GPU box is sitting idle. Any cost reduction from eliminating an extra $300 for another CPU host box is out the window as your price/performance (the figure that matters in building a renderfarm) goes way down.
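
To put toy numbers on that (everything here is hypothetical except the $300 host-box figure above; board prices and how many GPUs one host can feed are made-up assumptions):

Code:
# Toy model: why dense GPU boxes can hurt price/performance once the
# host CPU/bus becomes the bottleneck. All numbers are illustrative.
def price_performance(num_gpus, gpus_host_can_feed, gpu_cost, host_cost):
    usable = min(num_gpus, gpus_host_can_feed)   # extra GPUs just sit idle
    throughput = usable                          # relative frames per unit time
    total_cost = host_cost + num_gpus * gpu_cost
    return throughput / total_cost

# hypothetical: $300 host box, $400 per GPU board, one host can feed ~2 GPUs
print(price_performance(2, 2, 400, 300))   # ~0.0018 throughput per dollar
print(price_performance(8, 2, 400, 300))   # ~0.00057 -- more GPUs, worse value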
 
In that case, PCI versions of R300/NV30-type single-VPU boards, optimized for professional apps/rendering and able to run as an SLI or tile-based pair in one box, would be nice.

Cheap Linux boxes, two PCI boards in each one, and as many boxes as you want. :)
 
speng said:
Right now every regular person recognizes the GeForce brand name and says they're waiting for the GFFX because it'll be better than the 9700 Pro. If ATI can release something around that time frame to cripple the GeForce FX, and market it strongly as a GeForce killer, it could turn the tide.

Are you sure about that? Why don't you just walk up to someone on the street and find out... :eek:
 
LOL, yeah. I would guess > 95% of people don't even recognize the name nVidia or ATi.
Heh, reminds me of when I was in the US for my interview with nVidia and was watching the regional news, and nVidia stock had taken a large jump up that day (because the rumor that I was going there was spreading *cough* ;)). Anyway, I thought it was quite fun how they told the news: they didn't just say "nVidia stock is up", but rather had to explain what company they were talking about for the uneducated masses, so they said "and nVidia is the company behind the graphics chips in Microsoft's Xbox". I LOLed at that for a while, but it's really quite telling how uninteresting the graphics industry is to the average Joe. ATi vs. nVidia is as interesting for most people as Electrolux vs. Black & Decker is for most of us.
 
hmm

I guess I should be more clear when posting, else it gets ripped to shreds :)

"Regular cumputer gamer". Whenever I see any discussion on which graphics card to get on a game board, I hear get a GeForce, or wait on the GeForce FX it'll be way better than the 9700 pro, or something to that effect.

Speng.
 