New XGI chips

http://www.xgitech.com/products/BitFluent.pdf

Looks much like ATI's AFR to me.

Each chip gets its own pool of framebuffer and texture memory, so at the board level you "effectively" have half the memory footprint. That's the trade-off you make for the increased bandwidth and processing power.
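For context, here's a minimal sketch of why an AFR-style split halves the effective footprint (the memory figures are invented for illustration, not XGI's actual configuration):

```python
# Illustration of the AFR memory trade-off, with made-up numbers.
PER_CHIP_MEMORY_MB = 128   # assumed local memory attached to each chip
CHIPS = 2

board_memory = CHIPS * PER_CHIP_MEMORY_MB    # what the box advertises: 256 MB

# With alternate-frame rendering each chip draws complete frames on its own,
# so textures and geometry have to be duplicated into every chip's local pool.
effective_memory = PER_CHIP_MEMORY_MB        # unique data the board can hold: 128 MB

# Frames are simply dealt out round-robin between the chips.
for frame in range(6):
    print(f"frame {frame} -> chip {frame % CHIPS}")

print(f"advertised: {board_memory} MB, effective: {effective_memory} MB")
```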

I still like 3dfx's SLI and PowerVR's alternate tile approach better from a technological standpoint, though they are likely more costly to develop.

There shouldn't be any "Latency / Delay" issues with only two chips. Any latency would be the equivalent of standard triple buffering, which I don't recall people complaining about. ;)

However, I do think there is a possibility, depending on bottlenecks and the specific scene, of "uneven" instantaneous frame rates..."fast...slow...fast...slow". This would not be apparent in avg. FPS scores, but might be noticeable in "feel".
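A quick way to see why average FPS can hide that pattern (a minimal sketch with invented frame times, purely to show the arithmetic, not measured data):

```python
# Two runs with the same average FPS but a very different "feel".
# Frame times are invented for illustration.

even_frames = [20.0] * 10          # ms: steady 20 ms per frame
uneven_frames = [10.0, 30.0] * 5   # ms: alternating fast/slow frames

def avg_fps(frame_times_ms):
    """Average FPS over the run: frames rendered divided by total time."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

print(avg_fps(even_frames))    # 50.0 FPS, perfectly smooth
print(avg_fps(uneven_frames))  # 50.0 FPS, but every other frame takes three times as long
```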

Anyway, I'm very interested to see if XGI can pull it off.

In related news...blatantly ripped from digitalwanderer / EliteBastards:

http://www.elitebastards.com/

Chris Lin at XGI said:
I guess you would want to know where our performance is:
Simpler way of remembering our product segments:

3DMark2003

Volari Duo V8 Ultra - 5600+

Volari Duo V5 Ultra - 4000+

Volari V8 Series - 3000+

Volari V5 Series - 2000+

Volari V3 Series - 1000+

digitalwanderer said:
And they're coming out a whole lot sooner than you'd think, stay tuned
 
I still like 3dfx's SLI and PowerVR's alternate tile approach better from a technological standpoint, though they are likely more costly to develop.

*sigh*

Back to the topic.

What my interest is focused on with these is the AA/AF algorithms: the number of samples for each, quality, performance, and so on.

The Xabre did fine in Futuremark's synthetics too, so what? And it's not that the dual V8 score is actually blowing me out of my socks either, considering that it should be damn close to what an R350 delivers today (I don't use 3DMark, so forgive me if the 9800 PRO scores a tad higher or lower). If the dual V8 should turn out cheaper than the R350 (which I consider hard to achieve), then they might have a winner there.
 
*back off topic for a second*

Hanners updated my news post there, Joe. Those numbers were obtained on a 3GHz system with 512MB of RAM, they'll be debuting something at Computex, and they'll be priced competitively with the other two viddy card makers' line-ups. :)

*back to topic at hand*
 
Should be interesting to see how the XGI drivers end up handling image quality. If the original Xabre was any indication, we might see an interesting duel with the GeForce FX for the "ugliest picture" title.
 
Ailuros said:
If the dual V8 should turn out cheaper than the R350 (which I consider hard to achieve) then they might have a winner there.

Price is tricky to call. It depends on a whole lot more than parts costs, where I'd reckon the two would be roughly equal apart from the XGI dual being penalized on RAM costs. The chip is probably a lot smaller and cheaper than the R350, being manufactured on 0.13um at XGI's parent company UMC.

The big unknown is price competition among manufacturers and retailers, and that's a very big factor, easily large enough to render minor differences in manufacturing costs irrelevant. Add that XGI might be prepared to accept thinner margins in order to capture market presence, and pricing predictions become even more uncertain. We'll see soon enough.

As was pointed out, the 3DMark03 scores are respectable, but this could be a very capable card for DOOM3. 16 pixel pipelines at 350 MHz are intriguing paper specs.

I'm positively surprised thus far, but the proof of the pudding is in the eating. Whether they can actually outperform the best card available on the market is less interesting (to me) than whether they can do a good enough job of it to keep in the race and up the ante further in the next round, adding some degree of pressure on the big two.

nVidia's CEO has been at pains trying to hammer home a message about how expensive and complex it is to manufacture gfx ASICs today. One can't help wondering to what extent all that talking was done simply to dissuade other players from entering the race and eating their lunch.

Entropy
 
Well, it depends on what the 16 pipes are actually capable of doing. As I mentioned at the beginning, they state "16 pixel rendering pipes" at the same time as they state 8 "pixel shader 2.0" pipes (for the Duo). Are they flexible enough for anything besides rendering "pixels"? Does the term "pixel" even correspond to something useful for Doom 3, like stencil output, or to one of the other terms nVidia recently established precedent for, like "texel"?

Note also that the bus width for each of the "pixel rendering pipes" parallels the 5800 or 9500 Pro, not the 9700 and higher (non SE :-?).

I do think the given figures indicate impressive legacy game performance if the Duo design is executed well, but the relevance to Doom 3 seems very ambiguous to me at the moment, and highly questionable for other shader processing in general. Here's hoping that it is good enough to get their foot in the door and keep them advancing.
 
I fully grant that we don't know enough at the moment. Paper specs only go so far. My main point was simply that even though they actually may be competitive at the high end (pretty shocking!), as far as contributing to the competitive landscape goes they only have to be good enough to steal some market share from nVidia in the Far East.
Doesn't look impossible to me.

As it is, I look forward to in-depth reviews of their products, something I honestly didn't believe was possible.

Entropy
 
Yes, the ambition to go dual core is an important and hopeful indication to me. The technical evaluation will assuredly be something to look forward to. :D
 
16 pixel pipelines at 350 MHz are intriguing paper specs.

Sounds more like marketing fluff to me.

Single chip V8 looks to me like this:

8 texture ops/clock
4 arithmetic ops/clock
2 VS engines
4*2

I don't know what people call pipelines these days anymore; they could mean Z operations per clock or even texturing units. (Again, open to corrections if I should be wrong.)

Demalion's considerations are probably in line with the above.

...but this could be a very capable card for DOOM3

Hard to tell without knowing how many Z/stencil units (or even ROPs) the chips have and how they are configured. Even theoretically having real 16 pipelines with one Z unit for each doesn't guarantee anything as of yet.

***edit:

The dual chip should equal (if the above is correct) (2*4=8)*2; it seems to have quite a nifty amount of memory bandwidth, and it remains to be seen how effective whatever bandwidth-saving techniques they've implemented will be.
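For what it's worth, the paper arithmetic under the assumed 4*2 configuration and the quoted 350 MHz clock works out as below (a back-of-the-envelope sketch only; the per-chip pipe and TMU counts are the guesses above, not confirmed XGI specs):

```python
# Back-of-the-envelope throughput under the assumed configuration above:
# 4 pixel pipes x 2 TMUs per chip at 350 MHz. All figures are speculative.

CLOCK_HZ = 350e6
PIPES_PER_CHIP = 4
TMUS_PER_PIPE = 2

def fill_rates(chips):
    pixel_fill = chips * PIPES_PER_CHIP * CLOCK_HZ                   # pixels/s
    texel_fill = chips * PIPES_PER_CHIP * TMUS_PER_PIPE * CLOCK_HZ   # texels/s
    return pixel_fill / 1e9, texel_fill / 1e9

for chips in (1, 2):
    pix, tex = fill_rates(chips)
    print(f"{chips} chip(s): {pix:.1f} Gpixels/s colour fill, {tex:.1f} Gtexels/s texture fill")

# 1 chip:  1.4 Gpixels/s, 2.8 Gtexels/s
# 2 chips: 2.8 Gpixels/s, 5.6 Gtexels/s
# The marketed "16 pixel rendering pipes" presumably counts 2 chips x 8 texture
# units, i.e. a texel figure rather than a true pixel-pipe figure.
```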
 
Here's my take on the XGI cards.

They don't need to be world-beaters; they just need to be in the same approximate performance class as ATI and Nvidia cards in their price range. All they need to be a technical success is to approximate the slower of the two in any given benchmark. If they can get acceptable performance, then they can always sell to the people who don't want a GfFX but can't stand ATI.

The interesting question is how much they are going to need in the way of Nvidia-style optimizations for performance. If they don't, it will be another nail in Nvidia's coffin.
 
Ailuros said:
16 pixel pipelines at 350 MHz are intriguing paper specs.

Sounds more like marketing fluff to me.

Single chip V8 looks to me like this:

8 texture ops/clock
4 arithmetic ops/clock
2 VS engines
4*2

Agreed. IIRC, the original Xabre also played fast and loose with the definition of a pipeline (claimed 4x2, actually 2x4). The V8 sounds like an RV350 with an extra "TMU" per pipe.
 
These 3DMark scores are very impressive, so long as everything is being rendered correctly. Looks like a solid foundation for XGI to build upon heading into 2004.

This is really the only decent news I've heard all day. I feel so disillusioned about the entire industry today... hopefully I'll have a brighter outlook tomorrow, eh? :?
 
Pete said:
Ailuros said:
16 pixel pipelines at 350 MHz are intriguing paper specs.

Sounds more like marketing fluff to me.

Single chip V8 looks to me like this:

8 texture ops/clock
4 arithmetic ops/clock
2 VS engines
4*2

Agreed. IIRC, the original Xabre also played fast and loose with the definition of a pipeline (claimed 4x2, actually 2x4). The V8 sounds like an RV350 with an extra "TMU" per pipe.
Sounds like they hired some of nVidia's PR department.
 
Shades of 3dfx....

http://www.guru3d.com/newsitem.php?id=499

Their high-end segment however is based upon a dual GPU configuration which I have reservations about. My concern is that such a solution is extremely expensive for the end user (you have to pay for the two chips). I confronted XGI with that question and they responded quite satisfactorily. They simply are going to use a bigger fabrication technology, I think they will use .15 micron. This results in way higher yields and makes the product cheaper. So from their point of view the product will be affordable even in the high-end range.
 
Natoma said:
Nvidia is not going to be "simply" replaced anytime soon, no matter how much people might want that to occur. Personally I want them to get their act together and get back on the horse. We need more major competitors in the business, not less.
True, however, we don't NEED Nvidia, and the way I see it they are going down, especially if these new competitors pull through with stable, decent-performing products. Nvidia has been pulling underhanded tactics for years, cheating in drivers for years; it's not that they did this because they were afraid of the 8500 (it all started there), the 9700, and the 9800. They just increased the frequency and intensity. The reason all this is happening is to show you just what type of company Nvidia is, not what they've become. They have had some decent cards, but they always used underhanded tactics to maintain the lead: the Kyro, the Quack bug (they were behind it being published as a cheat), and the 3dfx lawsuit fiasco. I can truly say the industry would be better off without them!


Rockman said:
Wait, I'm getting too excited; didn't they cheat in 3DMark03 too?
Natoma said:
This I'm not privy to. I know they had big issues with bilinear filtering; I don't know if that is what you're referring to, however. It could be seen as a "cheat" because it significantly raised performance, but I've been informed that it was probably a legitimate driver bug and not a malicious cheat.
Hmm, didn't know that. At any rate, I hope they succeed Nvidia. :)
 
They simply are going to use a bigger fabrication technology, I think they will use .15 micron. This results in way higher yields and makes the product cheaper. So from their point of view the product will be affordable even in the high-end range.
Right up until the point when it results in way lower yields (per-wafer), way lower binsplits and way higher power/cooling requirements that make the product more expensive.
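To make the per-wafer argument concrete, here's a rough sketch with invented die sizes, wafer size and defect density (a textbook dies-per-wafer approximation plus a simple Poisson yield model; none of these figures are real XGI or UMC numbers):

```python
import math

# Crude dies-per-wafer and yield comparison between two process nodes.
# Die area, wafer size and defect density are all invented for illustration.

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.5    # assumed defect density

def dies_per_wafer(die_area_mm2):
    """Standard rough approximation: wafer area over die area, minus an edge-loss term."""
    radius = WAFER_DIAMETER_MM / 2
    return math.pi * radius ** 2 / die_area_mm2 - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2):
    """Simple Poisson defect-limited yield: exp(-defect density * die area)."""
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100.0)

# The same design on the larger process: area scales roughly with the square of
# the feature-size ratio (ignoring pads, analogue blocks and layout differences).
area_013_mm2 = 120.0
area_015_mm2 = area_013_mm2 * (0.15 / 0.13) ** 2   # ~160 mm^2

for label, area in (("0.13 um", area_013_mm2), ("0.15 um", area_015_mm2)):
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{label}: {area:5.1f} mm^2 per die, ~{good:.0f} good dies per wafer")
```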

DaveBaumann said:
Shades of 3dfx....
A very, very insightful comment.
 
Dave H said:
Right up until the point when it results in way lower yields (per-wafer), way lower binsplits and way higher power/cooling requirements that make the product more expensive.

Kind of like nv30/35 at .13 microns for nVidia all year long?...:)

One thing beneficial that has come out of the R3x0/nv3x situation this year is a wider appreciation that a manufacturing process is only part of the picture, and that it's not wise to judge chips strictly on the process used to manufacture them.
 
WaltC said:
Dave H said:
Right up until the point when it results in way lower yields (per-wafer), way lower binsplits and way higher power/cooling requirements that make the product more expensive.

Kind of like nv30/35 at .13 microns for nVidia all year long?...:)

One thing beneficial that has come out of the R3x0/nv3x situation this year is a wider appreciation that a manufacturing process is only part of the picture, and that it's not wise to judge chips strictly on the process used to manufacture them.

Whether to target .15u or .13u for a chip launching in fall 2002 was a tough decision, one that ATI happened to get lucky on and Nvidia happened to get very unlucky on. Whether to target .15u or .13u for a chip launching in winter 2003 is a no-brainer.
 