VSA100 ?

DaveBaumann said:
KYRO's AA is SuperSampling, so it is not fillrate free, the architecture, though, enables it to be bandwidth free. Combine MSAA and a tiler and you have totally free FSAA (well, the only overhead being your pointer list size increases).

How do you mean totally free, as in no performance hit what so ever?

I am not sure how that is possible as the direct3d msaa involves rendering the frame and the z-buffer to a higher resolution - mr neeyik
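To illustrate the fillrate point from the quote above: supersampling runs the whole pixel pipeline for every subsample, while multisampling shades each pixel once and only does per-subsample Z/coverage work. A back-of-the-envelope sketch (the function and numbers are purely illustrative, not taken from any actual hardware spec):

```python
# Rough model of per-frame shading work. SSAA shades every subsample;
# MSAA shades once per pixel (only Z/coverage tests happen per subsample).
def shading_cost(width, height, samples, mode):
    pixels = width * height
    if mode == "ssaa":
        return pixels * samples  # full shading for each subsample
    if mode == "msaa":
        return pixels            # one shaded colour shared by covered subsamples
    raise ValueError(f"unknown mode: {mode}")

# At 1024x768 with 4x AA, SSAA needs 4x the shading fillrate of MSAA.
print(shading_cost(1024, 768, 4, "ssaa") // shading_cost(1024, 768, 4, "msaa"))  # 4
```

The bandwidth question (where the extra Z and colour traffic goes) is separate from this shading cost, which is where the tiler argument comes in.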
 
misae said:
However my argument is that a powerful GPU needs a powerful system feeding it all the time. I assure you that a Geforce 2 Ultra would be significantly faster in most situations on a system like a K6-2 rather than having a GF4 ++ card in there.

Like I said.. real world test... needs to be real world (I have done some testing with regards to this in fact).
A GF2U actually outperforming a GF4 Ti by any substantial margin at the same resolution/settings/benchmark sounds rather far-fetched and counter-intuitive to me, no matter how strong or weak the CPU is. This would imply that the GF4 Ti driver takes more CPU cycles to reach a given performance level than the GF2U driver, which doesn't sound very plausible unless the GF4 Ti driver is immature, or some effects are enabled on the GF4 Ti but not the GF2U, or something along those lines. It's a result I find hard to believe without benchmarks and an explanation, preferably a rather technical one.
 
How do you mean totally free, as in no performance hit what so ever?

I am not sure how that is possible as the direct3d msaa involves rendering the frame and the z-buffer to a higher resolution.

I'd recommend reading the numerous PowerVR articles and reviews here to understand what is occurring on a tiler to help understand this.

However, to all intents and purposes the on-chip tile is acting as the frame buffer - because of this, and the fact that the Z reads are done on chip for each tile, the extra Z reads are virtually free anyway (KYRO can do 32 Z checks per clock). When the tile is rendered, the subsamples are combined to make the AA'ed pixels and are passed to the external frame buffer in the 'low' resolution format. So, the only costs are the extra on-chip Z reads, which can be 'free', and an increase in the number of 'tiles' the geometry is stored under (which can be offset by an increase in tile size, but that requires more transistors).
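The mechanism described above can be sketched in a few lines. This is purely illustrative (tile size, sample count, and data layout are made up, not actual KYRO internals): per-subsample Z tests only touch on-chip buffers, and only the resolved, averaged pixel is ever written to the external frame buffer.

```python
# Illustrative sketch of tile-based MSAA resolve (not real KYRO behaviour):
# the tile and its per-subsample Z values live on chip, and only the
# downfiltered 'low' resolution pixels leave the chip.

TILE_W, TILE_H = 4, 4   # hypothetical tile size
SAMPLES = 4             # subsamples per pixel (4x MSAA)

def render_tile(fragments):
    """fragments: list of (x, y, sample_mask, z, colour) covering this tile."""
    # On-chip buffers: one Z slot and one colour slot per subsample.
    z_buf = [[float('inf')] * SAMPLES for _ in range(TILE_W * TILE_H)]
    c_buf = [[(0, 0, 0)] * SAMPLES for _ in range(TILE_W * TILE_H)]

    for x, y, mask, z, colour in fragments:
        idx = y * TILE_W + x
        for s in range(SAMPLES):
            # Per-subsample Z check happens on chip, so the extra reads
            # never cost external memory bandwidth.
            if mask & (1 << s) and z < z_buf[idx][s]:
                z_buf[idx][s] = z
                c_buf[idx][s] = colour

    # Resolve: combine subsamples into one AA'ed pixel before the tile is
    # flushed to the external frame buffer at 'low' resolution.
    resolved = []
    for samples in c_buf:
        r = sum(c[0] for c in samples) / SAMPLES
        g = sum(c[1] for c in samples) / SAMPLES
        b = sum(c[2] for c in samples) / SAMPLES
        resolved.append((r, g, b))
    return resolved
```

A pixel fully covered by one triangle resolves to that triangle's colour; a pixel where the triangle covers only half the subsamples resolves to a blend, which is exactly the edge anti-aliasing effect.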
 
AzBat said:
Only GigaPixel? Interesting. I wonder if we'll ever see that tech included in hardware from NVIDIA.

Well, GP is the only one I know of off the top of my head. I still have my doubts about NVIDIA using tiling. It's ironic that one of the main (perceived) issues with tiling is the memory required for the bin space; however, by utilising lossless compression for AA, IMRs are attempting to get close to 'free' AA, but still require gobs of memory to do it.

As for Oak WARP5, it looks like it used sort-independent anti-aliasing. Can somebody explain this with comparisons to the current most commonly used anti-aliasing techniques? I'm pretty well lost with most of the techniques out there.

Not familiar with Oak at all, but from what you describe, the 'sort-independent' term seems to be both a factor of its tiling nature and the fact that it isn't edge AA (I remember some forms of edge AA did require sorting).

I'd still guess at the Oak being Supersampling though.
 
No one said NVIDIA is going to 'die'. They are too smart to die. ;)
ATI's CEO said NVIDIA is still number 1 and the one to beat. Competition is just heating up.

The performance crown is important because it's the automatic product image consumers receive not only for a particular card, but for the entire RANGE of cards. You'll have people who'll read a really good Ti4600 review and get an MX in the end. Naturally, business relationships are important, but ultimately the big OEMs can't sell goods that people don't want. If people demand Radeon 9100s instead of MX4x0s, those are voices they can't afford to ignore. Relationships/pricing are important, but secondary to consumer demand. And naturally it's the product image derived from crown performance (and a reasonable price) that decides what consumers want.
 
DaveBaumann said:
How do you mean totally free, as in no performance hit what so ever?

I am not sure how that is possible as the direct3d msaa involves rendering the frame and the z-buffer to a higher resolution.

I'd recommend reading the numerous PowerVR articles and reviews here to understand what is occurring on a tiler to help understand this.

However, to all intents and purposes the on-chip tile is acting as the frame buffer - because of this, and the fact that the Z reads are done on chip for each tile, the extra Z reads are virtually free anyway (KYRO can do 32 Z checks per clock). When the tile is rendered, the subsamples are combined to make the AA'ed pixels and are passed to the external frame buffer in the 'low' resolution format. So, the only costs are the extra on-chip Z reads, which can be 'free', and an increase in the number of 'tiles' the geometry is stored under (which can be offset by an increase in tile size, but that requires more transistors).

OK, thank you again.

I will do a search on Google for the articles. What are the good points and bad points of using a tile-based architecture, though? I presume bin space isn't the only drawback, and how come more cards aren't using a tile-based architecture?

I do apologise for going a bit off topic.
 
DaveBaumann said:
Well, GP is the only one I know of off the top of my head. I still have my doubts about NVIDIA using tiling. It's ironic that one of the main (perceived) issues with tiling is the memory required for the bin space; however, by utilising lossless compression for AA, IMRs are attempting to get close to 'free' AA, but still require gobs of memory to do it.

I totally agree. It is kind of sad that tiling hasn't seen widespread use. It just seems like a more efficient way of doing 3D. Brute force can only take you so far, or that's what we keep thinking.

DaveBaumann said:
Not familiar with Oak at all, but from what you describe, the 'sort-independent' term seems to be both a factor of its tiling nature and the fact that it isn't edge AA (I remember some forms of edge AA did require sorting).

Yeah, it wasn't edge AA. It did full-screen AA and it was always on - one of a couple of big issues people had with it. Another issue was that it wasn't very fast at all. However, the image quality was amazing at the time.

DaveBaumann said:
I'd still guess at the Oak being Supersampling though.

That's what I'm thinking as well. The best technical data I could find was on Reactor Critical's Russian web site. I can't read Russian, or even Babelfish Russian-to-English for that matter. ;) So here's a link for those that can read it...

http://www.reactor.ru:8101/chips/chip-warp5.shtml

Tommy McClain
 
Joe DeFuria said:
If for some reason, ATI had a 2x performance lead for an extended period of time (like 1 year), NVidia might start to have their relationships eroded.

*Ahem*

http://www.xbitlabs.com/news/story.html?id=1040150688

Which is kind of funny. They went bankrupt selling NVIDIA cards because there were too many people in the space, now they'll rush (along with everybody else) to sell ATI cards. Any idea what the outcome might be?
 
RussSchultz said:
Joe DeFuria said:
If for some reason, ATI had a 2x performance lead for an extended period of time (like 1 year), NVidia might start to have their relationships eroded.

*Ahem*

http://www.xbitlabs.com/news/story.html?id=1040150688

Which is kind of funny. They went bankrupt selling NVIDIA cards because there were too many people in the space, now they'll rush (along with everybody else) to sell ATI cards. Any idea what the outcome might be?

Didn't they also assemble the cards in the US, which increased the price and lowered margins further for VisionTek? There were other problems apparently. The 'M' word comes to mind.

edit: my english blows harder than a yak on k2 :devilish:
 
They went bankrupt selling NVIDIA cards because there were too many people in the space, now they'll rush (along with everybody else) to sell ATI cards. Any idea what the outcome might be?

Well, Russ, and you don't think VisionTek knows this?

Obviously, something is DIFFERENT about their ATI relationships / deal than their nVidia deal. Or do you suppose that whoever is running VisionTek isn't as bright as you? ;)

This doesn't mean that VisionTek will be successful. It does indicate that there is something different about the ATI relationship that makes them believe they can succeed this time around. Perhaps it's a more favorable pricing or allocation structure, perhaps they are more confident in ATI's ability to deliver on schedule, or perhaps *gasp*, they just feel that ATI's product roadmap is just that much better...who knows. Obviously not you. ;)

EDIT: That's the whole point. Despite the "established relationship" that nVidia had with VisionTek, which you (edit: not you, DemoCoder) place utmost importance on, that did not prevent VisionTek from re-entering the market based on a NEW relationship with ATI... as I said: relationships can erode and be built up, not unlike the way technology leadership can change hands...
 
Oh, and Russ, Democoder said:

Contracts with big players like Dell, Compaq, and Gateway are not really made on performance, but cost, support, relationship between the companies, etc.

The implication is that this particular period of ATI technology/performance leadership doesn't mean all that much in the grand scheme of things. You seem to disagree with this... seeing as you believe VisionTek and "everyone else" are now rushing to sell ATI cards.

So, I'm not sure where you stand on this whole issue...with me or DemoCoder?
 
RussSchultz said:
They went bankrupt selling NVIDIA cards because there were too many people in the space, now they'll rush (along with everybody else) to sell ATI cards.

AFAIK one of the contributing factors in VTek's demise was them spending quite a lot of money attempting to set up a European division - a market whose complexity they totally underestimated and failed at almost immediately.
 
As ATI has opened up to the ODM business, we have Powercolor, Sapphire, ATI and now VisionTek (plus, no doubt, multiple Korean, Chinese, and Taiwanese ODMs marketing into their own countries that we never see).

I don't care a whit about your argument with Democoder, I'm just stating that the channel was tight with everybody selling NVIDIA, and (in my opinion) going to ATI won't help one bit because their problem was their manufacturing costs, and not with the underlying product. If they can't compete in one, they can't compete in another because all things are equal in video card ODMing.

They don't:
- make the chip
- design the board
- write the drivers
- provide service in any meaningful manner

They buy parts and put them together. Lowest cost wins.
 
I don't care a whit about your argument with Democoder...

I only brought it up because your arguments seemed inconsistent to me.

I'm just stating that the channel was tight with everybody selling NVIDIA, and (in my opinion) going to ATI won't help one bit because their problem was their manufacturing costs, and not with the underlying product.

Again, assuming the management of VisionTek is as smart as you and "knows what you know", can you explain to me why VisionTek would switch suppliers from nVidia to ATI if the underlying product means zilch?

If they can't compete in one, they can't compete in another because all things are equal in video card ODMing.

Hmmm....you wouldn't be playing "Arm-chair CEO", now, would you Russ? ;)
 
I don't think VisionTek really "switched" suppliers. VisionTek went bust; the company that bought their name has decided to market ATI boards.

I think the relationships that DemoCoder was referring to were the PC manufacturer relationships. Those are the relationships that can push your chips into a lot of boxes; board vendor relationships don't mean as much once you've reached a threshold level of distribution.
 
antlers4 said:
I don't think VisionTek really "switched" suppliers. VisionTek went bust; the company that bought their name has decided to market ATI boards.

I would have to agree as well. The article that Joe linked at xbit mentioned that Hartford Computer Group purchased the brand names. It didn't say the actual company or assets (technology or employees) were purchased. It could be a totally different company.

[EDIT]
I would also like to add that, from older articles at xbit (linked at the bottom), former employees of VisionTek started BFG Technologies. BFG is now NVIDIA's premier board partner in North America. This gives more credit to the idea that VisionTek is not the same company it was before it went bankrupt.
[/EDIT]

Tommy McClain
 
Ouch. Oh. Burn. :cry:

Except I backed my opinion up with facts, gleaned from my experience of working with ODMs in the consumer electronics market, where, yes, cost is king and even people like SonicBlue send their products to be built by Asian ODMs. I don't think it matters whether it's ATI or NVIDIA; the component costs are fixed, and the only things VisionTek can compete on are a) brand name, and b) cost to manufacture. They obviously went bust before; simply changing the IHV supplier won't solve the problem, as they've simply swapped one set of fixed costs for another. I can only hope that, if they're re-entering the market, they've addressed the root cause of their insolvency.

One thing I don't understand is how my argument (did I have an argument? I simply made a comment) was inconsistent. I think you're spoiling for a fight where one just doesn't exist.
 