SGI sues ATI

If SGI no longer sells, nor plans to sell, any future visualization solutions, doesn't that mean ATI can't take the "you are infringing on our patent Z, so let's cross-license" route? Assuming that is the case and that the patent is good, SGI basically has nothing to lose... and in fact logic dictates they should sue NVidia and the other 3D card companies as soon as possible.
 
They won't be suing NV:

The patent, which was issued to SGI on 18 November 2003, claims intellectual rights for "A floating point rasterization and frame buffer in a computer system graphics program," as well as "the rasterization, fog, lighting, texturing, blending, and antialiasing processes operate on floating point values," which ATI has allegedly used without SGI's approval, according to the lawsuit.

SGI works with ATI's competitors, including Nvidia, which makes it understandably upset at the prospect of ATI copying its proprietary technology. "SGI has licensed this technology to ATI's major competitors and, as I have previously been stating publicly, that SGI intends to aggressively protect and enforce its IP. This is the first visible step in that process," said SGI's CEO Dennis McKenna.
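
To make the claimed subject matter concrete, here is a minimal sketch (entirely illustrative; the function names are mine, not SGI's or ATI's) of the difference between a traditional fixed-point frame-buffer blend and the all-floating-point blend the patent describes:

    #include <stdint.h>

    /* Classic integer frame buffer: blend two 8-bit color channels.
       Inputs and output are 8-bit, so each pass quantizes the result. */
    uint8_t blend_fixed(uint8_t src, uint8_t dst, uint8_t alpha)
    {
        return (uint8_t)((src * alpha + dst * (255 - alpha)) / 255);
    }

    /* What the patent claims: the same operation carried out on
       floating-point values end to end, preserving range and precision. */
    float blend_float(float src, float dst, float alpha)
    {
        return src * alpha + dst * (1.0f - alpha);
    }

The claim is about doing the latter in hardware across the whole pipeline (rasterization, fog, lighting, texturing, blending, antialiasing), not about any particular formula.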
 
Is it just me or is this patent total garbage?

This doesn't really seem worthy of a patent; in reality it is an obvious continuation of what had already been done.

But as advances in semiconductor and computer technology enable greater processing power and faster speeds; as prices drop; and as graphical applications grow in sophistication and precision, it has been discovered by the present inventors that it is now practical to implement some portions or even the entire rasterization process by hardware in a floating point format.


I would have expected it to include some novel technique for calculating or storing floating-point values, not just an "oh look, we were the first to patent using more bits for better graphics!" What's next, patenting the use of a 32-bit program counter to run longer fragment shaders now that more silicon is available?
 
Is it just me or is this patent total garbage?
Presumably you must thank
Primary Examiner: Powell; Mark R.
Assistant Examiner: Havan; Thu-Thao
for this.

This doesn't really seem worthy of a patent; in reality it is an obvious continuation of what had already been done.
One would think so, but in my humble experience the USPTO does seem to have tightened up its examination process lately.
 
I'm going to file a patent for doing that stuff in some binary-coded-decimal format that allows arbitrarily large integers, so we can have Infinitely High Dynamic Range without the imprecision of FP!

Well, I'll do it, but I need $200,000 first. Can some of you guys help?
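
In the spirit of full disclosure, the core of my invention is just unpacked BCD, one decimal digit per byte (a toy sketch; every name here is made up):

    #include <stdint.h>
    #include <stdlib.h>

    /* Unpacked BCD number: one decimal digit per byte, least significant
       digit first. "Dynamic range" limited only by what malloc will give you. */
    typedef struct {
        uint8_t *digits;
        size_t   len;
    } bcd_t;

    /* Add two BCD numbers digit by digit with carry -- the entire
       Infinitely High Dynamic Range blending pipeline in a dozen lines. */
    bcd_t bcd_add(bcd_t a, bcd_t b)
    {
        size_t n = (a.len > b.len ? a.len : b.len) + 1;
        bcd_t r = { calloc(n, 1), n };
        uint8_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint8_t s = carry;
            if (i < a.len) s += a.digits[i];
            if (i < b.len) s += b.digits[i];
            r.digits[i] = s % 10;
            carry = s / 10;
        }
        return r;
    }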
 
I'm going to file a patent for doing that stuff in some binary-coded-decimal format that allows arbitrarily large integers, so we can have Infinitely High Dynamic Range without the imprecision of FP!

Well, I'll do it, but I need $200,000 first. Can some of you guys help?
200K?! You can do it much cheaper than that!
 
While I have no interest in defending the patent system, I'll point out that 8 or 9 years ago engineers spent time researching a good compromise floating-point representation (fp16/s10e5, not fp24, not fp32), one that would be cost-effective in high-end ASIC implementations. That choice was not obvious and required real work to figure out the trade-offs among different representations. Given that it reappeared as 'half' in OpenEXR and as float16 in post-millennium GPUs, it seems pretty easy to declare it all trivial now. The same goes for hierarchical Z, texture compression formats, anisotropic filtering, MSAA, cube maps, etc., once someone else has done the work or silicon has progressed far enough that cost is no longer a factor.
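
For the curious: s10e5 is exactly the layout that later standardized as IEEE 754 half precision, with 1 sign bit, 5 exponent bits (bias 15), and 10 mantissa bits. A minimal decoder, assuming the standard treatment of subnormals and infinities:

    #include <math.h>
    #include <stdint.h>

    /* Decode an s10e5 (IEEE 754 half precision) value to a float.
       Layout: [1 sign][5 exponent, bias 15][10 mantissa]. */
    float half_to_float(uint16_t h)
    {
        int   sign = (h >> 15) & 0x1;
        int   exp  = (h >> 10) & 0x1F;
        int   man  =  h        & 0x3FF;
        float s    = sign ? -1.0f : 1.0f;

        if (exp == 0)   /* zero and subnormals: no implicit leading 1 */
            return s * ldexpf((float)man, -24);   /* (man / 2^10) * 2^-14 */
        if (exp == 31)  /* infinities and NaNs */
            return man ? NAN : s * INFINITY;
        /* normal numbers: implicit leading 1, exponent bias of 15 */
        return s * ldexpf((float)(man + 1024), exp - 25); /* (1 + man/1024) * 2^(exp-15) */
    }

The research described above was deciding that these particular widths, rather than fp24 or fp32, hit the cost/quality sweet spot.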
 
While I have no interest in defending the patent system, I'll point out that 8 or 9 years ago engineers spent time researching a good compromise floating-point representation (fp16/s10e5, not fp24, not fp32),
One might argue that 16-bit is an obvious choice, given that it is a straight power of 2 smaller than the standard 32-bit fp. <shrug>

Furthermore, I believe 3DFX used a 16-bit floating-point format for the Z buffer in their rendering chips. One might argue that it would be obvious to apply this to other stages of the pipeline.
 
That choice was not obvious and required real work to figure out the trade-offs among different representations.
The choice was not obvious, but every step necessary to figure out the choice was obvious. There wasn't even a need for an intuitive leap of faith to pick a configuration; the design space to explore was minuscule: 16 or 24 bits, sign bit or two's complement, a couple of usable exponent sizes for each... that's it. 0% inspiration, 100% perspiration required.
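
To put numbers on how small that design space is, a short brute-force loop (the candidate widths are my reading of the options above, not an authoritative list) prints the dynamic range and precision of each configuration:

    #include <stdio.h>

    /* Walk the tiny design space: for each candidate (total bits,
       exponent bits), report exponent range and decimal precision. */
    int main(void)
    {
        const int total[] = { 16, 24 };
        for (int t = 0; t < 2; t++) {
            for (int e = 4; e <= 8; e++) {          /* candidate exponent widths */
                int m    = total[t] - 1 - e;        /* mantissa bits (1 sign bit) */
                int bias = (1 << (e - 1)) - 1;
                printf("fp%d s%de%d: exponents 2^%d..2^%d, ~%.1f decimal digits\n",
                       total[t], m, e, 1 - bias, (1 << e) - 2 - bias,
                       (m + 1) * 0.30103);          /* log10(2) per mantissa bit */
            }
        }
        return 0;
    }

For s10e5 that works out to normal exponents from -14 to 15 and about 3.3 decimal digits, which is the trade-off that ultimately won.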
 
It's hard work to make a stack of bricks as tall as the Sears Tower. That doesn't mean that making a huge stack of bricks bigger than any previous stack of bricks is an idea worthy of a patent.
 