version said:
mpeg4 removed
a SPE faster than 8 vertex shader removed too
32 pixelshader + 4 redundancy pixelshader will be fine

I love you, version, 'cause you keep mixing dreams with reality. Go on dude, don't stop, please.
"EE can decode 1080p in software?"

Yep, I think we even have a poster around here that wrote the software for it.
version said:
mpeg4 removed
a SPE faster than 8 vertex shader removed too
32 pixelshader + 4 redundancy pixelshader will be fine

Fog ALU removed and replaced with FP32 normalize?
alexsok said:
Titanio,
Well, novelty is a good feature, right, and undeniably G70 is a fast chip; actually, it's a top performer that is beating everything to a pulp at this point. BUT, as a forward-looking, novelty-seeking product, it undermines all the rumours about it and the RSX.

alexsok said:
It doesn't seem to incorporate any new ideas that ATI have already included in Xenon, which is why I'm asking if by chance the chips are pretty much identical (highly doubtful).
Titanio said:
Huh? What rumours were you hearing, exactly? This one came out pretty much in line with the rumours, but with a few surprises here and there.

A 'helper' chip for one thing, providing shadowing and illumination functions. In RSX anyway.
Rockster said:
Jaws, those instruction counts are not maximums across the entire chip, as you suggest. Clearly they relate to the maximum number of execution units active per clock, but which units those are hasn't been clearly defined. The 136 likely includes things like norm, fog, etc. Surely you aren't suggesting that ALUs must sit idle for fog or vertex fetch because of a lack of instruction slots.
psurge said:
Also, couldn't one drop support for most of the weird AA modes (like quincunx, 8xS, etc.) and offer just 0/2/4x?

8xSSAA is a "free" feature, transistor-wise.
Titanio said:
They're not identical, their architectures are completely different. I'm not sure where you got the idea that they'd be of similar designs?

Now where did you hear that they are COMPLETELY different? The only company that went on record and dispelled all the rumours is ATI, who explicitly stated that the architecture of Xenos has been built from the ground up (which pretty much speaks for itself, what with the unified PS & VS shaders and other technological innovations, which PC accelerators will catch up on in 1 to 1.5 years), while NVIDIA have confirmed that they have no intention in the near future of reworking their architecture and unifying the dissociated units, and that the RSX will be based on a more traditional architecture (ergo the G70 assumption). Besides, the next-generation specs are Longhorn-oriented and all the new features would have to adhere to that reworked framework; seeing as RSX isn't compatible with it, I surmise it will offer pretty much the same features as G70, perhaps with subtle improvements. Now, I didn't say that the X360 is more powerful than the PS3, so please, no flame wars here, but to tap into all the next-gen tech NV will have to adopt a new approach and start from scratch, and after seeing G70 in action and reading NV's comments on RSX, I HIGHLY doubt they will actually differ by all that much (higher clock speeds, some cache optimization, perhaps more PS & VS units, the usual polishing).
mckmas8808 said:
Here is something from the 1Up website.

"Ujesh Desai also mentioned that while this very technology is being developed for the PS3 as well, since there's plenty of time to improve the performance gains on what they've already accomplished, the PS3's performance will surely outpace what was demonstrated at this event. In fact, the key to Nvidia's SLI strategy is to show, in future generations, what will be possible with single chipsets, using dual graphics cards running in parallel now. Meaning, the next gen of graphics technology can likely be gauged using two of the current cutting-edge tech. Based on what we saw today, it's a promising glimpse into the future of each console's visual potential, perhaps finally fulfilling the promise of Hollywood-quality CG in the context of real-time, interactive gameplay."

So the RSX will be comparable to two G70s running in SLI? Man, that's huge.
If you're including texture ops for Xenos in addition to its 96 instructions/cycle, then you might as well include what G70 is doing with its other 72 instructions per cycle, even if they're not texture ops... otherwise ~30-40 billion instructions per second are being ignored!
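A quick back-of-the-envelope check on that 30-40 billion figure (just a sketch; the 430 MHz G70 core clock and the 550 MHz RSX clock are the commonly quoted numbers, assumed here rather than taken from the post):

```python
# Rough sanity check on the "~30-40 billion ignored instructions/sec" figure.
# Assumes the widely quoted clocks: G70 core at 430 MHz, RSX at 550 MHz.
spare_slots_per_clock = 72  # issue slots left over in the G70 example below

for name, clock_hz in [("G70 @ 430 MHz", 430e6), ("RSX @ 550 MHz", 550e6)]:
    spare_per_sec = spare_slots_per_clock * clock_hz
    print(f"{name}: {spare_per_sec / 1e9:.1f} billion spare instruction issues/sec")

# -> roughly 31.0 and 39.6 billion, which is where the "30-40 billion" range comes from.
```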
Jawed said:
But designing a GPU for "peak" is clearly not working.
In all these reviews, the best case we're seeing is a 50% speed-up over the 6800 Ultra in shader-limited cases. That 50% speed-up can be entirely explained by increased pipelines and clock.

One of the shader benches here at B3D showed a 119% speedup over the 6800 Ultra. Two of the ShaderMark shaders showed less than a 50% speedup (which can likely be attributed either to an immature compiler, or to the limitation not being in the shader). Most were much higher.
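For reference, the pipes-times-clock scaling being invoked here works out roughly like this (a sketch assuming the stock parts: 16 pixel pipes at 400 MHz for the 6800 Ultra, 24 pipes at 430 MHz for the 7800 GTX):

```python
# Theoretical pixel-shader throughput scaling from pipeline count and core clock alone.
ultra_6800 = 16 * 400   # pipes * core MHz
gtx_7800   = 24 * 430

scaling = gtx_7800 / ultra_6800
print(f"pipes x clock scaling: {scaling:.2f}x")  # ~1.61x, i.e. a ~61% ceiling

# A ~50% measured speed-up fits inside that ceiling; a 119% speed-up does not,
# so the latter has to come from per-pipe improvements as well (e.g. the extra MADD ALU).
```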
Jaws said:
Xenos can only issue 96 instructions/cycle ~ 48 vec4 + 48 scalar. It doesn't have any further instructions available per cycle for any other execution units? :?
Xenos ~ 480 FLOPs/cycle + NOTHING ELSE?
G70 can issue 136 instructions per cycle, e.g. 64 instructions on 56 vec4 + 8 scalar ~ 464 FLOPs/cycle, AND it still has 72 unused instructions per cycle.
G70 ~ 464 FLOPs/cycle + 72 instructions/cycle available for further operations.
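The FLOP figures above follow from counting each unit as issuing a multiply-add per cycle, i.e. 2 flops per component; a small sketch of that arithmetic:

```python
def madd_flops_per_cycle(vec4_units, scalar_units):
    # A multiply-add counts as 2 flops per component: vec4 = 8 flops, scalar = 2 flops.
    return vec4_units * 4 * 2 + scalar_units * 1 * 2

print(madd_flops_per_cycle(48, 48))  # Xenos: 48 vec4 + 48 scalar -> 480 flops/cycle
print(madd_flops_per_cycle(56, 8))   # G70 example: 56 vec4 + 8 scalar -> 464 flops/cycle
```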
sonycowboy said:
Both the EE and certainly the CELL are FP monsters, which is exactly what decoding needs.

Yep, that floating point does wonders, considering that all video is integer data.

Aaron Spink
speaking for myself inc.
Even though features haven't been added to the vertex and pixel shaders directly, the increased power will allow game developers more freedom to create more incredible and amazing experiences. Though not seen in any game out now or coming out in the near term, the 7800 GTX does offer the ability to render nearly "Spirits Within" quality graphics in real time. Games that live up to this example (such as Unreal Tournament 2007) still have quite a ways to go before they make it into our hands and onto our hardware, but it is nice to know the 7800 GTX has the power to run these applications when they do come along.
mckmas8808 said:
Here is something from the 1Up website. [...] So the RSX will be comparable to two G70s running in SLI? Man, that's huge.

Link: http://www.1up.com/do/newsStory?cId=3141621

Also, Derek from AnandTech wrote:

"Overall, we consider this a successful launch. Aside from the performance of the 7800 GTX, we can infer that the PS3's RSX will be even more powerful than the G70. As RSX will be a 90nm part and will still have some time to develop further, the design will likely be even easier to program, faster, and full of more new features."

I know this is expected, but it just makes me feel better reading it from them.

Maybe they got rid of all the needless transistors and added an extra G70 core.