Shifty Geezer said: Regards RSX as an 11th hour solution: announced a year before release. If we take system development as starting in 2002, same as XB360, RSX came in after 2 years, 2 years into a 3-year project. That makes it an 8th hour solution.

Whatever hour we're in, I think the point is that the RSX is not a custom GPU for the PS3 but a tweaked G70 (which was opted for because of uncertainties earlier on).
Titanio said: Some would disagree that we're at a point where it makes total sense to unify the pipe right now - that doesn't preclude it being a better choice in the future, but for now, they still might be different enough that dedicated hardware is a better choice, or at least a valid alternative.

Those people happen to be the people designing the old-generation RSX going into PS3. No wonder they think Xenos is impossible, because in terms of shader language and the graphics pipeline, it is the next generation of graphics, DX10, and they're still thinking in terms of DX9 for PS3. Everything they do and say over the next 18 months will be to tell you that USA is pointless crap.
Titanio said: Especially in a closed system with decent tools, where issues such as utilisation are not as strong a concern.

Utilisation is always a concern. It's the equivalent of having a dual-issue CPU that issues only one instruction per clock, 50% of the time. Utilisation IS THE BIG DEAL with Xenos.
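To make the utilisation argument concrete, here is a minimal back-of-envelope sketch. All unit counts and workload mixes are hypothetical illustrations, not actual RSX or Xenos figures; it just shows why a fixed vertex/pixel split leaves hardware idle whenever the workload mix doesn't match the hardware ratio:

```python
# Hypothetical comparison: fixed VS/PS split vs a unified shader pool.
# Numbers are illustrative only, not real RSX/Xenos specifications.

def fixed_split_throughput(vs_units, ps_units, vertex_share):
    """Effective throughput with dedicated units: the workload can only
    proceed as fast as its bottleneck stage, so the over-provisioned
    side sits idle."""
    pixel_share = 1.0 - vertex_share
    return min(vs_units / vertex_share if vertex_share else float("inf"),
               ps_units / pixel_share if pixel_share else float("inf"))

def unified_throughput(total_units):
    """A unified pool can assign every unit useful work regardless of mix."""
    return total_units

for vertex_share in (0.1, 0.25, 0.5, 0.75):
    fixed = fixed_split_throughput(vs_units=8, ps_units=24,
                                   vertex_share=vertex_share)
    unified = unified_throughput(total_units=32)
    print(f"vertex share {vertex_share:>4}: "
          f"fixed split {fixed:6.1f} units' worth of work/clk, "
          f"unified {unified} units' worth of work/clk")
```

Under this toy model the dedicated split only matches the unified pool when the workload happens to hit the hardware's built-in 8:24 ratio exactly; any other mix strands units on one side or the other.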
Titanio said: Three guesses who that is. It's a different perspective, but right now we don't know who's right and who's wrong (if it's a case of being right or wrong at all, which I doubt it is...)

Definitely wrong. They've been caught on the hop like R300 v NV30.
Titanio said: The pixel and vertex shaders in G70, for example, are tuned and refined for their respective workloads to a very low level. I can only guess that accommodating each other's workloads could invalidate certain assumptions and so forth that they could otherwise make. In other words, there might well be a cost in terms of the level of optimisation possible for a specific task.

The list of exceptions that prevents RSX performing at 100% efficiency is as long as your arm. And they're not trivial.
Titanio said: VS and PS have been getting more and more similar, but are we at a point now where there is no cost in terms of optimisation and performance to unify them? That's the question.

There's no question in the DX10 future. Ask yourself why NVidia is designing their own unified architecture GPU. Like I said earlier, we'll prolly see it in 2007.
Lysander said: DX10 supports unified shaders (and 64-bit CPUs), so it is natural that it is the foundation for X2 development tools. XNA was an initiative a year ago (with Allard as chief) but is now the software environment for making games on X2.
Titanio said: DX10 is not here now. SM4.0 is not here now. Are pixel shading and vertex shading interchangeable without tradeoff now?

Personally I've yet to see any significant argument that this isn't the case now - fundamentally, the ALU's of both the VS and PS units are primarily performing vector math. The most significant argument for VS and PS being tailored to different things was put forth by Kirk: PS are designed to be much more latency tolerant as they will be dealing with texture reads. However, an architecture such as Xenos removes this issue by threading the texture requests down a different path, allowing the ALU's to operate on other threads, oblivious to the latencies involved with the texture reads; the thread that requested the texture reads only comes back into context on the ALU's once the texture data is ready. Shader ALU's also have their own native latency, and this, being a known quantity, is architected into the design of Xenos in order to circumvent it as best as possible.
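As a toy illustration of the "both are primarily vector math" point, here is a sketch in plain Python mimicking shader arithmetic (it is not any real shader ISA; all function names are made up for illustration). Both a vertex transform and a per-pixel lighting calculation bottom out in the same dot products and multiply-adds:

```python
# Toy illustration: the same vec4 arithmetic underlies both shader types.
# Plain Python standing in for shader math; not a real shader ISA.

def vec4_madd(a, b, c):
    """Fused multiply-add, the bread-and-butter op of both VS and PS ALUs."""
    return tuple(a[i] * b[i] + c[i] for i in range(4))

def dot4(a, b):
    return sum(a[i] * b[i] for i in range(4))

# "Vertex shader" work: transform a position by a 4x4 matrix (4 dot products).
def transform_vertex(matrix_rows, position):
    return tuple(dot4(row, position) for row in matrix_rows)

# "Pixel shader" work: simple N.L diffuse lighting (a dot product and a scale).
def shade_pixel(normal, light_dir, albedo):
    n_dot_l = max(0.0, dot4(normal, light_dir))
    return vec4_madd(albedo, (n_dot_l,) * 4, (0.0,) * 4)

identity = ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))
print(transform_vertex(identity, (1.0, 2.0, 3.0, 1.0)))
print(shade_pixel((0.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0),
                  (0.8, 0.2, 0.2, 1.0)))
```

If the core ops are identical, the structural case for one pool of ALU's serving both workloads rests mainly on the latency question discussed above.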
Lysander said: Titanio, who is making DX10, who is making XNA, who is making Xbox?
Sure, it is not out (yet) for PC people. Do you think that someone (a software company) would create hardware as complex as X2 without proper software?
"Shader levels" don't really have any have any direct effects on texture read latency - this is purely down to the fact that textures are addressed from either cache or RAM, which inevitably takes longer than operation performed directly on the chip, and the latencies involved with texturing are not entirely predicable. Shader utilisation can have some effect in that the more shaders are utilised by applications the higher the ALU-instruction-to-texture-read ratio gets higher, putting slightly less onus on the number of texture reads (but still making it critical not to stall ALU operations).If latency issues were the ONLY differentiating characteristic between 3.0(+) level shaders, that would certainly make unification easier. But I'd be wary to suggest it's the only difference, just because others have not been highlighted. It's the most obvious difference in Nvidia's model, but the only one?
Titanio said: Even with NO loss of efficiency in a US vs a dedicated PS/VS, in a closed box I think the main advantage would come down to flexibility (of instruction mix, guaranteed utilisation across arbitrary mixes) rather than pure performance in the specific instances we're talking about here.

Flexibility and orthogonality for developers are nice and all, but I actually now do believe that performance is a key issue - not just from unification of the ALU's to better balance the VS/PS ops over the available resources, but also from the threaded nature of the chip, which should allow for better utilisation of those ALU's generally.
BlueTsunami said: I'm not sure if it's PR or if this has been talked about already, but this was posted in 2004 and he states they'd been working together for two years prior to 2004... so you could say they were working together since late 2002 / early 2003. I'm not sure you could say, working from then, that it's an 11 o'clock move that ended up putting a G70 GPU in the PS3. Maybe this has already been debunked. *Shrugs*
london-boy said: Uhm... Isn't the USA scheduling totally abstracted away from the developers? That's my understanding of it. There is no "developing for the USA". It just works. Whether it works efficiently or not is down to the compiler or whatever routines the programmers don't have much control over.
I could very well be wrong, but that was my understanding of it.
aaronspink said: I've been working with Dave for the past two years. We keep trying to work out when we're going to get together to talk about possibly utilizing some potentially beneficial technology. But he keeps putting off making the major decision on the primary purpose of our proposed meetings and instead focuses on the secondary deliverable for which we are committed to complete. But then last week, after his primary platform action plan failed, he agreed that his secondary backup choice would match our subject for the primary purpose of our proposed meetings.
Basically, it is entirely possible that Nvidia and Sony were working on something else (like, say, tools and a license for Cg) all the while Nvidia was trying to get more from Sony. After Sony's original plan fell through, they went to Nvidia and needed the quickest thing they could get. The rest is where we are now.
Aaron spink
speaking for myself inc.
vblh said: I've followed this thread from its beginning and, to be honest, I can understand both sides. However, it seems the differences being argued can be summed up like this:

Xenos is the "future", its unified shader arch etc. being the "jet engine" of GPU tech, if you will.

RSX (for comparison purposes) is the Rolls-Royce Merlin of GPU tech: tried, tested & proven.

Both have their limitations & strong points. None of us here can truly say which will be the "winner". Early jet engines had their limitations & so will Xenos. I can't write off either of them, & from reading Dave, Jawed & others on this board in the "know", I'd say that USA is the future. However, I'm not seeing them saying that RSX is dead in the water either. Dave has insider access to GPU tech from both companies & seems to believe in the tech put into Xenos as the way to go for the future. I don't see his leaning towards Xenos as bias.

As someone who will be buying a next-gen console (360 for sure, & one of the others if I like some of the games I see), I don't need to feel that my console is the BIGGEST, BADDEST TECH out there. Buy the damn things because you like the games, not because the CELL or XENOS is the future of CPUs or GPUs. None of us needs to justify our purchase to anyone.

Nice post.
hugo said: Alpha Spartan, the all-green pics that you've put up are actually Snake's view through his vision lens. Why don't you put up the actual scene where there were soldiers and bots running across the street? It would be a better comparison.

Find it and put it up. I had a hard time finding these.
ihamoitc2005 said: He also said this.

GI: You've always pushed the hardware you're working on quite a bit. What have you been able to do on the Xbox 360 that you haven't been able to do on the first Xbox?

Itagaki: At E3 we were one of the few developers to show their game demo on Xbox 360, although it wasn't playable. Most of the games were 15 fps. With our E3 trailer we tried getting it to 60 fps, but it ended up turning out to be around 45. That was E3. Now we've brought it up to 60. To be more specific, maybe it's about 55 fps. From now until launch we'll bring it up to 60. Other developers are now trying to bring their games up to 30 fps. That's a fact. Can you think of any other games that are running at 60 fps?

GI: You've said before you want to develop for the most powerful console out there. You must be pretty confident in the Xbox 360.

Itagaki: Yes. I hope so. I think Xbox 360 is the best game console on the earth. It's better than PlayStation 3.

GI: Can I ask you why?

Itagaki: PS3 has too complicated of an architecture.
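As a side note on what those frame-rate numbers mean in engineering terms, here is the plain arithmetic (nothing console-specific) converting the targets Itagaki mentions into per-frame time budgets:

```python
# Frame-rate targets expressed as per-frame time budgets (milliseconds).
for fps in (15, 30, 45, 55, 60):
    print(f"{fps:>2} fps -> {1000 / fps:5.1f} ms per frame")

# Going from 45 fps to 60 fps means cutting the frame from ~22.2 ms
# to ~16.7 ms, i.e. finding roughly 5.5 ms (about 25%) of savings.
```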
liverkick said: Too hard? I guess that makes Itagaki a Ninja Dog in games development.