Faf,
Can APUs actually access external memory without having to have the APU DMAC DMA it into their own RAM?
Gubbi said:
It is exactly this mechanism I'd like to know more about. Are we talking about explicitly setting up a DMA with source, destination and range for every access, then waiting for it to complete (polling for completion?)? That's a lot of overhead just to read one node in a tree.

Well, if you can occupy the channel for the duration of the tree scan you can simply loop the DMA chain into itself and have it stall on every iteration. Then you only need to update the source address and release the stall on the channel every time you load.
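The explicit setup-then-poll pattern Gubbi is describing can be sketched in plain C. Everything below (`DmaChan`, `dma_start`, `dma_poll`, the node layout) is an invented stand-in that simulates the idea, not the real Cell/APU interface: one full DMA setup plus a completion wait per tree node visited.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical tree node living in "external" memory.
 * left/right are byte offsets into external memory, -1 = no child. */
typedef struct Node { int key; ptrdiff_t left, right; } Node;

/* Toy DMA channel: copies bytes from external memory into a local buffer. */
typedef struct { const char *ext; char *local; int busy; } DmaChan;

static void dma_start(DmaChan *ch, ptrdiff_t src, size_t size) {
    memcpy(ch->local, ch->ext + src, size);  /* real hardware would do this asynchronously */
    ch->busy = 1;
}

/* Returns nonzero once the transfer has completed, and clears the flag. */
static int dma_poll(DmaChan *ch) { int done = ch->busy; ch->busy = 0; return done; }

/* Walk a binary tree: one full DMA setup + completion wait per node. */
static int tree_find(DmaChan *ch, ptrdiff_t root, int key) {
    ptrdiff_t off = root;
    while (off >= 0) {
        Node n;
        dma_start(ch, off, sizeof n);         /* source, destination, range... */
        while (!dma_poll(ch)) { /* spin */ }  /* ...then poll for completion */
        memcpy(&n, ch->local, sizeof n);
        if (n.key == key) return 1;
        off = (key < n.key) ? n.left : n.right;
    }
    return 0;
}
```

The per-node setup/poll pair is exactly the overhead being complained about; Faf's looped, self-stalling DMA chain amortizes the setup so only the source address changes per iteration.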
Guden said:
Can APUs actually access external memory without having to have the APU DMAC DMA it into their own RAM?

Not according to any of the patents we've seen so far.
Only if we retrain all the level designers (who aren't usually trained programmers, let alone low-level programmers), gameplay coders etc. to use 128K only with no random pointer walks. Similar problems will occur on Xenon due to massive cache miss costs. On PS3 it simply won't run, as they won't be able to access outside the 128K memory.
A few years ago Sony said only 4 developers could make games for PS2; we managed to disprove that prediction. Does anybody want a world where the only developers are EA, Sony/Enix, Nintendo and SEGA making sequels to their franchises?
CTO: Not yet. But we've settled the basic policy and discussed with some vendors. Contrary to PS2, we will provide library with devoted support like PSP.
ERP said:
All those old codebases aren't just going to get magically replaced. Given the lead time on a launch title, the codebases will pretty much have to evolve rather than be rewritten. I have enough problems trying to explain to other programmers on my projects why what they are doing will be slow on PS2; PS3 has the potential to be much more complex.

Exactly, it's not about the 'old timers' (BTW, anybody who's been in the business longer than one console generation is an old timer; not many people survive >3-5 years...) like ERP and I. We are used to packing DMA chains and writing game logic in ASM.
It's the guys who have grown up on PC-like architectures that are going to get the shock. Today there's nothing like seeing the newbie who can wrap DirectX around their little finger meet the PS2 for the first time.
archie4oz said:
It's the guys who have grown up on PC-like architectures that are going to get the shock. Today there's nothing like seeing the newbie who can wrap DirectX around their little finger meet the PS2 for the first time.
Been there, done that, enjoyed the popcorn...
Guden Oden said:
Deano,
What you're saying basically sounds as if programmers aren't interested in doing their job; they just want to lean back, relax and have the money come pouring into their pockets all by itself.
Certainly it's NOT too much to ask that programmers learn about the hardware they're writing code for! That's why Japanese developers typically kick US coders' asses when it comes to slickness, especially on "weird" hardware such as PS2; because they're willing to get down with it.
US coders seem to be stuck in their lazy-ass rear-wheel-drive-automatic-transmission way of thinking even when out on the race track, unable to understand why they're getting outclassed by manual transmission 4WD drivers. What, you mean it's not possible to win by just steering and stepping on the gas pedal?! You gotta be kidding me!
Ok, so I'm generalizing, but when listening to PC dev-people like Sweeney and Carmack, that's certainly the way they come off; they're so obsessed with their own ease and comfort when coding. So programming for a console requires WORK and THINKING. Tough shit pal, if that's too much to ask of you maybe you should run home to mama instead, or switch careers and apply for a job with MS over in Redmond, I'm sure nobody will ask you to do any hard optimization work over there!

This is one of the more ridiculous posts I've read in a while. A game programmer's job is to create a great game. Of course some of this time will involve performance optimizations, but I don't blame many developers for preferring to spend time on game logic instead of graphics. It doesn't mean they're lazy and just interested in money.
Mfa said:As long as a processor can have enough outstanding prefetches/DMA-requests you can do vertical multithreading in software.
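Mfa's software vertical multithreading can be sketched as a toy round-robin scheduler in plain C. The `Ctx` contexts, the fixed `LATENCY`, and the cycle accounting below are all invented for illustration: the point is that each context issues its fetch and yields instead of stalling, so several fetches stay outstanding at once and their latencies overlap.

```c
#include <assert.h>

enum { NCTX = 4, LATENCY = 3 };  /* cycles a simulated "DMA" takes to land */

typedef struct {
    int pending;   /* cycles left until this context's outstanding fetch lands */
    int sum;       /* per-context work accumulator */
    int next;      /* next value this context will "fetch" */
    int remaining; /* fetches still to do */
} Ctx;

/* Run all contexts round-robin; a context whose fetch has not landed
 * simply yields, so the other contexts keep their fetches in flight. */
static int run(Ctx ctx[NCTX]) {
    int cycles = 0, live = NCTX;
    while (live > 0) {
        for (int i = 0; i < NCTX; i++) {
            Ctx *c = &ctx[i];
            if (c->remaining == 0) continue;              /* this context is done */
            if (c->pending > 0) { c->pending--; continue; } /* not landed: yield */
            c->sum += c->next;                            /* data arrived: consume */
            c->next++;
            if (--c->remaining == 0) { live--; continue; }
            c->pending = LATENCY;                         /* issue the next fetch */
        }
        cycles++;
    }
    return cycles;
}
```

With four contexts doing four fetches each, all four latencies are hidden behind each other; a single context doing the same work alone would stall for every fetch.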
Guden said:
Can APUs actually access external memory without having to have the APU DMAC DMA it into their own RAM?
Fafalada said:
Not according to any of the patents we've seen so far.
Guden Oden said:I think the 100+ team of highly experienced engineers that designed this micro-architecture already thought of these issues - and more besides. Don't you?
I think the 100+ team of highly experienced engineers that designed the [Pentium 4 Tejas/Jayhawk] micro-architecture already thought of these issues [with heat, current leakage, excessive power consumption, and pipeline stalling] - and more besides. Don't you?
Panajev2001a said:I agree with what you said, DeanoC: software is the challenge, and IMHO that is one of the reasons why Sony/SCE added IBM to the list of major partners for PlayStation 3 (well, for the technology they wanted to use in PlayStation 3) and did not just stay with Toshiba and Rambus.
DeanoC said:
Only if we retrain all the level designers (who aren't usually trained programmers, let alone low-level programmers), gameplay coders etc. to use 128K only with no random pointer walks. Similar problems will occur on Xenon due to massive cache miss costs. On PS3 it simply won't run, as they won't be able to access outside the 128K memory.
Well, they do have more than 128 KB available: they have to split the program into chunks, or design a sort of paging system that virtualizes data and instruction memory for the APU.
Also, if we are pipelining APUs then we limit each step to 128 KB (with the ability to DMA more data to/from the shared DRAM), as each step would be running on a different APU.
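The paging idea could look roughly like this: a minimal direct-mapped software page cache in plain C. The page size, slot count, and the synchronous `memcpy` standing in for a DMA transfer are all invented toy values, not Cell specifics; every access goes through a translation that pulls the containing page into the local store on a miss.

```c
#include <assert.h>
#include <string.h>

enum { PAGE = 256, NSLOTS = 4 };  /* toy sizes; a real APU would carve
                                     these out of its 128 KB local store */

typedef struct {
    const unsigned char *extmem;       /* "shared DRAM" backing store */
    unsigned char slots[NSLOTS][PAGE]; /* resident pages in local store */
    long tags[NSLOTS];                 /* page number in each slot, -1 = empty */
    int misses;
} PageCache;

static void pc_init(PageCache *pc, const unsigned char *extmem) {
    pc->extmem = extmem;
    pc->misses = 0;
    for (int i = 0; i < NSLOTS; i++) pc->tags[i] = -1;
}

/* Translate an effective address into a local-store pointer,
 * "DMA-ing" the containing page in if it is not resident. */
static unsigned char *pc_access(PageCache *pc, long ea) {
    long page = ea / PAGE;
    int slot = (int)(page % NSLOTS);   /* direct-mapped placement */
    if (pc->tags[slot] != page) {      /* miss: fetch the whole page */
        memcpy(pc->slots[slot], pc->extmem + page * PAGE, PAGE);
        pc->tags[slot] = page;
        pc->misses++;
    }
    return &pc->slots[slot][ea % PAGE];
}
```

The random pointer walks DeanoC worries about are precisely the workload this scheme punishes: sequential scans touch each page once, while chasing pointers across pages pays a full page transfer per miss.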
DeanoC said:
A few years ago Sony said only 4 developers could make games for PS2; we managed to disprove that prediction. Does anybody want a world where the only developers are EA, Sony/Enix, Nintendo and SEGA making sequels to their franchises?
You are right, and I agree with you that even though there are exciting technical challenges, what people want is tons of great-looking games, and for that to happen we need to help developers produce content faster.
I think that with PlayStation 3 some efficiency will be lost: I do not think that, as a game platform, PlayStation 3 is only seeking pure performance per se, to push the highest polygon figures.
No matter how we put it, what programmers want is speed and the ability to optimize (well, at least a good bunch of them like the challenge, as you yourself said), while the producers/artists want an abstracted environment where they can control content production/creation without having to know the console's inner workings in depth, beyond some tips from the programmer ("learn how to use CLUTs and good quantization programs or I will kill you", that kind of thing).
I know the battle will be about content, and what the hardware and software are trying to do next generation is two-fold in regards to performance: they have to set the bar high to allow noticeable improvements in graphics, and they have to set it even higher to allow designers/artists to create the content that will showcase the jump in graphical power between the old generation of consoles and the new one.
CTO: Not yet. But we've settled the basic policy and discussed with some vendors. Contrary to PS2, we will provide library with devoted support like PSP.
This shows a change in focus from Sony itself (the latest patents from IBM's Gschwind seemed to be focused on solving the programming aspects of CELL rather than just going on and on about hardware enhancements).
Any abstraction layer in practical terms means leaving some performance behind and that is why they have to push for such high performance goals in the first place.
Much faster hardware would help designers in level creation and model design, as it would allow them to avoid the lengthy mesh optimization work they have to do in order to fit the processing power constraints of the hardware: do you think the GT4 designers decided to optimize their 4,000-6,000 polygon car models to look like they had 10,000-15,000 polygons (the cars in GT4 look quite comparable to games like WRC 4, RSC 2, PGR 2, etc., which use many more polygons per car) just for fun?
Of course not.
I know that, given very high expectations, modelling time does increase as you try to model everything in minute detail (doing Gollum for LOTR: TTT and LOTR: ROTK did not take 1 or 2 days, but much more time and money). But how can Sony or Microsoft help developers reduce their workload? Beyond allowing very easy integration of your models into your game engine directly from your 3D Studio MAX or SoftImage 3D rendering package, I do not know what Sony and Microsoft can do to help artists create more and better content.
I think Sony knows that middleware will be heavily used in the next generation, and I take that comment I quoted as saying "we do not plan for you to code to the metal, we will give you high-level libraries (edit: please Sony, give them a nice REYES-compatible renderer in the SDK, I am such a broken record), and do not worry: the performance increase will be high enough to support the middleware and the high-level abstraction and to give you the rendering power you need".
On PlayStation 2 they allowed the developer to really go deep, close to the metal, and in some cases they forced them to do things developers were not even used to doing anymore, like clipping in software (as it was inefficient to put in hardware, and the developer would still be able to extract great performance by doing it him/herself).
PlayStation 2 seems to me to have been created with the idea "we will not put high-level roadblocks in your way, so you can achieve close to peak performance without depending on us to constantly improve our libraries".
The tune changed with PSP, and with PlayStation 3 I think we can expect a development environment closer to PSP and PSOne than to PlayStation 2.
To make my idea more concise, it seems that their goal has been "imagine the performance of the next generation of consoles... now imagine that performance achieved with high-level libraries and as little low-level management as possible".
That kind of idea takes much more power under the hood than it would take to create hardware that achieved those performance goals only when programmers put in the effort of doing lots of low-level optimization (which was still a feasible way of doing things with PlayStation 2, but would not work any longer for future generations of consoles): in this sense the design approach behind the PlayStation 2 strategy, from both hardware and programming perspectives, cannot simply be recycled.
To me it is telling that they had all the plans about EE 2 + GS 2, EE 3 + GS 3, their line of Creative Workstations and the GSCube project.
It tells me of an approach that they tried internally, that they studied and analyzed, and they saw that it was just "not enough", that evolving the PlayStation 2 approach was not enough.
They started to realize, probably while working on GSCube, that parallel processing was the key to high performance, but even with their technical know-how they might have needed the help of someone with experience in that field, someone who could have allowed them to achieve even more performance than they could by themselves at the time.
With GSCube the question "how do we get a performance leap?" was answered, but then inside Sony people asked themselves "how are people going to use this performance leap in meaningful ways?", and I think the feedback from the use of GSCube in the professional rendering industry helped (DreamWorks, Squaresoft and others were able to work with the GSCube).
I think that, at that point, SCE knew that more high-level abstraction work had to be done to help developers, and that in order to reach the performance goal they had in mind they also needed a further boost in available hardware processing power.
Sony/SCE and Toshiba needed the help of someone like IBM, who knows how to design parallel architectures (I think Sony started hearing nice things about the BlueGene projects and got very interested in them) and who has important information to help Sony/SCE and Toshiba develop the new manufacturing processes needed to put such architectural concepts into silicon.
When I say "designing parallel architectures" I am not restricting myself to the hardware/system architecture level; I am thinking broadly, about things such as development environments (IDEs), compilers, OSes, programming libraries, debuggers, programming languages and various other tools.
IBM could provide that to Sony/SCE and Toshiba, and that is why the joint venture was started, IMHO (there are other things, such as IBM's aim for the CE market, but that is a topic for another discussion).