Predict: The Next Generation Console Tech

I've never heard of this tool from IBM before -- is it based on Eclipse?

Visual Studio is the gold standard in the software industry, especially games. Everywhere you go, you will see VS, VisualAssist, P4, Incredibuild, etc. Even Sony's tools, like ProDG, integrate into Visual Studio. Why waste time teaching developers new tools when there are already well-established processes and tools used throughout the industry?

I feel IBM's RAD can give Visual Studio a run for its money any day of the week, and I'm not alone according to Evans Data's reports.

http://www.evansdata.com/reports/2008IDE.php?rid=QXJ003
 
Hand them SN's ProDG (which is the better debugger, hands down)

Hmm, didn't realize that ProDG got edit & continue support; when did that happen? :) ProDG definitely isn't as stable as VS either: the debugger gets lost when stepping too quickly through breakpoints, F10 single-stepping takes forever, variables in a debug build aren't viewable for whatever reason, or, like today, ProDG just feels like crashing every single time it's run and you have to go back one rev, or get a custom version from SN, to keep working, etc... SN has been very good on support so far, though; they are usually aware of the issues and usually have a custom build they can give you so that you can keep working. But it's frustrating nonetheless, more so if you're used to VS, which works nicely all day long.
 
The question is what 1 MB of LS per SPE would gain you compared to having either more logic units on the chip or a smaller, cheaper, cooler chip. Will 1 MB be enough, and wanted, to keep a second active thread going and increase efficiency, or will it just be a big lump of prefetched data because the system BW can feed the units fast enough anyway?

Adding more logic units to the SPE is one way to increase processing efficiency. However, 1 MB of LS would also be needed to make good use of 2 GB of XDR2 memory. At this point I'm not sure IBM is still planning to implement hardware threading, or something like Intel's Hyper-Threading Technology, in the CBEA. The latest implementation, the PowerXCell 8i, only improved DP performance for the SPEs.
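
On the bandwidth question above, here is a rough back-of-the-envelope sketch, assuming the commonly quoted current-gen Cell figures (3.2 GHz SPEs, 25.6 GB/s of XDR bandwidth); the numbers for any next-gen part are obviously unknown:

```c
/* Back-of-the-envelope: how long would it take to refill a hypothetical 1 MB
 * LS over the memory interface? Figures below are the commonly quoted
 * current-gen Cell numbers, not next-gen specs. */
#include <stdio.h>

int main(void)
{
    const double clock_hz   = 3.2e9;            /* SPE clock                */
    const double bw_bytes_s = 25.6e9;           /* XDR bandwidth            */
    const double ls_bytes   = 1024.0 * 1024.0;  /* hypothetical 1 MB LS     */

    double fill_s      = ls_bytes / bw_bytes_s; /* ~41 microseconds         */
    double fill_cycles = fill_s * clock_hz;     /* ~131,000 SPE cycles      */

    printf("refilling 1 MB of LS: %.1f us (~%.0f cycles)\n",
           fill_s * 1e6, fill_cycles);
    return 0;
}
```

So a full 1 MB refill costs on the order of 130k cycles of bandwidth at current-gen rates, which is why whether the extra LS buys a second active thread or just a bigger prefetch buffer really does hinge on how the system BW scales.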
 
The next version of CELL, imho, needs to address not just the local store 'problem' (which is not a problem per se..) but the latency problem. While it's true that in many cases access to data can be streamlined, this costs time and makes programs more complex and difficult to debug. They need to add some sort of HW threading to the SPUs..

Possible, but it would be really expensive even to get some SoE (switch-on-event) MT with more than 2 HW threads, as duplicating a 128x128-bit register file many times over would get expensive rather too quickly... Still, I could see 2-way HW threading working even without abandoning the Local Store model for a cache-based hierarchy: 0.5-1 MB of LS, with a portion reserved for a SPURS/MARS-like kernel and shared variables to help with synchronization primitives, and two equally sized chunks of LS dedicated to each HW thread, with the SoE MT logic in the SPU pipeline switching execution thread (between the two HW threads) at each DMA instruction and on other stall conditions.
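
A minimal software sketch of that switching policy, just to make the idea concrete; the LS split, the 64 KB kernel area, and the exact switch conditions are my assumptions, not anything announced:

```c
/* Sketch only: a 2-way switch-on-event MT SPU that swaps the active hardware
 * thread whenever the running thread issues a DMA or stalls. The LS layout
 * (shared kernel area plus two private halves) is an assumption. */
#include <stdbool.h>
#include <stdint.h>

enum {
    LS_TOTAL      = 1024 * 1024,                 /* assumed 1 MB LS             */
    LS_KERNEL     = 64 * 1024,                   /* shared SPURS/MARS-like area */
    LS_PER_THREAD = (LS_TOTAL - LS_KERNEL) / 2   /* private chunk per HW thread */
};

typedef struct {
    uint32_t ls_base;      /* start of this thread's private LS chunk           */
    bool     dma_pending;  /* set while the thread waits on a DMA it issued     */
    bool     stalled;      /* any other stall condition                         */
} hw_thread;

typedef struct {
    hw_thread t[2];
    int       active;      /* index of the thread currently issuing instructions */
} spu_core;

static void spu_init(spu_core *spu)
{
    spu->t[0] = (hw_thread){ .ls_base = LS_KERNEL };
    spu->t[1] = (hw_thread){ .ls_base = LS_KERNEL + LS_PER_THREAD };
    spu->active = 0;
}

/* Called by the (hypothetical) issue logic: switch on event, i.e. when the
 * running thread just issued a DMA or hit a stall and the other one is ready. */
static void maybe_switch(spu_core *spu)
{
    hw_thread *cur   = &spu->t[spu->active];
    hw_thread *other = &spu->t[spu->active ^ 1];

    if ((cur->dma_pending || cur->stalled) && !other->dma_pending && !other->stalled)
        spu->active ^= 1;
}
```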

Still, that could be implemented with cache locking I'd think... so a cache with good locking mechanisms gets two thumbs up from me ;).
 
I can't see any easy way to support multithreading in SPEs and maintain backwards compatibility. Any given snippet of SPU code can use all 256KB of LS, to maintain backwards compatibility you would need to either:
1. Replicate the LS for each context, 512KB for two hardware contexts, 1024KB for 4 contexts.
2. Have a compatibility mode where you're only allowed to use one hardware context, and dice the LS into chunks when running multiple contexts, i.e. 256KB for one context, 128KB per context for two, 64KB per context for four contexts, etc. Local Store size could also be increased.

Option 1) would be an unsightly waste of LS when only one context is running. Option 2) offers more flexibility with regard to the number of hardware threads and local store size.
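
For what it's worth, option 2's dicing is trivial to express; the function below is only an illustration of the partitioning scheme, with parameters picked for the example:

```c
/* Sketch of option 2: each hardware context gets an equal slice of a
 * fixed-size local store (256 KB / 2 = 128 KB, 256 KB / 4 = 64 KB, ...). */
#include <assert.h>
#include <stdint.h>

typedef struct { uint32_t base, size; } ls_slice;

static ls_slice ls_slice_for(uint32_t ls_total, uint32_t contexts, uint32_t id)
{
    assert(contexts > 0 && id < contexts && ls_total % contexts == 0);
    uint32_t per = ls_total / contexts;
    return (ls_slice){ .base = id * per, .size = per };
}
```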

For all intents and purposes the LS is part of the SPE processor core context, thus mucking about with it is tricky.

Cheers
 

Which is why, IMHO, the CELLv2 ISA should allow taking it out of the SPE core context, going for a cache-based design that allows CELLv1 backward compatibility through cache line locking (maybe a custom locking scheme activated by a "compatibility" mode).
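
Purely illustrative sketch of what such a compatibility mode could look like from the address-mapping side, pinning 256 KB of a set-associative cache so legacy code still sees a flat local store; the cache geometry and lock granularity here are invented for the example:

```c
/* Illustration only: present 256 KB of locked cache as a flat legacy LS.
 * Geometry (128 B lines, 8 ways, 512 sets = 512 KB cache) is assumed. */
#include <stdbool.h>
#include <stdint.h>

enum {
    LINE_BYTES  = 128,
    WAYS        = 8,
    SETS        = 512,                             /* 512 * 8 * 128 B = 512 KB */
    LS_BYTES    = 256 * 1024,                      /* legacy LS to emulate     */
    LOCKED_WAYS = LS_BYTES / (SETS * LINE_BYTES)   /* = 4 ways pinned          */
};

/* Map a legacy LS address to the (set, way) holding it while the lock is active. */
static void ls_to_cache(uint32_t ls_addr, uint32_t *set, uint32_t *way)
{
    uint32_t line = ls_addr / LINE_BYTES;  /* which locked line                   */
    *set = line % SETS;                    /* lines interleaved across the sets   */
    *way = line / SETS;                    /* only the first LOCKED_WAYS are used */
}

static bool way_is_locked(uint32_t way) { return way < LOCKED_WAYS; }
```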
 
What are the latencies for accessing the LS?
Because I think they are pretty low, and I can't think of anybody outside of Intel that could provide a cache as fast as the SPE LS.
 

7 cycles for 256 KiB, I think.
It's approximately twice as slow as an Intel L1, and about 40% faster than the L2.
 
IIRC LS latency is 6 cycles. 7 cycles is the latency of many floating point vector operations.
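
Rough numbers behind that comparison, with the Intel latencies being my assumptions for a contemporary Core 2 class part rather than anything quoted above:

```c
/* Rough ratios behind the "twice as slow as L1, faster than L2" remark.
 * The Intel load-to-use latencies are assumptions for a Core 2 class CPU. */
#include <stdio.h>

int main(void)
{
    const double spe_ls   = 6.0;   /* SPE local store load latency, cycles */
    const double intel_l1 = 3.0;   /* assumed L1 latency                   */
    const double intel_l2 = 14.0;  /* assumed L2 latency                   */

    printf("LS vs L1: %.1fx the latency\n", spe_ls / intel_l1);
    printf("LS vs L2: %.0f%% lower latency\n", 100.0 * (1.0 - spe_ls / intel_l2));
    return 0;
}
```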
 
I've never heard of this tool from IBM before -- is it based on Eclipse?

Visual Studio is the gold standard in the software industry, especially games. Everywhere you go, you will see VS, VisualAssist, P4, Incredibuild, etc. Even Sony's tools, like ProDG, integrate into Visual Studio. Why waste time teaching developers new tools when there are already well-established processes and tools used throughout the industry?

I'm not advocating RAD simply because I don't like MSVS or something. I'm being practical...or rather I'm putting myself in Sony's position.

How much would I want to collaborate with MS to build my toolchain and integrate all of them seamlessly? How much would I want MS to know? How much do I want to pay MS to do it?

I'm not being a contrarian for its own sake, and it isn't my hope that anyone should have to learn how to use another IDE. However, given that I don't think MSVS is a prime target for Sony, I'm picking what I feel is the next best option.

If no one else sees a conflict of interest then I'll retract the suggestion to use RAD. What's more important is that things become easier. That's what I'm really arguing.
 
I'm not advocating RAD simply because I don't like MSVS or something. I'm being practical...or rather I'm putting myself in Sony's position.

How much would I want to collaborate with MS to build my toolchain and integrate all of them seamlessly? How much would I want MS to know? How much do I want to pay MS to do it?

Personally? I'd merge PS4 and Xbox 720, let MS design the Game OS and slap the XMB GUI on top :p.

The thought of having two $399 consoles with great exclusives again makes me shiver...
 
Panajev2001a said:
The thought of having two $399 consoles with great exclusives again makes me shiver...
MS solves that problem by putting all of their great exclusives on PC. :oops:

As for toolchains - it reminds me of the situation with 3D packages in some (most?) countries, where the majority of artists (and studios, respectively) refuse to touch anything not made by Autodesk with a ten-foot pole, even though Autodesk's recent quality standards and their attitude are some of the worst I've experienced in the industry.
Just because something is an established platform standard, it doesn't mean it's an established standard for quality.

Not saying MSDev is half as bad (it isn't) - but 2003 did essentially try to abolish all that was positive about MSVC before it, and 2005, while fixing a lot of the problems, has one of the buggiest GUIs I've experienced this side of MSVC 2.0.
But I do hear a lot of good stuff about 2008.
 
If no one else sees a conflict of interest then I'll retract the suggestion to use RAD. What's more important is that things become easier. That's what I'm really arguing.

Sony doesn't have to collaborate or share source with Microsoft to bolt into Visual Studio. They can, and do, develop plugins for VS, and the compiler is easily integrated. Honestly, unless Sony can again command a monopoly in the console market (which is effectively what they had), they will need to work to minimize the pain of cross-platform development, and a big part of that is the toolchain.
 
Not saying MSDev is half as bad (it isn't) - but 2003 did essentially try to abolish all that was positive about MSVC before it, and 2005, while fixing a lot of the problems, has one of the buggiest GUIs I've experienced this side of MSVC 2.0.
But I do hear a lot of good stuff about 2008.

I actually have .NET, 2005, and 2008 installed at the moment due to the variety of projects expecting one version or another, but my last project was all in 2005, with a transition to 2008 towards the end. So far, VS 2008 looks like a decent improvement. The biggest problems I had with 2005 mostly revolved around P4 plugin problems, infinibuild(TM), and slow intellisense.
 