Remember this post by Tommy McClain about XBox2?

AzBat said:
Personally I can't see how they can release a successful console if the public perceives the Xbox2 as not being as fast as or faster than the PS3. If it hadn't been for the better performance of the current Xbox, I seriously doubt Microsoft would have sold as many as they have.

Tommy McClain

True. Adding to that, I don't see them holding up too well if they can't release their console within the same timeframe and stop history repeating itself. That puts Microsoft in a very difficult position...
 
Phil said:
True. Adding to that, I don't see them holding up too well if they can't release their console within the same timeframe and stop history repeating itself. That puts Microsoft in a very difficult position...

I agree as well. At the beginning of the year I sent a list of features and goals for Xbox2 to one of my contacts, which was forwarded to the right people in the Xbox division. Of my two biggest goals, the first was the one I mentioned earlier and the second was the one you just mentioned. I think those are the most important. It will be interesting to see what happens.

BTW, I sent a similar list to Sony for the PS2 a couple of years before it shipped, and if I remember correctly they met a few of my features and goals, although it wasn't like they hadn't been discussed before. ;)

Tommy McClain
 
MfA said:
AzBat, if that is true I have a good guess as to the Xbox2 case: it is going to be one huge heatsink.

In theory some of the smaller x86 cores with SIMD have a decent watts/area/computation balance per core, but they just don't have the time to design a multi-core x86 part with lots of cores (although they might be able to get a dual core from AMD, if AMD already has one in development). It would need too much R&D. The best they can do is take the most monstrous P4 derivative Intel has to offer ... realistically they cannot afford that though, neither in the budget for manufacturing cost nor in the budget for power consumption. In peak performance it will almost certainly still lag a massively parallel design by something like an order of magnitude.

They are tied to existing cores with minimal modification; unless Sony fucks it up again, they will be FUBAR.

That's why I think they'll use a dual A64 from AMD. It runs relatively cool compared to the P4 Prescott and beyond, and it's pretty powerful too. AMD shouldn't have fab problems since they are basically merging their fabs with IBM's (tech-wise, at least).
 
I wish some analyst would ask them how they expect to compete with an area-efficient design when superscalar processors have been spending ungodly amounts of area for minimal CPI improvements for nigh on a decade, and when the lack of eDRAM increases the power needed for memory interfacing by an order of magnitude while decreasing available bandwidth by more than that.

I don't understand what Microsoft is thinking; they could not have been spending the amount of money needed to develop competitive CPU tech without telling their stockholders. Unless someone has been doing some stealth R&D on massively parallel general-purpose processors, I just don't see any option for them to compete on this front ... as I said long ago, the only people I think would have had the balls to do this are NVIDIA, or maybe AMD, or both, but without Microsoft fronting a couple of hundred million dollars I don't see them having done it.

Their lack of eDRAM will probably again mean their specs can't rival Sony's in GigaPixels, unless they support something ridiculous like 16x multisampling and report it in GigaSamples or something (it will help the numbers, but won't do much for games). This time they cannot make up the difference by supporting better shading, IMO. Especially for things like shadow maps this will hurt them in games (a tiler could even the odds, BTW ;)). They might be able to make up a bit of the shortfall in processing power with the shaders, but I find it doubtful ... and that still leaves them without the power to compete in physics/AI/etc.

What we have is a console that will find it nearly impossible to compete on general-purpose processing and rasterization if Sony did their job well this time.

Marco
 
Why is it expected that they'll go for some ungodly CPU? Today's vertex shaders can cope with virtually all the vertex handling locally now - the CPU shouldn't be needed for any 3D processing. I'd find the rumour of a Pentium-M (or something equivalent at the timescales being talked about) more probable.
 
MfA said:
I wish some analyst would ask them how they expect to compete with an area-efficient design when superscalar processors have been spending ungodly amounts of area for minimal CPI improvements for nigh on a decade, and when the lack of eDRAM increases the power needed for memory interfacing by an order of magnitude while decreasing available bandwidth by more than that.

I don't understand what Microsoft is thinking; they could not have been spending the amount of money needed to develop competitive CPU tech without telling their stockholders. Unless someone has been doing some stealth R&D on massively parallel general-purpose processors, I just don't see any option for them to compete on this front ... as I said long ago, the only people I think would have had the balls to do this are NVIDIA, or maybe AMD, or both, but without Microsoft fronting a couple of hundred million dollars I don't see them having done it.

Their lack of eDRAM will probably again mean their specs can't rival Sony's in GigaPixels, unless they support something ridiculous like 16x multisampling and report it in GigaSamples or something (it will help the numbers, but won't do much for games). This time they cannot make up the difference by supporting better shading, IMO. Especially for things like shadow maps this will hurt them in games (a tiler could even the odds, BTW ;)). They might be able to make up a bit of the shortfall in processing power with the shaders, but I find it doubtful ... and that still leaves them without the power to compete in physics/AI/etc.

What we have is a console that will find it nearly impossible to compete on general-purpose processing and rasterization if Sony did their job well this time.

Marco


This is why, at one time, I thought MS should use several parallel GPUs in Xbox2 to counter the advantage Sony would have with their Cell CPU, assuming MS did not use a massively powerful CPU.
 
DaveBaumann said:
Why is it expected that they'll go for some ungodly CPU? Today's vertex shaders can cope with virtually all the vertex handling locally now - the CPU shouldn't be needed for any 3D processing. I'd find the rumour of a Pentium-M (or something equivalent at the timescales being talked about) more probable.


They need a powerful CPU to handle all the non-graphics-related tasks, such as physics and AI. Cell has 1 teraflop worth of computing power. That's on top of the computing power of the "APUs" in the "Visualizer" dedicated to graphics (I'm assuming you have seen the Cell patent).
 
So, you're going to use 1 teraflop of power for physics and AI? IMO, for most game developers that will be overkill for many, many years to come - they won't know what to do with it.
 
DaveBaumann said:
So, you're going to use 1 teraflop of power for physics and AI? IMO, for most game developers that will be overkill for many, many years to come - they won't know what to do with it.

I don't think you understand how the architecture works. If you happen to have free time some rainy day:

http://appft1.uspto.gov/netacgi/nph...1=AND&TERM2=broadband+networks&FIELD2=&d=PG01

The architecture has few arbitrary restrictions on where processing takes place (locality isn't even necessary, in fact, but that's another story) and on what is processed where - basic raster/sampling functions are obviously an exception. The numbers people use are taken from the Preferred Embodiment section. It promises to be... unconventional, if your mind can forget about the current PC paradigm.
 
Well - is it as bbot suggests, that there is no 3D functionality, or is there some?

By the time these processors are about, all 3D processing can be handled by the 3D processor (3DMark03 proves that relatively complicated scenes, with character animation, can be done by the 3D board), leaving the CPU to control the bones, physics, AI and various other bits of housekeeping. I somehow think dual AMD64s would be overkill for this.
 
Physics is possibly the only thing that can suck up more computational power than rendering.

Games currently use inverse kinematics to a fairly limited degree.

Imagine games using hydro codes for real water, instead of those gimmicky FFT/shallow-water-code waves. And turbulent flow codes to make that rocket launcher smoke trail look just right.
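
To put the gap in perspective: the "gimmicky" wave stuff is basically one relaxation pass over a height field per frame. A minimal toy sketch of that trick (grid size, damping and the initial "pebble" are made-up illustration values, not from any shipping engine):

```c
/* Toy height-field water: the cheap "gimmicky" wave trick, not real
   hydrodynamics.  One relaxation pass per step over an N x N grid. */
#include <stdio.h>

#define N    64       /* grid resolution (illustrative) */
#define DAMP 0.99f    /* energy loss per step (illustrative) */

static float h_prev[N][N], h_cur[N][N], h_next[N][N];

static void step_heightfield(void)
{
    for (int y = 1; y < N - 1; y++)
        for (int x = 1; x < N - 1; x++) {
            /* the four-neighbour average drives the discrete wave equation */
            float nb = h_cur[y][x - 1] + h_cur[y][x + 1] +
                       h_cur[y - 1][x] + h_cur[y + 1][x];
            h_next[y][x] = (nb * 0.5f - h_prev[y][x]) * DAMP;
        }

    /* rotate buffers: current -> previous, next -> current */
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++) {
            h_prev[y][x] = h_cur[y][x];
            h_cur[y][x]  = h_next[y][x];
        }
}

int main(void)
{
    h_cur[N / 2][N / 2] = 1.0f;          /* drop a "pebble" in the middle */
    for (int i = 0; i < 100; i++)
        step_heightfield();
    printf("centre height after 100 steps: %f\n", h_cur[N / 2][N / 2]);
    return 0;
}
```

A real hydro code has to do advection, pressure solves and so on for every cell, every step - orders of magnitude more work than this, which is exactly where a FLOP-heavy design earns its keep.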

I hope one of the reasons M$ is licensing IP from ATI is so that they can roll their own SoC with plenty of (general-purpose) power.

Cheers
Gubbi
 
Do you expect games developers and/or middleware physics engines to make the leap to utilising this computational power for in-game physics in the next two years and onwards?
 
DaveBaumann said:
Do you expect games developers and/or middleware physics engines to make the leap to utilising this computational power for in-game physics in the next two years and onwards?

No. But we're talking about consoles that debut in two years, with expected lifetimes out to around 2010. They'd better be forward-thinking.

Stuffing a Pentium-M in there just doesn't cut it, IMO.

Cheers
Gubbi
 
DaveBaumann said:
Well - is it as bbot suggests, that there is no 3D functionality, or is there some?

There is dedicated logic; how much is currently unknown and all speculation.

I've suggested that they're merely for highly iterative yet "dumb" tasks like sampling and filtering, with the bulk of the 3D processing being non-PC-centric in nature and done on the MPU (1 TFLOP/eDRAM, supposedly) and the Visualizer (0.5 TFLOP/eDRAM + hardwired raster).

We've had discussions on REYES/micropolygon-style rendering, and a few discussions on running shader programs on APUs. Stuff like that. This is what I mean by non-PC-centric. When you remove the typical PC bounds, you can do a lot more, a lot differently. I still hate pixel shading, and am waiting for someone to tell me why the concept of fragment shading wouldn't border on useless if you have 10 micropolygons per pixel.

To say that developers wouldn't know what to do with the computational power... is something I don't understand. I've already heard a developer here chat about ideas that I've never heard in relation to a PC; an ex-programmer from Naughty Dog, IIRC, wrote a piece on a micropolygon approach that was discussed here.

By the time these processors are about, all 3D processing can be handled by the 3D processor (3DMark03 proves that relatively complicated scenes, with character animation, can be done by the 3D board), leaving the CPU to control the bones, physics, AI and various other bits of housekeeping. I somehow think dual AMD64s would be overkill for this.

You didn't read the patent, huh? MfA also had some excellent posts on this, which have already addressed the AMD64 comment you made. With all due respect, I don't think an AMD64 would stand a chance one-on-one.
 
I don't think an AMD64 would be overkill; I think it would be the wrong architecture.

The Athlon/P3/P4 architectures spend a large chunk of their transistors on getting a single thread of sequential code to run fast (Hyper-Threading adds little to throughput, IMO).

However, it is clear that the bulk of non-rendering computational power will be spent on physics. Physics codes are typically just glorified matrix solvers, with large amounts of inherent parallelism.

The two things you need here are lots of bandwidth and lots of computational oomph (FLOPS). CELL has both. However, CELL is limited in other ways. First, each PE has a very limited amount of local memory. Second, data is passed around in chunks (i.e. random access is coarse-grained).

A matrix solver on CELL is thus coded similarly to a matrix solver on big clusters, only here the nodes are smaller and we have tonnes more bandwidth. The trade-offs are similar, however. We can trade bandwidth for computation (blocked Cholesky) and get a decent dense matrix solver out of CELL. CELL will suck on sparse matrices, and people will likely use brute-force dense solvers on sparse matrices (*ugh*).
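
For the curious, the dense kernel I keep referring to is nothing exotic. A minimal, unblocked Cholesky sketch (the 4x4 test matrix is just a made-up SPD example; compile with -lm):

```c
/* Dense Cholesky factorisation A = L*L^T, done in place on the lower
   triangle.  Unblocked for clarity; a CELL-style machine would tile it
   and DMA each block into a PE's local store. */
#include <math.h>
#include <stdio.h>

#define N 4

static int cholesky(double a[N][N])
{
    for (int j = 0; j < N; j++) {
        double d = a[j][j];
        for (int k = 0; k < j; k++)
            d -= a[j][k] * a[j][k];
        if (d <= 0.0)
            return -1;                    /* not positive definite */
        a[j][j] = sqrt(d);

        for (int i = j + 1; i < N; i++) {
            double s = a[i][j];
            for (int k = 0; k < j; k++)
                s -= a[i][k] * a[j][k];
            a[i][j] = s / a[j][j];        /* column j of L */
        }
    }
    return 0;
}

int main(void)
{
    /* small symmetric positive-definite example (diagonally dominant) */
    double a[N][N] = {
        { 4, 1, 0, 0 },
        { 1, 4, 1, 0 },
        { 0, 1, 4, 1 },
        { 0, 0, 1, 4 },
    };
    if (cholesky(a) == 0)
        printf("L[3][3] = %f\n", a[3][3]);
    return 0;
}
```

The point is the n^3 flops against n^2 data: a blocked version reuses each tile many times, so you can stream blocks through a PE's local store and keep the FLOPS busy. A sparse solver loses that regularity, hence my fear of brute-force-dense on sparse problems.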

M$ needs an architecture with lots of bandwidth and lots of functional units. They could go with emulation to get backwards compatibility. I suggested some candidates here.

Cheers
Gubbi
 
I'm not sure if this is applicable, but at:

http://www.gamespy.com/quakecon2003/carmack/

John Carmack talks about "decomposing Pixar RenderMan shaders into multipasses". Isn't RenderMan a REYES renderer? Is he saying that you can use traditional VS/PS and get almost the same result as the micropolygon approach? If so, what is your opinion? Do you agree with him?
 
Is he saying that you can use traditional VS/PS and get almost the same result as the micropolygon approach?

It's already happening. Look at ATI's RenderMonkey and other tools and utilities like it. I'd imagine the Exluna guys are working on it for NVIDIA's toolset as well. This is also the fundamental difference between Sony's approach and the rest of the 3D world's so far - Sony, it appears, will create detail by increasing the poly counts, whereas the rest of the industry is creating detail at the fragment level itself; tools like RenderMonkey will make it easier to take RenderMan code and replicate it on shader hardware.

WRT the CPU discussion, all I'm wondering is whether you have really considered what computational power is left over with contemporary processors once the 3D elements are removed from their workload. Ace's Hardware showed how CPU-invariant 3DMark03 is between a PII 350 and a P4 2.8GHz, and that includes the use of the Havok physics engine. It would appear to me that there is plenty of spare capacity there when 3D rendering is removed.
 
DaveBaumann said:
WRT the CPU discussion, all I'm wondering is whether you have really considered what computational power is left over with contemporary processors once the 3D elements are removed from their workload. Ace's Hardware showed how CPU-invariant 3DMark03 is between a PII 350 and a P4 2.8GHz, and that includes the use of the Havok physics engine. It would appear to me that there is plenty of spare capacity there when 3D rendering is removed.

I believe that was for the rendering-heavy subtests only. Look at the ragtroll test: there the 1.4GHz Celeron is more than 5 times faster than the 350MHz PII (both with an R9700 Pro).

I'm not saying that TEH CRAZY physics will be necessary in games coming out within 2-3 years. But next-gen consoles are meant to last until 2010, and by then I want to completely dismantle the buggy in HL2 with a minigun and see every single piece jump around naturally.
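
To make the scaling point concrete, here's a toy free-flight integrator for the debris (piece count, timestep, velocities and restitution are pulled out of thin air, and it deliberately ignores piece-vs-piece collisions, which is where the real cost explodes):

```c
/* Toy debris update: point masses only, no rotation, no piece-vs-piece
   collisions.  Every piece is independent, so the work is linear in
   piece count and embarrassingly parallel. */
#include <stdio.h>

#define PIECES 10000
#define DT     0.016f             /* ~60 Hz step (illustrative) */

struct piece { float px, py, pz, vx, vy, vz; };
static struct piece debris[PIECES];

static void integrate(struct piece *p, int n)
{
    for (int i = 0; i < n; i++) {
        p[i].vy -= 9.81f * DT;            /* gravity */
        p[i].px += p[i].vx * DT;
        p[i].py += p[i].vy * DT;
        p[i].pz += p[i].vz * DT;
        if (p[i].py < 0.0f) {             /* crude ground bounce */
            p[i].py  = 0.0f;
            p[i].vy *= -0.5f;
        }
    }
}

int main(void)
{
    for (int i = 0; i < PIECES; i++) {    /* scatter the pieces a bit */
        debris[i].py = 2.0f;
        debris[i].vx = (float)(i % 7) - 3.0f;
        debris[i].vy = 5.0f;
        debris[i].vz = (float)(i % 5) - 2.0f;
    }
    for (int step = 0; step < 120; step++)
        integrate(debris, PIECES);
    printf("piece 0 height after ~2 s: %f\n", debris[0].py);
    return 0;
}
```

Feed something like that (plus the collision work this sketch skips) enough FLOPS and bandwidth, and "every single piece" stops being a silly wish.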

Cheers
Gubbi
 
Gubbi said:
I'm not saying that TEH CRAZY physics will be necessary in games coming out within 2-3 years. But next-gen consoles are meant to last until 2010, and by then I want to completely dismantle the buggy in HL2 with a minigun and see every single piece jump around naturally.

Cheers
Gubbi


HEH, that would be good...
 
Isn't RenderMan a REYES renderer? Is he saying that you can use traditional VS/PS and get almost the same result as the micropolygon approach? If so, what is your opinion? Do you agree with him?

He said RenderMan shaders, not RenderMan. Even those had to be broken into multiple passes.

As long as you don't go for the image fidelity and scene complexity required in movies, going the OGL vertex-and-fragment route should give better performance than REYES micropolygons.
 