Next gen consoles: hints?

The consoles are definitely holding you back there. That's what happens when your largest demographic is on consoles. Still off topic though.

It seems to me that Sony and Microsoft are heading in completely different directions in designing their next-gen consoles. It looks as if Sony is going the high-performance CPU route with Cell technology. I'm not going to get into specifics because that is an entirely different thread, but Sony definitely wants a beefy CPU. I have no clue what's going on behind the scenes on the graphics side of things. I just hope more info or an official press release comes along soon so we'll have a better understanding of things.

Microsoft's first attempt at a console is impressive in terms of hardware. They took PC hardware and overcame the bottlenecks of the time to make a machine that still has pretty good graphics today. They took proven parts that they had experience with and put them on a board with some really nice things in it. Next gen, I fully expect them to follow up with a very nice GPU (R500 or a derivative). What they're contracting IBM for regarding the CPU, I'm not sure. It could just be to use IBM's facilities, or it could be to have IBM and MS come up with a CPU that's better for gaming than whatever x86 is out there at the time. Or it could be based on x86 but just manufactured by IBM.

The question I would like answered is: with hardware being developed in a similar time frame and launching close to one another, could one piece of hardware be substantially more powerful than the other? If not, then it doesn't matter. Sony will probably end up with at least 50% of the market share. MS will either increase or decrease market share; it all depends on what Nintendo has up their sleeve and whether they wish to compete all-out next gen.
 
DaveBaumann said:
Vince said:
If any of the consoles is going to be a large leap forward, it'll be the PS3. The other architectures have too much legacy, too much linear thinking/progression from what we've heard (e.g. ATI/DirectX Next).

Tell us, in your informed opinion - where is this linear thinking/progression actually lacking? How is progression from what's known and accepted a bad thing? Surely the unknown also has the largest risks?

Sure. It's lacking in several regards, as our best-known information points towards a design which still utilizes discrete computation blocks distributed in a more PC-centric manner (e.g. CPU, GPU, SPU). I admit this is an assumption, but I feel it's a reasonable one at this time, given the companies involved and the feasibility of integrating the multiple formats and IPs into a coherent entity similar to, say, Cell. Even with the closed-box ability for MS to do what Akira stated, you're never going to end up with a system that's even remotely close to Fig. 6 in the SCE patent or to what's referred to in the Sony/Toshiba/Rambus agreement.

This is the most fundamental problem, IMHO. Even if ATI can match Sony's shading potential, it won't catch the PS3 as a system. MS's DirectX Next alone is just a linear advancement over what you have in DX9 or OGL2; at most it's a mediocre step towards the programmability you'll have on an APU/VU. And this is only examining it on a microarchitectural scale.

Much of the talk of DXnxt/NV50/R500 centers around the concept of unified computational resources - yet it's still more restrictive than what Sony promises, and it isn't comparable when you look at it from a system level.
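To make the unified-resources idea concrete, here's a toy C++ sketch (my own illustration - the unit counts and workloads are invented, and no real chip works this simply) of why one pool of units can beat a fixed vertex/pixel split when the work mix swings between frames:

#include <algorithm>
#include <cstdio>

// Toy model: a frame produces vertex work and pixel work, measured in
// abstract "unit-cycles". Fixed-function units only run their own kind
// of work; unified units run whatever is outstanding.
static int cyclesFixed(int vertexWork, int pixelWork,
                       int vertexUnits, int pixelUnits) {
    // Each side drains independently; the frame finishes when the
    // slower side does. Idle units on one side cannot help the other.
    int v = (vertexWork + vertexUnits - 1) / vertexUnits;
    int p = (pixelWork + pixelUnits - 1) / pixelUnits;
    return std::max(v, p);
}

static int cyclesUnified(int vertexWork, int pixelWork, int units) {
    // All units pull from a single pool, so only the total matters.
    return (vertexWork + pixelWork + units - 1) / units;
}

int main() {
    // Made-up work mixes: a pixel-heavy frame and a vertex-heavy one.
    std::printf("pixel-heavy : fixed %d, unified %d cycles\n",
                cyclesFixed(100, 900, 4, 4), cyclesUnified(100, 900, 8));
    std::printf("vertex-heavy: fixed %d, unified %d cycles\n",
                cyclesFixed(900, 100, 4, 4), cyclesUnified(900, 100, 8));
    return 0;
}

In both lopsided frames the fixed split leaves half its units idle (225 cycles) while the unified pool drains the same work in 125; that load-balancing argument is the usual case made for unified designs.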

How is progression from what's known and accepted a bad thing?

Personally, I find this to be a boring aspect. Ultimately it's protected from defeat by the fact that it's the conservative strategy, but if you look at the history of human knowledge and advancement you'll find that although the vast majority of discoveries were of your linear type, they never yielded the kind of paradigm shift that someone like Democritus, Kepler/Newton, Frege/Russell or Einstein (among countless others) did.

As Gordon Parks once said, "The guy who takes a chance, who walks the line between the known and unknown, who is unafraid of failure, will succeed."

Why an enthusiast would prefer a company to take the conservative path (and then defend it as a consumer) amuses me personally, but to each his own, I suppose.

And when new technology is given to developers this is something that's quite desirable - the least amount of effort for them to learn something new can often pay great dividends, and then they can explore the limits from there.

I think PlayStation2, from 2001 onward, has effectively discredited this mentality, which never really held any weight to begin with. Developers, like the rest of us, are paid entities/employees. We get paid to do a given job, regardless of the task assigned. If a publisher wants a game on platform N, my guess is that if the developer wants to eat, he's going to conform.

As a consumer, I don't care what a developer has to go through to earn my $50.
 
akira888 said:
These are the significant legacy bottlenecks on a PC, and how the Xbox1 overcame them:
1) Poor memory bus - a 256-bit DDR bus @ 100 MHz, something not allowable on a PC due to the wider tolerances needed in an open platform, but quite possible in a closed box.
2) Lack of CPU access to video memory - Unified Memory Architecture.
3) Only high-level access to coprocessors - low-level push buffers (sketched below).
That's about it. What's a PC holdover that holds the Xbox back? I can't think of any.
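A quick illustration of point 3, since "push buffer" is jargon: the CPU writes command words straight into a ring buffer the GPU reads from, instead of going through a thick driver layer. This is a minimal sketch with invented opcodes, not NV2A's real command format:

#include <cstdint>

// Hypothetical command ring shared between CPU and GPU. On a UMA
// console both sides see the same physical memory, so "submitting"
// work is a couple of stores, not a round trip through a driver.
struct PushBuffer {
    uint32_t words[4096];        // command storage, used as a ring
    volatile uint32_t put = 0;   // CPU write index
    volatile uint32_t get = 0;   // GPU read index (advanced by hardware)

    void push(uint32_t w) {
        uint32_t next = (put + 1) % 4096;
        while (next == get) { }  // ring full: wait for the GPU to drain
        words[put] = w;
        put = next;              // GPU polls this to fetch new commands
    }
};

// Invented opcodes, purely to show the shape of a low-level stream.
enum : uint32_t { CMD_SET_TEXTURE = 0x01000000u, CMD_DRAW_TRIS = 0x02000000u };

void submitDraw(PushBuffer& pb, uint32_t texAddr, uint32_t triCount) {
    pb.push(CMD_SET_TEXTURE | 1u);  // opcode | operand count
    pb.push(texAddr);
    pb.push(CMD_DRAW_TRIS | 1u);
    pb.push(triCount);
}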

I was referring to the next generation. Specifically, I view the ability to share computational resources throughout the system as a key facet; this is something which is only just happening in 2003 on the PC. I'd expect it to become pronounced next generation, once we're clear of the set-piece mentality of pre-DX Next architectures and programming.

Bottom line - the Xbox chipset has an extremely similar silicon cost to PS2's chipset, and yet which chipset is vastly superior?

Which system launched 2 years later? Your entire argument is nullified wrt performance/features as related to silicon/area costs when you factor in the temporal lag and the density advancement described by Moore's Law.

The real question is, "If Sony were designing the GS (area constant) to 150nm rules, which would be superior?" Alternately, "If nVidia were designing the XGPU on the 250/220nm process, who would be superior?"
 
Microsoft's first attempt at a console is impressive in terms of hardware. They took PC hardware and overcame the bottlenecks of the time to make a machine that still has pretty good graphics today. They took proven parts that they had experience with and put them on a board with some really nice things in it.

Some nitpicking, but I think we should give credit where it's due here: you should replace "Microsoft" with "Nvidia".
 
Vince said:
jvd said:
A big leap from last-gen console hardware, but not from this year's PC hardware.

I am not correcting you, merely asking why you believe this to be the case. Why can't, say, PS3 be a large leap up from the R350 and NV35? If any of the consoles is going to be a large leap forward, it'll be the PS3. The other architectures have too much legacy, too much linear thinking/progression from what we've heard (e.g. ATI/DirectX Next).

I just don't believe it will be able to do much more than the R500 will. Maybe more polygons with more lights, but it won't be able to do as many effects. Not only that, but its AA and aniso will be lacking compared to the R500. So there will be a trade-off. If the Xbox 2 can do 150 million polygons a second with 8 lights, maybe the PS3 can do 225. But it will lose the aniso and FSAA comparisons, and to me that puts them on equal footing. I don't believe the rendering power is there yet.
 
jvd said:
The new Nintendo system will most likely be the value version of whatever the R500 turns out to be, with an extremely fast PowerPC chip. It will also have a die-shrunk GameCube put into the system. It will be able to read the GameCube discs and special DVDs.


At least that's what I've been told. Dunno how true it is.

Are you sure it's not going to be a version of the R350 core?
 
Legion said:
jvd said:
The new Nintendo system will most likely be the value version of whatever the R500 turns out to be, with an extremely fast PowerPC chip. It will also have a die-shrunk GameCube put into the system. It will be able to read the GameCube discs and special DVDs.


At least that's what I've been told. Dunno how true it is.

Are you sure it's not going to be a version of the R350 core?

Well, it's my understanding that the R500 is now something different than it was 3 months ago, and it's going to be in the new Nintendo system, with MS getting something between the R550 and R600.
 
Sonic said:
What they're contracting IBM for regarding the CPU, I'm not sure. It could just be to use IBM's facilities, or it could be to have IBM and MS come up with a CPU that's better for gaming than whatever x86 is out there at the time. Or it could be based on x86 but just manufactured by IBM.

Hmm, might not be one CPU. ;)
 
jvd said:
Legion said:
jvd said:
The new Nintendo system will most likely be the value version of whatever the R500 turns out to be, with an extremely fast PowerPC chip. It will also have a die-shrunk GameCube put into the system. It will be able to read the GameCube discs and special DVDs.


At least that's what I've been told. Dunno how true it is.

Are you sure it's not going to be a version of the R350 core?

Well, it's my understanding that the R500 is now something different than it was 3 months ago, and it's going to be in the new Nintendo system, with MS getting something between the R550 and R600.

Should we even expect an R400 or R450?
 
Vince said:
I was referring to the next generation. Specifically, I view the ability to share computational resources throughout the system as a key facet; this is something which is only just happening in 2003 on the PC. I'd expect it to become pronounced next generation, once we're clear of the set-piece mentality of pre-DX Next architectures and programming.

No significant amount of work can be accomplished through data transfer over a backboard bus; they're just too narrow.
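Rough peak-bandwidth arithmetic supports this - the commonly quoted figures (peaks, not sustained rates), computed from bus width and transfer rate:

#include <cstdio>

// Peak bandwidth in GB/s: (bus width in bits / 8) * transfers per second.
static double gbPerSec(double widthBits, double megaTransfers) {
    return widthBits / 8.0 * megaTransfers * 1e6 / 1e9;
}

int main() {
    std::printf("AGP 8x   (32-bit @ 533 MT/s):  %4.1f GB/s\n", gbPerSec(32, 533));
    std::printf("EE -> GS (64-bit @ 150 MHz):   %4.1f GB/s\n", gbPerSec(64, 150));
    std::printf("GS eDRAM (2560-bit @ 150 MHz): %4.1f GB/s\n", gbPerSec(2560, 150));
    return 0;
}

An AGP-style expansion bus comes out more than an order of magnitude narrower than a local eDRAM bus, which is the point being made.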

Which system launched 2 years later? Your entire argument is nullified wrt performance/features as related to silicon/area costs when you factor in the temporal lag and the density advancement described by Moore's Law.

The real question is, "If Sony were designing the GS (area constant) to 150nm rules, which would be superior?" Alternately, "If nVidia were designing the XGPU on the 250/220nm process, who would be superior?"

Compare this: (fair given the same litho)
0.18um Pentium III @ 108mm2 with 0.22um Geforce @ ~150mm2
0.18um EE @ 240mm2 with 0.25um GS @ 279mm2

I know it's not 100 percent fair (250nm vs 220nm) but still, that is a wide gap in die size (over 2x) that doesn't exactly show up on the screen in any way, shape, or form.
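For what it's worth, the die sizes can be normalized to one process using the idealized rule that area scales with the square of the feature size (real shrinks rarely achieve this, so treat it as a sketch):

#include <cstdio>

// Idealized shrink: linear dimensions scale with the process ratio,
// so area scales with its square.
static double scaledArea(double areaMM2, double fromNM, double toNM) {
    double r = toNM / fromNM;
    return areaMM2 * r * r;
}

int main() {
    // Figures from the post above, normalized to a 180 nm process.
    double geforce = scaledArea(150, 220, 180); // -> ~100 mm2
    double gs      = scaledArea(279, 250, 180); // -> ~145 mm2
    std::printf("PIII + GeForce: %.0f mm2 total\n", 108 + geforce);
    std::printf("EE   + GS:      %.0f mm2 total\n", 240 + gs);
    return 0;
}

Normalized this way the totals come to roughly 208 mm2 vs 385 mm2, so the gap narrows from over 2x to about 1.8x but doesn't disappear.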
 
Legion said:
jvd said:
Legion said:
jvd said:
The new Nintendo system will most likely be the value version of whatever the R500 turns out to be, with an extremely fast PowerPC chip. It will also have a die-shrunk GameCube put into the system. It will be able to read the GameCube discs and special DVDs.


At least that's what I've been told. Dunno how true it is.

Are you sure it's not going to be a version of the R350 core?

Well, it's my understanding that the R500 is now something different than it was 3 months ago, and it's going to be in the new Nintendo system, with MS getting something between the R550 and R600.

Should we even expect an R400 or R450?
I never said my sources were correct. I was told it's based off the R500, which would fit with the timing, i.e. Xmas of 2005. I'm told the Xbox 2 will launch in 2006 with something between the R550 and R600.

Dunno the specs on either of the chips.
 
akira said:
I know it's not 100 percent fair (250nm vs 220nm) but still, that is a wide gap in die size (over 2x) that doesn't exactly show up on the screen in any way, shape, or form.
You are talking about a GF1? Ok, I'll bite, this is based on what?
 
Sure. It's lacking in several regards, as our best-known information points towards a design which still utilizes discrete computation blocks distributed in a more PC-centric manner (e.g. CPU, GPU, SPU). I admit this is an assumption, but I feel it's a reasonable one at this time, given the companies involved and the feasibility of integrating the multiple formats and IPs into a coherent entity similar to, say, Cell.

Again - how are discrete blocks actually an issue (if this is what's to happen)? As I explained, DirectX can already effectively limit the required processing to those blocks, and this is only extended further with DX10.

However, surely even the Cell-based system will have some discrete elements (i.e. processor and raster back end).

Even if ATI can match Sony's shading potential, it won't catch the PS3 as a system.

At the moment I'd say it would be tough for Sony to reach ATI's fragment shading potential.

MS's DirectX Next alone is just a linear advancement over what you have in DX9 or OGL2; at most it's a mediocre step towards the programmability you'll have on an APU/VU. And this is only examining it on a microarchitectural scale.

Again, you come out with such things, Vince, but where is your expertise to make such statements? You've not even really seen what DX9 can do yet, and you write off DX10 as "mediocre". These are baseless words.

Much of the talk of DXnxt/NV50/R500 centers around the concept of unified computational resources - yet it's still more restrictive than what Sony promises, and it isn't comparable when you look at it from a system level.

No, it's different from what Sony promises. No doubt Sony's solution will be restrictive in other areas.

Personally, I find this to be a boring aspect. Blah, blah, blah

I didn't ask if you find it boring; I asked how this was an issue, something which you failed to answer.

How this is conservative, I don't really know - remember, Sony's route appears to be more traditional in the shading sense. The route that consumer graphics/DX have taken in terms of high-powered fragment shading is the one that's deviating from the traditional path.

I think PlayStation2, from 2001 onward, has effectively discredited this mentality, which never really held any weight to begin with.

Developers had already had plenty of time to gain some understanding of PS2 by the time competing hardware came out. But then, look at the arguments of DC vs PS2 back when it was released.
 
Vince said:
if you look at the history of human knowledge and advancement you'll find that although the vast majority of discoveries were of your linear type, they never yielded the kind of paradigm shift that someone like Democritus, Kepler/Newton, Frege/Russell or Einstein (among countless others) did.
Many others have taken unconventional approaches. You don't hear of them, because they were wrong. When an unconventional approach works, it can work big. When it fails, it vanishes into obscurity.

Failure is, of course, more common than success.
 
All I want is a DECENT output system that lets you choose what kind of display you want to play on. Automatic detection would be nice - as in, if I plug in a VGA cable it just switches over (please Sony/MS/Nintendo, put VGA/DVI out... pretty please).
The dashboard should have a list of options like:
1) Monitor (in which case the resolution would automatically be the highest - I guess 1024x768, which would use the same settings as 720p on pro-scan TVs)
2) Interlaced TV (lame-ass 480i - set by default for everyone)
3) Pro-scan TV (with a list of resolutions to choose from, from 480p to 1080i/p)


That's all I want.
I guess DD/DTS will be the standard since it's already in almost all Xbox games, so I'm not worrying about that.
 
london-boy said:
All I want is a DECENT output system that lets you choose what kind of display you want to play on. Automatic detection would be nice - as in, if I plug in a VGA cable it just switches over (please Sony/MS/Nintendo, put VGA/DVI out... pretty please).

HDTV standards will be supported when the MAJORITY of customers have them. Hi-res uses RAM and fill rate, and if most people aren't using it, why lower the quality for everybody else?

A console is a fixed platform; we try to use all the resources. If I can choose more particles for everybody, or fewer particles but high res for a few, which do you think gets picked?

It's not like plugging an HDTV in gives us extra RAM for the framebuffers; memory is very tight on almost all console games.

Games that support HDTV are usually those which have some spare capacity and so can give HDTV res for 'free'. The game I'm working on now can't support HDTV because we don't have the fill rate or RAM for it; we could remove shadows and run at 15 FPS for HDTV, I guess :)
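To put rough numbers on the RAM cost: a sketch assuming 32-bit colour, a 32-bit depth buffer and double buffering (actual formats and buffer counts vary per game):

#include <cstdio>

// Rough framebuffer cost: two 32-bit colour buffers plus one 32-bit
// depth buffer, i.e. 12 bytes per pixel.
static double framebufferMB(int w, int h) {
    return double(w) * h * 4.0 * 3.0 / (1024.0 * 1024.0);
}

int main() {
    std::printf("640x480   (480p): %5.1f MB\n", framebufferMB(640, 480));
    std::printf("1280x720  (720p): %5.1f MB\n", framebufferMB(1280, 720));
    std::printf("1920x1080 (1080): %5.1f MB\n", framebufferMB(1920, 1080));
    // Fill rate scales the same way: 720p pushes 3x the pixels of
    // 480p per frame, and 1080 pushes 6.75x.
    return 0;
}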
 
DeanoC said:
london-boy said:
All I want is a DECENT output system that lets you choose what kind of display you want to play on. Automatic detection would be nice - as in, if I plug in a VGA cable it just switches over (please Sony/MS/Nintendo, put VGA/DVI out... pretty please).

HDTV standards will be supported when the MAJORITY of customers have them. Hi-res uses RAM and fill rate, and if most people aren't using it, why lower the quality for everybody else?

A console is a fixed platform; we try to use all the resources. If I can choose more particles for everybody, or fewer particles but high res for a few, which do you think gets picked?

It's not like plugging an HDTV in gives us extra RAM for the framebuffers; memory is very tight on almost all console games.

Games that support HDTV are usually those which have some spare capacity and so can give HDTV res for 'free'. The game I'm working on now can't support HDTV because we don't have the fill rate or RAM for it; we could remove shadows and run at 15 FPS for HDTV, I guess :)

Yes Deano, I'm fully aware of that; however, expecting AT LEAST 480p in 2006 isn't asking TOO MUCH, is it now...
I'm no expert on how exactly you'd go about doing it, but in the end would it be possible to have the resources used in different ways while still utilizing the same amount of performance?
Example: if the output is 480i, then the resources not being used for the extra resolution would be used for AA, the game rendering internally at, say, 1024x768 by default... Then, if a customer wants to see the game at full res, he'd be able to, but without AA?
Would that be feasible?
I'm sorry to say, I will boycott the next gen of consoles if it's not EASY to get proper hi-res output (here in Europe, where it still is a big problem for some gamers).
 
london-boy said:
I'm sorry to say, I will boycott the next gen of consoles if it's not EASY to get proper hi-res output (here in Europe, where it still is a big problem for some gamers).

I hope so too, but it's largely in the hands of the consumers. If enough people are buying HDTVs (which they're not, in Europe at least...) then support will be easy.

A single buffer at, say, 1024x1024, sampled down to whatever the screen can cope with, would be the ideal way of doing it (basically what the Dreamcast did at 640x480). Of course, then it would probably look better at 640x480p than at 1024x768, as supersampling is IMO more important than resolution...
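The arithmetic behind that preference - a fixed 1024x1024 buffer gives very different effective supersampling per display mode (my sketch; it ignores filter quality):

#include <cstdio>

int main() {
    // A fixed internal render target, downsampled to the display mode.
    const double buffer = 1024.0 * 1024.0;
    const struct { const char* name; int w, h; } modes[] = {
        { "640x480 (480i/p)", 640, 480 },
        { "1024x768 (VGA)  ", 1024, 768 },
    };
    for (const auto& m : modes)
        std::printf("%s: %.1f buffer samples per output pixel\n",
                    m.name, buffer / (double(m.w) * m.h));
    return 0;
}

At 640x480 each output pixel averages ~3.4 buffer samples, while 1024x768 gets ~1.3, so the lower mode comes out better anti-aliased.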

It's not a console issue but a TV issue, and as Europe doesn't have an HDTV standard yet, you can't really blame the console makers.
 