Predict: The Next Generation Console Tech

No, POWER4 is the basis of the PowerPC 970 (the G5), used in Macs and in the early Xbox 360 dev kits, but Xenon is a totally different design.
I agree that POWER6 or POWER7 is totally impossible. I was only talking about using both x86 and PowerPC, which is only conceivable if the PowerPC part is a Xenon, for compatibility; but then I agree MS would not spend the dollars, and especially would not pay for the development of a new 45nm or 32nm Xenon (the existing 45nm Xenon shares its die with the 360 GPU, as far as I know).
 
Taking all the rumours so far, I'm going with the following ATM.

Microsoft: Jaguar cores, a customised 7770/7850-performance-level GPU, and IBM eDRAM (64 MB)

Sony: Jaguar cores, a lightly customised 7850-performance-level GPU, and possibly 2.5D/3D stacking and DSPs.

Hopefully we get new leaks early in the new year.
 
Microsoft will use HD 8000-series, not 7000 (not the mobile M parts anyway, don't worry); some AMD China employees have hinted at it.
 
I thought the previous rumor of a 7950 GPU was doubtful, because it's an expensive card with a 365 mm² chip and a 180 W TDP. But what if that rumor is actually based on what they put in the dev kit, and the final version will be a Sea Islands equivalent? The 8850 is a more reasonable 270 mm², has plenty of shaders disabled for yield, is supposed to retail for $199 with 2 GB of GDDR5, and has only a 130 W TDP. It also offers the same ~3 TFLOPS, which would lend credibility to the rumors that they put a 7950 in the dev kit to approximate the performance of a customised 8850 with a currently available GPU.
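
As a rough sanity check on that "same ~3 TFLOPS" claim (a sketch only: the HD 7950 numbers are public specs, while the "8850" shader count and clock below are placeholders I've made up to match the rumoured figure, not confirmed hardware):

[code]
# Theoretical peak single-precision throughput for a GCN-style GPU:
#   FLOPS = stream_processors * 2 ops per ALU per cycle (FMA) * clock
def peak_tflops(stream_processors: int, clock_ghz: float) -> float:
    """Theoretical peak single-precision TFLOPS."""
    return stream_processors * 2 * clock_ghz / 1000.0

# HD 7950: public spec, 1792 SPs at 800 MHz.
print(f"HD 7950: {peak_tflops(1792, 0.8):.2f} TFLOPS")   # ~2.87

# Hypothetical cut-down Sea Islands part hitting the rumoured figure:
print(f"'8850' (assumed 1536 SPs @ 1.0 GHz): {peak_tflops(1536, 1.0):.2f} TFLOPS")  # ~3.07
[/code]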

I would be happy with that, and wouldn't be grumpy anymore. ;)
 
One thing I haven't understood in all these hardware discussions is (and again, correct me if I'm wrong please, I don't claim to know much about hardware): why are you guys comparing off-the-shelf components to what Microsoft and AMD will come up with in the final unit?

I mean, since Microsoft's previous console (the 360) used a custom GPU chip that stuffed in a lot of pretty forward-thinking elements compared to the chips at retail at the time, why would the new one have to compare directly to AMD's off-the-shelf parts now?

AMD and Microsoft are working on the GPU together, right? And I assume Microsoft was offered a look at AMD's roadmap when trying to figure out how to future-proof the thing, correct? So why would they not be using efficiencies AMD has learned from the 7000 series, as well as architectural improvements already present in prototypes of the 8000 series, if that is indeed coming out next year?

I only bring up the question because in a lot of the discussion I hear that the components they end up using will be constrained by the street date those components hit retail shelves, and I'm just wondering why that has to be the case.
 
The term "custom" is overrated, and marketing speak,. From what I understand Sony and MS are asking AMD to sell them their best GPU and adapt it to their needs. That's what's custom, a different memory interface, more shaders or less shaders than the current GPU in production, etc... they don't dictate them a better way to make a GPU, they call them to adapt one for them, because neither Sony or MS can actually make a better one. If AMD could make a better one they would have done so already, they are at the top of their game. The edram is an exception, but it's in the "memory interface" category, and adding something to an existing AMD GPU design.
 
Further extrapolations

No, I guess we won't be seeing an Intel Core i7 and a PowerPC design together, for the simple fact that the two are not compatible - they run different instruction sets/microcode. HOWEVER, I said an Intel-based CPU in that statement - thus it may be a different architecture, similar to the general design of CPU logic cores, SIMD-based processors, and GPUs.

Main topic:
Anyway, you can't see the forest for the trees! All signs point to an x86-based processor. That's huge. We haven't had that since 2001 - it will have been 11 years. What are some of the benefits of an x86-based processor? It can run branched, highly complex code very quickly. Good performance with highly complex code means it can simulate life better: monitor and maintain hundreds of variables. People can't think outside the box - there are many possibilities. Say you want to open a door in a video game. In current games you open it, and that's it. Now say you want something to happen depending on the timing: if you open it this fast, one thing happens; if you open it that fast, something else happens. That's an example of branched, complex code.
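
To make the door example concrete, here's a toy sketch of what that kind of branchy gameplay logic looks like (all function names, thresholds, and outcomes are made up for illustration):

[code]
# Toy illustration of timing-dependent branching: the outcome depends on
# how quickly the player opened the door and on other world state.
def on_door_opened(open_duration_s: float, guard_alerted: bool) -> str:
    """Pick an outcome based on how the door was opened."""
    if open_duration_s < 0.5:
        # Kicked open: loud, always draws attention.
        return "guards_rush_in"
    elif open_duration_s < 2.0:
        return "guard_turns_around" if guard_alerted else "normal_entry"
    else:
        # Eased open slowly: quiet entry.
        return "stealth_entry"
[/code]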

Say you want to accurately calculate the trajectory of bullets flying through space from an AK-47. You have to fire a vector from the gun and see what collision it makes with the environment: highly complex sorting and collision algorithms. You've got to factor in gravity, the trajectory of the bullet, the wind resistance, the speed of the human moving. Highly complex, branched code which x86 excels at, but which the PowerPC doesn't. Say you want to do that for a few hundred characters. Precisely. It pays to have an x86 core with a third of its transistors dedicated to the instruction stage.
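
For what it's worth, a single bullet update of the kind described above can be sketched in a few lines (a minimal explicit-Euler step; the drag constant and timestep are made-up illustration values, and ~715 m/s is just a typical AK-47 muzzle velocity):

[code]
# Minimal bullet step: gravity plus a crude linear drag term.
GRAVITY = (0.0, -9.81, 0.0)   # m/s^2
DRAG_K = 0.05                 # 1/s, toy linear drag coefficient

def step_bullet(pos, vel, dt):
    """Advance one bullet by dt seconds; returns (new_pos, new_vel)."""
    acc = tuple(GRAVITY[i] - DRAG_K * vel[i] for i in range(3))
    new_vel = tuple(vel[i] + acc[i] * dt for i in range(3))
    new_pos = tuple(pos[i] + new_vel[i] * dt for i in range(3))
    return new_pos, new_vel

# A round leaving the muzzle at ~715 m/s, stepped for one second at 120 Hz:
pos, vel = (0.0, 1.6, 0.0), (715.0, 0.0, 0.0)
for _ in range(120):
    pos, vel = step_bullet(pos, vel, 1.0 / 120.0)
[/code]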
 
aegies:
I wish everyone would stop comparing the hardware inside the next-gen consoles to consumer parts. It's not how things are going to shake out. The parts driving those consoles will have trade-offs that allow for greatly enhanced performance in some respects and compromises in others. But those compromises will be made in the interest of game development.

:!::idea::oops:
 
The term "custom" is overrated, and marketing speak,. From what I understand Sony and MS are asking AMD to sell them their best GPU and adapt it to their needs. That's what's custom, a different memory interface, more shaders or less shaders than the current GPU in production, etc... they don't dictate them a better way to make a GPU, they call them to adapt one for them, because neither Sony or MS can actually make a better one. If AMD could make a better one they would have done so already, they are at the top of their game. The edram is an exception, but it's in the "memory interface" category, and adding something to an existing AMD GPU design.

We have no evidence that Microsoft is making its own GPU with AMD, but honestly, Microsoft does own Direct3D and many patents, and it could make changes in the API exclusively for the next Xbox. AMD, for example, has input (as does Nvidia), and I am sure Microsoft has input on what could go into a Direct3D 11.5 or 12 chipset.

It's not outside the realm of possibility.
 
Say you want to accurately calculate the trajectory of bullets flying through space from an AK-47. You have to fire a vector from the gun and see what collision it makes with the environment: highly complex sorting and collision algorithms. You've got to factor in gravity, the trajectory of the bullet, the wind resistance, the speed of the human moving. Highly complex, branched code...
Nope. Simple, highly parallelisable code working on fancy data structures. Xenon and Cell will be as good as any SIMD architecture in this regard. If the x86 processors in the next-gen consoles are less potent in the SIMD department (highly likely), there'll be no improvements in the physics department (except physics may shift to GPU). Furthermore, an x86 processor can be anything from i7 to Celeron, so that doesn't tell us anything about the performance. x86 only tells us the instruction set of the CPU, and some assumptions about architecture which can be ignored when we have specific details about the cores (AMD Jaguar or whatever).
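
To illustrate the counter-point: the same bullet update written over many bullets at once is a single branch-free, data-parallel loop, which is exactly the kind of work SIMD units (or a GPU) eat up. A minimal structure-of-arrays sketch, reusing the toy constants from the earlier example:

[code]
import numpy as np

N = 4096  # number of in-flight bullets
pos = np.zeros((N, 3), dtype=np.float32)
vel = np.tile(np.array([715.0, 0.0, 0.0], dtype=np.float32), (N, 1))

GRAVITY = np.array([0.0, -9.81, 0.0], dtype=np.float32)
DRAG_K = np.float32(0.05)        # same toy drag constant as before
DT = np.float32(1.0 / 120.0)

def step_all_bullets(pos, vel):
    """Advance every bullet by one timestep with no per-bullet branching."""
    acc = GRAVITY - DRAG_K * vel   # broadcast over all (N, 3) bullets
    vel = vel + acc * DT
    pos = pos + vel * DT
    return pos, vel

pos, vel = step_all_bullets(pos, vel)
[/code]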
 
You guys know more than I do, but this is my opinion and you can tell me if I am wrong or right.

The reasons a GTX 670 cost $400 when I bought mine in June come down to four basic things:

1) It uses very high-spec GDDR5 memory (2 GB), which is expensive
2) The 28nm process was brand new, so they couldn't make as many chips (low yields)
3) Nvidia knew early adopters would pay (which ties into #2)
4) ATI already had more expensive cards out (at the time), so Nvidia could justify it

We know the bus on the GTX 670 is 256-bit, so that's not it. We also know a heatsink and fan aren't that expensive, and while the ports on the back add cost, they're not the killer reason either.

None of this applies to consoles coming by the end of 2013.
 
Also, many distributors take a margin: you have Nvidia first, then the graphics card maker, the importer and/or wholesaler, and finally the retailer where you buy…
So your card probably sells for around $250-300 from the graphics card maker, and Nvidia sells the GPU for around $100-150.
You can't take the price of a retail product to determine the cost of integrated hardware… Fewer steps, lower costs. ;)
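
A back-of-the-envelope version of that chain, working backwards from a $400 retail card (every margin percentage here is a made-up illustration value, not real data):

[code]
retail_price = 400.0
margins = {  # hypothetical cut taken at each step of the chain
    "retailer": 0.10,
    "distributor/importer": 0.10,
    "board partner": 0.55,   # covers PCB, GDDR5, cooler, VRMs and their margin
}

price = retail_price
for step, margin in margins.items():
    price *= (1.0 - margin)
    print(f"after {step}: ~${price:.0f}")
# Whatever is left (~$146 here) approximates what the GPU vendor was paid
# for the chip itself.
[/code]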
 
Somehow I think stores selling GPUs aren't taking such a big amount off the top. I don't have figures on hand, but I've heard of razor-thin margins in other markets like game sales, and you'd kind of expect competition to drive it well below what you're suggesting.

No idea about the OEM though, since of course the GPU chip isn't the only thing on the BOM.
 
How would Ken Kutaragi make a PS4 (without the super-Cell)?

If he doesn't pursue his dreams with Cell, then judging by his previous efforts on the PS1/PS2/PS3, I would say he would go parallelisation-crazy and bandwidth-crazy. I wouldn't be surprised if he asked IBM or Intel for a 16+ core CPU, and made a deal with Rambus again (or did something innovative with Samsung). I also expect he wouldn't make a deal with AMD, given their very weak CPU performance, which would lead him back into the arms of Nvidia. He could also develop some new DSPs with Toshiba.

So a Kutaragi PS4: IBM / Nvidia / Rambus (or Samsung) / Toshiba-Sony.

And I forgot to mention that it would be a very powerful console, for 10 years for real this time :mrgreen:
 
And cost about $600 and bankrupt Sony in the process.
 
lol, don't throw Kutaragi out of the PS4 equation this early. We shouldn't forget that he trained a lot of engineers at Sony; I wouldn't be surprised if Sony executives called him for advice and ideas on the PS4, and if his former students and friends still call him for advice too. So maybe Kutaragi had some input on the PS4 after all. ;) (I believe he still holds an adviser post within Sony; Japanese companies don't forget their former employees, they have a great culture in that regard.)
 