Predict: The Next Generation Console Tech

Cell2 + Larrabee makes absolutely no sense. None at all.

If memory serves, Sony signed a contract to collaborate with Nvidia going forward. Correct me if I am wrong. We should also temper any Larrabee predictions with whether it would actually perform better at its primary task in the system than something from Nvidia or AMD/ATI.

Why not? You only have to look at Larrabee as a DirectX 11 chip, and everything would be the same as it would be with a GT300, except that you could do crazier things if you decided to go with your own software rasterizer.
Intel won't ship that thing without a great DirectX 11 compiler, no doubt about that.

And which one will allow Sony to maintain reduced costs at similar performance? A GT300+ at 40nm, where we don't know when it could reach the 32nm-22nm shrinks, or a 32nm Larrabee that will hit the 22nm mark sooner?

Thinking about it, it's a non-issue for Sony. The doubt is with the CPU.

As for the Nvidia contract for future products, it was also said it could only be related to the PSP2. Anyway, look at the contract between Sony and Toshiba for the companion chip; maybe in PS4 we'll have a Larrabee plus a GT300 hung elsewhere on the board as a mere scaler ;)
 
Speaking from a developer's perspective, CellV2 + Larrabee would be a nightmare machine. You'd have two vector machines, each with a completely different architecture and a completely different memory/cache model, doing nearly the same type of workload. You would have to carefully engineer and design separate systems to keep each (CellV2 and Larrabee) fully busy, with a huge amount of re-programming and re-engineering to transfer work between the two types of processors.
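To make the "two different memory models" point concrete, here's a rough sketch of the same streaming kernel written both ways. The dma_get/dma_put/dma_wait helpers are hypothetical stand-ins for the SPU's DMA intrinsics, not real SDK calls, so treat this as illustration only:

```c
/* Illustration only: dma_get/dma_put/dma_wait are hypothetical stand-ins
   for the SPU's DMA intrinsics, not real SDK functions. */
#define CHUNK 4096

/* Cell SPU style: software-managed local store, explicit DMA in and out. */
void spu_style_scale(float *src, float *dst, int n, float k)
{
    static float ls_buf[CHUNK];                    /* lives in the 256KB local store */
    for (int base = 0; base < n; base += CHUNK) {
        dma_get(ls_buf, src + base, CHUNK * sizeof(float)); /* pull a chunk in */
        dma_wait();                                /* block until it lands */
        for (int i = 0; i < CHUNK; i++)            /* compute out of local store only */
            ls_buf[i] *= k;
        dma_put(dst + base, ls_buf, CHUNK * sizeof(float)); /* push results out */
        dma_wait();
    }
}

/* Larrabee style: coherent caches, so a core just walks main memory and
   lets the cache hierarchy (plus prefetching) hide the data movement. */
void lrb_style_scale(float *src, float *dst, int n, float k)
{
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * k;
}
```

Keeping both of those models fed at the same time, and moving work between them, is exactly where the re-engineering cost would come from.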
Yep, that's how Sony likes them.
"We don't provide the 'easy to program for' console that [developers] want, because 'easy to program for' means that anybody will be able to take advantage of pretty much what the hardware can do, so then the question is what do you do for the rest of the nine-and-a-half years?"

Hirai suggested the PS3 would see the same kind of software evolution curve that the PlayStation 2 did, as represented by the fact that God of War 2 was noticeably more sophisticated than its predecessor on the same console.

"So it's a kind of - I wouldn't say a double-edged sword - but it's hard to program for, and a lot of people see the negatives of it, but if you flip that around, it means the hardware has a lot more to offer," he said.
 
Why not? You only have to look at Larrabee as a DirectX 11 chip, and everything would be the same as it would be with a GT300, except that you could do crazier things if you decided to go with your own software rasterizer.
Intel won't ship that thing without a great DirectX 11 compiler, no doubt about that.

And which one will allow Sony to maintain reduced costs at similar performance? A GT300+ at 40nm, where we don't know when it could reach the 32nm-22nm shrinks, or a 32nm Larrabee that will hit the 22nm mark sooner?

Thinking about it, it's a non-issue for Sony. The doubt is with the CPU.

As for the Nvidia contract for future products, it was also said it could only be related to the PSP2. Anyway, look at the contract between Sony and Toshiba for the companion chip; maybe in PS4 we'll have a Larrabee plus a GT300 hung elsewhere on the board as a mere scaler ;)

There is no reason to think of Larrabee as just a GPU, though. The overlap in functionality between Cell and Larrabee would be egregious. Sony would elect to go with Cell2 OR Larrabee. No engineer worth his salt would sign off on doing otherwise. As for DX on consoles, the API itself doesn't matter, but the set of extended tools which interface with the hardware does. I'm sure Intel and Nvidia both can provide more than decent support in that respect, so that's a wash.

Reduced costs can't be determined solely by which process LRB and Nvidia's next arch would be fabbed on.

I have to think you're messing with me to suggest Sony would have both a Larrabee and a GT300 in PS4, especially if one is merely to serve as a 'scaler' of any sort. That's another blatant duplication of capabilities and, even worse, scaling could be handled in software or by a dedicated chip far more cost-effectively.

The super companion chip didn't serve a critical role in the system and had a very short tenure at that.
 
Had the rumor been that Larrabee is taking the place of both the CPU and GPU for the PS4, I could perhaps see a little sense in it. Larrabee as a GPU with another arch for a CPU just doesn't make any sense to me. Why even bother? This detail alone makes me toss this rumor into the round file, or at least think that if part of this is true, the details are all wrong.
Well, actually Charlie just said that Intel won just the GPU contract; supposedly that's what he was told without being given many details, so it looks like he reported just that, but he may agree with you (from the RealWorldTech forum):
Charlie said:
The next decision Sony has to make is whether to go with an Intel CPU or not.
It _MIGHT_ be all Larrabee. One thing to keep in mind is that it is not Larrabee 1, but more likely a Larrabee 4 or 5 derivative.
 
Whatever Charlie says from a personal analysis angle means less than nothing; if there's merit in any of his reporting, it's solely from a "here's the dirt I heard" angle. Whenever he gets into personal views of whatever he's writing about, it just goes off into la la land. Larrabee 4 or 5 derivative? Uh uh. It actually shocks me that he's on the RWT forums to begin with; I would think it'd be constant derision for him.
 
He must be guesstimating with some wide error bars on that.
Since Larrabee I isn't expected until late '09/early '10, that would mean 4/5 revisions in three years.

The timeframe on that is what puzzles me.
Intel doesn't appear to want to fork x86 for host processors, which is why an Intel exec insisted Larrabee (or at least the first round) would remain a graphics product.

Using a Larrabee descendant for a console CPU+GPU would divide the x86 market, and do so in a way that would endanger standard CPUs.

Reconciling the two branches into a common design when the first one would be barely out is kind of a tight fit. It's possible, but pretty cramped on the schedule.
 
Pardon my ignorance, but what exactly is a "DirectX 11 chip"?

It's a graphics chip that understands Microsoft's future DirectX 11 graphics API. Supposedly the next Nvidia and ATI graphics chips will be DirectX 11 parts. So what I meant is that if Larrabee has a DirectX 11 compiler, it would be able to run the same code as those chips.


I have to think you're messing with me to suggest Sony would have both a Larrabee and a GT300 in PS4, especially if one is merely to serve as a 'scaler' of any sort. That's another blatant duplication of capabilities and, even worse, scaling could be handled in software or by a dedicated chip far more cost-effectively.

The super companion chip didn't serve a critical role in the system and had a very short tenure at that.

I was only joking, trying to say that Sony can break a contract in some way, as it has done before. As far as I know the super companion chip was on the board because of some type of contract with Toshiba.
 
Ars Technica has an article.

Probably about the best thing to come out of it is this:

I've known for a while, and even mentioned on Ars, that Intel has a team that's actively courting the Sony PlayStation 4 contract

Seems a bit odd; according to the Takahashi book, Intel and MS have some kind of loyalties (there's something in there about Gates making a personal plea to his engineers to keep Intel in the 360, which of course didn't work out). So I wonder why Intel supposedly seems to concentrate more on Sony? Do they dislike MS, or did MS give them the cold shoulder for next gen already?
 
Seems a bit odd; according to the Takahashi book, Intel and MS have some kind of loyalties (there's something in there about Gates making a personal plea to his engineers to keep Intel in the 360, which of course didn't work out). So I wonder why Intel supposedly seems to concentrate more on Sony? Do they dislike MS, or did MS give them the cold shoulder for next gen already?

Intel could just as easily have a team courting Microsoft and Nintendo as well. It's in Intel's best interest to get some kind of foothold with anybody making a console.

That statement doesn't indicate that Intel has given up on that front.

As for Bill Gates and his attempt to keep Xbox 360 on x86, that was then.

Intel in more recent times is actively undermining Microsoft in the netbook market and mobile scene, and Bill Gates isn't in charge anymore.
 
That is the premise of GPGPU, though for a console or any serious close-to-metal programming, the extra layer of API would be stripped away. There are still notable problems because the typical slaved GPU setup relies heavily on the CPU, and there are often issues with additional software complexity and possibly limited bandwidth between the GPU and host processor.


In most cases, that would be a cache.


MCMs inject certain inefficiencies into the manufacturing process, though they potentially save on other headaches.
When packaging multiple chips, the probability that one of the chips is bad, that the packaging process for one of the chips fails, or that the package itself is bad increases.
The design and manufacture of the actual MCM is more complex and expensive, so it's not normally done unless other factors really make it necessary.
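To put a toy number on that (the yields below are invented purely for illustration, not real data):

```c
/* Toy MCM yield model -- the numbers are assumptions, not real data. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double die_yield = 0.85;  /* assumed chance any one die is good          */
    double pkg_yield = 0.98;  /* assumed chance the packaging step succeeds  */
    for (int dice = 1; dice <= 4; dice++)
        printf("%d-die package yield: %.1f%%\n",
               dice, 100.0 * pow(die_yield, dice) * pkg_yield);
    return 0;
}
/* Prints roughly 83%, 71%, 60%, 51% -- every extra die multiplies the risk. */
```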

The 'promise' of GPGPU programming I have heard is that you will be able to do "physics, particle animation, collision detection, large sorting operations, AI, animation, water simulation, sound processing" on the GPU and leave the CPU free to perform the tasks it specialises in. If the promises of efficiency are true, then couldn't the console achieve more with, say, a ~330mm^2 combined GPU/CPU than 2×~200mm^2 individual dies, considering the efficiency improvements from specialisation and latency minimisation?
 
Ars Technica has an article.

Probably about the best thing to come out of it is this:



Seems a bit odd; according to the Takahashi book, Intel and MS have some kind of loyalties (there's something in there about Gates making a personal plea to his engineers to keep Intel in the 360, which of course didn't work out). So I wonder why Intel supposedly seems to concentrate more on Sony? Do they dislike MS, or did MS give them the cold shoulder for next gen already?

It's because AMD has the 360 and Wii. Previously Intel didn't care.
 
It's a graphics chip that understands Microsoft's future DirectX 11 graphics API.

Oh, and why do you think these things exist? Maybe you also think that DirectX 10 chips exist, do you?

Supposedly the next Nvidia and ATI graphics chips will be DirectX 11 parts. So what I meant is that if Larrabee has a DirectX 11 compiler, it would be able to run the same code as those chips.

Larrabee is very close to a modern NV/AMD GPU internally. What does it have to do with MSFT's high-level API (Direct3D)?

I was only joking, trying to say that Sony can break a contract in some way, as it has done before. As far as I know the super companion chip was on the board because of some type of contract with Toshiba.

If the Toshiba deal had not been such a disaster, and if MSFT had waited a little longer with the X360, we would have gotten a very different PS3. What we have today is just a workaround.
 
The 'promise' of GPGPU programming I have heard is that you will be able to do "physics, particle animation, collision detection, large sorting operations, AI, animation, water simulation, sound processing" on the GPU and leave the CPU free to perform the tasks it specialises in.

The premise of GPGPU is that the parallel resources of GPU hardware can be applied to non-graphics workloads. Physics and other operations used by games would be a possible subset of the overall general-purpose angle.

It's somewhat misleading to say it frees up the CPU, since GPGPU setups at present lean very hard on the CPU and software interface to actually move the needed data and commands to the GPU.
So it's more that the GPU works hard to achieve better results on a very limited and often castrated variant (see: GPGPU video encoding) of a CPU task while the CPU spends a significant amount of its time spoon-feeding the slave card (F@H).
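As a sketch of what that spoon-feeding looks like in practice (the function and type names below are generic placeholders, not any real GPGPU runtime):

```c
/* Pseudo-API sketch of a host-driven GPGPU step; names are placeholders.
   The point is how much of the loop is the CPU packaging work for the
   card rather than doing anything itself. */
void simulate_step(World *world, GpuQueue *q)
{
    Buffer staging = pack_particles_for_gpu(world);   /* CPU: repack into GPU layout */

    gpu_upload(q, staging);                           /* CPU: push across the bus    */
    gpu_launch(q, KERNEL_PARTICLE_UPDATE,
               world->particle_count);                /* CPU: queue the kernel       */

    gpu_wait(q);                                      /* CPU: stall or poll          */
    gpu_download(q, staging);                         /* CPU: pull results back      */

    unpack_particles_from_gpu(world, staging);        /* CPU: back into game structs */
}
```

Most of that loop is exactly the spoon-feeding described above, which is why the CPU rarely ends up as free as the GPGPU pitch suggests.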

Perhaps Larrabee or some more general GPU in 2012 could change that, though only up to a point because Intel doesn't want Larrabee to submarine its main x86 lines and GPU makers have quite some ways to go.

If the promises of efficiency are true, then couldn't the console achieve more with, say, a ~330mm^2 combined GPU/CPU than 2×~200mm^2 individual dies, considering the efficiency improvements from specialisation and latency minimisation?
It might, but it seriously depends on the implementation and what level of integration you're speaking of.

An MCM is only one step above completely separate chips, and there are varying levels of integration if on-chip.

For tasks that have evolved to tolerate latency or do not map well to a GPU, integration does not help. A huge amount of current rendering has evolved to tolerate an expansion bus and a master-slave relationship between the CPU and GPU, so a game coded like today's engines would likely see a weaker GPU and weaker CPU.
 
I'll echo a few people in this thread and say that the only way I could see Larrabee in PS4 making sense is if there is only Larrabee. Maybe some variant that is different from the one that will be released as a PC GPU and more suited to fulfilling both CPU and GPU tasks.

If technically feasible, such a setup would certainly have advantages in terms of hardware complexity/cost and software development. On the other hand, implementing BC would be a nightmare, middleware and developer experience would be thrown out of the window (again), and games targeted at more conventional systems would most likely make only very limited use of the architecture. In short, I don't believe it will happen.
 
1 Larrabee core = 1 Cell SPU
The differences are rather tiny.

No, the similarities are only skin deep; the differences are huge.

The SPEs have a 4-wide vector unit and operate a single thread out of a dedicated local memory.

A Larrabee core has a 16-wide vector unit and operates multiple threads out of a coherent cache.

16-wide vector units are efficient for graphics but not much else; that's why most systems (including x86 chips) are 4-wide (see the sketch below).
The coherent cache may be more programmer-friendly, but it's not hardware-friendly: it'll require more power and will severely restrict scalability.
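A quick illustration of the width point, in plain C rather than real intrinsics: graphics naturally hands you 16 independent pixels per vector operation, while typical CPU-side game math is built around xyzw quantities, so a 16-wide unit runs mostly empty on it.

```c
/* Plain-C illustration, no real intrinsics. */

/* Graphics-style work: shade a 4x4 pixel block. All 16 lanes of a 16-wide
   unit do the same operation on independent pixels, so the unit stays full. */
void shade_block(float out[16], const float in[16], float gain)
{
    for (int lane = 0; lane < 16; lane++)   /* maps 1:1 onto one 16-wide op */
        out[lane] = in[lane] * gain;
}

/* Typical CPU-side math: one xyzw position update. Only 4 useful lanes,
   so on a 16-wide unit 12 lanes idle unless you batch four objects per
   iteration, which most game code isn't structured to do. */
void update_position(float pos[4], const float vel[4], float dt)
{
    for (int axis = 0; axis < 4; axis++)    /* maps onto one 4-wide op */
        pos[axis] += vel[axis] * dt;
}
```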

I can see Larrabee being used as a GPU but I can't see it beating Nvidia or ATI. It might beat Cell, but only on highly parallel tasks.

As for Intel's uber silicon tech, they weren't the first to 45nm logic and they were only 6 months ahead of IBM at 45nm. It's looking like that gap may narrow or not exist at all at 32nm.
 