News and Rumours: PS4

But a series of low-powered cores would seem to be the logical way to go, as opposed to a series of monolithic cores.
That's essentially what MS did this round, and not a single dev really liked it. ARM cores are terrible performance-wise compared to anything on a desktop/server, and what game code is so threaded that having dozens of them would be more beneficial than a high-performance monolithic CPU?

It's like, 50x crap CPUs is still a load of crap, you know?
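To put rough numbers on that intuition, here's a quick Amdahl's law sketch (Python, with made-up figures: the 70% parallel fraction and the relative core speeds are assumptions, not anything from the rumours). Once the serial part of a frame dominates, a pile of quarter-speed cores actually loses to one fast core:

```python
# Amdahl's law with hypothetical numbers: many weak cores vs. a few strong ones.
# Speedup is relative to a single full-speed core, given the fraction of a
# frame's CPU work that actually parallelizes.
def speedup(parallel_fraction, cores, per_core_speed):
    serial_time = (1.0 - parallel_fraction) / per_core_speed
    parallel_time = parallel_fraction / (per_core_speed * cores)
    return 1.0 / (serial_time + parallel_time)

p = 0.7  # assume 70% of the work parallelizes; real engines vary a lot
print(speedup(p, cores=32, per_core_speed=0.25))  # ~0.78x: slower than one fast core
print(speedup(p, cores=4,  per_core_speed=1.0))   # ~2.11x
```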
 
Some of these posts... you guys are smarter than to believe these early dev kits are even close to final specs.

Right now dev kits have to have a Trinity with a 7670. Those chips work "together"... http://www.fudzilla.com/home/item/26635-trinity-has-fewer-radeon-cores-but-more-efficient

Sony has already pretty much confirmed that the PS4 will be built on 28nm. Masaaki Tsuruta let the cat out of the bag with his comment on 28nm in this interview, which also confirms that they have been developing an SoC for the PS4. http://mandetech.com/2012/01/10/sony-masaaki-tsuruta-interview/

The Kaveri 28nm APU is set to launch in early 2013. The HSA design is perfect for a console...
http://www.anandtech.com/show/5491/amds-2012-2013-client-cpugpuapu-roadmap-revealed
 
The hardware spec alone seems boring. Sure hope Sony has a compelling end user vision of what PS4 is like.
e.g.,
PrimeSense's controller-free interactive TV vision looks nice.
OnLive's online environment with spectating looks nice too.
The Google Maps April Fool joke was neat too. ^_^
 

The way that AMD have been promoting their "HSA" initiative for APUs points towards an attempt to achieve a lot of the stated aims for Cell, but with a focus on easier software development. An easier-to-program Cell does nothing for you?
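A toy sketch of that programming-model difference (Python; these are not real Cell or HSA APIs, plain lists just stand in for the memories):

```python
# Toy contrast, not real APIs: Cell-style explicit staging vs. an
# HSA-style shared address space. Plain lists stand in for memories.

def cell_style(main_memory):
    # An SPE computes only on its local store, so every job needs an
    # explicit DMA-in / compute / DMA-out sequence.
    local_store = main_memory[:]                # "DMA in"
    local_store = [x * x for x in local_store]  # compute on the local copy
    return local_store[:]                       # "DMA out"

def hsa_style(main_memory):
    # CPU and GPU share one coherent address space, so the "GPU" works on
    # the data where it already lives; no staging copies to manage.
    return [x * x for x in main_memory]

data = list(range(8))
assert cell_style(data) == hsa_style(data)  # same result, much less ceremony
```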
 
They have to have a high-end GPU (or at least one a step ahead of the next Xbox's); that is a must. Any worse and Americans won't buy it.

With no exotic CPU next time round able to make up for a weaker GPU they don't really stand a chance.

You would think, though, that AMD would cut them a deal if they are taking both the CPU and GPU from them, as it would be in AMD's best interest to make sure the PS4 was superior to the competition.
 
Would Sony be compelled to spend upwards of a billion in R&D for the next PS4?
I don't think the components will be as expensive as the PS3's were, no matter how advanced they get. They'll hit a thermal limit before they hit a per-console budget limit.

I think the technology war would be fought in the R&D field. :cool:
 

But AMD supplies the competition's GPU, doesn't it?

Maybe they don't want consoles to be too competitive with the video cards they sell for PCs.
 
Why not? Anything they can fit in a console won't be as fast as AMD's fastest PC parts, and 6~12 months after launch whatever GPU they use will be outdated anyway.
 
Maybe their design approach on the Vita will be a clue.

When it was announced, it was the fastest in the mobile space but by the time it came out, the gap wasn't as big.

If they can get a competitive spec for a SFF type of power envelope, it might be a good start.
 
Getting both in the PS4 (should this all be true) is guaranteed income for 6 or 7 years, as I imagine there would be a percentage per chip going to AMD even if Sony fabs them itself.

Wouldn't the percentage be fixed also? Meaning they will get the same in year 6 as they do in year one, when the related GPU would've dropped three quarters of its value in that time.
 
They could negotiate the contract any way they want, but if you wanted it percentage-based, it would be a % of what?
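For a concrete (and entirely made-up) illustration of why "a % of what" matters, compare a flat per-chip fee against a percentage of a chip cost that keeps falling:

```python
# Hypothetical royalty math: a fixed per-chip fee vs. a percentage of a
# chip cost that shrinks as the process matures and the part is die-shrunk.
chip_cost = 100.0        # assumed starting cost per chip, in dollars
annual_cost_drop = 0.20  # assume the cost falls 20% per year

for year in range(1, 8):
    fixed_fee = 5.00                 # flat $5/chip, same in year 7 as in year 1
    percent_fee = 0.05 * chip_cost   # 5% of whatever the chip costs now
    print(f"year {year}: chip ${chip_cost:6.2f}  fixed ${fixed_fee:.2f}  5% fee ${percent_fee:.2f}")
    chip_cost *= 1.0 - annual_cost_drop
```

On these assumptions the percentage fee has lost roughly three quarters of its value by year 7, which is exactly the concern raised above; a flat fee sidesteps that.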
 

As for R&D spend: well, they spent $400 million on Cell alone, and in one of the recent interviews it was stated that the PS4 was their first $1 billion project.
 
On what could they possibly spend $1 billion? A high-powered version of a 2.5D interposer?

[Hopefully to connect two Kaveri APUs down the line.]
 
Hookers and blow?

Though seriously, does staff salary count towards the R&D budget? Is there some GAAP rule that defines what can and cannot be written off as R&D?
 
On what could they possibly spend $1 billion?
Hardware ray tracer :runaway:


Though to be somewhat serious: throw a few billion transistors at it and a half-decent ray tracer could probably be built that's capable of running the stuff shown in the GPGPU ray tracing thread a couple of orders of magnitude faster. The biggest problem would be that we still don't quite have good enough algorithms for many ray tracing problems, so simply being able to throw billions of rays per second at the screen won't quite be enough.
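For scale, a quick back-of-the-envelope on what "billions of rays per second" actually buys (the 1080p60 target and rays-per-pixel counts are just assumptions): one billion rays/s is only about 8 rays per pixel at 1080p60, and decent global illumination wants far more than that, which is why the algorithms matter as much as the raw ray budget.

```python
# Back-of-the-envelope ray budget for real-time ray tracing.
# Resolution, framerate and rays-per-pixel are assumptions for scale only.
width, height, fps = 1920, 1080, 60
pixels_per_second = width * height * fps  # ~124 million pixels/s

for rays_per_pixel in (1, 10, 100):  # primary rays only vs. GI-ish workloads
    rays = pixels_per_second * rays_per_pixel
    print(f"{rays_per_pixel:3d} rays/pixel -> {rays / 1e9:6.2f} billion rays/s")
```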
 
Fudzilla is anything but reliable and often doesn't understand what it's writing about; worse than Charlie.
I mean, "more efficient"? Trinity will use the same architecture as Cayman: VLIW4 / Northern Islands.
The 7670 is Southern Islands; they can work together, I guess, but it's far from ideal. I don't expect the final product to ship with GPUs of different architectures.
Even for development it would make more sense to have Llano plus a Southern Islands based GPU.
So I'm not sure about the "dev kits have to have, etc." statement.

The second link is proof of nothing; it's basically a lot of BS. The same applies to the "original" source (GamingBolt via ET): a lot of wishful thinking/BS from the authors based on a few pretty generic quotes.
Just as an example, there is no quote for the 4K/300fps BS, just the authors' assertions, and so on.

Then there is AnandTech, way more serious, but too bad they don't (nor do AMD's slides, by the way) give a precise target for the release of the 2013 products. Kaveri is "2013"; that's all that can be said from that.

Wishful thinking and selective reading are bad friends ;) and that's true whether or not Sony uses a GCN based GPU.
 
I don't know where to begin here...

Trinity is more efficient going by AMD itself. And it does work with the 7670, unless Sony just wanted to use old tech and had AMD custom-make it so the 7670 would work with Llano. But I don't think so, based on the poor yields of the old parts. :LOL:
http://www.tomshardware.com/news/AMD-Trinity-Piledriver-VCE-Demo,15009.html

Is Tom's Hardware not a trustworthy source either???

FYI, the next-gen AMD APUs always get shown at CES and released a couple of months later.

I guess the 28nm quote was for an upcoming toaster Sony is making. I know he talks about the PS4 SoC, but I think that's the code name for the next-gen toaster Sony plans to release. Or maybe it was just made up...

I mean, really... the 32nm Trinity A10-5800K works with the Radeon HD 7670 in a dual graphics combination. That is the rumored GPU in the PS4 dev kits IGN leaked. The HD 7670 does not work with the old parts; with the old chips it would have to be the AMD Radeon HD 6670. Unless Sony made a custom part to use the old chip, which makes no sense at all.

:oops:

And to say the console maker will not change to a "GPU of a different architecture"... that is exactly what happened with the Xbox 360: it went from an ATI Radeon X800 to a very custom part.
 
The recent IGN/Fudzilla rumor seems strange to me. I've long thought that for gaming, a good balance is to spend quite a lot more of your budget on the graphics card than on the CPU, like twice as much or some such. The rumored parts look like spending a similar amount (or less) on the GPU compared with the CPU, both in terms of money and TDP.

Is that really a good balance for gaming? Or is my impression from PC not applicable to game consoles?
 
"More efficient" as in VLIW4 vs. VLIW5, for things like pressure on register accesses. Not all five ALUs in the VLIW5 block were alike; it's a 4+1 design, and there is a good argument to be made that in previous AMD GPU architectures the peak FLOPS figures should be cut by 20% / were overblown by 25%. See, pretty much like Fudzilla, you've got no idea... you're too focused on making fun of me...
Basically all Northern Islands based GPUs (with Trinity that's three) have fewer FLOPS, but it's irrelevant.
For previous GPUs, AMD was counting FLOPS as 10 per 5-wide VLIW unit. The thing is, it (almost?) never happens that the 4+1 ALU design pushes two FLOPS per cycle on every ALU.
In Northern Islands there are no special units, so you have a 4-wide VLIW design, thus 4 ALUs. That's 8 FLOPS per cycle per VLIW unit (there are 16 such units in an AMD SIMD, acting in a vectorized fashion). Access to registers is less of a bottleneck (some operations can be slower, though), but it achieves something closer to its peak figures. FLOPS are basically a BS unit that means nothing on its own, since its value varies from architecture to architecture.
Evergreen/Southern Islands FLOPS are different from Northern Islands FLOPS; actually, Northern Islands FLOPS are more meaningful than the previous architecture's.
Fudzilla "knows" what Cayman is, right, and instead of making sense they use shortcuts like "more efficient" without having any idea, because all they see is fewer SPs/cores/marketing BS. AMD, for the public, has to spin things into "it's more efficient", which is somehow a taste of their own medicine. From Evergreen to Northern Islands, the relevant metric is the number of SIMDs, not the number of SPs.

Anyway, the two architectures don't swallow the same code; they may require different optimizations, as other (lesser) changes happened between Southern Islands and Northern Islands.

Whether or not Sony ultimately goes for GCN, I reiterate that it makes no sense to mix GPU architectures, even in the dev kits, for marginal gains "in efficiency". I don't remember how many SIMDs Trinity has, but I suspect it's the same as Llano, so five; that would be 320 SPs, which would have been 400 in the previous architecture, but that's a bit irrelevant. In most cases Trinity will achieve the same results as Llano, sometimes a bit worse, sometimes a bit better (ceteris paribus, so for the high-end, full-blown desktop APU). That's for the GPU; then there's the CPU, and, to put it nicely, the Piledriver modules have yet to prove themselves.

So I reiterate that "Sony has to use Trinity" doesn't make that much sense.

How about the other article and your claim of Kaveri launching early 2013?

As for the CPU/GPU balance question: in Llano (we don't know for Trinity or Kaveri) the GPU takes as much room as the four CPU cores, so if APU+GPU happens it kind of shifts the line a bit.
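To make the balance question concrete, a tiny sketch with an assumed 150 W console budget; the rumoured Trinity + 7670 pairing reads like the first row, while the PC-gaming rule of thumb above is the second:

```python
# Hypothetical power-budget split for a console; the 150 W total and the
# shares are assumptions, purely to make the question concrete.
total_tdp = 150.0  # watts

for gpu_share in (0.5, 2.0 / 3.0):  # even split vs. a ~2:1 GPU bias
    gpu_w = total_tdp * gpu_share
    cpu_w = total_tdp - gpu_w
    print(f"GPU {gpu_w:5.1f} W / CPU+rest {cpu_w:5.1f} W ({gpu_share:.0%} to GPU)")
```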
 
Again, I don't know why you go off on something I did not say. Where did I talk about FLOPS? The performance benchmark gains are from AMD's own slides. But again, I really don't see why we are talking about this...

Sony had to use Trinity for the GPU sharing to work, since Llano does not support the 7670, only the 6670. That is where the "Sony must be using Trinity" comment came from... The only source we have is IGN, and that is what I'm basing these comments on. Another reason to use Trinity is the "Piledriver CPU cores"; the future Steamroller cores in Kaveri will build on those.

And it makes perfect sense for a console to use GCN, since it will support HSA. Kaveri supports HSA and will be built on 28nm, and we already have a quote from Sony talking about 28nm yields.

I am not the only one thinking this...

"What's more likely is that the final product will be built around Kaveri; the architecture AMD expects to launch in 2013. Not only would a Kaveri-derived product put Sony on par with what's shipping in the mainstream market, the enhancements coming as part of AMD's Heterogeneous System Architecture would make it much easier for system programmers to exploit the CPU/GPU dynamic to extract maximum parallelism and performance."

http://hothardware.com/News/ConsoleWatch-PS4-Orbis-CPU-Specs-Leak-But-Likely-Reflect-Early-Hardware/

And you can check just about any site; they all have roadmaps for AMD's APUs. It seems you pick and choose which sites you believe, or whatever. Like I said already... the new APUs are always previewed at CES... because they are what goes into new laptops.

I think this all makes perfect sense and would be great-performing hardware.
 