PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Yeah, I thought of that immediately, but it's fading like the 14+4 rumor.

Someone pointed out somewhere (no link this time!) that the 14+4 division was there in the modified standard GPUs via BIOS, to emulate the final silicon with 4 separate vector units.
 

Ah, so it's his speculation? The 14 CUs will still have to be fed from memory though. That 14+4 rumor only talked about catering to compute operations, not another vector engine on top of 18 CUs (and the 8-core CPU).
 

Yeah, Cell inclusion is more of a tech geek's wet dream, but maybe a dev nightmare, and so it would go against the developer-friendly philosophy Mark Cerny has pushed. So I'm more of the opinion that modified FPU units in Jaguar would be more feasible.
 
If they put one there, it will most likely replace some of the dedicated decoders, and yes, it would feed the GPU too, but it should not force the GPU to access data exclusively via the unit.

I doubt they did.
 

But shouldn't HSA make things easier for devs to work with co-processors?
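Roughly, that's the promise: with HSA-style unified, coherent memory the CPU and a co-processor can share plain pointers instead of staging explicit copies to and from device memory. A minimal toy sketch of the idea in C (the submit/wait helper and the kernel name below are stand-ins for illustration, not a real HSA runtime):

```c
/* Toy stand-in for an HSA-style runtime: in real HSA the kernel would run
 * on the GPU/co-processor against the very same pointers the CPU uses.    */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    float *samples;   /* one pointer, valid on both sides in a unified address space */
    size_t count;
} ScaleArgs;

/* Placeholder "device" kernel: imagine this executing on the co-processor. */
static void scale_kernel(ScaleArgs *a)
{
    for (size_t i = 0; i < a->count; ++i)
        a->samples[i] *= 0.5f;
}

/* Placeholder enqueue/wait: a real HSA runtime provides queue and signal
 * objects, but the point is the same -- no explicit upload/download step. */
static void submit_and_wait(void (*kernel)(ScaleArgs *), ScaleArgs *args)
{
    kernel(args);
}

int main(void)
{
    size_t n = 8;
    float *samples = malloc(n * sizeof *samples);   /* plain host allocation */
    for (size_t i = 0; i < n; ++i)
        samples[i] = (float)i;

    ScaleArgs args = { samples, n };
    submit_and_wait(scale_kernel, &args);   /* co-processor works in place */

    printf("%f %f\n", samples[0], samples[7]);   /* CPU reads results directly */
    free(samples);
    return 0;
}
```

The appeal for developers is that whatever the co-processor actually is, the programming model stays the same: allocate once, pass a pointer, synchronise.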
 

Holy crap, I can't believe you guys keep wanting more h/w in PS4. Man, you guys are greedy. :LOL:

You don't expect Sony to leave all that PS4Eye image processing up to the Jaguar, do you?



Theoretically yes, and that's about the patent I posted; the thing is, it's from IBM. :(

Is this the patent you were looking for?
Methods and apparatus provide for interconnecting one or more multiprocessors
 

*shrug* Depends on what they want to do with the Eye, and whether it's all worthwhile.

Besides Jaguar, they also have the GPGPU. Throwing in the SPUs would be a fun exercise for us, but how much are we willing to pay, really? The PS4 spec seems to grow with every passing week. :devilish:

In any case, I don't see why or how the PSEye would force the GPGPU to read memory only from the CPU or the vector unit. At this point, I am just eager to hear from the development and business side of things.
 
I have been wondering: is it possible to put a Steamroller FPU in a Jaguar core without many modifications? Are the modules interchangeable between low-power parts and performance parts? I wonder this because the alpha kits had 8 Bulldozer cores at Jaguar clock rates, so wouldn't it be logical to think the FPUs will be like those? (Well, I know that in Bulldozer the FPU is shared between 2 cores, but still, if you only use one it is much more powerful.)
 
I don't believe the pipe stage count matches, nor does the writeback method work the same way. The supported operations don't match. We don't know what the internal encodings are for the uops, and those may not match either.
The latencies of the Bulldozer FPU would be suboptimal for Jaguar as well.

So I think some rework would be needed on the FPU and the core as well.
 
Also, FMAC needs 3-source/1-destination operand micro-ops (3- or 4-operand forms), something the Jaguar scheduling apparatus is unlikely to support.
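For anyone wondering about the operand count: a fused multiply-add reads three sources and writes one result (the 4-operand form writes a separate destination instead of clobbering a source). A tiny C illustration using the standard fma() from <math.h>:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.5, b = 2.0, c = 0.25;

    /* One fused multiply-add: three source operands (a, b, c), one result.
     * A destructive 3-operand ISA form overwrites one of the sources;
     * a 4-operand form writes a separate destination register instead.    */
    double d = fma(a, b, c);   /* d = a * b + c, with a single rounding */

    printf("%f\n", d);         /* 3.250000 */
    return 0;
}
```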

Cheers
 
It has one additional graphics queue, but otherwise I haven't seen anything differing from the briefly available C.I./GCN 1.1 ISA document.

May I ask what having the extra queues does? It seems to me that it brings some sort of increase in efficiency, but is there any other reason for them? And if it does bring an efficiency increase, what kind of order are we talking about (1%, 10%, etc.), or is it too small to even mention?
 
Better utilization with more complex workloads seems to be one motivation for the queue expansion.
There's a range of queue and buffer enhancements and semaphore monitoring functionality that looks to be geared at making the system more responsive when going back and forth between CPU and GPU.
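A toy way to see the latency angle (the numbers below are made up purely to illustrate the single-queue versus multi-queue difference, not real GPU figures):

```c
/* Conceptual model only: with one in-order queue a short high-priority job
 * waits behind a long one, while a second queue lets the front-end see and
 * interleave it immediately.                                               */
#include <stdio.h>

int main(void)
{
    double long_job  = 10.0;  /* e.g. a big background compute dispatch      */
    double short_job = 0.5;   /* e.g. a latency-sensitive OS/"vshell" task   */

    /* Single queue, in order: the short job completes only after the long one. */
    double single_queue_latency = long_job + short_job;

    /* Separate queue: the short job is visible to the scheduler right away,
     * so its completion time is roughly just its own cost.                  */
    double multi_queue_latency = short_job;

    printf("single queue: %.1f  multiple queues: %.1f\n",
           single_queue_latency, multi_queue_latency);
    return 0;
}
```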
 
The additional graphics queue was labelled a "high priority" queue, and "vshell" was written on it somewhere. So probably the OS got a dedicated queue.
 
Shifty_Geezer,
Thank you for deciding to make a sticky pointing to the right thread.

The FAQ made no mention. How do I message the Admin, other Posters, or Edit posts?
I've sent two emails to "Contact Us". No replies yet.

At the time it seemed like a great idea to break up the info, from the 2013 event to launch. You know, start fresh now with info and pictures that are being made publicly available.

I did not know this thread existed. So thank you again for the "sticky" with directions.
 
The FAQ made no mention. How do I message the Admin, other Posters, or Edit posts?
New posters don't have edit or messaging rights; those come after you've contributed for a while. A "Contact Us" email will need the site staff to not be too crazily busy to respond to emails. ;)

At the time it seemed like a great idea to break up the info, from the 2013 event to launch. You know, start fresh now with info and pictures that are being made publicly available.
We'll keep everything pre-launch together, and then have a post-launch thread.

I did not know this thread existed. So thank you again for the "sticky" with directions.
There was some renaming to make things clearer. ;)
 
So what is "hardware balanced at 14 CUs" meant to mean?

You know, if the GPGPU execution units were designed to be overclocked, and there is a total of 18, they could keep 4 CUs in a reserve sleep state and rotate them around like crop rotation?

(Computer CPUs already cycle workloads according to thermal profile. If the GPU had to be overclocked to keep up with the CPU, then this path of thought makes some sense. It is not common sense to leave processing area unused, but sometimes multiple separate threads cannot solve what a single faster-running or overclocked thread could.)

[I have seen that there is a 14+4 rumor of some sort... Just doing some free thinking. What do you think of my idea to overclock GPU compute units for coupled CPU work?]
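For a rough sanity check on the "fewer but faster CUs" angle, here is the back-of-the-envelope arithmetic, assuming the commonly reported 18 GCN CUs at 800 MHz with 64 ALUs per CU and 2 FLOPs per ALU per clock (all of these figures are rumoured/assumed, not confirmed):

```c
/* Back-of-the-envelope throughput numbers for the 14-vs-18 CU idea.
 * GCN assumptions: 64 ALUs per CU, 2 FLOPs per ALU per cycle (one FMA),
 * and the rumoured base clock of 800 MHz. All values are assumptions.   */
#include <stdio.h>

int main(void)
{
    const double alus_per_cu          = 64.0;
    const double flops_per_alu_per_clk = 2.0;   /* one FMA = mul + add */
    const double base_clk_ghz          = 0.8;   /* 800 MHz */

    double gflops_18 = 18 * alus_per_cu * flops_per_alu_per_clk * base_clk_ghz;
    double gflops_14 = 14 * alus_per_cu * flops_per_alu_per_clk * base_clk_ghz;

    /* Clock that 14 CUs would need to match 18 CUs at the base clock. */
    double needed_clk_ghz = base_clk_ghz * 18.0 / 14.0;

    printf("18 CUs @ 800 MHz : %.1f GFLOPS\n", gflops_18);              /* ~1843 */
    printf("14 CUs @ 800 MHz : %.1f GFLOPS\n", gflops_14);              /* ~1434 */
    printf("14 CUs need ~%.0f MHz to match\n", needed_clk_ghz * 1000);  /* ~1029 */
    return 0;
}
```

So 14 CUs would need roughly a 29% clock bump (about 1.03 GHz) just to match 18 CUs at 800 MHz on paper, ignoring bandwidth and power.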
 