PlayStation 4 (codename Orbis) technical hardware investigation (news and rumours)

Most of the AR games I have seen look fake or cartoonish. I doubt they will be effective in telling a horror story.

Demon's Souls player invasions scared me more than all the horror movies I've seen combined. Sometimes it's the idea or the situation that's frightening. We don't always need the presentation.

And no, I don't want to wear a PlayStation Eye (+ cable) on my head. -_-

Clicky dpad, good. PSEye on top of my head, bad.

Yes, I know it was crazy, but it was the easiest way I could think of to simulate this:
[image]
 
Ah, I see. When I first saw the HMZ-T1, I thought of adding a camera to the unit too. I think it would open up new application areas (e.g., augmented reality capabilities). Its AR applications may go beyond gaming, and outside the living room.
 
I guess it works the same as a drawing tablet: touching the lower left corner of the pad makes your cursor (or whatever) jump to the lower left corner of the screen, touching the upper right corner of the pad makes it jump to the upper right corner of the screen, and so on. This takes getting used to, but it's much faster than a classic notebook touchpad, where you have to swipe across the pad until the cursor reaches its destination on a high-res screen.

All capacitive touch panels work the same way; it's possible to use a notebook touchpad like that (if the drivers allow it, but again, that's software). It's just that the driver has to emulate a mouse and thus convert absolute positions into relative ones. Direct touch wouldn't be convenient on a small panel with no feedback, though: drawing tablets are much bigger, and some even have screens. With a small touchpad in direct-touch mode it would be hard to hit the exact spot; with mouse-emulation mode you have continuous visual feedback.

The DS4 could use both modes, with the ability to select between them. I'd really like to see an FPS with coarse, quick control from the touchpad and precise aim from the gyro.
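
For concreteness, here's a minimal sketch of the two mappings in C++. Everything in it is a hypothetical illustration (the pad grid, struct names, and sensitivity value are made up, not from any Sony SDK):

```cpp
// Sketch of the two touchpad mappings discussed above. The pad and screen
// resolutions, struct names, and sensitivity are hypothetical illustrations,
// not values from any Sony SDK.
#include <algorithm>
#include <cstdio>

struct PadSample { int x, y; };    // raw absolute position on the pad
struct Cursor    { float x, y; };  // cursor position on the screen

constexpr float PAD_W = 1000.0f, PAD_H = 500.0f;   // assumed pad grid
constexpr float SCR_W = 1920.0f, SCR_H = 1080.0f;  // target screen

// Absolute ("drawing tablet") mode: pad corners map 1:1 to screen corners.
Cursor mapAbsolute(PadSample s) {
    return { s.x / PAD_W * SCR_W, s.y / PAD_H * SCR_H };
}

// Relative ("notebook touchpad") mode: only the delta between successive
// samples moves the cursor, scaled by a sensitivity factor.
Cursor mapRelative(Cursor cur, PadSample prev, PadSample now, float sens) {
    cur.x = std::clamp(cur.x + (now.x - prev.x) * sens, 0.0f, SCR_W);
    cur.y = std::clamp(cur.y + (now.y - prev.y) * sens, 0.0f, SCR_H);
    return cur;
}

int main() {
    PadSample prev{100, 100}, now{200, 150};
    Cursor c = mapAbsolute(now);                        // jumps straight there
    std::printf("absolute: %.0f, %.0f\n", c.x, c.y);
    c = mapRelative({960.0f, 540.0f}, prev, now, 1.5f); // nudges from center
    std::printf("relative: %.0f, %.0f\n", c.x, c.y);
}
```

With a mode flag exposed to the game, the same pad could drive coarse absolute pointing while the gyro handles fine aim, as suggested above.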
 
Jon Stokes (formerly of Ars Technica) is talking about doing a deep dive on Orbis as a $1.99 Kindle Single. That would be an interesting addition to this discussion.
 
He also said the following which I thought was pretty interesting:

all apps are better with half the cores at twice the performance, the question gets more nuanced if the trade is at 1.5x perf
 
He also said the following which I thought was pretty interesting:

all apps are better with half the cores at twice the performance, the question gets more nuanced if the trade is at 1.5x perf

What does that mean?

EDIT: Oh, he seems to be talking about the CPU.
 
He also said the following which I thought was pretty interesting:

all apps are better with half the cores at twice the performance, the question gets more nuanced if the trade is at 1.5x perf
I always said that BD/PD/SR-derived cores are very unlikely to be able to clock twice as high when constrained to the same low power budget the 8 Jaguar cores got. Every additional watt for the CPU is a watt missing on the GPU side, so it makes a lot of sense to restrict the CPU power budget to 20-25 W or so. And how high could one clock four PD cores within that budget on TSMC's 28nm process (which also means one would have to put in a tremendous effort to port PD from GF's 32nm SOI in the first place)? The other way around, GF's SOI processes appear to be a poor fit for the GPU part. Taking future shrinks and possible foundry changes into consideration, it was simply the better solution to put in twice the number of Jaguar cores instead of expensively porting PD cores and then strangling them to low frequencies with the power budget.
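
To put rough numbers on that argument: a back-of-envelope sketch assuming the usual dynamic-power model P ~ C·V²·f, plus the assumption that voltage has to rise roughly in step with frequency (so per-core power grows roughly with f³). The clocks and the model itself are illustrative, not measured Jaguar or Piledriver data:

```cpp
// Back-of-envelope for the CPU power-budget argument above, using the
// standard dynamic-power model P ~ C * V^2 * f and assuming voltage must
// rise roughly linearly with frequency, so per-core power grows ~ f^3.
// All numbers are illustrative guesses, not measured Jaguar/PD figures,
// and the model ignores the larger per-core capacitance of PD cores.
#include <cstdio>

double relPower(double fGHz) { return fGHz * fGHz * fGHz; }  // P ~ f^3

int main() {
    const double fJaguar = 1.6, fPD = 3.2;   // hypothetical clocks
    double p8 = 8 * relPower(fJaguar);       // eight slow cores
    double p4 = 4 * relPower(fPD);           // four cores at twice the clock
    std::printf("8 cores @ %.1f GHz: %6.1f (relative units)\n", fJaguar, p8);
    std::printf("4 cores @ %.1f GHz: %6.1f (relative units)\n", fPD, p4);
    std::printf("-> %.1fx the power for the same aggregate clock\n", p4 / p8);
}
```

Under that crude model, halving the core count while doubling the clock costs about 4x the power for the same aggregate clock rate, which is why a 20-25 W CPU budget favors more, slower cores.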
 
I always said that BD/PD/SR-derived cores are very unlikely to be able to clock twice as high when constrained to the same low power budget the 8 Jaguar cores got. Every additional watt for the CPU is a watt missing on the GPU side, so it makes a lot of sense to restrict the CPU power budget to 20-25 W or so. And how high could one clock four PD cores within that budget on TSMC's 28nm process (which also means one would have to put in a tremendous effort to port PD from GF's 32nm SOI in the first place)? The other way around, GF's SOI processes appear to be a poor fit for the GPU part. Taking future shrinks and possible foundry changes into consideration, it was simply the better solution to put in twice the number of Jaguar cores instead of expensively porting PD cores and then strangling them to low frequencies with the power budget.

Why would they port PD to 28nm rather than just using SR, which will already be on 28nm and commercially available in the new consoles' launch timeframe?

I do agree with your point, though. PD/SR simply aren't designed for very low power, so I'm not sure you could get them within 25 W at any reasonable clock speed.

I'd also be curious what Carmack is defining as a core there. In Intel's case it's obvious; in AMD's, not so much.
 
Has anyone else seen the comparison shots of UE4 running on PS4 and PC?

They stripped a heck of a lot out of the demo to get it to run on PS4... A lot more than even I thought...
 
Has anyone else seen the comparison shots of UE4 running on PS4 and PC?

They stripped a heck of a lot out of the demo to get it to run on PS4... A lot more than even I thought...

At this point in the game that isn't too concerning. They probably didn't have much time to put something together, and the tools are probably incomplete if most developers weren't even sure how much RAM the final console would have.
 
At this point in the game that isn't too concerning. They probably didn't have much time to put something together, and the tools are probably incomplete if most developers weren't even sure how much RAM the final console would have.

This. They have beta kits with representative, almost-final silicon from January, and developers said they are more powerful than the prior ones, so the enhanced GPU, UMA, and APU latencies will start to shine from now on. And libGCM!
 
This. They have beta kits with representative, almost-final silicon from January, and developers said they are more powerful than the prior ones, so the enhanced GPU, UMA, and APU latencies will start to shine from now on. And libGCM!

Twitter responses from Epic people seem to suggest that they only just found out about the 8 GB of RAM. So my thought is that the demos were built around a 4 GB RAM budget.
 
I wouldn't judge Orbis' initial UE4 performance by Epic's demo. A better candidate is probably Knack. I suspect that, like in the Uncharted 1 days, they are building custom tools as they make the game. Then those tools and frameworks will be shared around.
 
This. They have beta kits with representative, almost-final silicon from January, and developers said they are more powerful than the prior ones, so the enhanced GPU, UMA, and APU latencies will start to shine from now on. And libGCM!

What exactly is libGCM?
 
I'd also be curious what Carmack is defining as a core there. In Intel's case it's obvious; in AMD's, not so much.

I interpret it as him saying he would rather have 4 Jaguar cores at 3.2 GHz than 8 cores at 1.6 GHz. And even if the 4-core Jaguar were only at 2.4 GHz, it may still beat the 8-core in the majority of use cases.
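
One way to make that concrete is Amdahl's law: on n cores at clock f, a workload whose parallelizable fraction is p gets throughput proportional to f / ((1 - p) + p/n). A small sketch of the comparison (the parallel fractions are assumed example values, not measurements of real games):

```cpp
// Amdahl's-law comparison of the CPU configurations discussed above.
// throughput() is relative to one core at 1 GHz; the parallel fractions
// in main() are assumed example values, not measurements of real games.
#include <cstdio>

double throughput(int n, double fGHz, double p) {
    // Amdahl's law, scaled by clock: the serial part (1-p) runs on one
    // core, the parallel part p is spread across n cores.
    return fGHz / ((1.0 - p) + p / n);
}

int main() {
    const double fractions[] = {0.5, 0.8, 0.95};
    for (double p : fractions) {
        std::printf("p=%.2f  8x1.6GHz=%5.2f  4x3.2GHz=%5.2f  4x2.4GHz=%5.2f\n",
                    p,
                    throughput(8, 1.6, p),
                    throughput(4, 3.2, p),
                    throughput(4, 2.4, p));
    }
}
```

At twice the clock, the four-core setup wins or ties at every parallel fraction, matching the first half of the quote; at 1.5x the clock (2.4 GHz) it wins for mostly-serial workloads but loses once p gets high, which is exactly where the trade gets "more nuanced".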
 