NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

Status
Not open for further replies.
You added the word "designed", you made up the claim that 14 are "dedicated", and claimed 4 are "reserved" for imaginary things. All of this is in your head; it's not in the source. You're making things up.

Bingo! I'm glad someone picked up on that. It's a perfect illustration of how people just make stuff up based on imperfect information.

Anything anyone says in a definitive manner about either console at this point has just as much validity as anything anyone else says based on the information that has leaked.

Thank you for pointing that out to people that didn't catch it on their own. I was wondering if perhaps I was being too subtle in making that example.

The reason the "minor benefit" claim is ambiguous is because we don't know the point of reference. Is it a minor benefit compared to not using them for graphics? Is it a minor benefit compared to them being "stock" CUs? Is it a minor benefit because they only represent ~20% of the total FLOPs when used? Or is it a minor benefit compared to using them to assist the CPU where it would be a 400% gain? And we still don't know what the author considers "minor".

Yes, and that's the whole point. No one in this thread has enough information to make any sort of definitive judgement about what each console is doing, much less how each compares to the other.

It's fun to speculate, just as long as people realize they are only speculating. When things start to take on the pseudo-mantle of declarative truths, however (X console will be at least Y% faster, or Y console is doomed to failure because B person thinks they know how fast each console is, etc.), that's when you just have to shake your head and wish you hadn't started reading this thread.

Unfortunately, there's still the occasional good nugget of speculation here, so I continue to dig through the dreck to look at the speculation of certain posters who aren't burdened by a preference for one brand or the other.

Regards,
SB
 
My take on all this is that the CPU in Durango will make up for the deficiency of compute.

I believe that 4 of the CUs in Orbis will be reserved for the PSEye(s) and/or physics work... BUT they can be used for rendering, though due to the overall balance of the system they would only provide a minor boost. It could also be that not the entirety of the 4 CUs is reserved for the PSEye; could be that only two and a half out of the four are, and so it would only be a minor boost if using that block of 4 for rendering.


The way I see it shaking down, though (let's forget about the 4 CUs for a moment):

In terms of rendering, we've got 14 vs 12 CUs... now, if the new Xbox has a beefier CPU, say with 0.4TF of compute vs 0.1TF, that difference will have to come from Orbis's 14 CUs... so now we've got:

New Xbox: 1.6TF coming from the CPU and GPU
PlayStation: 1.5TF coming from the CPU and GPU

It's a wash. (as many developers have been saying)
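For what it's worth, here's the rough arithmetic behind those totals. This is just a sketch assuming standard GCN compute units at the rumoured 800 MHz clock; the 0.4 TF Durango CPU figure is purely hypothetical, not a leaked spec.

```python
# Back-of-envelope peak-FLOPS tally. CU counts and the 800 MHz clock are
# from the leaks; the 0.4 TF Durango CPU is a hypothetical, not a leak.
GCN_FLOPS_PER_CU_PER_CLOCK = 64 * 2   # 64 ALUs per CU, 2 ops/cycle via FMA
GPU_CLOCK_GHZ = 0.8                   # rumoured 800 MHz for both consoles

def gpu_tflops(cu_count):
    """Peak single-precision TFLOPS for a GCN GPU with cu_count CUs."""
    return cu_count * GCN_FLOPS_PER_CU_PER_CLOCK * GPU_CLOCK_GHZ / 1000

durango_total = gpu_tflops(12) + 0.4   # 12 CUs + hypothetical beefy CPU
orbis_total   = gpu_tflops(14) + 0.1   # 14 CUs + stock Jaguar CPU

print(f"New Xbox: {durango_total:.2f} TF")   # ~1.63 TF
print(f"Orbis:    {orbis_total:.2f} TF")     # ~1.53 TF
```

So the 1.6 vs 1.5 comparison only holds under both assumptions at once: Orbis limited to 14 CUs for rendering, and Durango actually getting the beefier CPU.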

This would also tie into DaE and friends' comments.



So there are really two things yet undetermined in this whole shebang (assuming the leaks are still accurate)... the +4 CUs and their role, and the CPU in Durango... but all signs point to more flops yet to be uncovered, making this a more level playing field.



Btw, what kind of improvements *are* likely in Durango's CPU? Like, MOST likely? I'm curious as to what kind of improvements could double or quadruple the flops, and what one should realistically expect.


Oh and I doubt we'll be getting anything other than vague specs on the 20th.

I don't think there are improvements to the tune of 410 GFLOPS in the Durango CPU, tbh. That's 4x the power. Also, you're miscalculating some things: Orbis is slated at ~2.0 TFLOPS (1.84 TFLOPS GPU + 104 GFLOPS CPU). That doesn't add up to 1.5 TFLOPS; the GPU alone is more than that.

Oh, you said without the extra 4 CUs. I see.

I personally don't think the 4 CUs are going to be fixed to HAVE to do something, but Sony might have a specific task in mind for them.

Also, it seems to only be a wash if you forget about the 410 GFLOPS of extra CUs that Orbis has tacked on [and also add 400 GFLOPS to Durango]. That doesn't seem fair to me, really.
 
Also, it seems to only be a wash if you forget about the 410 GFLOPS of extra CUs that Orbis has tacked on [and also add 400 GFLOPS to Durango]. That doesn't seem fair to me, really.

I'm just giving Durango the benefit of the doubt as a few have said it's a wash.

The 410 GFLOPS could be dedicated to the PSEye, and Durango could have a beefier CPU. What's most likely? Probably somewhere in between: I'm guessing some of the CUs will be working with the new PSEye, and I'm guessing Durango does have a beefier CPU.

Hopefully we get some new info on the 20th.
 
I'm just giving Durango the benefit of the doubt as a few have said it's a wash.

The 410 GFLOPS could be dedicated to the PSEye, and Durango could have a beefier CPU. What's most likely? Probably somewhere in between: I'm guessing some of the CUs will be working with the new PSEye, and I'm guessing Durango does have a beefier CPU.

Hopefully we get some new info on the 20th.

A 4x beefier CPU is not likely, IMO.

And the 410 GFLOPs won't be dedicated to anything; they might commonly be used for something, but dedicated? No way. I do not think Sony is going to force anyone to use the PSEye in every single game. We don't even know if the PSEye requires further computation or if it all happens on the device.

A much fairer comparison, imo, would be:

1.3 TFLOPS for the new Xbox (GPU + 104 GFLOPS CPU) [if rumours of a much beefier CPU are false]
1.6 TFLOPS for the new Xbox (GPU + 400 GFLOPS CPU) [if rumours of a much beefier CPU are true]
1.9 TFLOPS for Orbis (GPU + 104 GFLOPS CPU + 410 GFLOPS CUs)
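Those three totals can be reproduced from first principles; a quick sketch, assuming standard GCN CUs at the rumoured 800 MHz (the 400 GFLOPS "beefier CPU" remains only a rumour):

```python
# The three scenarios above, from first principles. CU counts and clocks
# come from the leaks; the 400 GFLOPS "beefier CPU" is only a rumour.
CU_GFLOPS = 64 * 2 * 0.8      # 102.4 GFLOPS per GCN CU at 800 MHz
JAGUAR_CPU_GFLOPS = 102.4     # 8 Jaguar cores x 12.8 GFLOPS each

xbox_stock  = 12 * CU_GFLOPS + JAGUAR_CPU_GFLOPS   # ~1331 GFLOPS -> ~1.3 TF
xbox_rumour = 12 * CU_GFLOPS + 400.0               # ~1629 GFLOPS -> ~1.6 TF
orbis_total = 14 * CU_GFLOPS + JAGUAR_CPU_GFLOPS + 4 * CU_GFLOPS
# orbis_total: ~1946 GFLOPS -> ~1.9 TF (all 18 CUs plus the CPU)
```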
 
I do think the author clearly intends for the reader to marry "minor benefit" and "balanced for 14 CUs", though. To me this can only mean one of 3 things.

1. The 4 are physically less suited to rendering tasks.
2. They are physically separated or connected in such a way that makes them less suited to rendering tasks.
3. As I stated above, the rendering pipeline is optimized for (or saturated by) 14 CUs; any additional CUs thrown at rendering are constrained by this saturation.

I honestly don't see a way to interpret it as written where this chip can effectively use 1.8 TF or >14 CUs for rendering in a meaningful way.

I actually agree that there may be a physical limitation. My theory from the beginning has been that the Orbis chip is effectively an 8-core Jaguar plus 4-CU APU design with HSA enhancements, married to a discrete 14-CU GPU on the same die. The 4 CUs in that case would be better suited to helping the CPU than intruding on the main rasterization process, but that is not to say they can add nothing to the graphics. If you have high-quality post-processing for DoF, motion blur, or anti-aliasing, those can easily be moved away from the main GPU to the "Compute Module". Likewise, the "Compute Module" can be put to work doing cosmetic physics calculations (a la PhysX) for particles, dynamic cloth, and liquid or smoke simulations. In both cases we're talking about 400 GFlops' worth of work that the Durango GPU would itself be responsible for, so we're back where we started: comparing 1.2 TFlops to 1.8 TFlops.
 
I don't think there are improvements to the tune of 410 GFLOPS in the Durango CPU, tbh. That's 4x the power. Also, you're miscalculating some things: Orbis is slated at ~2.0 TFLOPS (1.84 TFLOPS GPU + 104 GFLOPS CPU). That doesn't add up to 1.5 TFLOPS; the GPU alone is more than that.

Oh, you said without the extra 4 CUs. I see.

I personally don't think the 4 CUs are going to be fixed to HAVE to do something, but Sony might have a specific task in mind for them.

Also, it seems to only be a wash if you forget about the 410 GFLOPS of extra CUs that Orbis has tacked on [and also add 400 GFLOPS to Durango]. That doesn't seem fair to me, really.

Where did the Durango with a beefier CPU come from?

Last time I checked VGleaks,

Durango
- x64 Architecture
- 8 CPU cores running at 1.6 gigahertz (GHz)
- each CPU thread has its own 32 KB L1 instruction cache and 32 KB L1 data cache
- each module of four CPU cores has a 2 MB L2 cache resulting in a total of 4 MB of L2 cache
- each core has one fully independent hardware thread with no shared execution resources
- each hardware thread can issue two instructions per clock

Orbis

  • Orbis contains eight Jaguar cores at 1.6 GHz, arranged as two “clusters”
  • Each cluster contains 4 cores and a shared 2MB L2 cache
  • 256-bit SIMD operations, 128-bit SIMD ALU
  • SSE up to SSE4, as well as Advanced Vector Extensions (AVX)
  • One hardware thread per core
  • Decodes, executes and retires at up to two instructions/cycle
  • Out-of-order execution
  • Per-core dedicated L1-I and L1-D cache (32KB each)
  • Two pipes per core yield 12.8 GFlops performance
  • 102.4 GFlops for the system




I don't see anything that yells "I'M BETTER" from either of them.
From the specs they look like the same CPU, only the Orbis one is described in a bit more detail. Could be wrong; please point it out if I am.
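The 12.8 GFlops-per-core figure in that list also checks out arithmetically. A quick sanity check, assuming the usual Jaguar FPU layout (one FADD and one FMUL pipe, each 128 bits wide, no FMA):

```python
# Sanity check on the leaked Jaguar figures (1.6 GHz, two 128-bit FP pipes).
# Assumes the usual Jaguar FPU layout: one FADD and one FMUL pipe, each
# handling 4 single-precision ops per cycle, with no FMA.
CLOCK_GHZ = 1.6
SP_OPS_PER_CYCLE = 2 * 4                          # two pipes x four 32-bit lanes

per_core_gflops = CLOCK_GHZ * SP_OPS_PER_CYCLE    # 12.8, matching the leak
system_gflops = 8 * per_core_gflops               # 102.4 GFlops for 8 cores
```

So both leaks describe what looks like the exact same stock 8-core Jaguar at 1.6 GHz.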
 
There's no actual rumor to that effect, only the idle speculation of people hoping the difference won't be as large as it appears.
 
Where did the Durango with a beefier CPU come from?

Last time I checked VGleaks,

Durango
- x64 Architecture
- 8 CPU cores running at 1.6 gigahertz (GHz)
- each CPU thread has its own 32 KB L1 instruction cache and 32 KB L1 data cache
- each module of four CPU cores has a 2 MB L2 cache resulting in a total of 4 MB of L2 cache
- each core has one fully independent hardware thread with no shared execution resources
- each hardware thread can issue two instructions per clock

Orbis

  • Orbis contains eight Jaguar cores at 1.6 GHz, arranged as two “clusters”
  • Each cluster contains 4 cores and a shared 2MB L2 cache
  • 256-bit SIMD operations, 128-bit SIMD ALU
  • SSE up to SSE4, as well as Advanced Vector Extensions (AVX)
  • One hardware thread per core
  • Decodes, executes and retires at up to two instructions/cycle
  • Out-of-order execution
  • Per-core dedicated L1-I and L1-D cache (32KB each)
  • Two pipes per core yield 12.8 GFlops performance
  • 102.4 GFlops for the system




I don't see anything that yells "I'M BETTER" from either of them.
From the specs they look like the same CPU, only the Orbis one is described in a bit more detail. Could be wrong; please point it out if I am.

I haven't heard a rumor to that effect either. I was merely entertaining his idea of a massively more powerful CPU to cover all the scenarios.
 
I haven't heard a rumor to that effect either. I was merely entertaining his idea of a massively more powerful CPU to cover all the scenarios.

Yeah, the question was directed at anybody entertaining the thought, not specifically at you. Your quote was just closest to my mouse and related to the issue. :smile:
A 4x more powerful CPU sounds improbable from the get-go.



The notion of "everything is balanced to 14 CUs; anything extra provides little to no benefit" also sounds ludicrous to me.
If that reasoning held true, every PC system would have an "optimal GPU card", and upgrading beyond that limit would provide little to no benefit across ALL games.
And of course we know that's not the case.
Any reasonably up-to-date system sees better IQ/fps from a GPU upgrade.
 
The problem, in my opinion, is just the wording.

Define "minor". Compared to 14, 4 is a "minor" fraction at only ~29%, just barely more than a quarter. Could that explain the "minor boost in rendering" phrasing?

Also define "hardware balanced", because even as a CS graduate (not the best low-level hardware guy, mind you), I can't say what that's supposed to mean in this context.
 
The problem, in my opinion, is just the wording.

Define "minor". Compared to 14, 4 is a "minor" fraction at only ~29%, just barely more than a quarter. Could that explain the "minor boost in rendering" phrasing?

Also define "hardware balanced", because even as a CS graduate (not the best low-level hardware guy, mind you), I can't say what that's supposed to mean in this context.

What's the chance that the original doc all the Orbis stuff is from isn't even in English?
 
Well, as I speculated in the Orbis thread, what if the 4 CUs are actually _reserved_ as opposed to just "different"? It might be about the right amount of processing power to turn stereo camera images into a 3D-mapped scene in real time, and probably a lot cheaper in BOM terms than a 3D camera like the Kinect, especially over time. So 12 versus 14 might actually be a reasonable comparison.

Dedicating 20% of the GPU resources to a peripheral feature doesn't make any sense at all. What if a game doesn't use the camera feature? What do I need a camera for in the new Dark Souls or Final Fantasy game? It would be a complete waste of silicon.

If I were Sony, I would want the 4 CUs to be as flexible as possible. The best abstraction is probably to think of them as the new SPEs of the PlayStation 4. I would want to have that 409.6 GFLOPS of computing power at my disposal whenever I want, not dedicated to one single task alone. Most core gamers don't want motion control anyway, so you'd most likely use it for more and better effects in core games. On the other hand, casual games like Wonderbook don't require super-duper graphics effects; in those games you can use it for some fancy augmented-reality stuff. Dedicating this computing power (which is roughly one third of the Durango GPU, after all!) to a single feature would be a very unfortunate decision in my eyes. I can't see that happening.
 
What's the chance that the original doc all the Orbis stuff is from isn't even in English?

I'd say there's a good chance it was written by a native English speaker, as AMD is the sole provider of the hardware this time around. The software reference, on the other hand, might not be so easy, as Sony does that in-house (afaik), which means Japan.

But who knows... there's too much "yay" and "nay" for/against both console designs at the moment for my taste.
 
I'd say there's a good chance it was written by a native English speaker, as AMD is the sole provider of the hardware this time around. The software reference, on the other hand, might not be so easy, as Sony does that in-house (afaik), which means Japan.

But who knows... there's too much "yay" and "nay" for/against both console designs at the moment for my taste.

I'm sure the specs are accurate, but some of it smacks a bit of bad translation to me, though that could just be someone rewriting it without a lot of technical knowledge.
 
Dedicating 20% of the GPU resources to a peripheral feature doesn't make any sense at all. What if a game doesn't use the camera feature? What do I need a camera for in the new Dark Souls or Final Fantasy game? It would be a complete waste of silicon.

If I were Sony, I would want the 4 CUs to be as flexible as possible. The best abstraction is probably to think of them as the new SPEs of the PlayStation 4. I would want to have that 409.6 GFLOPS of computing power at my disposal whenever I want, not dedicated to one single task alone. Most core gamers don't want motion control anyway, so you'd most likely use it for more and better effects in core games. On the other hand, casual games like Wonderbook don't require super-duper graphics effects; in those games you can use it for some fancy augmented-reality stuff. Dedicating this computing power (which is roughly one third of the Durango GPU, after all!) to a single feature would be a very unfortunate decision in my eyes. I can't see that happening.
But if MS ships with Kinect, they will have done exactly that (dedicated significant BOM resources to a "peripheral feature" that could have been used for a beefier GPU). If the rumors of 2 cores (25%) and 3GB (37.5%) being reserved for "system use" are correct, then, again, they have dedicated significant resources to non-core-game features. Why should Sony be any different?
 
But if MS ships with Kinect, they will have done exactly that (dedicated significant BOM resources to a "peripheral feature" that could have been used for a beefier GPU). If the rumors of 2 cores (25%) and 3GB (37.5%) being reserved for "system use" are correct, then, again, they have dedicated significant resources to non-core-game features. Why should Sony be any different?

I do not see any reason why Sony could not use a dedicated processor inside the PSEye.

Also, I hope MS does not ship with resources permanently dedicated to Kinect; that probably won't go down too well with core gamers. And even if the rumours are correct, Sony would be dedicating over 20x the resources to an external device that MS would be. I do not think they would let that happen.

It's probable that the PSEye might use the extra 4 CUs, but have them required just to push a gimmick controller? I do not think Sony would do that.
 
But if MS ships with Kinect, they will have done exactly that (dedicated significant BOM resources to a "peripheral feature" that could have been used for a beefier GPU). If the rumors of 2 cores (25%) and 3GB (37.5%) being reserved for "system use" are correct, then, again, they have dedicated significant resources to non-core-game features. Why should Sony be any different?

Good point. The remaining question would be: are those resources fixed and unavailable for other tasks? What if a game does not use Kinect (if that's possible and not mandated by MS)... are the resources still blocked, or are devs free to use them for whatever they want to enhance the game in other ways?

To be honest, I would be surprised if the resources were fixed and dedicated... that doesn't sound future-proof or efficient.
 
But if MS ships with Kinect, they will have done exactly that (dedicated significant BOM resources to a "peripheral feature" that could have been used for a beefier GPU). If the rumors of 2 cores (25%) and 3GB (37.5%) being reserved for "system use" are correct, then, again, they have dedicated significant resources to non-core-game features. Why should Sony be any different?

Does Microsoft want to have Kinect always available, even in core games, and does Sony want this too? And does Sony have something like Kinect at all?

If Microsoft aims to have Kinect ready for any game, for example a finger gun for Halo, then it definitely makes sense to have dedicated resources for that feature. If Sony wants to let developers decide whether or not they will use the camera for augmented reality (for example, Wonderbook), then they don't need dedicated resources. Having a camera packed with every PlayStation makes sense for Move and head-/face-/eye-tracking, but I can't see those features using 409.6 GFLOPS of computing power.
 
I do not see any reason why Sony could not use a dedicated processor inside the PSEye.

Also, I hope MS does not ship with resources permanently dedicated to Kinect; that probably won't go down too well with core gamers. And even if the rumours are correct, Sony would be dedicating over 20x the resources to an external device that MS would be. I do not think they would let that happen.

It's probable that the PSEye might use the extra 4 CUs, but have them required just to push a gimmick controller? I do not think Sony would do that.

If a game uses the PSEye for body detection, it may need all 4 CUs. However, if a game only needs PSMove support, it may need 1~2 CUs. For games which don't need the PSEye at all, all 4 additional CUs can be used for graphical purposes.
 