NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

My problem with the CUs remains the Durango side of things:
- On the 360/Kinect, it seems the initial image analysis was done on the GPU.
- On the PS3/Move, the image analysis was similarly done on one of the Cell's SPEs.
- On the PS4/Move, that analysis is almost certainly going to happen on the '+4' CUs.

Meanwhile, the Durango seems to be doing it all on 2 Jaguar cores, along with "whatever it does" with the HDMI input, background downloads and in-game chat? And much of that is "real-time".

Maybe the Jaguar cores are more powerful than I'm thinking, but that doesn't make any sense to me.
 
I was speaking more from a raw rendering standpoint. I think people have recognized that the 4 reserved CUs have likely been made less than optimal for pure rendering, meaning the 18-to-12 CU advantage isn't quite as brutish. Durango could still certainly find itself in trouble when you have a compute-heavy game without dedicated hardware, as you say. It seems to me that all 12 CUs will likely be homogeneous and can be addressed equally for rendering or compute, much like Xenos was a unified architecture compared to RSX. I just hope the hardware disclosures at the console reveals are enough to give us all of these details.

I thought the prevailing opinion on here (which jibes with my understanding, as those 4 CUs all retain their texturing HW and other rendering-focused gubbins) is that the aforementioned "reservation" of these 4 CUs is purely a software thing, in that they are also managed by a separate scheduler which can be developer-driven (as opposed to HW-driven).

I don't see how that would make them in any way less efficient when using them for rendering.

I think people have taken Eurogamer's info about them being dedicated and separate from the rendering pipeline as gospel without realising that Eurogamer's source (or they themselves) probably muddled it up. Eurogamer even made it sound like the compute unit was a discrete unit, entirely separate from the GPU (and perhaps CPU), which is of course inconsistent with the way the rumours have all reported the GPU as an 18-CU part capable of 1.84 TFLOPS.

I still see the difference as 50% between PS4 and Durango. And if the CPUs turn out to be exactly the same, with Durango only having the 102 GFLOPS of Jaguar for compute, then developers will more than likely utilise the PS4's 4 compute CUs for rendering anyway, because otherwise they wouldn't be able to get Durango to keep up in multiplatform games. With only 12 GPU CUs vs 18 (14+4), Durango's GPU will be too busy with rendering, trying to approach the PS4's GPU performance, to be useful for GPGPU, I would imagine.
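
For reference, here is a rough sketch of where those headline numbers come from, assuming the rumored ~800 MHz GPU clock and ~1.6 GHz Jaguar clock (none of this is confirmed hardware):

```python
# Back-of-the-envelope peak-FLOPS arithmetic (rumored clocks, nothing confirmed).

def gcn_gflops(cus, clock_ghz=0.8):
    # each GCN CU: 64 ALUs x 2 FLOPs per clock (fused multiply-add)
    return cus * 64 * 2 * clock_ghz

def jaguar_gflops(cores, clock_ghz=1.6):
    # each Jaguar core: 128-bit FPU, ~8 single-precision FLOPs per clock
    return cores * 8 * clock_ghz

print(gcn_gflops(18))                    # ~1843 GFLOPS -> the rumored 1.84 TFLOPS Orbis figure
print(gcn_gflops(12))                    # ~1229 GFLOPS -> the rumored Durango figure (~50% less)
print(gcn_gflops(14))                    # ~1434 GFLOPS -> the "1.4 TFLOPS" figure if 4 CUs are set aside
print(gcn_gflops(4) + jaguar_gflops(8))  # ~512 GFLOPS  -> the rumored "compute side" (4 CUs + 8 Jaguar cores)
```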
 
I don't think having a special scheduler is necessary. GCN already provides compute-specific pipelines; there are two at a minimum.
The architecture has promised things like software-driven task queues and low-overhead compute, and it's probably had the hardware for it, even if the PC hasn't exposed it so far.
 
I thought the prevailing opinion on here (which jibes with my understanding, as those 4 CUs all retain their texturing HW and other rendering-focused gubbins) is that the aforementioned "reservation" of these 4 CUs is purely a software thing, in that they are also managed by a separate scheduler which can be developer-driven (as opposed to HW-driven).

I don't see how that would make them in any way less efficient when using them for rendering.

Then what is the point in it being 4? If it's purely a software abstraction, why would Sony tell developers they can only use 4 for compute, and not as many as they please? There has to be a hardware distinction or difference somewhere, otherwise I don't see the point of the restriction or reservation. Unless Sony is restricting them for the OS like CPU cores, it should all be up to the developer.
 
I'm a 360 fan, but there is no denying from these specs that the PS4 will be more powerful. The only thing I can hope for is the PS4 being 20% more powerful and not 50%.

I guess it is "risky" for MS to go too low; they could "lose" some third-party support.
 
I don't think having a special scheduler is necessary. GCN already provides compute-specific pipelines; there are two at a minimum.
The architecture has promised things like software-driven task queues and low-overhead compute, and it's probably had the hardware for it, even if the PC hasn't exposed it so far.
Exactly. The most probable explanation is that a feature of GCN (the ACEs, which are also present in all SI discrete GPUs) gets exposed to the devs, and that got blown out of proportion in the current rumor frenzy. That 14+4 partition (or any other) is most likely just an example Sony or some developer mentioned; it is not limited to that and can be adjusted to the needs of the task.

And it doesn't even have to be partitioned statically; that is probably only necessary to get very predictable runtimes. It would also be possible to let the ACEs and the Graphics Command Processor figure out a dynamic distribution of graphics and compute tasks all by themselves. One could just assign higher priorities (as supported by GCN) to the tasks which are needed faster.
It is quite probable Durango can support the same functionality (as it is also GCN based).
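
As a toy illustration of the dynamic-priority idea above (purely conceptual, nothing to do with how the real command processors or ACEs are actually programmed): instead of a fixed 14+4 split, every piece of work just carries a priority and whatever is most urgent gets dispatched next, graphics or compute alike.

```python
# Toy priority-driven dispatch instead of a static CU partition (illustrative only).
import heapq

work = []   # (priority, sequence, name); lower priority value = dispatched first
seq = 0

def submit(name, priority):
    global seq
    heapq.heappush(work, (priority, seq, name))
    seq += 1

submit("audio raycast batch", 0)    # latency-critical compute job
submit("cloth sim update", 1)       # compute, but less urgent
submit("shadow map pass", 2)        # ordinary graphics work
submit("opaque geometry pass", 2)

while work:
    _, _, name = heapq.heappop(work)
    print("dispatching:", name)     # any free CU could pick this up
```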
 
Then what is the point in it being 4? If it's purely a software abstraction, why would Sony tell developers they can only use 4 for compute, and not as many as they please? There has to be a hardware distinction or difference somewhere, otherwise I don't see the point of the restriction or reservation. Unless Sony is restricting them for the OS like CPU cores, it should all be up to the developer.

They probably can. There's nothing about GCN that forbids compute in the other 14.
I wonder if the other 4 are related to certain system functions, or that there are certain latency and performance guarantees to the reserved four that cannot be promised for the rest.
 
They probably can. There's nothing about GCN that forbids compute in the other 14.
I wonder if the other 4 are related to certain system functions, or that there are certain latency and performance guarantees to the reserved four that cannot be promised for the rest.

This is still my bet. I just don't see doing a minor redesign on 4 CUs being worth the effort.
 
It circles back to Sweetvar's info too well to be ignored imo.

5 SIMD
320 ALU

For whatever reason they wanted 4 CUs to be beefed up, attached or separate. And it being modified makes it less likely to be within the same GPU.
 
It circles back to Sweetvar's info too well to be ignored imo.

5 SIMD
320 ALU

For whatever reason they wanted 4 CUs to be beefed up, attached or separate. And it being modified makes it less likely to be within the same GPU.

Yep.

The peak triangle/vertex rate per second for the GPU is more in line with that of a GPU with 14 CUs, not 18.

8x Jaguars + 4 CUs for compute, 14 CUs for rendering.

Jaguars ~ 8x PPE
4 CUs ~ 4x SPEs
4 CUs ~ 10x RSX
7-8x the usable RAM amount
~8x the bandwidth

It should be easier to do dynamic recompilation on Orbis than on Durango.
 
It circles back to Sweetvar's info too well to be ignored imo.

5 SIMD
320 ALU

For whatever reason they wanted 4 CUs to be beefed up, attached or separate. And it being modified makes it less likely to be within the same GPU.
Only if you ignore the CU count, peak FLOPs figure, and that there is only one GPU mentioned in the rumor you're trying to link back to Sweetvar's information.

Yep.

The peak triangle/vertex rate per second for the GPU is more in line with that of a GPU with 14 CUs, not 18.
The triangle rate is independent of the CU count, and it's the same as the triangle rate for a 16 CU desktop GPU.
edit: more like 15 or so, although this unit isn't directly tied to the CUs
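
To make the independence from CU count concrete, a quick sketch assuming the rumored ~800 MHz clock and a typical two-primitives-per-clock GCN front end (both assumptions, not confirmed specs):

```python
# Triangle throughput comes from the fixed-function front end, not the shader array.
gpu_clock_hz  = 800e6   # rumored ~800 MHz
prims_per_clk = 2       # typical of mid-range GCN parts with two setup/raster pipes

print(prims_per_clk * gpu_clock_hz / 1e9, "Gtri/s")  # 1.6 -- identical whether there are 12, 14 or 18 CUs
```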
 
We still have to wonder who that is.

Probably Microsoft, based on some vague stuff I picked up along the way (and that might be wrong!). Intel and NVIDIA will probably join this one day after never. If it isn't Microsoft, or some other party that has shown some skill at actually developing a strong software eco-system, chances of stillborn-ness grow quite a bit. As it is it's a collection of entities that do hardware (some of it quite good) and...that's about it. IBM would be quite uninteresting too, in this context, since their focus lies elsewhere these days.

The fact that most of these people are also involved in the design and evolution of OpenCL, which is an utter mess (being rather polite here), does not inspire much confidence. Granted, CL's woes come primarily from two entities that are definitely not there, out of which one will be there one day after never as already mentioned (lucky guesses invited).
 
Probably Microsoft, based on some vague stuff I picked up along the way (and that might be wrong!). Intel and NVIDIA will probably join this one day after never. If it isn't Microsoft, or some other party that has shown some skill at actually developing a strong software eco-system, chances of stillborn-ness grow quite a bit. As it is it's a collection of entities that do hardware (some of it quite good) and...that's about it. IBM would be quite uninteresting too, in this context, since their focus lies elsewhere these days.

The fact that most of these people are also involved in the design and evolution of OpenCL, which is an utter mess (being rather polite here), does not inspire much confidence. Granted, CL's woes come primarily from two entities that are definitely not there, out of which one will be there one day after never as already mentioned (lucky guesses invited).

[attached image: blackapple.gif]
?
 
Probably Microsoft, based on some vague stuff I picked up along the way (and that might be wrong!). Intel and NVIDIA will probably join this one day after never. If it isn't Microsoft, or some other party that has shown some skill at actually developing a strong software eco-system, chances of stillborn-ness grow quite a bit. As it is it's a collection of entities that do hardware (some of it quite good) and...that's about it. IBM would be quite uninteresting too, in this context, since their focus lies elsewhere these days.

The fact that most of these people are also involved in the design and evolution of OpenCL, which is an utter mess (being rather polite here), does not inspire much confidence. Granted, CL's woes come primarily from two entities that are definitely not there, out of which one will be there one day after never as already mentioned (lucky guesses invited).

Thanks, this makes a lot of sense. I was ruling out Microsoft based on the "hardware vendors only" rule, but Microsoft is in fact a hardware vendor as well. The blank hexagon would be filled in when Durango is announced. And actually, the next Surface Pro is an HSA tablet.

Microsoft and AMD will be in a good position to boast about these three letters and I can see the tremendous fountains of hype and clamor from here already.

As for the lucky guess, I'd bite, but I don't want to risk finding out I'm eating a worm.
 
Isn't it funny that someone always comes up with new fairy dust for Durango that will close the gap to Orbis?

When Orbis was rumored to have a 1.8 TFLOPS GPU, the "leakers" said that Durango would have a secret sauce for the GPU (some kind of super FLOPS) since it had a 7970 in the first devkit. When Orbis was rumored to have 4 GiB of super-fast GDDR5 RAM, the "leakers" said that Durango would have some wizard jizz that would close the bandwidth gap. Now Orbis is rumored to have 512 GFLOPS on the computing side (8 Jaguars + 4 GCN CUs) and "only" 1.4 TFLOPS on the GPU side, and all of a sudden Durango will have some sort of Super-Jaguar with ultra-beefy FPUs that close the gap again. What happened to the secret sauce for the Durango GPU? No longer required since Orbis was downgraded to 1.4 TFLOPS on the GPU?

All these Durango rumors sound contradictory as hell. I'm not believing any of it until someone comes out with concrete information.

Honestly? I think half the time they are bs'ing us.

There's no reason to be sure they even know any more of the Orbis specs than we have right now. Automatically saying, every time, that Durango has this or that and will be at parity regardless of what Orbis currently is makes no sense, since we don't even know for sure what Orbis is actually about (given the discussion about 14 + 4, we're certain of even less than before).

This aegis guy is the same one who said he would be surprised if the Xbox had DDR3, back when we first heard about GDDR5 from the Orbis leaks on NeoGAF; what is that about? Now all of a sudden the tune from the same guy is that DDR3 is perfectly fine. Either way he's put himself in a position to be 'right' at least one time, because he's committed to DDR3 being both 'good' and 'bad.'

It sounds more like crowd control than anything else. Any perceived benefit has been thrown aside faster than you can say 'Nancy.' Crowd control and controlling perceptions with bs, imo.

Who are these people anyway? The people I know who are actually working on Orbis don't even know that much (they only see the devkit from diagnostic screens), and on top of that they don't come onto forums and wax lyrical on the subject of comparing the two systems.
 
Isn't it funny that someone always comes up with new fairy dust for Durango that will close the gap to Orbis?

When Orbis was rumored to have a 1.8 TFLOPS GPU, the "leakers" said that Durango would have a secret sauce for the GPU (some kind of super FLOPS) since it had a 7970 in the first devkit. When Orbis was rumored to have 4 GiB of super-fast GDDR5 RAM, the "leakers" said that Durango would have some wizard jizz that would close the bandwidth gap. Now Orbis is rumored to have 512 GFLOPS on the computing side (8 Jaguars + 4 GCN CUs) and "only" 1.4 TFLOPS on the GPU side, and all of a sudden Durango will have some sort of Super-Jaguar with ultra-beefy FPUs that close the gap again. What happened to the secret sauce for the Durango GPU? No longer required since Orbis was downgraded to 1.4 TFLOPS on the GPU?

All these Durango rumors sound contradictory as hell. I'm not believing any of it until someone comes out with concrete information.

How is that more annoying than the constant rush to judgment that "Orbis crushes Durango"? You see WAYYYY more of that on forums than "special sauce will save Durango" talk. I'm sorry, that's just factual; we can count each type of post on NeoGAF if you'd like.

It seems we are discussing rumors, so if there are rumors about something on Durango or Orbis they'll be discussed. There's been a rumor about improvements to the Jag cores on Durango for a while. I suspect, due to people still talking about them, there may well be something to them (because it's often the case that true information is supported behind the scenes, and thus continues to be talked about when other rumors die out). Do I have ANY idea if any of these rumors are true, though? Not really.

Durango is just the weirder, less straightforward architecture. That also lends itself to special-sauce conjecture more than Orbis does. ERP's posts are imo the main arbiter of the likelihood of Durango special sauce for the GPU (basically his posts about the ESRAM filling the ALUs better), and I would say it doesn't sound spectacularly promising there, but possible.
 
How is that more annoying than the constant rush to judgment that "Orbis crushes Durango"? You see WAYYYY more of that on forums than "special sauce will save Durango" talk. I'm sorry, that's just factual; we can count each type of post on NeoGAF if you'd like.

It seems we are discussing rumors, so if there are rumors about something on Durango or Orbis they'll be discussed. There's been a rumor about improvements to the Jag cores on Durango for a while. I suspect, due to people still talking about them, there may well be something to them (because it's often the case that true information is supported behind the scenes, and thus continues to be talked about when other rumors die out). Do I have ANY idea if any of these rumors are true, though? Not really.

Durango is just the weirder, less straightforward architecture. That also lends itself to special-sauce conjecture more than Orbis does. ERP's posts are imo the main arbiter of the likelihood of Durango special sauce for the GPU (basically his posts about the ESRAM filling the ALUs better), and I would say it doesn't sound spectacularly promising there, but possible.

But weirder/less straightforward like the PS2/PS3, or even worse (hard to develop)?
 
But weirder/less straightforward like the PS2/PS3, or even worse (hard to develop)?

No idea, as I'm the farthest thing from a programmer or technically knowledgeable.

I don't think anything will match Cell for pure difficulty.

ERP's posts imo imply that using the EDRAM in a certain performance-enhancing manner, if it's even possible, might be difficult for programmers.
 
I will say, the recent news of 4 CUs possibly somehow not being useful for graphics on Orbis is imo quite the game changer. I don't see how it can be looked at otherwise.

I still liked Durango's chances decently as it was: 5 GB RAM and 1.2 TF vs 3.5 GB and 1.8 TF. Change it to 1.4 and, to me, you're looking at basically equal GPUs (especially on third-party multiplats, aka 95% of games), but one has a lot more RAM, plus Kinect, the Xbox's arguably better brand image in many countries, and a likely vastly more featured OS (if the supposed dedicated resources are anything to go by), and I think you could likely chalk next gen up for Microsoft already. Certainly Nintendo won't be there.

4 CUs for compute/physics is stupid and smacks of that "Sony being Sony and screwing it up" move I was waiting for (I thought, for example, they might downgrade back to 2 GB RAM, but I kind of expected them to do something terrible).
 
I will say, the recent news of 4 CUs possibly somehow not being useful for graphics on Orbis is imo quite the game changer. I don't see how it can be looked at otherwise.

I still liked Durango's chances decently as it was: 5 GB RAM and 1.2 TF vs 3.5 GB and 1.8 TF. Change it to 1.4 and, to me, you're looking at basically equal GPUs (especially on third-party multiplats, aka 95% of games), but one has a lot more RAM, plus Kinect, the Xbox's arguably better brand image in many countries, and a likely vastly more featured OS (if the supposed dedicated resources are anything to go by), and I think you could likely chalk next gen up for Microsoft already. Certainly Nintendo won't be there.

4 CUs for compute/physics is stupid and smacks of that "Sony being Sony and screwing it up" move I was waiting for (I thought, for example, they might downgrade back to 2 GB RAM, but I kind of expected them to do something terrible).

Hypothetically:

Orbis has stock Jaguars, 4 CUs for compute, 3-3.5 GB of FAST GDDR5 RAM, and 14 CUs that might have the stalls common to PC GPUs.

Durango has a heftier CPU, a powerful audio/general processor, 5-5.5 GB of DDR3 RAM, 32 MB of low-latency ESRAM, and a 1.2 TFLOPS GPU that's not troubled by stalls (due to the ESRAM + 4x DMEs).

Suddenly it looks a lot closer.

Until the blanks are filled in, it's way too early to award any system the performance crown.
 