NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

It was quoted by one of the guys over at NeoGAF who seems to know a lot about Orbis. It wasn't a statement of fact, though; I think he said he "wouldn't be surprised".

Isn't this what always happens: someone who 'seems' to know something says something that makes one console look better than the other, and fanboys jump on it like it's gospel?

I'm sure Sony fanboys would do the same if the situation were reversed, but this is getting beyond a joke.
 
Are we sure that Durango is Jaguar-based? People have mentioned the rumors don't explicitly state Jaguar. Perhaps it's Richland/Vishera downclocked with some special sauce? For instance, that AndyH guy said each core was modified to have its own FPU, but it seems Jaguar already has an FPU per core (correct?); they're just not so stellar, which led some to say they're upgraded instead. However, what if it's based on Vishera, like an 8350? Then the comment makes more sense. Say an 8350 modified for an FPU per core, downclocked to 1.6GHz? At 1.6GHz even Vishera shouldn't be the heat/watt monster we know it as, and given the smaller GPU CU count, they might have room.
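For a sense of scale, here's the back-of-envelope peak-FLOPS math for that scenario. Everything here is my own assumption, not a leak: that Jaguar's per-core 128-bit FPU manages 8 single-precision FLOPs/cycle (separate mul + add pipes), and that a Piledriver/Vishera module's shared FlexFPU manages 16 via its two 128-bit FMACs.

# Back-of-envelope peak single-precision GFLOPS at the rumored 1.6GHz.
# All per-cycle throughput figures are my assumptions, not leaked specs.
GHZ = 1.6

jaguar = 8 * 8 * GHZ     # 8 cores x 8 SP FLOPs/cycle (128-bit mul + add)
vishera = 4 * 16 * GHZ   # 4 modules x 16 SP FLOPs/cycle (2x128-bit FMAC)

print(jaguar, vishera)   # 102.4 vs 102.4 GFLOPS

If those assumptions hold, the paper FLOPS come out identical at the same clock, so the choice would hinge on IPC, die area and power rather than peak math.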

This would only make sense if Durango was a 32nm Vishera + VLIW4 APU, which would be 2nd-gen HSA and therefore inferior to a 28nm 3rd-gen HSA. Not to mention that there is no reason at all to use Vishera if you're going to have a core frequency of 1.6GHz.

As to the 14+4 issue surrounding Orbis, what if it's a result of legacy design? That is to say, what if Orbis was originally an APU + GPU? It would help explain the GPU context-switching patent by Sony. Additionally, Arthur Gies (the Aegies poster from GAF) mentioned that those old Durango rumors about MS using a 6670 were actually for Orbis, and that people had confused them. Also, IIRC, around that time there was talk of a two-GPU solution for Durango, but perhaps this too was for Orbis. If true, these seemingly suggest an APU+GPU solution.

However, somehow, someway, this changed (maybe they caught wind of what MS had planned?), and Sony went with an SoC instead. Now, maybe, prior to this, Sony planned to use the CUs in the APU (let's conveniently count 4) to provide processing support for the CPU in a more integrated, HSA-like setup. So some modifications were made, say to the ACEs, or to add an extra SIMD unit, or to increase texture caches, or whatever it is the latest rumors are suggesting, specializing them in some way for GPGPU work with the expectation they would be underutilized for rendering given the separation.

Then the aforementioned event occurs and this design gets updated into an SoC, but because developers have already been working with this setup, Sony isn't able to rework the designed integration of the 4 CUs too radically. As a result, we have a puzzling 14+4 setup with some vestigial documentation suggesting these 4 CUs will provide minimal rendering support, even though that may no longer hold now that they're just more specialized CUs in an 18-CU SoC GPU. This might explain why Eurogamer may perceive them as separate: their documents are out of date.

I think that the Sony GPU context switching patent was related to the Crossplay feature of the Vita, but that's only a guess.

I also think the 14+4 setup as well as the APU+GPU concept are just abstractions of how this Liverpool processor works. It's all one big heterogeneous processor, so I think it's better to view each element as a gear wheel instead of a completely new engine. If I were Sony I would aim to have the infamous 4 CUs as flexible as possible. Having them available for GPGPU and also for rendering would be the silver bullet. Maybe Sony says "you can use as much GPU resources for rendering as you want, but we want you to use only these four super-duper CUs for your GPGPU algorithms".
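To put rough numbers on that split (purely a sketch, assuming the rumored 18 GCN CUs at 800MHz, with each CU doing 64 lanes x 2 FLOPs/cycle via FMA):

# Hypothetical 14+4 partition of the rumored Orbis GPU.
CU_GFLOPS = 64 * 2 * 0.8          # ~102.4 GFLOPS per CU at 800MHz

render_pool = 14 * CU_GFLOPS      # rendering CUs
compute_pool = 4 * CU_GFLOPS      # "super-duper" GPGPU CUs
print(render_pool, compute_pool, render_pool + compute_pool)
# 1433.6, 409.6, 1843.2 GFLOPS

So even if the four CUs were walled off for compute, we'd be talking about roughly 0.4 TFLOPS reserved out of ~1.84 TFLOPS total, which is exactly why keeping them flexible would matter so much.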
 
Isn't this what always happens: someone who 'seems' to know something says something that makes one console look better than the other, and fanboys jump on it like it's gospel?

I'm sure Sony fanboys would do the same if the situation were reversed, but this is getting beyond a joke.

I think it was a guy called Thuway over at GAF that said it, but I can't be arsed to search the cesspit thread over at NeoGAF, so I do apologise if it wasn't him, as I see he's just joined up here. In the main he seems a level-headed poster.
 
Are you sure?

I was recently told Durango's peak triangle and vertex rates > Orbis's.

Now why would that be... I don't know, but it seems pretty telling to me that the whole GPU might be better.
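For what it's worth, peak triangle rate on GCN is set by the front end (primitives per clock x core clock), not by the CU count, so a Durango advantage there wouldn't be crazy on its face. A quick sketch, with both the primitive rates and the clocks being assumptions on my part:

# Peak triangle rate = primitives per clock * GPU clock (GHz) -> Gtris/s.
def tri_rate(prims_per_clock, clock_ghz):
    return prims_per_clock * clock_ghz

print(tri_rate(2, 0.8))   # 2 prims/clk @ 800MHz -> 1.6 Gtris/s
print(tri_rate(2, 1.0))   # same front end @ 1GHz -> 2.0 Gtris/s

In other words, a higher clock or a wider setup engine could push Durango's peak geometry rate above Orbis's even with fewer CUs.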

Rumors from the start pointed to the CPU and GPU in Durango being more customized than those in Orbis, which are more PC-standard. But that can't stop people doing exact math, adding GFLOPS as if they were apples and concluding that Orbis will double Durango's framerate (200% in the real world would mean Orbis is 3x-4x Durango). All this reminds me of the days when people were saying PS3 = 2x Xbox 360 because of the FLOPS, and Xbox 360 = 1.5x Xbox. But the real world told a totally different story.

Now is not the time to do math or compare the FLOPS of different architectures. I think we have to focus on the real differences, such as the memory systems.
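Just to make the "exact math" concrete, here is the naive FLOPS comparison using the rumored figures (18 vs 12 GCN CUs, both assumed at 800MHz; the rumors may of course be wrong):

# The naive paper-FLOPS comparison people keep posting.
CU_TFLOPS = 64 * 2 * 0.8 / 1000   # per-CU peak at the assumed 800MHz

orbis = 18 * CU_TFLOPS            # ~1.84 TFLOPS
durango = 12 * CU_TFLOPS          # ~1.23 TFLOPS
print(round(orbis / durango, 2))  # 1.5

Note that even the paper ratio is 1.5x, not 2x, and as last gen showed, paper ratios rarely survive contact with real workloads.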
 
I won't even believe what MS and Sony say (as you can guarantee they'll be bullshitting), never mind an internet fame-chaser.
 
The origin seems to be proelite, who seems to say it's a fact but won't elaborate. Take all the salt and disbelief you'd like, as I know many of you will. I tend to give it some credit and take note, but wouldn't count it as fact or anything.

http://www.neogaf.com/forum/showpost.php?p=47002439&postcount=936

http://www.neogaf.com/forum/showpost.php?p=47002541&postcount=953
Who cares what proelite has to say? Take a look at proelite, November 2012 edition:
http://www.neogaf.com/forum/showpost.php?p=43932120&postcount=1199

[image: credibilityzero.png]
 
proelite has been all over the map and is an admitted troll, but of late he often seems to have good info.

But I'm not asking anybody to believe anything. Choose for yourselves. The truth, whatever it is, will come out eventually. One thing's for sure: the more I dig, the more complex and caveat-filled these machines seem, so it's hard to say. Over time we'll find out more with more certainty, but it may be a while.
 
I will believe my eyes when I have both consoles sat in front of my TV. PS-whatever for me (love Gran Turismo and Naughty Dog) and Xbox-whatever with Kinect 2 for the missus and for social occasions. So the Xbox will be my 'Wii' this gen, the console I can play whenever I want, and the PS what I play when the missus is out, lol.
 
Bandwidth isn't an advantage?

So you know exactly how standard GDDR stands against double the amount of DDR3 + eSRAM + DMEs?

Now the only advantage seems to be the extra CUs. Waiting to know more on Durango, which is surrounded by a halo of mystery.
 
This would only make sense if Durango was a 32nm Vishera + VLIW4 APU, which would be 2nd-gen HSA and therefore inferior to a 28nm 3rd-gen HSA. Not to mention that there is no reason at all to use Vishera if you're going to have a core frequency of 1.6GHz.
I can understand that Vishera at 32nm is a bad choice for the example setup, but Richland is 28nm, IIRC, so that may be more viable, perhaps? I hadn't really thought carefully about performance at the lower-end frequencies. I just assumed Vishera/Richland would perform better than Jaguar even at low frequencies, but I guess Jaguar is tuned to run better at lower frequencies, while Vishera/Richland are tuned for higher frequencies.


I think that the Sony GPU context switching patent was related to the Crossplay feature of the Vita, but that's only a guess.

I also think the 14+4 setup as well as the APU+GPU concept are just abstractions of how this Liverpool processor works. It's all one big heterogeneous processor, so I think it's better to view each element as a gear wheel instead of a completely new engine. If I were Sony I would aim to have the infamous 4 CUs as flexible as possible. Having them available for GPGPU and also for rendering would be the silver bullet. Maybe Sony says "you can use as much GPU resources for rendering as you want, but we want you to use only these four super-duper CUs for your GPGPU algorithms".
I had thought that initially the lowered rendering capabilities might have been a result of the hypothesized separate APU+GPU setup and dealing with synchronization between the two GPUs and so forth, rather than specialization stripping the 4 CUs of their rendering capabilities. Ah well, it seems not to be the case. Though I wonder why Sony might limit developers to using those 4 CUs explicitly for GPGPU work. I would think that if it was just an abstraction they would not attach an explicit number, but mention it as a best practice. Was it that they could only afford to include a few modified CUs in the design? Maybe it was previously 11+6 (with the modified CUs taking extra space), and this latest iteration is considered more "balanced"? It does seem to fit with what we know.
 
A mysterious halo of FUD, maybe.

I think we know what both systems are going to be; one set of buyers is reasonably happy, and the ones that were set on buying the other aren't. There really is no mystery at all.
 
Bandwidth isn't an advantage?
It's ~170 GB/s vs ~170 GB/s. The division of Durango's BW between RAM pools doesn't negate the working BW the processors have available. And if Durango has a clever architecture of lower-latency RAM that enables better efficiency, it could come out on top. Of course, that would require considering that these boxes are more than just a collection of numbers that can be compared as bigger = better...
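A sketch of where the ~170 vs ~170 comes from (the bus widths, data rates and eSRAM figure are all rumored or assumed, so take the exact values loosely):

# Rough peak bandwidth math; all inputs are rumored/assumed.
def bw_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

orbis_gddr5 = bw_gbs(256, 5.5)      # 176 GB/s, one unified pool
durango_ddr3 = bw_gbs(256, 2.133)   # ~68 GB/s main RAM
durango_esram = 102                 # assumed eSRAM peak, GB/s
print(orbis_gddr5, durango_ddr3 + durango_esram)  # ~176 vs ~170

The interesting question is then how often Durango's working set actually fits in the eSRAM, since the two pools only add up when both are being used at once.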
 