NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

Custom bought by MS, off the shelf bought by Sony.

I'm pretty sure that's the only criterion used here. Maybe MS renaming some parts did the trick.

Customised parts that are still on their way to the shelf.
 
Btw, what kind of improvements *are* likely in Durango's CPU? Like MOST likely? I'm curious as to what kind of improvements can be made to double or quadruple the flops, and what one is likely to expect.

Don't the vgleaks specs go into detail on the Durango CPU and the details aren't any different from regular Jaguar cores?

Also, the strangest part of that DF article was that SuperDAE is apparently vgleaks' source, despite him contradicting their specs on his Twitter account.

Is he trolling Richard into thinking he's behind vgleaks? (Perhaps Rich hasn't seen the tweets where he says all the recent spec leaks are untrue or old info, or Rich is basing this belief on DaE having given vgleaks pics of the rear of the devkits last year.)

Or perhaps DaE is trolling us by saying the vgleaks specs are inaccurate.

Either way he is a big troll.
 
Based on informal developer conversations, yes, libGCM does make a difference.

This might also be important: a few devs have said that the API on 360 hinders full utilisation of the hardware compared to PS3.

And if Lottes and that Edge article are correct, then the situation is similar or worse on Durango due to MS's interest in forward compatibility and/or Durango being a Windows 8 app box.

So that may play a part too if devs can't make the most of Durango due to stricter APIs.
 
(surprisingly) Microsoft. Have any leaks confirmed that Kinect 2 is actually a depth camera?

It'll most likely be either a camera sourced from PrimeSense again, or a time-of-flight depth sensor that they had been working on but didn't finish in time. Considering that they actually purchased a depth-camera company, I'd say there's a good possibility that it won't be PrimeSense-sourced this time. It will still likely have a depth camera, however; it just might not be the same as what was featured in the first Kinect.

Both are 64-bit, and both could even be Jaguar. What I mean is that the Orbis leak said Jaguar while the Durango one only said x64. It could mean anything, but it's weird that only one carries the Jaguar name.

Yeah, I found it strange that they would only put x86-64 for Durango but specifically mentioned Jaguar for Orbis.

It may mean something, it may not. But it certainly is curious. Most likely it's Jaguar-based as well, but I can't help wondering if it was purposely not called Jaguar.

Don't the vgleaks specs go into detail on the Durango CPU and the details aren't any different from regular Jaguar cores?

Also, the strangest part of that DF article was that SuperDAE is apparently vgleaks' source, despite him contradicting their specs on his Twitter account.

Is he trolling Richard into thinking he's behind vgleaks? (Perhaps Rich hasn't seen the tweets where he says all the recent spec leaks are untrue or old info, or Rich is basing this belief on DaE having given vgleaks pics of the rear of the devkits last year.)

Or perhaps DaE is trolling us by saying the vgleaks specs are inaccurate.

Either way he is a big troll.

Or, as has been speculated often enough here by some: the information that we are working with is not only incomplete, it may not even be totally, or even partially, accurate. :)

It's fun to speculate on the rumored leaks, but I really do wish people would stop getting so serious about them.

Regards,
SB
 
So, pretty much everything I've guessed at is seemingly true. If only I were this lucky with the lottery.

I don't know why people don't just accept what we know and get on with it.

Both consoles will have their own merits even if one is ultimately better/more powerful.

It never bothered me when my PS3 got bad ports; I still enjoyed the games.

If Microsoft make a box that doesn't break this time, I will have both anyway, like most people.
 
Just wondering what the line in the sand is that separates custom parts from "off the shelf" parts.
I doubt anyone's adhering to any strict definition. RSX is customised, but also off-the-shelf. The technical definition would be any GPU that's had any modification beyond those made available for IHVs to incorporate in devices and graphics cards, but the typical forumite is going to be using a fuzzier definition where a customisation has to change the architecture and/or operation somewhat. If Liverpool is sharing the same DNA as whatever Southern Islands chip it's supposed to parallel (R10xx) just with variations in CU counts and clocks and maybe cache sizes, it'll be considered an OTS part. Only if it features something novel to the architecture, like eDRAM or a raytrace unit, will it be considered custom.
 
Not commenting on the rumors, but what constitutes "off the shelf" in this case.
I think what Shifty said makes a lot of sense.

Even so the definition is kind of fuzzy because off-the-shelf isn't quite right for a console.

Sony are going to add flexibility and new features to Liverpool (hence the extra 4 CUs) so as not to be stuck with legacy features only necessary in the PC space. They compromise to have something a bit different, without characteristics consoles don't cater for. Sony already have a TBDR machine, the PS Vita, if I am not wrong.

We don't know very much about the systems though. I am quite confused if you ask me.

Shifty and his *visionary* blitter (DMEs) and TBDR theories are making a lot of sense to me for Durango.

I also think at this point that Durango is a TBDR machine, the continuation of Microsoft's abandoned Talisman project.

http://en.wikipedia.org/wiki/Microsoft_Talisman

(the article is great, because it clearly explains the difference between conventional rendering, TBDR and what Talisman did, a good read)
 
It's clearly a custom APU, i.e. we're unlikely to ever see anything like it in the PC space (8 Jaguar cores, 18 GCN CUs and a GDDR5 memory interface), but at the same time it uses mostly existing PC technology for the individual components.

I'd say it's a stretch to call it off the shelf but I see where people are coming from with the designation.
 
I'd say it's a stretch to call it off the shelf but I see where people are coming from with the designation.
People need to call it something to differentiate from truly custom hardware like Xenos or EE+GS, and the natural opposite is 'off-the-shelf'. I think that's a language we'll just have to accept.
 
It's clearly a custom APU, i.e. we're unlikely to ever see anything like it in the PC space (8 Jaguar cores, 18 GCN CUs and a GDDR5 memory interface), but at the same time it uses mostly existing PC technology for the individual components.

I'd say it's a stretch to call it off the shelf but I see where people are coming from with the designation.

There's an obvious point concerning 'off-the-shelf' which may explain some contradictory rumours:
- if you use an "off-the-shelf" 7770, and decide to upgrade it to a 7850, then you "just" need to replace that component.
- if you heavily customize a 7770, and decide to upgrade it to a 7850, then you've got to both replace the component, and re-customize it.

i.e. It may be that when Sony changed their target memory to 4GB, they also changed the target GPU... something that Microsoft may not have been in a position to do.
 
There's an obvious point concerning 'off-the-shelf' which may explain some contradictory rumours:
- if you use an "off-the-shelf" 7770, and decide to upgrade it to a 7850, then you "just" need to replace that component.
- if you heavily customize a 7770, and decide to upgrade it to a 7850, then you've got to both replace the component, and re-customize it.

i.e. It may be that when Sony changed their target memory to 4GB, they also changed the target GPU... something that Microsoft may not have been in a position to do.

Huh?
 
...

A much more in-line proposal which also makes more sense would be to reserve them for certain functions when the computing power is required, and allow the devs to tap into them when they're not required.
In this case, the "4" would serve as a hard limit on how many CUs the other functions would take up, so the developers will know that "in any case, we will still have 14 CUs to work with".

The breakdown argument from some people currently delving on the forum sounds like this
...

Thanks. Really, the notion that you are forbidden to use these holy +4 CUs is weird; if you have these awesome parts that you can access at very low latency and that can do anything, then why not run rendering on all 18 CUs if that's what you need to do?
(this could happen for a step during the rendering of a frame)

Still, if that 4+14 thing is true, I guess there must be something to it. I'll suppose that the CUs are all identical. There would be two "front ends", one for the 14 and one for the 4 (I put "front end" in heavy quotes; I use it with a vague and loose meaning).

Someone with a real understanding of GCN could comment but here's something (old, in a GCN preview) about "Asynchronous Compute Engines"

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5

Meanwhile on the compute side, AMD’s new Asynchronous Compute Engines serve as the command processors for compute operations on GCN. The principal purpose of ACEs will be to accept work and to dispatch it off to the CUs for processing. As GCN is designed to concurrently work on several tasks, there can be multiple ACEs on a GPU, with the ACEs deciding on resource allocation, context switching, and task priority. AMD has not established an immediate relationship between ACEs and the number of tasks that can be worked on concurrently, so we’re not sure whether there’s a fixed 1:X relationship or whether it’s simply more efficient for the purposes of working on many tasks in parallel to have more ACEs.

And here, for reference, you can see the block diagram of a GCN part with 32 CUs and two ACEs.
http://www.guru3d.com/articles_pages/amd_radeon_hd_7970_review,5.html

So in Orbis I imagine one ACE takes care of 4 CUs and one (or two) takes care of the other 14. Or maybe other stuff is duplicated: two Command Processors? Two Global Data Shares? That would virtually make it a dual GPU, but still somehow sharing L2 and ROPs.

There could be such a special arrangement. Or maybe it's just an 18-CU GPU and the 14+4 split is entirely a software construct.
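
Just to make that last idea concrete, here's a toy sketch of what a purely software 14+4 split could look like. This is pure speculation for illustration, not how Orbis actually schedules work; the queue names and job lists are made up.

```python
# Toy model of a software-only "14+4" policy: 18 identical CUs fed by two job
# queues. Four CUs prefer the reserved/compute queue but fall back to
# rendering work when it's empty, so nothing sits idle. Entirely hypothetical.
from collections import deque

TOTAL_CUS = 18
RESERVED_CUS = 4  # the hypothetical "+4" pool

render_q = deque(f"draw-{i}" for i in range(16))
compute_q = deque(["audio", "physics", "skinning"])

def schedule_tick():
    """Hand one job to each CU for a single scheduling tick."""
    assignments = {}
    for cu in range(TOTAL_CUS):
        prefers_compute = cu < RESERVED_CUS
        primary, fallback = (compute_q, render_q) if prefers_compute else (render_q, compute_q)
        if primary:
            assignments[cu] = primary.popleft()
        elif fallback:
            assignments[cu] = fallback.popleft()
    return assignments

print(schedule_tick())
```

The point being: a split like this could live entirely in the runtime, with all 18 CUs physically identical, which is consistent with either reading of the leak.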
 
People need to call it something to differentiate from truly custom hardware like Xenos or EE+GS, and the natural opposite is 'off-the-shelf'. I think that's a language we'll just have to accept.
That's like only having a concept of black and white, then introducing grey and insisting grey be called black or white. Call something what it is; in this case, rumours suggest a modified COTS (commercial off-the-shelf) part. This is a widely used term in many industries, including aerospace and defence. Oxymoron perhaps, but it's clear what it means.

Do you really think knowingly mislabelling something is going to help this car crash of a thread? :no:
 
That's like only having a concept of black and white, then introducing grey and insisting grey be called black or white. Call something what it is; in this case, rumours suggest a modified COTS (commercial off-the-shelf) part. This is a widely used term in many industries, including aerospace and defence. Oxymoron perhaps, but it's clear what it means.

Do you really think knowingly mislabelling something is going to help this car crash of a thread? :no:
You can't get cultures to adopt language changes over the vernacular. Just doesn't happen. English is littered with mutable meanings and we're smart enough to adapt to them generally. I think this one's just gonna slide.
 
I am rather interested in nextgen tools and workflow setup. Someone should get you to write a piece. 8^D

It's not that complicated, really...

Tessellation on its own can only make curves smoother. In our real world the only things that are perfectly smooth are man-made objects, and even those are usually built from many smaller pieces.
So in this case you could get a more rounded barrel on a gun, but you'd still need a lot of non-tessellated objects for the other parts. Look at this picture to see how much of such a model could not be made just by tessellating a few simple objects:
[Image: m4_mk18cqbr.jpg]


Also, modeling for tessellation usually requires more geometry than a model that wouldn't get subdivided, although this depends on what algorithm is used. We need all this extra geometry for things like maintaining a smooth surface or sharp edges; it is too complicated to summarize here. But for us, building a realistic representation of the above-mentioned gun can easily require ~100,000 quad polygons - before subdividing it.
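
To put rough numbers on how quickly subdivision multiplies that, here's a minimal sketch. It assumes Catmull-Clark-style subdivision where each level splits every quad into four; the base count is the ~100,000-quad gun above, and the level count is just illustrative.

```python
# Quad counts per subdivision level, assuming each level splits every quad
# into four (Catmull-Clark-style). Base count is the ~100,000-quad gun above.
base_quads = 100_000
for level in range(4):
    print(f"level {level}: {base_quads * 4 ** level:,} quads")
# level 0: 100,000
# level 1: 400,000
# level 2: 1,600,000
# level 3: 6,400,000
```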



Realistic organic objects like people, clothing, terrain, vegetation, and so on, are pretty complex forms with uneven, bumpy surfaces (it's the reason why you can't model them with constructive solid geometry either). So you need displacement mapping on top of the tessellation to 'model' these forms.

The most efficient use of displacement would start with a very simple base model that gets its polygons subdivided to create more vertices, which can then be moved by the displacement map. Basically you wouldn't actually model things like fingernails, or belts, or individual muscle shapes - but just a simple 'stick man' kind of model and rely on the displacement to add the details.
But this means creating precise forms will always require creating a LOT of geometry. You'd also have to add a lot of polygons everywhere - it is very hard to locally increase or decrease mesh detail without messing up the overall shape of the model, and so on.
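
For what it's worth, here's a minimal sketch of the mechanic itself, with made-up data; in a real engine this would of course happen on the GPU (typically in the tessellation/domain stage), not in Python. Each new vertex created by subdivision is simply pushed along its normal by the sampled height.

```python
# Minimal illustration of displacement: each subdivided vertex is pushed
# along its surface normal by a height sampled from the displacement map.
# Detail can only appear where there are enough vertices to push.
def displace(position, normal, height, scale=1.0):
    """Return the vertex position moved along its normal by height * scale."""
    return tuple(p + n * height * scale for p, n in zip(position, normal))

# Two vertices of a flat patch; heights would come from the displacement map.
print(displace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.05))  # -> (0.0, 0.0, 0.05)
print(displace((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.20))  # -> (1.0, 0.0, 0.2)
```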

For example, if you want to create something as simple as a thigh pocket on some cargo pants, vertices should be aligned with the outlines, otherwise you'll get a blobby or zig-zagging edge after the displacement. So using, say, 1 or 2 levels of subdivision - or none at all - on a supposed low-end console instead of the 4-5 levels required for the proper shape would completely ruin the artwork.

An even better example is the Unigine benchmark's dragon:
[Image: dragon_tesselation.jpg]

Without enough vertices to use, those spikes would first look like a noisy mess, then disappear completely.

Another problem is that you can't animate detail that's only represented in a displacement map - neither by manual animation nor by any in-engine dynamics. This would not be such a big problem in the current generation, but I'd expect nextgen console games to look into cloth sims and such a lot more. For example in a possible Uncharted sequel, the gun holster and its belts on Drake shouldn't be sticking to his torso but move around independently. We have a lot of this secondary dynamics stuff on all of our characters and it adds a lot of subtle realism.

You could of course model all the details like the pocket into the base geometry - but in that case you'd get a high detail model from the start, and tessellation would only add small scale detail on top of the existing shapes, so it wouldn't be able to scale down the geometry load for a low-end system.

So the only solution would be to build two completely different ingame assets, which is just impossible with the already huge budgets. This is also why every ingame implementation of tessellation and displacement we've seen so far was completely half-hearted and thus not too impressive.


Note that I'm not against using tessellation and displacement mapping - in fact I'd like to see it because it's a very good way to create actual geometry detail, and we rely on it heavily in our offline work. But, it does not allow you to scale the same content for two platforms with significantly different performance; once you start authoring for displacement, it has to be there or your artwork will suffer significantly.

Oh, and there are the technical issues related to the current hardware's efficiency: too many small triangles would waste a lot of GPU performance. It'd require a completely new architecture to make good use of high levels of tessellation and displacement.

I would expect the next gen to move to 50-100k polygon characters as a standard instead, and only use tessellation (and displacement) in a few limited cases: for example, smoothing out the wheels on cars, making terrain more detailed (we already have some level of this going on) and of course using it for water. And in no way could tessellation help bridge significant differences in hardware performance and suddenly make a 2-year hardware upgrade cycle easier to take advantage of.
 
You can't get cultures to adopt language changes over the vernacular. Just doesn't happen. English is littered with mutable meanings and we're smart enough to adapt to them generally. I think this one's just gonna slide.
This isn't a language change, although I'll concede fanboyism may well be a sub-culture. ;) Getting people to call a giraffe a giraffe and not a big horse isn't a language change or misappropriated vernacular, it's basic ignorance of what a giraffe is.

You've posted a few times about people not reading and learning about what they are talking about, but here you're suggesting everybody misuse the definition of a basic aspect of the architectures we're looking at. If folks start off in the wrong place, they're lost from the start.

Not that I can take this thread seriously. It's astonishing how passionately some folks can disregard facts and math and how others are inclined to hope for things when the topic is two plastic boxes with chips to be used to play games. It's bonkers really.

Both boxes are going to be technically better than the consoles we have now. One box will, probably, be better to some degree than the other.
 
Also, modeling for tessellation usually requires more geometry than a model that wouldn't get subdivided, although this depends on what algorithm is used. We need all this extra geometry for things like maintaining a smooth surface or sharp edges; it is too complicated to summarize here. But for us, building a realistic representation of the above-mentioned gun can easily require ~100,000 quad polygons - before subdividing it.
Many years ago we were talking about NURBS rendering (PSP's reported NURBS hardware IIRC) and there were lots of issues back then. Has there really been no progress in realtime SDS rendering such that we can't ditch the triangle meshes and go with the root models in the first place? :(
 
This isn't a language change, although I'll concede fanboyism may well be a sub-culture. ;) Getting people to call a giraffe a giraffe and not a big horse isn't a language change or misappropriated vernacular, it's basic ignorance of what a giraffe is.
It's more like talking about giraffes and horses under the umbrella of animals. Or 'stripy horse' to mean 'zebra'.

You've posted a few times about people not reading and learning about what they are talking about, but here you're suggesting everybody misuse the definition of a basic aspect of the architectures we're looking at.
The term used to denote them is pretty immaterial. The thing that matters is what's inside. Whether a zebra is called a zebra or a stripy horse, its internal operation and difference from a 'brown horse' can still be discussed. Whether we call Orbis's GPU a custom part or a custom-enhanced part or a proprietary modified part or off-the-shelf with tweaks doesn't really contribute to the (utterly confused) conversation to any degree IMO, as the differences in what it does are what matter and we're oblivious to those. Deciding how we'll name customised graphics parts versus custom graphics parts is no doubt the most useful thing this thread could hope to achieve, but no-one will take any notice of whatever we decide upon. ;)
 
Many years ago we were talking about NURBS rendering (PSP's reported NURBS hardware IIRC) and there were lots of issues back then. Has there really been no progress in realtime SDS rendering such that we can't ditch the triangle meshes and go with the root models in the first place? :(

I'm not aware of how the research is going in realtime implementations, but I do know that content creation apps like Mari have some good things. I'll look into the details one day, but not now ;)

Nevertheless, even the quad-based implementations are such that a lot of extra geometry is required for complex surfaces. This is probably related to the underlying math and cannot be circumvented in any reasonable way. We can manage this issue, thanks to the hardware and some clever software trickery. But I don't know if there are any alternatives to just using brute force.

For example, the ingame Master Chief model in Halo 4 is using normal mapping. They had a very, very high-res source model for the normal maps, but it was also quite messy, impossible to rig or UV map, so it could not be used in our work. The version we've re-built for the cinematics (I hope it's OK to disclose it) is more than 700,000 quad polygons before subdivision, and it required 2 levels for close-ups.
Understand that this is the result of the art design, where a lot of precise and complex details are used instead of large, simple curved surfaces. Modeling a Cylon from the old BSG TV show would be a lot easier, just because of the design. So the problem is not the tech, but the task itself.
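
(For scale: if each subdivision level again quadruples the quad count, as in the rough example earlier, that close-up version would come out at roughly 700,000 × 4² ≈ 11.2 million quads.)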

So how could a non-triangle based realtime SDS implementation help here for the inevitable Halo 5 incarnation?
It's probably going to be better to build a 50-100k polygon low-res model and use normal maps; it'll be faster and better looking than trying to force tessellation and such into the engine (and hard-surface details are near impossible to displace, unless you're using micropolygons).
 