NGGP: NextGen Garbage Pile (aka: No one reads the topics or stays on topic) *spawn*

Status
Not open for further replies.
Not only that; as I understand it, it compresses/decompresses on the fly even to and from the same source/destination.

Obviously it can do it to the same pool, too. But that's exactly how it would work on Orbis (since there is only the one pool) so it didn't seem notable. The thing people thought was exciting was using the DMEs to move compressed data from the main memory and have it automagically land uncompressed to the ESRAM, but it's too slow to be the magic bandwidth multiplier people originally hoped for.
 
The thing people thought was exciting was using the DMEs to move compressed data from the main memory and have it automagically land uncompressed to the ESRAM, but it's too slow to be the magic bandwidth multiplier people originally hoped for.

Have you done any benchmarks?
You can't come to conclusions looking at single parts rather than the whole system, especially when you're talking about a customized, not well-known architecture.

How do you know how the DME will work, and with what? Where is the render target: in ESRAM, in DDR, in both? Is it a whole framebuffer or is it tiled, and so on?
When you build a system to be efficient, you have to consider the whole machine, so nobody at this moment knows how useful they are, apart from a few people inside MS and maybe AMD.
 
Obviously it can do it to the same pool, too. But that's exactly how it would work on Orbis (since there is only the one pool) so it didn't seem notable. The thing people thought was exciting was using the DMEs to move compressed data from the main memory and have it automagically land uncompressed to the ESRAM, but it's too slow to be the magic bandwidth multiplier people originally hoped for.

I believe the 'slow' part has been debunked in one of the Durango threads by those who had a better understanding of how decompression is used in the real world. Also, while people were interested in the co-function of decompression, I think most of the "exciting" conversation was focused on the idea of how the DMEs may be used in implementing virtual textures or tiling (based on their described functions in VGLeaks).


EDIT: Here it is:

I think they wanted LZ decompression for DXT texture data. The quoted 200MB/s compressed stream is 30% faster than a single core on my 2600S-based workstation. They'd need two Jaguar cores to get that kind of performance.

The 200MB/s compressed stream would decompress to 300-400MB/s of DXT data, or 300-800 MTexels/s, about 5-10 MTexels per 60 Hz frame. Probably fast enough by any measure.
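The figures in that quote can be sanity-checked with a little arithmetic. The assumptions here (not stated in the quote) are a 1.5-2x LZ ratio on DXT data, DXT1 at 0.5 bytes/texel and DXT5 at 1 byte/texel:

```python
# Rough sanity check of the throughput figures quoted above.
# Assumptions (not from any spec): ~1.5-2x LZ ratio on DXT data,
# DXT1 = 0.5 bytes/texel, DXT5 = 1 byte/texel, 60 Hz frame rate.

COMPRESSED_STREAM_MBPS = 200          # quoted LZ input rate, MB/s

def decompressed_rate(ratio):
    """MB/s of DXT data coming out of the decompressor."""
    return COMPRESSED_STREAM_MBPS * ratio

def mtexels_per_second(mb_per_s, bytes_per_texel):
    # MB/s divided by bytes/texel gives millions of texels per second
    return mb_per_s / bytes_per_texel

lo = mtexels_per_second(decompressed_rate(1.5), 1.0)   # DXT5, 1.5x ratio
hi = mtexels_per_second(decompressed_rate(2.0), 0.5)   # DXT1, 2.0x ratio

print(f"{lo:.0f}-{hi:.0f} MTexel/s, "
      f"{lo/60:.0f}-{hi/60:.0f} MTexel per 60 Hz frame")
# prints "300-800 MTexel/s, 5-13 MTexel per 60 Hz frame"
```

The upper end comes out a little above the quote's "5-10" because it assumes the best case (DXT1 at a 2x ratio) on both axes at once.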
 
Have you done any benchmarks?
You can't come to conclusions looking at single parts rather than the whole system, especially when you're talking about a customized, not well-known architecture.

How do you know how the DME will work, and with what? Where is the render target: in ESRAM, in DDR, in both? Is it a whole framebuffer or is it tiled, and so on?
When you build a system to be efficient, you have to consider the whole machine, so nobody at this moment knows how useful they are, apart from a few people inside MS and maybe AMD.


Isn't that what everyone is doing?

Guessing how some things may work, because there are no actual real tests of any of the hardware?

No one knows how efficient both systems will be; saying something is built to be efficient doesn't mean it won't encounter a rock in the road. That applies to both machines.
 
Have you done any benchmarks?
You can't come to conclusions looking at single parts rather than the whole system, especially when you're talking about a customized, not well-known architecture.

How do you know how the DME will work, and with what? Where is the render target: in ESRAM, in DDR, in both? Is it a whole framebuffer or is it tiled, and so on?
When you build a system to be efficient, you have to consider the whole machine, so nobody at this moment knows how useful they are, apart from a few people inside MS and maybe AMD.

We do not need to do a benchmark when we know the specifications of the parts that we are talking about.

He is correct for the most part; the DMEs will not make a gigantic difference, but the difference will still be there.
 
I believe the 'slow' part has been debunked in one of the Durango threads by those who had a better understanding of how decompression is used in the real world. Also, while people were interested in the co-function of decompression, I think most of the "exciting" conversation was focused on the idea of how the DMEs may be used in implementing virtual textures or tiling (based on their described functions in VGLeaks).


EDIT: Here it is:

That doesn't debunk what I was talking about. Before the details of its performance leaked, people were thinking it would add a multiplier to your total effective bandwidth. Whether it decompresses faster than someone's single i7 core, or whether it is fast enough to provide some arbitrary level of texture data, doesn't really support that naive dream scenario. Even in the quote you found, it takes a 200MBps stream and turns it into 400MBps at best. So that's like a +200MBps effective bump to the overall 68GBps DDR3 bandwidth. Before we had the figures, a lot of people were assuming it would make that 68GBps act like a 100GBps bus.
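As a back-of-envelope check of that "+200MBps" point, assuming the decompressor delivers 400 MB/s of texture data while only a 200 MB/s compressed stream actually crosses the bus:

```python
# Back-of-envelope for the "+200MBps" point above. Assumption: the
# decompressor lets 400 MB/s of texture data reach the GPU while only
# a 200 MB/s compressed stream crosses the bus, saving 200 MB/s of
# DDR3 traffic. Figures are the ones quoted in the thread.

DDR3_GBPS = 68.0   # quoted DDR3 bandwidth
SAVED_GBPS = 0.2   # 400 MB/s delivered minus 200 MB/s transferred

effective = DDR3_GBPS + SAVED_GBPS
gain_pct = 100.0 * SAVED_GBPS / DDR3_GBPS

print(f"effective: {effective:.1f} GB/s (+{gain_pct:.2f}%)")
# prints "effective: 68.2 GB/s (+0.29%)"
```

A fraction of a percent, nowhere near the hoped-for jump from 68GBps to 100GBps.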
 
That doesn't debunk what I was talking about. Before the details of its performance leaked, people were thinking it would add a multiplier to your total effective bandwidth. Whether it decompresses faster than someone's single i7 core, or whether it is fast enough to provide some arbitrary level of texture data, doesn't really support that naive dream scenario. Even in the quote you found, it takes a 200MBps stream and turns it into 400MBps at best. So that's like a +200MBps effective bump to the overall 68GBps DDR3 bandwidth. Before we had the figures, a lot of people were assuming it would make that 68GBps act like a 100GBps bus.

Well, the DMEs do appear to add bandwidth parallel to the existing primary pipelines, so in that regard there is "relief" in so far as whatever functions the DME provides (which I assume are common data-move functions in any system), it won't be pulling them from the existing pool. If its bandwidth spec is any indication, it could be saving as much as 25GB/sec, and I suppose in the right scenario it could make 68GB/s behave closer to 93GB/s. In the end, I find it hard to believe that, with all the apparent thought put into the design of Durango, MS would have screwed up something as basic as bandwidth.
 
Well, the DMEs do appear to add bandwidth parallel to the existing primary pipelines, so in that regard there is "relief" in so far as whatever functions the DME provides (which I assume are common data-move functions in any system), it won't be pulling them from the existing pool. If its bandwidth spec is any indication, it could be saving as much as 25GB/sec, and I suppose in the right scenario it could make 68GB/s behave closer to 93GB/s. In the end, I find it hard to believe that, with all the apparent thought put into the design of Durango, MS would have screwed up something as basic as bandwidth.

No, the DME bandwidth isn't additive to the system. 25.6GBps is just the maximum their bus operates at.

And no one is saying Microsoft screwed up anything. They just made different choices.
 
They have agreements since 2001..;) Even less..
Is the use of a winky supposed to mean you know something secret that nobody else does? Because I call bullshit.
Exactly why would Sony share the design of a CPU that will go into its consoles with another Japanese electronics company?
Cell was a derivative of PowerPC; designing a new architecture is expensive and there have to be users for it. The idea was that IBM would bring the PowerPC architecture (itself a collaboration between Apple, IBM and Motorola) and use it in some of their Blade servers for certain applications, while Sony and Toshiba would use the chips for different things, as well as manufacturing their own; therefore the costs (and therefore the risks) would be shared. Toshiba are still using Cell in some of their products, like TVs.

Yes, it is like that. Sony had factories to manufacture Blu-ray drives, and most of the Blu-ray drives made were for the PS3, not standalone players. It's funny, but unlike the 360, which has drives from Samsung, BenQ and several other brands, the PS3 doesn't. If Sony had to outsource those, why not go cheap?
Sorry, none of that made sense. If you're responding to why my PS3 had a Toshiba drive in it, you may recall that the PS3 was delayed because of issues manufacturing the blue laser diode required. So it's hardly surprising Sony sourced components from anywhere it could.
 
Indeed, if Apple are using the fan tech they developed for their MacBooks, it's brilliant. It still won't work out at the distances required for a console. The Kinect pipeline gets around 40dB of noise reduction, but takes 1/8th of a 360 core to do it. And directional mics, while they _may_ be useful on a Mac mini (although I'd doubt it), would not work on a gaming machine where the field of play is something like 100 degrees wide.
Thanks for responding. To the best of my knowledge there's no mic in a Mac mini; I used the Mac mini (could have used the iMac also) as an example of consumer electronics that runs silent, or near silent, without an expensive cooling solution. They are simply well engineered. When you lose the noise, you simplify the problem.

Directional mics, as I pointed out above, would not work for a large playspace.
Why is this the case and what type of directional mics are we talking about, Unidirectional? How about an omnidirectional with a wide-cardioid? Because that'd look to capture around 100 degrees ahead quite nicely, perhaps a bit wider.

Phone mics use a simple form of noise reduction. They don't have to be amazing because your mouth is literally an inch from the mic, and it's mono. Some phones use a second mic to try to remove ambient noise.
I think you're referring to the way the mics operate when the phone is held to the side of your face; sorry for not being clear, but I was talking about how the mics operate in hands-free mode. Or any mic in a laptop (where it's often under the keyboard) that will get vibrations from the HDD and possibly the optical drive, as well as needing to negate sound from the speakers.

I don't doubt it's a complex problem.
 
Is the use of a winky supposed to mean you know something secret that nobody else does? Because I call bullshit.

Cell was a derivative of PowerPC; designing a new architecture is expensive and there have to be users for it. The idea was that IBM would bring the PowerPC architecture (itself a collaboration between Apple, IBM and Motorola) and use it in some of their Blade servers for certain applications, while Sony and Toshiba would use the chips for different things, as well as manufacturing their own; therefore the costs (and therefore the risks) would be shared. Toshiba are still using Cell in some of their products, like TVs.


Sorry, none of that made sense. If you're responding to why my PS3 had a Toshiba drive in it, you may recall that the PS3 was delayed because of issues manufacturing the blue laser diode required. So it's hardly surprising Sony sourced components from anywhere it could.



You can call it what you want; other people after me told you the same. Sony and Toshiba had been partners on the PS for quite some time.

Toshiba was a rival company to Sony; it makes no sense for them to share the design with a company which will use Cell for its TVs. You know Sony sells TVs as well, right?

That is total BS. Now I know you made up that crap about your PS3 having a Toshiba Blu-ray drive. When the PS3 had a shortage of Blu-ray diodes, Toshiba was strongly opposing Blu-ray and pushing the other format, HD-DVD...

In fact, I searched fiercely for a PS3 with a Toshiba Blu-ray drive and nothing came up, so maybe you should share a link pointing at a PS3 using a Toshiba Blu-ray drive.

http://www.joystiq.com/2008/02/16/toshiba-pulls-hd-dvd-support-blu-ray-wins/

Toshiba dropped HD-DVD in 2008; by that time there was no shortage of Blu-ray diodes, and Toshiba didn't have a Blu-ray player.

http://www.blu-ray.com/news/?id=3365

And its first Blu-ray player arrived in late 2009. Your argument was never good, and you forgot that Toshiba, even though it was a partner with Sony, did not back Blu-ray; like MS, it backed HD-DVD.
 
You can call it what you want; other people after me told you the same. Sony and Toshiba had been partners on the PS for quite some time.

Toshiba was a rival company to Sony; it makes no sense for them to share the design with a company which will use Cell for its TVs. You know Sony sells TVs as well, right?

I don't understand how you can post two contradictory statements so close to each other. Were Sony and Toshiba partners, or rivals? Were they "sharing the design" of Cell or weren't they?

The problem is that utilization isn't the only measure of efficiency, and efficient design is as much about avoiding stalls, bottlenecks and saturation as it is about high utilization. If the city planners in your town designed the sewer system to run at a high level of "utilization" on an average day, that might seem very efficient, but only until the first time it rained and every toilet in the city backed up at the same time, dumping millions of gallons of raw sewage into the local waterways. In that case, building excess capacity to cope with peak loads and worst-case scenarios is more efficient, which was my point about the ROPs.
I'd just like to highlight this as the metaphor of the year (ignoring that Furmarking a GPU results in mere heat, while Furmarking a sewer system results in a hot mess).
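The sizing argument in the quoted sewer metaphor can be toy-modeled; the load numbers below are invented purely for illustration:

```python
# Toy illustration of the sizing argument above: provisioning for the
# average load looks "efficient" until a spike arrives. The numbers
# are made up for illustration only.

loads = [40, 45, 50, 42, 48, 95, 44]   # work per tick; 95 = the "rainstorm"

avg_capacity = sum(loads) / len(loads)   # capacity sized for the average day
peak_capacity = max(loads)               # capacity sized for the worst case

# Work that an average-sized system cannot absorb when the spike hits
overflow = sum(max(0, load - avg_capacity) for load in loads)

print(f"avg-sized capacity {avg_capacity:.0f}: {overflow:.0f} units overflow")
print(f"peak-sized capacity {peak_capacity}: 0 units overflow, "
      f"{100 * avg_capacity / peak_capacity:.0f}% average utilization")
```

The peak-sized system looks "wasteful" at ~55% average utilization, but it is the one that never backs up, which is the point being made about the ROPs.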
 
I don't follow the thread but you may be talking about different Toshiba divisions. The semiconductor arm and the consumer electronics arm are different.
 
I don't understand how you can post two contradictory statements so close to each other. Were Sony and Toshiba partners, or rivals? Were they "sharing the design" of Cell or weren't they?


I'd just like to highlight this as the metaphor of the year (ignoring that Furmarking a GPU results in mere heat, while Furmarking a sewer system results in a hot mess).


Haha....

No, that part sounds like that but it is not. I said that because in another post I told him it makes no sense for Sony to make a CPU with a rival company if they were not partners.
 
Obviously it can do it to the same pool, too. But that's exactly how it would work on Orbis (since there is only the one pool) so it didn't seem notable. The thing people thought was exciting was using the DMEs to move compressed data from the main memory and have it automagically land uncompressed to the ESRAM, but it's too slow to be the magic bandwidth multiplier people originally hoped for.

Isn't the possibility of having the setup designed specifically for virtual textures a massive change to the RAM bandwidth argument already? If they are aiming to leverage virtual textures in a fundamental way, that would significantly cut the bandwidth required for streaming textures for a given scene. They would also gain on the volume side of the RAM; i.e. having more RAM is a much better option than having faster RAM in this scenario.
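A rough sketch of why virtual texturing cuts streaming bandwidth: only the pages actually sampled in a frame get fetched, not whole textures. The page size, texture size and visibility fraction below are invented for illustration:

```python
# Sketch of the virtual-texturing bandwidth argument above: only the
# texture pages actually sampled this frame are fetched, not whole
# textures. Page size, texture size and the visibility fraction are
# invented for illustration; DXT1 is 0.5 bytes/texel.

PAGE_TEXELS = 128 * 128   # texels in one hypothetical 128x128 page
DXT1_BPT = 0.5            # bytes per texel for DXT1

def full_texture_bytes(w, h):
    """Bytes to stream the whole texture."""
    return w * h * DXT1_BPT

def resident_bytes(visible_pages):
    """Bytes to stream only the visible pages."""
    return visible_pages * PAGE_TEXELS * DXT1_BPT

# e.g. a 4096x4096 texture of which ~6% of pages are visible
full = full_texture_bytes(4096, 4096)
pages_total = (4096 // 128) ** 2          # 1024 pages in the full texture
virtual = resident_bytes(int(pages_total * 0.06))

print(f"full upload: {full / 2**20:.1f} MiB, "
      f"virtual-texture working set: {virtual / 2**20:.2f} MiB")
# prints "full upload: 8.0 MiB, virtual-texture working set: 0.48 MiB"
```

Under those made-up numbers, the working set is over an order of magnitude smaller than the full texture, which is why more-but-slower RAM can be the better trade in this scheme.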
 