News & Rumors: Xbox One (codename Durango)

I think it will end with all of us being wrong and all of us being partially right; nobody has the whole truth, which is why it's fun for a tech lover to speculate about it.
Some speculate with more salt, some with a lot of salt (3dilettante, Shifty, etc.), some with less. What turns a fun, good discussion sour is someone who wants to make war, someone who says things like "you monkey, you don't understand a thing, your English is pathetic, you're ridiculous, what you say is moronic, give us your credentials, tard, you're high" and takes that sort of approach.
Shifty is right that some things carry more weight as arguments (Yukon, the DAE/VGleaks documents, etc.), but this is still open to discussion...
The discussion has to be intelligent and coherent, and marcberry's argument isn't. It's dependent on an understanding of what a VSP is, but instead of proving what it is, he just posts the same info again and again in different colours as if that changes anything. That's what irks people on this board. Subjective stuff can generally be disagreed with without getting knickers in a knot, but when the weight of knowledge is heavily in one favour and the arguments to the contrary aren't based on anything substantial, first we get noise covering the same arguments over and over, and then things get personal as frustrations bubble over.

The remit of B3D isn't so much to predict everything or understand everything perfectly, but to provide a place where logic and reason prevail. If that logic and reason leads people to misinterpret and predict wrongly, it doesn't matter, and we'll be quietly surprised. Guesses based on misunderstanding or hopeful interpretation, even if they prove right in the end (e.g. a document may be poorly worded so as to fail to convey its intended meaning, and someone might blindly guess at the intended meaning, ignoring all evidence to the contrary), are never going to go down well here.
 
That seems closest to a VLIW4/VLIW5 GPU, and is less consistent with the Durango leaks than GCN. The arrangement of the L1, LDS, scheduler, registers, and texture units doesn't match the Durango leak.

I'm curious whether VSP means the same thing in all places.
After 3dcgi's post, another nail in the coffin: this has an SPI. Therefore it's not GCN, since GCN does interpolation in the pixel shader (as does Evergreen).
 
What turns a fun, good discussion sour is someone who wants to make war, someone who says things like "you monkey, you don't understand a thing, your English is pathetic, you're ridiculous, what you say is moronic, give us your credentials, tard, you're high" and takes that sort of approach.
I didn't realize this forum was so full of that sort of thing that it's worth mentioning....

Some users, like me and others, also have problems with written English (sorry, I know this is annoying), and that adds huge difficulties when others take an aggressive approach.
This may lead to misunderstandings, but it shouldn't lead to aggressiveness.

I would like to offer some contributions, some ideas, but just to make it clear: because of a few users, I (and others) feel the environment is hostile.
If you feel this way, please report the offensive posts or start a thread in Site Feedback.

One could describe my questioning of marcberry as aggressive and even patronizing, but I don't believe his numbers or his reasoning. I'm not sure my challenging his info qualifies as hostile. I think bringing "hate" and "fairness" into a discussion about facts and figures isn't exactly conducive to better SNR.
 
From another thread, the VSP is named and diagrammed! It's a vector processor with a scalar unit (3+1). Is this different from a GCN SIMD array's ALU structure?
 
Shifty, a VSP is also diagrammed in the patent 3dil said showed a VSP as something closest to VLIW4/5 and Jawed said was pre-Evergreen (patent, post).
 
Thinking about efficiency, I have a doubt about the GPU: in GCN, each L1 cache has a 128 KiB partition of the L2 cache, right?

So in Durango's GPU the L2 cache should be 12 × 128 KB = 1.5 MB, but according to VGleaks it's 512 KB.

So each L1 would have a ~43 KB partition, a LOT less than 128 KB (in reality there would be four 128 KB partitions, virtually divided into twelve ~43 KB segments).

Am I missing something, or are they using eSRAM to compensate for the lack of L2?

And what about the competitor? Its GPU's per-L1 L2 partition is even lower, about 28 KB, and there's no eSRAM at all.

What do you think?
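(For reference, a quick Python check of the arithmetic in the question; the 18-CU figure for the competitor is what the ~28 KB number above implies:)

# Per-CU share of a 512 KB L2, IF it were split evenly across the L1s
# (the assumption the replies below call into question).
l2_total_kb = 512            # VGleaks figure for Durango
print(l2_total_kb / 12)      # ~42.7 KB for Durango's 12 CUs, vs. the 128 KiB assumed
print(l2_total_kb / 18)      # ~28.4 KB for the competitor's 18 CUs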
 
The VSP in that diagram has 4 MACC units and one scalar/transcendental unit (not exactly the same kind of scalar we have these days).

The general trend seems to be that AMD does not use VSP for marketing or product evangelism. I'm trying to find any notable reference, but it's almost like the patent writers (edit: lawyers?) arrived at a preferred nomenclature for what we've called shader clusters.

It doesn't mesh with other parts of the Durango rumors. Regardless of whether it's VLIW4 or VLIW5, one key point is that a shader cluster is the granularity at which the GPU's "threads" can operate and branch.
The Durango diagram, with 4 VSP boxes per SIMD, actually quarters the number of "threads" possible, if you use the nearly decade-old definition of VSP.

Just to emphasize what I mentioned earlier, there is a mention of a VSP in the GCN ISA document, although I'm not sure what weight it should be given.

http://developer.amd.com/wordpress/media/2012/10/AMD_Southern_Islands_Instruction_Set_Architecture.pdf

The DP_RATE[2:0] field of the TRAP_STS register specifies to the shader how to
interpret TRAP_STS.cycle. Different vector shader processors (VSP) process
instructions at different rates.

I would expect some amount of customization in Durango. I'm not sure if I can make the jump to using nomenclature from 2005 by skipping documentation and statements from this decade. I'd need something that can reconcile the claim with other parts of the design it impacts.


Thinking about efficiency, I have a doubt about the GPU: in GCN, each L1 cache has a 128 KiB partition of the L2 cache, right?

I'm drawing a blank on an L1-to-L2 partitioning. L2 slices are linked to a memory channel.
 
Thinking about efficiency, I have a doubt about the GPU: in GCN, each L1 cache has a 128 KiB partition of the L2 cache, right?

So in Durango's GPU the L2 cache should be 12 × 128 KB = 1.5 MB, but according to VGleaks it's 512 KB.

So each L1 would have a ~43 KB partition, a LOT less than 128 KB (in reality there would be four 128 KB partitions, virtually divided into twelve ~43 KB segments).

Am I missing something, or are they using eSRAM to compensate for the lack of L2?

And what about the competitor? Its GPU's per-L1 L2 partition is even lower, about 28 KB, and there's no eSRAM at all.

What do you think?

There are multiple L1 sizes, but they are in no way tied to the size of the L2 or to the CU count.

In GCN there are multiple levels of caching, and it seems like both Durango and the others have the normal amount of cache; if you want more details, feel free to read the whitepaper.

http://www.amd.com/la/Documents/GCN_Architecture_whitepaper.pdf

Page 4 covers the caching setup per CU.
Page 10 details the L2.

Furthermore, the L2 is tied to the number of memory controllers, so both having 512 KB is not really surprising; it seems to me that the majority of GCN cards have 512 KB or less of L2.
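A minimal sketch of that relationship, with the slice size taken from the whitepaper's 64-128 KB per-slice range (the four-channel example is illustrative, not a confirmed Durango figure):

# GCN partitions the L2 per memory channel, not per CU:
# one slice per channel, 64-128 KB each (whitepaper, page 10).
def l2_total_kb(channels, slice_kb=128):
    return channels * slice_kb

# e.g. a 256-bit bus = 4 x 64-bit channels -> 512 KB of L2,
# whether the chip in front of it has 12 CUs or 20.
print(l2_total_kb(4))   # 512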
 
OK, here's the latest pastebin being passed around GAF. Tar and feather me if you must; at least I don't see anything glaringly impossible about it.

http://pastebin.com/sckq0WNF



Final Specifications (as of March 2013):

CPU: 8-core custom AMD architecture (enhanced Jaguar), 1.6 GHz
GPU: GCN, 20 shader cores providing a total of 1280 threads, 0.8 GHz; at peak performance, the GPU can effectively issue 2 trillion floating-point operations per second
Memory: 12 GB DDR3-2133, 384-bit, bandwidth: 102.4 GB/s
64 MB eSRAM, 2048-bit ("on die"), bandwidth: 204 GB/s
Accelerators: 4 data move engines; image, video, and audio codecs; Kinect multichannel echo cancellation (MEC) hardware; cryptography engines; SHAPE block
TDP: max. 185 W

The architecture design is the same as the one from the alpha specifications (February 2012).
The main enhancements are the memory controller (custom "IP", 384-bit) and the GPU part of the die (12 SCs to 20 SCs; 32 MB eSRAM to 64 MB eSRAM, double bandwidth).
This results in both an increase in power consumption (from 125 W to 185 W) and an increase in the size of the APU (300 mm² to 450 mm²).
Yield rates of the chip have been moderate at best (improvements since September 2012).

Probably the biggest reddish (pink?) flags, if any, are:

185 watts seems a little low for that.

"Sources" said the Durango 12 CU specs were accurate as of just a few months ago, and this conflicts with them.

I also don't see anything that couldn't have been written by a well-informed member of NeoGAF or B3D.

To quote AlStrong, *shrug*
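FWIW, the 2 TFLOPS figure is at least internally consistent with standard GCN math for the numbers in the pastebin (a quick check, nothing more):

# GCN peak FLOPS = CUs x 64 lanes x 2 ops/cycle (FMA) x clock.
cus = 20             # the pastebin's "shader cores"
lanes = 64           # standard GCN CU; 20 x 64 = the 1280 "threads" listed
clock_ghz = 0.8
print(cus * lanes * 2 * clock_ghz / 1000)   # 2.048 TFLOPS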
 
OK, here's the latest pastebin being passed around GAF. Tar and feather me if you must; at least I don't see anything glaringly impossible about it.

http://pastebin.com/sckq0WNF

Probably the biggest reddish (pink?) flags, if any, are:

185 watts seems a little low for that.

"Sources" said the Durango 12 CU specs were accurate as of just a few months ago, and this conflicts with them.

I also don't see anything that couldn't have been written by a well-informed member of NeoGAF or B3D.

To quote AlStrong, *shrug*

One of the mistakes I can see is that the eSRAM bandwidth is tied to the GPU clock speed, so the eSRAM bandwidth cannot be higher without the GPU getting a clock speed increase.
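(For reference, the arithmetic linking eSRAM bandwidth to clock and bus width; the 1024-bit figure is what the earlier 102.4 GB/s leak implies at 0.8 GHz, and the pastebin's doubled number comes from its 2048-bit claim rather than a clock bump:)

# eSRAM bandwidth = bus width in bytes x clock.
def esram_gbps(bus_bits, clock_ghz):
    return bus_bits / 8 * clock_ghz

print(esram_gbps(1024, 0.8))   # 102.4 GB/s -- the earlier VGleaks figure
print(esram_gbps(2048, 0.8))   # 204.8 GB/s -- the pastebin's 2048-bit claim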
 
12 GB of 384-bit DDR3 with an overall BW of 306 GB/s would make the Xbox better than a GTX Titan in terms of memory, and that GPU costs $1000.
I call BS.
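(Where the 306 comes from, for anyone checking: it's simply the pastebin's DDR3 and eSRAM numbers added together:)

# DDR3 bandwidth = transfer rate x bus width in bytes.
ddr3_gbps = 2133e6 * (384 / 8) / 1e9   # 102.4 GB/s, as the pastebin lists
esram_gbps = 204.0                     # the pastebin's eSRAM figure
print(ddr3_gbps + esram_gbps)          # ~306.4 GB/s combined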
 
I also don't see anything that couldn't have been written by a well-informed member of NeoGAF or B3D.

To quote AlStrong, *shrug*

So, if you know it's something that could have been written by a rabid fanboy or by someone who wants to have fun at the expense of naive people, why do you post garbage like this here?
 
Shifty, a VSP is also diagrammed in the patent 3dil said showed a VSP as something closest to VLIW4/5 and Jawed said was pre-Evergreen (patent, post).
Yep, but different. That's a VSP identified in part of a circuit diagram. The new image is a categorical VSP diagram, drawn to illustrate a VSP, and it has a different configuration that includes a scalar alongside the vector MACCs. I don't believe marcberry's position at all, but this does look to me like one embodiment where the lowest-level ALU includes a scalar. There's no context in the patent to read what it's trying to say, though, and patents can show embodiments of ideas that don't represent final designs.

OK, here's the latest pastebin being passed around GAF. Tar and feather me if you must; at least I don't see anything glaringly impossible about it.

http://pastebin.com/sckq0WNF

I also don't see anything that couldn't have been written by a well-informed member of NeoGAF or B3D.
Yep. Take the existing knowledge, up the numbers a little, and you sound reasonably plausible. Of course, that's by ignoring the issue of how you get from one known design to the new one. Without an explanation of how MS have gone about developing this new design (threw away old designs and made new silicon in three months? Developed two systems concurrently and are now picking the most expensive one?), they avoid the obvious challenge to their position.
 
12 GB of 384-bit DDR3 with an overall BW of 306 GB/s would make the Xbox better than a GTX Titan in terms of memory, and that GPU costs $1000.
I call BS.
That doesn't mean much. Titan is priced at a premium, and the same performance could be achieved for a lower BOM. That doesn't prove anything - it just disproves your proof. :p

So, if you know it's something that could have been written by a rabid fanboy or by someone who wants to have fun at the expense of naive people, why do you post garbage like this here?
Well, it is a rumour thread, and this one isn't so stupidly outlandish that it can be dismissed out of hand (unlike 4 POWER7 cores + 20,000 RPM SCSI HDD, blah blah). Any rumour could be made up by rabid fanboys or by someone who wants to have fun at others' expense.
 
12 GB of 384-bit DDR3 with an overall BW of 306 GB/s would make the Xbox better than a GTX Titan in terms of memory, and that GPU costs $1000.
I call BS.

Ignoring the fact that one is 2.0 teraflops and the other 4.5?

Also, aren't we often told "you just can't add the bandwidths like that"?

And yeah, as Shifty said, price != cost in the GPU game. If the rumour is right, Nvidia will soon introduce some GPUs that get near Titan performance for "only" $500-$600.
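(The 4.5 figure is just Titan's paper math, using the core count and base clock from Nvidia's public specs:)

# GTX Titan (GK110) peak: 2688 CUDA cores x 2 ops/cycle (FMA) x 0.837 GHz.
print(2688 * 2 * 0.837 / 1000)   # ~4.5 TFLOPS, vs ~2.0 for the pastebin Durango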
 
Final Specifications (as of March 2013):

CPU: 8-core custom AMD architecture (enhanced Jaguar), 1.6 GHz
GPU: GCN, 20 shader cores providing a total of 1280 threads, 0.8 GHz; at peak performance, the GPU can effectively issue 2 trillion floating-point operations per second
Memory: 12 GB DDR3-2133, 384-bit, bandwidth: 102.4 GB/s
64 MB eSRAM, 2048-bit ("on die"), bandwidth: 204 GB/s
Accelerators: 4 data move engines; image, video, and audio codecs; Kinect multichannel echo cancellation (MEC) hardware; cryptography engines; SHAPE block
TDP: max. 185 W

The architecture design is the same as the one from the alpha specifications (February 2012).
The main enhancements are the memory controller (custom "IP", 384-bit) and the GPU part of the die (12 SCs to 20 SCs; 32 MB eSRAM to 64 MB eSRAM, double bandwidth).
This results in both an increase in power consumption (from 125 W to 185 W) and an increase in the size of the APU (300 mm² to 450 mm²).
Yield rates of the chip have been moderate at best (improvements since September 2012).

An upgrade is very possible, but this is not an upgrade; this is a next-NEXT-gen console, an Xbox 4.
This, plus Kinect 2, at $499 would mean the competitors get kicked out of the business. Forget about it, it's really too much: over 300 GB/s, 64 MB of eSRAM...

I bet on it: if the impossible happens, I'll buy 3 of these.

 
I think people should deal with the fact that MS have chosen the system they did. They put a low-powered, cheap CPU together with a good, albeit underpowered, GPU and gobs of cheap RAM for a reason. They had a strategy several years ago and that one won; they knew Sony could eclipse these specs without exactly breaking the bank, and they didn't have a knee-jerk reaction after Sony's conference because they already knew the system specs.

An upgrade was probably possible back in early 2012, but if they didn't go for it back then, it means we are getting the leaked specs. It's difficult to predict whether the spec difference will be a deathblow to MS or whether they will be able to lure people with their services and software, but one thing is clear: MS knew that with these kinds of specs they would have the less powerful system, but they still chose to go with it.
 
An upgrade is very possible, but this is not an upgrade; this is a next-NEXT-gen console, an Xbox 4.
This, plus Kinect 2, at $499 would mean the competitors get kicked out of the business. Forget about it, it's really too much: over 300 GB/s, 64 MB of eSRAM...

I bet on it: if the impossible happens, I'll buy 3 of these.

I don't see 2.0 TF as any next-next-gen box... just an evolution.

That's still mid-to-low-range PC GPU computing power.
 
I think MS's strategy here is more to broaden the market and reach more people with some kind of proper "all in one" product that has some extra stuff (like Kinect).

Things like services and functionality will do more for sales and for expanding the market.
We also have unknowns like "how good is Kinect 2, can it deliver the original promise of Kinect 1" and "what kind of impact will things like Oculus Rift/Fortaleza/Google Glass have", and so on. Functions and services will do more to expand the market and improve sales than FLOPS.

MS seems to be better prepared than Sony. Sony went the more traditional route, albeit with extras added (as a secondary thing). MS seems to be treating games and all the added extras with the same importance, meaning they will be pushing everything under the same umbrella: "Entertainment".

As I see it, MS is in a better position, strategically, to do this.
However, we don't know Sony's plans either...

On the other hand, MS dwarfs Sony financially, so whatever mistakes MS makes with Durango can somehow be rectified later on. If Sony gains more traction because of "better" specs, then MS can adjust by securing more content... so then it becomes a case of "it doesn't matter if you have great hardware if you lack software".

In any case, interesting times ahead :)
 