News & Rumors: Xbox One (codename Durango)

The enhanced programming guide and associated features only work in the US. Simply plugging in an HDMI cable and using whatever features the source device offers shouldn't be restricted, though.

Is there some official link? I can download TV programming for my local cable provider in Windows Media Center (Mexico), and I'm talking about an analog TV signal.

EDIT: Penello confirmed there is no dGPU.
 
Albert Penello again on GAF...

Originally Posted by Albert Penello http://www.neogaf.com/forum/showthread.php?p=80951633#post80951633
I see my statements the other day caused more of a stir than I had intended. I saw threads locking down as fast as they pop up, so I apologize for the delayed response.

I was hoping my comments would lead the discussion to be more about the games (and the fact that games on both systems look great) as a sign of my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, that perhaps people can debate those individual merits instead of making personal attacks. This should hopefully dismiss the notion I'm simply creating FUD or spin.

I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, or their launch, or what they have said. But I do need to draw comparisons, since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront that I have nothing but respect for those guys, but I'm not a fan of the misinformation about our performance.

So, here are couple of points about some of the individual parts for people to consider:

• 18 CUs vs. 12 CUs ≠ 50% more performance. Multi-core processors have inherent inefficiency with more CUs, so it's simply incorrect to say 50% more GPU.
• Adding to that, each of our CUs is running 6% faster. It's not simply a 6% clock speed increase overall.
• We have more memory bandwidth. 176 GB/s is peak on paper for GDDR5. Our peak on paper is 272 GB/s (68 GB/s DDR3 + 204 GB/s ESRAM). ESRAM can do read/write cycles simultaneously, so I see this number misquoted.
• We have at least 10% more CPU. Not only a faster processor, but a better audio chip also offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and we have been using GPGPU in a shipping product since 2010 - it's called Kinect.
• Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30 GB/s, which significantly improves the ability of the CPU to efficiently read data generated by the GPU.

Hopefully with some of those more specific points people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly but at least you can see I'm backing up my points.

I still believe that we get little credit for the fact that, as a SW company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has their strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given this continued belief of a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.
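A quick sanity check on the bandwidth bullet above, as a minimal Python sketch. The 68 and 204 GB/s figures are the ones quoted in the post; the caveat in the comments is mine:

    # Peak-on-paper sum, using the figures from the post above.
    ddr3_gbps = 68.0     # DDR3 main memory, peak
    esram_gbps = 204.0   # ESRAM with simultaneous read/write, peak
    print(ddr3_gbps + esram_gbps)  # 272.0 -- the quoted combined peak
    # The sum is only reachable when both pools are saturated at once; a
    # workload bound on a single pool sees that pool's figure, not the sum.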

 
He's not in PR but I agree, he's not making it any easier. Personally, I don't think there's anything Microsoft can say to make most people change their minds about the tech. Microsoft are so vilified at this point I don't think they can ever recover. They really need the games & OS to speak for themselves.

Tommy McClain
 
So first the engineers at MS say they weren't shooting for the graphics high ground.
Then Al says it's not all about the specs.
Now we have a Major Nelson ("Xbox 360 has 278.4 GB/s of memory system bandwidth") retread, with specs being everything now.

One way to keep it interesting, I guess. What's in the water at Redmond these days!
 
I would suggest, as others have stated, that Penello was just saying the obvious flop advantage doesn't necessarily translate directly to a similar performance gap.
But how would he be certain of the competitor's advantages and so on? Industrial espionage happens all the time, and for far smaller stakes than what these companies are playing for.
 
More from him on where his information comes from:

Originally Posted by Albert Penello http://www.neogaf.com/forum/showthread.php?p=80962073#post80962073
At Microsoft, we have a position called a "Technical Fellow." These are engineers across disciplines at Microsoft who are basically at the highest stage of technical knowledge. There are very few across the company, so it's a rare and respected position.

We are lucky to have a small handful working on Xbox.

I've spent several hours over the last few weeks with the Technical Fellow working on our graphics engines. He was also one of the guys that worked most closely with the silicon team developing the actual architecture of our machine, and knows how and why it works better than anyone.

So while I appreciate the technical acumen of folks on this board - you should know that every single thing I posted, I reviewed with him for accuracy. I wanted to make sure I was stating things factually, and accurately.

So if you're saying you can't add bandwidth - you can. If you want to dispute that ESRAM has simultaneous read/write cycles - it does.

I know this forum demands accuracy, which is why I fact checked my points with a guy who helped design the machine.

This is the same guy, by the way, that jumps on a plane when developers want more detail and hands-on review of code and how to extract the maximum performance from our box. He has heard first-hand from developers exactly how our boxes compare, which has only proven our belief that they are nearly the same in real-world situations. If he wasn't coming back smiling, I certainly wouldn't be so bullish dismissing these claims.

I'm going to take his word (we just spoke this AM, so his data is about as fresh as possible) versus statements by developers speaking anonymously, and also potentially from several months ago before we had stable drivers and development environments.
 
FFS he keeps acting like a car salesman.

So he's saying that comparing numbers is wrong... and then he spins a completely misleading memory-bandwidth addition.

99.7% of the memory runs at 68 GB/s.
0.3% of the memory peaks at 204 GB/s on paper, but delivers much less in practice. It's not nothing and can be very useful, but I'm certainly calling out his false-equivalence fallacy.
He doesn't address the number one point about GPGPU, which is: if they don't have enough CUs, what's left for GPGPU?
He also doesn't address the amount of resources reserved for Kinect or the system.

This is called Spin and FUD.
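For what it's worth, the capacity split behind those percentages is easy to check. A rough sketch; the 8 GB and 32 MB figures come from the thread:

    # Capacity share of the two pools (figures from the thread).
    esram_mb = 32
    total_mb = 8 * 1024
    print(100.0 * esram_mb / total_mb)  # ~0.39% -- the post rounds to 0.3%
    # Capacity share alone says little; the real question is what share of
    # per-frame traffic the small pool can absorb.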
 
FFS he keeps acting like a car salesman. [...] This is called Spin and FUD.

Well, people (GAF) requested more info from him, and he is just answering them. I don't know if his numbers are wrong, but I think you have to consider where he is posting that information.
 
It's not really relative or subjective. It's measurable.
No. The threshold of "quiet" is subjective, because each person has a different level of tolerance and different experiences for calling something quiet. It's a subjective word.

It's also relative, because until someone has experienced a really quiet refrigerator, he might say that his current Fisher & Paykel refrigerator is very quiet, because it is compared to his previous GE refrigerator. The point of reference for such a subjective word will change with what a person has experienced. If you live downtown, for example, quiet is a completely different frame of reference. That's what relative means.

So what objective figure would you associate with "quiet"?

I personally set my own point of reference at 20 dBA, because that's what I get from my video projector above my head: I hear it if I concentrate on it, but it's quiet for me. I don't see the Xbone being much less than 30 dBA at 1 m when playing a AAA title. But I could be wrong.
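For reference on the dB scale, here is a minimal sketch using the 20 and 30 dBA figures above; each 10 dB step is a factor of 10 in intensity:

    # A 10 dB step is 10x the sound intensity (and roughly 2x the perceived
    # loudness), so 30 dBA vs. 20 dBA is a bigger gap than the numbers suggest.
    def intensity_ratio(db_a, db_b):
        return 10 ** ((db_a - db_b) / 10)
    print(intensity_ratio(30, 20))  # 10.0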
 
The enhanced programming guide and associated features only work in the US. Simply plugging in an HDMI cable and using whatever features the source device offers shouldn't be restricted, though.

http://www.techradar.com/news/gamin...ow-well-xbox-one-will-play-with-uk-tv-1176297

It will be coming to other regions in time.

That's both relative and subjective. What's the point of reference to consider something "quiet"?
What would be the passing grade to say they succeeded?

http://www.physicsclassroom.com/class/sound/u11l2b.cfm
 
FFS he keeps acting like a car salesman. [...] This is called Spin and FUD.

What's wrong with the comments?

As for the memory bandwidth, what is the issue with the majority of the memory being slow and only a small portion being fast? You are never going to be addressing all of that RAM quickly all of the time. The largest consumers of bandwidth are operations performed by the RBEs, which again are not going to be addressing all that RAM continuously. If you can localize operations from the RBEs into complete but fairly localized tasks, then you can very effectively use the available bandwidth of the ESRAM. AMD's graphics architectures already perform all pixel operations (past the rasterization stage) in small tiles, meaning there is already some built-in efficiency to the ESRAM even before other software (API or program) measures are taken to utilize it in the most efficient manner possible.

Whether they can achieve an effective bandwidth similar to a 256-bit GDDR5 interface is unknown, and will likely vary from one application to the next depending on the types of operations being performed, but you can't necessarily dismiss it as a false equivalency, and it is absolutely incorrect to dismiss it based on the "size" of the ESRAM vs. the DDR3.
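One way to picture that argument is a toy throughput model. The split fraction is my assumption, not a measurement; only the 204 and 68 GB/s peaks come from the thread:

    # Toy model: a fraction f of all memory traffic is steered to the ESRAM
    # and the two pools run concurrently; sustained throughput is capped by
    # whichever pool saturates first.
    def sustained_gbps(f, esram=204.0, ddr3=68.0):
        if f <= 0.0:
            return ddr3
        if f >= 1.0:
            return esram
        return min(esram / f, ddr3 / (1.0 - f))

    print(sustained_gbps(0.75))  # 272.0 -- both pools saturate together
    print(sustained_gbps(0.50))  # 136.0 -- DDR3 becomes the bottleneck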
 
It's definitely a false equivalence.
Otherwise I could add in the register files, and their bandwidth would dwarf everything else as unimportant.
A helpful pool, yes, but not equivalent to a global pool; I'm certainly not dismissing its usefulness. He claims you can add bandwidths together, and it's certainly not that simple.
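The register-file reductio can be made concrete with rough, purely illustrative numbers. The lane, operand, and width values below are my assumptions for a GCN-class part; only the 12-CU count and the 853 MHz GPU clock relate to the box under discussion:

    # Reductio: fold the register files into the sum and the "total
    # bandwidth" explodes into a huge but meaningless figure.
    cus, lanes, operands, bytes_each, clock_ghz = 12, 64, 3, 4, 0.853
    regfile_gbps = cus * lanes * operands * bytes_each * clock_ghz
    print(round(regfile_gbps))  # ~7861 GB/s -- dwarfs 272, and proves nothing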
 
What's wrong with the comments? [...] you can't necessarily dismiss it as a false equivalency, and it is absolutely incorrect to dismiss it based on the "size" of the ESRAM vs. the DDR3.

That makes too much sense...:LOL:
 
How is it a false equivalence if the majority of your bandwidth consumption can be steered to the 32 MB pool, consuming its bandwidth effectively while limiting the traffic that hits external RAM? This is the very nature by which a TBDR operates, but even well-controlled immediate-mode renderers can make effective use of it, especially on a closed platform.
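A minimal sketch of that locality argument, with assumed numbers (16 bytes per pixel is hypothetical; the 32 MB is the ESRAM size):

    # Tile the render target so the working set fits in the 32 MB pool;
    # per-pixel read-modify-write traffic then stays in fast memory, and each
    # finished tile is written back to DDR3 once.
    ESRAM_BYTES = 32 * 1024 * 1024
    BYTES_PER_PIXEL = 16  # assumed: e.g. two 8-byte render targets
    def passes_needed(width, height):
        pixels_per_pass = ESRAM_BYTES // BYTES_PER_PIXEL
        return -(-(width * height) // pixels_per_pass)  # ceiling division
    print(passes_needed(1920, 1080))  # 1 -- a 1080p target fits in one pass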
 
The combined bandwidth may be possible. Perhaps there are cases where the ROPs are rendering out of the SRAM while the ALUs use the main-memory bandwidth for a GPGPU computation. If you can execute out of both pools of RAM simultaneously, then I'd say you can add the bandwidths.
 