IMO, the article makes no sense overall because it's based on the assumption that it's either a Tegra X1 or a Tegra X2.

Eurogamer also wrote this article:
The ability to use HDMI cables shouldn't be mutually exclusive with having a proper dock with all connections in place. The Surface Pro line has dedicated USB and Mini-DP ports and a Thunderbolt-esque proprietary connector for charging and connecting to the dock (with audio+video+USB ports).

Righto, and have to fish for the HDMI cable every time you wanna use it, or have it schlepping messily on the side of your TV when you're not using it? (And the charging cable as well, btw.) Awesome suggestion! So why not simply have a small dock there which holds the cables in place, and also holds the console in place while docked, instead of having it loitering like a bum around your AV gear?
Latency and not getting compression artifacts?

Why even fuck with HDMI cords? Why not use the existing wireless Android screen-casting technology that all modern phones and tablets support?
The NX won't do VR.Put a camera in a base that charges and hooks to the tv and then let the player put the console on their head and play vr games . With ir tracking they would be able to track the controllers and visor.
You trolin, mate?

We know AMD's flops are not equal to Nvidia's flops.
I assume he meant marketing count flops differently in both firms...

You trolin, mate?
Is it really used only for short bursts? Why would it be, in the Shield TV? There shouldn't be any thermal issue, as they can use active cooling. Power-wise it shouldn't be a problem either; there is no battery etc.

IMO, the article makes no sense overall because it's based on the assumption that it's either a Tegra X1 or a Tegra X2.
There's never been a single console, from Nintendo or anyone else, that uses off-the-shelf SoCs. The Tegra X1 isn't appropriate for a console because there's a Cortex-A57 module in there that's used for short bursts of high power consumption on short-lived single-threaded tasks (mostly JavaScript?). The Tegra X2 makes even less sense with that weird Denver+A57 core mix.
We had that conversation a few pages ago (I think p98). It means the peak numbers don't give a good way to compare two different architectures. They give you a clue if the numbers differ by integer factors, but if an XB1 has 20% more GCN-flops than the NX has Maxwell/Pascal-flops, that's really nothing to put your finger on and claim either of the two to be faster.

You trolin, mate?
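To put a number on why peak flops are a weak cross-architecture signal, here's the standard peak-FP32 calculation (one FMA counted as two flops) for the XB1's GCN GPU and a Tegra X1. The core counts and clocks are the commonly cited spec-sheet figures, used purely for illustration; delivered performance per peak flop differs between GCN and Maxwell anyway.

```python
# Peak FP32 throughput = ALUs x clock x 2 (one FMA = two flops).
# Core counts and clocks are the commonly cited spec-sheet figures.

def peak_gflops(alus: int, clock_mhz: float) -> float:
    """Theoretical peak FP32 GFLOPS, counting an FMA as two flops."""
    return alus * clock_mhz * 2 / 1000

xb1 = peak_gflops(768, 853)    # Xbox One GPU: 768 GCN ALUs @ 853 MHz
tx1 = peak_gflops(256, 1000)   # Tegra X1: 256 Maxwell ALUs @ ~1 GHz

print(f"XB1 (GCN):     {xb1:.0f} GFLOPS")  # ~1310
print(f"TX1 (Maxwell): {tx1:.0f} GFLOPS")  # 512
print(f"ratio:         {xb1 / tx1:.1f}x")  # ~2.6x
```

An integer-factor gap like this 2.6x on paper is at least a clue; a 20% gap between two different architectures tells you next to nothing.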
Metal Gear Rising has much better performance on xbox360 than Shield TV.

Flops on the GPU are only a small part of the whole performance equation, which includes CPU, RAM, OS/API overheads, and title quality (cheap port or focussed development).
Furthermore, that one example isn't representative, as the video itself says. There are other X1 games (Doom 3 BFG Edition) that outperform their 360 counterparts. The video also mentions flops and how the X1 is a class apart.
So, in short, you shot yourself in the foot with that reference.
Well, do they really though? An FMA is an FMA, regardless of NV or AMD hardware running the code.

I assume he meant marketing count flops differently in both firms...
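For what it's worth, there is one concrete way the same silicon can carry two different headline flops numbers even though an FMA is an FMA: Maxwell-generation Tegra parts run FP16 at double rate, which is how Nvidia marketed the Tegra X1 as a "1 TFLOPS" chip while its FP32 peak is 512 GFLOPS. A quick sketch with the commonly cited figures:

```python
# Same chip, two headline numbers: Tegra X1, 256 ALUs @ ~1 GHz.
# FP32 peak counts one FMA as two flops; the FP16 path runs at
# double rate (2-wide packed ops), which yields the "1 TFLOPS" figure.

ALUS, CLOCK_MHZ = 256, 1000           # commonly cited TX1 specs

fp32 = ALUS * CLOCK_MHZ * 2 / 1000    # GFLOPS at full precision
fp16 = fp32 * 2                       # double-rate half precision

print(f"FP32 peak: {fp32:.0f} GFLOPS")   # 512
print(f"FP16 peak: {fp16:.0f} GFLOPS")   # 1024
```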
Latency and not getting compression artifacts?
IIRC, at least until a while ago it was almost impossible to use a Chromecast to play games through an Android phone/tablet, because of the ~1s latency. Same with Intel's WiDi solutions.
Maybe for the 1080p games on XO. They'd probably want to alter geometry LOD though.

How much of a Pascal GPU would you need to run an Xbone/PS4 game at 540p (assuming that is the screen resolution)? It's probably doable on a 256 gigaflop handheld.
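The 256-gigaflop guess can be sanity-checked with crude pixel-count scaling. This assumes per-pixel shading cost dominates and stays constant, which is exactly why geometry LOD would still need altering:

```python
# Crude sanity check: scale XB1-class peak flops by pixel count.
# Assumes per-pixel shading cost dominates; geometry/CPU load doesn't
# shrink with resolution, hence the LOD caveat.

xb1_gflops = 1310        # XB1 peak FP32 (768 ALUs @ 853 MHz)
px_1080p = 1920 * 1080
px_540p = 960 * 540      # a quarter of the pixels

budget = xb1_gflops * px_540p / px_1080p
print(f"540p share of an XB1-class pixel load: ~{budget:.0f} GFLOPS")  # ~328
```

~328 GFLOPS is in the same ballpark as the 256-gigaflop figure, so for fill-bound 1080p titles the guess isn't crazy.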