Nintendo Switch Tech Speculation discussion

http://m.neogaf.com/showpost.php?p=228608213 (everything from the edit on down). The Eurogamer specs and the Foxconn specs both draw the same ~2.23W, and combined with all the other relationships between these clocks and the X1, it's safe to assume these were final clocks and that the 8-day stress test was a stability test to check for throttling. Not a single frame was dropped, according to the leaker, while running what we assume is the Unity fish demo.

Again, the 1060 would be in a dock. It draws considerably less power than comparable AMD cards, and we don't know the size of such a device, but it would obviously be thicker than a thin-and-light laptop; two PCIe slots wide is thick enough. There is no thermal issue with the card, and the VRAM might not be GDDR5 anyway.

We don't know when a device like this would ship, but they made 2,000 devkits, which is hardly a prototype run, so software is likely being tested on it. As for HD assets for 4K: the SCD patent mentioned being able to plug a hard drive into it. That storage would likely not hold the game software itself, since you can't take the dock with you on the go, but it could hold HD assets for a game when the device is docked, since the VRAM would be in the dock itself.

The A72 at 1.78GHz would exceed the PS4's CPU and even trade blows with the slightly faster PS4 Pro CPU, AFAIK.
 

So we're going to ignore all reliable sources and believe a most likely made-up BS leak? Not even one reliable insider has backed this leak. And the person you linked is a Nintendo fanboy who wishes those specs were true and ignores every reliable source that says otherwise; he made the mods change the thread title to a fan-fiction thread because of his rambling.
 

If you don't want to look at the leak and see how it's been confirmed, that's fine. Those are just numbers and facts, all lining up.

As for Eurogamer, they admitted in that very thread, after looking at the leak, that the clocks could have been increased. Meanwhile, insiders gave us rumors of performance upgrades in the October devkits, and said Mario was not running at a stable frame rate back in early October but that it was solved by November.

Ignore how the leak got the weight of the device right, how he knew about the Joy-Cons' neon SKU, and how he knew the Joy-Cons' inner shoulder buttons are named SL and SR. It's easy to dismiss this leak as fantasy if you don't look at it, but once you do, you have to take it seriously. And when you do that, you have to work through the clocks he outlines, which leads you to power draw identical to Eurogamer's clocks if the final hardware is indeed 16nm with A72 cores, which makes all the sense in the world to anyone who would design this chip for someone else. Not to mention that the leaker stated the devkit was on a 16nm process.

It's very clear that Eurogamer's clock leak was real, and it's also very clear that the Switch was stability tested for 8 days at Foxconn's clocks. With those specs matching in power consumption, the retail unit is more likely to have the new specs: Eurogamer's info is from a developer possibly working on older kits, with the info dated to fall, while the Foxconn employee was looking at final hardware in late November.

I get it; I'm done preaching about this. Take whatever info you want from it. I just thought I'd bring it to a place that might understand it, and for those that do, have fun.
 
If you're insisting that the speculation in the leak is ironclad, why do you keep saying it's Cortex-A72 when the Foxconn employee says Cortex-A73?

No one's ignoring how much the leaker got right; it seems clear that they had real hardware. But they also made it clear that their comments on the CPU and GPU architecture were based on their own guesses and not on some kind of empirical evidence.

I have to ask again, why would nVidia use Cortex-A72 or A73 for a 16nm Switch SoC and not Tegra X2?

The power consumption speculation is based on an incredible number of WAGs (wild-ass guesses); I struggle to believe that a Pascal SM consumes just over 6W at 1.6GHz given actual TDPs. The Cortex-A57 power numbers also seem very pessimistic; I'm guessing they're based on the notoriously bad Qualcomm implementation? I doubt Tegra X1's version does so badly. Physical implementation matters a lot here.
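To put rough numbers on the SM point, here is the kind of back-of-envelope check I mean. These are board-level TDPs (they cover GDDR5, VRMs, fan and so on), so the figure for everything that isn't the GPU core is a guess, not a measurement:

```python
# Crude per-SM power estimate from a public board TDP. The 20 W allowance for
# memory/VRM/fan is an assumption.

def watts_per_sm(board_tdp_w, est_non_gpu_w, sm_count):
    """Subtract a guess for non-GPU-core power, divide by SM count."""
    return (board_tdp_w - est_non_gpu_w) / sm_count

# GTX 1060 (GP106): 10 SMs, 120 W board TDP, boost around 1.7 GHz.
print(watts_per_sm(120, 20, 10))   # ~10 W per SM at desktop clocks and voltages
```

Even granting a lower voltage at 1.6GHz, getting from there down to ~6W per SM is a stretch.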
 

Exactly. Fanboys are really reaching here. Every reliable source is backing Eurogamer's specs; these people have their reputations and reliable inside sources, like developers working on the Switch, giving them solid info. Compare that to this Foxconn leak from a few unreliable sources making guesses about the hardware, where the only person pushing it as real is a delusional fanboy from NeoGAF called zombie. Anybody who has been following the industry will tell you: if something leaks and is legit, many other websites and reliable sources will back it.
 
If you're insisting that the speculation in the leak is ironclad, why do you keep saying it's Cortex-A72 when the Foxconn employee says Cortex-A73?

No one's ignoring how much the leaker got right; it seems clear that they had real hardware. But they also made it clear that their comments on the CPU and GPU architecture were based on their own guesses and not on some kind of empirical evidence.

I have to ask again, why would nVidia use Cortex-A72 or A73 for a 16nm Switch SoC and not Tegra X2?

The power consumption speculation is based on an incredible number of WAGs (wild-ass guesses); I struggle to believe that a Pascal SM consumes just over 6W at 1.6GHz given actual TDPs. The Cortex-A57 power numbers also seem very pessimistic; I'm guessing they're based on the notoriously bad Qualcomm implementation? I doubt Tegra X1's version does so badly. Physical implementation matters a lot here.

His speculations are all wrong; his reporting of facts is all right. We do know that he reported from inside Foxconn and knew the battery capacity of 4310mAh.

For instance, the Foxconn employee assumes the Switch screen is 1080p; it's 720p, so that speculation was wrong. He assumes the microSD card slot is a SIM card slot for 4G; wrong again. He assumes the GPU is Pascal (it is likely a custom shrunken X1; if you want to call that Pascal, that's OK, it's what Nvidia did). He assumes it's an A73 core, but the Switch design would likely predate A73 availability, so I (being the z0m3le from that thread, a fact I've never hidden) assumed it was A72 from the very first time I read this leak back in November, though I mostly dismissed it because of the 4G talk, even though the battery capacity did fit what we expected.

The A57 numbers are from the Samsung Exynos 5433 chip, here: http://images.anandtech.com/doci/8718/A57-power-curve.png (this same chart is in my post, if you bothered to look at it).
The A72 power consumption is based on the TSMC-manufactured 16nm Kirin 950 chip, here: http://images.anandtech.com/doci/9878/power-big.png
The Maxwell power estimates are based on Thraktor's work on Pascal SM power consumption, in this chart: http://www.maths.tcd.ie/~rooneyot/gaf/pascal_powercurve.png

What is surprising is that all of these data points link the two clock leaks we have to the same power consumption. As for why Nvidia would use the A72 over Denver? Well, 1) Nintendo always uses its own CPU implementation for handhelds, at least dating back to the GBA, and they have all been based on ARM chips; and 2) Denver isn't designed for these tasks; in fact, Volta-based Tegra chips will use a new custom CPU from Nvidia, not Denver 1 or Denver 2. As for X2, the chip you mean is called P1; it has a Pascal-based GPU, but Nintendo likely designed their custom chip around the X1 and not P1.
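To make the power arithmetic explicit, this is all the comparison above really amounts to. The per-component wattages below are placeholders for whatever you read off those charts and Thraktor's curve; treat them as assumptions, not measurements:

```python
# Portable-mode SoC estimate = CPU cluster + GPU block (ignores RAM, display, I/O).

def soc_power(cpu_core_w, cpu_cores, gpu_w):
    return cpu_core_w * cpu_cores + gpu_w

# Eurogamer-style config: 4x A57 @ ~1.0 GHz on 20nm + Maxwell GPU @ 307.2 MHz
eurogamer = soc_power(cpu_core_w=0.40, cpu_cores=4, gpu_w=0.6)   # placeholder values

# Foxconn-style config: 4x A72 @ 1.785 GHz on 16FF + a 16nm GPU at a higher clock
foxconn = soc_power(cpu_core_w=0.40, cpu_cores=4, gpu_w=0.6)     # placeholder values

print(eurogamer, foxconn)   # the claim is that both land around ~2.2 W
```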
 
There's no reason why they have to. Truth is, they would have revoked everybody's x86 license if they could have (it took years of lawsuits before they and AMD came to a truce), and in this case it's a high-speed peripheral bus which has competitors on the market, dominant competitors even (USB 3.1 at 10Gbps), so there's no monopoly issue. They really want TB to take off, but they're trying to make water roll uphill, as the only peripheral standards that make it out of niche-device ghettos are cheap and almost or completely royalty free. I agree it's self-defeating, but this is the company that tried to make IA64 a thing for almost a decade after its time had come and gone.

On a technical note, they haven't released it to any ARM vendor except Apple, and they may not even have engineering support for ARM even if they wanted to enable another ARM vendor, on top of a completely new OS stack to write and support as well.
Apparently, according to AnandTech, ASMedia already sells a controller that enables alternate mode for USB-C, so it's not just Alpine Ridge that enables that function:

Front functionality comes through a USB 3.0 Type-A, a USB 3.0 Type-C (ASRock in their sign claim this is USB 3.1 and Thunderbolt 3, however I cannot see an ASMedia ASM1142/ASM2142 nor an Alpine Ridge controller for this)



Where do these numbers come from? A72 improves the power consumption per clock over A57 but not by such a huge degree...

It's A57 20nm vs. A72 16FF, but it's just like @Syferz wrote.

[Attached images: A57 and A72 power-curve charts.]


I'm waiting for Anandtech to review the Huawei Mate 9 with Cortex A73 cores. Those should be even better.


If there was a 1060 (or something like it) in the devkit that the leaker inspected internally, then there would be a suitably respectable heatsink and fan combination that would be clearly visible, and also an array of 6 GDDR5 chips (also easily visible) in there. I am unaware of these having been identified in the system. (And laptop GPUs most certainly will throttle under stress, as will many desktop cards, come to think of it.)
The devkit could have a downclocked GP106 with 16 ROPs disabled and a corresponding 128bit memory bus.
Again, it's a devkit so it could be a placeholder chip for a dedicated GPU coming down the line.

Regardless, if this was to fit the architecture described in the patents, the external GPU would come inside a new, advanced dock that would have its own cooling system.
I hope no one is thinking that the Switch has a hidden GTX 1060 inside it. That would be utterly stupid and impossible.


On top of this, the NX games don't ship with "HD assets" on their little carts, and those would be required to make the most of a 1060-like GPU ... meaning you would also need an additional HDD in the dGPU dock and downloads of improved assets over broadband. This would in turn potentially cause variations in loading and streaming performance that would have to be tested around. This is assuming, of course, that you're not just running stock "sub-XB1/PS4" assets at 4K ... because that would be a waste.
Why would the console need to install or download anything if the "HD assets" were already in the cartridge?
If it's a cartridge, bandwidth should already be very good. There's no need to install anything. The same happens in the Vita.


Nintendo is releasing the new 32X to increase the power of the switch, just plug it into the dock, it comes with a 1060 to run switch games at 4k. It all makes sense.
It doesn't make much sense.
Most hardware-related decisions taken by Nintendo haven't made any sense either, though. And the fact remains that the Foxconn leaker is definitely legitimate and an external GPU would match Nintendo's patents about the supplementary compute device.
It might be very real in a prototype stage but it'll never come out, for example. Or it may come out only next year, or within 2 years.



every reliable source is backing eurogamers specs
Eurogamer is the only source for eurogamer's clocks.
 
OK, we have definitely fallen down a well here. The Foxconn leaker is a long goddam way from being as credible as is claimed here, as even in the original Chinese-language post he is very clear that he is speculating. Somehow, though, in the game of translation and hype this has morphed into "there is definitely a massive GPU dock attached via an unknown path, with unknown tech, to CPU cores that couldn't feed said GPU if it had a running start".

The ASMedia controller is indeed a USB 3.0 controller, which is also to say a USB 3.1 Gen 1 controller, which means 5.0 Gbps. Apple insisted on bastardising the USB 3.1 designator by using the USB 3.1 name to imply newness when all they did was stick a Type-C connector on there, which is why they asterisked it with 'Gen 1', a.k.a. plain ol' USB 3.0 at 5.0 Gbps. USB Type-C is a physical connector; it is completely divorced from the signal tech behind it, which is why there are 5.0 Gbps Type-C ports, 10 Gbps Type-C ports, Thunderbolt 3 Type-C ports and mixed modes, because fml. This is a goddamn nightmare for me when talking with customers about notebook docking, and that's before we get into USB Power Delivery, which is also not part of the USB 3.1 Gen 1/Gen 2 or Type-C spec. ARRRRGGGGGHHHHHHHHHH!!!!!!

In short, the external GPU is not happening, because they don't have Thunderbolt in there, which is what gives you PCIe x4 lanes for use with external GPUs. USB 3.1 Gen 2 (a.k.a. USB 3.1 at 10 Gbps) does not have PCIe lanes at all; it just has 10 Gbps of USB, which means any GPU connected over it needs to sit behind a host, like DisplayLink technology, which is basically a soft GPU and thus terrible for gaming.
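Rough numbers, with approximate encoding overheads, to show the gap:

```python
# Usable link bandwidth: Thunderbolt 3's PCIe tunnel vs. USB 3.1 Gen 2.

PCIE3_LANE_GBPS = 8 * (128 / 130)            # PCIe 3.0: 8 GT/s, 128b/130b encoding

tb3_pcie_x4_GBps = 4 * PCIE3_LANE_GBPS / 8   # ~3.9 GB/s of actual PCIe for an eGPU
usb31_gen2_GBps = 10 * (128 / 132) / 8       # ~1.2 GB/s, and it's USB protocol,
                                             # not PCIe, so no native eGPU at all
print(tb3_pcie_x4_GBps, usb31_gen2_GBps)
```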
 
Eurogamer is the only source for eurogamer's clocks.

Eurogamer has proven to be a trusted source; they are on point, and they are confirming what they heard from developers, versus some guys on NeoGAF who are Nintendo fanboys with an unreliable leak. I can't see Eurogamer being off that much with the specs, especially when they confirm that four ARM Cortex-A57 cores have been locked in for retail. You guys do realize Eurogamer is a multi-million-dollar website that knows its shit and has contacts with Switch developers, yet we're going to believe an unreliable source with no other source backing his info, except a delusional fanboy that wants to believe.
 

You quoted me with someone else's post.

This leak literally proves Eurogamer's clocks were fact, and it also makes it seem highly likely that the clocks were changed. The SoC at both Eurogamer's and Foxconn's clocks draws the same power in portable mode, ~2.23W, and both sets of clocks relate to the X1 and to each other perfectly. The thing is, the Foxconn clocks were leaked before December 19th, when Eurogamer's leak happened; Eurogamer's clocks came from fall, and Hermii posted in the NeoGAF thread that Eurogamer's info dated to fall and that the clocks could have changed. This leak doesn't discredit Eurogamer at all; it reinforces their original leak, but it points to different clocks and specs on final hardware.
 
OK, we have definitely fallen down a well here. The Foxconn leaker is a long goddam way from being as credible as is claimed here, as even in the original Chinese-language post he is very clear that he is speculating.

Got to love how these things snowball. The language barrier alone is a pretty big hurdle here. I just can't see Nvidia licensing the A72 or A73 ARM cores for the Switch alone. If they were going to do that, you would think they would have used them for the Parker chips. Even concerning power draw, wouldn't the A57 cores consume quite a bit less power moving from 20nm to 16nm FinFET?

On top of everything, we now have games to help draw conclusions. I see speculation over at GAF about there being a 128-bit, 50GB/s memory bus, but to me, the lack of AA in Mario Kart 8 makes me think it's the 64-bit, 25GB/s memory bus. DF showed MK8 running a locked 60fps, so what's up with the missing AA? Unless, of course, the game already maxes out the memory bandwidth. I get the same suspicion from Zelda running at 900p docked while the portable mode runs just as well at 720p. The increased clocks should have made the bump to 1080p easily obtainable, unless of course you're bumping into memory bandwidth limitations.
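For reference, this is where those two bandwidth figures come from, assuming LPDDR4 at 3200MT/s; the transfer rate and bus width are the assumptions here, since Nintendo hasn't published either:

```python
# Peak memory bandwidth = bus width (bytes) x transfer rate.

def peak_bandwidth_GBps(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9

print(peak_bandwidth_GBps(64, 3200e6))    # 25.6 GB/s -- Tegra X1-style 64-bit bus
print(peak_bandwidth_GBps(128, 3200e6))   # 51.2 GB/s -- the speculated 128-bit bus
```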

As for the A57 cores at 1GHz having trouble matching or exceeding the performance of the tri-core PPC Wii U CPU, I find that hard to believe. Everything I have seen shows superior performance per clock, and the NEON FPU on the A57 is light-years beyond the SIMD capabilities of the PPC in the Wii U.
 
Someone feel free to correct me if I'm wrong, but if we compare this to iPhone leaks, we have seen details about the battery, chassis and design leaked from China, but has anyone ever accurately leaked chip info before a launch? I'm not talking about the name but specific details. We usually get those after launch through journalists with Apple connections and ChipWorks.

I mean, a Foxconn worker would know the details of the battery, Joy-Cons etc. because he is there helping manufacture it. But how would he have info about clock frequency, whether it's A57 or A73, or whether it's Maxwell or Pascal?
 
The devkit could have a downclocked GP106 with 16 ROPs disabled and a corresponding 128bit memory bus.
Again, it's a devkit so it could be a placeholder chip for a dedicated GPU coming down the line.

Still, it would have its own array of GDDR5 and its own heatsink, along with its own bank of VRMs. I'm not aware that the leaker spotted any of these things.

Why would the console need to install or download anything if the "HD assets" were already in the cartridge?
If it's a cartridge, bandwidth should already be very good. There's no need to install anything. The same happens in the Vita.

If all you did was increase the textures 2 x 2 (one mip map level) you could easily increase the total size of the game package by 100%. And that's going to roughly double the size of your cart costs, which will already be several dollars, and therefore several dollars more than competing formats. And that's for something that only a tiny proportion of NX users would ever be interested in. This doesn't strike me as a smart business move.

And given that cart sizes are likely to increase in powers of two, you aren't looking at fine-grained control over cart sizes. Juggling content to fit may easily end up compromising one or both versions.

Furthermore, you then have a dilemma: how do you get this data off the cart for stock NX users? Do you:

- load the base texture for the dGPU version, generate mip maps and then discard the base texture? This would increase access times, waste power and require more memory reserved as a working space. Or do you:

- store the first mip map on the cart too? This would increase required texture storage space by 25% and potentially force you up to the next cart size, or require content compromises - particularly bad for the majority of users who will pay for it but never benefit from it.

In conclusion: storing Ultra HD dGPU assets on the base cart would be expensive and extremely wasteful for the vast majority of users who would never buy the "Super Dock". It would also mean versioning two sets of assets and increase testing, for little financial return. And NX still wouldn't be able to run AAA games that benefit the most from such powerful hardware.
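As a sanity check on those percentages, here's the mipmap arithmetic, assuming square textures; the "textures are roughly a third of the package" fraction below is purely an illustrative assumption:

```python
# Each mip level is 1/4 the size of the level above it.

base = 1.0                    # size of the current top-level texture
old_chain = base * 4 / 3      # full existing mip chain is ~4/3 of the top level
hd_level = 4 * base           # one extra level at 2x2 the resolution

# The extra HD level alone is 3x the entire existing chain, so if textures are
# roughly a third of the package, the package roughly doubles:
print(hd_level / old_chain)   # 3.0

# Second option above: also store the stock-resolution level explicitly so the
# base console never has to downsample the HD level at load time:
print(base / hd_level)        # 0.25 -- the ~25% extra texture storage mentioned
```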
 
His speculations are all wrong; his reporting of facts is all right. We do know that he reported from inside Foxconn and knew the battery capacity of 4310mAh.

For instance, the Foxconn employee assumes the Switch screen is 1080p; it's 720p, so that speculation was wrong. He assumes the microSD card slot is a SIM card slot for 4G; wrong again. He assumes the GPU is Pascal (it is likely a custom shrunken X1; if you want to call that Pascal, that's OK, it's what Nvidia did). He assumes it's an A73 core, but the Switch design would likely predate A73 availability, so I (being the z0m3le from that thread, a fact I've never hidden) assumed it was A72 from the very first time I read this leak back in November, though I mostly dismissed it because of the 4G talk, even though the battery capacity did fit what we expected.

Okay, so in other words this guy provides nothing with regard to the architecture of the GPU and CPU and mostly just released information that has otherwise been confirmed. So where's the relevance? I know you think the stress-test CPU/GPU clocks given mean it can't be an X1 at the Eurogamer clocks for actual games, but that's totally unsubstantiated; even Thraktor on that NeoGAF thread agrees with that.

The A57 numbers are from the Samsung Exynos 5433 chip, here: http://images.anandtech.com/doci/8718/A57-power-curve.png (this same chart is in my post, if you bothered to look at it).

You provided that graph in a much earlier post in the thread; don't give me crap for not yet having read the entire thread when you linked to a single post.

I don't think we can really judge power consumption of one CPU across different implementations and processes, even if those processes are ostensibly given the same name.

The Maxwell power estimates are based on Thraktor's work on Pascal SM power consumption, in this chart: http://www.maths.tcd.ie/~rooneyot/gaf/pascal_powercurve.png

I know the source (which you gave in the post), but I don't know what the numbers are derived from. And I have a lot of skepticism towards how that data is extrapolated; it assumes an incredible dynamic range when, in all likelihood, the part hits Vmin and reverts to linear scaling long, long before the left side of the graph.
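A toy model of what I mean; the voltage/frequency pairs below are made up, and the shape of the curve is the point, not the values:

```python
# Above Vmin, dynamic power scales roughly with f * V^2 and voltage drops with
# frequency, so power falls super-linearly. At Vmin, only f keeps falling, so
# power scales merely linearly from there on down.

def dyn_power(f_ghz, v, k=3.0):
    """Toy dynamic-power model: P = k * f * V^2 (capacitance folded into k)."""
    return k * f_ghz * v * v

VMIN = 0.70
F_AT_VMIN = 0.8                           # assume the part hits Vmin at 0.8 GHz
p_at_vmin = dyn_power(F_AT_VMIN, VMIN)

for f, v in [(1.6, 1.00), (1.2, 0.85), (0.8, VMIN), (0.4, VMIN)]:
    p = dyn_power(f, v) if f >= F_AT_VMIN else p_at_vmin * (f / F_AT_VMIN)
    print(f"{f:.1f} GHz: {p:.2f} W")
```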

What is surprising is that all of these data points link the two clock leaks we have to the same power consumption. As for why Nvidia would use the A72 over Denver?

No, I never asked why they used A72 instead of Denver; I asked why nVidia used A72 for Switch and A57 for Tegra P1 (sorry, didn't even realize it was given this name). Stop and think about it. If nVidia went through the trouble of producing a proper A72 implementation for their SoC, there's zero reason why they would still be using A57 on P1, when A57 is inferior in every metric (power consumption, performance AND size). Hence why every other SoC maker is using A72, some having skipped A57 entirely. The only reason nVidia would be using A57 in P1 is that they already did the legwork on it for X1 and do not want to spend the engineering time and possibly updated licensing fees. But if they did all that for Nintendo, they'd be reusing the work for their own chip.

And you say Nintendo designed their chip around the X1, but it uses Cortex-A72, Pascal, and is 16nm? That's hardly an X1-like design!
 
OK, we have definitely fallen down a well here.

Could we pretty please keep it civil here? What's so wrong about discussing things in the realm of speculation, and how exactly does that hurt you or anyone else?
If this speculation bothers you (or others) so much, feel free to ignore this thread and post in the following thread:

https://forum.beyond3d.com/threads/hardware-specifications-of-nintendo-switch-reveal.59806/
In that thread the users are forbidden from discussing anything other than Eurogamer's specs. So if you definitely want to stick to those and not discuss anything else, that thread is for you.

Now... no one here is saying "OMG Switch has secret sauce Nintendo RULEZZZZ because I believe in god Reggie!!!111one". AFAICS, logic and valid arguments are being applied here, so that "falling down a well here" statement feels a bit uncalled for.

The Foxconn leaker had pretty much been forgotten until Nintendo came out with official detailed specs for batteries, buttons and other stuff.
The games shown so far have pretty underwhelming visuals. Those do point to a very underwhelming SoC, namely an underclocked Tegra X1 at Eurogamer's clocks.
That most probably means the final units have a TX1 at Eurogamer's clocks. Period.

However, it could also (less probably, IMO) mean that 1st-gen developers were developing for less powerful devkits with a TX1 and those clocks. Then the custom chip for production arrived and came out more powerful (i.e. 1785MHz CPU and 921MHz GPU in docked mode). According to @Syferz, there was someone from Eurogamer's staff who wrote on NeoGAF that those clocks were from earlier in the fall, whereas the Foxconn leaker posted in late November, so we could simply be looking at different devkit iterations.

Something like this wouldn't be unprecedented. We know for a fact that the first PS4 games couldn't use more than 4GB because Sony wasn't sure there would be 4Gbit chips in time for production.
Only later games used the console's full 8GB of RAM.
1st-gen Xbox One developers targeted an 800MHz GPU and 1.6GHz CPU, and only when production started were they allowed to target the final clocks.

I'm not even going to address the Eurogamer fanboy who's already being rude and flame-baiting everyone who dares to doubt the almighty, omniscient, omnipresent, super-rich Eurogamer God...
But for everyone else, please do keep it civil.


the Foxconn leaker is a long goddam way from being as credible as is claimed here

Number of Eurogamer leaks that have been proven 100% right so far:

1 - SoC made by Nvidia - already claimed by SemiAccurate and Emily Rogers several months before
2 - 6" display - not sure where it came from originally but IIRC this had been around well before Eurogamer mentioned it
3 - Detachable controllers - I'm not sure where it came from so let's assume Eurogamer


Number of Foxconn leaks that have been proven 100% right so far:

1 - 4310mAh battery in the main unit -> no one else claimed this
2 - 300g tablet weight measured on a digital scale -> no one else claimed this, and Nintendo's official spec is 298g
3 - Each JoyCon has two shoulder buttons called SL and SR -> no one else claimed this until January's official reveal
4 - Each JoyCon weighs 50g -> no one else claimed this
5 - 525mAh battery in each JoyCon -> No one else claimed this
6 - Exact number and type of I/Os present in the dock -> no one else claimed this
7- "Orange and Blue" Joycons -> Units are actually red and blue (it could be that the chinese word is the same for orange and red?), but no one else claimed this until January's official reveal


I think anyone could say that, so far, the Foxconn leaker has a much better track record than Eurogamer, Emily Rogers or anyone else.
And the most interesting part is that the Eurogamer and Foxconn leaks may both be true, just pointing to different devkits. Both Eurogamer's and Foxconn's clocks seem to use the same base frequency with different multipliers.


as even in the original Chinese-language post he is very clear that he is speculating.
Some things the leaker himself flagged as speculation, like the CPU being Cortex-A73, the GPU being Pascal and the SoC being made at TSMC.
Others, such as the clocks and the amount of RAM, were presented with the same certainty as the 4310mAh battery.


The ASMedia controller is indeed a USB 3.0 controller, which is also to say a USB 3.1 Gen 1 controller, which means 5.0 Gbps.
So maybe it isn't using Thunderbolt, but that doesn't mean it can't use the USB-C pins to carry a PCI-Express signal some other way. I can't find it right now, but I'm pretty sure I saw the ASMedia controller datasheet and it clearly mentioned the ability to enter alternate mode (redirecting 20 pins to work with a different controller). You only need 16 pins to transmit 4 PCIe lanes, plus another 3 or 4 for acknowledgment and power states.
For example, Microsoft is using its dedicated port to carry PCIe between the Surface Book tablet and its GPU-equipped keyboard, and it does so without resorting to Thunderbolt. And it's hot-pluggable like Thunderbolt connections.



Still, it would have its own array of GDDR5 and its own heatsink, along with its own bank of VRMs. I'm not aware that the leaker spotted any of these things.
Yes, it should have all those things. That doesn't mean this Foxconn worker could tell what is what, though. Central processing chips are generally easier to identify than anything else, especially if they have a heatsink attached (which he does mention when he refers to the fan).
He could be wrong or lying about the 200mm^2 chip. I myself have said it could be just a southbridge or an FPGA driving more I/Os. Though it's intriguing how he specifically claimed 12*18mm, which seems to match GP106's measurements.



No, I never asked why they used A72 instead of Denver; I asked why nVidia used A72 for Switch and A57 for Tegra P1 (sorry, didn't even realize it was given this name). Stop and think about it. If nVidia went through the trouble of producing a proper A72 implementation for their SoC, there's zero reason why they would still be using A57 on P1, when A57 is inferior in every metric (power consumption, performance AND size). Hence why every other SoC maker is using A72, some having skipped A57 entirely. The only reason nVidia would be using A57 in P1 is that they already did the legwork on it for X1 and do not want to spend the engineering time and possibly updated licensing fees. But if they did all that for Nintendo, they'd be reusing the work for their own chip.

There's no P1, AFAIK. Parker is Tegra X2.
Two simple reasons for Parker not having Cortex-A72:
1 - The Parker design was finished several months before a possible custom SoC for the Switch. Drive PX2 has been in Tesla cars since October (I think?), and QA processes for automotive are probably a lot more demanding than they are for entertainment devices.
2 - Parker was designed for automotive and seems to have at least a 25W TDP, where the ~5W difference between 4x A57 and 4x A72 at 2GHz doesn't really matter. It's not like 5W was going to give those Teslas any meaningful mileage advantage.




If all you did was increase the textures 2 x 2 (one mip map level) you could easily increase the total size of the game package by 100%. And that's going to roughly double the size of your cart costs, which will already be several dollars, and therefore several dollars more than competing formats. And that's for something that only a tiny proportion of NX users would ever be interested in. This doesn't strike me as a smart business move.
Ultra Street Fighter 2 for $40 doesn't strike me as a smart business move either, but maybe that's why the games are so expensive so far: the cartridges are expensive.
 
My comment was not intended as an attack, rather a general point that we are now in a dark place that is hard to get out of, grasping at things that probably don't exist. When we're reduced to falling back on tech such as this phantom PCIe transport that Nintendo and Nvidia have cooked up but decided not to tell the world about, we're pretty far from the light.

As to the idea that it has been stealth-upclocked, where is the evidence aside from this individual reddit post? Recall the Wii U launch and that amazing Zen garden demo that no one got within a mile of, quality-wise, post launch. Nintendo is certainly willing to push demos that are unrepresentative of final game quality (just like everybody else), so why did we not see any? We had games that are not due for launch until Q4, but no hint of the greater detail a GPU with an order of magnitude more performance might provide.

I'm delighted to see the post shared, but beyond amusement I'm not sure it adds much to the conversation.

Ultra Street Fighter 2 for $40 doesn't strike me as a smart business move either, but maybe that's why the games are so expensive so far: the cartridges are expensive.

But they're not traditional carts; they're just flash memory, so they shouldn't be a huge factor in the BoM for games (assuming the 13GB figure for Zelda BotW is true).
 
However, it could also (less probably, IMO) mean that 1st-gen developers were developing for less powerful devkits with a TX1 and those clocks. Then the custom chip for production arrived and came out more powerful (i.e. 1785MHz CPU and 921MHz GPU in docked mode). According to @Syferz, there was someone from Eurogamer's staff who wrote on NeoGAF that those clocks were from earlier in the fall, whereas the Foxconn leaker posted in late November, so we could simply be looking at different devkit iterations.

A Tegra X1 device in a Switch-like form factor with suitable (modest, also Switch-like) cooling wouldn't have had a problem with 1785MHz CPU and 921MHz GPU clocks. Shield TV managed this a year and a half ago (2GHz CPU, actually). If it's a 1GHz CPU and a 307.2/768MHz GPU, it's because of a design built around power budgeting for portable gaming.

I just don't see how these clock speeds substantiate claims that it's using Cortex-A72, Pascal, or was manufactured on 16nm. The logic behind the argument that a 16nm A72 @ 1.78GHz uses the same power as a 20nm A57 @ 1GHz (and vice-versa with the GPU) I find pretty hand-wavy, and definitely not the smoking gun @Syferz seems to be painting it as. Let's say that developers were using X1 hardware as an early placeholder with the expectation that final hardware would use something with A72, Pascal, and 16nm. There'd be no reason why such a devkit would be limited to the Eurogamer clocks because they wouldn't want to restrain it to a portable form factor (when it's a powered device). Not when the power budget doesn't reflect what the final device will be. Unless Nintendo also thought they'd be using something like X1 then changed it late, but I doubt this because such dramatic design changes don't usually happen so late.

For all we know, maybe launch games will be stuck with 1GHz/307.2MHz/768MHz, but later firmware updates will allow for higher speeds at the expense of battery life on select games. This too is not unprecedented.

There's no P1, AFAIK. Parker is Tegra X2.
Two simple reasons for Parker not having Cortex-A72:
1 - The Parker design was finished several months before a possible custom SoC for the Switch. Drive PX2 has been in Tesla cars since October (I think?), and QA processes for automotive are probably a lot more demanding than they are for entertainment devices.
2 - Parker was designed for automotive and seems to have at least a 25W TDP, where the ~5W difference between 4x A57 and 4x A72 at 2GHz doesn't really matter. It's not like 5W was going to give those Teslas any meaningful mileage advantage.

nVidia announced in October that Tesla will be using Drive PX2 (https://blogs.nvidia.com/blog/2016/10/20/tesla-motors-self-driving/); while a lot of other sites have interpreted that to mean they already are, I don't think this is the case. Here Tesla says "coming in 2017": https://electrek.co/2016/11/11/tesl...r-self-driving-hardware-its-five-years-ahead/

I don't know what the realistic lead time is for getting software onto a self-driving system (not exactly the same as traditional automotive), but even if gaming hardware doesn't need a lot of QA, it has still for many years lagged far behind the cutting edge found in other devices, because developers need time with it to get games ready. It'd be a huge reversal if Nintendo were now using something newer than competing products.

No matter what Parker was designed for, if they had Cortex-A72s ready to use they'd use them; they're better in metrics other than just power consumption and perf/W.
 
The power consumption graph is for A57 cores manufactured on the 20nm process; how much would that drop by moving to 16nm FinFET? The 2nd-gen Maxwell in Tegra was already basically Pascal; correct me if I am wrong, but I believe it was largely the move to 16nm that changed the name to Pascal.
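One rough way to frame the first question: take the measured 20nm cluster power and apply an assumed iso-clock scaling factor. Both numbers below are assumptions, not Switch measurements; the 0.5-0.6 range is just the ballpark foundries have advertised for 20nm planar to 16FF:

```python
# Assumed 4x A57 cluster power at ~1 GHz on 20nm (from the earlier chart
# discussion), scaled by an assumed 20nm -> 16FF power-reduction factor.

A57_CLUSTER_20NM_W = 1.6

for power_scale in (0.5, 0.6):
    print(f"scale {power_scale}: {A57_CLUSTER_20NM_W * power_scale:.2f} W")
```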
 
My comment was not intended as an attack, rather a general point that we are now in a dark place that is hard to get out of, grasping at things that probably don't exist.
No one is grasping. At least to me, this is just a light conversation about what could or could not be.
After looking at the launch games, I already said what I think the SoC is most likely to be at this point: a TX1 at those clocks, and that's it.
It doesn't stop me from thinking and discussing what it might be if the Foxconn leaker is right. I just happen to enjoy that. :)


When we're reduced to falling back on tech such as this phantom PCIe transport that Nintendo and Nvidia have cooked up but decided not to tell the world about, we're pretty far from the light.
Microsoft hasn't told the world how they cooked up a hot-swappable PCIe transport in the Surface Book they've had on the market for almost a year, either.
The idea of external PCI-Express ports for graphics is rather old; ATi had XGP, which even went to market back in 2009.
And once you find a way to redirect the necessary number of pins in USB-C to carry 4 PCIe 3.0 lanes, it might not be all that hard to do what Microsoft is already doing with their own port.


As to the idea that it has been stealth-upclocked, where is the evidence aside from this individual reddit post?
None. But that reddit post has carried immense weight ever since Nintendo posted those weight and battery specs on their official site.


I just don't see how these clock speeds substantiate claims that it's using Cortex-A72, Pascal, or was manufactured on 16nm.
The 1785MHz clock speed on a 16FF Cortex-A72 module results in power consumption very similar to a 1GHz 20nm Cortex-A57, as you can see in the charts I posted. It's that simple.
The strongest argument for the GPU being Pascal would be nvidia's own statement about the Switch having "the same architecture as the world's top-performing GeForce gaming graphics cards". When that post appeared on nvidia's blog in October, the Pascal Titan X had been on the market for some months already.


Let's say that developers were using X1 hardware as an early placeholder with the expectation that final hardware would use something with A72, Pascal, and 16nm. There'd be no reason why such a devkit would be limited to the Eurogamer clocks because they wouldn't want to restrain it to a portable form factor (when it's a powered device). Not when the power budget doesn't reflect what the final device will be.
Nintendo and nvidia didn't know what clock speeds the custom SoC would attain when it came back from production, so they went with whatever clocks the TX1 could do at their initial power/thermal targets. They knew it couldn't be any worse, so they just stuck with those first clocks for the initial generations of devkits.

Does this make a lot of sense? No. A certain amount of headroom, at very low risk, could and should have been used.
But it wouldn't be the dumbest thing Nintendo did with the Switch, like the tablet missing cameras or microphones, and requiring a smartphone with an app installed to do voice chat for the games.


For all we know, maybe launch games will be stuck with 1GHz/307.2MHz/768MHz, but later firmware updates will allow for higher speeds at the expense of battery life on select games. This too is not unprecedented.
True.

No matter what Parker was designed for, if they had Cortex-A72s ready to use they'd use them; they're better in metrics other than just power consumption and perf/W.
They're better in metrics that may not matter for a GPGPU-centric autonomous driving solution.
 