Nintendo Switch Tech Speculation discussion

As large as the difference is between 600p and 720p, it's even more telling going from 720p to 1080p when reading.
Yup, and you can pull tricks to make text rendering look better on a lower-resolution screen, but games are often drawing things into the 3D distance, where low resolution really hampers image quality.
 
As large as the difference is between 600p and 720p, it's even more telling going from 720p to 1080p when reading. I'm now using a Samsung 1080p tablet and want a better tablet with a UHD screen, but this one is still functional and performant enough for surfing and watching videos that I can't justify upgrading.

Nintendo Switch just may not be a very premium device for reading and surfing the web. Is the screen even 7"? It looks about 6", right?

For gaming, having a screen resolution that actually matches the native rendering resolution at least some of the time is more of a benefit than a disadvantage. Plus, lower resolution can help with power consumption and price, two areas where Nintendo needs all the help it can get.

I assume the dock allows for overclocking/higher clocking and therefore native 1080.

Let's say the dock really does allow higher clocks because of conductive and forced convection cooling between the two units (over an interface that I'm guessing is plastic). How much higher is really feasible? Enough to go from 720p to 1080p? Will both the GPU and memory bandwidth really scale by over 2x?

The downside with designing something like this with a big dynamic range in performance is that efficiency will be lower for at least some points on the performance curve - generally the lower points.
 
They say there's no extra hardware in the dock, but I wouldn't say that rules out the SoC running at higher clocks when docked, since it's no longer power constrained.

My guess is that in handheld mode the resolution will be lower, probably along with some dumbed-down graphics, to hopefully get decent battery life, with the thing running at full speed when docked.
 
IMO, there's no way you could overclock a mobile SOC to go from 720p to 1080p. It's a 4x jump in pixel count... If we're lucky there's a nice upscaler that will interpolate the frames, but that's it.
 
IMO, there's no way you could overclock a mobile SOC to go from 720p to 1080p. It's a 4x jump in pixel count... If we're lucky there's a nice upscaler that will interpolate the frames, but that's it.

It's a 2.25x jump in pixel count, but I still think even that is a stretch.
 
The overclock while docked would simply make 1080p rendering more accessible. Basically, if the developer wants to display at 1080p while docked, their game cannot be pushing the system to the max in portable mode. Or, more likely, if they design the game to run docked at 1080p, it will run just fine at 720p in portable mode.
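
To put rough numbers on that budgeting argument, here's a small sketch; the 80% figure and the function name are made up purely for illustration, and the only hard number is the pixel-count ratio:

```python
# If a game is GPU-bound on pixel work, the docked clock uplift has to cover
# the jump in pixel count to hold the same frame rate at 1080p.
PIXELS = {"portable": 1280 * 720, "docked": 1920 * 1080}

def docked_clock_needed(portable_gpu_utilisation: float) -> float:
    """Relative docked GPU clock needed to keep frame rate at 1080p.

    portable_gpu_utilisation: fraction of the portable GPU budget the game
    already uses at 720p (1.0 = maxed out in portable mode).
    """
    pixel_ratio = PIXELS["docked"] / PIXELS["portable"]   # 2,073,600 / 921,600 = 2.25
    return portable_gpu_utilisation * pixel_ratio

# A game using ~80% of the portable GPU at 720p would need roughly a 1.8x
# docked clock to hit 1080p at the same frame rate; a game already maxing
# the portable GPU would need the full 2.25x.
print(docked_clock_needed(0.8))
```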
 
They might do 540p on mobile and 600p when docked. Going to be hard to even hit 720p if they want games to be current gen graphics wise.
 
Sources indicate no performance boosting from the dock.

Actually, fwiw...

Per Laura Kate Dale, who got a great many things right about the NS pre-reveal and has, IMO, proved worthy of at least a solid amount of trust. The article is a good read aside from this info too, btw:

http://letsplayvideogames.com/2016/10/a-deep-dive-on-lpvgs-nintendo-switch-reports-and-info/

"RE: dock, what I've heard is additional processing (lacking specifics atm) for when docked."

"This information again came from sources A (Nintendo), B (Ubisoft) and D (Manufacturing)."

"All sources claim the hardware has an easier time running docked compared to when out and about as a portable."

Now I would agree there are varying possibilities at play here, from, say, a simple upscaler to full-blown active cooling and "upclocking" (more like just fully clocking when plugged in versus downclocking on the go, presumably?). I might also point out the vent holes people have noted on top of the NS, plus the general fact that two different modes wouldn't be at all unusual, given that all sorts of other portable devices like smartphones, tablets, and laptops do the same sort of thing. Both point towards the theory that it'll have some sort of cooling solution while docked to compensate for the higher heat output.
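
Purely to illustrate that last point, the policy below is the kind of thing phones and laptops already do; every name and number in it is a placeholder, not a leaked Switch clock:

```python
# Hypothetical DVFS-style policy: cap the GPU clock based on power source and
# thermal headroom, as portable devices already do. Numbers are placeholders.
def gpu_clock_cap_mhz(docked: bool, soc_temp_c: float) -> int:
    if not docked:
        return 500       # on battery, passive cooling: hold a conservative cap
    if soc_temp_c > 85.0:
        return 650       # docked but running hot: throttle part-way
    return 1000          # docked, wall power, dock-assisted airflow: full clocks
```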
 
"Easier time running docked" could either mean higher clocks or a dGPU in the dock.
To be honest, the best possible solution I'd see for tablet+dock would be something like half a Drive PX2, with the tablet having a TX2 and the dock a GP107 through PCIe 3.0 x4. This would take the whole set to $350 or more, but they could sell just the tablet for ~$250 and the dock separately for another $100 or more.
Developing for this would be a can of worms IMO, mainly the part where games need to switch from 1 GPU to the other seamlessly. I guess the dGPU would have to be completely blocked from doing compute tasks so latency wouldn't bring unexpected bottlenecks.

I did say in the part you originally quoted that nVidia didn't have a competitive 64-bit CPU that they could offer integrated in an SoC with their GPU.
And the CPU core didn't have to come from nvidia at the time. There were/are so many options. The original Xbox used an Intel CPU and an nvidia GPU, and Nintendo used IBM CPUs with GPUs from ATi and then AMD.
And what if the CPUs weren't 64-bit? Can't both the Cortex A15 and A17 address over 4GB? I thought they could.



I can't find where it's specified quad-core Power7 @ 3.2GHz uses < 120W, do you have a source for that?
I didn't specify it was Power7. I wrote they were producing a 45nm SoC in 2011 using 3.2GHz CPUs that operated under a 120W power supply.


Jaguar cores @ ~1.6-1.75 GHz probably consume < 30W for the 8-core cluster. I doubt quad Power7+ would be anywhere within spitting distance of that figure when scaled down, given how aggressively it's designed for high clocks and high SMT throughput. If it made sense from a power standpoint to offer SKUs with several ~2GHz Power7+ cores, they probably would have.

Or maybe they did offer and both Sony and Microsoft simply figured out the Jaguar cores were a better alternative.


8-core Jaguar is about 55.5mm^2 in XB1 (roughly similar in PS4). That includes the 4MB of L2 cache. Power7+ die is 567mm^2 for 8 cores on IBM's 32nm SOI. If you cut the cores in half and took out the accelerators and other unneeded things (SMP related) it'd still easily be over 200mm^2. And then there'd probably be a decent decrease in density moving everything else to this process.
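
Just replaying the arithmetic from those figures (nothing new here, only the numbers quoted above):

```python
# Area figures quoted above, not independently verified here.
jaguar_8c_mm2 = 55.5    # 8 Jaguar cores + 4 MB L2 in XB1
power7p_mm2   = 567.0   # Power7+ die, 8 cores, IBM 32nm SOI

naive_half = power7p_mm2 / 2          # ~283 mm^2 before stripping accelerators etc.
print(naive_half / jaguar_8c_mm2)     # ~5.1x the whole 8-core Jaguar block
print(200.0 / jaguar_8c_mm2)          # even at the ~200 mm^2 floor, ~3.6x larger
```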

Which would be one of the reasons why both console makers went with AMD.

AMD had the best possible solution because they had a good enough CPU to offer at that exact time, but this wasn't really because of a consistent technical advantage over nVidia.
So having the best possible solution for a SoC within the required timeframe isn't a technical advantage?


It was due to circumstances that just don't apply today, and don't contribute to a cause for concern that Nintendo is using nVidia in Switch.
Who exactly is concerned that Nintendo is using nvidia in the Switch?
It's certainly not me. I think nvidia taking care of the hardware is the scenario that best avoids a complete disappointment on the console's specs.


I don't include MCMs on the basis that they were not on the table for the XB1 or PS4 designs.
Source?
Where exactly did Mark Cerny or Yoshida or Don Mattrick or any other Sony/Microsoft official claim that only SoCs would be on the table? Especially since no home console at the time had ever used a full SoC, I find it very hard to believe that they would impose such a limitation from the start.
 
I think you guys think too much inside the box. Why not make a fairly large chip (think an advanced Tegra X1 CPU part + a GTX 1050 non-Ti level GP107 GPU) and then switch off half the GPU (or possibly more) when in mobile mode? It might even have 2 GB of GDDR5 that's only used when docked. That might not be enough to bring performance to XB1 levels, but it could push it into the general area and well beyond "normal" mobile levels. BTW, are we sure the Switch SoC is made by TSMC? Couldn't it be made by Samsung (like the GP107)?
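
As a toy model of that configuration idea (every fraction, size, and pool name below is speculative, either the post's or added here for illustration, not known hardware):

```python
# Toy model of the "big chip, half gated off on battery" idea, including the
# speculated dock-only 2 GB of GDDR5. All values are illustrative placeholders.
def hypothetical_config(docked: bool) -> dict:
    cfg = {
        "active_gpu_fraction": 1.0 if docked else 0.5,    # gate half the GPU on battery
        "memory_pools_mb": {"shared_lpddr": 4096},         # always-available pool (made-up size)
    }
    if docked:
        cfg["memory_pools_mb"]["dock_only_gddr5"] = 2048   # the speculated docked-only pool
    return cfg

print(hypothetical_config(docked=False))
print(hypothetical_config(docked=True))
```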

Another thing: we have not seen the underside of the tablet or the dock from above yet. Maybe there is no fan inside the tablet, but the dock might have a large fan that allows for better cooling than an integrated fan (plus it would save some weight). Air would enter the tablet from the dock outlets through bottom vents on the tablet (as yet unseen) and exit at the top vents. This might allow the nVidia SoC to use a lot more power in docked mode than in mobile mode. Noise might be an issue, though.

Something else that bothers me: why skip Vulkan or DX12 and go with NVN? Less overhead, closer to the hardware? Would that really be reason enough to use a proprietary API? Or maybe nVidia has included something else in the SoC, some special programmable hardware that can do certain things more power-efficiently than either the CPU or GPU, something that doesn't fit the current programming model of DX/Vulkan and therefore needs a new API?

Anyway, just my $0.02. Happy speculating!
 
They won't be able to use DX12 unless they pay license fees and enter into a contract with Microsoft.
 
They won't be able to use DX12 unless they pay license fees and enter into a contract with Microsoft.
Actually, the Windows 10 license for devices under 10" is free, if I'm not mistaken.
But you're right; most people just mean the hardware feature level that DX12 requires, not the software.
 
And the CPU core didn't have to come from nvidia at the time. There were/are so many options. The original Xbox used an Intel CPU and an nvidia GPU, and Nintendo used IBM CPUs with GPUs from ATi and then AMD.
And what if the CPUs weren't 64-bit? Can't both the Cortex A15 and A17 address over 4GB? I thought they could.

I don't know why you're twisting this so far beyond what I said, which was very plainly that nVidia could not offer a competitive 64-bit CPU in an SoC at the time. There are obvious major disadvantages to not using an SoC, not using 64-bit (seriously, PAE is terrible) or having to use a CPU that's much larger and consumes much more power. Older systems used separate discrete CPUs and GPUs because they didn't have much of a choice, and yes at that point there isn't a real roadblock in sourcing them from different companies.
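
For context on the Cortex A15/A17 question above: LPAE widens physical addressing to 40 bits, but each 32-bit process still only sees a 4 GB virtual address space, which is the practical pain behind the "PAE is terrible" remark. Roughly:

```python
# LPAE on Cortex-A15/A17 class cores: wider physical addressing,
# unchanged 32-bit virtual addressing per process.
physical_bytes = 2 ** 40   # 40-bit physical addresses -> up to 1 TiB of RAM
virtual_bytes  = 2 ** 32   # per-process virtual address space stays at 4 GiB
print(physical_bytes // 2**30, "GiB physical,", virtual_bytes // 2**30, "GiB virtual per process")
```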

The entire point behind this statement is that today, nVidia can offer a competitive 64-bit CPU in an SoC. At the time they couldn't, because their CPU efforts were mobile-focused, unlike AMD, whose bread and butter was Windows, and Windows dictated 64-bit earlier than phones and tablets did. Now phones and tablets demand it too, so we've got our 64-bit CPUs.
 
I don't know why you're twisting this so far beyond what I said, which was very plainly that nVidia could not offer a competitive 64-bit CPU in an SoC at the time.

This is the post I initially replied to, and I got the impression you were talking about Microsoft and Sony, not nvidia:

There's been kind of a dark shadow over them since PS4/XB1 came out since they didn't get a design win there and tried to play it off like it didn't matter. But they weren't really in the running because they couldn't yet build an SoC with a competitive 64-bit CPU. So AMD had the right tech at the right time.

Yes, nvidia alone could not have made the whole SoC at the time, at least not with a 64-bit CPU. Choosing an nvidia GPU would have required bringing in CPU cores from a separate entity if 64-bit cores were mandatory (I wonder if they were, though).
We agree on that.

I don't think not being an SoC would have been such a big problem, though. I don't doubt that AMD being able to provide a semi-custom SoC capable of being >8x more powerful than the earlier generation carried substantial weight in the final decision, for both cost-effectiveness and power consumption. I don't think the first hardware teams from Microsoft and Sony started their design contests back in 2008/2009 saying "we only accept SoCs with 64-bit CPUs", as that would have made things way too narrow.
 
If the dock has active cooling, it may be that some of the GPU cores are power-gated off in portable mode, for both battery and thermal reasons, and turned back on in docked mode. That would allow power to scale much further than changing the clock speed alone.
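
The intuition comes from the first-order dynamic power rule of thumb; the core counts, voltages, and clocks below are illustrative only:

```python
# First-order CMOS dynamic power: P ~ N_active * V^2 * f (constants dropped).
# Gating cores scales the N_active term, so total power can fall much further
# than frequency scaling alone would allow. Values are placeholders.
def relative_dynamic_power(active_units: int, voltage: float, freq_ghz: float) -> float:
    return active_units * voltage ** 2 * freq_ghz

docked   = relative_dynamic_power(active_units=2, voltage=1.0, freq_ghz=1.0)
portable = relative_dynamic_power(active_units=1, voltage=0.8, freq_ghz=0.6)
print(portable / docked)   # ~0.19, versus ~0.6 from dropping the clock alone
```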
 