Nintendo Switch Tech Speculation discussion

So I've heard that Nvidia has a workaround for the bandwidth problem? Anyone know what that might be? And if not, how do you think devs will work around it?
 
No AA, no AF, no complex shaders.
If there are issues with bandwidth, developers will tend to use techniques that emphasize the GPU's processing facilities and its on-chip caches. That said, the bandwidth situation in bytes per second per FLOP is equivalent to or better than its desktop siblings, particularly in mobile mode. I'm not sure any tricks out of the ordinary are required; staying on-chip is generally a good idea under any circumstances.
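To put rough numbers on that (using the widely reported clocks, so treat these as approximate): docked, the 256-core GPU at 768 MHz works out to roughly 768 MHz × 256 × 2 ≈ 393 GFLOPS against 25.6 GB/s of LPDDR4, or about 0.065 bytes per FLOP. Portable, 307.2 MHz gives roughly 157 GFLOPS against 21.3 GB/s, or about 0.135 bytes per FLOP. A desktop GTX 1060 (192 GB/s, ~4.4 TFLOPS) sits around 0.044 bytes per FLOP, so the ratio is indeed comparable or better, especially in mobile mode.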

How to best utilize the step up in GPU capabilities going from mobile to docked seems like an interesting question, however, since the CPU/GPU/memory balance shifts significantly between the two modes.
 
Any examples of what this would look like? With and without I mean.
Zelda does not have AF (not sure about AA). There are screenshots on Eurogamer; the blur line looks hideous. Mario Kart 8 does not have AA (lots of shimmering, doesn't differ much from the Wii U version), and possibly no AF either.
I Am Setsuna is missing some light sources and possibly some post-processing filters.

You can make a simple comparison on PC. Is it even possible to disable AF on modern PC games?
 
Okay, I just looked it up. Basically, AF makes textures look better when viewed at oblique angles. I hope they find a way to get some AA on some Switch games, though.

The I Am Setsuna thing is weird. I feel like they probably could have gotten it to 60 fps on Switch if they'd really wanted to. They might have pushed it out quickly for a launch release.
 
I just tried to tease a few more techniques out of sebbbi while he was on tap, so to speak, but any answers would by necessity be rather situational.
Nvidia is a bit like Intel. There are no obvious bottlenecks. Most code runs fast on their GPUs.

Practically the only thing that can hurt you when porting code from AMD to Nvidia is constant buffers. Nvidia has special hardware for constant buffers, and porting raw/structured buffer code to use constant buffers can give you around a 30% gain on Nvidia: https://developer.nvidia.com/content/how-about-constant-buffers.

AMD hardware, on the other hand, hasn't got special constant buffer hardware, but it has a more flexible scalar unit that serves the same purpose. The scalar unit operates on one lane per 64-thread wave and has its own scalar registers and scalar data cache. If the AMD shader compiler detects that a memory load has an address (index) that is guaranteed to be uniform across the wave (64 threads), it emits scalar unit code instead of vector code. Most constant buffer loads use a constant address, so the scalar unit is used. But AMD also does the same optimization for raw/structured buffers.

Console developers prefer raw/structured buffers because the performance is identical to constant buffers, and A) there's no size limit (constant buffers are limited to 4096 float4 elements) and B) a compute shader can write directly to a raw/structured buffer (a constant buffer can only be written directly by the CPU, or filled by a copy operation from other GPU buffers).
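Not Switch-specific, and just to make the Nvidia constant-path idea concrete: as far as I know, the same dedicated constant hardware is what CUDA exposes as __constant__ memory. A hypothetical minimal sketch (nothing to do with any real engine) where the small uniform per-dispatch data goes through the constant path while the bulk data goes through ordinary loads:

```cuda
#include <cuda_runtime.h>

// Small uniform per-dispatch parameters -- the rough CUDA analogue of a cbuffer.
struct Params { float scale; float bias; };

__constant__ Params cParams;   // resides in the dedicated constant bank/cache

__global__ void scaleBias(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        // cParams is read through the constant cache and broadcast to the warp;
        // in[i] goes through the ordinary load path (like a raw/structured buffer).
        out[i] = in[i] * cParams.scale + cParams.bias;
}

void runScaleBias(const float* dIn, float* dOut, int n)
{
    Params p{ 2.0f, 0.5f };
    cudaMemcpyToSymbol(cParams, &p, sizeof(p));   // like updating a cbuffer from the CPU
    scaleBias<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
}
```

In D3D terms, cParams plays the role of the constant buffer and the plain pointers play the role of raw/structured buffers; the uniform data never competes with the bulk loads for the normal memory path.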

I am not developing for Nintendo Switch, so I don't know any low-level hardware details of it. Hopefully Switch developers are allowed to show low-level details in their GDC and SIGGRAPH talks. I would be highly interested in knowing more Nvidia hardware details.
 
wuda if they cuda.
CUDA is great. Way ahead of DirectCompute in programmer friendliness. It's much easier to write generic code and libraries (thus many well-optimized libraries exist), and you can seamlessly integrate CUDA inside C/C++. Much less code bloat and faster iteration. I had high hopes for the new Microsoft SM 6.0 compiler: it added nice features on top of HLSL, but didn't add any productivity improvements to the language. Hopefully it improves rapidly...
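For anyone who hasn't written CUDA, a minimal self-contained sketch of what that integration looks like (hypothetical example, nothing to do with Switch): the kernel is an ordinary C++ template and is launched straight from the host code in the same file.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Generic kernel: a plain C++ template, built by the same toolchain as the host code,
// so one saxpy covers float and double without a separate shader language or binding layer.
template <typename T>
__global__ void saxpy(T a, const T* x, T* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));   // unified memory: one pointer usable on CPU and GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The kernel launch sits directly in ordinary C++ -- no separate binding layer to manage by hand.
    saxpy<<<(n + 255) / 256, 256>>>(2.0f, x, y, n);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);   // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```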
 
It's a Unity game. There's not much you can do to optimise that. Unity is about being a jack of all trades, master of none, so don't expect incredible performance from it.

This isn't the first time I've heard this. Are Unity themselves doing anything about it? I mean, because while I Am Setsuna is a nice-looking game, it's pretty much PS2/Wii-era visuals. Sounds like Unity needs to get their shit together. XD

A question I had for you guys. It was said that an enhanced performance mode was added for the handheld configuration. Basically it clocks up one part and clocks down another. Could they do that sort of thing for the console (docked) configuration as well? Because some games will be more CPU-heavy but less GPU-heavy, and vice versa. I know the Switch was clocked down for battery life and to keep the system from throttling like the Shield does, but could they let devs trade one thing for another?
 
No AA, no AF, no complex shaders.
I still can't believe Nintendo didn't add eDRAM. I wonder how much it would cost nowadays to follow the PS2's example: as much bus width as possible. Effects would look incredible and good image quality should be a given.

It's not like eDRAM is dead; it exists in 128 MB packages...
 
This isn't the first time I've heard this. Are Unity themselves doing anything about it?
The Unity engine runs on everything from the highest-end PC to the lowest-end mobile, plus a couple of TV operating systems. It needs to work in a way that suits all of those targets, so it has to make compromises. It's also all about being easy to use, which forces further compromises. For one thing, it uses C# with garbage collection; that makes the system incredibly flexible but not great for efficiency. Devs can implement some workarounds.

It is what it is. If Unity et al didn't exist and all games had to use proprietary engines, games would run better but there'd be far, far fewer of them. As it is, Unity enables some amazing experiences and gamers should be grateful for it. I think the next big thing is a properly multithreaded engine: at the moment only some parts are multithreaded, as I understand it, and a lot is done on a single main thread, which severely limits usable CPU power.
 
This isn't the first time I've heard this. Are Unity themselves doing anything about it?

Yep

http://aras-p.info/texts/files/2017_GDC_UnityScriptableRenderPipeline.pdf

Switch is between Wii U and Xbox One, from Julian Eggebrecht -
https://mynintendonews.com/2017/03/...ance-is-somewhere-between-wii-u-and-xbox-one/
Speaking of Rogue Squadron :D

News to nobody really..

Except for them getting the rights to their old games.. that's cool.
 
Switch is between Wii U and Xbox One, from Julian Eggebrecht -
https://mynintendonews.com/2017/03/...ance-is-somewhere-between-wii-u-and-xbox-one/
Speaking of Rogue Squadron :D

The Switch having 101% the performance of Wii U would also put it "between Wii U and Xbox One".
The Switch having 99% the performance of Xbox One would also put it "between Wii U and Xbox One".

Want to hear another super relevant remark? In absolute theoretical performance throughput, the Switch is closer to the original Game Boy than it is to the Xbox One (its ~0.4 TFLOPS docked is further from the Xbox One's ~1.3 TFLOPS than it is from zero).
Ha!

Another completely shallow statement from a dev trying his best not to say anything that might bother someone, and ending up saying absolutely nothing of relevance.
I get that they want to avoid the question, but why come up with these completely misleading claims instead of just "I'd rather not comment"?

So it seems that it's just a stock X1. The thread just popped up on GAF. Wonder why they made no modifications?

Original source is TechInsights, who have updated their analysis with an X-ray of the SoC:
http://techinsights.com/about-techinsights/overview/blog/nintendo-switch-teardown/

No changes.
Extremely low effort, double confirmed.
 
That's disappointing (stock X1). But I guess that should have been a given; Nintendo doesn't give a damn about h/w. At least they should be making a nice profit on each unit sold; the Switch isn't all that different from a Shield, and that goes for $199.
 
So it seems that it's just a stock X1. The thread just popped up on GAF. Wonder why they made no modifications?

Cost and time, basically.

No customisation potentially saves hundreds of millions of dollars. Even minor customisation requires chip layout and validation, hardware debugging, re-spins, etc. Older IP is cheaper, an older process (like 20nm) is cheaper, and no R&D is needed on the chip at all... no point dropping half a billion dollars when you don't think it'll help your USP much. Nintendo have better things to spend that money on.

Also removes some uncertainty very early on, which Nintendo have always been keen on.
 
That's disappointing (stock X1). But I guess that should be a given, Nintendo doesn't give a damn about h/w.
Nintendo hardware is solid and reliable. Nintendo do not prioritise performance hardware, and there is zero evidence that high-performance hardware is what customers want. It's why the PS4 Pro is not outselling the PS4. It's why 1080 cards do not outsell 1070 cards. Good enough is literally good enough! :yep2:

The Wii U didn't fail because the hardware was weak; it failed because nobody had any fucking idea what it was supposed to be. Nintendo included!
 