Network technologies for gaming - Wifi versus Ethernet *spawn

This is a symptom of cheap wifi routers which have a dual-band transceiver sharing a single antenna array. A modern equivalent of the poor designs where running 802.11a and 802.11b networks on one router gave compromised performance in both. Cheap routers mean compromised hardware choices.

The only cheap routers that I consistently find to have good performance are Xiaomi's, even better if it's a model supported by xwrt (an OpenWrt fork with special sauce to make the wifi as performant as the stock firmware).

But then it means placing your trust in China, as most models don't have an international version that follows US/EU law.
 
In general what JPT said became irrelevant for most people with good WiFi 5 routers/devices, and almost obsolete with WiFi 6, which released in 2020 and virtually eliminated wireless congestion by having 2,000 subcarrier channels. We've come a long way from 802.11b, where different people competed for 5-11 channels.

I've got a good WiFi 6 router, and before that a WiFi 5 router, and it still has too many spikes in any location with wireless traffic (the aforementioned apartment that I rented briefly). Each time I tried multiple routers from multiple respected vendors (even some enterprise-level routers), and each time there was a lack of consistency in congested locations. So, no, it hasn't become irrelevant for most people, as there are quite a few people living in apartments, condos or flats that share common walls with their neighbors (if their neighbors also have wireless routers with powerful transmitters and multiple APs).

I was hopeful both times after their introduction that I could go fully wireless, but that hasn't proven to be the case either for me or people I know who live in areas with wireless congestion.

Regards,
SB
 
I feel part of this is going to be a difference in subjective viewpoints since I don't think anyone has specifically defined any of their criteria. What's unusable for example? Could be different between the posters here. This is no different than something vague like "playable" FPS or "unplayable" stuttering.

Someone could consider an unusable connection just a moment of jitter once per day because it kills them during their competitive gaming session. Another person could get latency spikes multiple times per hour but never notice it since they aren't playing FPS games regularly.
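To make that less subjective, here is a minimal sketch of how you could put numbers on "spikes" from the machine you game on. The target address, sample count and 100 ms threshold below are arbitrary assumptions; pick whatever matches your own definition of unusable.

```python
# Crude RTT probe: times TCP connects to a nearby host so "spike" talk can be
# backed by numbers. The target, sample count and 100 ms spike threshold are
# placeholders -- adjust them to whatever "unusable" means to you.
import socket
import statistics
import time

TARGET = ("192.168.1.1", 80)   # assumption: your router answers on port 80
SAMPLES = 100
SPIKE_MS = 100.0               # assumption: anything above this counts as a spike

def probe_ms(addr, timeout=2.0):
    start = time.perf_counter()
    with socket.create_connection(addr, timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

rtts = []
for _ in range(SAMPLES):
    try:
        rtts.append(probe_ms(TARGET))
    except OSError:
        rtts.append(float("inf"))  # treat a failed connect as a worst-case spike
    time.sleep(0.2)

finite = [r for r in rtts if r != float("inf")]
if not finite:
    raise SystemExit("no successful probes -- check the target address")
spikes = sum(1 for r in rtts if r > SPIKE_MS)
print(f"samples={len(rtts)} mean={statistics.mean(finite):.1f}ms "
      f"p95={sorted(finite)[int(0.95 * len(finite))]:.1f}ms "
      f"max={max(finite):.1f}ms spikes(>{SPIKE_MS:.0f}ms)={spikes}")
```

Run it once on wifi and once on a cable from the same spot and the "is it actually a problem for me" question usually answers itself.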
 
I feel part of this is going to be a difference in subjective viewpoints since I don't think anyone has specifically defined any of their criteria. What's unusable for example? Could be different between the posters here. This is no different than something vague like "playable" FPS or "unplayable" stuttering.

Someone could consider an unusable connection just a moment of jitter once per day because it kills them during their competitive gaming session. Another person could get latency spikes multiple times per hour but never notice it since they aren't playing FPS games regularly.
How about constant judder like Final Fantasy 16's 60fps mode?

The 30fps mode only rarely judders.

Although with wifi on 5GHz, the judder should be very rare.
 
I've got a good WiFi 6 router, and before that a WiFi 5 router, and it still has too many spikes in any location with wireless traffic (the aforementioned apartment that I rented briefly). Each time I tried multiple routers from multiple respected vendors (even some enterprise-level routers), and each time there was a lack of consistency in congested locations. So, no, it hasn't become irrelevant for most people, as there are quite a few people living in apartments, condos or flats that share common walls with their neighbors (if their neighbors also have wireless routers with powerful transmitters and multiple APs).

I was hopeful both times after their introduction that I could go fully wireless, but that hasn't proven to be the case either for me or people I know who live in areas with wireless congestion.

Regards,
SB
Are you using them in the 5GHz band with free enough channels?

Usually, as long as the overlapping networks don't have almost the same signal strength, it's still okay, albeit still imperfect.
 
Are you using them in the 5GHz band with free enough channels?

Usually, as long as the overlapping networks don't have almost the same signal strength, it's still okay, albeit still imperfect.

Yeah, it's still too problematic if there's wireless congestion. It's better than 2.4GHz, but still absolutely no competition for a wired connection.

Regards,
SB
 
Okay, I understand I poked a hornet's nest here. How to explain without it being the boy who cried wolf?
A comparison, even though comparisons never are 100% correct:
If you buy a car and have the choice between 4WD and 2WD and everything else is about the same, then 4WD is a better choice in almost all instances.

Now, wifi vs cabled ethernet: if they are both set up correctly, cabled will be better. Of course, if you mess up the cabling, or you need the mobility that wifi gives you, then wifi is better.
But looking at the underlying stats, wifi will have errors in the transmissions between AP and clients at a higher rate than you will see on cabled ethernet.

I grabbed two packet captures I had on my laptop and made some graphs to show you two different networks that are supposedly running fine from the users' point of view, because the usage is not real-time services, and even then there is some slack before a person will notice anything. But added latency is one result of the way services try to cover up errors, for instance with retries.

Looking at the first picture, the black line is all the data that got captured on the wifi channel.
The green line is the number of wifi frames that are retries of previous frames.
The red bars are TCP issues that are occurring.
The blue bar was not supposed to be there, but it's basically MPEG-2 TS packets.

Now, as you see retries on wifi increase, you will see some delayed TCP issues happening. This is what I mean; these you will not see on a cabled network which is set up as properly as the wifi network. I cannot say for certain that wifi is the cause of those TCP issues without digging in some more, but most likely it is, based on this quick look at the graph.

If you look at the second image, again the black and red mean the same, but this time I went for a blue line instead of green (I hate the color-picking thingy in Wireshark).
Here you see there are wifi retries, but this time we do not see any TCP issues.

So as I said, this can kick you in the butt, and with the example DSoup provided of shoddy ethernet cable installation, you can fix that. But if your neighbour trashes your wifi with radio noise, then there really is not much you can do to fix it, other than put in a cable.

In my home everything in the TV room is cabled; the rest of the house runs over wifi, even if I've got a couple of ethernet connections here and there, mostly used when I test weird AP setups and they need ethernet backhaul for specific scenarios.
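For anyone who wants to pull roughly the same per-second counts out of their own capture instead of eyeballing the Wireshark graphs, here is a rough sketch using tshark display filters. It assumes tshark is installed and that the capture (placeholder filename below) was taken in monitor mode, so the 802.11 headers and hence the retry bit are actually visible:

```python
# Rough reproduction of the per-second retry / TCP-issue counts from a capture.
# Assumes tshark is on the PATH and wifi_capture.pcapng (placeholder name) was
# captured in monitor mode so 802.11 frames, and their retry flag, are present.
import subprocess
from collections import Counter

CAPTURE = "wifi_capture.pcapng"

def per_second_counts(display_filter):
    """Return {second: frame count} for frames matching a Wireshark display filter."""
    out = subprocess.run(
        ["tshark", "-r", CAPTURE, "-Y", display_filter,
         "-T", "fields", "-e", "frame.time_epoch"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(int(float(ts)) for ts in out.split())

retries = per_second_counts("wlan.fc.retry == 1")   # 802.11 retransmissions
tcp_bad = per_second_counts("tcp.analysis.flags")   # Wireshark's flagged "TCP issues"

for second in sorted(set(retries) | set(tcp_bad)):
    print(second, "wifi_retries:", retries[second], "tcp_issues:", tcp_bad[second])
```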
 

Attachments: Screenshot 2024-01-09 at 13.42.02.png, Screenshot 2024-01-09 at 13.29.37.png
In general what JPT said became irrelevant for most people with good WiFi 5 routers/devices, and almost obsolete with WiFi 6, which released in 2020 and virtually eliminated wireless congestion by having 2,000 subcarrier channels. We've come a long way from 802.11b, where different people competed for 5-11 channels.

WiFi 7 further improves on this, whilst also aiming to hit 3-5ms latency for most traffic. Of course, building construction and layout are important; multiple walls of concrete, brick, or any construction with rebar are kryptonite to wifi, and that's why bridges and mesh networks exist.

The real-world practical effect of OFDMA in noisy wifi is still debatable, especially with OFDM in competition for RF resources. Also, many of the early certified WiFi 6 devices did not even support OFDMA properly, even though it was a certification requirement. On top of that, many vendors do not even bother to certify against WiFi 6 due to the cost.

6GHz shows better results than OFDMA currently, i.e. using 6GHz as the backhaul for the mesh. At least according to the data I have access to.

That sounds like an example of a bad router/device pairing, poor placement and/or insufficient coverage. I recognise all of this from 10-15 years ago when I first began to consider switching from wired to wireless, but I personally have not experienced bad wifi like this in that long.
You will see this if you have the classic hidden node issue, especially noticeable in *MDUs, of course less so in SDUs, and it's worse on 2.4GHz than on 5/6GHz, due to the signal degradation properties of 5/6GHz vs 2.4GHz.
I am working with some people who collect wifi "quality" data across lower double-digit millions of homes in Europe, and on average across Europe wifi is still not as stable as you have it in your home.
On top of that, aggregation frames add latency, and if those have to be retransmitted on either the wifi layer or the TCP layer you get more. For UDP it's of course just gone.

* MDU = Multiple Dwelling Unit
SDU = Single Dwelling Unit
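To illustrate the retransmission point with some back-of-the-envelope numbers (the frame error rates, airtime and RTO figures below are invented for illustration, not measurements):

```python
# Back-of-the-envelope view of how retries stack up. The frame error rate,
# per-attempt airtime and the TCP RTO comment are illustrative numbers only.
def expected_wifi_delay_ms(fer, airtime_ms, max_retries=7):
    """Expected time to deliver one frame given per-attempt error rate `fer`."""
    delay, p_still_trying = 0.0, 1.0
    for _ in range(max_retries + 1):
        delay += p_still_trying * airtime_ms   # every surviving attempt costs airtime
        p_still_trying *= fer                  # probability we need yet another try
    return delay

clean = expected_wifi_delay_ms(fer=0.01, airtime_ms=2.0)
noisy = expected_wifi_delay_ms(fer=0.30, airtime_ms=2.0)
print(f"clean channel: {clean:.2f} ms per frame, noisy channel: {noisy:.2f} ms per frame")

# If the MAC-layer retries all fail, TCP's own retransmission timer kicks in,
# which is measured in hundreds of milliseconds rather than a bit more airtime.
residual_loss = 0.30 ** 8
print(f"residual loss after 7 retries: {residual_loss:.2e} "
      "(each such loss costs a TCP RTO, typically ~200+ ms)")
```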
 
I've got a good WiFi 6 router, and before that a WiFi 5 router, and it still has too many spikes in any location with wireless traffic (the aforementioned apartment that I rented briefly). Each time I tried multiple routers from multiple respected vendors (even some enterprise-level routers), and each time there was a lack of consistency in congested locations.
By spikes I assume you mean latency? I would be interested to know exactly which WiFi 6 router you are using on the 6GHz band where you believe you are suffering issues resulting from cross-band traffic, because that is pretty much impossible unless you have a whole bunch of exceedingly tiny domiciles (designed for mice), because WiFi 6 has 2,000+ sub-channels, meaning you can literally cram 2,000 devices in relatively close proximity with no cross-traffic issues. Obviously, any system can be configured poorly, and nearby rogue (read: shit) devices can impact this.

As I said above, environmental factors remain an issue, but the type of environmental issues (concrete, brick, rebar walls) that could impact your individual wifi performance will likewise provide RF insulation from interference from neighbouring wifi networks. WiFi 6 was designed for this negative to be a positive, which is also a physics side-effect of shifting to higher-frequency communications, because eventually you reach microwave bands where any physical obstruction attenuates signal propagation.
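For context on where the "2,000+ sub-channels" figure roughly comes from: 802.11ax narrows the subcarrier spacing to 78.125 kHz, a quarter of 802.11ac's 312.5 kHz, so the tone count per channel width quadruples. A quick sketch of the arithmetic (upper bounds, since guard, DC and pilot tones reduce the usable count):

```python
# Where the "2,000+ subcarriers" figure roughly comes from. Treat the results
# as upper bounds: guard, DC and pilot tones eat into the usable count.
SPACING_AX_KHZ = 78.125   # 802.11ax subcarrier spacing
SPACING_AC_KHZ = 312.5    # 802.11ac subcarrier spacing

for width_mhz in (20, 40, 80, 160):
    tones_ax = int(width_mhz * 1000 / SPACING_AX_KHZ)
    tones_ac = int(width_mhz * 1000 / SPACING_AC_KHZ)
    print(f"{width_mhz:>3} MHz channel: {tones_ac:>4} tones (802.11ac) -> {tones_ax:>4} tones (802.11ax)")

# OFDMA then carves those tones into resource units (26/52/106/242/484/996 tones)
# that can be scheduled to different clients within the same transmit opportunity.
```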

The real-world practical effect of OFDMA in noisy wifi is still debatable, especially with OFDM in competition for RF resources. Also, many of the early certified WiFi 6 devices did not even support OFDMA properly, even though it was a certification requirement. On top of that, many vendors do not even bother to certify against WiFi 6 due to the cost.
This depends upon how the country you are in manages RF bandwidth allocation and licensing. For example, in Europe and almost all ITU-standards-adopting countries apart from parts of North America, OFDM is a bare-bones fallback when all else fails, because as a mainstream technique for managing multiple users of limited RF spectrum it was deprecated in the 1970s. In most countries, prescribed use of OFDMA is pretty much mandatory and has been for a long time. Of course, if you live in a third-world country (in terms of radio communications standards) then that sucks.

6GHz shows better results than OFDMA currently, i.e. using 6GHz as the backhaul for the mesh. At least according to the data I have access to.
This statement makes no sense. OFDMA is a technique for making the best use of available bandwidth in any given RF spectrum, and 6GHz is part of the spectrum in which you expect devices to employ OFDMA.

You will see this if you have the classic hidden node issue, especially noticeable in *MDUs, of course less so in SDUs, and it's worse on 2.4GHz than on 5/6GHz, due to the signal degradation properties of 5/6GHz vs 2.4GHz.
Why would hidden nodes be an issue between competing WiFi 5, WiFi 6 or incoming WiFi 7 networks? This only impacts devices trying to interact with each other hanging off a specific access point (i.e. a single wifi network); it has no impact on devices accessing other wifi networks.
 
This depends upon how the country you are in manages RF bandwidth allocation and licensing. For example, in Europe and almost all ITU-standards-adopting countries apart from parts of North America, OFDM is a bare-bones fallback when all else fails, because as a mainstream technique for managing multiple users of limited RF spectrum it was deprecated in the 1970s. In most countries, prescribed use of OFDMA is pretty much mandatory and has been for a long time. Of course, if you live in a third-world country (in terms of radio communications standards) then that sucks.

OFDMA was introduced in 802.11ax/WiFi 6; i.e. in WiFi 5, OFDM was used. Mobile networks used it before wifi, but in regards to wifi, it was introduced with ax/6.

So your 1970s statement about what is mandatory is not correct in regards to WiFi. I am open to you speaking of some other OFDMA that is not the Orthogonal Frequency Division Multiple Access utilised with WiFi 6.


This statement makes no sense. OFDMA is a technique for making the best use of available bandwidth in any given RF spectrum, and 6GHz is part of the spectrum in which you expect devices to employ OFDMA.

Now, English is not my first language, so I might have screwed it up.
But the addition of 6GHz to wifi has had more positive impact than the addition of OFDMA; not sure I can put it more clearly than that.

Why would hidden nodes be an issue between competing WiFi 5, WiFi 6 or incoming WiFi 7 networks? This only impacts devices trying to interact with each other hanging off a specific access point (i.e. a single wifi network); it has no impact on devices accessing other wifi networks.

No, even nodes that are just on the same channel, but not part of your network (a different BSS), will create problems as if they were a hidden node.
Because the medium is not clear to send, the STA will back off and wait. That causes latency; enable RTS/CTS and you get a better-behaved network, but latency again. And it still does not help with your neighbour's network. BSS coloring, introduced with WiFi 6, was supposed to help with that, but then again WiFi 5 does not support it, so it would not adhere to those rules anyway, and I believe WiFi 6/7 would have to adapt to WiFi 5 standards, i.e. not act on coloring information to make OBSS work, since that would mean even less airtime/TXOPs available for WiFi 5.

Also, WiFi 5/6/7 on the same channels use the same basic rates to play nice with each other, so even then you get the same issue if the medium is detected as busy by CCA.
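A toy Monte-Carlo of that "medium busy, so defer and back off" behaviour, just to show how a neighbouring BSS's airtime share turns into channel-access latency. The slot time and contention window values are the usual DCF numbers; the busy probabilities and frame hold time are invented, and this is nowhere near a full 802.11 model:

```python
# Toy Monte-Carlo of the "CCA says busy -> defer and back off" behaviour.
# Illustrative only: 9 us slots, CWmin 15 / CWmax 1023 as in 802.11 DCF,
# `busy_prob` standing in for a neighbouring BSS's share of airtime.
import random

SLOT_US = 9.0
CW_MIN, CW_MAX = 15, 1023
BUSY_HOLD_US = 1500.0   # assumed average length of a neighbour's frame exchange

def access_delay_us(busy_prob):
    """Time spent waiting for the medium before one of our frames can go out."""
    delay, cw = 0.0, CW_MIN
    while random.random() < busy_prob:        # CCA reports busy: defer
        delay += BUSY_HOLD_US                 # wait out the neighbour's transmission
        delay += random.randint(0, cw) * SLOT_US
        cw = min(2 * cw + 1, CW_MAX)          # contention window grows on each defer
    return delay

for busy in (0.0, 0.2, 0.5, 0.8):
    mean_ms = sum(access_delay_us(busy) for _ in range(20000)) / 20000 / 1000.0
    print(f"neighbour airtime {busy:.0%}: mean channel-access delay ~{mean_ms:.2f} ms")
```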
 
OFDMA was introduced in 802.11ax/WiFi 6; i.e. in WiFi 5, OFDM was used. Mobile networks used it before wifi, but in regards to wifi, it was introduced with ax/6.
Not quite, it became mandatory as a standard in 802.11ax. Variations of the same technique have been employed unofficially in wireless devices for decades; it's particularly important for signal continuance when device pairs agree to switch sub-channels due to frequency congestion or other noise.

No, even nodes that are just on the same channel, but not part of your network (a different BSS), will create problems as if they were a hidden node.
Because the medium is not clear to send, the STA will back off and wait. That causes latency; enable RTS/CTS and you get a better-behaved network, but latency again.
This still makes no sense. These are problems created between two devices on the same network. The Wikipedia article explaining the issue is here. I'm not sure what you're describing but it's not the hidden node problem.
 
Not quite, it became mandatory as a standard in 802.11ax. Variations of the same technique have been employed unofficially in wireless devices for decades; it's particularly important for signal continuance when device pairs agree to switch sub-channels due to frequency congestion or other noise.
There was no OFDMA in wifi before WiFi 6. That was one of the new shiny things that got promoted as big with 6. Variations and optional stuff is just brand-proprietary that does not get used by anybody else. Every chip vendor in the IEEE or the Wi-Fi Alliance fights to get their pet items in and kill the others' pet items. So no, OFDMA was not in standardised 802.11 / WiFi before WiFi 6.

Wi-Fi 6 uses a combination of two modulation methods: orthogonal frequency division multiplexing (OFDM) and orthogonal frequency division multiple access (OFDMA).

OFDMA is introduced in the 802.11ax standard for data transmissions. While OFDM is used in previous standards, it continues to be used in 802.11ax for management and control frames in order to maintain backward compatibility with legacy devices.

That's from Cisco Meraki's page on WiFi 6. https://documentation.meraki.com/MR..._Practices/Wi-Fi_6_(802.11ax)_Technical_Guide

I am not sure why we are going down this road.

This still makes no sense. These are problems created between two devices on the same network. The Wikipedia article explaining the issue is here. I'm not sure what you're describing but it's not the hidden node problem.

Now I will do as you did with "variations of OFDMA": try to slide out on a technicality. I've got enough OCD to recognise the tactic and use it.
You are correct that the narrow definition on Wikipedia does not mention other networks.

But when you actually look at how the radios react to a radio transmission being detected in the RF band/channel above a defined threshold: everybody backs off, regardless of whether you are able to decode it or not.
So you are technically correct that I used too wide a definition of hidden node vs Wikipedia, but the end result is the same: your devices cannot transmit since the medium is busy, taken up by your neighbour's devices. Your devices react to two different clear channel assessments, if I remember correctly: signal detection, i.e. a proper transmission, and/or energy detection, i.e. a signal that cannot be decoded. Those have different thresholds.

Yes, here it's explained: CCA-SD and CCA-ED.
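For reference, a minimal sketch of those two CCA checks. The -82/-62 dBm figures are the commonly quoted 20 MHz primary channel thresholds (quoted from memory; actual values depend on band, bandwidth and vendor):

```python
# Minimal illustration of the two clear-channel-assessment checks mentioned above.
CCA_SD_DBM = -82.0   # signal detect: a decodable 802.11 preamble this strong defers us
CCA_ED_DBM = -62.0   # energy detect: any RF energy this strong defers us, decodable or not

def medium_busy(rx_power_dbm, decodable_preamble):
    """True if CCA would report the channel busy and the STA has to defer."""
    if decodable_preamble and rx_power_dbm >= CCA_SD_DBM:
        return True    # neighbour's Wi-Fi frame, even from another BSS
    if rx_power_dbm >= CCA_ED_DBM:
        return True    # non-decodable noise (microwave oven, far-off Wi-Fi, ...)
    return False

print(medium_busy(-75.0, decodable_preamble=True))    # True: other BSS still defers us
print(medium_busy(-75.0, decodable_preamble=False))   # False: raw energy below ED threshold
print(medium_busy(-55.0, decodable_preamble=False))   # True: strong noise trips ED
```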
 
I use a mix of 10Gb, 2.5Gb and 1Gb/s UTP and fiber Ethernet at home, and I have some legacy 10/100Mbit devices. I also use an Asus Wifi 6 mesh network with Ethernet backhaul.

I don't think anyone could argue Wifi is ever better than wired networking. Good enough, I guess. Cables going bad hasn't been a real concern in home networking (I've had Wifi modules dying, though). Rough handling by datacenter staff in our many cabinets is a different matter.
 
There was no OFDMA in wifi before WiFi 6. That was one of the new shiny things that got promoted as big with 6.
OFDMA is multiplexing. Multiplexing as a technology is older than I am. OFDMA is a specific standard of multiplexing. You may also know the same technique as spatial multiplexing, and a previous variation was MIMO, which has been implemented since 802.11n.

It's not new technology, it's a new name for a particular implementation of an old technique that has been used in wifi for a long time. ¯\_(ツ)_/¯

I don't think anyone could argue Wifi is ever better than wired networking. Good enough, I guess.
Nobody is arguing that, but equally the assertion that people should use cables instead of wifi so there aren't 100ms latencies from the wifi connections suggests that some poor wifi networking setups are setting some folks' expectations.
 
Nobody is arguing that, but equally the assertion that people should use cables instead of wifi so there aren't 100ms latencies from the wifi connections suggests that some poor wifi networking setups are setting some folks' expectations.
I think it's more subtle than that. In competitive shooters, which are the most popular titles on consoles, wifi (allegedly) has inherent faults which mean small, unpredictable packet issues that can impact your gaming experience. A wifi setup that is immune to this can (allegedly) only be achieved by 1) someone who knows what they are doing investing in the right hardware, and 2) living in an environment with minimal EM interference, which is something many people can't control.

You seem to be suggesting that Wifi 6 solves problem 2?

Another important factor, I think, is ISPs supplying the routers most folks use, which aren't the best hardware. As such, a cabled connection to such routers is likely a sensible move to secure a more reliable connection.
 
Nobody is arguing that, but equally the assertion that people should use cables instead of wifi so there aren't 100ms latencies from the wifi connections suggests that some poor wifi networking setups are setting some folks' expectations.
Sure, but it's also fair to say that for an average user it's (much) easier to have a cable work well than a wifi connection for stuff like gaming. There is - after all - a reason why Fortnite and other games suggest wired connections on their loading screens.

It's not that it's impossible to have a good experience with wifi; obviously I assume at this point the majority of people are using wifi because in a lot of housing situations it's annoying or impractical to get a wire between the needed locations. A lot of folks also use laptops that don't have convenient ethernet ports these days. But as this thread demonstrates there are a ton of pretty technically detailed factors that can interfere with it working well, only some of which are improved in newer Wifi gear/standards (which obviously most folks don't even have and likely won't for a long while).

Contrast that with the fact that if someone plugs an ethernet cable into their desktop motherboard LAN and the other end into their router there is a very high likelihood they will have a good experience. The point is not to shame people who are having a fine experience with wifi or cannot reasonably run a cable. It's more for the cases where someone *is* having issues and could relatively easily run a cable but have not bothered because they weren't aware that this might indeed solve their problems.
 
The recommendation should always be "cable if you can, wifi if you can't". It's a trade-off of performance/stability vs convenience.

Personally, wifi has been solid for me for quite some time... mind you, I have a fairly expensive router ($300+), so I wouldn't expect the average person to have quite the same experience, or tell them to expect the same experience from just any old connection. It's about setting the proper expectations.

Generally, solid advice is that if the game relies on low latency and precision, choose wired. If not, then do whatever is most convenient and works well enough for you.

An example of that is that I will NEVER willingly go back to wired VR. I will always choose wireless VR streamed from my PC, over wired. The quality and experience that I am able to have with my setup and with the quality of the software driving it, is personally of a standard that is not only acceptable, but excellent to me.

People who live in apartments with multiple routers and networks all in close proximity may have less stable connections and interference, but only they can say whether something is acceptable or not to them.
 
OFDMA is multiplexing. Multiplexing as a technology is older than I am. OFDMA is a specific standard of multiplexing. You may also know the same technique as spatial multiplexing, and a previous variation was MIMO, which has been implemented since 802.11n.

It's not new technology, it's a new name for a particular implementation of an old technique that has been used in wifi for a long time. ¯\_(ツ)_/¯
Come on now :D , I never said multiplexing was new in WiFi 6, I wrote OFDMA. Which is a specific thing and is supposedly so much better than OFDM. And you were the one who brought up 2,000+ subchannels/resource units in WiFi 6/7 and claimed it made my assertion obsolete due to those subchannels. But that resource unit feature is only in OFDMA and not OFDM.
 
Come on now :D , I never said multiplexing was new in WiFi 6, I wrote OFDMA. Which is a specific thing and is supposedly so much better than OFDM.
OFDMA introduced multiplexing into the previous OFDM standard. It is just a specific implementation of multiplexing. Standards shift from optional to mandatory and evolve over time. What advances do you think OFDMA delivers in 802.11ax over what MIMO delivered in 802.11n?

It's new/different in the way Windows 11 is new/different relative to Windows 10. ¯\_(ツ)_/¯

Sure, but it's also fair to say that for an average user it's (much) easier to have a cable work well than a wifi connection for stuff like gaming. There is - after all - a reason why Fortnite and other games suggest wired connections on their loading screens.
Sure, if you have a small number of devices you want to connect to your internet router, and they're things with an ethernet port, and the devices are close, a cable is pretty simple to plug into two devices. There are certainly fewer things to set up badly or that can go wrong, and I wouldn't touch a cheap wifi router with a bargepole; they're often junk.
 
OFDMA introduced multiplexing into the previous OFDM standard. It is just a specific implementation of multiplexing. Standards shift from optional to mandatory and evolve over time. What advances do you think OFDMA delivers in 802.11ax over what MIMO delivered in 802.11n?
2000+ subchannels?
 