Why don't GPUs assist in preventing online cheating (hacking) in games?

Perhaps on startup the game could take a snapshot of the processes currently running on the computer and analyze them to detect any that are attempting to edit the game's process. If nothing is detected, nothing happens. While the game is running, any newly started process would be analyzed immediately to make sure it isn't accessing the game's data (with exceptions for things like antivirus software); if it seems harmless enough, the game continues without a hiccup, as if nothing happened.
Been done, been moaned at in the press for being intrusive, been worked around trivially with rootkit techniques.

And for those who still want to sign or encrypt things, call me back when you can find a protected game without a no-CD crack, as until you can prevent altering the client (e.g. to remove protection) there's no point.

Democoder has it right; the best you can do is information-hide so the client doesn't know anything the player doesn't. Unfortunately this is a huge bandwidth hog and prevents some useful things (e.g. post-game replays in RTS).
 
What would happen under circumstances where you shoot people who are not visible? Around the corner splash damage, through-wall shots, etc. You still need hitboxes there despite not being visible.
 
What would happen under circumstances where you shoot people who are not visible? Around the corner splash damage, through-wall shots, etc. You still need hitboxes there despite not being visible.
In almost all modern games it is the SERVER doing the hitbox detection and not the client. In that case the client does not need to know about any hitbox at all.
 
The client still does hitbox collision detection when doing predicted movement. You can't have predicted movement letting players move through each other; it looks wrong. It may not be the same hitbox used for detecting weapon hits, but it's still a hitbox, and likely a bigger and simpler one than the weapon hitbox if they differ.
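To make that prediction point concrete, here's a minimal illustrative sketch (the box representation is an assumption) of the kind of cheap axis-aligned overlap test client-side prediction can run so predicted players don't interpenetrate:

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding-box overlap test, the cheap check a client
    can run during predicted movement so players don't pass through
    each other. Boxes are (min_x, min_y, min_z, max_x, max_y, max_z)."""
    return (a[0] < b[3] and b[0] < a[3] and   # overlap on x
            a[1] < b[4] and b[1] < a[4] and   # overlap on y
            a[2] < b[5] and b[2] < a[5])      # overlap on z
```

A box like this would typically be coarser than the per-limb hitboxes used for weapon hits, which is the distinction the post is drawing.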
 
The client still does hitbox collision detection when doing predicted movement. You can't have predicted movement letting players move through each other; it looks wrong. It may not be the same hitbox used for detecting weapon hits, but it's still a hitbox, and likely a bigger and simpler one than the weapon hitbox if they differ.
But if there's latency you need a hack for this problem anyway as it's trivial for two people to occupy the same space.
 
Well, all that needs to be done is to ask the client for a copy of screen data that had already been rendered before it received the request.

The requests and image data can be timestamped by the server and the driver. The driver can store some number of previously rendered frames. We don't need to look back in time further than half the ping time; with a reasonable framerate and ping, this amounts to a handful of frames.

Yes, the size of the image data makes a practical solution harder, but not impossible (as I said: scissor it, scale it down, skip every n-th frame, etc.).

Sure, the hacker can artificially increase perceived ping, and even disable wallhacks for a single frame. However, a client that seems to experience strange lag spikes coinciding with the server's requests could easily be marked as suspicious.
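As an illustration of the driver-side history being proposed (all names, the buffer size, and the timestamp mechanism here are assumptions, not an actual driver API), a small Python sketch:

```python
import time
from collections import deque

class FrameHistory:
    """Ring buffer of (timestamp, downscaled frame) pairs kept by the driver.
    Oldest frames fall off automatically; with half-a-ping lookback and a
    reasonable framerate, a handful of slots is enough."""

    def __init__(self, max_frames=16):
        self.frames = deque(maxlen=max_frames)

    def store(self, frame_bytes):
        # Timestamp at render/store time, not at retrieval time.
        self.frames.append((time.monotonic(), frame_bytes))

    def fetch_near(self, requested_time):
        # Return the stored (timestamp, frame) closest to the requested time.
        return min(self.frames, key=lambda f: abs(f[0] - requested_time))
```

The key property is that `store` stamps frames as they are rendered, so a later request cannot influence the timestamp a frame already carries.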
The hacker would simply store the commands for the previous frames, re-render the requested frame without cheats, and hack the timestamp as needed.
 
The hacker would simply store the commands for the previous frames, re-render the requested frame without cheats, and hack the timestamp as needed.

The driver timestamps screenshots when it stores them, not when it retrieves them. If the hacker re-rendered an old scene for the screenshot, as you are describing, it would get an older timestamp than the legit one. As I said previously, the difference would be about one ping interval; in the worst case, the fake screenshot would arrive at the server about twice as old as expected. The server, by keeping statistics of the client's ping before the request, can detect this.

This is a tradeoff: we want the server to be able to request screenshots as old as possible, but we also don't want the driver to waste too much memory on the screenshot history.
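A sketch of the staleness check the server could run against the embedded timestamp; the tolerance factor is a pure assumption here:

```python
def screenshot_looks_stale(request_time, frame_timestamp, avg_ping, tolerance=1.5):
    """Flag a returned screenshot whose embedded timestamp is older than the
    round trip would explain. Times are in seconds; avg_ping comes from the
    server's running ping statistics for this client."""
    age = request_time - frame_timestamp
    # A legit frame should be at most roughly one ping interval older than
    # the request, so anything beyond tolerance * avg_ping is suspicious.
    return age > tolerance * avg_ping
```

One suspicious frame wouldn't prove anything on its own; as with the lag-spike argument above, it's the pattern over many requests that would mark a client.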
 
The driver timestamps screenshots when it stores them, not when it retrieves them. If the hacker re-rendered an old scene for the screenshot, as you are describing, it would get an older timestamp than the legit one.
The hack would just change the timestamp to whatever it needs to be to look new. The video driver surely isn't going to be sending off the screenshot itself, so you're counting on game code to do the sending, and we've already assumed that's been compromised.

As I said previously, the difference would be about one ping interval; in the worst case, the fake screenshot would arrive at the server about twice as old as expected. The server, by keeping statistics of the client's ping before the request, can detect this.
The good news for the hacker is that with something the size of a full-resolution screenshot, the average ping is going to be many times longer than it would be normally.
It takes a lot longer to transfer a multi-megabyte file than it takes to render the frame in question.

The margin of error for the server to reject a frame may be only a few percent, and anything can make a legitimate upload take just a little longer than usual.

This is a tradeoff: we want the server to be able to request screenshots as old as possible, but we also don't want the driver to waste too much memory on the screenshot history.
Memory isn't really the biggest constraint here.
Getting a file the size of a full frame buffer to upload, analyzing the image, and having the server receive all this data and do all this work for multiple players who are all sending multi-megabyte files at the same time is the biggest problem.

What good is having a screenshot repository unless there's something to look at it? What good is making one if it means the pings are measured in seconds?
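A back-of-envelope calculation supports the bandwidth objection; the resolution and uplink speed below are assumed figures, not numbers from the thread:

```python
# How long would one raw screenshot take to upload?
width, height, bytes_per_pixel = 1600, 1200, 3     # assumed resolution, 24-bit color
frame_bytes = width * height * bytes_per_pixel     # 5,760,000 bytes per frame

uplink_bits_per_sec = 256_000                      # assumed consumer uplink
seconds = frame_bytes * 8 / uplink_bits_per_sec    # bits over the wire

print(round(seconds))  # 180 seconds, i.e. about 3 minutes per uncompressed frame
```

Downscaling, scissoring, and compression shrink this, but the gap between "render a frame" (milliseconds) and "upload a frame" (seconds to minutes) is the core of the objection.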
 
You can't achieve security by trusting the client. Even in the unlikely scenario of everyone who plays being on a TPM platform, there is still the possibility of bugs in the game or kernel allowing hackers to write an exploit and gain trusted access. At best, by attacking the problem at the client, you can make hacking difficult, slow down the time from a game being published to it being hacked, and possibly make the installation of a hack too complex for many people.

For wallhacks and other information-leakage issues, one can burn server resources, compute fine-grained visibility per client, and occlude information from the prying eyes of hacked clients (by not sending the positions of enemy entities that are not visible). It's not really a far cry from what advanced FPS AI does today, only you compute from the client's state and viewpoint instead of the AI's. Sure, it will chew up CPU resources, and people wanting to run 32- or 64-player servers will probably need beefy 8-core machines. But serious players who want to remove hacking (CounterStrike, DoD, et al.) already pay extra for beefy hosted servers on high-bandwidth, low-latency networks, and many clans would be willing to pay for even beefier servers if they could eliminate wallhacks, map hacks, et al. My own CS clan pays $100/mo for their rented rack-mounted machines.
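A rough sketch of that per-client filtering; the `is_visible` callback here is a stand-in for whatever real PVS or occlusion query the server would actually run:

```python
def filter_entity_updates(client, entities, is_visible):
    """Build the update packet for one client, sending only entities the
    server decides that client can see. is_visible(viewer, entity) stands
    in for a real visibility/occlusion test."""
    update = []
    for e in entities:
        if e.owner == client.id or is_visible(client, e):
            update.append(e)  # visible (or the client's own): send full state
        # Invisible entities are simply omitted from the packet, so a
        # wallhacked client has no enemy positions to draw.
    return update
```

The whole point is that the culling happens server-side, before the data ever crosses the wire, so no amount of client tampering can recover what was never sent.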

For aimbots, there isn't really a good solution. One could use statistical methods to analyze the distribution of hit locations and accuracy compared to a large number of human sample players and look for telltale signs in the distribution. Or one could download aimbot hacks, run them through a number of sample trials, compute the hit distribution, and use this as a "fingerprint" provided to other servers to detect hackers.
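As one hypothetical shape for such a statistical test: a crude chi-square-style distance between a player's hit-location histogram and a baseline built from known-human samples. The bin layout and any flagging threshold are left entirely as assumptions:

```python
def hit_distribution_score(player_hits, human_baseline):
    """Chi-square-style distance between a player's hit-location histogram
    and a human baseline (same bins, e.g. body regions or angular offsets).
    Higher scores mean a less human-like distribution."""
    total_p = sum(player_hits)
    total_h = sum(human_baseline)
    score = 0.0
    for p, h in zip(player_hits, human_baseline):
        expected = total_p * (h / total_h)   # scale baseline to sample size
        if expected > 0:
            score += (p - expected) ** 2 / expected
    return score
```

A server would accumulate such scores offline over many rounds rather than ban anyone on a single sample, since a lucky human and a careful bot can look alike in small samples.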

However, this would depend on the aimbot authors not designing countermeasures. As soon as they found out what the servers were doing, they'd adjust. Just like spammers versus Bayesian filters, they'd find out what models were being used and adjust the aimbots so that they did not exhibit such an obvious artificial or unique fingerprint (possibly by just using variances recorded from human trial runs to perturb the aimbot).

Although, to be truly effective, they'd probably have to make the aimbot less accurate as well, which would make it less of a threat. I wouldn't care if a n00b used an aimbot if it was only as good as the very best human players. I'd care if it was better than the best human players could ever be.
 
For wallhacks and other information-leakage issues, one can burn server resources, compute fine-grained visibility per client, and occlude information from the prying eyes of hacked clients (by not sending the positions of enemy entities that are not visible).
How well would that work in complex environments?
Wall-hacks could be curtailed, but unless the resolution of visibility calculations is about as fine as a GPU's Z-reject, it's possible to occlude too much information. Maybe an enemy's shoulder is poking outside a door frame. The danger of having that information hidden would depend on just how intensively the visibility information is processed by the server.

Hiding behind a fern that's rendered as partly transparent textures stretched over a broad polygon would make someone invisible, unless the game implements some internal tracking of which objects allow partial occlusion, with running comparisons of object sizes to establish that they can't fully occlude each other.

A naive approach would give you that Bugs Bunny hiding-behind-a-lamppost effect, or people could just hide behind a towel.

For aimbots, there isn't really a good solution. One could use statistical methods to analyze the distribution of hit locations and accuracy compared to a large number of human sample players and look for telltale signs in the distribution. Or one could download aimbot hacks, run them through a number of sample trials, compute the hit distribution, and use this as a "fingerprint" provided to other servers to detect hackers.
That system could be circumvented by an aimbot that the player can activate only at certain times. I guess over-eager cheaters would be caught, but who's to say if that last-second victory shot was just lucky or a cheat?

Using a so-called "representative" test set would also be problematic, because it would have to span a population of players who range from very bad to almost aim-bot good.
Dynamically updated values are problematic because they can become biased to one extreme or the other, and are vulnerable to sudden changes in overall accuracy.

If there are different presets, it would be up to a server admin to pick the right one, but that would depend heavily on controlling what players can come in, like a list of friends. That would make everything the server does almost irrelevant, since the best defense against cheating is playing only with people you know and trust.

Although, to be truly effective, they'd probably have to make the aimbot less accurate as well, which would make it less of a threat. I wouldn't care if a n00b used an aimbot if it was only as good as the very best human players. I'd care if it was better than the best human players could ever be.

If the Bayesian filters are adjusted according to a given game's players, then a cheater could actually exploit it. If the filters think a given level of accuracy that is x% above average is enough to flag a cheat, then a cheater could just play very badly and get the better players kicked.

There are countermeasures and counter-counter measures, but I'd probably leave any final determination of cheating up to the guy running the server.

Let's just hope he's not the one with the aim-bot. ;)
 
Wall-hacks could be curtailed, but unless the resolution of visibility calculations is about as fine as a GPU's Z-reject, it's possible to occlude too much information. Maybe an enemy's shoulder is poking outside a door frame. The danger of having that information hidden would depend on just how intensively the visibility information is processed by the server.

I think *conservative* gross-geometry bounding box/convex hull computations would prevent improper hiding of information, while still stopping the most egregious hacks. However, if you wanted Z-reject-level resolution, you could use the idle GPU on the server (assuming it has one) to render Z-only occlusion test passes at the cost of a modest amount of fillrate and bandwidth, but lots of geometry load. A unified chip like the Xenos/R600 would eat this up, and the most likely bottleneck would probably be triangle setup.

The problem is solvable; the client renderer already solves it. The only discussion is whether it can be made cheap enough on a server so that hosting 32 players doesn't require a cluster of 32 server computers. As for partially transparent objects, we don't really need to solve this any more than we need to compute whether a player's camouflage uniform renders him visible or invisible, or whether fog or church windows should count: simply treat any surface with a transparent texture/alpha test as 100% transparent. Game engines already store visibility data structures for overdraw reduction, and some already use anti-portals to efficiently determine what is occluded. You'd just need to scale up the anti-portals.
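As a toy illustration of the conservative approach (2D top-down with segment walls; the geometry representation is an assumption): an entity is culled only when every corner of its bounding box is blocked by an opaque wall, and alpha-tested surfaces are simply left out of the wall list:

```python
def segments_intersect(p1, p2, p3, p4):
    """2D segment intersection via orientation tests (ignores collinear
    edge cases, which is fine for a conservative sketch)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def conservatively_hidden(viewer, target_box, opaque_walls):
    """Cull a target only if EVERY corner of its bounding box is blocked by
    some opaque wall, erring toward sending too much, never too little.
    Alpha-tested surfaces are excluded from opaque_walls entirely."""
    for corner in target_box:
        if not any(segments_intersect(viewer, corner, w[0], w[1])
                   for w in opaque_walls):
            return False  # at least one corner visible: must send the entity
    return True
```

The real thing would use the engine's existing PVS/anti-portal structures (or the Z-only GPU pass suggested above); the point of the sketch is the conservative "all corners blocked" rule that avoids hiding a shoulder poking past a door frame.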

That system could be circumvented by an aimbot that the player can activate only at certain times. I guess over-eager cheaters would be caught, but who's to say if that last-second victory shot was just lucky or a cheat?

Again, the assumption is that an anti-cheat system is valuable only if it can stop all cheating. Today, aimbot and wallhack players try to cloak or hide their cheating because of people spectating them; sooner or later, they give themselves away. Maybe a super-careful cheater only turns on his cheat on very, very rare occasions. I'm not really concerned about that guy, just as the criminal justice system isn't concerned with the guy who pirates a few games occasionally. I'm more concerned with the people who have it on all the time. A guy who only turns his cheat on during the final round of a tournament is like a baseball pitcher who only throws spitballs in the last inning: morally irritating, but it doesn't completely ruin the game. In contrast, Counter-Strike servers infested by brazen aimbot and wallhack cheaters are completely ruined and unplayable.

Using a so-called "representative" test set would also be problematic, because it would have to span a population of players who range from very bad to almost aim-bot good.
Dynamically updated values are problematic because they can become biased to one extreme or the other, and are vulnerable to sudden changes in overall accuracy.

Of course, but you're not saying anything I haven't already. Moreover, the purpose of statistics collection isn't to boot you off the server in real time when you start cheating, but to process asynchronously, probably offline, and ban you later. For sure, there will be false positives, but that situation is no different from today, where people are accused of cheating and the only evidence is the word of spectators, who are more often than not wrong. The server has knowledge spectators do not have, for example, being able to tell whether the cheater is aiming at someone who is occluded. Only spectators who are themselves wallhackers can determine this.

I'd settle for a system that used a large collection of player data, as well as known representative sample data from cheating programs, to place a confidence level on the probability of someone cheating. I'd then let the administrator determine what to do about it, and/or let the players see this figure publicly.





If the Bayesian filters are adjusted according to a given game's players, then a cheater could actually exploit it. If the filters think a given level of accuracy that is x% above average is enough to flag a cheat, then a cheater could just play very badly and get the better players kicked.

There are countermeasures and counter-counter measures, but I'd probably leave any final determination of cheating up to the guy running the server.

I would not, and did not, advocate filtering on simple accuracy, but rather on shot distribution modulo target movement and weapon. Current aimbots use a simple PRNG perturbation of aim points on a model's hitbox, which does not reproduce the distribution of human aim (just look at a heat map of human shots vs. aimbot+PRNG). It is true that more complex aimbots could seek to obfuscate, just as more advanced spambots obfuscate spam content to fool spam filters. But that's the best you can do, since there is no 100% solution against bots. CAPTCHAs today defeat a lot of bot-automated signup and click fraud, but CAPTCHA keeps evolving because the countermeasures evolve.

Bayesian spam filtering isn't 100% effective, but it removes 80-90% of the spam from my mailbox, which is a far sight better than 0%. I have my own 'friends and family' email addresses as well that get zero spam, but gaming online is about more than just playing with your friends; it's about meeting new people, and frankly, I'm not content to only play on 'safe' servers run by me and my friends.
 
The hack would just change the time stamp to whatever it needs to be to look new. The video driver surely isn't going to be sending off the screenshot, so you're counting on game code to do the sending, and we've already assumed it's been compromised.
You are making the assumption that the driver shoots itself in the foot. We are talking about a driver that uses cryptography to protect the image data from modification. Why do you think the driver would refrain from applying the same protection to the timestamp? It's against logic.
The good news for the hacker is that with something the size of a full-resolution screenshot, the average ping is going to be many times longer than it would be normally.
It takes a lot longer to transfer a multi-megabyte file than it takes to render the frame in question.
I've already addressed that. There are many possible ways to reduce the size of the image data, and you don't need to upload it in one continuous burst.
The margin of error for the server to reject a frame may be only a few percent, and anything can make a legitimate upload take just a little longer than usual.
You didn't quite understand. The time of arrival at the server doesn't affect the embedded timestamp. The server can estimate the expected timestamp before it even makes the request, so it isn't affected either.
Memory isn't really the biggest constraint here.
Getting a file the size of a full frame buffer to upload, analyzing the image, and having the server receive all this data and do all this work for multiple players who are all sending multi-megabyte files at the same time is the biggest problem.
Again, you insist on assuming the dumbest possible implementation and ignoring obvious better options.

You don't need to upload the full framebuffer.
You don't need to analyze or receive data from multiple players at the same time.
Analyzing the image can have a fast early exit (subtract the two images and compare the result against a threshold, using an occlusion query).
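A CPU-side sketch of that early exit. The post suggests doing the subtraction with a GPU occlusion query; a plain pixel loop stands in here, and both thresholds are made-up numbers:

```python
def frames_differ(frame_a, frame_b, pixel_tolerance=8, max_differing=100):
    """Early-exit frame comparison: count pixels whose channel difference
    exceeds pixel_tolerance, and bail out as soon as the count crosses
    max_differing. Frames are equal-length sequences of 0-255 values."""
    differing = 0
    for a, b in zip(frame_a, frame_b):
        if abs(a - b) > pixel_tolerance:
            differing += 1
            if differing > max_differing:
                return True   # early exit: frames clearly differ
    return False              # within tolerance everywhere
```

The per-pixel tolerance is there because two honest renders of the same scene can differ slightly in precision, which is also the objection raised later in the thread.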
 
You are making the assumption that the driver shoots itself in the foot. We are talking about a driver that uses cryptography to protect the image data from modification.

In the absence of hardware trusted computing, no software tamper-resistance mechanism is resilient to hacking. Cryptography is irrelevant. I cracked games on the Commodore 64 that used cycle-timed decryption/re-encryption bootloaders, so that only one instruction was ever in plaintext, and only "just in time" when the PC was pointing at that address. That was 20 years ago. Cryptography does jack shit when I have access to step through the decryption routine.
 
In the absence of hardware trusted computing, no software tamper-resistance mechanism is resilient to hacking. Cryptography is irrelevant. I cracked games on the Commodore 64 that used cycle-timed decryption/re-encryption bootloaders, so that only one instruction was ever in plaintext, and only "just in time" when the PC was pointing at that address. That was 20 years ago. Cryptography does jack shit when I have access to step through the decryption routine.
Agreed.

About the only thing that would remove the ability to cheat would be DRM hardware, but can anyone see that being an accepted approach?
 
Agreed.

About the only thing that would remove the ability to cheat would be DRM hardware, but can anyone see that being an accepted approach?
YES
Heard of the Xbox? Lots of people bought that hardware. They'll buy it for the PC too; just you wait and see.
 
Xbox DRM was cracked, quite easily I might add. TPM platforms are only as secure as the code running on them. Most DRM kernels have exhibited exploits in the trusted portions of the kernel allowing injection of circumvention code. The Sony PSP was circumvented by many buffer-overflow exploits until numerous firmware fixes removed what appeared to be the last of the easily findable kernel exploits.

However, soft hacks are not the only hacks available. Hardware hacks are available as well, and game systems are not likely to keep all data structures encrypted in DRAM, for performance reasons, leading to the possibility of DRAM probes to snoop for critical session keys for code injection, or just plain ole "Game Shark" style hacking.

The paradox of TPM/DRM is that the only real way to ferret out all the bugs is probably to release the source so that everyone can hack and attack it; on the other hand, not too many people are likely to do this prior to the release of the actual equipment and content worth stealing.
 
YES
Heard of the Xbox? Lots of people bought that hardware. They'll buy it for the PC too; just you wait and see.
But the Xbox wasn't really a full hardware solution. It still used a standard CPU, so unencrypted code still existed outside of the CPU.
 
You are making the assumption that the driver shoots itself in the foot. We are talking about a driver that uses cryptography to protect the image data from modification. Why do you think the driver would refrain from applying the same protection to the timestamp? It's against logic.
All of this happens locally, so it's circumventable.
What's so hard about finding the bit region associated with the timestamp and just copying and pasting a valid frame's timestamp onto the fake? You could try to scatter the data throughout the image file, but that just takes extra work to reverse.

You could try watermarking the image, but given issues with how GPU implementations differ in precision, that would be unreliable.

The encryption would fall easily, because the client can access both the encrypted and unencrypted frame data. It would take someone a little versed in encryption a few dozen test frames to create a good candidate hack, which amounts to about 1/5 of a second's worth of frames at 60 fps.

I've already addressed that. There are many possible ways to reduce the size of the image data, and you don't need to upload it in one continuous burst.
I'm also interested in how long you think it takes to compress a multi-megabyte file, encrypt it, and then upload it.

You could take snapshots only rarely, but then the frames are increasingly likely to miss moments of cheating, and the longer intervals make real-time circumvention of the process achievable.

You didn't quite understand. The time of arrival at the server doesn't affect the embedded timestamp. The server can estimate the expected timestamp before it even makes the request, so it isn't affected either.
So it all rides on a timestamp the server has to assume wasn't just created to look valid. The server can't reliably estimate a timestamp either; system timers aren't that precise, and a good hack could quite possibly fit within the same millisecond or very close to it.
If need be, the client's software could be manipulated so that the simulation pauses on the update frame. That means multiple frames are rendered identically, which allows a quick comparison to find the needed timestamp.

Again, you insist on assuming the dumbest possible implementation and ignoring obvious better options.

Outline the exact solution you are trying to sell me on, with details like how often these tests occur, how image data can be verified, and what exact calculations will be done, and what those results will tell the server, and then tell me how the server is supposed to interpret those results or even how it can interpret those results.

You don't need to upload the full framebuffer.
How do you decide what parts of the framebuffer are relevant?
You can't let the client software decide that, since it would be compromised.

Signs of cheating might be on the periphery, like an edge-of-screen wallhack.

You certainly can't hope for a video driver that's net-accessible.

You don't need to analyze or receive data from multiple players at the same time.
Unless you plan on analyzing only every couple thousand frames, multiple players will be evaluated concurrently. There's a lot of data being thrown around, and you haven't explained just how often it should be examined, or how you expect it to be examined.

Analyzing the image can have a fast early exit (subtract the two images and compare the result against a threshold, using an occlusion query).

What threshold?
What would color differences tell you? There's no geometry data in the framebuffer, you're looking at a single image. What does any of that tell the server?

"This pixel's blue component is 67!!! OMG HAX!!"

What are you subtracting from? Certainly not two frames from the same compromised machine. If you did, what's to stop the client from sending the exact same frame every time to exploit the early exit?

Is the server independently rendering the test frame?
If it is, it's almost guaranteed not to match, even with a fair client.
GPUs render differently depending on driver settings and implementation differences.
Someone's frame could skip, and the test frame might not even be rendered.
Any number of things can make the reference fail to match the test subject.

You could have the client add its own stamp for the simulation cycle it's on, but that's asking the compromised client all over again.

The client's world data will always be a bit off from the server's, so you need some pretty sophisticated analysis to understand what image differences can be traced to that. That needs more than just a framebuffer to be calculated.

The life of a game server must be boring if it has the luxury of running the equivalent of a photoshop filter and a reverse physics calculation every X milliseconds for dozens of players.
 