"framerate" has always been how often we game state and draw a new frame. At minimum that means updating the camera matrix, although for almost any 3d game it also means updating player position if nothing else.
Take Warzone: the game state is updated 20 times a second, because the ground truth of that game is the server. But locally, it's rendering 120 frames every second, and those frames are interpolated from data sent to and from the server 20 times a second. How would this be different from a game running locally at 20 updates a second whose output is updated 120 times a second using a mix of rasterized, interpolated, and/or AI-generated frames? If a player is running from the left side of the screen to the right, the server (again, the ground truth simulation for that title) is only updating 20 times a second, and the local machine running the game is interpolating the movement so the character moves smoothly through the scene. If we are tying performance to simulation or game state, how is this different from frame generation?
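To make the comparison concrete, here's a minimal sketch of the kind of snapshot interpolation I mean. The names (`Snapshot`, `interpolated_position`) and rates are my own illustration, not anything from Warzone's actual netcode:

```python
import bisect
from dataclasses import dataclass

SERVER_TICK_RATE = 20    # server updates per second (the ground truth)
RENDER_RATE = 120        # frames the local machine actually draws

@dataclass
class Snapshot:
    time: float          # server timestamp in seconds
    x: float             # player position as reported by the server
    y: float

def interpolated_position(snapshots, render_time):
    """Lerp the player's position between the two server snapshots
    that bracket the time we're rendering for."""
    times = [s.time for s in snapshots]
    i = bisect.bisect_right(times, render_time)
    if i == 0:
        return snapshots[0].x, snapshots[0].y
    if i >= len(snapshots):
        return snapshots[-1].x, snapshots[-1].y
    a, b = snapshots[i - 1], snapshots[i]
    t = (render_time - a.time) / (b.time - a.time)
    return a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t

# The server only tells us where the player is every 1/20 of a second...
snaps = [Snapshot(n / SERVER_TICK_RATE, x=n * 5.0, y=0.0) for n in range(3)]

# ...but we draw 120 frames a second, each one an interpolated in-between.
for frame in range(12):
    render_time = frame / RENDER_RATE
    print(frame, interpolated_position(snaps, render_time))
```

Every frame the player sees is a blend of two "real" states that are 50 ms apart, yet nobody calls that 20 fps performance.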
And I'll bring this up again: what about games that lock certain parts of the simulation to lower rates? Beyond the examples others and I offered above, there are famous cases like Bioshock, which, before the console gen 8 patch, had all of its physics and some of its other animations locked to 30 fps. But probably most relevant to a conversation about performance is Quake, a benchmark staple for years, whose characters were rendered with what amounts to keyframe-only animations, and IIRC they aren't even all animated at the same rate. Games running on that engine (Hexen 2 comes to mind) also had limited physics objects (bodies, barrels, and the like) that were moved around the game world at lower than per-frame rates. Later releases of both Bioshock and Quake would smooth out those animations, but no one would argue that the real performance of those games was ever limited to those lower numbers.
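That "smoothing" is the same trick again: run part of the simulation at a fixed low rate and interpolate at render time. A generic sketch of the pattern (illustrative rates and names, not Quake's or Bioshock's actual code):

```python
PHYSICS_RATE = 30            # physics/animation locked to 30 Hz (illustrative)
RENDER_RATE = 120            # the game still draws 120 frames per second
PHYSICS_DT = 1.0 / PHYSICS_RATE

def step_physics(x):
    """One fixed-rate physics tick: move some object at 2 units/second."""
    return x + 2.0 * PHYSICS_DT

def run(frames=12):
    prev_x = curr_x = 0.0    # the last two fixed-rate simulation states
    accumulator = 0.0
    frame_dt = 1.0 / RENDER_RATE

    for frame in range(frames):
        accumulator += frame_dt

        # Advance the simulation only in whole 1/30 s ticks.
        while accumulator >= PHYSICS_DT:
            prev_x, curr_x = curr_x, step_physics(curr_x)
            accumulator -= PHYSICS_DT

        # Drawing curr_x directly gives you the 30 Hz stutter of the
        # original releases; blending the last two ticks is the smoothing
        # that later patches and source ports added.
        alpha = accumulator / PHYSICS_DT
        draw_x = prev_x + (curr_x - prev_x) * alpha
        print(f"frame {frame:2d}: raw={curr_x:.4f} smoothed={draw_x:.4f}")

run()
```

Whether the in-between states come from this kind of local interpolation, from network snapshots, or from a frame generator, the underlying simulation is still ticking at the lower rate.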