1:50:13 Supporter Q4: Could developers run a game's logic at high rates to improve responsiveness, while keeping frame rate untouched?

Just to add some detail to the DF answer, the question is a perfect example of where folks get confused between "latency/responsiveness" and throughput ("frame rate" here).
The latency or "responsiveness" of a game is purely the end-to-end time between making an input and seeing (or hearing, or experiencing in any other way, but generally seeing in this context) the output. Decoupling the game logic/input/simulation from rendering is indeed very common, but it is not done primarily for latency reasons. In fact, decoupling them too much adds a bit of complexity to late-latched input sampling techniques (e.g. NVIDIA Reflex or equivalents). The primary purpose of running these simulations at higher rates is usually physics stability, especially in things like racing games, where high speeds put you at great risk of undersampling important effects and interactions if you run the simulation too infrequently.
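For concreteness, here's a minimal sketch of the usual fixed-timestep pattern this describes. The function names and the 240 Hz rate are just placeholders (not any particular engine's API): the simulation ticks at a fixed high rate for stability, while rendering happens once per pass of the outer loop.

```cpp
// Minimal sketch of a decoupled, fixed-timestep loop. Function names and the
// 240 Hz rate are illustrative placeholders, not any particular engine's API.
// The point: the simulation ticks at a fixed high rate for physics stability,
// while rendering runs as often as the frame budget allows.
#include <chrono>
#include <cstdio>

static void SampleInput() { /* poll controller/keyboard state */ }
static void Simulate(double dt) { /* advance physics by dt seconds */ (void)dt; }
static void Render(double alpha) { /* draw, blending by alpha between the
                                      last two simulation states */ (void)alpha; }

int main() {
    using clock = std::chrono::steady_clock;
    constexpr double simDt = 1.0 / 240.0;   // fixed simulation step, in seconds
    double accumulator = 0.0;
    auto previous = clock::now();

    for (int frame = 0; frame < 1000; ++frame) {   // stand-in for the real loop condition
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Run as many fixed steps as wall-clock time demands. The high step
        // rate keeps fast-moving physics well sampled; it does not by itself
        // shorten the input-to-display latency.
        while (accumulator >= simDt) {
            SampleInput();
            Simulate(simDt);
            accumulator -= simDt;
        }

        // One render per loop iteration, interpolating between simulation
        // states so motion stays smooth at whatever rate frames complete.
        Render(accumulator / simDt);
    }
    std::puts("done");
}
```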
Knowing a game's (stable) frame rate gives a lower bound on latency (i.e. if it takes this long to draw, the end-to-end latency has to be at least that), but it gives no upper bound, since the other contributors can be arbitrarily long. So while reducing the time to draw a frame will generally reduce the end-to-end latency as well as increase the frame rate, thinking of the two as the same thing leads to exactly the kind of confusion in the question.
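To make the lower-bound point concrete, here's a toy back-of-the-envelope sum over pipeline stages. The numbers are made up for illustration, not measurements: frame time is only one term, so two games with the same frame rate can have very different end-to-end latency.

```cpp
// Illustrative only: the stage durations below are invented, not measured.
// The point is that frame time is just one term in the end-to-end latency
// sum, so it bounds latency from below but not from above.
#include <cstdio>

int main() {
    const double frameTimeMs     = 1000.0 / 60.0; // 60 fps -> ~16.7 ms to draw
    const double inputSamplingMs = 4.0;           // waiting for the next input poll
    const double simToRenderMs   = 8.0;           // sim result queued for the renderer
    const double presentQueueMs  = 16.7;          // frames buffered before display
    const double scanOutMs       = 8.0;           // display scan-out / pixel response

    const double total = inputSamplingMs + simToRenderMs + frameTimeMs +
                         presentQueueMs + scanOutMs;

    std::printf("frame time (lower bound): %.1f ms\n", frameTimeMs);
    std::printf("end-to-end latency here : %.1f ms\n", total);
    // Prints ~16.7 ms vs ~53 ms: same frame rate, very different latency,
    // depending entirely on the other stages in the pipeline.
}
```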
tl;dr: this is yet another example where frame rates are confusing; thinking in frame times is the right approach here.