I thought that increasing internal rendering precision couldn't mess things up, and that render targets were parts of the frame buffer, but am I wrong?
Increasing internal calculation precision does not mess things up, unless the rendering technique specifically depends on lower-precision rendering or clamping (some toon shading algorithms can, for example).
However, if the game instructs the API to create a 16-bit texture and the API/driver creates a 32-bit texture instead, there will be major problems if the game locks the surface and processes it on the CPU, expecting 16 bits per pixel. Most surfaces cannot be locked (back buffer, front buffer, z-buffer and all default-pool textures in DirectX, unless explicitly created as lockable surfaces), so this is not often a problem, and the driver can usually detect which surfaces can safely be rendered at 32-bit depth. However, there are cases where the driver cannot be sure, so it's better to just obey the program and create a 16-bit texture. A potential image quality improvement is not as important as a potential incompatibility (program crashes/hangs).
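To make the failure mode concrete, here is a minimal Direct3D 9 sketch (the function name and the darkening loop are made up for illustration) of the kind of CPU-side processing games do: create a 16-bit R5G6B5 texture in a lockable pool, lock it, and walk the pixels as 16-bit words. If the driver silently allocated a 32-bit surface instead, the assumed pixel size no longer matches the real layout and the loop corrupts the data.

```cpp
// Sketch only: assumes 'device' is a valid IDirect3DDevice9*, minimal error handling.
#include <d3d9.h>

void DarkenTexture16(IDirect3DDevice9* device)
{
    IDirect3DTexture9* tex = NULL;

    // The game asks for a 16-bit R5G6B5 texture in the managed pool (lockable).
    if (FAILED(device->CreateTexture(256, 256, 1, 0, D3DFMT_R5G6B5,
                                     D3DPOOL_MANAGED, &tex, NULL)))
        return;

    D3DLOCKED_RECT lr;
    if (SUCCEEDED(tex->LockRect(0, &lr, NULL, 0)))
    {
        for (UINT y = 0; y < 256; ++y)
        {
            // Pitch is in bytes; the code assumes each pixel is exactly 2 bytes wide.
            WORD* row = (WORD*)((BYTE*)lr.pBits + y * lr.Pitch);
            for (UINT x = 0; x < 256; ++x)
            {
                // Halve each 5:6:5 channel. If the driver actually gave us a
                // 32-bit surface, these reads/writes hit the wrong bytes.
                WORD p = row[x];
                WORD r = (WORD)(((p >> 11) & 0x1F) >> 1);
                WORD g = (WORD)(((p >> 5)  & 0x3F) >> 1);
                WORD b = (WORD)(( p        & 0x1F) >> 1);
                row[x] = (WORD)((r << 11) | (g << 5) | b);
            }
        }
        tex->UnlockRect(0);
    }
    tex->Release();
}
```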
In earlier DirectX versions (DX5 - DX7) all render targets and depth buffers could be locked by default. In newer DirectX versions render targets and depth buffers cannot be locked by default (there are specific formats and creation flags for lockable render targets). This allows the hardware to store render targets in a format better suited for hardware rendering (cache-friendly tiled buffers, stencil bits kept in a separate buffer from the z bits, etc.).
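A rough Direct3D 9 sketch of that distinction (sizes and formats are illustrative, not from the original post): render targets are non-lockable unless the application explicitly asks otherwise at creation time, and depth buffers are only lockable through the dedicated *_LOCKABLE formats.

```cpp
#include <d3d9.h>

void CreateTargets(IDirect3DDevice9* device)
{
    IDirect3DSurface9* rt = NULL;
    IDirect3DSurface9* rtLockable = NULL;
    IDirect3DSurface9* ds = NULL;

    // Normal render target: Lockable = FALSE, so the driver is free to keep it
    // in a tiled/swizzled layout that the CPU never sees.
    device->CreateRenderTarget(1024, 768, D3DFMT_A8R8G8B8,
                               D3DMULTISAMPLE_NONE, 0, FALSE, &rt, NULL);

    // Lockable render target: the driver must provide CPU access, which usually
    // forces a linear layout and costs performance.
    device->CreateRenderTarget(1024, 768, D3DFMT_A8R8G8B8,
                               D3DMULTISAMPLE_NONE, 0, TRUE, &rtLockable, NULL);

    // Depth/stencil: only the dedicated lockable formats (e.g. D3DFMT_D16_LOCKABLE)
    // may be locked; a plain D3DFMT_D24S8 surface cannot be.
    device->CreateDepthStencilSurface(1024, 768, D3DFMT_D16_LOCKABLE,
                                      D3DMULTISAMPLE_NONE, 0, FALSE, &ds, NULL);

    // ... use the surfaces, then release them.
    if (rt) rt->Release();
    if (rtLockable) rtLockable->Release();
    if (ds) ds->Release();
}
```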
Locking a buffer = a method of mapping a GPU resource (texture, vertex buffer, index buffer, etc.) into the CPU address space, which lets the developer read and modify the GPU data with the CPU. Usually most buffers are locked once at the beginning of the game/level, when their data is filled in; after that only dynamic buffers need to be updated (locked) again.
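As a concrete illustration (a Direct3D 9 sketch with made-up vertex data, not code from the original post): a static vertex buffer is locked once at load time to fill it, while a dynamic buffer is re-locked every frame with D3DLOCK_DISCARD so the driver can hand back fresh memory instead of stalling on the GPU.

```cpp
#include <d3d9.h>
#include <string.h>

struct Vertex { float x, y, z; DWORD color; };
#define VERTEX_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

// Locked once at load time: fill a static buffer with geometry and forget about it.
IDirect3DVertexBuffer9* CreateStaticBuffer(IDirect3DDevice9* device,
                                           const Vertex* data, UINT count)
{
    IDirect3DVertexBuffer9* vb = NULL;
    if (FAILED(device->CreateVertexBuffer(count * sizeof(Vertex),
                                          D3DUSAGE_WRITEONLY, VERTEX_FVF,
                                          D3DPOOL_MANAGED, &vb, NULL)))
        return NULL;

    void* dst = NULL;
    if (SUCCEEDED(vb->Lock(0, count * sizeof(Vertex), &dst, 0)))
    {
        memcpy(dst, data, count * sizeof(Vertex));  // CPU writes into the GPU resource
        vb->Unlock();
    }
    return vb;
}

// Locked every frame: dynamic data (particles, UI, etc.) is rewritten with DISCARD.
// Assumes 'vb' was created with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT, which DISCARD requires.
void UpdateDynamicBuffer(IDirect3DVertexBuffer9* vb,
                         const Vertex* data, UINT count)
{
    void* dst = NULL;
    if (SUCCEEDED(vb->Lock(0, count * sizeof(Vertex), &dst, D3DLOCK_DISCARD)))
    {
        memcpy(dst, data, count * sizeof(Vertex));
        vb->Unlock();
    }
}
```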