I feel that the lawsuit is trying to conflate a term used in generalized marketing in one particular consumer electronics context with a debate over terms of art removed from general understanding (Guerrilla Games having something of a history with imprecise technical language aside).
My worry would be that giving this suit standing would amount to mandating a specific relationship between a rather nebulously timed "1080p," used to describe the behavior of pixels in a video signal sent to the TV, and how a set of graphical algorithms samples a simulated interactive 3D environment.
1080p for a TV, in terms of more static media, asks whether all lines of resolution are refreshed within a particular frame. It doesn't demand that every pixel in every line differ from the frame before, but unlike 1080i, where pixels in one line are transmitted as part of a different field than those in the next line, there is no such split across frames.
For a broadcast or media playback, this relationship and the values that define the visual output of those specific screen positions as transmitted to the TV are known and generally not variable.
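To make the 1080p/1080i distinction concrete, here's a minimal sketch in Python. It models an idealized signal only (real standards add blanking intervals, field-order conventions, and so on, which I'm glossing over):

```python
# Which source moment does transmitted line `y` of output frame `n` represent?

def source_time_1080p(n: int, y: int) -> int:
    # Progressive: every line of output frame n comes from the same instant.
    return n

def source_time_1080i(n: int, y: int) -> int:
    # Interlaced: even and odd lines carry different fields, so adjacent
    # lines come from two temporally distinct captures.
    return 2 * n + (y % 2)

# Lines 0 and 1 of the same 1080p frame share a source moment;
# in 1080i they straddle two different ones.
assert source_time_1080p(5, 0) == source_time_1080p(5, 1)
assert source_time_1080i(5, 0) != source_time_1080i(5, 1)
```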
I'm assuming we're going for the "real" HD moniker of 1920x1080, to avoid the debate in the media and TV realm where the actual length of those 1080 lines of resolution is not so set in stone. I don't think that debate actually has bearing here either way.
The final render output of KZ's multiplayer mode does fit the literal definition, at least at the level of what gets passed through the HDMI cable to the TV. That alone wouldn't exclude simple upscaling, but the actual final frames, as set down at the final buffer flip, are 1920x1080 pixels in dimension as well. Any given screen position can, at some point in gameplay, change in the next frame in a way that has no fixed relationship with other parts of the screen.
The debate arises because the number of newly rasterized visual samples of the simulation in screen space per frame is half that. The other half have values that rely on fetching three half-resolution data sets with alternating columns plus a 1920x1080 reference; internal tracking values and color comparisons are used to derive final outputs for the missing columns.
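As a rough sketch of what that kind of column-interleaved reconstruction looks like, assuming a toy model (the naive carry-over below stands in for Guerrilla's actual motion-compensated pipeline, and all names and details are my own):

```python
import numpy as np

def reconstruct_frame(new_columns: np.ndarray,  # 1080 x 960 x 3, freshly rasterized
                      prev_frame: np.ndarray,   # 1080 x 1920 x 3, last full output
                      frame_index: int) -> np.ndarray:
    """Toy column-interleaved temporal reconstruction.

    Each frame rasterizes only half the columns; the other half are
    carried over (here naively, with no motion compensation) from the
    previous full-resolution output.
    """
    out = prev_frame.copy()
    offset = frame_index % 2          # which column parity is fresh this frame
    out[:, offset::2] = new_columns   # drop in the newly rendered half
    # A real implementation would reproject the stale half along motion
    # vectors and use color comparisons to reject samples that no longer
    # match the scene -- the "internal tracking values" mentioned above.
    return out
```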
The question I ask is: does the version of 1080p the lawsuit claims we were cheated of, a version that
1) Has been allowed in extant marketing to shave off columns
2) Defines how a television should interpret specific values in a signal and how it should sync those up with its physical output
3) Is used in common vernacular, but has, through convenience and the natural imprecision of human language, come to overlap with technical and academic communication and debate
4) Defines a specific relationship of pixels in a succession of transmitted frames
match up with what the final render output of KZ does?
There is no shortfall in pixels submitted to the outside world in a specific frame.
The number of instantaneous samples every 1/30th of a second is close to being equivalent to the non-disputed single player mode.
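Assuming the halved sample count described above, the back-of-envelope arithmetic:

```python
# Newly rasterized samples per 1/30th of a second in each mode.
sp = 1920 * 1080 * 1   # single player: full frames at 30 fps -> 1 frame per window
mp = 960 * 1080 * 2    # multiplayer: half the columns at up to 60 fps -> 2 frames
print(sp, mp)          # 2073600 2073600 -- equal whenever MP actually holds 60 fps
```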
The primary detraction is that this method loses accuracy and precision in how well it represents the dynamic state of the simulation at any instant.
The relationship of the filled-in values in the final output is derived from an algorithmically mutable set of source values, at least one of which is what people would consider 1080p and is generated dynamically, with varying amounts of processing possible for each pixel.
While many pixels may be the same, or very nearly the same, they have no fixed relationship beyond the fact that the pixels that change the least are those whose simulation values have changed the least. Any given pixel can change from one frame to the next, but the renderer doesn't perform the same set of operations on each one from one frame to the next. One outcome is less demanding, but the descriptions of how the renderer gets to the point of reprojecting a pixel show it is not trivial.
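To illustrate why the work per pixel isn't fixed, here is a hypothetical decision path for one missing pixel (the threshold and the fallback are invented for illustration, not taken from the actual renderer):

```python
def fill_missing_pixel(reprojected, neighbor_avg, color_error, threshold=0.1):
    """Choose a value for a pixel that wasn't rasterized this frame.

    reprojected  -- history sample fetched along the pixel's motion vector
    neighbor_avg -- average of the freshly rendered adjacent columns
    color_error  -- how badly the history disagrees with its surroundings
    """
    if color_error < threshold:
        # History looks consistent with the scene: cheap path, reuse it.
        return reprojected
    # History rejected (disocclusion, fast motion): fall back to spatial
    # interpolation from the new columns -- a different set of operations,
    # and not necessarily a cheaper one.
    return neighbor_avg
```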
Should the court demand a mandatory link between 1080p, as a TV sees movie playback, and the specific number of sampling points determined by a fixed-function rasterizer's fragment coverage, passed unmasked through a fragment shader, and then written as a singular contributor by the render back-end to a buffer, possibly up to 2.5x as rapidly as a movie runs?
Is this saying that wholly imprecise advertising must mandate a highly specific implementation of a 3D engine's output?
We have game engines that rely heavily on fractional-resolution intermediate values, on values held over from previous frames, and on assets that have varying resolutions on disk and varying visual resolution depending on where they sit relative to the camera. We have values that never go through the conventional raster pipeline. We don't require that every pixel that comes out of the renderer have had every shader applied to it identically.
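A hypothetical half-resolution bloom pass shows the pattern; every detail here is mine, not any particular engine's:

```python
import numpy as np

def cheap_bloom(frame: np.ndarray) -> np.ndarray:
    """Add a glow term computed at a quarter of the pixel count.

    frame is HxWx3 floats in [0, 1], with H and W even. Nobody calls
    the result 'not 1080p', even though three out of four contributing
    samples were never computed at full resolution.
    """
    half = frame[::2, ::2]                 # fractional-resolution intermediate
    glow = np.clip(half - 0.8, 0.0, None)  # keep only the bright parts
    # Nearest-neighbor upsample back to full resolution.
    glow_full = glow.repeat(2, axis=0).repeat(2, axis=1)
    return np.clip(frame + 0.5 * glow_full, 0.0, 1.0)
```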
How can we declare a somewhat artifacted and possibly blurry reprojected output inferior when, as a rule, the non-objectionable games have everyone slathering on a heavy layer of film grain, god rays, simulated cataracts, bloom, fog, DOF, and then maybe an artifacting scaler on top? What is the legal measure for being too blurry to count as whatever 1080p should have been?
Arriving at a generally similar, but likely artifacted and less precise, output is also how many smart TVs get away with claiming they are 240Hz when that is physically beyond their panels.
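In the same spirit, a sketch of how those extra frames get fabricated (pure blending here; real sets do motion estimation, and this is only the principle):

```python
def fabricate_240hz(frame_a, frame_b):
    """Make three in-between frames from two real 60 Hz frames.

    The advertised 240 Hz is mostly derived output, not sourced output,
    much like the reprojected columns above.
    """
    return [frame_a * (1 - t) + frame_b * t for t in (0.25, 0.5, 0.75)]
```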
Is the court to take a marketing blooper from one realm of extreme imprecision and non-technical vernacular and make it an exact requirement in a different field, where the definitions are actually different and neither really understood nor considered in the former?