...it's not about cherry-picking vectors, it just needs more R&D. A few years ago real-time ray tracing was deemed impossible, and now we have semi-path-traced games; a billion polygons per scene was impossible until UE5 was revealed running on the PS5 pushing billions of triangles seamlessly; even unusual technologies like Media Molecule's Dreams solve things traditional pipelines couldn't -- because people dared to research them. I can only see vectors solving aliasing for good; otherwise it's going to be endless reconstruction techniques that will never be clearer than a vector image and will always have artefacts.
It's very easy to overlook slight inaccuracies in a still image -- in a moving image you would have constant wobbles and pops. Also, of course, this technique would need to run in roughly a millisecond or less to fit into the pipeline at 60fps: the whole frame is only ~16.7ms, shared between every pass in the renderer.
I'm not sure why you're fixated on vectors -- you're bringing an interesting intuition (geometry is resolution-independent, frame buffers aren't) but as joej and shifty said earlier, we already have geometry transformed into screen space -- that's what the vertex shader does -- and we already have that converted into an infinitely sharp 2d image at a given resolution -- that's what rasterization does. We also have hardware features to evaluate those edges at a higher resolution for AA -- that's what MSAA does.
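To make that pipeline concrete, here's a toy CPU sketch of what the vertex shader plus the fixed-function stages do: project a view-space vertex into clip space, perspective-divide into NDC, and map to pixel coordinates. It's a rough illustration, not how any driver implements it, and the fov/resolution values are arbitrary:

```python
import math

def perspective(fovy_deg, aspect, near, far):
    # Standard OpenGL-style perspective projection matrix,
    # written row-major for readability.
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0,                         0.0],
        [0.0,        f,   0.0,                         0.0],
        [0.0,        0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0,        0.0, -1.0,                        0.0],
    ]

def mat_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def to_screen(v_view, proj, width, height):
    # "Vertex shader": project a view-space position into clip space.
    clip = mat_mul_vec(proj, v_view + [1.0])
    # Fixed function: perspective divide into NDC, then viewport transform.
    ndc = [clip[i] / clip[3] for i in range(3)]
    x = (ndc[0] * 0.5 + 0.5) * width
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * height  # flip y for screen coordinates
    return (x, y, ndc[2])

proj = perspective(60.0, 16 / 9, 0.1, 100.0)
print(to_screen([0.3, 0.2, -5.0], proj, 1920, 1080))  # sub-pixel float coords
```

Note the result is still continuous sub-pixel coordinates -- the geometry is "infinitely sharp" right up until rasterization quantizes it, and MSAA just runs the coverage test at several sample positions per pixel instead of one.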
What trying to do that after rendering the image would get you is somewhat interesting to think about, though:
You'd get vectors representing all of the information which wasn't captured in the initial geometry: texture detail on surfaces, screen-space effects like motion blur, etc. Even if a machine could generate it, that would be a lot of vectors, and it's unlikely the image would look exactly the same scaled up or down. We'd be generating vectors from a given resolution, so even if we did a perfect job, an edge which looked sharp at that res might actually be soft at a higher res in the source content... but we wouldn't have that info to encode into the vectors, so we'd lose it during "vector upscaling".
Ultimately that sounds expensive to calculate, and maybe even expensive just to rasterize -- a vector drawing capturing all of the photorealistic texture detail of a final render might be almost as complex as the full 3d scene we sent to the gpu initially, with tons of overdraw from transparent gradients everywhere, so we might end up doing just as much rasterization as we would by simply rendering the original scene at a higher resolution.
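To get a feel for the scale of that, here's a toy estimate (my own contrived sketch, not based on any real vectorizer): synthesize a small "frame" of a disc over per-pixel noise standing in for texture detail, then count the pixels where the luminance gradient is strong -- each one is a spot where an exact vectorization would need to spend at least a path segment or gradient stop:

```python
import math, random

W, H = 256, 256
random.seed(42)

# Toy "rendered frame": a bright disc over noise that stands in for
# photorealistic texture detail.
img = [[0.0] * W for _ in range(H)]
for y in range(H):
    for x in range(W):
        base = 0.8 if (x - 128) ** 2 + (y - 128) ** 2 < 80 ** 2 else 0.2
        img[y][x] = base + random.uniform(-0.15, 0.15)

# Count "edge" pixels: anywhere the luminance gradient magnitude is large.
edges = 0
for y in range(1, H - 1):
    for x in range(1, W - 1):
        gx = img[y][x + 1] - img[y][x - 1]
        gy = img[y + 1][x] - img[y - 1][x]
        if math.hypot(gx, gy) > 0.2:
            edges += 1

print(f"{edges} of {W * H} pixels ({100.0 * edges / (W * H):.1f}%) carry vector-worthy detail")
```

Run the same thing without the noise and only the disc's silhouette gets flagged -- it's the surface detail, not the silhouettes, that makes a faithful vector representation explode in primitive count.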
It's hard to imagine doing better at getting vector geometry onto the screen than we can by culling really well in 3d in the first place and sending something as close to a "2d slice" of visible 3d triangles as possible, so technologies like nanite (or maybe REYES-style micropolygon renderers) are a much better fit for getting your desired data on screen at your desired sharpness.
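For flavor, the heart of that nanite-style approach is just an error metric: pick the coarsest cluster/LOD whose simplification error projects to under a pixel, so the screen never receives detail it can't display. A minimal sketch with made-up numbers (the LOD chain, fov, and 1-pixel threshold are all illustrative assumptions):

```python
import math

def projected_error_px(geometric_error, distance, fovy_deg, screen_height_px):
    # Scale a world-space simplification error into pixels at this distance.
    px_per_unit = screen_height_px / (2.0 * distance * math.tan(math.radians(fovy_deg) / 2.0))
    return geometric_error * px_per_unit

def pick_lod(lods, distance, fovy_deg=60.0, screen_height_px=1080):
    # lods: list of (triangle_count, geometric_error), coarsest first.
    # Take the first (cheapest) LOD whose error is invisible on screen.
    for tris, err in lods:
        if projected_error_px(err, distance, fovy_deg, screen_height_px) < 1.0:
            return (tris, err)
    return lods[-1]  # even the finest LOD is visibly coarse; use it anyway

# Hypothetical chain: each level has ~4x the triangles, ~1/4 the error.
chain = [(1_000, 0.32), (4_000, 0.08), (16_000, 0.02), (64_000, 0.005)]
for d in (2.0, 30.0, 100.0):
    print(d, pick_lod(chain, d))  # farther away -> coarser LOD passes the test
```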
Regarding your thought that current gfx might be on the wrong track: I totally share this attitude. They give the impression of caring about optimization, and game devs claim to have more optimization expertise than any other programming field.
But imo that's not true. They only do low-level, close-to-the-metal optimizations and forget about coming up with faster algorithms; they're stuck on brute-force solutions for everything related to gfx.
I don't think that's quite fair. Devs are focused on close-to-the-metal stuff because cache coherency and gpu utilization are such huge, insurmountable factors for our field that it's hard to look past them. Many state-of-the-art techniques are overly complex algorithms in principle, but earn their keep by being more cache friendly. There's always a need for more boundary-pushing and creative algorithms (look at nanite, which is mostly a cool computer-science-y tree algo), but game programming is mostly focused on the challenges of getting to the next level of complexity with the content we need to ship right now.
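As a contrived illustration of how much layout dominates, here's the same particle update in array-of-structs vs struct-of-arrays form. Python timings won't prove anything; the shape of the code is the point -- the second version streams memory linearly, which is what caches, SIMD, and GPUs reward in native code. All the names here are made up:

```python
from dataclasses import dataclass

# Array-of-structs: tidy object-oriented layout. Iterating positions hops
# across the heap and drags cold fields (mass, age) into cache anyway.
@dataclass
class Particle:
    x: float
    y: float
    z: float
    vx: float
    vy: float
    vz: float
    mass: float
    age: float

def update_aos(particles, dt):
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt
        p.z += p.vz * dt

# Struct-of-arrays: one flat array per field. The update touches only the
# six arrays it needs, contiguously -- in C/C++/Rust this loop also
# vectorizes trivially.
def update_soa(xs, ys, zs, vxs, vys, vzs, dt):
    for i in range(len(xs)):
        xs[i] += vxs[i] * dt
        ys[i] += vys[i] * dt
        zs[i] += vzs[i] * dt
```

Same algorithm either way -- which is the point: a "dumber" layout-aware version routinely beats an asymptotically fancier but pointer-chasing one.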