Alucardx23
Hi guys, I want to share an idea I had and hear your opinions. Since the really big problem that needs to be tackled with cloud gaming/processing is lag, why not combine streaming visual information with local processing? This would solve the input-lag problem while still giving access to the processing power that is available remotely.
Think of this as an evolution of the way graphics were handled in Resident Evil 2, or in any other game with pre-processed backgrounds, from Donkey Kong to Fear Effect. These games managed to give the impression of higher detail by pre-processing the backgrounds on stronger hardware. In my example, instead of pre-processed backgrounds saved on the disc as pictures, the backgrounds would be a stream of visual information from the server.
Taking this God of War 3 picture as an example, only the elements inside the red line would be rendered locally; anything outside the red line would be rendered on a remote server.
From the console's perspective, it would just need to replace the green background with the video stream.
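To make the compositing step concrete, here is a minimal sketch of what "replace the green background with the video stream" could look like per frame. This is a hypothetical illustration, not how any console actually does it: frames are represented as rows of (R, G, B) tuples, and the key color is assumed to be pure green.

```python
# Hypothetical sketch: simple chroma-key compositing of a locally
# rendered frame over a background frame decoded from the stream.
# A frame is a list of rows, each row a list of (R, G, B) tuples.
GREEN_KEY = (0, 255, 0)  # assumed key color marking "background here"

def composite(local_frame, streamed_frame):
    """Keep locally rendered pixels; where the local frame shows the
    green key color, substitute the pixel from the streamed background."""
    return [
        [stream_px if local_px == GREEN_KEY else local_px
         for local_px, stream_px in zip(local_row, stream_row)]
        for local_row, stream_row in zip(local_frame, streamed_frame)
    ]

# Tiny 1x2 example: one foreground pixel (red), one green-keyed pixel.
local = [[(255, 0, 0), (0, 255, 0)]]
stream = [[(10, 10, 10), (20, 20, 20)]]
result = composite(local, stream)
# result: [[(255, 0, 0), (20, 20, 20)]]
```

In practice a real compositor would run on the GPU and would need tolerance around the key color, but the per-pixel decision is the same idea.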
This could also be used to stream texture information as video; think of it as an evolution of parallax mapping, used to stream 3D textures. Why would I need to draw and save the textures locally, as in the example below, if I can just stream that information from the server as video? If I'm not mistaken, this would mean faster loading times, and it would free up a lot of local processing to be used on everything that is sensitive to lag.
Since local memory would no longer be a limitation, you could add all the detail you want.
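The texture-streaming idea above could be sketched as a small client-side cache that is fed by decoded frames from the server instead of loading texture files from disk. All names here (the class, `on_frame`, `sample`) are hypothetical, just to show the flow:

```python
# Hypothetical sketch: textures arrive as decoded video frames from the
# server instead of being loaded from local storage.
class StreamedTextureCache:
    """Holds the most recent decoded frame for each texture id."""

    def __init__(self):
        self._textures = {}  # texture id -> latest decoded frame bytes

    def on_frame(self, texture_id, decoded_frame):
        # Called by the video decoder whenever a new texture frame arrives;
        # the new frame simply replaces the old one.
        self._textures[texture_id] = decoded_frame

    def sample(self, texture_id, fallback=None):
        # The renderer reads whatever frame is current; if the stream has
        # not delivered this texture yet, it uses a local fallback.
        return self._textures.get(texture_id, fallback)

cache = StreamedTextureCache()
cache.on_frame("wall_diffuse", b"\x10\x20\x30")  # pretend decoded bytes
```

The appeal is that the cache only ever holds the textures currently in view, so server-side detail is not bounded by local memory; the cost is that a stalled stream leaves the renderer on the last frame or the fallback.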