1280x1024 fov question

Nappe1 said:
Malcom: so eventually, sooner or later, everyone will have their own small iMax projector, a room with a spherical roof, and a GFX card capable of rendering a polar coordinate system? ;)

Hmm, you didn't understand what I meant :)
Well, it's not easy to explain things like this. Maybe it's easier if you know a lot of definitions from the graphics world, but I don't.
 
malcolm said:
The fish-eye thing is probably what I mean.
And it is the only correct way to fit a 120° camera fov into a 40° view fov.
This is what I meant, of course; you wouldn't want to use a 40° game fov.
You cannot have 120° in 40° with the "correct" way you describe.
What's important is to get all objects on the screen to be the correct size in the viewer's field of view.
If you calculate something for a 120° fov, at the side of the screen it will be almost twice the size, because it is meant to be looked at from almost twice the distance that you are.
A fish-eye distorted view would not be "more correct" than using a 120° fov when you're sitting 2m away from your 17" screen. The only "right" thing would be to get closer to the screen or use a bigger screen/multimonitor setup.
A fish-eye image is also hard to render since you need very detailed tessellation of polygons. Today's renderers can only paint triangles with straight edges.
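As a back-of-envelope check of the stretching the two posts are arguing about, using the standard planar (rectilinear) projection with image-plane distance $d$:

$$x = d\tan\theta, \qquad \frac{dx}{d\theta} = \frac{d}{\cos^2\theta}$$

At the edge of a 120° fov ($\theta = 60°$) the radial magnification is $1/\cos^2 60° = 4$ times that at the screen center, whereas an equidistant fish-eye mapping $x = d\,\theta$ keeps the magnification constant across the image.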
 
malcolm said:
Nappe1 said:
Malcom: so eventually, sooner or later, everyone will have their own small iMax projector, a room with a spherical roof, and a GFX card capable of rendering a polar coordinate system? ;)

Hmm, you didn't understand what I meant :)
Well, it's not easy to explain things like this. Maybe it's easier if you know a lot of definitions from the graphics world, but I don't.

Actually, I got what you meant, but with my comment I meant that you can always have more. :) With a spherical room and an iMax projector you can have a 360-degree horizontal and almost 180-degree vertical fov.

The best examples of iMax are the multi-lens projectors used in planetariums. Applying the same tech to computer games would definitely revolutionize the term "virtual reality". :) Too bad all the equipment needed for such "virtual reality rooms" would be so expensive that it would hardly ever become a mass product. :)
 
Xmas said:
malcolm said:
The fish-eye thing is probably what I mean.
And it is the only correct way to fit a 120° camera fov into a 40° view fov.
This is what I meant, of course; you wouldn't want to use a 40° game fov.
You cannot have 120° in 40° with the "correct" way you describe.
What's important is to get all objects on the screen to be the correct size in the viewer's field of view.
If you calculate something for a 120° fov, at the side of the screen it will be almost twice the size, because it is meant to be looked at from almost twice the distance that you are.
A fish-eye distorted view would not be "more correct" than using a 120° fov when you're sitting 2m away from your 17" screen. The only "right" thing would be to get closer to the screen or use a bigger screen/multimonitor setup.
A fish-eye image is also hard to render since you need very detailed tessellation of polygons. Today's renderers can only paint triangles with straight edges.

How is the rasterizing done? Or whatever that part is called.
I thought it just used the angle coordinates of the pixels to determine whether a polygon covers each one or not (or whether the angle coordinate goes through the polygon or not).
If it works like this, they could just use different angle coordinates for the pixels. (Couldn't they precalculate the grid for this? Or I'm sure it's a very simple formula to do in real time.)
Maybe it works differently and it's not possible for some reason?
 
Actually, it would be a lot more correct, and for what it is (a different camera and view fov) it would be the only right way, actually the perfect way. Any other way is deformed; this is just the way it has to be.
 
malcolm said:
How is the rasterizing done? Or whatever that part is called.
I thought it just used the angle coordinates of the pixels to determine whether a polygon covers each one or not (or whether the angle coordinate goes through the polygon or not).
If it works like this, they could just use different angle coordinates for the pixels. (Couldn't they precalculate the grid for this? Or I'm sure it's a very simple formula to do in real time.)
Maybe it works differently and it's not possible for some reason?
Pixel angle coordinates? What do you mean by this?

Rasterization is entirely done in 2D, using the viewport-transformed vertex coordinates of the triangles. The edges of those triangles are straight lines.
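A minimal sketch of that stage in Python (the function name and layout are illustrative, not any particular API):

```python
import math

def project_to_pixels(vertex, fov_deg, width, height):
    """Perspective-project a camera-space vertex (camera looks down -z),
    then viewport-transform the result to pixel coordinates."""
    x, y, z = vertex
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length for this fov
    aspect = width / height
    ndc_x = (f / aspect) * x / -z   # perspective divide: straight lines in 3D
    ndc_y = f * y / -z              # stay straight in the 2D projection
    px = (ndc_x + 1.0) * 0.5 * width    # NDC [-1, 1] -> pixel coordinates
    py = (1.0 - ndc_y) * 0.5 * height   # flip y: screen y grows downward
    return px, py

# A triangle's three vertices become three 2D points; from here on,
# rasterization works purely with these 2D pixel coordinates.
tri = [(-1.0, 0.0, -3.0), (1.0, 0.0, -3.0), (0.0, 1.0, -3.0)]
print([project_to_pixels(v, 90.0, 640, 480) for v in tri])
```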
 
Xmas said:
malcolm said:
How is the rasterizing done? Or whatever that part is called.
I thought it just used the angle coordinates of the pixels to determine whether a polygon covers each one or not (or whether the angle coordinate goes through the polygon or not).
If it works like this, they could just use different angle coordinates for the pixels. (Couldn't they precalculate the grid for this? Or I'm sure it's a very simple formula to do in real time.)
Maybe it works differently and it's not possible for some reason?
Pixel angle coordinates? What do you mean by this?

Rasterization is entirely done in 2D, using the viewport-transformed vertex coordinates of the triangles. The edges of those triangles are straight lines.

How is it calculated whether a polygon covers a pixel or not?
Polygons are in 3D, not 2D, so that part must use the angle coordinates of those pixels.
So change those angles and you get the correct geometry.
I know it's a 2D projection, if that is what you mean, but that projection does have field-of-view angle coordinates for the pixels, right? Or it must work something like this somehow...
I think it is determined for every pixel whether a polygon covers it or not?
Not just for the corners of the polygons, with lines drawn in between. So if it's done for every pixel, you just need to change those coordinates of the pixels to get the curved lines. I don't think this would be complex even if it doesn't work like this... somehow then :)
 
malcolm said:
How is it calculated whether a polygon covers a pixel or not?
Polygons are in 3D, not 2D, so that part must use the angle coordinates of those pixels.
So change those angles and you get the correct geometry.
I know it's a 2D projection, if that is what you mean, but that projection does have field-of-view angle coordinates for the pixels, right? Or it must work something like this somehow...
I think it is determined for every pixel whether a polygon covers it or not?
Not just for the corners of the polygons, with lines drawn in between. So if it's done for every pixel, you just need to change those coordinates of the pixels to get the curved lines. I don't think this would be complex even if it doesn't work like this... somehow then :)
First the three vertices get projected onto a plane, and the viewport transformation converts the 2D coordinates to pixel coordinates. You then have a minimum and maximum Y value, and the equations of three straight edges. For every scanline between those two Y values, the hardware checks where the scanline first crosses an edge and where it crosses a second edge. Every pixel between those two points is covered by the triangle.
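A minimal sketch of that scanline fill in Python, assuming the three vertices have already been projected to 2D pixel coordinates (real hardware uses incremental edge equations, but the idea is the same):

```python
def rasterize_triangle(v0, v1, v2, set_pixel):
    """Scanline-fill a 2D triangle given by pixel-space (x, y) vertices.
    For each scanline, intersect it with the triangle's straight edges
    and fill every pixel between the two crossings."""
    edges = [(v0, v1), (v1, v2), (v2, v0)]
    y_min = int(min(v[1] for v in (v0, v1, v2)))
    y_max = int(max(v[1] for v in (v0, v1, v2)))
    for y in range(y_min, y_max + 1):
        xs = []
        for (x0, y0), (x1, y1) in edges:
            # Half-open interval test, so a shared vertex isn't counted twice.
            if (y0 <= y < y1) or (y1 <= y < y0):
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) == 2:
            for x in range(int(min(xs)), int(max(xs)) + 1):
                set_pixel(x, y)

# Example: collect the covered pixels of a small triangle.
covered = []
rasterize_triangle((1, 1), (8, 2), (4, 7), lambda x, y: covered.append((x, y)))
print(len(covered), "pixels covered")
```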
 
Xmas said:
First the three vertices get projected onto a plane, and the viewport transformation converts the 2D coordinates to pixel coordinates. You then have a minimum and maximum Y value, and the equations of three straight edges. For every scanline between those two Y values, the hardware checks where the scanline first crosses an edge and where it crosses a second edge. Every pixel between those two points is covered by the triangle.

Oh OK, I thought they were all done individually.
But does the pixel grid in that last thing you are describing have to be the way it is now? If they just laid out those pixels in curved lines based on those fov calculations, they could still do this, right?
Or are there reasons it has to be an ordered grid?

By the way, sorry if I'm bothering you with this, but I don't know anything about hardware, so... I also know very little about how it's actually calculated. But I do understand 3D and lighting and all that; I could calculate a 3D scene by hand :) I just don't know how it's done on computers.
 
Xmas said:
A fish-eye distorted view would not be "more correct" than using a 120° fov when you're sitting 2m away from your 17" screen. The only "right" thing would be to get closer to the screen or use a bigger screen/multimonitor setup.

At 2 meters I can't read text. You'd have to be running at 640x480 to be able to use a 17" monitor at that distance. I'm usually about 18 inches away from my monitor; 24" if I lean back in my chair. Then again, I've never thought that 3D games looked distorted at the edges of the screen, so maybe I'm just wacko.
 
Barnabas said:
It might come from the ancient times when computers (and video game systems) used TVs as monitors. In order to create acceptable output, they had to take the vertical screen resolution into account. The 240, 480, 720, ... resolutions were used for NTSC output; the 256, 512, 768, ... resolutions for PAL.

The thing is, analog outputs don't actually have an exact output resolution, so I don't think that has anything to do with it. If anything, computers have brought digital resolutions to TVs, not the other way around.
 
Basic said:
The vertical resolution of a TV is, and has always been, just as "digital" as on any computer monitor.

The vertical resolution is 480 lines. There are no pixels, nor any set horizontal width. My response was only to the extent that I don't see how a TV's 480-line vertical resolution could have "transferred over" to computer monitors. Obviously, if TVs didn't have a set number of lines vertically, it would be a real mess.
 
Crusher said:
Xmas said:
A fish-eye distorted view would not be "more correct" than using a 120° fov when you're sitting 2m away from your 17" screen. The only "right" thing would be to get closer to the screen or use a bigger screen/multimonitor setup.

At 2 meters I can't read text. You'd have to be running at 640x480 to be able to use a 17" monitor at that distance. I'm usually about 18 inches away from my monitor; 24" if I lean back in my chair. Then again, I've never thought that 3D games looked distorted at the edges of the screen, so maybe I'm just wacko.
2m/17" was just an example where the screen covers just a very small part of your eyes' field of view, so if the screen shows an image with a wide camera fov (say, 120°), it will look strangely distorted.

I have to admit that malcolm is right in saying that a fish-eye lens projection would be the optically "most correct", though still bad, way to make up for such differences in fov angle. However, this is hard to impossible to realize on today's hardware. And who wants to look through a fish-eye lens anyway?
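For what it's worth, the "precalculated grid" idea from earlier in the thread could in principle be done as a per-pixel resampling pass after a normal render. A hypothetical sketch in Python (all names are made up, and this ignores the resolution loss near the edges):

```python
import math

def fisheye_remap_table(width, height, fov_deg):
    """Hypothetical lookup grid: for each pixel of an equidistant
    fish-eye output, find which pixel of an ordinary rectilinear
    render (same fov, fov < 180 degrees) to sample."""
    half_fov = math.radians(fov_deg) / 2.0
    cx, cy = width / 2.0, height / 2.0
    focal = cx / math.tan(half_fov)  # rectilinear: x = focal * tan(angle)
    table = {}
    for y in range(height):
        for x in range(width):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)
            if r == 0.0 or r > cx:
                continue  # center maps to itself; corners fall outside the circle
            theta = (r / cx) * half_fov      # equidistant: angle grows linearly with radius
            r_src = focal * math.tan(theta)  # where that ray lands in the rectilinear image
            sx, sy = cx + dx * r_src / r, cy + dy * r_src / r
            if 0 <= sx < width and 0 <= sy < height:
                table[(x, y)] = (int(sx), int(sy))
    return table

# The table depends only on resolution and fov, so it can be built once
# and reused every frame to warp the rendered image.
table = fisheye_remap_table(320, 240, 120.0)
print(len(table), "pixels mapped")
```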
 
OK, well, I didn't think it would be hard to do :)
I think it would actually look good; you would see that it's a big view angle. Right now a 120° fov just looks strange.
 
I think something that would be more correct for now is calculating a normal ordered pixel grid for a 40-degree view fov instead of the game fov, and then, if the game fov is 90 degrees, just putting those coordinates where the 40-degree fov coordinates are.
The result will be that the important things in the center are bigger, and the whole screen isn't used up by stretched-out things that are less important anyway. And of course this is more correct :)
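If I read this proposal right, it amounts to scaling each ray's angle by the ratio of the two fovs before projecting. A small sketch under that assumption (1D, horizontal axis only; the names are made up):

```python
import math

def compressed_x(theta_deg, view_fov=40.0, game_fov=90.0, half_width=320.0):
    """Screen x for a ray theta_deg off the view axis: squeeze the game
    fov's angles onto the pixel grid of the narrower view fov, so angular
    sizes stay right for a viewer whose screen subtends view_fov."""
    squeezed = math.radians(theta_deg * view_fov / game_fov)
    view_half = math.radians(view_fov) / 2.0
    return half_width * math.tan(squeezed) / math.tan(view_half)

# The edge of the 90-degree game fov (45 degrees off-axis) still lands on
# the screen edge, while objects near the center keep more of their size.
for angle in (0.0, 15.0, 30.0, 45.0):
    print(angle, round(compressed_x(angle), 1))
```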
 