Legacy parts in current graphics cards.

epicstruggle

Passenger on Serenity
Veteran
It's been pretty slow here in this forum, so I decided to post a question that has been bugging me for a while. Are there any legacy parts in current/future graphics cards that are still hanging around? Let me explain: on motherboards you still see PS/2, floppy, and other features that companies have tried to get rid of but haven't had much success removing, for a few reasons. What I'm wondering is whether there is such a thing as a legacy part in a graphics card. And if there is, why is it still there, and what effect would getting rid of it have?

thanks for any and all info.
later,
 
Legacy stuff in modern GPUs: the VGA compatibility support (the actual IBM VGA was released back in 1987, already carrying MDA/CGA/EGA legacy stuff with it). With a bunch of I/O-mapped registers that can do rather ... interesting stuff, a framebuffer that generally doesn't just behave like a standard linear memory area (4-plane planar, anyone?), a text mode, and a separate BIOS located at a fixed memory address.

[edit]Oh yeah, look here for VGA programming info ...
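To make the planar weirdness concrete, here is a rough sketch of what plotting a single pixel in the 16-colour 640x480 planar mode (mode 0x12) looks like. This is DOS-era real-mode C, assuming a Borland-style compiler with outportb()/MK_FP() from <dos.h>; it uses the classic set/reset method, but treat it as illustrative only, since nothing like this runs on a modern protected-mode OS.

Code:
#include <dos.h>

#define GC_INDEX 0x3CE   /* graphics controller index port */
#define GC_DATA  0x3CF   /* graphics controller data port  */

void put_pixel_mode12(int x, int y, unsigned char colour)
{
    volatile unsigned char far *vram =
        (volatile unsigned char far *)MK_FP(0xA000, 0);
    unsigned offset = y * 80 + (x >> 3);        /* 640/8 = 80 bytes per scanline */
    unsigned char mask = (unsigned char)(0x80 >> (x & 7)); /* pixel bit in byte  */
    unsigned char latch;

    outportb(GC_INDEX, 0); outportb(GC_DATA, colour); /* set/reset = colour            */
    outportb(GC_INDEX, 1); outportb(GC_DATA, 0x0F);   /* enable set/reset on all planes */
    outportb(GC_INDEX, 8); outportb(GC_DATA, mask);   /* bit mask = just this pixel     */

    latch = vram[offset];   /* dummy read loads the four plane latches              */
    vram[offset] = latch;   /* written value is ignored; set/reset + bit mask decide
                               what actually lands in each plane                    */

    outportb(GC_INDEX, 1); outportb(GC_DATA, 0x00);   /* restore enable set/reset */
    outportb(GC_INDEX, 8); outportb(GC_DATA, 0xFF);   /* restore bit mask         */
}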
 
The capability to support text mode strikes me as a little outdated, and it has been since the 1980s. Even the Amiga was too modern to have an explicit text mode. About the only places it's actually needed are BIOS startup and full-screen command prompt windows.
 
Squigs said:
The capability to support text mode strikes me as a little outdated, and it has been since the 1980s. Even the Amiga was too modern to have an explicit text mode. About the only places it's actually needed are BIOS startup and full-screen command prompt windows.

Which are both pretty important to getting your machine started. Do not mock the powers of a VT52. ;)
 
Personally my vote would be the VGA connector. Even analog monitors should be on DVI by now....
 
VGA mode is definitely legacy. I doubt there are many people at ATI and Nvidia who even know how it works. You tend to forget the details when you can just steal the code from a previous chip.
 
But how many transistors could the VGA part (including CGA, EGA, ...) possibly use? I really don't have a clue, but I'd say much less than a million. If that is the case (whether it's 10,000 or 800,000), does it really matter that it's there? With a current high-end GPU at roughly 100 million transistors, that's less than 1% of the transistor budget (assuming I'm vaguely right), and it will, at least theoretically, allow me to fire up The Colonel's Bequest for a replay - something I've been planning to do for the last five years...
 
Probably a few thousand transistors for the character LUT, a few registers, the memory-mapping logic, and support for fancy things like bitplanes. It's so cheap that it doesn't matter.
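For reference, a rough software model of what the text-mode scan-out (character generator LUT plus attribute decode) works out per pixel. The buffer layout, character/attribute byte pairs as found at B800:0000, is the real VGA one; the function and font table names are made up for illustration.

Code:
#include <stdint.h>

#define COLS   80
#define CHAR_H 16

extern const uint8_t font8x16[256][CHAR_H];   /* the character generator LUT */

/* text_buf points at the 80x25 buffer of (character, attribute) byte pairs.
   Returns the 4-bit colour index for screen pixel (x, y). */
static uint8_t text_mode_pixel(const uint8_t *text_buf, int x, int y)
{
    int col = x / 8;
    int row = y / CHAR_H;
    const uint8_t *cell = &text_buf[(row * COLS + col) * 2];
    uint8_t ch   = cell[0];                        /* character code          */
    uint8_t attr = cell[1];                        /* fg/bg colour attribute  */
    uint8_t bits = font8x16[ch][y % CHAR_H];       /* one scanline of a glyph */
    int on = (bits >> (7 - (x & 7))) & 1;
    return on ? (uint8_t)(attr & 0x0F)             /* foreground colour */
              : (uint8_t)(attr >> 4);              /* background colour */
}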
 
Dio said:
Personally my vote would be the VGA connector. Even analog monitors should be on DVI by now....

Ditto. ASICs should have dual, integrated TMDS transmitters and nothing else.

MuFu
 
I guess I should qualify that by saying we probably need a DVI standard that can handle 4kx3kx100Hz as well :)
 
True. Dual-link DVI tops out at 1920x1080/85 Hz (or 2048x1536/60 Hz, I think). For two monitors running at such a resolution you'd need 4 transmitters. :?

Perhaps it's time for them to up the 165MHz standard clock.

MuFu.
 
MuFu said:
True. Dual-link DVI tops out at 1920x1080/85 Hz (or 2048x1536/60 Hz, I think). For two monitors running at such a resolution you'd need 4 transmitters. :?

Perhaps it's time for them to up the 165MHz standard clock.

MuFu.

Dual-link DVI should be able to transfer 2 full pixels per clock cycle, so for the resolutions you mention, hsync+vsync overhead would take up 45-50% of the total bandwidth - is that correct?
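Following the same reasoning, here is a quick back-of-the-envelope check in C. It assumes the quoted modes really do saturate a dual link running at 2x165 MHz (330 Mpixels/s) and treats everything that isn't active video as sync/blanking overhead; the mode list is just the two figures mentioned above.

Code:
#include <stdio.h>

int main(void)
{
    const double link_pix = 2.0 * 165e6;   /* dual link, two pixels per 165 MHz clock */
    struct { int w, h, hz; } modes[] = { {1920, 1080, 85}, {2048, 1536, 60} };
    int i;

    for (i = 0; i < 2; i++) {
        double active = (double)modes[i].w * modes[i].h * modes[i].hz;
        double overhead = 1.0 - active / link_pix;   /* fraction left for blanking/sync */
        printf("%dx%d@%d: active %.0f Mpix/s, implied overhead %.0f%%\n",
               modes[i].w, modes[i].h, modes[i].hz,
               active / 1e6, overhead * 100.0);
    }
    return 0;
}

That works out to roughly 47% implied overhead for 1920x1080@85 and 43% for 2048x1536@60, so under those assumptions the 45-50% estimate is in the right ballpark.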
 
This would also give impetus to the creation of a high-speed successor to DVI, hopefully with at least six times the bandwidth to drive monitors like T221 and beyond at decent refresh rates. 1+ GHz digital video bandwidth, anyone?
Source: http://www.theinquirer.net/?article=8617

Is there any talk of a replacement for DVI? Or can DVI scale up to meet the challenges of upcoming LCDs (which will have massive resolutions)?

later,
 
What about stuff like bump mapping and other "special-case" texture modes (if any)? Do current GPUs implement that in hardware, or is it done as a pixel shader?

If they still have it in hardware, when can we expect everything old to be handled through pixel shader emulation? Maybe even basic stuff like alpha blending will become pixel shaders to simplify the internals of the GPU?


*G*
 
Grall said:
What about stuff like bump mapping and other "special-case" texture modes (if any)? Do current GPUs implement that in hardware, or is it done as a pixel shader?

If they still have it in hardware, when can we expect everything old to be handled through pixel shader emulation? Maybe even basic stuff like alpha blending will become pixel shaders to simplify the internals of the GPU?


*G*
Was that meant ironically?

Pixel shaders are hardware. The calculations are done by the same ALUs whether they come from a pixel shader or from the "fixed function" path, as long as they don't have different precision requirements (like the GFFX only using the FP units when running PS 2.0 shaders).

I hope alpha blending will finally become part of the pixel shader, i.e. that you have access to the destination color in the shader. S3 DeltaChrome is the only chip that provides that capability. It was planned to become a part of GLslang, but other IHVs said that it costs too much performance so it was taken out of the spec.
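For context, this is roughly what the fixed-function blender at the end of the pipe does today, a plain C model of the standard "over" blend with invented names. Destination-colour access in the shader would mean replacing this one hard-wired equation with arbitrary shader math that reads dst directly.

Code:
typedef struct { float r, g, b, a; } Colour;

/* Classic src-alpha / one-minus-src-alpha blend, per channel. */
static Colour blend_over(Colour src, Colour dst)
{
    Colour out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a         + dst.a * (1.0f - src.a);
    return out;
}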
 
Alpha blending in the pixel shader requires that each pixel be locked so that no two fragments affecting the same pixel are present in the pixel shader pipelines at the same time - this increases pixel shader complexity a bit while reducing performance. Also, pixel shader alpha blending is hard to combine with multisampling AA in a reasonably clean manner.
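A crude illustration of the locking idea: a scoreboard that refuses to issue a fragment while another fragment targeting the same pixel address is still in flight. This is a toy model with invented names, not how any particular chip actually implements it.

Code:
#include <stdbool.h>
#include <stdint.h>

#define MAX_IN_FLIGHT 64

/* One entry per fragment currently inside the shader pipeline. */
typedef struct {
    uint32_t pixel_addr[MAX_IN_FLIGHT];
    bool     valid[MAX_IN_FLIGHT];
} Scoreboard;

static bool conflicts(const Scoreboard *sb, uint32_t addr)
{
    int i;
    for (i = 0; i < MAX_IN_FLIGHT; i++)
        if (sb->valid[i] && sb->pixel_addr[i] == addr)
            return true;        /* same pixel already in flight: must stall */
    return false;
}

/* Returns the slot to clear when the fragment retires, or -1 to retry later. */
static int issue(Scoreboard *sb, uint32_t addr)
{
    int i;
    if (conflicts(sb, addr))
        return -1;
    for (i = 0; i < MAX_IN_FLIGHT; i++) {
        if (!sb->valid[i]) {
            sb->valid[i] = true;
            sb->pixel_addr[i] = addr;
            return i;
        }
    }
    return -1;                  /* pipeline full */
}

static void retire(Scoreboard *sb, int slot)
{
    sb->valid[slot] = false;    /* pixel is free for the next fragment */
}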
 
Xmas said:
Was that meant ironically?

No, of course not. Why would it be?

On GF2-class hardware, bumpmapping was done with dedicated hardware. Just wondering if such hardware is still present in current chips.

*G*
 
Grall said:
Xmas said:
Was that meant ironically?

No, of course not. Why would it be?

On GF2-class hardware, bumpmapping was done with dedicated hardware. Just wondering if such hardware is still present in current chips.

*G*
Your 'signature' can lead to misinterpretations ;)

Are you talking about bump environment mapping or dot3 bump mapping?
Dot3 requires per-component multiplies and adds of the vector components. You could call that component adder 'dedicated hardware', as it is only used for calculating the dot product (although NVidia allows it to be used for other things in OpenGL). But it is usually considered an integral part of the pixel ALUs/register combiners.
BEM is implemented differently across different chips. With ps1.4, BEM is replaced by general dependent texture reads. Of course there is some circuitry whose sole purpose is supporting this feature.
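As a concrete example of how little that "dedicated hardware" for dot3 has to do, here is the arithmetic in plain C: expand the biased-unsigned normal-map texel and light vector to [-1, 1], multiply per component, and sum. The expansion and clamping follow the usual DX6/7-style DOT3 convention; the names are purely illustrative.

Code:
#include <stdint.h>

/* Expand an unsigned byte biased around 128 to the range [-1, 1]. */
static float expand(uint8_t c) { return c * (2.0f / 255.0f) - 1.0f; }

/* N.L for one pixel: three multiplies plus the component adder. */
static float dot3(const uint8_t n[3], const uint8_t l[3])
{
    float d = expand(n[0]) * expand(l[0])
            + expand(n[1]) * expand(l[1])
            + expand(n[2]) * expand(l[2]);
    return d < 0.0f ? 0.0f : d;   /* clamp, then modulate the base texture */
}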
 