> Why is that? Really curious to know.

Like the material editor, it exposes shader-like programming to content creators/artists who don't always have a good sense of the performance tradeoffs. Since it's effectively code-in-content and defined at a very fine grain (per-pixel for light functions, per-vertex for stuff like world position offset), it's extremely difficult or sometimes impossible to optimize later at an engine level, even when the pixel counts, light counts, and vertex counts increase massively and those are no longer appropriate domains to run such things at. Ex. vertex shaders and per-vertex painting are not a good idea with Nanite-level detail meshes, but people are still used to the power afforded by those engine features.
In most cases the artists just want a way to define certain behavior in content rather than code, but as a side effect they often reach for the most powerful tool even to accomplish simple things. Ex. as in the video, using a per-pixel light function to make a light flicker uniformly is inefficient, and it can never really be optimized at the engine level because the content of a light function could be doing anything.
That said, expert artists who are experienced with the tools can use that power to do amazing things and largely avoid or adapt to the various pitfalls. As just came up again in the DF thread, this is the tradeoff of exposing powerful features: while we programmers (and sometimes end users) can sigh that people make things too heavy or inefficient, it is the same power that allows the experts to do amazing things.