> Well, it doesn't help your argument that the original paper is coming out against your claims again ...

In what way does the quote from the paper *"Like prior works on neural materials, we forgo explicit constraints on energy conservation and reciprocity, relying on the MLP learning these from data."* go against my argument *"Unless something is very wrong with the loss functions or the training set, it should not exceed the range of values provided by the original BRDF in a meaningful way, and there are methods to enforce these limits"*? If anything, it states the same.

> I don't know how many times I have to repeat this to you, but this generated approximation from the neural network can't be mathematically integrated!

Can you explain why the MLP can't be mathematically integrated, while the papers you quote say exactly the opposite? There are algorithms for numerical integration, and if the task is just to get a certain response from the network, even those don't seem necessary in most cases.
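As a minimal sketch of that point: plain Monte Carlo integration only needs the ability to *evaluate* the network, not to know its internals. Here `net` is a hypothetical closed-form stand-in for a trained MLP (not from any paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def net(x, y):
    # Hypothetical black-box stand-in for a trained MLP; the estimator
    # below treats it as an opaque function it can only evaluate.
    return np.exp(-(x**2 + y**2)) * (1.0 + 0.5 * np.sin(3.0 * x))

# Monte Carlo estimate of the integral of net over [-1, 1]^2.
n = 500_000
x = rng.uniform(-1.0, 1.0, n)
y = rng.uniform(-1.0, 1.0, n)
area = 4.0  # area of the integration domain
estimate = area * np.mean(net(x, y))
print(estimate)  # close to (erf(1) * sqrt(pi))^2 ~ 2.231
```

The estimator converges as O(1/sqrt(n)) regardless of what sits inside `net`, which is why "can't be mathematically integrated" doesn't follow from the function being a neural network.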

In 2D space, you can fit a function using an MLP. With ReLU activations, the MLP represents a complex piecewise-linear approximation of the original function, which you can integrate exactly.
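To make the piecewise-linear point concrete, here is a sketch with a one-hidden-layer ReLU net in 1D; the weights are made up for illustration, not taken from any trained model. Its only kinks are where hidden pre-activations cross zero, so the trapezoid rule on that knot grid gives the exact integral:

```python
import numpy as np

# Hand-picked weights for a tiny 1-hidden-layer ReLU MLP (hypothetical
# stand-in for a trained network); any such net is piecewise linear in x.
w1 = np.array([1.0, -2.0, 0.5])   # hidden weights
b1 = np.array([0.3, 1.0, -0.2])   # hidden biases
w2 = np.array([0.7, -0.4, 1.2])   # output weights
b2 = 0.1

def mlp(x):
    h = np.maximum(0.0, np.outer(x, w1) + b1)  # ReLU hidden layer
    return h @ w2 + b2

def trapezoid(y, x):
    # Trapezoid rule; exact for a function that is linear between knots.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a, b = -2.0, 2.0
# Kinks occur where a hidden pre-activation crosses zero: x = -b_i / w_i.
kinks = -b1 / w1
knots = np.sort(np.concatenate([[a, b], kinks[(kinks > a) & (kinks < b)]]))

# Between consecutive kinks the net is exactly linear, so integrating on
# the knot grid is exact, with only len(knots) network evaluations.
exact = trapezoid(mlp(knots), knots)

# Sanity check against brute-force dense sampling.
dense_x = np.linspace(a, b, 200_001)
dense = trapezoid(mlp(dense_x), dense_x)
print(exact, dense)
```

The same piecewise-linear structure holds in higher dimensions (the regions become polytopes instead of intervals), so the network is still integrable, just with more bookkeeping.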

If you want to know the response of the MLP, you can plot the original function alongside the MLP's output and see how closely they correspond.

> Also, the normalization workflow they mentioned is unimplemented, and it would be slower, which defeats the main benefit of the original paper, which was faster material evaluation ...

Unimplemented because it wasn't necessary for the correct output, as per your own quote from the neural materials paper. Normalization at inference should cost next to nothing; I guess the paper was just focused on training.
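As a rough sketch of how cheap such a step could be (my own illustration, not the workflow from the paper): precompute a directional-albedo estimate offline, and energy conservation at inference reduces to a single divide. `neural_brdf` is a hypothetical placeholder for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_brdf(wo, wi):
    # Hypothetical closed-form placeholder for the trained MLP's
    # reflectance output; a real network would replace this.
    return 0.9 / np.pi * (1.0 + 0.3 * np.maximum(0.0, wi[..., 2]))

def hemisphere_samples(n):
    # Uniform samples on the upper hemisphere (z >= 0); pdf = 1 / (2*pi).
    u, v = rng.random(n), rng.random(n)
    z = u
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * v
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=-1)

# Offline: Monte Carlo estimate of the directional albedo, i.e. the
# cosine-weighted integral of the BRDF over incoming directions.
wo = np.array([0.0, 0.0, 1.0])  # fixed outgoing direction for this sketch
wi = hemisphere_samples(100_000)
albedo = np.mean(neural_brdf(wo, wi) * wi[:, 2]) * 2.0 * np.pi

# At inference, enforcing energy conservation is one divide: scale the
# output down only when the albedo exceeds 1, so no extra cost otherwise.
def normalized_brdf(wo, wi):
    return neural_brdf(wo, wi) / max(albedo, 1.0)

print(albedo)
```

The integral is paid once per material (or baked into a small table over outgoing directions), so the per-sample inference cost is one multiply-divide, which is negligible next to the MLP evaluation itself.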