So, I might as well ask. The PBR pipeline is generally getting video games closer to CGI - do textures need to be specifically made for PBR? Or is it possible to have procedurally generated textures that interact properly with a PBR pipeline?
A material as you see it is the sum of various interactions of light. For the computer, and for simplicity, we decompose this black-box transformation into a handful of the most important observable features and try to find an approximate formula that is a sufficient look-alike - there is no large-scale light/surface interaction model that is correct for all cases; there are only correct wave-based light-surface interaction models at wavelength scale, and particle-based light-surface interaction models are insufficient as well. It is not physically exact: it is arguably "based" on physics, but not on physical attributes - rather on observations of physical behaviour. Take specularity and specular color, for example. The causes lie in the atomic properties of the surface being hit: light bounces from atom to atom in a possibly mixed material/molecule, light waves enter the electric field of an atom and there is an energy exchange, light being an electromagnetic particle/wave, and so on. But specular maps don't contain lists of which atoms, in which quantities and which arrangements, are hit. Instead the light-model-maker looked at observations of light-surface interactions on a very, very large scale - millions of times larger than atomic scale, and thus totally "blurred" - and parameterized the formulas in such a way that the observations (and not the causal attributes) can be fed in, resulting in a plausible reproduction of observable material responses. (The Fresnel curve, for example, is a plot of a light-surface interaction showing the change in wavelength, the amount of absorption and retro-reflection over the angle - that's a lot of stuff thrown into one value.)
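To illustrate what such an observation-fit formula looks like in practice (my addition, not part of the original post): Schlick's approximation of the Fresnel curve compresses all of that underlying atomic behaviour into a single measured parameter, F0, the reflectance at normal incidence.

```python
def fresnel_schlick(cos_theta: float, f0: float) -> float:
    """Schlick's approximation of the Fresnel curve.
    cos_theta: cosine of the angle between view direction and surface normal.
    f0: observed reflectance at normal incidence (0..1) - a measurement,
    not a physical attribute of the atoms involved."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A typical dielectric reflects ~4% head-on, but at grazing angles
# every material approaches mirror-like reflectance:
print(fresnel_schlick(1.0, 0.04))  # head-on: 0.04
print(fresnel_schlick(0.0, 0.04))  # grazing: 1.0
```

Note that F0 is fed straight from observation: you measure how reflective the material looks head-on, and the formula plausibly reproduces the angular behaviour.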
As such, all computer models are based not on physical attributes but on reproducing observations of the real world. And to be clear, all non-artistic models (Phong, Blinn-Phong, Oren-Nayar, Cook-Torrance, He, Ward, Stam, Ashikhmin, Marschner - we could go on) are physically based. What distinguishes them is the scope of representable materials and the input data. For some, the input data has to be fitted against reference images because it's a fantasy parameterization; for others, you can take measurements and use the data directly without anything else.
There are very few wave-based models - He's, for example. Most of them just invent something more or less sensible: Torrance introduced the probabilistic micro-surface model, which is extremely far from physical reality, though not so far that it can't look believable.
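To make "probabilistic micro-surface" concrete, here is a sketch (mine, not the poster's) of the GGX/Trowbridge-Reitz normal distribution function used in Cook-Torrance-style models. It answers a purely statistical question - what fraction of microscopic facets point in a given direction - with no actual micro-geometry anywhere in sight:

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX/Trowbridge-Reitz distribution of microfacet normals.
    n_dot_h: cosine between the macroscopic normal and the half-vector.
    roughness: perceptual roughness in (0, 1].
    Returns the statistical density of facets aligned with the half-vector;
    no individual facet is ever modeled."""
    a = roughness * roughness
    a2 = a * a
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)
```

Low roughness concentrates the density into a sharp peak around the normal (a tight highlight); high roughness spreads it out - the whole "surface" is just this probability curve.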
Using the term PBR is meaningless - no scientist ever uses it, and it gets in the way of understanding more than it helps. If you think micro-surface models get raytracers near reality, then you are mistaken. Only wavelength-scale BSSRDFs can do that. BTFs could do it from shot to shot, but then BTFs are basically real-world captures over all parameters of the render equation.
Now to answer your question(s):
Utilizing more complex models in games helps trail the even more complex models used in offline rendering, and using much more complex models helps trail reality. When you model atoms and waves, then you do get imagery which, in fact, is real stuff. Let philosophy decide whether atoms living in a computer and the ones outside are equally "real" (is a simulation of reality less real than reality?).
And yes, you need to author specifically for one model or the other. Only rarely can you translate the data from one model to another, because most models are not sub- or super-sets of one another - pretty much every model can represent something another can't, so you can't translate reliably in all cases.
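As a concrete example of such a lossy translation (a hypothetical simplification of mine, not an authoritative recipe), here is the gist of converting a specular/glossiness texture set to a metallic/roughness one. It covers the overlapping cases, but materials only the source model can express - colored specular on a dielectric, say - get forced to the nearest fit:

```python
def spec_gloss_to_metal_rough(diffuse, specular, glossiness,
                              dielectric_f0=0.04):
    """Approximate, lossy conversion between two texture workflows.
    diffuse/specular: (r, g, b) tuples in 0..1; glossiness: 0..1.
    A bright specular is interpreted as metal, a dim one as dielectric;
    in-between or unrepresentable cases are squashed to the nearest fit."""
    spec_intensity = max(specular)
    metallic = max(0.0, min(1.0, (spec_intensity - dielectric_f0)
                            / (1.0 - dielectric_f0)))
    # Metals take their color from the specular map, dielectrics from diffuse.
    base_color = tuple(d * (1.0 - metallic) + s * metallic
                       for d, s in zip(diffuse, specular))
    roughness = 1.0 - glossiness
    return base_color, metallic, roughness
```

The point is the information loss: the metallic/roughness model has no slot for a dielectric's specular color, so that data simply vanishes in the conversion.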
When the model is based on statistics and probability functions, it's relatively easy to invent a procedure to produce meaningful data, so that's very much possible. Other data inputs, especially the ones which require fitting to imagery, don't lend themselves easily to procedural generation.
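To make that concrete, a toy sketch (assumptions mine) of procedurally generating a roughness map: because roughness is just a statistical knob in 0..1, virtually any noise function yields valid input for a micro-surface model - no measurement or fitting required.

```python
import math

def procedural_roughness(x: float, y: float) -> float:
    """A toy hash-noise pattern mapped into the 0..1 range a
    micro-surface model expects. Any such function is 'valid' input,
    because roughness is a statistical parameter, not a measured
    physical attribute."""
    v = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return v - math.floor(v)  # fractional part, always in [0, 1)

# Fill a tiny texture tile:
tile = [[procedural_roughness(u / 8.0, v / 8.0) for u in range(8)]
        for v in range(8)]
```

Contrast this with a measured-BRDF or image-fitted input, where there is no such cheap procedure to invent plausible data.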
TL;DR: Yes, yes.