Have been trying to wrap my head around this. When you train a model to produce a fixed output should that really be considered “intelligence”? Is it not just “compression”?
The neural network in these cases is just another compression format.
It would be unfair to call NN models just another compression format.
A NN takes input/output pairs and learns the rules for how inputs map to outputs, over a whole dataset and many passes. It's guessing how to get from A to Z. When you're done, you have a model that lets you go from A' to H', on inputs it was never trained on.
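As a toy sketch (my own illustration, not anyone's real training code): fit a single parameter to example pairs over many passes, then apply the learned rule to an input that was never in the training set.

```python
# "Learn" the rule mapping inputs to outputs from example pairs,
# then apply it to an input the model never saw during training.
train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # pairs sampled from y = 2x

w = 0.0  # single learned parameter: the "rules" the model is guessing at
for _ in range(1000):                # many passes over the dataset
    for x, y in train:
        pred = w * x
        w -= 0.01 * (pred - y) * x   # nudge w toward less error

print(round(w * 5.0, 2))  # unseen input 5.0 -> prediction close to 10.0
```

The point is that nothing stored the pair (5.0, 10.0); the model produces it from the rule it inferred.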
With compression, you take an input and apply a function f(x) that removes as much data as possible to produce the output.
Then you take that output and apply the inverse f'(x), hoping to get the input back.
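The f(x) / f'(x) round trip above can be sketched with Python's standard `zlib` module (my choice of example, not something from the thread):

```python
import zlib

data = b"the same pattern repeats, the same pattern repeats, the same pattern repeats"

small = zlib.compress(data)      # f(x): squeeze out the redundancy
back = zlib.decompress(small)    # f'(x): rebuild the input from the output

print(len(small) < len(data))    # the compressed form is shorter
print(back == data)              # lossless: we get the source back, bit for bit
```

This is the lossless case: f'(x) really does recover the source exactly.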
The biggest difference is that with compression you're trying to get back to the source.
With a model, you're trying to get the right results for inputs it hasn't seen before.
The nuance is that compression and decompression have their limits.
With a NN, if you drew a landscape with green strokes resembling hills and valleys and blue paint in the shape of a lake, the NN can actually generate an entire picture: fully detailed hills and valleys and a very beautiful lake!
If you compressed that image all the way down to just 2 colours, there's just no way for the inverse function to recreate the original; all that data is entirely lost.
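The 2-colour case can be sketched too (a made-up one-row "image", purely for illustration): quantize pixel values down to two levels and the best possible inverse still can't rebuild the original.

```python
# Crush an 8-bit "image" row down to 2 colours, then try to invert.
row = [12, 57, 130, 201, 220, 88, 30, 245]        # original pixel values

two_colour = [0 if p < 128 else 255 for p in row]  # lossy "compression"
recovered = two_colour                             # best any inverse can do

print(two_colour)
print(recovered == row)   # False: the fine detail is gone for good
```

Many different originals quantize to the same 2-colour output, so no inverse function can tell which one you started from.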
From this we see that the NN is generating what it believes we want to see, based on how we trained it. The compression algorithm, by contrast, is limited in how much data it can remove if it wants any real chance of getting back to the source.