Post 251
I was not aware of the “No Free Lunch Theorem” (Wolpert & Macready, 1990s), but it is a powerful idea. In a nutshell: averaged over all possible problems, every learning algorithm performs equally well.
So if a model performs best on vision, it has to perform worse on something else, and some other model has to perform worse on vision (or on many other tasks).
This seems to be why DL/NNs can dominate in language/vision tasks without dominating everything else.
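To make the averaging claim concrete, here is a minimal toy sketch (my own illustration, not from the Wolpert & Macready paper): on a tiny 3-bit input domain, enumerate every possible labeling of the points outside a fixed training set and average each learner's off-training-set accuracy. A learner that uses the training labels and a learner that ignores them entirely both average exactly 0.5. The names (learner_majority, learner_parity) and the tiny domain are arbitrary choices for illustration.

```python
# Toy illustration of the No Free Lunch idea: averaged over all possible
# target functions, two very different learners score the same off-training-set.
from itertools import product

# Domain: all 3-bit inputs; a fixed training subset with fixed labels.
domain = list(product([0, 1], repeat=3))
train_x = domain[:4]
test_x = domain[4:]
train_y = [0, 1, 1, 0]  # arbitrary training labels


def learner_majority(x):
    """Predict the majority label seen in training (ties -> 0); ignores x."""
    return int(sum(train_y) * 2 > len(train_y))


def learner_parity(x):
    """Predict the parity of the input bits; ignores the training data."""
    return sum(x) % 2


def avg_off_training_accuracy(learner):
    """Average accuracy on the unseen points, over every possible
    labeling of those points (i.e., every target function consistent
    with the training set)."""
    total, count = 0.0, 0
    for test_labels in product([0, 1], repeat=len(test_x)):
        correct = sum(learner(x) == y for x, y in zip(test_x, test_labels))
        total += correct / len(test_x)
        count += 1
    return total / count


print(avg_off_training_accuracy(learner_majority))  # 0.5
print(avg_off_training_accuracy(learner_parity))    # 0.5
```

Each unseen point is labeled 0 in half of the enumerated target functions and 1 in the other half, so any fixed prediction rule averages out to 50% accuracy; preferring one learner over another only makes sense once we restrict attention to a particular family of problems.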