This is a theoretical paper, a companion to the plenary talk at the same conference, ISAIC 2022. In contrast to conscious learning, which develops a single network through a normal life and is the main topic of the plenary talk, this paper addresses the currently widespread approach known as "Deep Learning". Although "Deep Learning" may use different learning modes, including supervised, reinforcement, and adversarial modes, almost all "Deep Learning" projects apparently suffer from the same misconduct, here called "data deletion" and "test on training data". Consequently, Deep Learning has almost never been validly tested at all. Why? Because the so-called "test set" was used in the Post-Selection step of the training stage. This paper establishes a theorem: a simple method called Pure-Guess Nearest Neighbor (PGNN) reaches any required error on the validation set and the test set, including a zero-error requirement, through the "Deep Learning" misconduct, as long as the test set is in the possession of the author and both the amount of storage space and the training time are finite but unbounded. Therefore, Deep Learning methods, like the PGNN method, are apparently not generalizable, since they have never been tested by a valid test set.
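The intuition behind the PGNN theorem can be illustrated with a minimal sketch (this is not the paper's implementation; the data and function names below are hypothetical). If the author possesses the test set, "training" can simply memorize it, and a nearest-neighbor lookup then scores perfectly on that same possessed set, despite learning nothing generalizable:

```python
import numpy as np

def pgnn_fit(X_possessed, y_possessed):
    # "Training" is mere memorization of the possessed samples.
    return np.asarray(X_possessed, dtype=float), np.asarray(y_possessed)

def pgnn_predict(model, X_query):
    # Nearest-neighbor lookup against the memorized samples.
    X_mem, y_mem = model
    X_query = np.asarray(X_query, dtype=float)
    dists = np.linalg.norm(X_query[:, None, :] - X_mem[None, :, :], axis=2)
    return y_mem[np.argmin(dists, axis=1)]

# A toy "test set" assumed to be in the author's possession.
X_test = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y_test = np.array([0, 1, 1])

model = pgnn_fit(X_test, y_test)     # memorize the test set itself
y_hat = pgnn_predict(model, X_test)  # "test" on the possessed data
print((y_hat == y_test).mean())      # → 1.0: zero error, yet nothing was learned
```

The zero error here is an artifact of the protocol, not of the model: any method allowed to consult the test set during training (or during Post-Selection among candidate networks) can achieve it, which is exactly why such a score carries no evidence of generalization.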