Artificial neural networks (ANNs) are currently a field of intensive research. They are a proven tool for pattern, audio, and text recognition and are being deployed in medicine, autonomous vehicles, and drones. Still, very few works discuss how to build artificial intelligence (AI) that solves these problems reliably: there is no guarantee that AI will operate properly in a real-life, rather than simulated, situation.
In this work, an attempt is made to demonstrate the unreliability of modern artificial neural networks. The construction of interpolation polynomials is shown to be a prototype of the problems that arise in ANN training. Classical examples by C.D.T. Runge and S.N. Bernstein, together with the general theorem of Faber, show that for any predetermined scheme of interpolation nodes there exist a continuous function and a point of the interpolation interval at which the interpolation polynomials fail to converge to the value of the function as the number of nodes increases indefinitely. This means that efficient AI operation cannot be ensured merely by an unlimited increase in the number of neurons and in the volume of data (Big Data) used as training datasets.
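The divergence described above can be observed numerically. The following sketch (not from the original work; an illustrative assumption) interpolates Runge's classical function f(x) = 1/(1 + 25x^2) on [-1, 1] at equally spaced nodes using the Lagrange form, and measures the maximum error on a fine grid. The error grows as the number of nodes increases, instead of shrinking:

```python
def runge(x):
    # Runge's classical example: f(x) = 1 / (1 + 25 x^2) on [-1, 1]
    return 1.0 / (1.0 + 25.0 * x * x)


def lagrange_eval(nodes, values, x):
    # Evaluate the Lagrange interpolation polynomial through
    # (nodes[i], values[i]) at the point x.
    total = 0.0
    for i, xi in enumerate(nodes):
        term = values[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total


def max_interp_error(n):
    # Degree-n interpolation at n+1 equally spaced nodes on [-1, 1];
    # return the maximum deviation from f on a fine evaluation grid.
    nodes = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    values = [runge(x) for x in nodes]
    grid = [-1.0 + 2.0 * k / 2000 for k in range(2001)]
    return max(abs(lagrange_eval(nodes, values, x) - runge(x)) for x in grid)


if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"n = {n:2d}, max error = {max_interp_error(n):.3f}")
```

Doubling the number of equally spaced nodes makes the worst-case error larger, which is exactly the behavior the Runge example and Faber's theorem predict: more data at badly chosen nodes does not imply convergence.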