Book review: Rebooting AI
This book could have been a blog post. It was published in 2019, in the midst of the deep learning hype, before transformers were commonplace.
The book's main thesis is that neural networks, while powerful and with sometimes magical-seeming characteristics, often fail when applied to inputs that fall outside the scope of their training set. They fail in unpredictable ways, with no meaningful way to correct errors other than increasing the size of the training set or changing the composition of the network's layers, crossing one's fingers, and hoping it works. This produces models that aren't truly understood and can't be trusted.
The authors argue that the AI community at large has failed to acknowledge this shortcoming of deep learning, with most publications focusing on advances in narrow application contexts or impressive demos, extrapolating that more training and more advanced networks will eventually lead to a near-general-purpose model that functions as a monolithic, black-box unit.
This single-mindedness, the authors argue, leads down a path that can never result in generalized systems that can be trusted without human oversight. They instead suggest that the AI community should focus its resources on building hybrid systems, combining neural networks in clever ways with traditional, symbolic AI.
This point is made, over and over, across ~200 pages, with many examples. It gets repetitive, fast. If you have ever read any of Gary Marcus' blog posts over the last few years, you will know exactly what this book is about.
I find the topic interesting, but I would not recommend anyone buy this book. Save the money and read Marcus' blog instead, if you're interested.