Challenges for Deep Learning towards Human-Level AI
Humans seem to be much more efficient than current AI at learning from unlabeled observations and from interaction with their environment, and current machine learning systems do not seem to understand their training data nearly as well as humans do. A core objective of deep learning is to devise learning frameworks that can discover disentangled representations which explain the important variations in the data. Progress in deep generative networks based on an adversarial criterion has been impressive, and we show how these ideas can be used to estimate and optimize entropy and mutual information, and how this could be used towards unsupervised learning of high-level abstractions. This follows the ambitious objective of disentangling the underlying causal factors that explain the observed data. We argue that natural language understanding cannot come from current attempts based purely on text corpora. Instead, a learning agent must acquire information by acting in the world, jointly learning a model of the world and of how language can be used to refer to it. Natural language could then serve as an additional hint about the abstract representations and disentangled factors which humans have discovered to explain their world. Some conscious thoughts also correspond to the kind of small nugget of knowledge (like a fact or a rule) which has been the main building block of classical symbolic AI. This raises the interesting possibility of addressing some of the objectives of classical symbolic AI focused on higher-level cognition using the deep learning machinery, augmented by the architectural elements necessary to implement conscious thinking about disentangled causal factors.
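To make the mutual-information idea concrete, the following is a minimal sketch (not the method of this abstract) of neural estimation of mutual information via the Donsker-Varadhan lower bound, in the spirit of MINE-style estimators: I(X;Z) >= E_joint[T(x,z)] - log E_marginals[exp(T(x,z))], maximized over a parametric critic T. For illustration we assume correlated Gaussian data, where the true mutual information is known analytically, and a deliberately simple one-parameter critic T(x,z) = theta*x*z trained by hand-coded gradient ascent; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: Donsker-Varadhan (DV) lower bound on mutual information,
#   I(X;Z) >= E_joint[T] - log E_marg[exp(T)],
# with a one-parameter critic T(x, z) = theta * x * z (an assumption for
# simplicity; MINE uses a neural network critic).
# Data: bivariate Gaussian with correlation rho, so the ground truth is
#   I(X;Z) = -0.5 * ln(1 - rho^2).

rng = np.random.default_rng(0)
rho = 0.5
n = 100_000

# Samples from the joint p(x, z); shuffling z gives samples from p(x)p(z).
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
z_shuffled = rng.permutation(z)

theta = 0.0
lr = 0.5
for _ in range(300):
    # Gradient of the DV objective L(theta) = theta*E_joint[xz] - log E_marg[e^{theta*xz}]
    exp_t_marg = np.exp(theta * x * z_shuffled)
    grad = np.mean(x * z) - np.mean(x * z_shuffled * exp_t_marg) / np.mean(exp_t_marg)
    theta += lr * grad

mi_lower_bound = (theta * np.mean(x * z)
                  - np.log(np.mean(np.exp(theta * x * z_shuffled))))
true_mi = -0.5 * np.log(1 - rho**2)
print(f"DV lower bound: {mi_lower_bound:.3f}  (true MI: {true_mi:.3f})")
```

Because the critic family is so restricted, the estimate is a strictly loose lower bound on the true mutual information; a richer critic (e.g., a small neural network) tightens it, which is what makes such estimators usable as training signals for representation learning.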