When and how will the gap between human intelligence and the intelligence required to understand how 100% of the human brain works be filled?
When and how will we understand how we think, given a specific mental capacity?

The question suggests a paradox that emerges from a system being able to understand itself. The paradox hinges on what we take as an operational definition of understanding versus a philosophical definition.

For living beings, understanding could be defined as having a working internal model of a concept good enough to predict the deductions other living beings might make. As a collective, we can then agree to build scientific models, whose principles and methods can even be engineered into new technologies. In this sense, we understand what we can create, as long as its development follows a process whose steps we can trace.

For philosophers, understanding might mean something much more demanding. They might say we understand what we can formally prove, starting from basic statements we take as true and their consequences under classical logic. Or understanding might be whatever a collective social process settles on about the things you can say. But ultimately philosophy eats itself, because it is made of the same stuff it tries to prove true.

This preamble is to explain that I don't know philosophy, but a common-sense answer to this question leaves a few actual possibilities:

Within 20 years or so, we may discover some organising principles of human brain architecture and become able to synthesise versions of them. We would then understand the brain in terms of these self-organising principles. The knowledge gap the question asks about is filled.

In the same 20 years or so, we may instead discover that intelligence is not intelligent, but rather a messy mix of special cases that sometimes optimises certain problems, with not much organisation at the core of our brain; we would be something like more intricate insects.
The gap is never filled, because it is impossible to reduce the way we think to simpler elements.

Or, over a longer timeframe, we may discover extremely complex principles underlying intelligence, but principles too advanced for us to understand. They may exist only in very high-dimensional spaces, may be self-modifying, may balance unstable networks of probabilities or self-organised critical processes. In this case, even if we become able to somehow create intelligence, the process is not so different from creating a child today and teaching it. The gap the question refers to is never filled, because the system's mechanics are far too elaborate for the system itself to make sense of.

References welcome; I hope the answer is useful.