Why Gödel’s Incompleteness Theorem Applies to Humans
Two of my favorite hobbies used to be playing with a Rubik’s cube and playing Tetris. I was really good at both, and I still play sometimes, but substantially less than I used to. One of the main reasons I decided to stop playing these wonderful games is that they tend to be very addictive, and what happened to me was that I was thinking about both games all day long. Seriously, I would wake up and there I was, imagining solutions for the Rubik’s cube, playing it in my mind. I was eating lunch… Boom! Mr. Tetris appeared on my plate. I took a break from my homework and imagined that little multicolored bastard cube. I went to sleep… Hello again, Mr. Tetris! I’m not trying to convince you to avoid these games; they are actually a lot of fun. The point of this story is that sometimes we can get into a “loop”. Being in a “loop” means that you can’t get out of the system; you are caught in an intellectual circle. It’s like when you write bad code and your computer can’t stop processing until it runs out of some resource, e.g. memory or energy. Humans can be caught in that cycle too, only it happens at a different level, one that is more abstract and that we are still trying to figure out.
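The runaway-program analogy can be sketched in a few lines. This is a hypothetical toy (the function name and the cap are mine, added only so the loop actually halts here); real runaway code would simply keep going until the machine runs out of memory:

```python
# A toy "loop you can't get out of": the exit condition of the real
# problem is never met, so the process keeps eating a resource
# (here, a growing list). The memory_limit cap is artificial,
# added only so this sketch terminates.
def stuck_in_a_loop(memory_limit=1000):
    thoughts = []
    while True:
        thoughts.append("Mr. Tetris")       # the thought that keeps coming back
        if len(thoughts) >= memory_limit:   # a real runaway loop has no such cap
            return len(thoughts)
```

Without the cap, the `while True` never terminates on its own; the program stops only when the operating system kills it for exhausting memory, which is the mechanical version of being stuck in the loop.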
This restriction of not being able to get out of our system, whatever that means (body, soul, wholeness with the world… I really don’t know), is a limitation that keeps us from knowing things beyond our reach. We can’t avoid the fact that we humans have limitations. Of course, we have built computers and other machines that let us see things beyond our senses (X-rays, gamma rays, other galaxies, and so on), but that doesn’t mean we have overcome the limitations that make us human. The machines, computers, and everything else we are able to build are in some sense biased by human knowledge. For example, if we want to build a machine that shows us how dogs see, that machine would be constructed by humans, for humans, and with humans’ understanding of how dogs see. So, how can we build something that removes those limitations? Can we build one at all?
This is the great dilemma with artificial intelligence. Some say we are going to be able to build something (usually a computer) that simulates the human brain: a computer as intelligent as any human. Among those in favor of this idea is Douglas Hofstadter, author of Gödel, Escher, Bach. He says that the human mind is no different from (though surely more complex than) a formal system. He means that we are a self-referential formal system, that we can’t get out of our system, and thus that someday we will be able to build something as complex as our mind in the form of a computer. Now, if we can discover how our mind works and reproduce it in some other form, then there is no epiphenomenon or metaphysical quality left unexplained, because we know how the mind works and can reproduce it. This leads us to believe that we can see our mind as a self-referential formal system, but it also implies that, like any such system, we would be either incomplete or inconsistent. Gödel wins. On the other hand, some believe this will not be possible; not now, not in the future. On this side we have John R. Lucas. He says, “Minds cannot be explained as machines; machines are inherently inferior.” According to Lucas, there will never be a machine capable of getting out of the “system” as humans do.
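Self-reference of the kind Hofstadter has in mind shows up in miniature in programming as a quine: a program whose output is exactly its own source code. A minimal Python sketch (the variable name `s` is arbitrary):

```python
# A quine: the program's output is its own source, a tiny concrete
# instance of a system describing itself from the inside.
# %r inserts the repr of s; %% escapes a literal percent sign.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick mirrors Gödel’s construction: the string `s` plays both roles at once, the code and the data that the code talks about.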
For curiosity and for the sake of the argument, let’s go with Hofstadter on this one; we might return to Lucas later. Let’s say we are able to build a machine as complex as a human, and let’s name it Gödy. Does this change our limitations and how we see the world? Maybe. I’m pretty sure it would expand our capacity to understand the universe. Is it going to change reality or objective truth? Definitely not. Again, Gödy would be human-made, and we are able to change the environment in which we live, but not the things-as-they-are. Can we, as humans, know the objective truth of the universe? I’m convinced that we can’t, because we are limited by our senses and by everything that is human-made, as explained before. We may be able to expand our cone of vision of time and space, but it would still be limited; no matter how much we expand it, we would never have a full spectrum of the universe or of multiple universes. So, in order to know whether we have reached an objective truth, we would need something more complex, of a higher order, than ourselves.
Let’s add one more factor: evolution. With Gödy as intelligent as ourselves and able to evolve by some cyber-reproducing process of I-don’t-know-what, there is the possibility that Gödy becomes superior to humans. They would be more intelligent and have a bigger cone of vision of the universe than ours. They might even get to the point of controlling us, and that would really suck. We might even become their pets. But let’s not digress; that’s another topic. Let’s say the evolved Gödy and his cyber friends are friendly with us and we are still able to control them. Now, are Gödy and friends going to help us reach that truth? They would be able to tell us a higher level of truth than ours, so we could check whether our knowledge matches. But does that imply we would be able to reach an objective truth? Unfortunately, no.
Gödy would suffer the same problem we have, since it is not God, but only Gödy. This could go on ad infinitum.