This article is a bit of a thought experiment. Sometimes it can be fun to take a group of assumptions which everyone “knows” are correct, invert them, and see what happens…
Every operating system and programming book I have ever read about managing resources emphasizes the same “laws” of computing. These rules are memorized by every programmer, repeated by rote at every programming interview, and expected to be common knowledge by anyone calling themselves an engineer…to the point where questioning these paradigms amounts to a sort of heresy.
Here are a couple of the big ones:
First, you are never supposed to have multiple threads modifying the same shared resource at the same time without synchronization. The reasoning is that if one thread of execution changes something at the same moment another thread is reading or writing it, the result is instability, incorrect computation and…well…anarchy.
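To make the danger concrete, here is a minimal sketch in C with POSIX threads (the language choice, the counter, and the iteration counts are my own illustrative assumptions, not something from the original “laws”): two threads increment a shared counter with no synchronization, and lost updates typically leave the final total well short of the expected two million.

```c
#include <pthread.h>
#include <stdio.h>

/* Shared counter deliberately accessed without any synchronization. */
static long counter = 0;

static void *increment(void *arg)
{
    (void)arg;
    for (long i = 0; i < 1000000; i++) {
        counter++;            /* read-modify-write: not atomic */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* We "expect" 2000000, but interleaved updates usually produce less. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Run it a few times (compiled with `-pthread`) and the answer changes from run to run, which is exactly the “anarchy” the rule is meant to prevent.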
Second, there is always one and only one central, god-like kernel which everything on the computer must obey. Every thread of execution descends from it and is controlled by it, and whatever freedom a thread may have can be cut short at any time as the kernel asserts control.
Both of these rules make the same assumption: a single processor (or a facade giving the appearance of a single processor) and sequential, atomic execution.
A good example of this is the famous “dining philosophers problem,” in which philosophers seated around a table must share mutual resources (forks) without blocking one another or corrupting data while it is being used by a neighbor. We are all taught that allowing two philosophers to use the same fork at the same time (jointly using that resource concurrently) is something to be completely avoided…but what if that is a false oversimplification which is preventing us from achieving a much higher level of computing?
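For reference, here is a minimal sketch of the textbook formulation in the same C-with-pthreads style (the five-philosopher count, the mutex-per-fork modeling, and the lock-ordering trick are my assumptions for illustration, not part of the original argument): each fork is a mutex, each philosopher needs both neighboring forks to eat, and grabbing the lower-numbered fork first is one standard way to keep the table from deadlocking.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define N 5   /* the textbook version seats five philosophers */

static pthread_mutex_t fork_lock[N];

static void *philosopher(void *arg)
{
    long id = (long)arg;
    int first  = id;            /* fork on one side  */
    int second = (id + 1) % N;  /* fork on the other */

    /* Always lock the lower-numbered fork first, so two neighbors can
     * never each hold one fork while waiting forever for the other. */
    if (second < first) { int t = first; first = second; second = t; }

    for (int meal = 0; meal < 3; meal++) {
        pthread_mutex_lock(&fork_lock[first]);
        pthread_mutex_lock(&fork_lock[second]);
        printf("philosopher %ld is eating\n", id);
        usleep(1000);
        pthread_mutex_unlock(&fork_lock[second]);
        pthread_mutex_unlock(&fork_lock[first]);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[N];

    for (int i = 0; i < N; i++)
        pthread_mutex_init(&fork_lock[i], NULL);
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

Notice how much machinery exists purely to guarantee that no fork is ever touched by two philosophers at once; that guarantee is the orthodoxy this article is poking at.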
Imagine a computer with 1,000 kernels (which can come into or blink out of existence at any time), all using the same resources, all modifying one another’s instructions and data in real time (perhaps even modifying many other processors’ instruction sets and many resources at once?). Imagine that there are numerous centers of control at any given time, perhaps out of sync with one another, that are modifying and using the same instructions at the same time and have no direct control over one another…in short, a giant tug of war for power happening in multiple dimensions, all in real time.
If you have ever had to wrestle internally with feeling sleepy, sad, or angry…or with whether to go to the movies or go to the gym…maybe that process more closely resembles a cacophony of independent control centers arguing with one another. If you look at society itself, cooperation is as common as competition between groups. Would it be such a leap to imagine that the brain is constantly in conflict with itself as one center of control wrestles with other centers of control? If so, why do we train programmers the way we do now?
This way of thinking is complete heresy to the current way programmers are taught to think about computation. Computers are not meant to argue with one another. Threads of execution are not meant to fight one another for control. Resources shared between threads are meant to be pristine and accessed atomically, not pushed and pulled and haggled over by competing centers of control.
Can you imagine the chaos of such a system? It would be simply impossible for any human programmer to conceive of, design, or program such an organism. Yet at the same time…why can’t it work this way? Why would a computer attempting to replicate a human-like AI need to follow these sequential restrictions we have placed on ourselves?
What a world.