Saturday, June 24, 2006

On Intelligence

I've been reading this book called "On Intelligence" by Jeff Hawkins. He was the founder of Palm and has been a software & hardware designer for many years, but has also long been interested in human and machine intelligence. He took a number of classes in bioscience and was most interested in how the brain actually works. The mistake that Artificial Intelligence (AI) experts made, he argues, is that they paid no attention to the inner workings of the brain, never developed a reasonable theory of what intelligence actually is, and instead focused on emulating human behavior.

Hawkins believes that the brain doesn't work like a computer. Computers are designed for speed and raw computational power, and the AI'ers believe that if we build ever faster and more powerful machines, we will eventually create a truly intelligent computer. Hawkins believes that's the wrong approach. He encourages the reader to look at how the brain works: first it stores memories, then it retrieves them, and finally it makes predictions based upon those memories. This is the crux of intelligence. Pattern recognition doesn't derive from computing pathways, but from reconciling what our senses are capturing with these retrieved memories.

The core component of the human brain is the neocortex. That's where most of the action is! It has six layers, numbered 1 through 6. Hawkins demonstrates how different regions of the cortex are hierarchically related. He further illustrates how information is hierarchically stored to mirror the hierarchy of reality! The examples he gives are great, from musical compositions to roadways. What it boils down to is that objects have subobjects which themselves have subcomponents, or a process has subprocesses, and so forth. Data merges into higher levels of the hierarchy, percolating all the way to the top. With the example of music, he says "Notes are combined to form intervals. Intervals are combined to form melodic phrases. Phrases are combined to form melodies or songs. Songs are combined into albums."

The organization of information and how it's mentally stored reflects the organization of reality. We don't hear a song in one instant, nor capture all the complexity of an event in one screen shot. Internally in the brain, a song is stored hierarchically to capture the nested structure of reality; the highest part is a pointer to the entire song, the next highest part might store the memories of the phrases, then a lower part will store the intervals, and the lowest man on the totem pole remembers the notes. With an image, complete snapshots are not stored in one place in memory. Instead, line segments are stored in lower hierarchical parts of the visual cortex, then shapes composed of these line segments are stored in a higher region, shapes blend into recognizable objects in yet a higher region, and finally what Hawkins calls "large scale relationships" reflecting the entire picture occur on top. Based upon what I've been reading, I interpret the storage of memories as an artistic process, not as a storage of data in a computer.
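Being a software guy, I couldn't resist sketching his nested storage idea as a toy data structure. To be clear, all the names and the structure here are my own illustration, not anything from the book: each "level" stores only its own granularity and points down to its subparts, and the full sequence only exists when you walk the hierarchy top to bottom.

```python
# A toy sketch of hierarchical storage: the top level is a "pointer" to
# the whole song, lower levels hold phrases, intervals, and notes.
# Illustrative only -- not Hawkins's actual cortical model.

song = {
    "name": "Ode to Joy (opening)",
    "phrases": [
        {
            "intervals": [0, 1, 2],      # semitone steps between notes
            "notes": ["E", "E", "F", "G"],
        },
        {
            "intervals": [-2, -1, -2],
            "notes": ["G", "F", "E", "D"],
        },
    ],
}

def flatten_notes(song):
    """Walk the hierarchy top-down, reassembling the full note sequence."""
    return [n for phrase in song["phrases"] for n in phrase["notes"]]

print(flatten_notes(song))  # the lowest level, recovered via the top "pointer"
```

The point of the sketch is that no single level holds the whole song; "remembering" it means traversing the hierarchy.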

And how do we recognize patterns in reality? Well, memories reflect the relationships of the compositional components. We remember the sequence of notes in a song. So when we hear a few notes, even if they are in a different key, we remember the relationship of the notes. It's the same with a picture of a face. The face might have a different color, or perhaps part of the face is hidden, but the brain remembers the relationships of the facial components and can make auto-associations based upon what it can view (i.e. it doesn't have to see the entire picture to recognize its contents).
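The "different key" point is easy to demonstrate in code. Here's a little Python sketch (my own, not from the book) showing that if you store a melody as the *relationships* between notes rather than the notes themselves, a transposed rendition still matches:

```python
# Store a melody as intervals (relative motion), not absolute pitches.
# A transposed version then matches the stored memory exactly.
# Illustrative sketch only.

def intervals(pitches):
    """Semitone steps between successive pitches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

# Learned once: the opening of a tune, as MIDI note numbers.
learned = intervals([64, 64, 65, 67])   # E4 E4 F4 G4

# Heard later, up a fifth: different notes, same relationships.
heard = intervals([71, 71, 72, 74])     # B4 B4 C5 D5

print(learned == heard)  # True: the relationship is what was remembered
```

The absolute inputs differ entirely, yet the stored relational form is identical, which is the essence of recognizing a tune in any key.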

One of the most powerful points in the book is his use of Vernon Mountcastle's idea of a unified cortex algorithm. Mountcastle argued that different parts of the cortex (visual, aural, etc.) essentially have the same operating principles. It's the same signal processing and pattern recognition in all cortical regions. We process what we see, hear, and feel (also smell?!) using the same algorithm. What distinguishes each region are the connections to other parts of the brain and body, relationships to motor activities and responses, and so forth.

I've read about 2/3rds of the book and look forward to the end, which I'm sure will include some discussion about how real machine intelligence could occur with a proper implementation of the "memory/prediction" cortical model. Based upon what I've read, I would highly recommend this book.

Anyone read this or books touching on similar subject matter? Thoughts?

6 Comments:

At Sat Jun 24, 11:59:00 PM PDT, Anonymous Anonymous said...

Very interesting book. This approach seems to have yielded some good results in deepening our understanding of intelligence. I enjoyed reading this article; the book seems a good one. It certainly seems to have a better chance of assisting the making of smarter machines than did much of the AI stuff I learned in college in the 80's.

Still, I can't help but wonder if a lot of this brain science is missing something. We'll be able to create smarter machines, and that's a good thing, but where's the soul in all this?

The brain is a powerful organ, but it is only part of the body's intelligence. It's a big system that works together, and there are even clusters of brain cells in the heart. And none of this makes any sense out of context; for example, try removing a human from reality. I've heard prolonged sensory deprivation tends to lead to insanity and mental damage.

All of the sensors and processors of the human body apparatus might perhaps be better understood as a large antenna that tunes in the human soul, like a radio, but interactive. This is what happens for lower order creatures as well, but the human neocortex is able to map to higher abstractions. Which is why humans are able to know God (as much as God is knowable at all). God is the ultimate high order abstraction.

Until science is willing to grapple with these issues, I'm afraid they'll keep building more frustrating machines along with the useful ones. And we'll get more "Bob" type computer programs, and maybe even elevators like the ones from the Syrian Cybernetic Corporation that liked to go down, and FAST (from Hitchhikers Guide to the Galaxy).

 
At Sun Jun 25, 02:50:00 AM PDT, Blogger David Epstein said...

These are thoughtful points you're making, Harold. The holism of human existence is something that can't possibly be replicated. The brain must interact with the body, and if they exist as distinct entities, with the soul, spirit, and mind as well. And furthermore, the brain depends upon these "mind-body" relationships for its survival. Let's see if it would be possible to transplant a real brain into a machine that has an artificial heart, respiration, and life support system. I think that machine would be recalled!

The pursuit of real machine intelligence must be focused upon sufficient replication of the inner workings of the brain. This is not to say there are not more operationally effective models than the brain waiting to be discovered in the future. The human brain is not an optimal, or even a near-optimal system. But it's a remarkable entity that is the source of our intelligence. An evolutionary development towards non-human supra intelligence must first pass through a successful stage of modeling the brain, and Hawkins has grasped the key concepts of memory retrieval and prediction as the instruments for that paradigm.

Two other concepts he discusses are feedback and invariance. Not only do these serve to lend more credibility to the theory, but they also add more than a touch of humanism to the discussion.

Hawkins points out that the brain generates roughly ten times as much feedback as it receives in sensory input. The brain regularly receives sensory input, and then "processes" those inputs by retrieving memories and dispatching instructional sets or predictions as feedback throughout the body.

The second concept, invariance, is related to the input/feedback model. Regardless of changes in the input stream, the brain is able to recognize the patterns of that stream and respond accordingly. He illustrates this several ways, most notably through visual projections upon the retina and through "facial changes".

Each moment light hits the retina, there is a different stream of optical signals generated, even as the eye views the same subject. The same goes for a face that changes expressions or positions. The cortex receives different visual patterns for every distinct case, yet fully recognizes it to be the same face. How does it do this? The brain stores invariant representations of the face and other objects it learns to recognize. The face can change positions, but each change is compared (I used the term "reconciled" earlier) with this invariant form. This is where AI and even the artificial auto-associative models he partially praises have failed. They can't handle images that are moved, rescaled, rotated, or transformed, because they don't recall invariant representations stored in memory.
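To make the invariance idea concrete, here's a toy Python sketch of my own (not the cortical mechanism, just an analogy): store a "face" as the ratios of distances between a few landmarks, so that moving or rescaling the face leaves the stored form untouched.

```python
# An invariant representation, sketched crudely: pairwise landmark
# distances normalized by the largest one, so position and scale drop
# out and only the relationships remain. Illustrative only.

import math

def signature(landmarks):
    """Normalized pairwise distances between landmark points."""
    pts = list(landmarks)
    dists = [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]
    longest = max(dists)
    return [round(d / longest, 6) for d in dists]

stored = signature([(0, 0), (4, 0), (2, 3)])        # eyes + mouth, learned once
seen   = signature([(10, 10), (18, 10), (14, 16)])  # same face, moved and 2x larger

print(stored == seen)  # True: the invariant form matches
```

The raw pixel positions differ completely between the two views, yet the stored relational signature is identical, which is roughly what "comparing against an invariant representation" buys you.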

Perhaps this model of invariance is the holy grail we seek. Where does the invariance come from, and what does it represent? Hawkins refers to Plato's Theory of Forms for the answer (though he dismisses his metaphysics). The invariant forms of objects can be used to formulate an "objectivism" that resides not "out there", but within ourselves, a cerebral objectivism if you will. "Subjectivism" would be the creative mind/body response to these invariant representations, a reply to the objectivism it is faced with.

I wonder if the brain could somehow survive in isolation out of the human body, would it use its powers of feedback/predictions and invariance to infer the existence of the body and soul, or perhaps will them back into existence! I know, getting a little giddy here!

I'm also intrigued about the relationship between the brain and mind. I don't accept Hawkins's description of the mind as a product of the brain. I see it more as a symbiotic relationship between the two.

By the way, what is the "Syrian Cybernetic Corporation"?

 
At Tue Jul 04, 11:38:00 AM PDT, Anonymous Anonymous said...

I haven't read this book, but your discussion at least doesn't seem to account for the prominence and peculiarities of memory and processing as handled by neural networks. Notably, that hierarchy you describe is very much informed by modern programming concepts, but the hierarchies of the brain are much blurrier. Our memory doesn't take or create "pointers", it receives processed sensory information and "echoes back" anything that "matches". Echoes which are strong enough can re-enter the "main loop" of consciousness to become "found memories". Sub-threshold echoes don't enter consciousness, but can still bias later responses.

E.g., if you started with an image of a flat block-shape, white along both visible edges, you might not immediately think "book" (perhaps you don't see title, page-edges, covers, etc.) -- but if you're discussing, say, "The Lord of the Rings", you'd probably be more likely to remember details of the books than the movie, given that prior prompt. The catch is that doing it this way is intrinsically less precise and less reliable than an "algorithmic" method, and you tend to get a lot of "false hits" from accidental correlations. If we use this method for "computers", those will then show some of the same faults as our own brain does, losing some of the virtues that prompt us to build machines in the first place (such as precision and reliability).
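The echo-and-priming behavior I'm describing can be sketched in a few lines of Python. Everything here (the features, the threshold, the halving) is invented for illustration; the point is just the mechanism: weak matches don't surface, but they bias the next retrieval.

```python
# Toy "echo" model: cues score against stored memories; strong echoes
# surface into recall, weak ones stay sub-threshold but prime later
# responses. All names and numbers are illustrative.

memories = {
    "book":  {"flat", "block", "white-edges", "pages", "title"},
    "movie": {"screen", "dark", "soundtrack"},
}

THRESHOLD = 3
priming = {name: 0.0 for name in memories}  # residue of sub-threshold echoes

def perceive(cues):
    recalled = []
    for name, features in memories.items():
        echo = len(features & cues) + priming[name]
        if echo >= THRESHOLD:
            recalled.append(name)       # strong echo: enters the "main loop"
        else:
            priming[name] += echo / 2   # weak echo: only biases what comes next
    return recalled

print(perceive({"flat", "block"}))         # [] -- too weak, but primes "book"
print(perceive({"white-edges", "pages"}))  # ['book'] -- the prior prompt tipped it
```

Note that the second set of cues alone would also have scored below threshold; it's the leftover priming from the first, "failed" perception that pushes "book" over the line, just like the "Lord of the Rings" prompt in the example.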

There's also the point that the details of how our neurons work give them a different set of "atomic operations" than we build into our computers. For example, first and second derivatives across space and/or time are more-or-less atomic operations to our neural structures, while a digital computer has to calculate them (with some difficulty). This is particularly obvious in our visual and auditory systems where, say, retinal processing detects edges and contrasts before the signal even reaches the optic nerve! The flip side of this, of course, is that to nerves, "everything" is (at least potentially) an analog signal, so discrete logic and computation need to be synthesized at a much higher organizational level. Even then, they're easily disrupted by lower-level interference.
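To see what I mean about derivatives, here's a trivial sketch (mine, purely for illustration): the spatial derivative that a computer has to calculate explicitly, neighbor by neighbor, is roughly what retinal circuitry delivers "for free".

```python
# Edge detection as an explicit first difference across space: a
# digital computer must compute this step by step, whereas retinal
# processing delivers it as a near-atomic operation. Toy example.

def first_derivative(signal):
    """Discrete first derivative: change between neighboring samples."""
    return [b - a for a, b in zip(signal, signal[1:])]

# A 1-D strip of brightness values with one sharp edge in the middle.
strip = [5, 5, 5, 5, 90, 90, 90]

edges = first_derivative(strip)
print(edges)  # [0, 0, 0, 85, 0, 0] -- the spike marks the edge
```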

And yeah, the brain isn't the whole story. Our spinal columns alone do an amazing amount of filtering and processing, plus we have the various sensory networks, secondary loops like the enteric system ("gut brain"), and interaction with other datastreams such as hormones and intercellular signals. Lately, it seems that even the glial cells are processing information!

The biggest barrier to understanding all this is simply that it was created by a complex developmental process, which itself was accumulated over many millions of years, and neither genetics, development, nor neurology were ever "designed", much less designed to be understood!

 
At Fri Jul 07, 02:54:00 AM PDT, Blogger David Epstein said...

Hi David,

Good hearing from you and thanks for your contribution. I enjoyed your insights into mental processes, especially when you describe how memories "echo back" descriptions that are "matched" from sensory inputs, and then enter the "main loop of consciousness". That "echoing back" is closer to the author's use of feedback than my clumsy use of "pointers". My point was that information about an image or song is distributed in the cortex, with the highest cortical level storing composite information. This includes relations between information and not just primary information itself. The retrieval of an image or song isn't a single activation of a memory stored in one place, but a reassembly of related information stockpiled throughout the brain and its synaptic transmissions.

Regarding neural networks (NNs), Hawkins was a proponent of them and felt they were a big improvement over AI. Their main similarity to cerebral functionality is a distributive networking of information. But he cites 3 areas where NNs fail to embody intelligence and emulate the inner workings of the brain (from Chapter 2):

1) "The inclusion of time in brain function". Only dynamic info need apply, and NNs only process static info. They don't record any history of past events.

2) "The importance of feedback." There is no real feedback in NNs. He points out that they redirect output errors to the input (back propagation), but this only occurs during the learning phase. "When the NN works normally, after being trained, the information flows only one way." In the brain, there is, on average, 10 times as much feedback as input, and most of it occurs in non-learning situations! This feedback takes the form of named sequences of patterns that lead to predictions about what is occurring or will occur in the cortical "view screen".

3) "Any theory or model of the brain should account for the physical architecture of the brain." NNs look nothing like the brain.

So in these and other regards, NNs don't exhibit intelligence or store anything close to what we call memories.
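For contrast with the one-way flow he criticizes, here's a toy memory-prediction loop of my own devising (nothing from the book): it stores sequences over time (his point 1) and offers a prediction back before the next input arrives (his point 2).

```python
# A minimal memory-prediction sketch: learn temporal sequences, then
# feed predictions "back down" ahead of the next input, instead of the
# trained one-way flow of a classic NN. Illustrative only.

from collections import defaultdict

transitions = defaultdict(set)  # learned sequence memory: event -> next events

def observe(sequence):
    """Store the temporal structure of a sequence of events."""
    for a, b in zip(sequence, sequence[1:]):
        transitions[a].add(b)

def predict(event):
    """Feedback: an expectation flows back before the next input."""
    return transitions.get(event, set())

observe(["door opens", "footsteps", "hello"])

print(predict("footsteps"))  # an expectation retrieved from memory, not computed
print(predict("hello") or "no prediction: surprise, time to learn")
```

When a prediction comes back empty, that mismatch is exactly the "surprise" signal that would trigger new learning, which a feedforward NN, once trained, has no channel for.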

Relating to what you have to say about "sub-threshold echoes" and "further responses", in Chapter 6 Hawkins speaks about "the flow of input over time", and that would include input patterns that you described ("the flat box with white edges" followed by discussing "Lord of the Rings"). He discusses how multisensory inputs merge together to form composite information about an event. This merging percolates up the cortical hierarchy and leads to the invariance I've previously described. Intra-sensory predictions, in turn, stream down the hierarchy. Read Chapter 6 for some great examples, including "Integrating the Senses". Chapter 6 is also the most technical chapter, laying out the nuts and bolts of his theory.

Your reference to "false hits" is well taken and could be a valid criticism of his theory. I don't specifically remember him mentioning anything along these lines other than learning from mistakes, or how invariant representations will help "reconcile" them (my term "reconcile" from my original posting). Maybe I missed something. But Mountcastle's Unified Cerebral Algorithm would still be valid if it equally accounted for the "false hits" throughout the visual, aural, and tactile cortical regions; in other words, if the mistakes are equally distributed, more or less, among sight, sound, touch, and I would add the olfactory senses! More importantly, since the same algorithm would be invoked throughout the brain, it should handle errors in different cortical regions in the same manner. This auto-corrective error "handling" would be an intrinsic part of the algorithm's signal processing. I don't know anything more about Mountcastle's theory than what I've read in this book. Perhaps Googling will shed some more light.

Yes, if intelligent machines were designed to emulate the brain, there would be some imprecision and unreliability; but for the most part, they would be highly precise and reliable. There are built-in limits in nature anyway, like Gödel's incompleteness theorems and applicable corollaries to Heisenberg's uncertainty principle, and even the most intelligent of machines would be bound by them. But I would agree that there are some aspects of brain behavior that should be left behind. I still have to read the last 1/3rd of the book, and perhaps the author mentions something to this effect. Well, actually, I DID peek ahead and read through some of the section about "building intelligent machines"; I just couldn't resist (or should we say "resistance is futile"). He mentions that these machines should leverage extra-human "sensory" means like infrared, sonar, radar, etc. By enhancing the sensory experience of the machine, including enlarging the sensory detection area, more reliable information will be captured, and hence a lot of the "false hits" would be eliminated. Hawkins does speak about modeling these machines upon the brain, namely designing scalable cortical sheets to emulate cerebral functionality, but I don't think he precludes other functionality from being implemented.

Neurons certainly do have different "atomic operations" than a computer's CPU and memory registers. Again, I'd have to see what he has to say in the last 1/3 of the book, particularly whether he mentions anything about intelligent machines overcoming difficulties capturing "sensory input" (other than what I mention above). But from what he has said so far, the intelligent machine shouldn't "compute" things like the computer, but instead retrieve memories and make appropriate predictions based upon them.

Two thoughts come to mind in this regard: 1) building a computer to emulate the behavior of neurons; 2) biocomputers, DNA-based computers, and nanotechnology.

Thanks for your non-brain examples of intelligence.

Finally, yes, the brain is a product of undesigned evolution, so it stands to reason that we can't simply design a machine to emulate or replicate it. Well, I did mention that non-human intelligence would be part of an evolutionary process. But it also stands to reason that if we learn how the brain has evolved, what mistakes were made during that process and how they were overcome, and learn about cerebral redundancies, then we could design a brain-modeled intelligent machine by taking short cuts (filtering out the mistakes, redundancies, and long-winded evolutionary developments, and fast-forwarding when possible). At least we would be able to make a good first stab at building an intelligent machine.

- David

 
At Fri Jul 07, 03:12:00 AM PDT, Blogger David Epstein said...

Oh, one other thing. When you mention "false hits", I also thought about dreams. A lot of sensory input leads to false correlations and mismatched events & settings within dreams. Some neuroscientists even conclude that dreams amount to nothing more than mental junk or waste, the leftovers of our daily activities and thoughts (I don't believe it myself).

Well, what if an intelligent machine could be designed to "dream" and interpret its dreams? That could lead to a state of "auto-correction" of these false correlations, assuming we subscribe to the notion that this is the core process of dreaming; alternatively, if we see some merit in the belief that dreams are a form of precognition, then the "false hits" would actually be predictions of the future or a type of ESP... Hawkins does speak about creativity and imagination, but I haven't read those sections yet.

 
At Wed Aug 02, 11:35:00 PM PDT, Blogger Eino said...

What a fascinating subject!
I want to read this book as well.
It would be nice if we could make some progress with AI.
In the eighties I studied AI, and for my thesis project I created a so-called 'expert system' using the OPS5 computer language for rule-based AI systems. That experience made me realize that AI is a misnomer, because the human brain is a lot more complex than what can be accomplished with the current state of affairs in computer science. We're still far away from the humanoid AI that is featured in science fiction stories.
Tasks that are simple for people are still difficult for computers, such as: recognizing a face, driving a car, making up a joke (and knowing it's funny), remembering things that are important, while ignoring things that are unimportant.
More generally: logic reasoning, learning, creativity, memory, association, problem solving that can flexibly and logically combine any information that you've ever encountered in your life: this is all incredibly difficult for computers.
Reverse engineering the human brain is very difficult, because the billions of neural connections implement a vast complex of preloaded software and stored memories. It's like reverse engineering a hex dump of an Oracle database and recovering both the Oracle source code and all the data stored in the database: not easy!
That said, I would be interested in exploring this more. Can we make a list of achievements or publications in the last decade that are notable in AI?

 

Post a Comment

<< Home