
Winfried Wilcke

 

Intelligent Machines: the 3rd wave of Artificial Intelligence

This talk will give an overview of work done at our IBM Research center in California in the “Machine Intelligence” project. This is an example of research into the ‘third wave’ of artificial intelligence, which goes beyond the expert systems of earlier AI and the statistical learning approach of most current Machine Learning.

Traditional (statistical) machine learning – such as Deep Learning – has made huge strides in image recognition, speech processing/translation and similar pattern recognition tasks, but this is only the first step on the long path to creating truly intelligent machines (aka strong or general artificial intelligence).

Current deep learning networks are only very superficially inspired by the brain, in that they consist of layers of neurons connected by synapses of varying weights. The fundamental operation of (most) artificial neural networks is supervised training: the network receives a known, human-labeled input ("this is a cat"), compares its current output with the desired output, and tweaks the conduction values of the synapses until the difference (error) is minimized. Mathematically, this corresponds to minimizing an error function in a very high-dimensional space with tools like stochastic gradient descent. We are certain that this is NOT how the brain functions. One symptom is that today's neural networks may need tens of thousands of cat images to learn to recognize cats, whereas a child may need to be told only a few times that this is a cat. Humans learn continuously, and new knowledge doesn't damage prior knowledge, whereas artificial neural networks are very brittle when trying to add new knowledge.
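
As a rough illustration of this training loop (not code from the talk; the toy network, labels and learning rate below are assumptions for the sketch, and plain full-batch gradient descent stands in for the stochastic variant), supervised learning amounts to repeatedly nudging the synaptic weights in the direction that reduces the error:

    # Minimal illustration of supervised training by gradient descent:
    # a tiny one-layer network adjusts its weights ("synapses") until the
    # error between its output and the human-provided labels is minimized.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy labeled data: 4-dimensional inputs, binary labels ("cat" / "not cat").
    X = rng.normal(size=(100, 4))
    true_w = np.array([1.5, -2.0, 0.5, 1.0])
    y = (X @ true_w > 0).astype(float)       # the "human labels"

    w = np.zeros(4)                          # synaptic weights, initially zero
    lr = 0.1                                 # learning rate

    for step in range(500):
        logits = X @ w
        pred = 1.0 / (1.0 + np.exp(-logits)) # network output in [0, 1]
        error = pred - y                     # difference from the desired output
        grad = X.T @ error / len(y)          # gradient of the error w.r.t. weights
        w -= lr * grad                       # tweak the weights to reduce the error

    print("learned weights:", np.round(w, 2))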

Machines will only become intelligent in the human sense if they develop 'common sense', contextual awareness and reasoning. A common-sense statement like "Clouds pay no taxes" is obvious to us, but a machine needs to learn a huge number of facts about the world to even have a concept of taxes, clouds and any relationship between them (none in this case). This requires an intelligent machine – like a child – to autonomously develop a detailed model of the world and the relations between the elements in this world.

The elements of such world models form autonomously in the brain or in an intelligent machine based on specific mathematical concepts (hierarchical sparse distributed representations and hyperdimensional computing), where sensory inputs form an invariant hierarchy of ever more complex model elements. Such a model has been developed at IBM Research (Context Aware Learning / CAL).
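
For a flavor of what sparse distributed representations and hyperdimensional computing look like in practice (a generic sketch, not the CAL representation itself; the dimensionality, sparsity and encoding below are illustrative assumptions):

    # Concepts are represented as very high-dimensional, sparse random vectors,
    # and similarity between concepts is measured by how many active bits they share.
    import numpy as np

    D = 10_000        # hyperdimensional vectors: thousands of components
    K = 200           # only a small fraction of bits are active (sparse)
    rng = np.random.default_rng(1)

    def random_sdr():
        """Random sparse distributed representation: K active bits out of D."""
        v = np.zeros(D, dtype=np.uint8)
        v[rng.choice(D, size=K, replace=False)] = 1
        return v

    def overlap(a, b):
        """Similarity = number of shared active bits."""
        return int(np.sum(a & b))

    cat, dog, cloud = random_sdr(), random_sdr(), random_sdr()

    # Bundling (element-wise OR) composes a simple "animal" concept from examples.
    animal = cat | dog

    print("cat   vs animal:", overlap(cat, animal))    # high overlap: cat is an animal
    print("cloud vs animal:", overlap(cloud, animal))  # near zero: unrelated concepts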

The potential of CAL has been demonstrated by building several two-legged robots which learned on their own – without explicit programming – how to walk without falling down, and by forming invariant representations of simple sensory input patterns.

The talk will end with an overview of two hardware projects that are part of the Machine Intelligence project. The data structures of Machine Intelligence are sparse and not well suited for GPUs. We have therefore built a new neural supercomputer (IBM Neural Computer 3000) to accelerate research into Machine Intelligence algorithms. We will discuss some early results from using this machine.
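
As a toy illustration of why such sparse data structures fit dense, GPU-style hardware poorly (the sizes below are chosen only for the example), the same result can be obtained by touching a few hundred active components instead of all ten thousand:

    # Illustrative only: a sparse vector stored as a short list of active indices
    # versus the dense form that a dense matrix engine would process in full.
    import numpy as np

    D, K = 10_000, 200
    rng = np.random.default_rng(2)
    active = rng.choice(D, size=K, replace=False)   # sparse form: just the indices

    dense = np.zeros(D)
    dense[active] = 1.0
    weights = rng.normal(size=D)

    # Dense path: D multiply-adds, most of them against zeros.
    dense_result = dense @ weights

    # Sparse path: K additions, touching only the active components.
    sparse_result = weights[active].sum()

    print(np.isclose(dense_result, sparse_result))  # same answer, ~50x less work here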