The golden age of AI


A DISCUSSION WITH TOMASO POGGIO ON APPLICATIONS, THEORIES AND NEW MACHINE LEARNING ALGORITHMS

At the end of the 2019 edition of the Machine Learning Conference of Prague, we spent some time with one of the event’s speakers, Tomaso Poggio, who is the Eugene McDermott professor in MIT’s Department of Brain and Cognitive Sciences and the director of the NSF Center for Brains, Minds and Machines at MIT. As one of the founders of computational neuroscience, he is a devoted supporter of interdisciplinarity as a fundamental instrument for bridging brains and machines, increasing our chances of understanding and developing a ‘real artificial intelligence’.

Nowadays, machine learning has become almost ubiquitous; its applications have proven successful in many different contexts. Professor Poggio compared the results achieved in 1995 for pedestrian detection with the current solutions developed by Mobileye for driverless cars: 10 false alarms per second then, versus one error every 40,000 km covered by the car today. This means that over roughly the last twenty years, the precision of machine learning algorithms has doubled every year, improving their accuracy about one million times overall.
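The arithmetic behind that claim can be checked with a quick back-of-the-envelope sketch (this calculation is our own illustration, not part of the interview): doubling every year for twenty years compounds to 2^20, which is just over one million.

```python
# Back-of-the-envelope check: a metric that doubles every year
# for 20 years improves by a factor of 2**20 overall.
years = 20
improvement = 2 ** years
print(f"Overall improvement after {years} years: {improvement:,}x")
# 2**20 = 1,048,576, i.e. roughly one million
```

This is why a yearly doubling sustained over two decades is consistent with the "one million times" figure quoted above.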

“A similar trend cannot continue indefinitely,” adds Tomaso Poggio. “Machine learning is a mature discipline that now needs to expand its application potential to many different contexts: images, voice, text, big data and image generation. I think that, rather than improving the accuracy of existing algorithms, it is now time to do basic research to develop new ones.”

Professor Poggio, with artificial intelligence we have already experienced several hype cycles, followed by disappointment and criticism. Are we going to face another AI winter soon?

No, I reckon we are not entering another AI winter like those we had in the past. However, I think that some of the expectations around AI will have to be managed: people expecting to have a really intelligent machine in less than ten years are going to be disappointed. We need several different disruptive innovations before arriving at the artificial intelligence we are dreaming of. For now, our focus should be on the ocean of possible applications to which we can already apply currently available learning algorithms.

This is a golden age for applications, and indeed we cannot count the number of companies that are already improving the performance of existing solutions, raising the level of automation and delivering new services in many different contexts, from healthcare to manufacturing. Rather than grand statements, we need more pragmatism, to generate a series of incremental technologies that will have a deep impact on people’s lives.

Deep learning and reinforcement learning were inspired by neuroscience results obtained in the fifties, yet we do not know how these machine learning algorithms work. Do you think that not knowing the theory behind them is going to limit our ability to exploit these technologies?

Similar patterns have occurred before in human history: think about electricity. The invention of the voltaic pile by Alessandro Volta in 1800, the result of a professional dispute with Luigi Galvani, opened the door to different applications of electricity, such as the electric motor and electric circuits. However, the theory behind these discoveries was conceived only later, around the 1860s, by James Clerk Maxwell.

That said, we are now very close to defining a theory for deep learning and reinforcement learning. In my presentation, I introduced the three main theoretical puzzles of deep learning, two of which have already been addressed in some of my recent work justifying deep convolutional networks. For the third question, there are only some proposed theoretical approaches, but I think that within the next two years we will have derived a more complete understanding of the overall framework for deep learning. Will this help us derive better algorithms? I am not sure of that, because having the theory does not always mean we can derive improved solutions and applications.

What does it take to have new sources of inspiration for future algorithms?

I think that neuroscience will keep playing a fundamental role in the advancement of AI: indeed, the best definition of intelligence we have is the one given by Turing, which is deeply rooted in concepts of human intelligence. Therefore, studying how our brain works will definitely support our engineering work in areas such as computer vision and machine learning.

Indeed, neuroscience will again be a source of inspiration. More than this, however, I strongly believe interdisciplinarity will be at the core of AI’s future development. Breakthrough discoveries can only come from cooperation among computer scientists, engineers and neuroscientists: good news for institutions such as MIT’s Center for Brains, Minds and Machines, Stanford’s Computation and Cognition Lab and other universities, where there is a good mix of engineering competencies, neuroscience and cognitive knowledge. Such a mix is arguably still outside the focus of several tech giants’ departments, where the engineering side is privileged.

A video circulating online compares human capabilities with some robots’ poor performances. Can we take this as an example of how far we are from effective artificial intelligence applications in robotics?

I think that robotics is a particularly complex field. An effective robot relies both on the intelligence of its software and on the use of the right materials. In materials science we need further and more advanced research: the materials traditionally used in robotics may not be the right ones to achieve human-level performance, and we may need to develop materials that are more similar to our flesh and muscles.

I have a last question about the so-called Software 2.0, which refers to machine-written software developed by neural networks that learn weights via optimisation. Are programmers being replaced by labellers?

No, I think this is just a joke. I am fully convinced we need good human software developers now more than ever. Legacy systems are just one example of where programmers’ capabilities are required.

Having breathed a huge sigh of relief that our software developers will not be replaced by neural networks, we are left by Tomaso Poggio with a challenge: how do we enable a machine to learn as a child does, from very few examples (even just one) rather than from millions of labelled ones? At Konica Minolta Labs we are committed to following future developments of machine learning to improve our services, starting with our AI-based Assistive Services and their related platforms.
