Some experts believe that by 2050 machines will have reached human-level intelligence. Thanks, in part, to a new era of machine learning, computers are already starting to assimilate information from raw data much as a human infant learns from the world around her.
This means we are getting machines that can, for example, teach themselves to play computer games and become incredibly good at them (work ongoing at Google’s DeepMind), and devices that can begin to communicate in human-like speech, such as the voice assistants on smartphones. Computers are beginning to understand the world beyond bits and bytes.
Fei-Fei Li has spent the last 15 years teaching computers how to see. First as a PhD student and latterly as director of the computer vision lab at Stanford University, she has pursued this painstakingly difficult goal with the ultimate aim of creating electronic eyes that allow robots and machines to see and, more importantly, understand their environment.
Roughly half of the human brain’s processing power goes into vision, even though seeing is something we all do without apparent effort. “No one tells a child how to see, especially in the early years. They learn this through real-world experiences and examples,” said Ms Li in a talk at the 2015 Technology, Entertainment and Design (Ted) conference. “If you consider a child’s eyes as a pair of biological cameras, they take one picture about every 200 milliseconds, the average time an eye movement is made. So by age three, a child would have seen hundreds of millions of pictures of the real world. That’s a lot of training examples,” she added.
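The back-of-the-envelope arithmetic behind that estimate is easy to check. A minimal sketch, assuming for simplicity that the eye samples continuously around the clock (the talk does not spell out its exact assumptions):

```python
# Rough check of the estimate quoted above: one "picture" every
# 200 milliseconds from birth to age three. The continuous,
# no-sleep sampling here is a simplifying assumption.

MS_PER_PICTURE = 200
pictures_per_second = 1000 / MS_PER_PICTURE        # 5 per second

seconds_in_three_years = 3 * 365 * 24 * 60 * 60    # 94,608,000 seconds
total = pictures_per_second * seconds_in_three_years

print(f"{total:,.0f} pictures")  # on the order of 470 million
```

Even after discounting sleep, the figure stays comfortably in the hundreds of millions, consistent with the claim in the talk.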
She decided to teach computers in a similar way. “Instead of focusing solely on better and better algorithms, my insight was to give the algorithms the kind of training data that a child is given through experiences, in both quantity and quality.” More at the BBC.
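Li’s point, that the quantity of training data can matter as much as the cleverness of the algorithm, can be illustrated with a toy sketch. The data below is synthetic and the “learner” is a deliberately trivial centroid estimator, not her actual method: it simply averages its examples, and its estimate of a class’s true appearance tends to improve as it sees more of them.

```python
import random

# Synthetic illustration: the same trivial learner (averaging examples
# to estimate a class centroid) tends to get closer to the truth as the
# number of training examples grows. Not Fei-Fei Li's actual method.

random.seed(42)

TRUE_CENTER = (3.0, 7.0)  # hypothetical "true" appearance of a class

def sample(n):
    """Draw n noisy 2-D examples scattered around the true center."""
    return [(random.gauss(TRUE_CENTER[0], 2.0),
             random.gauss(TRUE_CENTER[1], 2.0)) for _ in range(n)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def error(estimate):
    """Euclidean distance from the estimate to the true center."""
    return ((estimate[0] - TRUE_CENTER[0]) ** 2 +
            (estimate[1] - TRUE_CENTER[1]) ** 2) ** 0.5

err_small = error(centroid(sample(5)))      # a handful of examples
err_large = error(centroid(sample(5000)))   # "a lot of training examples"

print(f"error with 5 examples:    {err_small:.3f}")
print(f"error with 5000 examples: {err_large:.3f}")
```

The gap between the two errors is the whole point: holding the algorithm fixed and scaling up the data is itself a powerful lever, which is the intuition behind building very large labelled image collections.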