Some people like to boast, “I never forget a face.” At a time when artificial intelligence research is advancing by leaps and bounds, it’s an odd thing to brag about. In fact, there are now computers that can remember 260 million faces. Last week, three Google researchers published a research paper on a new artificial intelligence system. The system, called FaceNet, is touted by Google as the most accurate facial recognition technology to date.

Pose and lighting have always been major challenges in face recognition. The figure shows the “distance” between the two faces in each pair under different combinations of pose and lighting (a distance of 0.0 means the two faces are identical).

On a popular facial-recognition database called Labeled Faces in the Wild, which contains more than 13,000 photos of faces collected from the web, FaceNet was nearly 100 percent accurate. On a far larger database of 260 million face photos, the system was more than 86 percent accurate.

The researchers said they were testing the system’s verification ability against a database of faces: essentially, they measured how good the algorithm is at determining whether two photos show the same person.
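As a rough illustration of that verification step, here is a minimal sketch in the spirit of the paper’s approach: map each face photo to an embedding vector and call two photos the same person when the vectors are close. The `embed_face` function below is a hypothetical stand-in (it just flattens and normalizes pixels); a real system like FaceNet would use a trained deep network, and the distance threshold here is purely illustrative.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for a trained face-embedding network:
    # it just flattens the pixels and L2-normalizes them. A FaceNet-style
    # system instead runs a deep CNN that outputs a compact embedding.
    v = image.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def same_person(image_a: np.ndarray, image_b: np.ndarray,
                threshold: float = 1.0) -> bool:
    # Verification: two photos (assumed to be the same size) are judged to
    # show the same person when their embeddings are close in Euclidean
    # distance. The threshold value is illustrative, not from the paper.
    dist = np.linalg.norm(embed_face(image_a) - embed_face(image_b))
    return dist < threshold
```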

In December, a Chinese research team claimed more than 99 percent accuracy on the same database. Last year, researchers at Facebook published a paper saying they could do the same with more than 97 percent accuracy. According to some of the researchers cited in the paper, humans themselves are only about 97.5 percent accurate on the task.




But the approach the Google researchers take goes beyond checking whether two faces are the same. The system can also match names to faces – classic facial recognition technology – and even group together the faces that look most alike and least alike.
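Those two extra uses can also be sketched on top of the same kind of embeddings, assuming an `embed_face`-style function as above. The code below is only an illustration: identification picks the nearest labeled face, and a naive greedy pass groups look-alike faces; the names, data structures and threshold are all hypothetical.

```python
import numpy as np

def identify(query_emb: np.ndarray, gallery: dict) -> str:
    # Classic recognition: return the name whose stored embedding is
    # nearest to the query embedding. `gallery` maps names to embeddings.
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - query_emb))

def group_faces(embeddings: list, threshold: float = 1.0) -> list:
    # Naive greedy grouping of look-alike faces: a face joins the first
    # group whose representative embedding is within `threshold`,
    # otherwise it starts a new group.
    groups, reps = [], []
    for i, emb in enumerate(embeddings):
        for g, rep in enumerate(reps):
            if np.linalg.norm(emb - rep) < threshold:
                groups[g].append(i)
                break
        else:
            groups.append([i])
            reps.append(emb)
    return groups
```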

For now, this is just research, but it suggests that in the near future, crime-fighting, surveillance-enhancing computers of the kind we often see in online videos and movies will be more readily available. It could make online dating easier (and more superficial) than swiping through the dating app Tinder.

Love Brad Pitt circa 1998? There are 500 faces in this database that look just like him.



Initially, we’ll see Google’s FaceNet and Facebook’s DeepFace operating on their respective web platforms. They make it easier (or more automatic) for users to tag photos and find the person they’re looking for, because the algorithms know who the person in the photo is, even if the photos aren’t name-tagged. In addition, such systems could make it easier for web companies to analyze their users’ social networks and gauge global trends and celebrity popularity based on the identities of people in photos.

While Google and Facebook have only recently made such advances in facial recognition technology, similar computer systems are already ubiquitous. They all rely on an artificial intelligence technique called deep learning, which has proved extremely effective at machine-recognition tasks such as recognizing objects (a task at which, by some measures, machines are already better than humans), recognizing speech and understanding written text.

In addition to Google and Facebook, Microsoft, Baidu and Yahoo are also investing heavily in deep learning research. The technique already powers some familiar features, such as smartphone voice control, Skype real-time translation, predictive text messaging and advanced image search (if you’ve uploaded images to your Google+ account, you can try searching them for specific objects). Spotify and Netflix are working on ways to use deep learning to make smarter recommendations, and PayPal uses it to fight fraud.



Several other tech startups are using deep learning to analyze medical images in real time and to offer cloud services such as text analysis, computer vision and speech recognition. In recent years, companies like Twitter, Pinterest, Dropbox, Yahoo and Google have all acquired startups specializing in deep learning. IBM just bought a Denver-based startup called AlchemyAPI to boost the intelligence of its Watson supercomputer and power its new Bluemix cloud platform (the idea being that developers can easily connect mobile and web apps to cloud services to build smart apps without delving into the complex computer science behind artificial intelligence).

It doesn’t stop there. As consumer robots, driverless cars and smart homes become a reality, deep learning will follow, providing our new toys with eyes, ears and some brains. DARPA, the Defense Advanced Research Projects Agency, is also exploring how deep learning can be used to make sense of vast streams of intelligence information in real time.