1. The RealAI team at Tsinghua University unlocked 19 phones in 15 minutes

A recent demonstration by a Tsinghua University team, which exploited a vulnerability in face recognition technology to "unlock 19 unfamiliar domestic smartphones in 15 minutes," has drawn widespread concern among netizens.

According to reports, the RealAI team at Tsinghua University selected 20 phones in total: one foreign model and 19 domestic smartphones drawn from the top five domestic phone brands, with 3-4 models per brand at different price points, covering low-end to flagship devices.

1) The test steps are as follows:

  • Step 1: the Tsinghua researchers enrolled the face of a nearby volunteer, "Classmate No. 1," in the face recognition systems of the 19 domestic phones.

  • Step 2: another student or colleague picked up the phone and attempted face recognition. As expected, the face of a person who had not been enrolled could not unlock the phone.

  • Step 3: a photo of Classmate No. 1 was printed, the eye-region pattern in particular was cut out and stuck onto an ordinary pair of glasses. Wearing those glasses, the attacker unlocked the phone.

Of the 20 phones tested, the team cracked all 19 Android phones within 15 minutes; the only exception was the iPhone 11. The 19 models span low-end to flagship devices from the top five domestic phone brands, including one brand's latest flagship released in December.

There was little difference in how hard the phones were to attack: everything from low-end devices to high-end models costing more than 4,000 yuan was unlocked.

Beyond cracking the phones' face unlock systems, the team also used adversarial-sample attacks to pass the face recognition authentication of some government and financial apps, and even completed opening an online bank account while impersonating the owner of the phone.

According to the team, although developing the core algorithm is very difficult, a malicious actor open-sourcing such an algorithm would greatly lower the bar for cracking. The researchers suggest that face recognition apps can guard against this risk by adding a module to the authentication pipeline that screens for adversarial samples.
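One way such a screening module might work, offered here only as a minimal illustrative sketch under assumptions of our own (not the RealAI team's method): printed adversarial patterns tend to be fragile, so the recognizer's match score on the raw camera frame can be compared with its scores on lightly "squeezed" copies of the frame. `face_match_score` below is a hypothetical stand-in for the phone's face-matching model.

```python
# A minimal illustrative sketch of an adversarial-sample screening step
# (an editorial assumption, not the RealAI team's method).
# Idea: compare the recognizer's match score on the raw frame with its
# scores on lightly "squeezed" copies; a large drop is treated as suspicious.
# `face_match_score(image) -> float in [0, 1]` is a hypothetical stand-in
# for the phone's face-matching model.

import numpy as np
from scipy.ndimage import gaussian_filter

def squeezed_variants(image: np.ndarray):
    """Return lightly transformed copies of the input frame."""
    image = image.astype(np.float32)
    blurred = gaussian_filter(image, sigma=1.0)      # mild Gaussian blur
    quantized = np.round(image / 32.0) * 32.0        # coarse re-quantization
    return [blurred, quantized]

def looks_adversarial(image: np.ndarray, face_match_score, threshold: float = 0.25) -> bool:
    """Flag the frame if its match score collapses under small transforms."""
    base = face_match_score(image)
    drops = [base - face_match_score(v) for v in squeezed_variants(image)]
    return max(drops) > threshold
```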

2) So how is the new attack implemented?

According to RealAI, the entire attack uses only three physical items: a printer, a sheet of A4 paper, and a pair of eyeglass frames.

Starting from a photo of the victim, the algorithm generates a perturbation pattern over the eye area, which is then printed, cut into the shape of "glasses," and affixed to the frame. The whole process takes only about 15 minutes, the algorithm engineers said.

(Figure: on the left, the eye image of the targeted victim; the two images on the right are the generated adversarial-sample patterns.)

Much like samples produced by a generative adversarial network (GAN), the pattern on the "glasses" looks like a copy of the target's eye region, but it is not simply a photo. According to the algorithm engineers, it is a perturbation pattern computed by combining the attacker's image with the victim's image, what the AI field calls an "adversarial sample."

With the attacker's image as the input and the victim's image as the optimization target, the algorithm automatically computes the adversarial pattern that maximizes the similarity between the two faces as the recognition model sees them.
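The following is a minimal PyTorch sketch of that general idea, not RealAI's actual implementation; `embed_model`, `attacker_img`, `victim_img`, and `glasses_mask` are hypothetical placeholders for a face-embedding network, the two photos, and an eyeglass-shaped mask.

```python
# A minimal sketch of eyeglass-patch optimization (not RealAI's actual code):
# optimize a perturbation confined to an eyeglass-shaped mask so that the
# face embedding of (attacker + patch) moves as close as possible to the
# victim's embedding.

import torch
import torch.nn.functional as F

def make_glasses_patch(embed_model, attacker_img, victim_img, glasses_mask,
                       steps: int = 500, lr: float = 0.01) -> torch.Tensor:
    # attacker_img, victim_img: tensors of shape (1, 3, H, W) with values in [0, 1]
    # glasses_mask: (1, 1, H, W), 1 inside the eyeglass region, 0 elsewhere
    patch = torch.zeros_like(attacker_img, requires_grad=True)
    target = embed_model(victim_img).detach()        # victim's face embedding
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        adv = torch.clamp(attacker_img + patch * glasses_mask, 0.0, 1.0)
        emb = embed_model(adv)
        # maximize cosine similarity to the victim, i.e. minimize its negative
        loss = -F.cosine_similarity(emb, target).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (patch * glasses_mask).detach()           # printable eye-region pattern
```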

The attack may look crude, but developing the core adversarial algorithm actually has a high technical threshold.

That does not mean the problem is not a threat, the RealAI team says: "Developing the core algorithm is hard, but if it were maliciously open-sourced, it would become much easier to learn; after that it's just a matter of finding a picture." In other words, once such an algorithm is public, almost anyone who can obtain a single photo of a victim could quickly mount a criminal attack.

3) Adversarial-sample attacks: from the lab to reality

In 2013, Google researcher Szegedy and colleagues found that machine learning models are easy to deceive: by deliberately adding subtle perturbations to the input, a model can be induced to produce incorrect outputs. Adversarial-sample attacks have since been regarded as a major concern in AI security.

In a well-known example, a neural network classified an image as a panda with 57.7% confidence, the highest of all categories, and so concluded the image contained a panda. After a carefully constructed noise pattern was added, the resulting image (on the right) looked almost identical to a human observer, yet the network classified it as a "gibbon" with 99.3% confidence.
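As an illustration of how such perturbations are typically constructed, here is a minimal sketch of the fast gradient sign method (FGSM) from this line of research; `classifier`, `image`, and `true_label` are hypothetical placeholders for a trained model and a correctly classified input such as the "panda" image.

```python
# A minimal sketch of the fast gradient sign method (FGSM), the textbook way
# to construct the kind of imperceptible perturbation described above.

import torch
import torch.nn.functional as F

def fgsm_example(classifier, image, true_label, epsilon: float = 0.007):
    # image: float tensor of shape (1, 3, H, W) in [0, 1]; true_label: shape (1,)
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), true_label)
    loss.backward()
    # take a tiny step in the direction that most increases the loss
    adversarial = torch.clamp(image + epsilon * image.grad.sign(), 0.0, 1.0)
    return adversarial.detach()
```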

2. Face recognition risks arise frequently

What are the risks of face recognition? Incidents like the following have occurred repeatedly.

1) Did Dong Mingzhu run a red light? No: a facial recognition camera mistook her portrait in a bus advertisement for a pedestrian

In one earlier incident, a traffic camera captured a portrait of Dong Mingzhu in an advertisement on the side of a bus, flagged it as a pedestrian crossing the road, and even displayed the words "illegally ran a red light" next to it; the incident trended online.

Similarly, last year there was a widely discussed case in which primary school students easily cracked the "face-scan pickup" feature of Fengchao smart lockers: pupils in Zhejiang found that they could take out their parents' parcels simply by holding up a printed photo of a face. After the incident, Fengchao took the feature offline and told the public that the "face-scan" function had been a small-scale trial, had been withdrawn promptly, and had caused no loss to users.

2) Is face recognition safe? In this case, a criminal with only a junior high school education gave the technology a slap in the face

In many people's eyes, Apple phones are more secure than the average Android phone.

But a criminal judgment recently made public by a court in Guangxi forces a second look at that assumption. According to the judgment, at around 7 p.m. on June 8, 2019, the defendant Huang went to a second-hand mobile phone shop in Liuzhou to buy a phone. While browsing, he saw that the victim, Chen, had sold an Apple phone to the shop, and then noticed that Chen's WeChat and Alipay accounts were still logged in, so he bought that phone.

Between 2 p.m. and 9 p.m. the next day, under a tree near the junction of Liushi Road and Jiutoushan Road in Liuzhou's Yufeng District, Huang used photos of the victim stored on the phone to create an animated image that passed the dynamic face recognition check used for payment authentication. In this way he transferred 9,100 yuan from the victim's WeChat account and 7,500 yuan from the victim's Alipay account to his own personal accounts.

Afterwards, Huang returned the phone to the same shop and squandered the stolen money.

It is worth noting that Huang was born in Liuzhou, Guangxi Zhuang Autonomous Region, in 1996, has only a junior high school education, and has no occupation.

3) Wearing a helmet just to view a home

Recently, a video of someone "wearing a helmet to view a house" circulated online. Why would a homebuyer wear a helmet? Many people's first thought was that he feared being recognized by acquaintances. It turned out the buyer was not afraid of acquaintances at all; he was trying to avoid the sales office's face recognition system. In some places, face recognition cameras have become "standard equipment" at sales offices. Their purpose is to support developers' "distribution channel" model: by pinning down each buyer's identity, they prevent disputes between the developer's own sales staff and third-party agents over who brought in a customer.

At first glance, this move by the sales office seems to avoid disputes over unclear customer attribution, but on reflection, who protects the privacy of home buyers? What's more, once the face recognition system tags a buyer as a "natural visitor" (one who came on their own), they lose access to the channel discount, so it is no wonder buyers wear helmets to view houses.

The law already makes clear that face recognition technology must not be abused. Relevant legal provisions state that information collectors must not only "clearly state the purpose, method, and scope of collecting and using the information," but also "obtain the consent of the person whose information is collected." Recently, the first-instance verdict was handed down in the service-contract dispute between Guo Bing and Hangzhou Wildlife World Co., Ltd., known as "China's first face recognition case." In the view of legal professionals, the verdict means that when people find an organization or individual using face recognition equipment and do not wish to be identified, they can approach the operator and ask it to delete their personal data, and, when necessary, protect their legitimate rights and interests through litigation.

3. Commentary:

3.1 Xinmin Quick Review: regulators cannot turn a blind eye while the abuse of face recognition runs unchecked

The legitimacy, necessity, and safety of each use of face recognition should be evaluated up front, rather than waiting until the problem grows large and then imposing restrictions, by which point resistance will be greater. Only with supervision that stays a step ahead can face recognition technology better benefit society while avoiding harm to personal privacy.

3.2 Bai Yansong on wearing helmets to view homes: I hope the law will wear the helmet for us

3.3 Payment Industry Network: face payment has serious security problems

Payment Industry Network notes that, given the rigorous handling of the case by the police, prosecutors, and courts, the judgment records no third-party assistance: Huang was able, on his own, to generate an animated image from the victim's photos and successfully pass the payment authentication of WeChat Pay and Alipay.

The loss of user funds will undoubtedly affect how safe users consider WeChat Pay and Alipay to be. Although both services invoke Apple's face recognition interface and leave identity verification to Apple, that does not change the fact that users suffered losses in their accounts at these payment institutions.

It also shows that face payment has serious security problems, and that even trillion-dollar Apple (AAPL) cannot completely prevent animated-image fraud.

  

3.4 Li Wei, Director of the Science and Technology Department of the People's Bank of China

Just last summer, Li Wei, director of the central bank's Science and Technology Department, told a conference that the face is highly sensitive personal information, and that its leak or theft would have a major impact. Some technology can now recognize a face from three kilometers away; combined with face payment, money could be gone after a single scan, and a scenario in which the customer cannot express their own will is frightening. The technology must therefore not be abused or used capriciously.

3.5 Media review

Last month, analysts wrote about several concerns with face payment. Because the face is always exposed, is passive (it cannot actively initiate a transaction), and is contactless (it needs no direct contact with the device), it is hard to determine whether a customer paid actively or passively.

In the past, paying by swiping a card or scanning a code usually required entering a password, or at the very least an active fingerprint verification; these actions expressed the customer's intention to transact.

Face payment, however, leaves no place for such active steps; the consequence is that a customer may pay "without noticing," with no intention to transact, and suffer losses.

3.6 Comments from technical experts

Liu Guangxin, CTO of Xinxin Technology, believes:

  1. Face payment is not safe. The face is public information, and a verification mechanism that treats the face as the "answer" has fundamental problems. In the mobile internet era, personal photos and even dynamic videos are shared on social feeds, so protecting our faces is a huge challenge.
  2. Fingerprint recognition is relatively secure, since fingerprints are rarely shared, although there is still a risk of malicious collection by various apps.
  3. SMS verification is more secure, because each verification code is used only once, so SMS verification remains an important safeguard for payment and user registration (see the sketch below).

But SMS verification also carries the risk of being abused by automated bulk requests, which is why all sorts of annoying image CAPTCHAs and pointless swipe-and-tap challenges exist. Xinxin Technology has developed a frictionless SMS firewall that protects SMS verification codes without requiring a graphic CAPTCHA.
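For illustration only, here is a minimal sketch of the one-time-code idea behind SMS verification, under our own assumptions (it is not Xinxin Technology's product): each code is random, bound to one phone number, expires after a short window, and can be checked only once.

```python
# A minimal sketch of a one-time SMS verification code (an illustrative
# assumption, NOT Xinxin Technology's product): random, short-lived,
# bound to one phone number, and consumed on first check.

import secrets
import time

_pending = {}  # phone number -> (code, expiry timestamp)

def issue_code(phone: str, ttl_seconds: int = 300) -> str:
    """Generate a random 6-digit code valid for a short window."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[phone] = (code, time.time() + ttl_seconds)
    return code  # in practice this would be handed to an SMS gateway

def verify_code(phone: str, submitted: str) -> bool:
    """Single use: the stored code is removed as soon as it is checked."""
    code, expiry = _pending.pop(phone, (None, 0.0))
    return code is not None and time.time() < expiry and secrets.compare_digest(code, submitted)
```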

Related reading:

  • Can WeChat and Alipay funds be stolen from your iPhone?
  • How does Alipay's risk control work? Alipay Risk Control Disclosure -2-: online payment and risk prevention practice
  • Hackers want to transfer your Alipay money? How does AlphaRisk fight back? Alipay Risk Control Disclosure -1-: compound event processing