Tuesday, February 12, 2019

FR: Face Recognition Methods


Deep Face Recognition: A Survey  arXiv:1804.06655v7 [cs.CV] 28 Sep 2018

  1. Mei Wang, 
  2. Weihong Deng
Fig. 2. The hierarchical architecture of deep FR. Algorithms consist of multiple layers of simulated neurons that convolve and pool the input, during which the receptive-field size of the simulated neurons is continually enlarged to integrate low-level primary elements into multifarious facial attributes, finally feeding the data forward to one or more fully connected layers at the top of the network. The output is a compressed feature vector that represents the face. Such deep representations are widely considered the state-of-the-art technique for face recognition.
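As a minimal sketch (not any specific network from the survey), the conv-pool-embed pattern described in Fig. 2 can be written in a few lines of PyTorch: stacked convolutions and pooling enlarge the receptive field, and a final fully connected layer compresses the face into a fixed-length feature vector. The layer counts and sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyFaceNet(nn.Module):
    """Illustrative conv -> pool -> fully connected face embedder."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 112x112 -> 56x56, receptive field grows
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # 56x56 -> 28x28
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global pooling over the whole face
        )
        self.embed = nn.Linear(128, embedding_dim)

    def forward(self, x):
        return self.embed(self.features(x).flatten(1))  # compact face representation

emb = TinyFaceNet()(torch.randn(1, 3, 112, 112))
print(emb.shape)  # torch.Size([1, 128])
```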


Fig. 3. Deep FR system with face detector and alignment. First, a face detector is used to localize faces. Second, the faces are aligned to normalized canonical coordinates. Third, the FR module is applied. In the FR module, face anti-spoofing recognizes whether the face is live or spoofed; face processing is used to handle recognition difficulties before training and testing; different architectures and loss functions are used to extract discriminative deep features during training; and face matching methods are used to classify features once the deep features of the test data have been extracted.
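The stages in Fig. 3 map naturally onto a short pipeline function. The sketch below is only a structural outline: detect_faces, align_face, is_live and extract_embedding are hypothetical placeholders standing in for the detector, alignment, anti-spoofing and deep-feature modules, not APIs of any particular library.

```python
import numpy as np

def recognize(image, gallery, threshold=0.5):
    """Return an (identity, score) pair for each live face found in the image."""
    results = []
    for box, landmarks in detect_faces(image):        # 1. face detection (placeholder)
        face = align_face(image, landmarks)           # 2. alignment to canonical coordinates (placeholder)
        if not is_live(face):                         # 3a. anti-spoofing check (placeholder)
            continue
        feat = extract_embedding(face)                # 3b. deep feature extraction (placeholder)
        feat = feat / np.linalg.norm(feat)
        # 3c. face matching: cosine similarity against enrolled gallery embeddings
        name, score = max(((n, float(feat @ g)) for n, g in gallery.items()),
                          key=lambda kv: kv[1])
        results.append((name if score > threshold else "unknown", score))
    return results
```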


TABLE III DIFFERENT LOSS FUNCTIONS FOR FR


TABLE IV THE ACCURACY OF DIFFERENT VERIFICATION METHODS ON THE LFW DATASET.
Fig. 5. The development of loss functions. The introduction of DeepFace [160] and DeepID [156] in 2014 marked the beginning of deep FR. After that, Euclidean-distance-based losses such as the contrastive loss, triplet loss and center loss played an important role. In 2016 and 2017, L-Softmax [107] and A-Softmax [106] further promoted the development of large-margin feature learning. In 2017, feature and weight normalization also began to show excellent performance, leading to the study of variations of softmax. Red, green, blue and yellow rectangles represent deep methods with softmax, Euclidean-distance-based loss, angular/cosine-margin-based loss and variations of softmax, respectively.
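To make one of the Euclidean-distance-based losses in Fig. 5 concrete, here is a minimal triplet-loss sketch in PyTorch: the anchor-positive distance should be smaller than the anchor-negative distance by at least a margin. The margin value and batch size are illustrative.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_ap = F.pairwise_distance(anchor, positive)   # same identity
    d_an = F.pairwise_distance(anchor, negative)   # different identity
    return F.relu(d_ap - d_an + margin).mean()

# Embeddings are typically L2-normalized before the loss is computed.
a, p, n = (F.normalize(torch.randn(8, 128), dim=1) for _ in range(3))
print(triplet_loss(a, p, n))
```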



Fig. 15. The evolution of FR datasets. Before 2007, early work in FR focused on controlled, small-scale datasets. In 2007, the LFW [77] dataset was introduced, marking the beginning of FR under unconstrained conditions. Since then, more testing databases covering different tasks and scenes have been designed. In 2014, CASIA-WebFace [206] provided the first widely used public training dataset, and large-scale training datasets became a hot topic. Red rectangles represent training datasets, and rectangles of other colors represent testing datasets with different tasks and scenes.

TABLE VI THE COMMONLY USED FR DATASETS FOR TRAINING

Fig. 17. A visualization of the size and estimated noise percentage of datasets [169].


Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation arXiv:1801.04381v2 [cs.CV] 16 Jan 2018

Figure 4: Comparison of convolutional blocks for different architectures. ShuffleNet uses group convolutions [19] and channel shuffling; it also uses a conventional residual approach in which the inner blocks are narrower than the output. The ShuffleNet and NasNet illustrations are from their respective papers.
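For comparison with the other blocks in Figure 4, here is a rough PyTorch sketch of the inverted residual block named in the title: a 1x1 expansion with ReLU6, a 3x3 depthwise convolution, and a linear 1x1 projection back to a narrow bottleneck, with a skip connection only when the input and output shapes match. The channel counts and expansion ratio are illustrative.

```python
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6):
        super().__init__()
        hidden = in_ch * expand_ratio
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),              # 1x1 expansion
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,
                      groups=hidden, bias=False),                 # 3x3 depthwise conv
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),             # linear 1x1 bottleneck
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_skip else out
```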

YOLO-LITE: A Real-Time Object Detection Algorithm Optimized for Non-GPU Computers


  1. Rachel Huang* School of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, United States rachuang22@gmail.com 
  2. Jonathan Pedoeem* Electrical Engineering The Cooper Union New York, United States pedoeem@cooper.edu 
  3. Cuixian Chen Mathematics and Statistics UNC Wilmington North Carolina, United States






OpenFace: A general-purpose face recognition library with mobile applications

  1. Brandon Amos, 
  2. Bartosz Ludwiczuk,
  3. Mahadev Satyanarayanan
  4. http://cmusatyalab.github.io/openface/





ArcFace: Additive Angular Margin Loss for Deep Face Recognition

  1. Jiankang Deng * Imperial College London j.deng16@imperial.ac.uk
  2. Jia Guo ∗ InsightFace guojia@gmail.com
  3. Niannan Xue Imperial College London n.xue15@imperial.ac.uk
  4. Stefanos Zafeiriou Imperial College London s.zafeiriou@imperial.ac.uk



  1. Decision margins
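A condensed sketch of the additive angular margin behind these decision margins: features and class weights are L2-normalized so each logit becomes cos(theta), a margin m is added to the target-class angle, and the result is rescaled by s before ordinary softmax cross-entropy. The values below follow the commonly cited s = 64, m = 0.5 defaults; treat the code as an illustration rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def arcface_logits(embeddings, weight, labels, s=64.0, m=0.5):
    # cosine of the angle between each embedding and each class weight vector
    cosine = F.normalize(embeddings, dim=1) @ F.normalize(weight, dim=1).t()
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
    # add the angular margin m only to the ground-truth class
    target = F.one_hot(labels, num_classes=weight.size(0)).bool()
    return s * torch.cos(torch.where(target, theta + m, theta))

# Usage: feed the margin-adjusted logits into standard cross-entropy.
emb, W = torch.randn(4, 512), torch.randn(10, 512)
labels = torch.randint(0, 10, (4,))
loss = F.cross_entropy(arcface_logits(emb, W, labels), labels)
```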


