Thursday, February 28, 2019

TX2 backup image

https://elinux.org/Jetson/TX2_Cloning

Keywords: DEFAULT_RUNLEVEL, dd, rc-sysinit, rcS, boot sequence

Cloning the Image

cd into the directory containing the L4T installation package on the host PC. The command below saves the TX2's eMMC image to the specified file on the host:
 $ sudo ./flash.sh -r -k APP -G backup.img jetson-tx2 mmcblk0p1
In this case we call the file backup.img, so the same flash.sh script can be re-used to format and flash other Jetsons with the image.
Note that if the script reports an unrecognized -G option, replace your flash.sh with the script from this post.

Copy the backup raw image to flashing directory

Copy the .raw file, which contains the complete image from the source device:
$ sudo cp backup.img.raw bootloader/system.img
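Since the image can be tens of gigabytes, it may be worth confirming the copy is byte-identical before flashing. A minimal sketch (both paths are assumed from the cp command above):

```shell
# Sketch: verify the copied system image matches the backup byte-for-byte.
# Paths are assumed from the steps above.
src=backup.img.raw
dst=bootloader/system.img
if cmp -s "$src" "$dst"; then
    echo "images match"
else
    echo "images differ" >&2
fi
```

cmp -s compares the files silently and only sets the exit status, so this works on raw images without loading them into memory.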

Restoring the Image

The recommended way to restore multiple units with different serial numbers is to save the image above as system.img and use the stock L4T flashing script, flash.sh, with the -r option (to reuse the backed-up system.img instead of rebuilding a vanilla image from scratch):
  $ sudo ./flash.sh -r -k APP jetson-tx2 mmcblk0p1

Tuesday, February 12, 2019

FR performance comparison

Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation  arXiv:1801.04381v2 [cs.CV] 16 Jan 2018
Table 4: Performance on ImageNet, comparison for different networks. As is common practice for ops, we count the total number of Multiply-Adds. In the last column we report running time in milliseconds (ms) for a single large core of the Google Pixel 1 phone (using TF-Lite). We do not report ShuffleNet numbers as the framework does not yet support efficient group convolutions. 

Face Recognition Issue




  1. Low resolution
  2. NIRNet (for near-infrared images)
  3. Partial
  4. Low-shot
  5. Sequence loading
  6. Cross-pose
  7. Spoofing attacks
  8. Makeup
  9. Cross-age

FR: face recognition methods


Deep Face Recognition: A Survey  arXiv:1804.06655v7 [cs.CV] 28 Sep 2018

  1. Mei Wang, 
  2. Weihong Deng
Fig. 2. The hierarchical architecture of deep FR. Algorithms consist of multiple layers of simulated neurons that convolve and pool the input, during which the receptive-field size of the simulated neurons is continually enlarged, integrating low-level primary elements into multifarious facial attributes and finally feeding the data forward to one or more fully connected layers at the top of the network. The output is a compressed feature vector that represents the face. Such deep representations are widely considered the state-of-the-art technique for face recognition.


Fig. 3. Deep FR system with face detector and alignment. First, a face detector is used to localize faces. Second, the faces are aligned to normalized canonical coordinates. Third, the FR module is implemented. In the FR module, face anti-spoofing recognizes whether the face is live or spoofed; face processing is used to handle recognition difficulty before training and testing; different architectures and loss functions are used to extract discriminative deep features during training; and face matching methods are used to classify the deep features extracted from the test data.


TABLE III DIFFERENT LOSS FUNCTIONS FOR FR


TABLE IV THE ACCURACY OF DIFFERENT VERIFICATION METHODS ON THE LFW DATASET.
Fig. 5. The development of loss functions. The introduction of DeepFace [160] and DeepID [156] in 2014 marked the beginning of deep FR. After that, Euclidean-distance-based losses played an important role, such as contrastive loss, triplet loss and center loss. In 2016 and 2017, L-Softmax [107] and A-Softmax [106] further promoted the development of large-margin feature learning. In 2017, feature and weight normalization also began to show excellent performance, leading to studies on variations of softmax. Red, green, blue and yellow rectangles represent deep methods with softmax, Euclidean-distance-based loss, angular/cosine-margin-based loss and variations of softmax, respectively.



Fig. 15. The evolution of FR datasets. Before 2007, early work in FR focused on controlled and small-scale datasets. In 2007, the LFW [77] dataset was introduced, marking the beginning of FR under unconstrained conditions. Since then, more testing databases with different tasks and scenes have been designed. In 2014, CASIA-WebFace [206] provided the first widely used public training dataset, and large-scale training datasets became a hot topic. Red rectangles represent training datasets; rectangles of other colors represent testing datasets with different tasks and scenes.

TABLE VI THE COMMONLY USED FR DATASETS FOR TRAINING

Fig. 17. A visualization of size and estimated noise percentage of datasets. [169]


Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation arXiv:1801.04381v2 [cs.CV] 16 Jan 2018

Figure 4: Comparison of convolutional blocks for different architectures. ShuffleNet uses Group Convolutions [19] and shuffling, it also uses conventional residual approach where inner blocks are narrower than output. ShuffleNet and NasNet illustrations are from respective papers.

YOLO-LITE: A Real-Time Object Detection Algorithm Optimized for Non-GPU Computers


  1. Rachel Huang* School of Electrical and Computer Engineering Georgia Institute of Technology Atlanta, United States rachuang22@gmail.com 
  2. Jonathan Pedoeem* Electrical Engineering The Cooper Union New York, United States pedoeem@cooper.edu 
  3. Cuixian Chen Mathematics and Statistics UNC Wilmington North Carolina, United States


OpenFace: A general-purpose face recognition library with mobile applications

  1. Brandon Amos, 
  2. Bartosz Ludwiczuk,
  3. Mahadev Satyanarayanan
  4. http://cmusatyalab.github.io/openface/

ArcFace: Additive Angular Margin Loss for Deep Face Recognition

  1. Jiankang Deng * Imperial College London j.deng16@imperial.ac.uk
  2. Jia Guo ∗ InsightFace guojia@gmail.com
  3. Niannan Xue Imperial College London n.xue15@imperial.ac.uk
  4. Stefanos Zafeiriou Imperial College London s.zafeiriou@imperial.ac.uk



  1. Decision margins



NN Keywords


OpenFace

Affine Transformation
Dataset — LFW: 13,233 face images / 5,749 people
  1. restricted: only matched/mismatched labels are available
  2. unrestricted: additional training pairs may also be used
  3. unsupervised: no training on LFW

./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights "nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink"
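The quoted GStreamer pipeline is easier to audit when built element by element. A sketch of the same string (caps values taken from the command above; nvcamerasrc assumes the TX2 onboard CSI camera):

```shell
# Rebuild the capture pipeline piece by piece (same caps as above).
P="nvcamerasrc"                                        # onboard CSI camera source
P="$P ! video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)I420, framerate=(fraction)30/1"
P="$P ! nvvidconv"                                     # copy frames out of NVMM memory
P="$P ! video/x-raw, format=(string)BGRx"
P="$P ! videoconvert"                                  # BGRx -> BGR, the layout OpenCV expects
P="$P ! video/x-raw, format=(string)BGR"
P="$P ! appsink"                                       # hand frames to the application
echo "$P"
```

The resulting string can be pasted as the quoted argument to darknet above.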

sudo nvpmodel -m 2
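nvpmodel mode numbering and names vary by L4T release, so it is worth querying the board before and after switching (a sketch; -q is the query flag, and the exact mode names should be checked against its output on your board):

```shell
sudo nvpmodel -q     # print the currently active power mode
sudo nvpmodel -m 2   # select mode 2 (on TX2, mode 0 is typically the max-performance MAXN profile)
```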