neural-network_based-face-detection.zip

  • Author: chenzhiyuan
  • Development tool: matlab
  • File size: 5.5MB
  • File format: zip
  • Favorites: 0
  • Download cost: 1 point
  • Downloads: 6
  • Upload date: 2013-12-11 14:24
neural network_based face detection: a paper on face detection using neural networks
neural-network_based-face-detection.zip
  • neural network_based face detection.pdf
    5.9MB
Description
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 20, NO. 1, JANUARY 1998, p. 23

Neural Network-Based Face Detection

Henry A. Rowley, Student Member, IEEE, Shumeet Baluja, and Takeo Kanade, Fellow, IEEE

Abstract—We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates.

Index Terms—Face detection, pattern recognition, computer vision, artificial neural networks, machine learning.

1 INTRODUCTION

In this paper, we present a neural network-based algorithm to detect upright, frontal views of faces in gray-scale images.[1] The algorithm works by applying one or more neural networks directly to portions of the input image and arbitrating their results. Each network is trained to output the presence or absence of a face. The algorithms and training methods are designed to be general, with little customization for faces.

Many face detection researchers have used the idea that facial images can be characterized directly in terms of pixel intensities. These images can be characterized by probabilistic models of the set of face images [4], [13], [15] or implicitly by neural networks or other mechanisms [3], [12], [14], [19], [21], [23], [25], [26]. The parameters for these models are adjusted either automatically from example images (as in our work) or by hand. A few authors have taken the approach of extracting features and applying either manually or automatically generated rules for evaluating these features [7], [11].

Training a neural network for the face detection task is challenging because of the difficulty in characterizing prototypical "nonface" images. Unlike face recognition, in which the classes to be discriminated are different faces, the two classes to be discriminated in face detection are "images containing faces" and "images not containing faces." It is easy to get a representative sample of images which contain faces, but much harder to get a representative sample of those which do not. We avoid the problem of using a huge training set for nonfaces by selectively adding images to the training set as training progresses [21]. This "bootstrap" method reduces the size of the training set needed. The use of arbitration between multiple networks and heuristics to clean up the results significantly improves the accuracy of the detector.

Detailed descriptions of the example collection and training methods, network architecture, and arbitration methods are given in Section 2. In Section 3, the performance of the system is examined. We find that the system is able to detect 90.5 percent of the faces over a test set of 130 complex images, with an acceptable number of false positives. Section 4 briefly discusses some techniques that can be used to make the system run faster, and Section 5 compares this system with similar systems. Conclusions and directions for future research are presented in Section 6.

2 DESCRIPTION OF THE SYSTEM

Our system operates in two stages: It first applies a set of neural network-based filters to an image and then uses an arbitrator to combine the outputs. The filters examine each location in the image at several scales, looking for locations that might contain a face. The arbitrator then merges detections from individual filters and eliminates overlapping detections.

2.1 Stage One: A Neural Network-Based Filter

The first component of our system is a filter that receives as input a 20 × 20 pixel region of the image and generates an output ranging from 1 to -1, signifying the presence or absence of a face, respectively. To detect faces anywhere in the input, the filter is applied at every location in the image. To detect faces larger than the window size, the input image is repeatedly reduced in size (by subsampling), and the filter is applied at each size. This filter must have some invariance to position and scale. The amount of invariance determines the number of scales and positions at which it must be applied. For the work presented here, we apply the filter at every

1. An interactive demonstration of the system is available on the World Wide Web at http://www.cs.cmu.edu~har/faces.html, which allows anyone to submit images for processing by the face detector, and to see the detection results for pictures submitted by other people.

• H.A. Rowley and T. Kanade are with the Department of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. E-mail: {har, tk}@cs.cmu.edu.
• S. Baluja is with the Justsystem Pittsburgh Research Center, 4616 Henry Street, Pittsburgh, PA 15213 and is also associated with the Department of Computer Science and the Robotics Institute at Carnegie Mellon University. E-mail: baluja@jprc.com.

Manuscript received 6 May 1996; revised 9 Oct. 1997. Recommended for acceptance by R.W. Picard.
For information on obtaining reprints of this article, please send e-mail to: tpami@computer.org, and reference IEEECS Log Number 105873.

0162-8828/98/$10.00 © 1998 IEEE
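The two-stage pipeline described on this page (a 20 × 20 filter slid over every position of a repeatedly subsampled image pyramid, followed by an arbitration step that prunes overlapping detections) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the neural network itself is abstracted as a `classify` callback returning a score in [-1, 1], and `merge_detections` uses a simple greedy overlap test as a stand-in for the paper's multi-network arbitration heuristics.

```python
import numpy as np

def image_pyramid(img, scale=1.2, min_size=20):
    """Yield (level, factor) pairs: progressively subsampled copies of a
    grayscale image, until the image is smaller than the filter window."""
    current = img.astype(float)
    factor = 1.0
    while min(current.shape) >= min_size:
        yield current, factor
        h, w = current.shape
        new_h, new_w = int(h / scale), int(w / scale)
        if min(new_h, new_w) < 1:
            break
        # Crude nearest-neighbor subsampling; a real system would
        # low-pass filter first to reduce aliasing.
        rows = (np.arange(new_h) * scale).astype(int)
        cols = (np.arange(new_w) * scale).astype(int)
        current = current[np.ix_(rows, cols)]
        factor *= scale

def detect_faces(img, classify, window=20, step=2, threshold=0.0):
    """Slide a window x window filter over every pyramid level.
    `classify` maps a window to a score in [-1, 1]; scores above
    `threshold` become detections (x, y, size, score) expressed in
    original-image coordinates."""
    detections = []
    for level, factor in image_pyramid(img, min_size=window):
        h, w = level.shape
        for y in range(0, h - window + 1, step):
            for x in range(0, w - window + 1, step):
                score = classify(level[y:y + window, x:x + window])
                if score > threshold:
                    detections.append((int(x * factor), int(y * factor),
                                       int(window * factor), score))
    return detections

def merge_detections(detections, overlap_thresh=0.5):
    """Greedy overlap pruning: keep the best-scoring detection, drop any
    that substantially overlap it, repeat (a simple stand-in for the
    paper's heuristic that true faces rarely overlap)."""
    def iou(a, b):
        ax, ay, asz, _ = a
        bx, by, bsz, _ = b
        x1, y1 = max(ax, bx), max(ay, by)
        x2 = min(ax + asz, bx + bsz)
        y2 = min(ay + asz, by + bsz)
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        return inter / (asz * asz + bsz * bsz - inter)
    kept = []
    for d in sorted(detections, key=lambda d: -d[3]):
        if all(iou(d, k) < overlap_thresh for k in kept):
            kept.append(d)
    return kept
```

Substituting a trivial brightness test for `classify` shows the mechanics: a bright 20 × 20 patch triggers many overlapping window hits, which the merge step collapses to a few.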
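The "bootstrap" negative-collection loop mentioned in the abstract can likewise be sketched in outline: train on the current negative set, run the detector over face-free scenery images, and fold every (necessarily false) detection back into the training set. Here `train_fn`, `detect_fn`, and `scenery_images` are hypothetical placeholders for the real training routine, detector, and scenery collection; only the loop structure comes from the paper.

```python
def bootstrap_negatives(train_fn, detect_fn, scenery_images, rounds=3):
    """Bootstrap collection of nonface examples.
    train_fn(negatives) -> model: (re)train using the current negatives.
    detect_fn(model, img) -> list of detections on a face-free image;
    each one is a false positive, so it joins the negative set."""
    negatives = []
    for _ in range(rounds):
        model = train_fn(negatives)
        for img in scenery_images:
            # Every hit on a face-free image is a false detection.
            negatives.extend(detect_fn(model, img))
    return negatives
```

The point of the loop is that the negative set grows only with examples the current network actually mistakes for faces, so it stays far smaller than an exhaustive sample of nonface images.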