/** @file covdet.c
 ** @brief Covariant feature detectors - Definition
 ** @author Karel Lenc
 ** @author Andrea Vedaldi
 ** @author Michal Perdoch
 **/

/*
Copyright (C) 2013-14 Andrea Vedaldi.
Copyright (C) 2012 Karel Lenc, Andrea Vedaldi and Michal Perdoch.
All rights reserved.

This file is part of the VLFeat library and is made available under
the terms of the BSD license (see the COPYING file).
*/

/**
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
@page covdet Covariant feature detectors
@author Karel Lenc
@author Andrea Vedaldi
@author Michal Perdoch
@tableofcontents
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->

@ref covdet.h implements a number of covariant feature detectors based
on three cornerness measures: the determinant of the Hessian, the
trace of the Hessian (also known as the Difference of Gaussians), and
the Harris measure. It supports affine adaptation, orientation
estimation, and Laplacian scale detection.

- @subpage covdet-fundamentals
- @subpage covdet-principles
- @subpage covdet-differential
- @subpage covdet-corner-types

<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
@section covdet-starting Getting started
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->

The ::VlCovDet object implements a number of covariant feature
detectors: Difference of Gaussians, Harris, and determinant of the
Hessian. Variants of the basic detectors support scale selection by
maximizing the Laplacian measure as well as affine normalization.
@code
// create a detector object
VlCovDet * covdet = vl_covdet_new(method) ;

// set various parameters (optional)
vl_covdet_set_first_octave(covdet, -1) ; // start by doubling the image resolution
vl_covdet_set_octave_resolution(covdet, octaveResolution) ;
vl_covdet_set_peak_threshold(covdet, peakThreshold) ;
vl_covdet_set_edge_threshold(covdet, edgeThreshold) ;

// process the image and run the detector
vl_covdet_put_image(covdet, image, numRows, numCols) ;
vl_covdet_detect(covdet) ;

// drop features on the margin (optional)
vl_covdet_drop_features_outside (covdet, boundaryMargin) ;

// compute the affine shape of the features (optional)
vl_covdet_extract_affine_shape(covdet) ;

// compute the orientation of the features (optional)
vl_covdet_extract_orientations(covdet) ;

// get feature frames back
vl_size numFeatures = vl_covdet_get_num_features(covdet) ;
VlCovDetFeature const * feature = vl_covdet_get_features(covdet) ;

// get normalized feature appearance patches (optional)
vl_size w = 2*patchResolution + 1 ;
vl_index i ;
for (i = 0 ; i < (signed)numFeatures ; ++i) {
  float * patch = malloc(w*w*sizeof(*patch)) ;
  vl_covdet_extract_patch_for_frame(covdet,
                                    patch,
                                    patchResolution,
                                    patchRelativeExtent,
                                    patchRelativeSmoothing,
                                    feature[i].frame) ;
  // do something with patch
  free(patch) ;
}
@endcode

This example code:

- Calls ::vl_covdet_new to construct a new detector object. @ref
  covdet.h supports a variety of different detectors (see
  ::VlCovDetMethod).

- Optionally calls various functions to set the detector parameters if
  needed (e.g. ::vl_covdet_set_peak_threshold).

- Calls ::vl_covdet_put_image to start processing a new image. This
  causes the detector to compute the scale space representation of the
  image, but does not compute the features yet.

- Calls ::vl_covdet_detect to run the detector. At this point features
  are ready to be extracted. However, one or all of the following
  steps may be executed in order to process the features further.
- Optionally calls ::vl_covdet_drop_features_outside to drop features
  outside the image boundary.

- Optionally calls ::vl_covdet_extract_affine_shape to compute the
  affine shape of the features using affine adaptation.

- Optionally calls ::vl_covdet_extract_orientations to compute the
  dominant orientation of each feature by looking for the dominant
  gradient orientation in its patch.

- Optionally calls ::vl_covdet_extract_patch_for_frame to extract a
  normalized feature patch, for example to compute an invariant
  feature descriptor.

<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
@page covdet-fundamentals Covariant detectors fundamentals
@tableofcontents
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->

This page describes the fundamental concepts required to understand a
covariant feature detector: the geometry of covariant features and the
process of feature normalization.

<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
@section covdet-covariance Covariant detection
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->

The purpose of a *covariant detector* is to extract from an image a
set of local features in a manner which is consistent with spatial
transformations of the image itself. For instance, a covariant
detector that extracts interest points $\bx_1,\dots,\bx_n$ from an
image $\ell$ extracts the translated points $\bx_1+T,\dots,\bx_n+T$
from the translated image $\ell'(\bx) = \ell(\bx-T)$.

More generally, consider an image $\ell$ and a transformed version
$\ell' = \ell \circ w^{-1}$ of it, as in the following figure:

@image html covdet.png "Covariant detection of local features."

The transformation or <em>warp</em> $w : \real^2 \mapsto \real^2$ is a
deformation of the image domain which may capture a change of camera
viewpoint or a similar imaging factor.
Examples of warps typically considered are translations, scalings,
rotations, and general affine transformations; however, $w$ could be
any other type of continuous and invertible transformation.

Given an image $\ell$, a **detector** selects features
$R_1,\dots,R_n$ (one such feature is shown in the example as a green
circle). The detector is said to be **covariant** with the warps $w$
if it extracts the transformed features $w[R_1],\dots,w[R_n]$ from the
transformed image $w[\ell]$. Intuitively, this means that the
&ldquo;same features&rdquo; are extracted in both cases up to the
transformation $w$. This property is described more formally in @ref
covdet-principles.

Covariance is a key property of local feature detectors as it allows
extracting corresponding features from two or more images, making it
possible to match them in a meaningful way. The @ref covdet.h module
in VLFeat implements an array of feature detection algorithms that are
covariant to different classes of transformations.

<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
@section covdet-frame Feature geometry and feature frames
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->

As we have seen, local features are subject to image transformations,
and they play a fundamental role in matching and normalizing images.
To operate effectively with local features, it is therefore necessary
to understand their geometry.

The geometry of a local feature is captured by a <b>feature frame</b>
$R$. In VLFeat, depending on the specific detector, the frame can be a
point, a disc, an ellipse, an oriented disc, or an oriented ellipse. A
frame captures both the extent of the local feature, useful to know
which portions of two images are put in correspondence, as well as its
shape. The latter can be used to diagnose the transformation that
affects a feature and to remove it through the process of
**normalization**.
More precisely, in covariant detection feature frames are constructed
to be compatible with a certain class of transformations. For example,
circles are compatible with similarity transformations, as they are
closed under them. Likewise, ellipses are compatible with affine
transformations.

Beyond this closure property, the key idea here is that all feature
occurrences can be seen as transformed versions of a base or
<b>canonical</b> feature. For example, all discs $R$ can be obtained by