ppic
Category: Pattern Recognition (vision/speech, etc.)
Development tool: C++
File size: 29KB
Downloads: 0
Upload date: 2018-07-28 06:10:47
Uploader: sh-1993
Description: Palmprint ID Center. Microservice for palmprint registration and verification.
File list:
.codecov.yml (568, 2018-07-26)
Dockerfiles (0, 2018-07-26)
Dockerfiles\Dockerfile.ci (703, 2018-07-26)
LICENSE (1063, 2018-07-26)
app (0, 2018-07-26)
app\CMakeLists.txt (1121, 2018-07-26)
app\common (0, 2018-07-26)
app\common\singleton.h (1789, 2018-07-26)
app\core (0, 2018-07-26)
app\db (0, 2018-07-26)
app\db\session_pool.cpp (4339, 2018-07-26)
app\db\session_pool.h (2947, 2018-07-26)
app\db\smart_session.cpp (1905, 2018-07-26)
app\db\smart_session.h (2055, 2018-07-26)
app\main.cpp (3281, 2018-07-26)
app\models (0, 2018-07-26)
app\models\palmprint.h (1435, 2018-07-26)
app\models\user.cpp (2719, 2018-07-26)
app\models\user.h (2656, 2018-07-26)
app\servicers (0, 2018-07-26)
configs (0, 2018-07-26)
configs\mysql (0, 2018-07-26)
configs\mysql\conf (0, 2018-07-26)
configs\mysql\conf\my.cnf (967, 2018-07-26)
manage.sh (6396, 2018-07-26)
tests (0, 2018-07-26)
tests\CMakeLists.txt (2118, 2018-07-26)
tests\main.cpp (1483, 2018-07-26)
tests\test_common (0, 2018-07-26)
tests\test_common\singleton_unittest.cpp (2234, 2018-07-26)
tests\test_db (0, 2018-07-26)
tests\test_db\session_pool_unittest.cpp (4026, 2018-07-26)
... ...
# ppic(Palmprint ID Center)
[![pipeline status](https://gitlab.com/leosocy/ppic/badges/master/pipeline.svg)](https://gitlab.com/leosocy/ppic/commits/master)
[![codecov](https://codecov.io/gh/PalmID/ppic/branch/master/graph/badge.svg)](https://codecov.io/gh/PalmID/ppic)
[![MIT licensed](https://img.shields.io/badge/license-MIT-green.svg)](https://raw.githubusercontent.com/PalmID/ppic/master/LICENSE)
## service
## optimize
### verification
Traversing the entire database to find the most similar palmprint is an extremely time-consuming operation.
We therefore need an algorithm that finds similar palmprints in far less time while preserving matching accuracy.
#### `Multilevel Clusterings Search`
There are `N` records in the database. A full traversal has time complexity `O(N)`.
Instead, group all records into `L1` clusters. For every cluster whose record count exceeds `x`, group its records into `L2` sub-clusters. Repeat until every leaf cluster holds no more than `x` records. A search then compares against one level of cluster centroids at a time and finally scans a single leaf, so the time complexity is `O(L1 + L2 + ... + Ln + x)`.
##### How to generate `Multilevel Clusterings`
##### How to search with `Multilevel Clusterings`