wear-a-mask: a single-page application that performs deep-learning-based facial landmark detection on images using only the front end and automatically adds breathing mask stickers

  • Author: k7_287468
  • File size: 17.2 MB
  • File format: zip
  • Favorites: 0
  • Resource type: VIP exclusive
  • Downloads: 0
  • Upload date: 2022-06-14 07:33
wear-a-mask-master.zip
Content introduction
<p align="center"><img width="400" src="https://raw.githubusercontent.com/zamhown/wear-a-mask/master/assets/logo-title-en.png" alt="logo"></p>

# Wear a Mask on Your Avatar

A single-page application that uses only the front end to perform deep-learning-based facial landmark detection on images and automatically adds breathing mask stickers.

**Wear a mask on your SNS avatar and make more people aware of epidemic diseases and public health!**

Application link: [https://zamhown.github.io/wear-a-mask](https://zamhown.github.io/wear-a-mask)

Application link (Chinese version): [https://zamhown.gitee.io/wear-a-mask](https://zamhown.gitee.io/wear-a-mask)

[README in Chinese](https://github.com/zamhown/wear-a-mask/blob/master/readme/README-chs.md)

## Usage

After the user uploads his or her avatar, the page automatically detects the face in the picture and identifies the key points needed to match the most suitable mask sticker. The user can then change the sticker's position, size, rotation angle, and flip in a canvas-based editor, and finally export the modified avatar. The entire process runs in the front end, so the picture never needs to be uploaded to a server.

Usage case screenshot:

![example](https://raw.githubusercontent.com/zamhown/wear-a-mask/master/assets/example-en.jpg)

## Face Detection and Facial Landmark Detection

The project uses [face-api.js](https://github.com/justadudewhohacks/face-api.js), which is built on [TensorFlow.js](https://github.com/tensorflow/tfjs). The face detection task uses the SSD MobileNet V1 model (trained on the [WIDERFACE dataset](http://mmlab.ie.cuhk.edu.hk/projects/WIDERFace)), and the facial landmark detection task uses a 68-point CNN-based detection model built by the author of face-api.js (its training dataset contains about 35,000 facial images). The models' weight data comes from face-api.js. (A minimal detection sketch is given after this README.)

## Automatic Selection and Positioning of Mask Stickers

The project contains several mask sticker images along with data for each mask. Three key points were marked on each mask sticker (upper left corner, upper right corner, and bottom of the chin). After detecting the landmarks on the user's avatar, the mask sticker that best matches the face shape can be selected automatically based on these data. After the corresponding geometric transformation is calculated, the sticker image is placed in the appropriate position on the avatar. (A positioning sketch is given after this README.)

![mask example](https://raw.githubusercontent.com/zamhown/wear-a-mask/master/assets/mask-example.png)

## Image Editor with Sticker Editing Function

The image editor for this project is implemented with canvas, based on the npm package [xl_canvas](https://www.npmjs.com/package/xl_canvas). Because the package could not be used directly, it was heavily modified: features such as flip, touch support, and export at the original resolution were added before it was integrated into the project. (A sketch of the flip and export operations is given after this README.)

## Commands

### Project setup

```
npm install
```

### Compiles and hot-reloads for development

```
npm run serve
```

### Compiles and minifies for production

Run `build.bat`.

---

Have fun!
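The detection pipeline from the "Face Detection and Facial Landmark Detection" section boils down to a handful of face-api.js calls. Below is a minimal sketch, not code from this repository; it assumes the model weight files are served from a `/models` directory and that the uploaded avatar is already attached to an `<img>` element with the id `avatar`:

```js
import * as faceapi from 'face-api.js'

async function detectAvatarLandmarks() {
  // Load the SSD MobileNet V1 face detector and the 68-point landmark model.
  // The '/models' path is an assumption about where the weights are hosted.
  await faceapi.nets.ssdMobilenetv1.loadFromUri('/models')
  await faceapi.nets.faceLandmark68Net.loadFromUri('/models')

  const input = document.getElementById('avatar')
  // Detect the most prominent face and its 68 facial landmarks.
  const result = await faceapi.detectSingleFace(input).withFaceLandmarks()
  if (!result) return null
  return result.landmarks
}
```

The returned landmarks object exposes helpers such as `getJawOutline()`, which gives the jaw points that a positioning step can work from.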
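The "Automatic Selection and Positioning of Mask Stickers" section describes computing a geometric transformation from the detected landmarks to the sticker's three anchor points. The following is an illustrative sketch of one such transform, not the project's actual algorithm; the helper names and the use of jaw points 0, 16, and 8 from the 68-point model are assumptions:

```js
// Sketch: derive a sticker transform from the detected jaw landmarks.
// leftJaw / rightJaw / chin are landmark points (assumed indices 0, 16, 8);
// maskWidth is the sticker width between its two upper anchor points.
function computeMaskTransform(leftJaw, rightJaw, chin, maskWidth) {
  const dx = rightJaw.x - leftJaw.x
  const dy = rightJaw.y - leftJaw.y
  const faceWidth = Math.hypot(dx, dy)
  return {
    scale: faceWidth / maskWidth,             // resize sticker to the face width
    rotation: Math.atan2(dy, dx),             // align sticker with the jaw line
    x: (leftJaw.x + rightJaw.x) / 2,          // center between the jaw corners
    y: (leftJaw.y + rightJaw.y + chin.y) / 3  // rough vertical placement
  }
}

// Draw the sticker onto a canvas context using that transform.
function drawMask(ctx, maskImage, t) {
  ctx.save()
  ctx.translate(t.x, t.y)
  ctx.rotate(t.rotation)
  ctx.scale(t.scale, t.scale)
  ctx.drawImage(maskImage, -maskImage.width / 2, -maskImage.height / 2)
  ctx.restore()
}
```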
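The flip and original-resolution export features mentioned in the "Image Editor" section correspond to standard Canvas 2D operations. A rough sketch of the idea, not taken from the modified xl_canvas code:

```js
// Horizontal flip: mirror the drawing context before painting the sticker.
function drawFlipped(ctx, image, x, y, width, height) {
  ctx.save()
  ctx.translate(x + width, y)
  ctx.scale(-1, 1)
  ctx.drawImage(image, 0, 0, width, height)
  ctx.restore()
}

// Export at the original resolution: redraw onto an offscreen canvas sized
// to the source image instead of the on-screen editor canvas.
function exportOriginalResolution(sourceImage, drawOverlay) {
  const canvas = document.createElement('canvas')
  canvas.width = sourceImage.naturalWidth
  canvas.height = sourceImage.naturalHeight
  const ctx = canvas.getContext('2d')
  ctx.drawImage(sourceImage, 0, 0)
  drawOverlay(ctx)                      // caller redraws the sticker, scaled up
  return canvas.toDataURL('image/png')  // data URL suitable for download
}
```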