CuddleCam

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Dart
Description: A portable and automated baby monitoring system which uses computer vision techniques to notify the parents through a mobile app whenever a risky situation occurs.

File list:
CuddleCam_MobileApp/
RaspberryPi_codes/

# CuddleCam

A portable and automated baby monitoring system which uses computer vision techniques to notify the parents through a mobile app whenever a risky situation occurs.

CuddleCam is a portable baby monitoring system designed to give parents peace of mind by using machine learning. The embedded system consists of a Raspberry Pi and a webcam that monitor the baby. Using trained models, the system can detect the baby and distinguish it from animals such as cats and dogs, as well as from other people. The system alerts parents if the baby engages in risky behavior, such as falling or crying. A mobile app, developed with Flutter and Dart, provides access to the most recent 10 minutes of camera footage and also enables real-time video streaming over Wi-Fi. The project involves setting up the Raspberry Pi, training the computer vision models, building the mobile app, and implementing the video streaming capabilities in a step-by-step manner.
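
The repository description does not spell out how the most recent 10 minutes of footage are kept. A minimal sketch of one common approach is shown below, assuming hypothetical names (`FOOTAGE_DIR`, `SEGMENT_SECONDS`) and OpenCV's `VideoWriter`: record short timestamped clips and delete anything older than ten minutes.

```python
# Sketch only: segment length, directory and file naming are assumptions,
# not taken from RaspberryPi_codes/.
import glob
import os
import time

import cv2

SEGMENT_SECONDS = 60        # length of each recorded clip (assumption)
KEEP_SECONDS = 10 * 60      # retain only the last 10 minutes of footage
FOOTAGE_DIR = "footage"     # hypothetical output directory

def record_segment(cap, fps=20.0, size=(640, 480)):
    """Record one fixed-length clip from an already-open cv2.VideoCapture."""
    os.makedirs(FOOTAGE_DIR, exist_ok=True)
    path = os.path.join(FOOTAGE_DIR, f"{int(time.time())}.avi")
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"XVID"), fps, size)
    end = time.time() + SEGMENT_SECONDS
    while time.time() < end:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, size))
    writer.release()

def prune_old_segments():
    """Delete clips whose timestamped file name is older than KEEP_SECONDS."""
    cutoff = time.time() - KEEP_SECONDS
    for path in glob.glob(os.path.join(FOOTAGE_DIR, "*.avi")):
        if int(os.path.splitext(os.path.basename(path))[0]) < cutoff:
            os.remove(path)
```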

Architecture of the system:

[Architecture diagram of the CuddleCam system]

The baby is captured by the webcam, and the live video is sent to the Raspberry Pi, where computer vision techniques are used to identify objects, the baby's pose, and the baby's emotions. Whenever the program recognizes a risky movement or situation, a notification describing the situation is immediately sent to the mobile app through a Flask server. Both the Raspberry Pi and the mobile app are connected to the same Wi-Fi network, and the Raspberry Pi is given a static IP address so that the phone can find it on the network.
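
The actual Flask routes are not listed in this description. The sketch below assumes a hypothetical `/alert` endpoint and shows the general pattern: the Raspberry Pi keeps the latest alert in memory, and the mobile app, knowing the Pi's static IP address on the shared Wi-Fi network, fetches it over HTTP.

```python
# Hypothetical sketch of the notification side of the Flask server.
# The endpoint name, port and alert format are assumptions.
from datetime import datetime

from flask import Flask, jsonify

app = Flask(__name__)
latest_alert = None  # updated by the vision code when a risky situation is seen

def report_alert(kind, detail):
    """Called from the computer-vision code, e.g. report_alert('pose', 'baby rolled over')."""
    global latest_alert
    latest_alert = {"type": kind, "detail": detail,
                    "time": datetime.now().isoformat(timespec="seconds")}

@app.route("/alert")
def get_alert():
    """The mobile app polls http://<static-pi-ip>:5000/alert for the newest notification."""
    return jsonify(latest_alert or {})

if __name__ == "__main__":
    # Bind to all interfaces so a phone on the same Wi-Fi can reach the Pi's static IP.
    app.run(host="0.0.0.0", port=5000)
```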

Technologies:

For the image processing part on the Raspberry Pi, the following computer vision libraries are used:

- OpenCV
- MediaPipe
- TensorFlow Lite

OpenCV is used to identify the external camera connected to the Raspberry Pi, capture video from it, and perform image and video processing. The "ssd_mobilenet_v3_large_coco" model, trained on the COCO dataset, is used for object detection. MediaPipe is mainly used for landmark-based pose detection. TensorFlow Lite, the TensorFlow variant most compatible with lightweight systems such as the Raspberry Pi, is used for the emotion detection model trained on the FER2013 facial emotion recognition dataset. Notifications and video footage are sent to the mobile app using Flask over HTTP. Threads handle the parallel work in the embedded system, including reading the live video, running the computer vision models, saving the video footage, and communicating with the mobile phone. The mobile app is developed with Flutter and Dart, with the necessary dependencies and permissions, and uses HTTP requests to receive notifications and the most recent video footage.
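
A minimal sketch of how such a detection loop could be wired together is given below; the model file names, the confidence threshold, and the risky-situation rules are assumptions, not taken from RaspberryPi_codes/.

```python
# Rough sketch of the Raspberry Pi detection loop (hedged: file names and
# thresholds are assumptions).
import cv2
import numpy as np
import mediapipe as mp

# SSD MobileNet V3 (COCO) loaded through OpenCV's dnn module.
net = cv2.dnn_DetectionModel("frozen_inference_graph.pb",          # assumed file name
                             "ssd_mobilenet_v3_large_coco.pbtxt")  # assumed file name
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))
net.setInputSwapRB(True)

# MediaPipe landmark-based pose estimation.
pose = mp.solutions.pose.Pose(static_image_mode=False)

COCO_LABELS = {1: "person", 17: "cat", 18: "dog"}  # COCO class ids of interest

cap = cv2.VideoCapture(0)  # the external webcam identified by OpenCV
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # 1) Object detection: who/what is in the frame?
    class_ids, confidences, boxes = net.detect(frame, confThreshold=0.5)
    if len(class_ids) > 0:
        seen = {COCO_LABELS.get(int(c), "other") for c in np.array(class_ids).flatten()}
        # e.g. raise an alert if a cat or dog appears near the baby

    # 2) Pose landmarks of the baby (MediaPipe expects RGB input).
    result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.pose_landmarks:
        landmarks = result.pose_landmarks.landmark
        # ...risky-pose rules (rolled over, near an edge, etc.) would go here

cap.release()
```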
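
Similarly, a hedged sketch of running a FER2013-style emotion classifier with the TensorFlow Lite interpreter on a cropped face; the model file name, the 48x48 grayscale input size, and the label order are assumptions rather than details taken from the repository.

```python
# Sketch of emotion inference with TensorFlow Lite on the Raspberry Pi.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter  # or tensorflow.lite.Interpreter

# Typical FER2013 label order (assumption).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

interpreter = Interpreter(model_path="emotion_fer2013.tflite")  # assumed file name
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect_emotion(face_bgr):
    """Classify a cropped face; FER2013 models usually take 48x48 grayscale input."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (48, 48)).astype(np.float32) / 255.0
    tensor = resized.reshape(input_details[0]["shape"])  # e.g. (1, 48, 48, 1)
    interpreter.set_tensor(input_details[0]["index"], tensor)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    return EMOTIONS[int(np.argmax(scores))]
```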

Some UIs of the CuddleCam mobile application:

[Screenshots of the CuddleCam mobile application]

