RTSP-Server-iOS

Category: Streaming media/Mpeg4/MP4
Development tool: Objective-C++
File size: 47KB
Downloads: 0
Upload date: 2017-07-14 11:02:31
Uploader: sh-1993
Description: RTSP Server iOS is a basic app that uses an FFmpeg wrapper for iOS, based on the Hardware Video Encoding on iPhone example.

File list:
Encoder Demo.xcodeproj (0, 2017-07-14)
Encoder Demo.xcodeproj\project.pbxproj (21378, 2017-07-14)
Encoder Demo (0, 2017-07-14)
Encoder Demo\AVEncoder.h (876, 2017-07-14)
Encoder Demo\AVEncoder.mm (16935, 2017-07-14)
Encoder Demo\CameraServer.h (619, 2017-07-14)
Encoder Demo\CameraServer.m (3393, 2017-07-14)
Encoder Demo\Default-568h@2x.png (18594, 2017-07-14)
Encoder Demo\Default.png (6540, 2017-07-14)
Encoder Demo\Default@2x.png (16107, 2017-07-14)
Encoder Demo\Encoder Demo-Info.plist (1606, 2017-07-14)
Encoder Demo\Encoder Demo-Prefix.pch (327, 2017-07-14)
Encoder Demo\EncoderDemoAppDelegate.h (315, 2017-07-14)
Encoder Demo\EncoderDemoAppDelegate.m (2360, 2017-07-14)
Encoder Demo\EncoderDemoViewController.h (399, 2017-07-14)
Encoder Demo\EncoderDemoViewController.m (1309, 2017-07-14)
Encoder Demo\MP4Atom.h (779, 2017-07-14)
Encoder Demo\MP4Atom.m (2484, 2017-07-14)
Encoder Demo\NALUnit.cpp (13021, 2017-07-14)
Encoder Demo\NALUnit.h (5458, 2017-07-14)
Encoder Demo\RTSPClientConnection.h (443, 2017-07-14)
Encoder Demo\RTSPClientConnection.mm (18046, 2017-07-14)
Encoder Demo\RTSPMessage.h (446, 2017-07-14)
Encoder Demo\RTSPMessage.m (2006, 2017-07-14)
Encoder Demo\RTSPServer.h (591, 2017-07-14)
Encoder Demo\RTSPServer.m (4290, 2017-07-14)
Encoder Demo\VideoEncoder.h (798, 2017-07-14)
Encoder Demo\VideoEncoder.m (2400, 2017-07-14)
Encoder Demo\en.lproj (0, 2017-07-14)
Encoder Demo\en.lproj\InfoPlist.strings (45, 2017-07-14)
Encoder Demo\en.lproj\MainStoryboard_iPad.storyboard (4055, 2017-07-14)
Encoder Demo\en.lproj\MainStoryboard_iPhone.storyboard (4261, 2017-07-14)
Encoder Demo\main.m (372, 2017-07-14)
LICENSE (1081, 2017-07-14)

## RTSP-Server-iOS

This repository contains a basic RTSP server using an FFmpeg wrapper for iOS, based on the [Hardware Video Encoding on iPhone - RTSP Server example][1].

### Disclaimer

This repository contains sample code intended to demonstrate the capabilities of FFmpeg as a camera recorder. It is not intended to be used as-is in applications as a library dependency, and will not be maintained as such. Bug-fix contributions are welcome, but issues and feature requests will not be addressed.

### Example Contents

This sample code takes the following approach to the problem:

- Only video is written using the `AVAssetWriter` instance; otherwise it would be impossible to distinguish video from audio in the `mdat` atom.
- Initially, I create two `AVAssetWriter` instances. The first frame is written to both, and then one instance is closed. Once the `moov` atom has been written to that file, I parse the file and assume that the parameters apply to both instances, since the initial conditions were the same.
- Once I have the parameters, I use a `dispatch_source` object to trigger reads from the file whenever new data is written. The body of the `mdat` chunk consists of H264 NALUs, each preceded by a length field. Although the length of the `mdat` chunk is not known, we can safely assume that it will continue to the end of the file (until we finish the output file and the `moov` is added).
- For RTP delivery of the data, we group the NALUs into frames by parsing the NALU headers. Since there are no AUDs marking the frame boundaries, this requires looking at several different elements of the NALU header.
- Timestamps arrive with the uncompressed frames from the camera and are stored in a FIFO. These timestamps are applied to the compressed frames in the same order. Fortunately, the `AVAssetWriter` live encoder does not require re-ordering of frames. Update: this is no longer true, and I now have a version that supports re-ordered frames.
- When the file gets too large, a new instance of `AVAssetWriter` is used so that the old temporary file can be deleted. The transition code must then wait for the old instance to be closed so that the remaining NALUs can be read from the `mdat` atom without reading past the end of that atom into the subsequent metadata. Finally, the new file is opened and timestamps are adjusted. The resulting compressed output is seamless.

A little experimentation suggests that compressed frames can be read from the file about 500 ms after they are captured, and that these frames then arrive at the client app around 200 ms after that.

Minimal Objective-C sketches of the atom walking, file watching, and NALU framing described above appear at the end of this README.

## Credits

* [Hardware Video Encoding on iPhone][1]
* [FFmpeg][2]

### Pre-requisites

- FFmpeg 3.3
- Xcode 8.3.2

## License

The code supplied here is covered under the MIT Open Source License.

[1]: http://www.gdcl.co.uk/2013/02/20/iOS-Video-Encoding.html
[2]: https://www.ffmpeg.org/
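
The following is a minimal sketch of the "parse the file" step described above: walking the top-level MP4 atoms (an 8-byte header of 32-bit big-endian size plus a 4-character type) to find where the `mdat` payload begins. The function name is illustrative and this is not the repository's `MP4Atom` class; while `AVAssetWriter` is still recording, `mdat` is the last top-level atom and its real length is unknown, so a reader simply keeps consuming to the end of the file.

```objc
// Sketch only: walk top-level atoms and return the file offset of the mdat body.
#import <Foundation/Foundation.h>
#include <string.h>

static off_t findMdatPayload(NSFileHandle *file)
{
    unsigned long long offset = 0;
    for (;;) {
        [file seekToFileOffset:offset];
        NSData *header = [file readDataOfLength:8];
        if (header.length < 8) return -1;                    // header not written yet
        const uint8_t *h = (const uint8_t *)header.bytes;
        uint32_t size = ((uint32_t)h[0] << 24) | ((uint32_t)h[1] << 16) |
                        ((uint32_t)h[2] <<  8) |  (uint32_t)h[3];
        if (memcmp(h + 4, "mdat", 4) == 0)
            return (off_t)(offset + 8);                      // payload begins after the header
        if (size < 8) return -1;                             // 0 / extended sizes not handled in this sketch
        offset += size;                                      // skip to the next top-level atom
    }
}
```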
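
Next, a minimal sketch of the `dispatch_source` technique: watch the temporary movie file and hand every newly appended byte range to a caller-supplied block. The names `watchEncoderFile` and `onBytes` are assumptions for illustration; the repository's `AVEncoder.mm` is the real implementation.

```objc
// Sketch only: fire a block whenever AVAssetWriter appends data to the temp file.
#import <Foundation/Foundation.h>
#include <fcntl.h>
#include <unistd.h>

static dispatch_source_t watchEncoderFile(NSString *path,
                                          void (^onBytes)(NSData *chunk))
{
    int fd = open(path.fileSystemRepresentation, O_RDONLY);
    if (fd < 0) return nil;

    __block off_t readPos = 0;   // next unread byte in the growing file
    dispatch_queue_t q = dispatch_queue_create("encoder.read", DISPATCH_QUEUE_SERIAL);
    dispatch_source_t src = dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE, (uintptr_t)fd,
                                                   DISPATCH_VNODE_WRITE | DISPATCH_VNODE_EXTEND, q);
    dispatch_source_set_event_handler(src, ^{
        // The writer appended data: read everything between readPos and EOF.
        off_t end = lseek(fd, 0, SEEK_END);
        if (end <= readPos) return;
        NSMutableData *chunk = [NSMutableData dataWithLength:(NSUInteger)(end - readPos)];
        ssize_t got = pread(fd, chunk.mutableBytes, chunk.length, readPos);
        if (got <= 0) return;
        chunk.length = (NSUInteger)got;
        readPos += got;
        onBytes(chunk);                   // e.g. append to the NALU parser's buffer
    });
    dispatch_source_set_cancel_handler(src, ^{ close(fd); });
    dispatch_resume(src);
    return src;
}
```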
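
Finally, a minimal sketch of splitting the `mdat` payload into NALUs. `AVAssetWriter` stores H264 in length-prefixed form, each NALU preceded by a 4-byte big-endian length. This only shows the framing; grouping NALUs into frames without AUDs (by inspecting slice headers) is what the repository's `NALUnit.cpp` actually handles, and the function name here is illustrative.

```objc
// Sketch only: enumerate length-prefixed NALUs in a chunk of the mdat body.
#import <Foundation/Foundation.h>

// Returns the number of bytes consumed so the caller can keep the unconsumed
// tail and retry once more data has been read from the file.
static size_t enumerateNALUs(NSData *mdatBody,
                             void (^handler)(const uint8_t *nalu, size_t length, int nalType))
{
    const uint8_t *p = (const uint8_t *)mdatBody.bytes;
    size_t remaining = mdatBody.length;
    size_t consumed = 0;
    while (remaining >= 4) {
        uint32_t len = ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                       ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
        if (len == 0 || len > remaining - 4) break;      // incomplete NALU: wait for more data
        const uint8_t *nalu = p + 4;
        int nalType = nalu[0] & 0x1F;                    // 1 = non-IDR slice, 5 = IDR slice, etc.
        handler(nalu, len, nalType);
        p += 4 + len;
        remaining -= 4 + len;
        consumed += 4 + len;
    }
    return consumed;
}
```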
