How do I fix AVAssetWriter producing video with inaudible audio on iOS, but not on other platforms?
I have an AVAssetWriter pipeline set up to record h264 video and AAC (LC) audio. The audio source produces 16-bit signed-integer mono PCM audio at 48 kHz, delivered in 10 ms intervals (480 samples, guaranteed).
The conversion format of the AVAssetWriter audio sink is set to AAC at 44.1 kHz, 64 kbit/s, mono:
var channelLayout = AudioChannelLayout.init()
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono
let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 64000,
    AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout.size(ofValue: channelLayout)),
]
I construct the audio CMSampleBuffer myself in Objective-C++ (the audio source is WebRTC, a C++ library). See the snippet below.
...
self.numberOfChannels = 1; // Use mono for the sake of brevity.
self.sampleRate = 48000;
self.numberOfSamples = 480; // Number of samples per 10ms
elapsedTimeMS = CACurrentMediaTime() * 1000; // CACurrentMediaTime() returns seconds
...
UInt32 sampleSize = sizeof(SInt16); // Data is 16-bit signed integer
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = self.sampleRate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = self.numberOfChannels;
audioFormat.mBitsPerChannel = 8 * sampleSize;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket = audioFormat.mChannelsPerFrame * sampleSize;
AudioChannelLayout acl;
memset(&acl, 0, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
CMFormatDescriptionRef sourceFormat = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat,
                                        sizeof(acl), &acl,
                                        0, NULL, // no magic cookie
                                        NULL,    // no extensions
                                        &sourceFormat);
if(status != 0) {
LOGEW << "Failed to create audio format description";
return nil;
}
...
// Setup mixed audio buffer
UInt32 bufferByteSize = XMAudioCapturerBufferSize * self.numberOfChannels;
if(audioBuffer == nullptr) {
audioBuffer = (uint8_t*)malloc(bufferByteSize);
memset(audioBuffer, 0, bufferByteSize);
}
// Try to store the (mixed) audio frame data (if any samples are available)
// Data is 16-bit signed integer PCM in 48000Hz
if (m_impl != nullptr && !m_impl->GetAudioFrame(audioBuffer)) {
return NO;
}
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = self.numberOfChannels;
bufferList.mBuffers[0].mDataByteSize = bufferByteSize;
bufferList.mBuffers[0].mData = audioBuffer;
CMSampleTimingInfo timing = { CMTimeMake(1, self.sampleRate), CMTimeMake(self.elapsedNumberOfSamples, self.sampleRate), kCMTimeInvalid };
//CMSampleTimingInfo timing = { CMTimeMake(1, self.sampleRate), CMTimeMake(elapsedTimeMS, 1000), kCMTimeInvalid };
CMSampleBufferRef buffer;
status = CMSampleBufferCreate(kCFAllocatorDefault,
                              NULL,       // dataBuffer, attached below
                              false,      // dataReady
                              NULL, NULL, // makeDataReadyCallback + refcon
                              sourceFormat,
                              (CMItemCount)self.numberOfSamples * self.numberOfChannels, // numSamples
                              1, &timing, // one timing entry
                              0, NULL,    // no sample size entries
                              &buffer);
if(status != 0) {
LOGEW << "Failed to allocate sample buffer";
return NO;
}
status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);
if(status != 0) {
LOGEW << "Failed to convert audio buffer list into sample buffer";
return NO;
}
...
CFRelease(buffer);
self.elapsedNumberOfSamples += self.numberOfSamples * self.numberOfChannels;
elapsedTimeMS += XMAudioCapturerAudioFrameSizeMS; // ++10ms
The CMSampleBuffer is handed to the AVAssetWriter audio sink via a delegate method, from a different thread than the video.
I have omitted the AVAssetWriter implementation for brevity. Worth mentioning: the first sample passed to the AVAssetWriter is a video sample, and the session is started at that video sample's timestamp. All audio samples are written starting from time "0", incremented by the amount of samples written every 10 ms.
I have attached a video recorded on an iPhone. The video does not play in QuickTime or on iOS, but it plays in every player I tested on Windows 10, macOS, and Linux/Android.
Sample file produced (no audio, at least on Darwin): https://drive.google.com/file/d/1pNHpu3fmgXCG2dmt4_yrVYr3oaR2B470/view?usp=sharing
Does anyone know why this is even an issue? My initial thought was that the AAC decoder on Darwin platforms is simply picky, but then again, wouldn't I see similar behavior on the other platforms?