I'm trying to write an OpenGL renderbuffer out to a video on the iPad 2 (using iOS 4.3). This is exactly what I'm attempting:
A) Set up an AVAssetWriterInputPixelBufferAdaptor
> Create an AVAssetWriter that points to a video file
> Set up an AVAssetWriterInput with appropriate settings
> Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file
B) Write data to the video file using that AVAssetWriterInputPixelBufferAdaptor
> Render OpenGL content to the screen
> Grab the OpenGL buffer via glReadPixels
> Create a CVPixelBufferRef from the OpenGL data
> Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method
However, I'm having problems doing this. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the adaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames, roughly as in the sketch below.
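The per-frame hook looks roughly like this (a minimal sketch; drawView and renderScene are stand-in names for my actual drawing code, and context is the view's EAGLContext):

- (void) drawView {
  [self renderScene];              // normal OpenGL ES drawing for this frame
  if (VIDEO_WRITER_IS_READY) {
    // Capture before presenting: once the renderbuffer has been presented,
    // its contents are no longer guaranteed to be readable.
    [self captureScreenVideo];
  }
  [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}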
Right now my code crashes as it tries to append the second pixel buffer, giving me the following error:
-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which works on normal images but isn't set up for textures):
- (void) testVideoWriter {
  // Initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5);
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  // Remove any previous movie at the same path
  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                         [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                             sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];
  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];
  VIDEO_WRITER_IS_READY = true;
}
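A minimal sanity check right after starting the session (using the same videoWriter ivar as above) can confirm the writer actually started:

if (videoWriter.status == AVAssetWriterStatusFailed) {
  NSLog(@"AVAssetWriter failed to start: %@", videoWriter.error);
}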
OK, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:
- (void) captureScreenVideo {
  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, NULL, NULL, &pixel_buffer);

  /* DON'T FREE THIS BEFORE USING pixel_buffer! */
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;
  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];
    [self moveVideoToSavedPhotos];
  }
}
Finally, I move the video to the camera roll:
- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSURL *fileURL = [NSURL fileURLWithPath:localVid];
  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}
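ALAssetsLibrary can also verify the file before the write is attempted, which helps catch malformed output early. An optional check inside moveVideoToSavedPhotos, just before the write, might look like:

if (![library videoAtPathIsCompatibleWithSavedPhotosAlbum:fileURL]) {
  NSLog(@"Video at %@ is not compatible with the saved photos album", fileURL);
}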
However, as I said, my code crashes in the call to appendPixelBuffer.
Sorry to send so much code, but I really don't know what I'm doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I'm unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL -> video, that would be amazing... Thanks!
Solution
I first set up the movie to be recorded:
NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}

NSMutableDictionary *outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
[outputSettings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];

assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                       [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];
and then use this code to grab each rendered frame using glReadPixels():
CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);

    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);

if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime])
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}
else
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);
One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the basis provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after a failure to retrieve one from the pool, it would abort the recording. Hence the early bailout in the code above.
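A minimal guard against that duplicate-timestamp failure, assuming a previousFrameTime instance variable initialized to kCMTimeInvalid, might look like this just before the append:

// Skip any frame whose presentation time does not advance past the previously
// appended frame; appending two identical times aborts the recording.
if (CMTIME_IS_VALID(previousFrameTime) && CMTimeCompare(currentTime, previousFrameTime) <= 0)
{
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
    return;
}
previousFrameTime = currentTime;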
In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
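The swizzling shader itself boils down to a one-line fragment shader along these lines (an illustrative sketch, not the exact GPUImage source):

static NSString *const kColorSwizzlingFragmentShader =
    @"varying highp vec2 textureCoordinate;\n"
    @"uniform sampler2D inputImageTexture;\n"
    @"void main()\n"
    @"{\n"
    // Swap red and blue so that glReadPixels' GL_RGBA readout lands in
    // memory in the BGRA byte order the movie input expects.
    @"    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
    @"}\n";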
Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.