ios – Why isn't my image updating when I call it from the capture output protocol?

I want to do something very simple: show the video layer full screen, and once per second update a UIImage with the CMSampleBufferRef obtained at that moment. However, I am running into two different problems. The first is that changing:

[connection setVideoMaxFrameDuration:CMTimeMake(1,1)];
[connection setVideoMinFrameDuration:CMTimeMake(1,1)];

also modifies the video preview layer. I thought it would only change the rate at which AV Foundation delivers frames to the delegate, but it seems to affect the entire session (and quite visibly so), so my video preview now updates only once per second. I suppose I could drop these lines and instead add a timer in the delegate so that the CMSampleBufferRef is passed to another method for processing once per second, but I don't know whether that is the right approach (a sketch of that idea follows).
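
For reference, a minimal sketch of that timer-based alternative, assuming an atomic (retained) UIImage property named latestImage and an NSTimer property named updateTimer are added to the view controller; both names are hypothetical and not part of the original code:

// Sketch only: `latestImage` must be atomic, because it is written on the
// sample buffer queue and read on the main thread.

// At the end of viewDidLoad, after [session startRunning]:
self.updateTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                    target:self
                                                  selector:@selector(refreshImageView:)
                                                  userInfo:nil
                                                   repeats:YES];

// The delegate keeps receiving frames at the normal rate and only remembers
// the newest one; no frame duration settings are touched.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    self.latestImage = [self imageFromSampleBuffer:sampleBuffer];
}

// NSTimer fires on the thread that scheduled it (the main thread here),
// so it is safe to update the UI directly.
- (void)refreshImageView:(NSTimer *)timer {
    [imageView setImage:self.latestImage];
}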

My second problem is that the UIImageView is not updating, or sometimes it updates only once and never changes afterwards. I am using this method to update it:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    //NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer] ;
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    [imageView setImage:image];
    // Add your code here that uses the image.
    NSLog(@"update");
}

which I took from Apple's example. By watching the "update" log messages I can verify that the method is called correctly every second, but the image does not change at all. Also, is the sampleBuffer destroyed automatically, or do I have to release it?

Here are the other two important methods:
viewDidLoad:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    session = [[AVCaptureSession alloc] init];

    // Add inputs and outputs.
    if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        session.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else {
        // Handle the failure.
        NSLog(@"Cannot set session preset to 640x480");
    }

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

    if (!input) {
        // Handle the error appropriately.
        NSLog(@"Could create input: %@",error);
    }

    if ([session canAddInput:input]) {
        [session addInput:input];
    }
    else {
        // Handle the failure.
        NSLog(@"Could not add input");
    }

    // DATA OUTPUT
    dataOutput = [[AVCaptureVideoDataOutput alloc] init];

    if ([session canAddOutput:dataOutput]) {
        [session addOutput:dataOutput];

        dataOutput.videoSettings = 
        [NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                    forKey: (id)kCVPixelBufferPixelFormatTypeKey];
        //dataOutput.minFrameDuration = CMTimeMake(1,15);
        //dataOutput.minFrameDuration = CMTimeMake(1,1);
        AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];

        [connection setVideoMaxFrameDuration:CMTimeMake(1,1)];
        [connection setVideoMinFrameDuration:CMTimeMake(1,1)];

    }
    else {
        // Handle the failure.
        NSLog(@"Could not add output");
    }
    // DATA OUTPUT END

    dispatch_queue_t queue = dispatch_queue_create("MyQueue",NULL);
    [dataOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);


    captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];

    [captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
    [captureVideoPreviewLayer setPosition:videoLayer.layer.position];

    [videoLayer.layer addSublayer:captureVideoPreviewLayer];

    [session startRunning];
}

Converting the CMSampleBufferRef to a UIImage:

- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer,0); 

    // Get the number of bytes per row for the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Thanks in advance for any help you can give me.

Solution

From the documentation for the captureOutput:didOutputSampleBuffer:fromConnection: method:

This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.

This means that if you need to update the UI with the buffer from this method, you have to do it on the main queue, like this:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(),^{
        [imageView setImage:image];
    });
}

Edit: regarding your first question: I'm not sure I fully understand the problem, but if you only want to update the image once per second, you could also keep a "lastImageUpdateTime" value and compare against it in the "didOutputSampleBuffer" method to see whether enough time has passed; if so, update the image there, otherwise ignore the sample buffer, as sketched below.
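
A minimal sketch of that check, assuming a CFAbsoluteTime instance variable named lastImageUpdateTime (initialized to 0) is added to the view controller; only the variable name comes from the answer, the rest is an assumption:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {

    // Ignore the buffer unless at least one second has passed since the
    // last time the image view was updated.
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    if (now - lastImageUpdateTime < 1.0) {
        return;
    }
    lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}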
