How can I capture a frame from an Apple iSight using Python and PyObjC?


I am trying to capture a single frame from the Apple iSight camera built into a MacBook Pro, using Python (version 2.7 or 2.6) and PyObjC (version 2.2).

As a starting point, I used this old StackOverflow question. To verify that it made sense, I cross-referenced it against Apple's MyRecorder example, which it appears to be based on. Unfortunately, my script does not work.

My big questions are:

Am I initializing the camera correctly?
Am I starting the event loop correctly?
Is there any other setup I should be doing?

In the example script below, the intended operation is that after calling startImageCapture(), I should start printing "Got a frame..." messages from the CaptureDelegate. However, the camera's light never turns on and the delegate's callback never executes.

Also, there is no failure in startImageCapture(): all the functions claim to succeed, and it successfully finds the iSight device. Analyzing the session object in pdb shows that it has valid input and output objects, the output has a delegate assigned, the device is not in use by another process, and the session is marked as running after startRunning() is called.

Here is the code:

#!/usr/bin/env python2.7

import sys
import os
import time
import objc
import QTKit
import AppKit
from Foundation import NSObject
from Foundation import NSTimer
from PyObjCTools import AppHelper

objc.setVerbose(True)

class CaptureDelegate(NSObject):
    def captureOutput_didOutputVideoFrame_withSampleBuffer_fromConnection_(self, captureOutput,
                                                                           videoFrame, sampleBuffer,
                                                                           connection):
        # This should get called for every captured frame
        print "Got a frame: %s" % videoFrame

class QuitClass(NSObject):
    def quitMainLoop_(self, aTimer):
        # Just stop the main loop.
        print "Quitting main loop."
        AppHelper.stopEventLoop()

def startImageCapture():
    error = None

    # Create a QT Capture session
    session = QTKit.QTCaptureSession.alloc().init()

    # Find iSight device and open it
    dev = QTKit.QTCaptureDevice.defaultInputDeviceWithMediaType_(QTKit.QTMediaTypeVideo)
    print "Device: %s" % dev
    if not dev.open_(error):
        print "Couldn't open capture device."
        return

    # Create an input instance with the device we found and add to session
    input = QTKit.QTCaptureDeviceInput.alloc().initWithDevice_(dev)
    if not session.addInput_error_(input, error):
        print "Couldn't add input device."
        return

    # Create an output instance with a delegate for callbacks and add to session
    output = QTKit.QTCaptureDecompressedVideoOutput.alloc().init()
    delegate = CaptureDelegate.alloc().init()
    output.setDelegate_(delegate)
    if not session.addOutput_error_(output, error):
        print "Failed to add output delegate."
        return

    # Start the capture
    print "Initiating capture..."
    session.startRunning()

def main():
    # Open camera and start capturing frames
    startImageCapture()

    # Setup a timer to quit in 10 seconds (hack for now)
    quitInst = QuitClass.alloc().init()
    NSTimer.scheduledTimerWithTimeInterval_target_selector_userInfo_repeats_(10.0, quitInst,
                                                                             'quitMainLoop:',
                                                                             None, False)

    # Start Cocoa's main event loop
    AppHelper.runConsoleEventLoop(installInterrupt=True)

    print "After event loop"

if __name__ == "__main__":
    main()
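The quit-timer arrangement in main() above — schedule a one-shot timer, then block in the run loop until it fires — can be sketched without any Cocoa dependency. MiniLoop below is a toy stand-in for Cocoa's run loop; all names in it are illustrative and not part of QTKit or Foundation:

```python
import time

class MiniLoop(object):
    """Toy stand-in for a run loop: fires registered timers until stopped."""
    def __init__(self):
        self.timers = []   # each entry: [deadline, interval, callback, repeats]
        self.running = False

    def add_timer(self, interval, callback, repeats):
        self.timers.append([time.time() + interval, interval, callback, repeats])

    def stop(self):
        self.running = False

    def run(self):
        self.running = True
        while self.running and self.timers:
            now = time.time()
            for t in list(self.timers):
                if now >= t[0]:
                    t[2]()                  # fire the callback
                    if t[3]:
                        t[0] = now + t[1]   # repeating: reschedule
                    else:
                        self.timers.remove(t)
            time.sleep(0.01)

loop = MiniLoop()
events = []
loop.add_timer(0.05, lambda: events.append("frame"), False)  # "capture" event
loop.add_timer(0.1, loop.stop, False)                        # the quit timer
loop.run()
print(events)  # -> ['frame']
```

The real script relies on the same shape: work happens only while the loop is running, and the only way out is a timer callback that stops it.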

Thanks for any help you can provide!

Solution: Okay, I spent a day diving through the depths of PyObjC and got it working.

For future record: the reason the code in the question did not work is variable scope and garbage collection. The session variable was deleted when it fell out of scope, which happened before the event processor ever ran. Something must be done to retain it so it is not freed before it has time to run.
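The scope problem can be reproduced without any Cocoa at all. In the following sketch, Session and Capture are placeholder names (not QTKit classes); weakref shows that an object held only in a local variable is collected the moment the function returns, while one stored on an instance survives:

```python
import gc
import weakref

class Session(object):
    """Stand-in for a capture session: any object the event loop must reach."""
    pass

def start_capture_local():
    # Bug pattern: the session lives only in a local variable, so it is
    # garbage-collected as soon as this function returns.
    session = Session()
    return weakref.ref(session)

class Capture(object):
    def start(self):
        # Fix pattern: storing the session on self keeps it alive for as
        # long as the Capture instance itself is alive.
        self.session = Session()
        return weakref.ref(self.session)

dead = start_capture_local()
gc.collect()
print(dead() is None)       # True: the local-variable session is already gone

cap = Capture()
alive = cap.start()
gc.collect()
print(alive() is not None)  # True: the instance-attribute session survives
```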

Moving everything into a class and making the session a class variable made the callbacks start working. Additionally, the code below demonstrates getting the frame's pixel data into bitmap format and saving it via Cocoa calls, as well as how to copy it back into Python's world-view as a buffer or string.
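The "copy it back into Python" step can be illustrated generically. In the script below, bitdata.bytes() returns a read-only buffer object; this sketch stands it in with plain bytes and a memoryview to show the zero-copy view versus an explicit Python-owned copy, plus a round trip through a file:

```python
import os
import tempfile

# Stand-in for the frame's bitmap data; the real bitdata.bytes() returns a
# read-only buffer, which plain bytes approximate here.
raw = b"BM" + bytes(bytearray(range(64)))

view = memoryview(raw)      # zero-copy, read-only view of the buffer
copied = view.tobytes()     # explicit Python-owned copy of the pixel data

fd, path = tempfile.mkstemp(suffix=".bmp")
with os.fdopen(fd, "wb") as f:   # binary mode matters for image data
    f.write(copied)

with open(path, "rb") as f:
    roundtrip = f.read()
os.remove(path)

print(roundtrip == raw)  # True: the copy and the round trip are lossless
```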

The script below will capture a single frame:

#!/usr/bin/env python2.7
#
# camera.py -- by Trevor Bentley (02/04/2011)
#
# This work is licensed under a Creative Commons Attribution 3.0 Unported License.
#
# Run from the command line on an Apple laptop running OS X 10.6, this script will
# take a single frame capture using the built-in iSight camera and save it to disk
# using three methods.
#

import sys
import os
import time
import objc
import QTKit
from AppKit import *
from Foundation import NSObject
from Foundation import NSTimer
from PyObjCTools import AppHelper

class NSImageTest(NSObject):
    def init(self):
        self = super(NSImageTest, self).init()
        if self is None:
            return None

        self.session = None
        self.running = True

        return self

    def captureOutput_didOutputVideoFrame_withSampleBuffer_fromConnection_(self, captureOutput,
                                                                           videoFrame, sampleBuffer,
                                                                           connection):
        self.session.stopRunning()  # I just want one frame

        # Get a bitmap representation of the frame using CoreImage and Cocoa calls
        ciimage = CIImage.imageWithCVImageBuffer_(videoFrame)
        rep = NSCIImageRep.imageRepWithCIImage_(ciimage)
        bitrep = NSBitmapImageRep.alloc().initWithCIImage_(ciimage)
        bitdata = bitrep.representationUsingType_properties_(NSBMPFileType, objc.NULL)

        # Save image to disk using Cocoa
        t0 = time.time()
        bitdata.writeToFile_atomically_("grab.bmp", False)
        t1 = time.time()
        print "Cocoa saved in %.5f seconds" % (t1-t0)

        # Save a read-only buffer of image to disk using Python
        t0 = time.time()
        bitbuf = bitdata.bytes()
        f = open("python.bmp", "w")
        f.write(bitbuf)
        f.close()
        t1 = time.time()
        print "Python saved buffer in %.5f seconds" % (t1-t0)

        # Save a string-copy of the buffer to disk using Python
        t0 = time.time()
        bitbufstr = str(bitbuf)
        f = open("python2.bmp", "w")
        f.write(bitbufstr)
        f.close()
        t1 = time.time()
        print "Python saved string in %.5f seconds" % (t1-t0)

        # Will exit on next execution of quitMainLoop_()
        self.running = False

    def quitMainLoop_(self, aTimer):
        # Stop the main loop after one frame is captured.  Called rapidly from timer.
        if not self.running:
            AppHelper.stopEventLoop()

    def startImageCapture(self, aTimer):
        error = None
        print "Finding camera"

        # Create a QT Capture session
        self.session = QTKit.QTCaptureSession.alloc().init()

        # Find iSight device and open it
        dev = QTKit.QTCaptureDevice.defaultInputDeviceWithMediaType_(QTKit.QTMediaTypeVideo)
        print "Device: %s" % dev
        if not dev.open_(error):
            print "Couldn't open capture device."
            return

        # Create an input instance with the device we found and add to session
        input = QTKit.QTCaptureDeviceInput.alloc().initWithDevice_(dev)
        if not self.session.addInput_error_(input, error):
            print "Couldn't add input device."
            return

        # Create an output instance with a delegate for callbacks and add to session
        output = QTKit.QTCaptureDecompressedVideoOutput.alloc().init()
        output.setDelegate_(self)
        if not self.session.addOutput_error_(output, error):
            print "Failed to add output delegate."
            return

        # Start the capture
        print "Initiating capture..."
        self.session.startRunning()

    def main(self):
        # Callback that quits after a frame is captured
        NSTimer.scheduledTimerWithTimeInterval_target_selector_userInfo_repeats_(0.1, self,
                                                                                 'quitMainLoop:',
                                                                                 None, True)

        # Turn on the camera and start the capture
        self.startImageCapture(None)

        # Start Cocoa's main event loop
        AppHelper.runConsoleEventLoop(installInterrupt=True)

        print "Frame capture completed."

if __name__ == "__main__":
    test = NSImageTest.alloc().init()
    test.main()