How to connect to a camera with VFW and capture its video stream

Author: Wang Xianrong

Foreword

As the Windows operating system has evolved, the APIs for capturing video have evolved with it. Microsoft has provided three generations of interfaces: VFW, DirectShow, and Media Foundation. VFW was superseded by DirectShow long ago, while the newest, Media Foundation, is supported on Windows Vista and Windows 7. Unfortunately, these interfaces are COM-based and very flexible, which makes them inconvenient to use directly from .NET.

.NET wrappers

Generous developers have contributed many open-source projects. DirectShow.net wraps DirectShow, and MediaFoundation.net wraps Media Foundation; both can be found online. These wrappers map almost one-to-one onto the underlying COM interfaces, so they can be used for video capture, but they are still not particularly convenient.

After a good deal of searching, I think the following libraries wrap video capture well: DirectX.Capture, OpenCv, EmguCv, and AForge.

DirectX.Capture

DirectX.Capture is a project published on CodeProject. It makes it very easy to capture video and audio, preview them in a window, and save the result to a file. A DirectX.Capture example looks like this:
Capture capture = new Capture( Filters.VideoInputDevices[0],
                               Filters.AudioInputDevices[1] );
capture.Filename = @"C:\MyVideo.avi";
capture.Start();
// ...
capture.Stop();
However, it does not provide a way to grab an individual frame. If all you need is to preview the video and save it to a file, it works very well.

OpenCv

OpenCv wraps the VFW and DirectShow capture functionality nicely; it makes it easy to grab the contents of a single frame, and it can also write the result to a video file. An OpenCv example looks like this:
IntPtr ptrCapture = CvInvoke.cvCreateCameraCapture(param.deviceInfo.Index);
while (!stop)
{
    IntPtr ptrImage = CvInvoke.cvQueryFrame(ptrCapture);
    // ... process ptrImage ...
    lock (lockObject)
    {
        stop = stopCapture;
    }
}
CvInvoke.cvReleaseCapture(ref ptrCapture);
OpenCv does not wrap audio capture, however, so if you need to record audio at the same time it cannot help you. It is worth noting that OpenCv has wrapped DirectShow since version 1.1, which contradicts the widely repeated claim that OpenCv captures video through VFW and is therefore inefficient. See the appendix of this article for the evidence that OpenCv uses DirectShow.

EmguCv

EmguCv is a .NET wrapper around OpenCv. It inherits OpenCv's speed and is even easier to use. An EmguCv example looks like this:
Capture capture = new Capture(param.deviceInfo.Index);
while (!stop)
{
    pbCapture.Image = capture.QueryFrame().Bitmap;
    lock (lockObject)
    {
        stop = stopCapture;
    }
}
capture.Dispose();
AForge

AForge is a pure .NET open-source image processing library. Its video capture classes are also based on DirectShow, but they are easier to use and offer more features; in both usage and documentation it feels closer to Microsoft's own class libraries. The basic usage pattern looks like this:
captureAForge = new VideoCaptureDevice(cameraDevice.MonikerString);
captureAForge.NewFrame += new NewFrameEventHandler(captureAForge_NewFrame);
captureAForge.Start();
// ...
captureAForge.SignalToStop();

private void captureAForge_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    pbCapture.Image = (Bitmap)eventArgs.Frame.Clone();
}
Comparison

Having introduced the libraries, let us compare them. They are all based on DirectShow, so their performance is almost identical. In fact, I personally think the camera hardware and its driver support have a bigger impact on performance: my camera has no dedicated driver on Windows 7 and has to use Microsoft's default driver, and its performance is noticeably worse than under Windows XP. The main points worth noting are:
(1) Only DirectX.Capture supports audio capture;
(2) Only DirectX.Capture cannot grab an individual frame;
(3) The free edition of EmguCv comes with restrictive, commercially oriented licensing, while the other libraries are licensed very permissively;
(4) AForge has the best samples and documentation, and it offers more features.
Appendix: OpenCv also uses DirectShow to capture video

By examining the OpenCv 2.0 source code, I concluded that OpenCv captures video through DirectShow. The evidence is as follows:
(1) _highgui.h, line 100:

#if (_MSC_VER >= 1400 || defined __GNUC__) && !defined WIN64 && !defined _WIN64
    #define HAVE_VIDEOINPUT 1
#endif

(2) cvcap_dshow.cpp, line 44:

#ifdef HAVE_VIDEOINPUT
#include "videoinput.h"
/********************* Capturing video from camera via VFW *********************/
class CvCaptureCAM_DShow : public CvCapture

(3) cvcap.cpp, line 102:

CV_IMPL CvCapture* cvCreateCameraCapture (int index)
{
    // .....
    // line 140
    switch (domains[i])
    {
#ifdef HAVE_VIDEOINPUT
    case CV_CAP_DSHOW:
        capture = cvCreateCameraCapture_DShow(index);
        if (capture)
            return capture;
        break;
#endif
    // .....
Complete source code for this article
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Diagnostics;
using System.Runtime.InteropServices;
using AForge.Video;
using AForge.Video.DirectShow;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using System.Threading;

namespace ImageProcessLearn
{
    public partial class FormCameraCapture : Form
    {
        private int framesCaptured;                         // number of frames captured so far
        private int frameCount;                             // total number of frames to capture
        private VideoCaptureDevice captureAForge = null;    // AForge video capture object
        private bool stopCapture;                           // whether the capture should stop
        private object lockObject = new object();
        private Stopwatch sw;                               // stopwatch used for timing (declaration restored from context)

        public FormCameraCapture()
        {
            InitializeComponent();
            sw = new Stopwatch();
        }

        // On form load, populate the list of video capture devices
        private void FormCameraCapture_Load(object sender, EventArgs e)
        {
            FilterInfoCollection videoDevices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
            if (videoDevices != null && videoDevices.Count > 0)
            {
                int idx = 0;
                foreach (FilterInfo device in videoDevices)
                {
                    cmbCaptureDevice.Items.Add(new DeviceInfo(device.Name, device.MonikerString, idx, FilterCategory.VideoInputDevice));
                    idx++;
                }
                cmbCaptureDevice.SelectedIndex = 0;
            }
        }
        // When the selected video device changes, repopulate its capabilities
        private void cmbCaptureDevice_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (cmbCaptureDevice.SelectedItem != null)
            {
                // remember the previously selected capability
                Size oldFrameSize = new Size(0, 0);
                int oldMaxFrameRate = 0;
                if (cmbDeviceCapability.SelectedItem != null)
                {
                    oldFrameSize = ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize;
                    oldMaxFrameRate = ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).MaxFrameRate;
                }
                // clear the capability list
                cmbDeviceCapability.Items.Clear();
                // add the capabilities of the newly selected device
                int oldCapIndex = -1;   // new index of the previously selected capability
                VideoCaptureDevice video = new VideoCaptureDevice(((DeviceInfo)cmbCaptureDevice.SelectedItem).MonikerString);
                for (int i = 0; i < video.VideoCapabilities.Length; i++)
                {
                    VideoCapabilities cap = video.VideoCapabilities[i];
                    DeviceCapabilityInfo capInfo = new DeviceCapabilityInfo(cap.FrameSize, cap.MaxFrameRate);
                    cmbDeviceCapability.Items.Add(capInfo);
                    if (oldFrameSize == capInfo.FrameSize && oldMaxFrameRate == capInfo.MaxFrameRate)
                        oldCapIndex = i;
                }
                // reselect the previous capability, or fall back to the first one
                if (oldCapIndex == -1)
                    oldCapIndex = 0;
                cmbDeviceCapability.SelectedIndex = oldCapIndex;
            }
        }

        // When the selected capability changes, clamp the requested frame rate
        private void cmbDeviceCapability_SelectedIndexChanged(object sender, EventArgs e)
        {
            if (int.Parse(txtRate.Text) >= ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).MaxFrameRate)
                txtRate.Text = ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).MaxFrameRate.ToString();
        }
        // Performance test: measure the time needed to capture the specified number of frames
        // and convert them to images, then compute the FPS
        private void btnPerformTest_Click(object sender, EventArgs e)
        {
            int frameCount = int.Parse(txtFrameCount.Text);
            if (frameCount <= 0)
                frameCount = 300;
            DeviceInfo device = (DeviceInfo)cmbCaptureDevice.SelectedItem;
            btnPerformTest.Enabled = false;
            btnStart.Enabled = false;
            txtResult.Text += PerformTestWithAForge(device.MonikerString, frameCount);
            txtResult.Text += PerformTestWithEmguCv(device.Index, frameCount);
            txtResult.Text += PerformTestWithOpenCv(device.Index, frameCount);
            btnPerformTest.Enabled = true;
            btnStart.Enabled = true;
        }

        // AForge performance test
        private string PerformTestWithAForge(string deviceMonikerString, int frameCount)
        {
            VideoCaptureDevice video = new VideoCaptureDevice(deviceMonikerString);
            video.NewFrame += new NewFrameEventHandler(PerformTest_NewFrame);
            video.DesiredFrameSize = ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize;
            video.DesiredFrameRate = int.Parse(txtRate.Text);
            framesCaptured = 0;
            this.frameCount = frameCount;
            video.Start();
            sw.Reset();
            sw.Start();
            video.WaitForStop();
            double time = sw.Elapsed.TotalMilliseconds;
            return string.Format("AForge performance test, frames: {0}, time: {1:F05} ms, FPS: {2:F02}, settings ({3})\r\n", frameCount, time, 1000d * frameCount / time, GetSettings());
        }

        void PerformTest_NewFrame(object sender, NewFrameEventArgs eventArgs)
        {
            framesCaptured++;
            if (framesCaptured >= frameCount)
            {
                sw.Stop();
                VideoCaptureDevice video = sender as VideoCaptureDevice;
                video.SignalToStop();
            }
        }
        // EmguCv performance test
        private string PerformTestWithEmguCv(int deviceIndex, int frameCount)
        {
            Capture video = new Capture(deviceIndex);
            video.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Width);
            video.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Height);
            video.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FPS, double.Parse(txtRate.Text));
            sw.Reset();
            sw.Start();
            for (int i = 0; i < frameCount; i++)
                video.QueryFrame();
            sw.Stop();
            video.Dispose();
            double time = sw.Elapsed.TotalMilliseconds;
            return string.Format("EmguCv performance test, frames: {0}, time: {1:F05} ms, FPS: {2:F02}, settings ({3})\r\n", frameCount, time, 1000d * frameCount / time, GetSettings());
        }

        // OpenCv performance test
        private string PerformTestWithOpenCv(int deviceIndex, int frameCount)
        {
            IntPtr ptrVideo = CvInvoke.cvCreateCameraCapture(deviceIndex);
            CvInvoke.cvSetCaptureProperty(ptrVideo, CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Width);
            CvInvoke.cvSetCaptureProperty(ptrVideo, CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Height);
            CvInvoke.cvSetCaptureProperty(ptrVideo, CAP_PROP.CV_CAP_PROP_FPS, double.Parse(txtRate.Text));
            sw.Reset();
            sw.Start();
            for (int i = 0; i < frameCount; i++)
                CvInvoke.cvQueryFrame(ptrVideo);
            sw.Stop();
            CvInvoke.cvReleaseCapture(ref ptrVideo);
            double time = sw.Elapsed.TotalMilliseconds;
            return string.Format("OpenCv performance test, frames: {0}, time: {1:F05} ms, FPS: {2:F02}, settings ({3})\r\n", frameCount, time, 1000d * frameCount / time, GetSettings());
        }

        // Build a string describing the current settings
        private string GetSettings()
        {
            return string.Format("camera: {0}, size: {1}x{2}, FPS: {3}", ((DeviceInfo)cmbCaptureDevice.SelectedItem).Name,
                ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Width,
                ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize.Height,
                txtRate.Text);
        }
        // Start capturing video
        private void btnStart_Click(object sender, EventArgs e)
        {
            // read the settings
            DeviceInfo cameraDevice = (DeviceInfo)cmbCaptureDevice.SelectedItem;
            Size frameSize = ((DeviceCapabilityInfo)cmbDeviceCapability.SelectedItem).FrameSize;
            int rate = int.Parse(txtRate.Text);
            ThreadParam param = new ThreadParam(cameraDevice, new DeviceCapabilityInfo(frameSize, rate));
            if (rbAForge.Checked)
            {
                captureAForge = new VideoCaptureDevice(cameraDevice.MonikerString);
                captureAForge.DesiredFrameSize = frameSize;
                captureAForge.DesiredFrameRate = rate;
                captureAForge.NewFrame += new NewFrameEventHandler(captureAForge_NewFrame);
                txtResult.Text += string.Format("Start capturing video (method: AForge, start time: {0})......\r\n", DateTime.Now.ToLongTimeString());
                framesCaptured = 0;
                sw.Reset();
                sw.Start();
                captureAForge.Start();
            }
            else if (rbEmguCv.Checked)
            {
                stopCapture = false;
                Thread captureThread = new Thread(new ParameterizedThreadStart(CaptureWithEmguCv));
                captureThread.Start(param);
            }
            else if (rbOpenCv.Checked)
            {
                stopCapture = false;
                Thread captureThread = new Thread(new ParameterizedThreadStart(CaptureWithOpenCv));
                captureThread.Start(param);
            }
            btnStart.Enabled = false;
            btnStop.Enabled = true;
            btnPerformTest.Enabled = false;
        }

        private void captureAForge_NewFrame(object sender, NewFrameEventArgs eventArgs)
        {
            pbCapture.Image = (Bitmap)eventArgs.Frame.Clone();
            lock (lockObject)
            {
                framesCaptured++;
            }
        }
        // EmguCv capture worker thread
        private void CaptureWithEmguCv(object objParam)
        {
            bool stop = false;
            int framesCaptured = 0;
            Stopwatch sw = new Stopwatch();
            txtResult.Invoke(new AddResultDelegate(AddResultMethod), string.Format("Start capturing video (method: EmguCv, start time: {0})......\r\n", DateTime.Now.ToLongTimeString()));
            ThreadParam param = (ThreadParam)objParam;
            Capture capture = new Capture(param.deviceInfo.Index);
            capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, param.deviceCapability.FrameSize.Width);
            capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, param.deviceCapability.FrameSize.Height);
            capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FPS, param.deviceCapability.MaxFrameRate);
            sw.Start();
            while (!stop)
            {
                pbCapture.Image = capture.QueryFrame().Bitmap;
                framesCaptured++;
                lock (lockObject)
                {
                    stop = stopCapture;
                }
            }
            sw.Stop();
            txtResult.Invoke(new AddResultDelegate(AddResultMethod), string.Format("Capture finished (method: EmguCv, end time: {0}, elapsed: {1:F05} ms, frames: {2}, FPS: {3:F02})\r\n",
                DateTime.Now.ToLongTimeString(), sw.Elapsed.TotalMilliseconds, framesCaptured, framesCaptured / sw.Elapsed.TotalSeconds));
            capture.Dispose();
        }

        // OpenCv capture worker thread
        private void CaptureWithOpenCv(object objParam)
        {
            bool stop = false;
            int framesCaptured = 0;
            Stopwatch sw = new Stopwatch();
            txtResult.Invoke(new AddResultDelegate(AddResultMethod), string.Format("Start capturing video (method: OpenCv, start time: {0})......\r\n", DateTime.Now.ToLongTimeString()));
            ThreadParam param = (ThreadParam)objParam;
            IntPtr ptrCapture = CvInvoke.cvCreateCameraCapture(param.deviceInfo.Index);
            CvInvoke.cvSetCaptureProperty(ptrCapture, CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, param.deviceCapability.FrameSize.Width);
            CvInvoke.cvSetCaptureProperty(ptrCapture, CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, param.deviceCapability.FrameSize.Height);
            CvInvoke.cvSetCaptureProperty(ptrCapture, CAP_PROP.CV_CAP_PROP_FPS, param.deviceCapability.MaxFrameRate);
            sw.Start();
            while (!stop)
            {
                IntPtr ptrImage = CvInvoke.cvQueryFrame(ptrCapture);
                MIplImage iplImage = (MIplImage)Marshal.PtrToStructure(ptrImage, typeof(MIplImage));
                Image<Bgr, byte> image = new Image<Bgr, byte>(iplImage.width, iplImage.height, iplImage.widthStep, iplImage.imageData);
                pbCapture.Image = image.Bitmap;
                //pbCapture.Image = ImageConverter.IplImagePointerToBitmap(ptrImage);
                framesCaptured++;
                lock (lockObject)
                {
                    stop = stopCapture;
                }
            }
            sw.Stop();
            txtResult.Invoke(new AddResultDelegate(AddResultMethod), string.Format("Capture finished (method: OpenCv, end time: {0}, elapsed: {1:F05} ms, frames: {2}, FPS: {3:F02})\r\n",
                DateTime.Now.ToLongTimeString(), sw.Elapsed.TotalMilliseconds, framesCaptured, framesCaptured / sw.Elapsed.TotalSeconds));
            CvInvoke.cvReleaseCapture(ref ptrCapture);
        }
        // Stop capturing video
        private void btnStop_Click(object sender, EventArgs e)
        {
            if (captureAForge != null)
            {
                sw.Stop();
                if (captureAForge.IsRunning)
                    captureAForge.SignalToStop();
                captureAForge = null;
                txtResult.Text += string.Format("Capture finished (method: AForge, end time: {0}, elapsed: {1:F05} ms, frames: {2}, FPS: {3:F02})\r\n",
                    DateTime.Now.ToLongTimeString(), sw.Elapsed.TotalMilliseconds, framesCaptured, framesCaptured / sw.Elapsed.TotalSeconds);
            }
            lock (lockObject)
            {
                stopCapture = true;
            }
            btnStart.Enabled = true;
            btnStop.Enabled = false;
            btnPerformTest.Enabled = true;
        }

        // Delegate and method used by worker threads to append results to the text box
        public delegate void AddResultDelegate(string result);
        public void AddResultMethod(string result)
        {
            txtResult.Text += result;
        }
    }
    // Device information
    public struct DeviceInfo
    {
        public string Name;
        public string MonikerString;
        public int Index;
        public Guid Category;   // field restored from the constructor below

        public DeviceInfo(string name, string monikerString, int index) :
            this(name, monikerString, index, Guid.Empty)
        {
        }

        public DeviceInfo(string name, string monikerString, int index, Guid category)
        {
            Name = name;
            MonikerString = monikerString;
            Index = index;
            Category = category;
        }

        public override string ToString()
        {
            return Name;    // show the device name in the combo box
        }
    }

    // Device capability
    public struct DeviceCapabilityInfo
    {
        public Size FrameSize;
        public int MaxFrameRate;

        public DeviceCapabilityInfo(Size frameSize, int maxFrameRate)
        {
            FrameSize = frameSize;
            MaxFrameRate = maxFrameRate;
        }

        public override string ToString()
        {
            return string.Format("{0}x{1} {2}fps", FrameSize.Width, FrameSize.Height, MaxFrameRate);
        }
    }

    // Parameters passed to the capture worker threads
    public struct ThreadParam
    {
        public DeviceInfo deviceInfo;
        public DeviceCapabilityInfo deviceCapability;

        public ThreadParam(DeviceInfo deviceInfo, DeviceCapabilityInfo deviceCapability)
        {
            this.deviceInfo = deviceInfo;
            this.deviceCapability = deviceCapability;
        }
    }
}
I hope this article has been helpful.
Finally, happy Spring Festival!
Using VFW to capture video in VC++

To use VFW in VC++ you need to link against vfw32.lib. VFW provides the AVICap window class, which communicates with the video and audio hardware and can save captured video to an AVI file; the class is message-based.
(1) Include the header and import the library:
#include <vfw.h>
#pragma comment(lib, "vfw32")
(2) Create a thread and call capCreateCaptureWindow in it to create the video capture window.
Here is the version that does everything in the same thread:
hVideoWnd = capCreateCaptureWindow("Capture", WS_VISIBLE | WS_CHILD, 10, 10, 300, 300, *this, 0);
hVideoWnd is a global variable of type HWND.
(3) Call capDriverConnect to connect to the capture driver.
(4) Call capPreviewRate to set the preview rate.
(5) Call capPreview to start the preview.
A simple example follows (you can put it in OnInitDialog()):
hVideoWnd = capCreateCaptureWindow("Capture", WS_VISIBLE | WS_CHILD, 10, 10, 300, 300, *this, 0);
if (capDriverConnect(hVideoWnd, 0))
{
    capPreviewRate(hVideoWnd, 66);
    capPreview(hVideoWnd, TRUE);
}
Run the program and you will see live video in the dialog.
capGrabFrame(hVideoWnd) grabs a single frame. Add a button to the dialog and call this function in its handler; pressing the button then captures a still image.
You will notice, however, that the preview freezes after a frame has been grabbed. The following code fixes that and also copies the grabbed frame to the clipboard:
capGrabFrame(hVideoWnd);
capEditCopy(hVideoWnd);
capPreview(hVideoWnd, TRUE);
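If you also want to write the grabbed frame to disk, VFW provides the capFileSaveDIB macro, which saves the current frame as a device-independent bitmap. A minimal sketch, assuming the same hVideoWnd capture window; the file path here is only an example:

capGrabFrame(hVideoWnd);                          // grab one frame
capFileSaveDIB(hVideoWnd, _T("C:\\frame.bmp"));   // save it as a .bmp (example path)
capPreview(hVideoWnd, TRUE);                      // resume the live preview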
And here is the implementation that creates a separate thread (this code can also go in OnInitDialog):
m_pThread = AfxBeginThread(ThreadFun, (LPVOID)this);
m_Event.ResetEvent();
::WaitForSingleObject(m_Event, INFINITE);
if (m_hVideoWnd)
{
    if (capDriverConnect(m_hVideoWnd, 0) == FALSE)
        AfxMessageBox("Connect Driver error!");
    ::SetParent(m_hVideoWnd, *this);
    ::SetWindowLong(m_hVideoWnd, GWL_STYLE, WS_CHILD);
    ::SetWindowPos(m_hVideoWnd, NULL, 10, 10, 300, 300, SWP_NOREDRAW);
    ::ShowWindow(m_hVideoWnd, SW_SHOW);
    capPreviewRate(m_hVideoWnd, 10);
    capPreview(m_hVideoWnd, TRUE);
}
UINT CImageAcquisitionDlg::ThreadFun(LPVOID lpParam)
{
    CImageAcquisitionDlg *temp = (CImageAcquisitionDlg*)lpParam;
    temp->m_hVideoWnd = capCreateCaptureWindow("Capture", WS_POPUP, 10, 10, 20, 20, *temp, 0);
    if (temp->m_hVideoWnd)
        temp->m_Event.SetEvent();
    MSG msg;    // message loop for the capture window's thread
    while (GetMessage(&msg, temp->m_hVideoWnd, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return msg.wParam;
}
Below is some reference material collected from the web.
Both user-interface threads and worker threads are created with AfxBeginThread. MFC provides two overloads of AfxBeginThread, one for UI threads and one for worker threads, with the following prototypes:
The UI-thread overload of AfxBeginThread:
CWinThread* AFXAPI AfxBeginThread( CRuntimeClass* pThreadClass, int nPriority, UINT nStackSize, DWORD dwCreateFlags, LPSECURITY_ATTRIBUTES lpSecurityAttrs)   
Where:
Parameter 1 is the RUNTIME_CLASS of a class derived from CWinThread;
Parameter 2 specifies the thread priority; if it is 0, the new thread gets the same priority as the creating thread;
Parameter 3 specifies the thread's stack size; if it is 0, the stack is the same size as that of the creating thread;
Parameter 4 is a creation flag; if it is CREATE_SUSPENDED, the thread is created in a suspended state, otherwise it starts running as soon as it is created;
Parameter 5 specifies the thread's security attributes, which are only meaningful on NT.
The worker-thread overload of AfxBeginThread:
CWinThread* AfxBeginThread( AFX_THREADPROC pfnThreadProc, LPVOID lParam, int nPriority = THREAD_PRIORITY_NORMAL, UINT nStackSize = 0, DWORD dwCreateFlags = 0, LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL   );
Return value: a pointer to the thread object of the new thread.
pfnThreadProc: the thread entry function. It must be declared as
UINT MyThreadFunction(LPVOID pParam) and must not be NULL.
lpParam: the parameter passed into the thread. Its type is LPVOID, so you can pass a pointer to a structure into the thread.
nPriority: the thread priority, usually 0 so that it shares the priority of the main thread.
nStackSize: the stack size of the newly created thread; 0 gives it a stack the same size as the main thread's.
dwCreateFlags: the state of the thread after creation. Two values are possible:
CREATE_SUSPENDED: the thread is created suspended and does not run until ResumeThread is called;
0: the thread starts running immediately after creation.
lpSecurityAttrs: points to a SECURITY_ATTRIBUTES structure describing the security of the new thread; if NULL, the new thread has the same security as the main thread.
To end a thread from inside itself, call AfxEndThread within the thread.
There are two ways to end a thread:
1: The simplest is to let the thread function run to completion, in which case the thread ends normally and returns a value; 0 conventionally means success, though you can define whatever return values you consider suitable. Calling AfxEndThread inside the thread ends it immediately, and all of the thread's resources are reclaimed.
2: If you want another thread B to end thread A, the two threads have to exchange information.
Whether it is a worker thread or a UI thread, if you want its result after it has ended, you can call the ::GetExitCodeThread function.
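To make this concrete, here is a minimal sketch of a worker thread that receives a struct through lpParam and whose return value is read back with GetExitCodeThread. The names GrabParam and GrabThreadProc are invented for this example; the thread is created suspended so that m_bAutoDelete can be cleared before it runs, which keeps the CWinThread object valid after the thread exits.

// Hypothetical parameter struct and worker function (names invented for this sketch).
struct GrabParam
{
    HWND hCaptureWnd;   // capture window to grab from
    int  frameCount;    // how many frames to grab
};

UINT GrabThreadProc(LPVOID pParam)
{
    GrabParam* p = (GrabParam*)pParam;
    for (int i = 0; i < p->frameCount; ++i)
        capGrabFrame(p->hCaptureWnd);
    return 0;   // this value becomes the thread's exit code
}

// Somewhere in the dialog class:
GrabParam param = { m_hVideoWnd, 10 };
CWinThread* pThread = AfxBeginThread(GrabThreadProc, &param,
                                     THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
pThread->m_bAutoDelete = FALSE;                        // keep the CWinThread object alive
pThread->ResumeThread();
::WaitForSingleObject(pThread->m_hThread, INFINITE);   // wait for the thread to finish
DWORD exitCode = 0;
::GetExitCodeThread(pThread->m_hThread, &exitCode);    // retrieve the value returned above
delete pThread;                                        // we own the object because m_bAutoDelete is FALSE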
DWORD WINAPI WaitForSingleObject( __in HANDLE hHandle, __in DWORD dwMilliseconds );
hHandle [in]: the object handle. It can refer to any of a number of object types, such as an Event, Job, Memory resource notification, Mutex, Process, Semaphore, Thread, or Waitable timer.
If the handle is closed while the wait is still pending, the behavior of the function is undefined. The handle must have SYNCHRONIZE access rights.
dwMilliseconds [in]: the time-out interval, in milliseconds. If a nonzero value is specified, the function waits until the object identified by hHandle is signaled or the interval elapses. If dwMilliseconds is 0 and the object is not signaled, the function does not enter a wait state; it always returns immediately. If dwMilliseconds is INFINITE, the function returns only after the object has been signaled.
In other words, WaitForSingleObject checks the signaled state of the object hHandle. The calling thread is suspended; if the object becomes signaled within dwMilliseconds, the function returns immediately, and if the time-out expires before that happens, the function returns anyway.
The values 0 and INFINITE are special: with 0 the function returns immediately, and with INFINITE the thread remains suspended until the object referred to by hHandle becomes signaled.
Return values:
WAIT_ABANDONED (0x00000080): when hHandle is a mutex, this is returned if the thread that owned the mutex terminated without releasing it.
WAIT_OBJECT_0 (0x00000000): the object has been signaled.
WAIT_TIMEOUT (0x00000102): the wait timed out.
WAIT_FAILED (0xFFFFFFFF): an error occurred; call GetLastError for the error code.
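A short sketch of checking these return values when waiting on the event used in the threaded example above (m_Event is the CEvent member assumed there; the two-second time-out is arbitrary):

// Wait up to two seconds for the capture thread to signal that its window exists.
DWORD result = ::WaitForSingleObject(m_Event, 2000);
switch (result)
{
case WAIT_OBJECT_0:
    // The event was signaled: the capture window has been created.
    break;
case WAIT_TIMEOUT:
    AfxMessageBox("Timed out waiting for the capture window.");
    break;
case WAIT_FAILED:
    TRACE("WaitForSingleObject failed, error %lu\n", GetLastError());
    break;
}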
HWND VFWAPI capCreateCaptureWindow(
  LPCTSTR lpszWindowName,
  DWORD dwStyle,
  int x,
  int y,
  int nWidth,
  int nHeight,
  HWND hWnd,
  int nID
);

Parameters

lpszWindowName: Null-terminated string containing the name used for the capture window.
dwStyle: Window styles used for the capture window. Window styles are described with the CreateWindowEx function.
x: The x-coordinate of the upper left corner of the capture window.
y: The y-coordinate of the upper left corner of the capture window.
nWidth: Width of the capture window.
nHeight: Height of the capture window.
hWnd: Handle to the parent window.
nID: Window identifier.
BOOL capDriverConnect(hwnd, iIndex)

hwnd: handle of the video capture window.
iIndex: index of the capture driver to connect to, in the range 0 through 9.
HWND SetParent(HWND hWndChild, HWND hWndNewParent)

Changes the parent window of the specified child window.
hWndChild: handle of the child window.
hWndNewParent: handle of the new parent window. If this parameter is NULL, the desktop window becomes the new parent. On Windows NT 5.0 and later, if this parameter is HWND_MESSAGE, the child window becomes a message-only window.