OpenCV Crash-Course Notes -- Image Processing 1


GOAL:
  • Learn how to convert an image from one color space to another, e.g. BGR to Gray, BGR to HSV.
  • Build an application that extracts a colored object from a video.
  • Related functions: cv.cvtColor(), cv.inRange()
1. Changing Color Spaces  1.1 Introduction

OpenCV offers more than 150 color-space conversion methods, but we only need the two most commonly used: BGR to Gray and BGR to HSV.
We convert colors with cv.cvtColor(input_image, flag), where flag determines the type of conversion. For BGR to Gray we use the flag cv.COLOR_BGR2GRAY; likewise, for BGR to HSV we use cv.COLOR_BGR2HSV.
Example:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

ad = r'E:\opencvdoment\pic=2.jpg'
save_ad = r'E:\opencvdoment\pic'
img = cv.imread(ad, 1)
img2 = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
img3 = cv.cvtColor(img, cv.COLOR_BGR2HSV)
print(img.shape, img2.shape, img3.shape)  # (265, 351, 3) (265, 351) (265, 351, 3)
plt.subplot(231), plt.imshow(img, 'gray'), plt.title('origin')
plt.subplot(232), plt.imshow(img2, 'gray'), plt.title('gray')
plt.subplot(233), plt.imshow(img3, 'gray'), plt.title('HSV')
plt.show()
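For reference, the BGR to Gray conversion above is the BT.601 weighted sum Y = 0.299 R + 0.587 G + 0.114 B. A minimal NumPy check of that formula, using a small synthetic image instead of the file above:

```python
import numpy as np

# a synthetic 2x2 BGR image: blue, green, red, white pixels
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

b = img[..., 0].astype(float)
g = img[..., 1].astype(float)
r = img[..., 2].astype(float)

# the BT.601 luma weights used by COLOR_BGR2GRAY
gray = np.rint(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
print(gray)  # blue -> 29, green -> 150, red -> 76, white -> 255
```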

To see the other flag values, just enter the following in a Python terminal:

import cv2 as cv
flags = [i for i in dir(cv) if i.startswith('COLOR_')]
print(flags)
# ['COLOR_BAYER_BG2BGR', 'COLOR_BAYER_BG2BGRA', 'COLOR_BAYER_BG2BGR_EA', ..., 'COLOR_YUV420sp2RGBA', 'COLOR_mRGBA2RGBA']
1.2 Object Tracking

Now that we know how to convert a BGR image to HSV, we can use this to extract a colored object. HSV makes it easier to represent a color than BGR does. In our application, we will try to extract a blue object, as follows:

    1. Take each frame of the video.
    2. Convert the frame from BGR to HSV color space.
    3. Threshold the HSV image with the range of blue pixel values.
    4. The blue object is now extracted, and we can process the image however we like.

Example:

import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)

while 1:
    ret, frame = cap.read()
    hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
    lower = np.array([110, 50, 50])
    upper = np.array([255, 255, 255])

    mask = cv.inRange(hsv, lower, upper)

    res = cv.bitwise_and(frame, frame, mask=mask)

    cv.imshow('frame', frame)
    cv.imshow('mask', mask)
    cv.imshow('res', res)
    t = cv.waitKey(5) & 0xff
    if t == ord('q'):
        break

cv.destroyAllWindows()
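What cv.inRange does can be sketched in a few lines of NumPy: a pixel becomes 255 only when every channel lies inside the bounds. This is a rough equivalent, not OpenCV's implementation:

```python
import numpy as np

def in_range(hsv, lower, upper):
    """Pure-NumPy equivalent of cv.inRange: 255 where all channels are in bounds."""
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return (mask * 255).astype(np.uint8)

# one row of two HSV pixels: the first is blue-ish, the second is not
hsv = np.array([[[120, 200, 200], [30, 200, 200]]], dtype=np.uint8)
lower = np.array([110, 50, 50])
upper = np.array([130, 255, 255])
print(in_range(hsv, lower, upper))  # [[255   0]]
```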
    

Result: run it and see for yourself.

1.3 Finding the HSV Values to Track  1.3.1 Introduction:

This question comes up frequently on stackoverflow.com. It is very simple, and you can use the same function: cv.cvtColor(). Instead of passing an image, just pass the BGR value you want.
Example code:

import numpy as np
import cv2 as cv

green = np.uint8([[[0, 255, 0]]])
hsv_green = cv.cvtColor(green, cv.COLOR_BGR2HSV)
print(hsv_green)  # [[[ 60 255 255]]]
Now you can take [H-10, 100, 100] and [H+10, 255, 255] as the lower and upper bounds respectively. Alternatively, you can use any image editing tool (such as GIMP) or any online converter to find these values, but don't forget to adjust the HSV ranges.
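The [H-10, 100, 100] / [H+10, 255, 255] rule above can also be computed without OpenCV at hand, using the standard library's colorsys. This is just a sketch of the arithmetic (colorsys returns hue in 0..1, which OpenCV scales into its 0..179 H channel):

```python
import colorsys

def hsv_range(b, g, r, spread=10):
    """Derive OpenCV-style lower/upper HSV bounds for a BGR color.

    The spread of 10 and the 100/255 floors for S and V follow the
    rule of thumb in the text; adjust them for your own images.
    """
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_cv = int(round(h * 180))  # OpenCV hue runs 0..179
    lower = [max(h_cv - spread, 0), 100, 100]
    upper = [min(h_cv + spread, 179), 255, 255]
    return lower, upper

print(hsv_range(0, 255, 0))   # green -> ([50, 100, 100], [70, 255, 255])
print(hsv_range(255, 0, 0))   # blue  -> ([110, 100, 100], [130, 255, 255])
```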

2. Geometric Transformations of Images  2.1 Goals:
    1. Related operations: translation, rotation, affine transformation
    2. Related functions: cv.getPerspectiveTransform
    2.2 Transformations

OpenCV provides two transformation functions, cv.warpAffine and cv.warpPerspective, with which all kinds of transformations can be performed. cv.warpAffine takes a 2x3 transformation matrix as input, while cv.warpPerspective takes a 3x3 transformation matrix.

2.3 Scaling

Scaling is resizing an image, done with the cv.resize() function. Its interpolation parameter selects the method: generally cv.INTER_AREA for shrinking and cv.INTER_CUBIC or cv.INTER_LINEAR for enlarging.
Example:

import numpy as np
import cv2 as cv

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad)

res = cv.resize(img, None, fx=0.5, fy=0.5, interpolation=cv.INTER_AREA)

cv.imshow('res', res)
cv.waitKey(0)
cv.destroyAllWindows()
    
    
2.4 Translation: cv.warpAffine()
import numpy as np
import cv2 as cv

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad, 0)

M = np.float32([[1, 0, 100], [0, 1, 50]])
rows, cols = img.shape
dst = cv.warpAffine(img, M, (cols, rows))

cv.imshow('dst', dst)
cv.waitKey(0)
cv.destroyAllWindows()
    
2.5 Rotation: cv.getRotationMatrix2D

Example:

import numpy as np
import cv2 as cv

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad, 0)

rows, cols = img.shape
M = cv.getRotationMatrix2D(((cols - 1) / 2.0, (rows - 1) / 2.0), 90, 1)
res = cv.warpAffine(img, M, (cols, rows))
cv.imshow('res', res)
cv.waitKey(0)
cv.destroyAllWindows()
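cv.getRotationMatrix2D builds its 2x3 matrix from the documented formula alpha = scale*cos(angle), beta = scale*sin(angle), with a translation that keeps the given center fixed. A NumPy sketch that reproduces it:

```python
import numpy as np

def rotation_matrix(center, angle_deg, scale=1.0):
    """Build the 2x3 matrix that cv.getRotationMatrix2D returns."""
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))   # alpha
    b = scale * np.sin(np.radians(angle_deg))   # beta
    return np.array([[a, b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

# rotating 90 degrees about (100, 50) maps (x, y) to (y + 50, -x + 150)
M = rotation_matrix((100.0, 50.0), 90)
print(np.round(M, 6))
```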
    
2.6 Affine Transformation

In an affine transformation, all parallel lines in the original image remain parallel in the output image. To find the transformation matrix, we need three points from the input image and their corresponding positions in the output image. cv.getAffineTransform then creates a 2x3 matrix, which is passed to cv.warpAffine.
Example:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad)

rows, cols, ch = img.shape
print(img.shape)

p1 = np.float32([[50, 50], [200, 50], [50, 200]])
p2 = np.float32([[10, 100], [200, 50], [100, 250]])

M = cv.getAffineTransform(p1, p2)
res = cv.warpAffine(img, M, (cols, rows))

plt.subplot(121), plt.imshow(img), plt.title('origin')
plt.subplot(122), plt.imshow(res), plt.title('now')
plt.show()
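cv.getAffineTransform is just solving a small linear system: each output coordinate is an affine function of the input, and three point pairs pin down the six unknowns. A sketch with np.linalg.solve, using the same three point pairs:

```python
import numpy as np

p1 = np.float32([[50, 50], [200, 50], [50, 200]])
p2 = np.float32([[10, 100], [200, 50], [100, 250]])

# Each output point satisfies [x', y'] = M @ [x, y, 1].
# Stacking the three input points (with a 1 appended) gives a
# 3x3 system whose solution, transposed, is the 2x3 matrix M.
A = np.hstack([p1, np.ones((3, 1), dtype=np.float32)])  # 3x3
M = np.linalg.solve(A, p2).T                            # 2x3
print(np.round(M, 4))
```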
    
2.7 Perspective Transformation

A perspective transformation needs a 3x3 transformation matrix. Straight lines remain straight even after the transformation. To find this matrix, you need 4 points on the input image and their corresponding points on the output image; among these 4 points, no 3 should be collinear. The matrix is found with cv.getPerspectiveTransform and then applied with cv.warpPerspective.
Example:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad)

rows, cols, ch = img.shape

p1 = np.float32([[56, 65], [168, 23], [28, 211], [123, 206]])
p2 = np.float32([[0, 0], [225, 0], [0, 212], [255, 212]])
M = cv.getPerspectiveTransform(p1, p2)
res = cv.warpPerspective(img, M, (cols, rows))
plt.subplot(121), plt.imshow(img), plt.title('origin')
plt.subplot(122), plt.imshow(res), plt.title('res')
plt.show()
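The 3x3 matrix cv.getPerspectiveTransform returns can likewise be derived by hand: fixing H[2,2] = 1 leaves eight unknowns, and each of the four point pairs contributes two linear equations. A sketch of that computation, using the same point pairs:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography mapping src -> dst with H[2,2] = 1,
    which is what cv.getPerspectiveTransform computes."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), cleared of the denominator
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

src = [(56, 65), (168, 23), (28, 211), (123, 206)]
dst = [(0, 0), (225, 0), (0, 212), (255, 212)]
H = perspective_matrix(src, dst)

# map the first source corner through H (homogeneous coordinates)
p = H @ np.array([56.0, 65.0, 1.0])
print(p[:2] / p[2])  # close to [0, 0]
```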
3. Image Thresholding  3.1 GOAL:
  • Simple thresholding, adaptive thresholding, Otsu's thresholding
  • Related functions: cv.threshold, cv.adaptiveThreshold
3.2 Simple Thresholding:

This method is straightforward: if a pixel value is greater than a certain threshold it is assigned one value, otherwise another. The function used is cv.threshold, with the following parameters:

    1. The source image
    2. The threshold value, used to classify pixel values
    3. maxval, the value assigned to pixels that exceed the threshold
    4. The threshold type, one of:
      cv.THRESH_BINARY
      cv.THRESH_BINARY_INV
      cv.THRESH_TRUNC
      cv.THRESH_TOZERO
      cv.THRESH_TOZERO_INV

See the documentation for details of the threshold types.

cv.threshold returns two values: retval, and the thresholded image.
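The five threshold types differ only in how the comparison against the threshold is mapped to an output value. A rough NumPy emulation (not OpenCV's implementation) makes the differences concrete:

```python
import numpy as np

img = np.array([[30, 100, 170],
                [90, 101, 255]], dtype=np.uint8)
thresh, maxval = 100, 255

binary     = np.where(img > thresh, maxval, 0).astype(np.uint8)  # THRESH_BINARY
binary_inv = np.where(img > thresh, 0, maxval).astype(np.uint8)  # THRESH_BINARY_INV
trunc      = np.minimum(img, thresh).astype(np.uint8)            # THRESH_TRUNC
tozero     = np.where(img > thresh, img, 0).astype(np.uint8)     # THRESH_TOZERO
tozero_inv = np.where(img > thresh, 0, img).astype(np.uint8)     # THRESH_TOZERO_INV
print(binary)  # [[  0   0 255] [  0 255 255]]
```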
Example:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad, 0)

ret, thr1 = cv.threshold(img, 100, 255, cv.THRESH_BINARY)
ret, thr2 = cv.threshold(img, 100, 255, cv.THRESH_BINARY_INV)
ret, thr3 = cv.threshold(img, 100, 255, cv.THRESH_TRUNC)
ret, thr4 = cv.threshold(img, 100, 255, cv.THRESH_TOZERO)
ret, thr5 = cv.threshold(img, 100, 255, cv.THRESH_TOZERO_INV)

titles = ['origin', 't1', 't2', 't3', 't4', 't5']
imgs = [img, thr1, thr2, thr3, thr4, thr5]

for i in range(6):
    plt.subplot(2, 3, i + 1), plt.imshow(imgs[i], 'gray')
    plt.title(titles[i])

plt.show()

The results are shown below:

3.3 Adaptive Thresholding  3.3.1 Introduction

In the previous section we used one global value as the threshold. This may not work well when the image has different lighting conditions in different areas. In that case we use adaptive thresholding: the algorithm computes the threshold for a small region of the image, so we get different thresholds for different regions of the same image, which gives better results for images with varying illumination.

3.3.2 cv.adaptiveThreshold()

Its three additional parameters are:

    1. Adaptive Method, how the threshold is calculated:

      cv.ADAPTIVE_THRESH_MEAN_C: the threshold is the mean of the neighborhood area
      cv.ADAPTIVE_THRESH_GAUSSIAN_C: the threshold is a Gaussian-weighted sum of the neighborhood values.

    2. Block Size: the size of the neighborhood area used to compute the threshold.

    3. C: a constant subtracted from the mean or weighted mean.
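To make the mean method concrete, here is a rough NumPy emulation of ADAPTIVE_THRESH_MEAN_C combined with THRESH_BINARY; note that OpenCV's actual border handling may differ slightly from the edge padding used here:

```python
import numpy as np

def adaptive_mean_threshold(img, block, C, maxval=255):
    """Rough emulation of cv.adaptiveThreshold with
    ADAPTIVE_THRESH_MEAN_C + THRESH_BINARY."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            # local threshold = mean of the block x block neighborhood, minus C
            local_mean = padded[y:y + block, x:x + block].mean()
            out[y, x] = maxval if img[y, x] > local_mean - C else 0
    return out

# a single bright pixel survives; the uniform background is zeroed
img = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]], dtype=np.uint8)
print(adaptive_mean_threshold(img, block=3, C=2))
```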

Example:

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

ad = r'E:\opencvdoment\pic=1.jpg'
img = cv.imread(ad, 0)

ret, t1 = cv.threshold(img, 120, 255, cv.THRESH_BINARY)

t2 = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_MEAN_C,
                          cv.THRESH_BINARY, 11, 2)
t3 = cv.adaptiveThreshold(img, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C,
                          cv.THRESH_BINARY, 11, 2)

imgs = [img, t1, t2, t3]
titles = ['origin', 't1', 't2', 't3']

for i in range(4):
    plt.subplot(2, 2, i + 1), plt.imshow(imgs[i], 'gray')
    plt.title(titles[i])

plt.show()

Result:

Sharing is welcome; when reprinting, please credit the source: 内存溢出

Original article: https://54852.com/langs/921557.html