Pre-built Python bindings for OpenCV, the comprehensive open-source computer vision and image processing library with 2500+ algorithms
OpenCV's image processing module (imgproc) provides comprehensive functionality for image filtering, geometric transformations, color space conversion, feature detection, segmentation, and more. All functions are accessed through the cv2 namespace.
Image filtering operations for smoothing, sharpening, edge detection, and noise reduction.
cv2.blur(src, ksize, dst=None, anchor=None, borderType=None) -> dst
Applies averaging blur filter (normalized box filter).
cv2.GaussianBlur(src, ksize, sigmaX, dst=None, sigmaY=None, borderType=None) -> dst
Applies Gaussian blur filter using a Gaussian kernel.
cv2.medianBlur(src, ksize, dst=None) -> dst
Applies median blur filter. Effective for salt-and-pepper noise removal.
cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace, dst=None, borderType=None) -> dst
Applies bilateral filter for edge-preserving smoothing.
cv2.boxFilter(src, ddepth, ksize, dst=None, anchor=None, normalize=True, borderType=None) -> dst
Applies box filter (unnormalized if normalize=False).
cv2.sqrBoxFilter(src, ddepth, ksize, dst=None, anchor=None, normalize=True, borderType=None) -> dst
Calculates normalized squared box filter (useful for local variance computation).
cv2.filter2D(src, ddepth, kernel, dst=None, anchor=None, delta=0, borderType=None) -> dst
Convolves image with a custom kernel.
cv2.sepFilter2D(src, ddepth, kernelX, kernelY, dst=None, anchor=None, delta=0, borderType=None) -> dst
Applies separable linear filter (more efficient for separable kernels).
cv2.Sobel(src, ddepth, dx, dy, dst=None, ksize=3, scale=1, delta=0, borderType=None) -> dst
Calculates image derivatives using the Sobel operator.
cv2.Scharr(src, ddepth, dx, dy, dst=None, scale=1, delta=0, borderType=None) -> dst
Calculates image derivatives using the Scharr operator (more accurate than 3x3 Sobel).
cv2.Laplacian(src, ddepth, dst=None, ksize=1, scale=1, delta=0, borderType=None) -> dst
Calculates the Laplacian of an image (sum of second derivatives).
cv2.Canny(image, threshold1, threshold2, edges=None, apertureSize=3, L2gradient=False) -> edges
Detects edges using the Canny algorithm.
Corner detection algorithms for identifying salient points in images, useful for feature tracking and image matching.
cv2.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, corners=None, mask=None, blockSize=3, useHarrisDetector=False, k=0.04) -> corners
Determines strong corners on an image using the Shi-Tomasi corner detection method.
cv2.cornerHarris(src, blockSize, ksize, k, dst=None, borderType=cv2.BORDER_DEFAULT) -> dst
Harris corner detector.
cv2.cornerSubPix(image, corners, winSize, zeroZone, criteria) -> corners
Refines the corner locations to sub-pixel accuracy.
Hough transforms detect lines, circles, and other shapes in images, typically applied after edge detection.
cv2.HoughLines(image, rho, theta, threshold, lines=None, srn=0, stn=0, min_theta=0, max_theta=np.pi) -> lines
Detects lines using the standard Hough Line Transform.
cv2.HoughLinesP(image, rho, theta, threshold, lines=None, minLineLength=0, maxLineGap=0) -> lines
Detects line segments using the Probabilistic Hough Line Transform.
cv2.HoughCircles(image, method, dp, minDist, circles=None, param1=100, param2=100, minRadius=0, maxRadius=0) -> circles
Detects circles using the Hough Circle Transform.
Usage Example:
import cv2
import numpy as np

image = cv2.imread("image.jpg")  # load source image

# Detect lines
edges = cv2.Canny(image, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi/180, threshold=100, minLineLength=50, maxLineGap=10)
if lines is not None:
    for line in lines:
        x1, y1, x2, y2 = line[0]
        cv2.line(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

# Detect circles
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=10, maxRadius=50)
if circles is not None:
    circles = np.uint16(np.around(circles))
    for circle in circles[0, :]:
        cv2.circle(image, (circle[0], circle[1]), circle[2], (0, 255, 0), 2)
cv2.BORDER_CONSTANT # Constant border (iiiiii|abcdefgh|iiiiiii)
cv2.BORDER_REPLICATE # Replicate border (aaaaaa|abcdefgh|hhhhhhh)
cv2.BORDER_REFLECT # Reflect border (fedcba|abcdefgh|hgfedcb)
cv2.BORDER_WRAP # Wrap border (cdefgh|abcdefgh|abcdefg)
cv2.BORDER_REFLECT_101 # Reflect 101 border (gfedcb|abcdefgh|gfedcba)
cv2.BORDER_DEFAULT # Same as BORDER_REFLECT_101
cv2.BORDER_ISOLATED # Do not extrapolate beyond image
Mathematical morphology operations for shape processing and noise removal.
cv2.erode(src, kernel, dst=None, anchor=None, iterations=1, borderType=None, borderValue=None) -> dst
Erodes image using specified structuring element.
cv2.dilate(src, kernel, dst=None, anchor=None, iterations=1, borderType=None, borderValue=None) -> dst
Dilates image using specified structuring element.
cv2.morphologyEx(src, op, kernel, dst=None, anchor=None, iterations=1, borderType=None, borderValue=None) -> dst
Performs advanced morphological transformations.
cv2.getStructuringElement(shape, ksize, anchor=None) -> retval
Creates structuring element for morphological operations.
cv2.MORPH_ERODE # Erosion
cv2.MORPH_DILATE # Dilation
cv2.MORPH_OPEN # Opening (erosion followed by dilation)
cv2.MORPH_CLOSE # Closing (dilation followed by erosion)
cv2.MORPH_GRADIENT # Morphological gradient (difference between dilation and erosion)
cv2.MORPH_TOPHAT # Top hat (difference between source and opening)
cv2.MORPH_BLACKHAT # Black hat (difference between closing and source)
cv2.MORPH_HITMISS # Hit-or-miss transform
cv2.MORPH_RECT # Rectangular structuring element
cv2.MORPH_CROSS # Cross-shaped structuring element
cv2.MORPH_ELLIPSE # Elliptical structuring element
Functions for resizing, rotating, warping, and remapping images.
cv2.resize(src, dsize, dst=None, fx=0, fy=0, interpolation=cv2.INTER_LINEAR) -> dst
Resizes image to specified size or by scale factors.
cv2.pyrDown(src, dst=None, dstsize=None, borderType=None) -> dst
Downsamples image using Gaussian pyramid.
cv2.pyrUp(src, dst=None, dstsize=None, borderType=None) -> dst
Upsamples image using Gaussian pyramid.
cv2.buildPyramid(src, maxlevel, dst=None, borderType=None) -> dst
Constructs Gaussian pyramid for an image.
cv2.warpAffine(src, M, dsize, dst=None, flags=cv2.INTER_LINEAR, borderMode=None, borderValue=None) -> dst
Applies affine transformation to image.
cv2.getRotationMatrix2D(center, angle, scale) -> retval
Calculates 2D rotation matrix for rotating around a center point.
cv2.getAffineTransform(src, dst) -> retval
Calculates affine transform from three pairs of corresponding points.
cv2.invertAffineTransform(M, iM=None) -> iM
Inverts affine transformation.
cv2.warpPerspective(src, M, dsize, dst=None, flags=cv2.INTER_LINEAR, borderMode=None, borderValue=None) -> dst
Applies perspective transformation to image.
cv2.getPerspectiveTransform(src, dst, solveMethod=cv2.DECOMP_LU) -> retval
Calculates perspective transform from four pairs of corresponding points.
cv2.remap(src, map1, map2, interpolation, dst=None, borderMode=None, borderValue=None) -> dst
Applies generic geometrical transformation using mapping arrays.
cv2.convertMaps(map1, map2, dstmap1type, dstmap1=None, dstmap2=None, nninterpolation=False) -> dstmap1, dstmap2
Converts image transformation maps from one representation to another.
cv2.getRectSubPix(image, patchSize, center, patch=None, patchType=-1) -> patch
Retrieves pixel rectangle from image with sub-pixel accuracy.
cv2.INTER_NEAREST # Nearest-neighbor interpolation
cv2.INTER_LINEAR # Bilinear interpolation
cv2.INTER_CUBIC # Bicubic interpolation
cv2.INTER_AREA # Resampling using pixel area relation (best for decimation)
cv2.INTER_LANCZOS4 # Lanczos interpolation over 8x8 neighborhood
cv2.INTER_LINEAR_EXACT # Bit-exact bilinear interpolation
cv2.INTER_NEAREST_EXACT # Bit-exact nearest-neighbor interpolation
cv2.INTER_MAX # Mask for interpolation codes
cv2.WARP_FILL_OUTLIERS # Fill all pixels outside source image
cv2.WARP_INVERSE_MAP # Inverse transformation (dst->src instead of src->dst)
Functions for converting between different color representations.
cv2.cvtColor(src, code, dst=None, dstCn=0) -> dst
Converts image from one color space to another.
cv2.cvtColorTwoPlane(src1, src2, code, dst=None) -> dst
Converts two-plane YUV format to RGB/BGR.
RGB/BGR conversions:
cv2.COLOR_BGR2RGB
cv2.COLOR_RGB2BGR
cv2.COLOR_BGR2BGRA
cv2.COLOR_RGB2RGBA
cv2.COLOR_BGRA2BGR
cv2.COLOR_RGBA2RGB
cv2.COLOR_BGR2RGBA
cv2.COLOR_RGB2BGRA
cv2.COLOR_RGBA2BGR
cv2.COLOR_BGRA2RGB
Grayscale conversions:
cv2.COLOR_BGR2GRAY
cv2.COLOR_RGB2GRAY
cv2.COLOR_GRAY2BGR
cv2.COLOR_GRAY2RGB
cv2.COLOR_GRAY2BGRA
cv2.COLOR_GRAY2RGBA
cv2.COLOR_BGRA2GRAY
cv2.COLOR_RGBA2GRAY
HSV conversions:
cv2.COLOR_BGR2HSV
cv2.COLOR_RGB2HSV
cv2.COLOR_HSV2BGR
cv2.COLOR_HSV2RGB
cv2.COLOR_BGR2HSV_FULL
cv2.COLOR_RGB2HSV_FULL
cv2.COLOR_HSV2BGR_FULL
cv2.COLOR_HSV2RGB_FULL
HLS conversions:
cv2.COLOR_BGR2HLS
cv2.COLOR_RGB2HLS
cv2.COLOR_HLS2BGR
cv2.COLOR_HLS2RGB
cv2.COLOR_BGR2HLS_FULL
cv2.COLOR_RGB2HLS_FULL
cv2.COLOR_HLS2BGR_FULL
cv2.COLOR_HLS2RGB_FULL
Lab conversions:
cv2.COLOR_BGR2Lab
cv2.COLOR_RGB2Lab
cv2.COLOR_Lab2BGR
cv2.COLOR_Lab2RGB
cv2.COLOR_LBGR2Lab
cv2.COLOR_LRGB2Lab
cv2.COLOR_Lab2LBGR
cv2.COLOR_Lab2LRGB
Luv conversions:
cv2.COLOR_BGR2Luv
cv2.COLOR_RGB2Luv
cv2.COLOR_Luv2BGR
cv2.COLOR_Luv2RGB
cv2.COLOR_LBGR2Luv
cv2.COLOR_LRGB2Luv
cv2.COLOR_Luv2LBGR
cv2.COLOR_Luv2LRGB
YUV conversions:
cv2.COLOR_BGR2YUV
cv2.COLOR_RGB2YUV
cv2.COLOR_YUV2BGR
cv2.COLOR_YUV2RGB
cv2.COLOR_YUV2RGB_NV12
cv2.COLOR_YUV2BGR_NV12
cv2.COLOR_YUV2RGB_NV21
cv2.COLOR_YUV2BGR_NV21
cv2.COLOR_YUV2RGBA_NV12
cv2.COLOR_YUV2BGRA_NV12
cv2.COLOR_YUV2RGBA_NV21
cv2.COLOR_YUV2BGRA_NV21
cv2.COLOR_YUV2RGB_YV12
cv2.COLOR_YUV2BGR_YV12
cv2.COLOR_YUV2RGB_IYUV
cv2.COLOR_YUV2BGR_IYUV
cv2.COLOR_YUV2RGBA_YV12
cv2.COLOR_YUV2BGRA_YV12
cv2.COLOR_YUV2RGBA_IYUV
cv2.COLOR_YUV2BGRA_IYUV
cv2.COLOR_YUV2GRAY_420
cv2.COLOR_YUV2GRAY_NV21
cv2.COLOR_YUV2GRAY_NV12
cv2.COLOR_YUV2GRAY_YV12
cv2.COLOR_YUV2GRAY_IYUV
YCrCb conversions:
cv2.COLOR_BGR2YCrCb
cv2.COLOR_RGB2YCrCb
cv2.COLOR_YCrCb2BGR
cv2.COLOR_YCrCb2RGB
XYZ conversions:
cv2.COLOR_BGR2XYZ
cv2.COLOR_RGB2XYZ
cv2.COLOR_XYZ2BGR
cv2.COLOR_XYZ2RGB
Bayer pattern conversions:
cv2.COLOR_BayerBG2BGR
cv2.COLOR_BayerGB2BGR
cv2.COLOR_BayerRG2BGR
cv2.COLOR_BayerGR2BGR
cv2.COLOR_BayerBG2RGB
cv2.COLOR_BayerGB2RGB
cv2.COLOR_BayerRG2RGB
cv2.COLOR_BayerGR2RGB
cv2.COLOR_BayerBG2GRAY
cv2.COLOR_BayerGB2GRAY
cv2.COLOR_BayerRG2GRAY
cv2.COLOR_BayerGR2GRAY
Operations for computing and analyzing image histograms.
cv2.calcHist(images, channels, mask, histSize, ranges, hist=None, accumulate=False) -> hist
Calculates histogram of image(s).
cv2.calcBackProject(images, channels, hist, ranges, scale, dst=None) -> dst
Calculates back projection of histogram.
cv2.compareHist(H1, H2, method) -> retval
Compares two histograms.
cv2.equalizeHist(src, dst=None) -> dst
Equalizes histogram of grayscale image.
cv2.createCLAHE(clipLimit=40.0, tileGridSize=(8,8)) -> retval
Creates CLAHE (Contrast Limited Adaptive Histogram Equalization) object.
cv2.HISTCMP_CORREL # Correlation
cv2.HISTCMP_CHISQR # Chi-Square
cv2.HISTCMP_INTERSECT # Intersection
cv2.HISTCMP_BHATTACHARYYA # Bhattacharyya distance
cv2.HISTCMP_HELLINGER # Synonym for BHATTACHARYYA
cv2.HISTCMP_CHISQR_ALT # Alternative Chi-Square
cv2.HISTCMP_KL_DIV # Kullback-Leibler divergence
Binary and adaptive thresholding operations.
cv2.threshold(src, thresh, maxval, type, dst=None) -> retval, dst
Applies fixed-level threshold to image.
cv2.adaptiveThreshold(src, maxValue, adaptiveMethod, thresholdType, blockSize, C, dst=None) -> dst
Applies adaptive threshold (threshold varies across image).
cv2.THRESH_BINARY # dst = (src > thresh) ? maxval : 0
cv2.THRESH_BINARY_INV # dst = (src > thresh) ? 0 : maxval
cv2.THRESH_TRUNC # dst = (src > thresh) ? thresh : src
cv2.THRESH_TOZERO # dst = (src > thresh) ? src : 0
cv2.THRESH_TOZERO_INV # dst = (src > thresh) ? 0 : src
cv2.THRESH_MASK # Mask for threshold types
cv2.THRESH_OTSU # Use Otsu's algorithm (flag, combine with type)
cv2.THRESH_TRIANGLE # Use Triangle algorithm (flag, combine with type)
cv2.ADAPTIVE_THRESH_MEAN_C # Threshold = mean of neighborhood - C
cv2.ADAPTIVE_THRESH_GAUSSIAN_C # Threshold = weighted sum (Gaussian) - C
Advanced segmentation algorithms for partitioning images into regions.
cv2.watershed(image, markers) -> markers
Performs marker-based image segmentation using watershed algorithm.
cv2.grabCut(img, mask, rect, bgdModel, fgdModel, iterCount, mode=cv2.GC_EVAL) -> mask, bgdModel, fgdModel
Segments foreground using GrabCut algorithm.
cv2.connectedComponents(image, labels=None, connectivity=8, ltype=cv2.CV_32S) -> retval, labels
Computes the connected components labeled image of boolean image.
cv2.connectedComponentsWithStats(image, labels=None, stats=None, centroids=None, connectivity=8, ltype=cv2.CV_32S) -> retval, labels, stats, centroids
Computes the connected components labeled image and produces statistics.
cv2.distanceTransform(src, distanceType, maskSize, dst=None, dstType=cv2.CV_32F) -> dst
Calculates distance to nearest zero pixel for each pixel.
cv2.floodFill(image, mask, seedPoint, newVal, loDiff=None, upDiff=None, flags=None) -> retval, image, mask, rect
Fills connected component with specified color.
cv2.GC_BGD # Background pixel (0)
cv2.GC_FGD # Foreground pixel (1)
cv2.GC_PR_BGD # Probably background pixel (2)
cv2.GC_PR_FGD # Probably foreground pixel (3)
cv2.GC_INIT_WITH_RECT # Initialize with rectangle
cv2.GC_INIT_WITH_MASK # Initialize with mask
cv2.GC_EVAL # Evaluate mode
cv2.DIST_USER # User-defined distance
cv2.DIST_L1 # Distance = |x1-x2| + |y1-y2|
cv2.DIST_L2 # Euclidean distance
cv2.DIST_C # Distance = max(|x1-x2|, |y1-y2|)
cv2.DIST_L12 # L1-L2 metric
cv2.DIST_FAIR # Distance = c^2(|x|/c - log(1+|x|/c))
cv2.DIST_WELSCH # Distance = c^2/2(1-exp(-(x/c)^2))
cv2.DIST_HUBER # Distance = |x|<c ? x^2/2 : c(|x|-c/2)
cv2.DIST_MASK_3 # Mask size 3
cv2.DIST_MASK_5 # Mask size 5
cv2.DIST_MASK_PRECISE # Precise distance calculation
Template matching for finding pattern locations in images.
cv2.matchTemplate(image, templ, method, result=None, mask=None) -> result
Compares template against overlapping image regions.
cv2.TM_SQDIFF # Sum of squared differences (minimum is best)
cv2.TM_SQDIFF_NORMED # Normalized SQDIFF (minimum is best)
cv2.TM_CCORR # Cross-correlation (maximum is best)
cv2.TM_CCORR_NORMED # Normalized cross-correlation (maximum is best)
cv2.TM_CCOEFF # Correlation coefficient (maximum is best)
cv2.TM_CCOEFF_NORMED # Normalized correlation coefficient (maximum is best)
Multi-scale image representation using Gaussian pyramids.
cv2.pyrDown(src, dst=None, dstsize=None, borderType=None) -> dst
Blurs and downsamples image (builds next pyramid level down).
cv2.pyrUp(src, dst=None, dstsize=None, borderType=None) -> dst
Upsamples and blurs image (builds next pyramid level up).
cv2.buildPyramid(src, maxlevel, dst=None, borderType=None) -> dst
Constructs Gaussian pyramid.
Additional transformation and accumulation operations.
cv2.integral(src, sum=None, sdepth=-1) -> sum
cv2.integral2(src, sum=None, sqsum=None, sdepth=-1, sqdepth=-1) -> sum, sqsum
cv2.integral3(src, sum=None, sqsum=None, tilted=None, sdepth=-1, sqdepth=-1) -> sum, sqsum, tilted
Calculates integral image(s) for fast area sum computation.
cv2.accumulate(src, dst, mask=None) -> dst
Adds image to accumulator.
cv2.accumulateSquare(src, dst, mask=None) -> dst
Adds square of source image to accumulator.
cv2.accumulateProduct(src1, src2, dst, mask=None) -> dst
Adds product of two images to accumulator.
cv2.accumulateWeighted(src, dst, alpha, mask=None) -> dst
Updates running average (exponentially weighted).
cv2.createHanningWindow(winSize, type) -> dst
Creates Hanning window (used for DFT-based trackers).
cv2.phaseCorrelate(src1, src2, window=None) -> retval, response
Detects translational shift between two images using phase correlation.
Install with Tessl CLI
npx tessl i tessl/pypi-opencv-python@4.12.1