OpenCV Filters - copyMakeBorder

Forms a border around an image.


C++: void copyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, int borderType, const Scalar& value=Scalar() )

Python: cv2.copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value]]) → dst

C: void cvCopyMakeBorder(const CvArr* src, CvArr* dst, CvPoint offset, int bordertype, CvScalar value=cvScalarAll(0) )

Python: cv.CopyMakeBorder(src, dst, offset, bordertype, value=(0, 0, 0, 0)) → None

Parameters:
  • src – Source image.
  • dst – Destination image of the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom).
  • top, bottom, left, right – Number of pixels in each direction from the source image rectangle to extrapolate. For example, top=1, bottom=1, left=1, right=1 means that a 1-pixel-wide border needs to be built.
  • borderType – Border type. See borderInterpolate() for details.
  • value – Border value if borderType==BORDER_CONSTANT (see the short sketch below).
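
As a quick illustration of the value parameter, here is a minimal sketch (the 10-pixel width and the green color are arbitrary example values) that builds a constant-color border around a loaded image src:

cv::Mat bordered;
// BORDER_CONSTANT fills the new border pixels with the given Scalar (BGR order here).
cv::copyMakeBorder(src, bordered, 10, 10, 10, 10,
                   cv::BORDER_CONSTANT, cv::Scalar(0, 255, 0));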


The function copies the source image into the middle of the destination image. The areas to the left, to the right, above, and below the copied source image will be filled with extrapolated pixels. This is not what FilterEngine or the filtering functions based on it do (they extrapolate pixels on the fly), but what other, more complex functions, including your own, may do to simplify image boundary handling.

The function supports the mode when src is already in the middle of dst . In this case, the function does not copy src itself but simply constructs the border, for example:

// let border be the same in all directions
int border=2;
// constructs a larger image to fit both the image and the border
Mat gray_buf(rgb.rows + border*2, rgb.cols + border*2, rgb.depth());
// select the middle part of it w/o copying data
Mat gray(gray_buf, Rect(border, border, rgb.cols, rgb.rows));
// convert image from RGB to grayscale
cvtColor(rgb, gray, CV_RGB2GRAY);
// form a border in-place
copyMakeBorder(gray, gray_buf, border, border,
               border, border, BORDER_REPLICATE);
// now do some custom filtering ...
...



Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED.
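
To make the note concrete, here is a minimal sketch (the file name big.jpg and the rectangle are hypothetical) comparing the two behaviors for an ROI:

cv::Mat big = cv::imread("big.jpg");
cv::Mat roi = big(cv::Rect(100, 100, 200, 200));   // ROI sharing data with the bigger image

cv::Mat usingNeighbors, isolated;
// By default, the border may be taken from real pixels of "big" around the ROI.
cv::copyMakeBorder(roi, usingNeighbors, 16, 16, 16, 16, cv::BORDER_REPLICATE);
// BORDER_ISOLATED forces extrapolation from the ROI pixels only.
cv::copyMakeBorder(roi, isolated, 16, 16, 16, 16,
                   cv::BORDER_REPLICATE | cv::BORDER_ISOLATED);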

Reference: OpenCV Documentation - copyMakeBorder


Example
Here is sample C++ code, with images, for OpenCV copyMakeBorder.

string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

Mat dstBorderConstant;
copyMakeBorder(src, dstBorderConstant, 256, 256, 256, 256, BORDER_CONSTANT);

Mat dstBorderDefault;
copyMakeBorder(src, dstBorderDefault, 256, 256, 256, 256, BORDER_DEFAULT);

Mat dstBorderIsolate;
copyMakeBorder(src, dstBorderIsolate, 256, 256, 256, 256, BORDER_ISOLATED);

Mat dstBorderReflect;
copyMakeBorder(src, dstBorderReflect, 256, 256, 256, 256, BORDER_REFLECT);

Mat dstBorderReflect101;
copyMakeBorder(src, dstBorderReflect101, 256, 256, 256, 256, BORDER_REFLECT101);

Mat dstBorderReflect_101;
copyMakeBorder(src, dstBorderReflect_101, 256, 256, 256, 256, BORDER_REFLECT_101);

Mat dstBorderReplicate;
copyMakeBorder(src, dstBorderReplicate, 256, 256, 256, 256, BORDER_REPLICATE);

Mat dstBorderWrap;
copyMakeBorder(src, dstBorderWrap, 256, 256, 256, 256, BORDER_WRAP);

cv::namedWindow("Source", CV_WINDOW_FREERATIO);
   
cv::namedWindow("BorderConstant", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderDefault", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderIsolate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect_101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReplicate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderWrap", CV_WINDOW_FREERATIO);


cv::imshow("Source", src);

cv::imshow("BorderConstant", dstBorderConstant);
cv::imshow("BorderDefault", dstBorderDefault);
cv::imshow("BorderIsolate", dstBorderIsolate);
cv::imshow("BorderReflect", dstBorderReflect);
cv::imshow("BorderReflect101", dstBorderReflect101);
cv::imshow("BorderReflect_101", dstBorderReflect_101);
cv::imshow("BorderReplicate", dstBorderReplicate);
cv::imshow("BorderWrap", dstBorderWrap);
cv::waitKey(0);

cv::imwrite("BorderConstant.jpg", dstBorderConstant);
cv::imwrite("BorderDefault.jpg", dstBorderDefault);
cv::imwrite("BorderIsolate.jpg", dstBorderIsolate);
cv::imwrite("BorderReflect.jpg", dstBorderReflect);
cv::imwrite("BorderReflect101.jpg", dstBorderReflect101);
cv::imwrite("BorderReflect_101.jpg", dstBorderReflect_101);
cv::imwrite("BorderReplicate.jpg", dstBorderReplicate);
cv::imwrite("BorderWrap.jpg", dstBorderWrap);



[Figures: the source image and the filtered results for borderType = BORDER_REFLECT, BORDER_REFLECT101, BORDER_REFLECT_101, BORDER_REPLICATE, BORDER_CONSTANT, BORDER_DEFAULT, BORDER_ISOLATED, and BORDER_WRAP.]





Download complete Visual Studio project.


OpenCV Filters - boxFilter

Blurs an image using the box filter.

C++: void boxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT )

Python: cv2.boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) → dst

Parameters:
  • src – input image.
  • dst – output image of the same size and type as src.
  • ddepth – the output image depth (-1 to use src.depth()).
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • normalize – flag, specifying whether the kernel is normalized by its area or not.
  • borderType – border mode used to extrapolate pixels outside of the image.

The function smooths an image using the kernel:

K = \alpha \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}

where

\alpha = \begin{cases} \dfrac{1}{\text{ksize.width} \cdot \text{ksize.height}} & \text{when normalize=true} \\ 1 & \text{otherwise} \end{cases}

An unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral(), as shown in the sketch below.
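
Here is a minimal sketch of that last point (the grayscale load of lena.jpg and the summation window are hypothetical example values): an unnormalized box filter gives per-pixel sums over a fixed window, while integral() lets you sum over any rectangle afterwards.

cv::Mat gray = cv::imread("lena.jpg", 0);          // 0 = load as a single-channel image
cv::Mat gray32f;
gray.convertTo(gray32f, CV_32F);

// Per-pixel sums over a fixed 5x5 neighborhood (normalize=false keeps raw sums).
cv::Mat sums5x5;
cv::boxFilter(gray32f, sums5x5, CV_32F, cv::Size(5, 5),
              cv::Point(-1, -1), false, cv::BORDER_DEFAULT);

// Integral image: the sum over any rectangle (x, y, w, h) takes four lookups.
cv::Mat sum;
cv::integral(gray32f, sum, CV_64F);
int x = 10, y = 10, w = 7, h = 3;                  // hypothetical window
double rectSum = sum.at<double>(y + h, x + w) - sum.at<double>(y, x + w)
               - sum.at<double>(y + h, x)     + sum.at<double>(y, x);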

Reference: OpenCV Documentation - boxFilter


Example
Here is sample C++ code, with images, for the OpenCV box filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::boxFilter(src, dst, -1, cv::Size(16, 16));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Box Filter.jpg", dst);

 return 0;


[Figures: the source image and the box-filtered result.]



Download complete Visual Studio project.


OpenCV Filters - buildPyramid

Constructs the Gaussian pyramid for an image.

C++: void buildPyramid(InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT )

Parameters:
  • src – Source image. Check pyrDown() for the list of supported types.
  • dst – Destination vector of maxlevel+1 images of the same type as src . dst[0] will be the same as src . dst[1] is the next pyramid layer, a smoothed and down-sized src , and so on.
  • maxlevel – 0-based index of the last (the smallest) pyramid layer. It must be non-negative.
  • borderType – Pixel extrapolation method (BORDER_CONSTANT is not supported). See borderInterpolate() for details.

The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown() to the previously built pyramid layers, starting from dst[0]==src.
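
To make the recursion concrete, here is a minimal sketch (assuming src is a loaded image, as in the example below) that builds the same layers by hand with pyrDown():

vector<cv::Mat> pyr;
cv::buildPyramid(src, pyr, 3);                 // pyr[0] is src, pyr[1..3] are the downsized layers

vector<cv::Mat> manual(4);
manual[0] = src;
for (int i = 1; i <= 3; i++)
    cv::pyrDown(manual[i - 1], manual[i]);     // each layer is roughly half the size of the previous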

Reference: OpenCV Documentation - buildPyramid


Example
Here is sample C++ code, with images, for OpenCV buildPyramid.


string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

int maxVal = 4;
vector<cv::Mat> dstVect;
cv::buildPyramid(src, dstVect, maxVal);

cv::namedWindow("Source");
cv::imshow("Source", src);

string imgName;
for (int i = 0; i < maxVal + 1; i++){
    stringstream ss;
    ss << "Filtered " << i;
    imgName = ss.str();

    cv::namedWindow(imgName);
    cv::imshow(imgName, dstVect[i]);

    ss << ".jpg";
    imgName = ss.str();
    cv::imwrite(imgName, dstVect[i]);
}
cv::waitKey(0);

return 0;


[Figures: the source image (512 x 512) and the pyramid layers at 256 x 256, 128 x 128, 64 x 64, and 32 x 32.]



Download complete Visual Studio project.


OpenCV Filters - bilateralFilter

Applies the bilateral filter to an image.

C++: void bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT )

Python: cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) → dst


Parameters:
  • src – Source 8-bit or floating-point, 1-channel or 3-channel image.
  • dst – Destination image of the same size and type as src .
  • d – Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
  • sigmaColor – Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
  • sigmaSpace – Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.

Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

This filter does not work in place.
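
Putting the guidance above together, here is a minimal sketch (the parameter values are just the rule-of-thumb ones mentioned above, not prescribed settings); note that a separate destination image is used because the filter cannot run in place:

cv::Mat smoothedRealtime, smoothedOffline;
// Real-time oriented: small diameter, equal moderate sigmas.
cv::bilateralFilter(src, smoothedRealtime, 5, 75, 75);
// Offline / heavy noise filtering: larger diameter and sigmas give a stronger, more "cartoonish" result.
cv::bilateralFilter(src, smoothedOffline, 9, 150, 150);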

Reference: OpenCV Documentation -  bilateralFilter

Example
Here is sample C++ code, with images, for the OpenCV bilateral filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::bilateralFilter(src, dst, 20, 100, 100);

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("BilateralFilter.jpg", dst);

 return 0;

[Figures: the source image and the bilateral-filtered result.]




Download complete Visual Studio project.


OpenCV Filters - Blur

Blurs an image using the normalized box filter.

C++: void blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT ) 
Python: cv2.blur(src, ksize[, dst[, anchor[, borderType]]]) → dst

Parameters:
  • src – input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
  • dst – output image of the same size and type as src.
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • borderType – border mode used to extrapolate pixels outside of the image.
The function smooths an image using the kernel:

K = \frac{1}{\text{ksize.width} \cdot \text{ksize.height}} \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & 1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}

The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
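
Here is a minimal sketch (assuming src is a loaded image) that checks this equivalence; the two outputs should match pixel for pixel:

cv::Mat viaBlur, viaBoxFilter;
cv::blur(src, viaBlur, cv::Size(5, 5));
cv::boxFilter(src, viaBoxFilter, -1,               // -1 keeps the source depth
              cv::Size(5, 5), cv::Point(-1, -1),
              true, cv::BORDER_DEFAULT);
CV_Assert(cv::norm(viaBlur, viaBoxFilter, cv::NORM_INF) == 0);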

Reference: OpenCV Documentation - blur

Example
Here is sample C++ code, with images, for the OpenCV blur filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::blur(src, dst, cv::Size(19, 19));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Blur.jpg", dst);

 return 0;



[Figures: the source image and the blurred result.]



Download complete Visual Studio project.


OpenCV Filters - adaptiveBilateralFilter

Applies the adaptive bilateral filter to an image.

C++: void adaptiveBilateralFilter(InputArray src, OutputArray dst, Size ksize, double sigmaSpace, double maxSigmaColor=20.0, Point anchor=Point(-1, -1), int borderType=BORDER_DEFAULT )

Python: cv2.adaptiveBilateralFilter(src, ksize, sigmaSpace[, dst[, maxSigmaColor[, anchor[, borderType]]]]) → dst 

Parameters:
  • src – The source image
  • dst – The destination image; will have the same size and the same type as src
  • ksize – The kernel size. This is the neighborhood where the local variance will be calculated, and where pixels will contribute (in a weighted manner).
  • sigmaSpace – Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels within the ksize neighborhood will influence each other, as long as their colors are close enough (see maxSigmaColor).
  • maxSigmaColor – Maximum allowed color-space sigma; it clamps the value calculated in the ksize neighborhood. A larger value of the parameter means that more dissimilar pixels will be allowed to influence each other.
  • borderType – Pixel extrapolation method.
A main part of the implementation strategy is to load each raw pixel once and reuse it to calculate all pixels in the output (filtered) image that need this pixel value. The math of the filter is that of the usual bilateral filter, except that the color sigma is calculated in the ksize neighborhood and clamped by the optional maxSigmaColor value.
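
As a small illustration of that clamping, here is a minimal sketch (OpenCV 2.4.x only, since this function was removed in later releases; the sigma values are arbitrary example values):

cv::Mat mild, strong;
// A smaller maxSigmaColor clamps the locally estimated color sigma harder -> gentler smoothing.
cv::adaptiveBilateralFilter(src, mild, cv::Size(11, 11), 50.0, 10.0);
// A larger maxSigmaColor lets more dissimilar pixels mix -> stronger smoothing.
cv::adaptiveBilateralFilter(src, strong, cv::Size(11, 11), 50.0, 40.0);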

Reference: OpenCV Documentation -  adaptiveBilateralFilter

Example
Here is sample C++ code, with images, for the OpenCV adaptive bilateral filter.

  string imgFileName = "lena.jpg";

  cv::Mat src = cv::imread(imgFileName);
  if (!src.data){
     cout << "Unable to open file" << endl;
     getchar();
     return 1;
  }

  cv::Mat dst;
  cv::adaptiveBilateralFilter(src, dst, cv::Size(11, 11), 50); // the kernel size (11) must be an odd value

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);

 cv::waitKey(0);

 cv::imwrite("Adaptive Bilateral Filter.jpg", dst);


[Figures: the source image and the adaptive-bilateral-filtered result.]



Download complete Visual Studio project.
