Convert Opencv Mat to C# Bitmap

Converting an OpenCV Mat to a C# Bitmap isn't a difficult task, but the conversion has to be done at the primitive level of both OpenCV Mat and C# Bitmap. Every image is ultimately just a set of bytes, so the System.Drawing.Bitmap has to be created from the bytes of the cv::Mat. You will also need to do this conversion in a CLI project. Complete instructions can be found in the following two articles.
  1. Call Opencv functions from C#
  2. Call Opencv functions from C# with bitmap



The reverse conversion is covered in this companion article:
Convert C# Bitmap to Opencv Mat

If you already know how to create and configure a CLI project in Visual Studio, simply use the following function in the CLI project to convert an OpenCV Mat to a C# Bitmap.

System::Drawing::Bitmap^ MatToBitmap(Mat srcImg){
    int stride = srcImg.size().width * srcImg.channels();//calc the stride
    int hDataCount = srcImg.size().height;
   
    System::Drawing::Bitmap^ retImg;
       
    System::IntPtr ptr(srcImg.data);
   
    //create a pointer with Stride
    if (stride % 4 != 0){//is the stride not a multiple of 4?
        //make it a multiple of 4 by adding padding bytes to the end of each row

        //buffer to hold the padded data
        uchar *dataPro = new uchar[((srcImg.size().width * srcImg.channels() + 3) & -4) * hDataCount];

        uchar *data = srcImg.ptr();

        //current position on the data array
        int curPosition = 0;
        //current offset
        int curOffset = 0;

        int offsetCounter = 0;

        //iterate through all the bytes in the structure
        for (int r = 0; r < hDataCount; r++){
            //fill the data
            for (int c = 0; c < stride; c++){
                curPosition = (r * stride) + c;

                dataPro[curPosition + curOffset] = data[curPosition];
            }

            //reset offset counter
            offsetCounter = stride;

            //fill the offset
            do{
                curOffset += 1;
                dataPro[curPosition + curOffset] = 0;

                offsetCounter += 1;
            } while (offsetCounter % 4 != 0);
        }

        ptr = (System::IntPtr)dataPro;//set the data pointer to new/modified data array

        //round the stride up to the nearest multiple of 4
        stride = (srcImg.size().width * srcImg.channels() + 3) & -4;

        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
    else{

        //no need to add a padding or recalculate the stride
        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
   
    array<System::Byte>^ imageData;
    System::Drawing::Bitmap^ output;

    // Create the byte array.
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream();
        retImg->Save(ms, System::Drawing::Imaging::ImageFormat::Png);
        imageData = ms->ToArray();
        delete ms;
    }

    // Convert back to bitmap
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream(imageData);
        output = (System::Drawing::Bitmap^)System::Drawing::Bitmap::FromStream(ms);
    }

    return output;
}
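
A minimal usage sketch (the file names are hypothetical, and this assumes the function lives inside your C++/CLI wrapper class):

cv::Mat img = cv::imread("lena.jpg");//hypothetical input file
if (img.data){
    System::Drawing::Bitmap^ bmp = MatToBitmap(img);//convert to a managed Bitmap
    bmp->Save("lena_copy.png");//the Bitmap can now be used like any other Bitmap in C#
}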

Convert C# Bitmap to Opencv Mat

Converting a C# Bitmap to an OpenCV Mat isn't a difficult task, but the conversion has to be done at the primitive level of both OpenCV Mat and C# Bitmap. Every image is ultimately just a set of bytes, so the cv::Mat has to be created from the bytes of the System.Drawing.Bitmap. You will also need to do this conversion in a CLI project. Complete instructions can be found in the following two articles.
  1. Call Opencv functions from C#
  2. Call Opencv functions from C# with bitmap

The reverse conversion is covered in this companion article:
Convert Opencv Mat to C# Bitmap

If you already know how to create and configure a CLI project in Visual Studio, simply use the following function in the CLI project to convert a C# Bitmap to an OpenCV Mat.

Mat BitmapToMat(System::Drawing::Bitmap^ bitmap)
{
    IplImage* tmp = nullptr;//only 8-bit indexed and 24-bit RGB bitmaps are handled below

    System::Drawing::Imaging::BitmapData^ bmData = bitmap->LockBits(System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height), System::Drawing::Imaging::ImageLockMode::ReadWrite, bitmap->PixelFormat);
    if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format8bppIndexed)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 1);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    else if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format24bppRgb)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 3);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    bitmap->UnlockBits(bmData);

    return Mat(tmp);
}
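
A minimal usage sketch (the file name is hypothetical; this assumes the call is made from inside the same C++/CLI wrapper):

System::Drawing::Bitmap^ bmp = gcnew System::Drawing::Bitmap("input.jpg");//hypothetical input file
cv::Mat mat = BitmapToMat(bmp);//convert to cv::Mat
cv::imshow("Converted", mat);//use the Mat with any OpenCV function
cv::waitKey(0);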

OpenCV Filters - dilate

Dilates an image by using a specific structuring element.

C++: void dilate(InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )

Python: cv2.dilate(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) → dst


Parameters:
  • src – input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
  • dst – output image of the same size and type as src.
  • element – structuring element used for dilation; if element=Mat() , a 3 x 3 rectangular structuring element is used.
  • anchor – position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
  • iterations – number of times dilation is applied.
  • borderType – pixel extrapolation method (see borderInterpolate() for details).
  • borderValue – border value in case of a constant border (see createMorphologyFilter() for details).


The function dilates the source image using the specified structuring element, which determines the shape of the pixel neighborhood over which the maximum is taken.
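
Written out in plain text, the operation from the OpenCV documentation is:

dst(x, y) = max over all (x', y') with element(x', y') != 0 of src(x + x', y + y')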

The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.
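
For example, a minimal in-place sketch (assuming img is an already loaded cv::Mat; the kernel and iteration count are illustrative):

//empty Mat() means the default 3 x 3 rectangular element, anchored at its center, applied 3 times, in place
cv::dilate(img, img, cv::Mat(), cv::Point(-1, -1), 3);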

Note: An example using the morphological dilate operation can be found at opencv_source_code/samples/cpp/morphology2.cpp

Example
This is sample code (C++) with images for the OpenCV dilate function. Since dilation is commonly used in optical character processing, the source image includes some text and shapes.

string imgFileName = "lenaWithText.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

//---------------- create kernels ----------------
int dilationSize = 2;
cv::Mat kernalMorphCross = cv::getStructuringElement(cv::MORPH_CROSS,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphDilate = cv::getStructuringElement(cv::MORPH_DILATE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphEllipse = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphErode = cv::getStructuringElement(cv::MORPH_ERODE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphOpen = cv::getStructuringElement(cv::MORPH_OPEN,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphRect = cv::getStructuringElement(cv::MORPH_RECT,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));

//---------------- apply each dilate kernel to the source separately ----------------
   
cv::Mat dstMorphCross;
cv::dilate(src, dstMorphCross, kernalMorphCross);

cv::Mat dstMorphDilate;
cv::dilate(src, dstMorphDilate, kernalMorphDilate);

cv::Mat dstMorphEllipse;
cv::dilate(src, dstMorphEllipse, kernalMorphEllipse);

cv::Mat dstMorphErode;
cv::dilate(src, dstMorphErode, kernalMorphErode);

cv::Mat dstMorphOpen;
cv::dilate(src, dstMorphOpen, kernalMorphOpen);

cv::Mat dstMorphRect;
cv::dilate(src, dstMorphRect, kernalMorphRect);

//---------------- Show filtered images ----------------
cv::namedWindow("Source");
cv::namedWindow("DilateMorphCross");
cv::namedWindow("DilateMorphDilate");
cv::namedWindow("DilateMorphEllipse");
cv::namedWindow("DilateMorphErode");
cv::namedWindow("DilateMorphOpen");
cv::namedWindow("DilateMorphRect");

cv::imshow("Source", src);
cv::imshow("DilateMorphCross", dstMorphCross);
cv::imshow("DilateMorphDilate", dstMorphDilate);
cv::imshow("DilateMorphEllipse", dstMorphEllipse);
cv::imshow("DilateMorphErode", dstMorphErode);
cv::imshow("DilateMorphOpen", dstMorphOpen);
cv::imshow("DilateMorphRect", dstMorphRect);
cv::waitKey(0);

//---------------- Save filtered images ----------------
cv::imwrite("DilateMorphCross.jpg", dstMorphCross);
cv::imwrite("DilateMorphDilate.jpg", dstMorphDilate);
cv::imwrite("DilateMorphEllipse.jpg", dstMorphEllipse);
cv::imwrite("DilateMorphErode.jpg", dstMorphErode);
cv::imwrite("DilateMorphOpen.jpg", dstMorphOpen);
cv::imwrite("DilateMorphRect.jpg", dstMorphRect);

Filtered image vs. source image for each kernel shape: MORPH_CROSS, MORPH_DILATE, MORPH_ELLIPSE, MORPH_ERODE, MORPH_OPEN and MORPH_RECT.

Download complete Visual Studio project.

OpenCV Filters - copyMakeBorder

Forms a border around an image.


C++: void copyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, int borderType, const Scalar& value=Scalar() )

Python: cv2.copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value]]) → dst

C: void cvCopyMakeBorder(const CvArr* src, CvArr* dst, CvPoint offset, int bordertype, CvScalar value=cvScalarAll(0) )

Python: cv.CopyMakeBorder(src, dst, offset, bordertype, value=(0, 0, 0, 0)) → None

Parameters:
  • src – Source image.
  • dst – Destination image of the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom).
  • top
  • bottom
  • left
  • right – Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. For example, top=1, bottom=1, left=1, right=1 mean that 1 pixel-wide border needs to be built.
  • borderType – Border type. See borderInterpolate() for details.
  • value – Border value if borderType==BORDER_CONSTANT .


The function copies the source image into the middle of the destination image. The areas to the left, to the right, above and below the copied source image will be filled with extrapolated pixels. This is not what FilterEngine or filtering functions based on it do (they extrapolate pixels on the fly), but what other more complex functions, including your own, may do to simplify image boundary handling.

The function supports the mode when src is already in the middle of dst . In this case, the function does not copy src itself but simply constructs the border, for example:

// let border be the same in all directions
int border=2;
// constructs a larger image to fit both the image and the border
Mat gray_buf(rgb.rows + border*2, rgb.cols + border*2, rgb.depth());
// select the middle part of it w/o copying data
Mat gray(gray_buf, Rect(border, border, rgb.cols, rgb.rows));
// convert image from RGB to grayscale
cvtColor(rgb, gray, CV_RGB2GRAY);
// form a border in-place
copyMakeBorder(gray, gray_buf, border, border,
               border, border, BORDER_REPLICATE);
// now do some custom filtering ...
...



Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED.
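
A minimal sketch of that case (the image name and ROI are hypothetical):

cv::Mat big = cv::imread("lena.jpg");
cv::Mat roi = big(cv::Rect(64, 64, 128, 128));//src is an ROI of a bigger image
cv::Mat padded;
//without BORDER_ISOLATED the border may be taken from pixels of 'big' outside the ROI;
//with it, the ROI is treated as a standalone image and the border is extrapolated
cv::copyMakeBorder(roi, padded, 16, 16, 16, 16, cv::BORDER_REPLICATE | cv::BORDER_ISOLATED);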

Reference: OpenCV Documentation - copyMakeBorder


Example
This is sample code (C++) with images for the OpenCV copyMakeBorder function.

string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

Mat dstBorderConstant;
copyMakeBorder(src, dstBorderConstant, 256, 256, 256, 256, BORDER_CONSTANT);

Mat dstBorderDefault;
copyMakeBorder(src, dstBorderDefault, 256, 256, 256, 256, BORDER_DEFAULT);

Mat dstBorderIsolate;
copyMakeBorder(src, dstBorderIsolate, 256, 256, 256, 256, BORDER_ISOLATED);

Mat dstBorderReflect;
copyMakeBorder(src, dstBorderReflect, 256, 256, 256, 256, BORDER_REFLECT);

Mat dstBorderReflect101;
copyMakeBorder(src, dstBorderReflect101, 256, 256, 256, 256, BORDER_REFLECT101);

Mat dstBorderReflect_101;
copyMakeBorder(src, dstBorderReflect_101, 256, 256, 256, 256, BORDER_REFLECT_101);

Mat dstBorderReplicate;
copyMakeBorder(src, dstBorderReplicate, 256, 256, 256, 256, BORDER_REPLICATE);

Mat dstBorderWrap;
copyMakeBorder(src, dstBorderWrap, 256, 256, 256, 256, BORDER_WRAP);

cv::namedWindow("Source", CV_WINDOW_FREERATIO);
   
cv::namedWindow("BorderConstant", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderDefault", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderIsolate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect_101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReplicate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderWrap", CV_WINDOW_FREERATIO);


cv::imshow("Source", src);

cv::imshow("BorderConstant", dstBorderConstant);
cv::imshow("BorderDefault", dstBorderDefault);
cv::imshow("BorderIsolate", dstBorderIsolate);
cv::imshow("BorderReflect", dstBorderReflect);
cv::imshow("BorderReflect101", dstBorderReflect101);
cv::imshow("BorderReflect_101", dstBorderReflect_101);
cv::imshow("BorderReplicate", dstBorderReplicate);
cv::imshow("BorderWrap", dstBorderWrap);
cv::waitKey(0);

cv::imwrite("BorderConstant.jpg", dstBorderConstant);
cv::imwrite("BorderDefault.jpg", dstBorderDefault);
cv::imwrite("BorderIsolate.jpg", dstBorderIsolate);
cv::imwrite("BorderReflect.jpg", dstBorderReflect);
cv::imwrite("BorderReflect101.jpg", dstBorderReflect101);
cv::imwrite("BorderReflect_101.jpg", dstBorderReflect_101);
cv::imwrite("BorderReplicate.jpg", dstBorderReplicate);
cv::imwrite("BorderWrap.jpg", dstBorderWrap);



Filtered image vs. source image for each borderType: BORDER_REFLECT, BORDER_REFLECT101, BORDER_REFLECT_101, BORDER_REPLICATE, BORDER_CONSTANT, BORDER_DEFAULT, BORDER_ISOLATED and BORDER_WRAP.



Download complete Visual Studio project.

OpenCV Filters - boxFilter

Blurs an image using the box filter.

C++: void boxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT )

Python: cv2.boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) → dst

Parameters:
  • src – input image.
  • dst – output image of the same size and type as src.
  • ddepth – the output image depth (-1 to use src.depth()).
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • normalize – flag, specifying whether the kernel is normalized by its area or not.
  • borderType – border mode used to extrapolate pixels outside of the image.

The function smoothes an image using the kernel K = alpha * ones(ksize.height, ksize.width), where alpha = 1/(ksize.width*ksize.height) when normalize=true, and alpha = 1 otherwise.

An unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral().
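
For example, a minimal sketch of an unnormalized box filter used to compute local pixel sums (the parameters are illustrative):

cv::Mat localSums;
//normalize = false, output depth CV_32F so the per-window sums do not overflow 8-bit values
cv::boxFilter(src, localSums, CV_32F, cv::Size(5, 5), cv::Point(-1, -1), false, cv::BORDER_DEFAULT);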

Reference: OpenCV Documentation - boxFilter


Example
This is sample code (C++) with images for the OpenCV box filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::boxFilter(src, dst, -1, cv::Size(16, 16));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Box Filter.jpg", dst);

 return 0;


Filtered image vs. source image.



Download complete Visual Studio project.

OpenCV Filters - buildPyramid

Constructs the Gaussian pyramid for an image.

C++: void buildPyramid(InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT )

Parameters:
  • src – Source image. Check pyrDown() for the list of supported types.
  • dst – Destination vector of maxlevel+1 images of the same type as src . dst[0] will be the same as src . dst[1] is the next pyramid layer, a smoothed and down-sized src , and so on.
  • maxlevel – 0-based index of the last (the smallest) pyramid layer. It must be non-negative.
  • borderType – Pixel extrapolation method (BORDER_CONSTANT is not supported). See borderInterpolate() for details.

The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown() to the previously built pyramid layers, starting from dst[0]==src.
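
Conceptually, this is roughly equivalent to the following sketch (assuming src is a cv::Mat and maxlevel a non-negative int):

std::vector<cv::Mat> pyr;
pyr.push_back(src);//dst[0] == src
for (int i = 1; i <= maxlevel; i++){
    cv::Mat down;
    cv::pyrDown(pyr[i - 1], down);//each level is a smoothed, half-sized copy of the previous one
    pyr.push_back(down);
}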

Reference: OpenCV Documentation - buildPyramid


Example
This is sample code (C++) with images for the OpenCV buildPyramid function.


string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

int maxVal = 4;
vector<cv::Mat> dstVect;
cv::buildPyramid(src, dstVect, maxVal);

cv::namedWindow("Source");
cv::imshow("Source", src);

string imgName;
for (int i = 0; i < maxVal + 1; i++){
    stringstream ss;
    ss << "Filtered " << i;
    imgName = ss.str();

    cv::namedWindow(imgName);
    cv::imshow(imgName, dstVect[i]);

    ss << ".jpg";
    imgName = ss.str();
    cv::imwrite(imgName, dstVect[i]);
}
cv::waitKey(0);

return 0;


Pyramid levels vs. source image. Image sizes: 512 x 512 (source), 256 x 256, 128 x 128, 64 x 64 and 32 x 32.



Download complete Visual Studio project.

OpenCV Filters - bilateralFilter

Applies the bilateral filter to an image.

C++: void bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT )

Python: cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) → dst


Parameters:
  • src – Source 8-bit or floating-point, 1-channel or 3-channel image.
  • dst – Destination image of the same size and type as src .
  • d – Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
  • sigmaColor – Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
  • sigmaSpace – Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.

Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

This filter does not work in place.
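
Following the guidance above, a lighter-weight call might look like this (a sketch; the parameters are illustrative and differ from the example below):

cv::Mat smoothed;//separate output Mat, since the filter cannot run in place
cv::bilateralFilter(src, smoothed, 9, 75, 75);//d = 9 with moderate sigma values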

Reference: OpenCV Documentation -  bilateralFilter

Example
This is sample code (C++) with images for the OpenCV bilateral filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::bilateralFilter(src, dst, 20, 100, 100);

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("BilateralFilter.jpg", dst);

 return 0;

Filtered image vs. source image.




Download complete Visual Studio project.

OpenCV Filters - Blur

Blurs an image using the normalized box filter.

C++: void blur(InputArray src, OutputArray dst, Size ksize, Point anchor=Point(-1,-1), int borderType=BORDER_DEFAULT ) 
Python: cv2.blur(src, ksize[, dst[, anchor[, borderType]]]) → dst

Parameters:
  • src – input image; it can have any number of channels, which are processed independently, but the depth should be CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
  • dst – output image of the same size and type as src.
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • borderType – border mode used to extrapolate pixels outside of the image.
The function smoothes an image using the kernel K = (1 / (ksize.width * ksize.height)) * ones(ksize.height, ksize.width), i.e. every kernel element equals 1/(ksize.width*ksize.height).

The call blur(src, dst, ksize, anchor, borderType) is equivalent to boxFilter(src, dst, src.type(), ksize, anchor, true, borderType).
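
A minimal sketch that demonstrates the equivalence (assuming src is a loaded cv::Mat):

cv::Mat a, b;
cv::blur(src, a, cv::Size(5, 5));
cv::boxFilter(src, b, src.type(), cv::Size(5, 5), cv::Point(-1, -1), true, cv::BORDER_DEFAULT);
//cv::norm(a, b, cv::NORM_INF) should be 0: the two outputs are identical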

Reference: OpenCV Documentation - blur

Example
This is sample code (C++) with images for the OpenCV blur filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::blur(src, dst, cv::Size(19, 19));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Blur.jpg", dst);

 return 0;



Filtered image vs. source image.



Download complete Visual Studio project.

OpenCV Filters - adaptiveBilateralFilter

Applies the adaptive bilateral filter to an image.

C++: void adaptiveBilateralFilter(InputArray src, OutputArray dst, Size ksize, double sigmaSpace, double maxSigmaColor=20.0, Point anchor=Point(-1, -1), int borderType=BORDER_DEFAULT )

Python: cv2.adaptiveBilateralFilter(src, ksize, sigmaSpace[, dst[, maxSigmaColor[, anchor[, borderType]]]]) → dst 

Parameters:
  • src – The source image
  • dst – The destination image; will have the same size and the same type as src
  • ksize – The kernel size. This is the neighborhood where the local variance will be calculated, and where pixels will contribute (in a weighted manner).
  • sigmaSpace – Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other (as long as their colors are close enough; see sigmaColor). Here the neighborhood size is specified by ksize rather than derived from sigmaSpace.
  • maxSigmaColor – Maximum allowed sigma color (will clamp the value calculated in the ksize neighborhood). A larger value of the parameter means that more dissimilar pixels will influence each other (as long as their colors are close enough; see sigmaColor).
  • borderType – Pixel extrapolation method.
A main part of our strategy will be to load each raw pixel once, and reuse it to calculate all pixels in the output (filtered) image that need this pixel value. The math of the filter is that of the usual bilateral filter, except that the sigma color is calculated in the neighborhood, and clamped by the optional input value.

Reference: OpenCV Documentation -  adaptiveBilateralFilter

Example
This is sample code (C++) with images for the OpenCV adaptive bilateral filter.

  string imgFileName = "lena.jpg";

  cv::Mat src = cv::imread(imgFileName);
  if (!src.data){
     cout << "Unable to open file" << endl;
     getchar();
     return 1;
  }

  cv::Mat dst;
  cv::adaptiveBilateralFilter(src, dst, cv::Size(11, 11), 50);//kernel size (11) must be an odd value

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);

 cv::waitKey(0);

 cv::imwrite("Adaptive Bilateral Filter.jpg", dst);


Filtered image vs. source image.



Download complete Visual Studio project.

Call OpenCV functions from C#.net (Bitmap to Mat and Mat to Bitmap)


This is the second article in the series that answers the following question: how to call OpenCV functions from C#.net or VB.net. Specifically, this article describes how to pass a System.Drawing.Bitmap to OpenCV and get the resulting image back from OpenCV as a System.Drawing.Bitmap.


Note that System.Drawing.Bitmap is the class that lets you manipulate images in C#, while OpenCV treats images as cv::Mat (a matrix). Therefore we need a way to convert from Bitmap to Mat and vice versa in order to process images and show the processed results. This is where the wrapper comes in. For more details about wrappers, please refer to the previous article.


From Previous Article...
So now we are going to create this wrapper for our application. Since we are dealing with the .NET Framework, we can use CLR (Common Language Runtime) to create this wrapper. First you have to create a CLR project in Visual Studio. This post describes how to call OpenCV functions from WinForms/C#, apply an OpenCV filter to an image, and show the OpenCV window from WinForms.


Download complete Visual Studio project.

Step 1 - Create CLI Project

First of all, we need a CLI project from which we can call C++ functions from .NET. You can follow the steps in the previous article to create a CLI project.

Step 2 - Create converter function from Bitmap to Mat

Now we need to convert a System.Drawing.Bitmap to a cv::Mat. To do this conversion, we need to get down to the primitive level of both data types. That means we can think of every image as being created from a set of bytes, and bytes can live in both C++ and C#. Therefore the cv::Mat should be created from the bytes of the Bitmap: copy all the bytes from the Bitmap into the Mat, and the result is a Mat built from the Bitmap. Use the following function to do this conversion.

Mat BitmapToMat(System::Drawing::Bitmap^ bitmap)
{
    IplImage* tmp = nullptr;//only 8-bit indexed and 24-bit RGB bitmaps are handled below

    System::Drawing::Imaging::BitmapData^ bmData = bitmap->LockBits(System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height), System::Drawing::Imaging::ImageLockMode::ReadWrite, bitmap->PixelFormat);
    if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format8bppIndexed)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 1);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    else if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format24bppRgb)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 3);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    bitmap->UnlockBits(bmData);

    return Mat(tmp);
}

Step 3 - Add System.Drawing namespace reference.

Once you add the above function to your CLI project, you will get an error on System::Drawing::Bitmap. The reason for this error is that the project has no reference to System.Drawing. Follow the steps below to add the reference.

Open the project properties of the CLI project.

Select Common Properties, then References, and click Add New Reference... A window opens where you can select DLL files to add as references.

Select Framework under the Assembly category, then check System.Drawing.

Now the project has referenced System.Drawing, and you should not see any compile errors in the function.
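
Alternatively (a sketch; this assumes the compiler can locate the framework assemblies on its default #using search path), you can reference the assembly directly from the source file instead of through the project properties:

//at the top of the C++/CLI source file
#using <System.Drawing.dll>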

Step 4 - Create converter function from Mat to Bitmap

As with the previous conversion, we need to work at the primitive level. That means we need to reconstruct the Bitmap from the bytes of the Mat's image data. Use the following function to convert from cv::Mat to System.Drawing.Bitmap.

System::Drawing::Bitmap^ MatToBitmap(Mat srcImg){
    int stride = srcImg.size().width * srcImg.channels();//calc the stride
    int hDataCount = srcImg.size().height;
   
    System::Drawing::Bitmap^ retImg;
       
    System::IntPtr ptr(srcImg.data);
   
    //create a pointer with Stride
    if (stride % 4 != 0){//is the stride not a multiple of 4?
        //make it a multiple of 4 by adding padding bytes to the end of each row

        //buffer to hold the padded data
        uchar *dataPro = new uchar[((srcImg.size().width * srcImg.channels() + 3) & -4) * hDataCount];

        uchar *data = srcImg.ptr();

        //current position on the data array
        int curPosition = 0;
        //current offset
        int curOffset = 0;

        int offsetCounter = 0;

        //iterate through all the bytes in the structure
        for (int r = 0; r < hDataCount; r++){
            //fill the data
            for (int c = 0; c < stride; c++){
                curPosition = (r * stride) + c;

                dataPro[curPosition + curOffset] = data[curPosition];
            }

            //reset offset counter
            offsetCounter = stride;

            //fill the offset
            do{
                curOffset += 1;
                dataPro[curPosition + curOffset] = 0;

                offsetCounter += 1;
            } while (offsetCounter % 4 != 0);
        }

        ptr = (System::IntPtr)dataPro;//set the data pointer to new/modified data array

        //round the stride up to the nearest multiple of 4
        stride = (srcImg.size().width * srcImg.channels() + 3) & -4;

        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
    else{

        //no need to add a padding or recalculate the stride
        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
   
    array<System::Byte>^ imageData;
    System::Drawing::Bitmap^ output;

    // Create the byte array.
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream();
        retImg->Save(ms, System::Drawing::Imaging::ImageFormat::Png);
        imageData = ms->ToArray();
        delete ms;
    }

    // Convert back to bitmap
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream(imageData);
        output = (System::Drawing::Bitmap^)System::Drawing::Bitmap::FromStream(ms);
    }

    return output;
}



Now we can convert a cv::Mat to a System.Drawing.Bitmap. Call the BitmapToMat() function to convert a C# Bitmap, and use the resulting Mat for image processing with OpenCV; OpenCV produces the processed image as a Mat, and the MatToBitmap() function returns it to C# as a Bitmap. You can then treat this processed Bitmap as a normal Bitmap in C#.

Step 5 - Use converter functions and do image processing.

You can use the above two functions together with your own image processing code to create a processed image from an input image. Here, I apply the OpenCV "medianBlur" filter to a C# Bitmap image. This sample code shows how to use the conversion functions along with some OpenCV functions.


System::Drawing::Bitmap^ MyOpenCvWrapper::ApplyFilter(System::Drawing::Bitmap^ bitmap){
    Mat image = BitmapToMat(bitmap);//convert Bitmap to Mat
    if (!image.data){
        return nullptr;
    }

    Mat dstImage;//destination image

    //apply the Filter
    medianBlur(image, dstImage, 25);

    //convert Mat to Bitmap
    System::Drawing::Bitmap^ output = MatToBitmap(dstImage);

    return output;
}

Step 6 - Call from C#

Now you can call this function from C# by passing a Bitmap object to the method; it returns the filtered image as a Bitmap, which you can then use as a normal Bitmap in C#.

//open jpg file as Bitmap
Bitmap img = (Bitmap)Bitmap.FromFile(@"C:\Users\Public\Pictures\Sample Pictures\Tulips.jpg");

OpenCvDotNet.MyOpenCvWrapper obj = new OpenCvDotNet.MyOpenCvWrapper();
Bitmap output = obj.ApplyFilter(img);//call opencv functions and get filtered image

output.Save("test.jpg");//save processed image

If you put this code in an event handler in C#, it looks like the following.

Step 7 - Use returned Bitmap from OpenCV in C#

You can use the returned Bitmap as a normal Bitmap in C#, for example by setting it as the image of a PictureBox. The following code opens a Bitmap from a file, processes it in OpenCV to apply the filter, and shows the result in a C# PictureBox control.

private void btnOpen_Click(object sender, EventArgs e)
{
    //allow user to open jpg file
    OpenFileDialog dlogOpen = new OpenFileDialog();
    dlogOpen.Filter = "Jpg Files|*.jpg";
    if (dlogOpen.ShowDialog() != System.Windows.Forms.DialogResult.OK)
        return;

    //open jpg file as Bitmap
    Bitmap img = (Bitmap)Bitmap.FromFile(dlogOpen.FileName);

    pbSrcImg.Image = img;//set picture box image to UI

    OpenCvDotNet.MyOpenCvWrapper processor = new OpenCvDotNet.MyOpenCvWrapper();
    Bitmap processedImg = processor.ApplyFilter(img);//call opencv functions and get filtered image

    pbDstImage.Image = processedImg;//set processed image to picture box
}

Here pbSrcImg and pbDstImage are PictureBox UI controls in C#. Once you open a jpg image, the UI will look as follows.



Download complete Visual Studio project.

