Convert OpenCV Mat to C# Bitmap

Conversion from an OpenCV Mat to a C# Bitmap isn't a difficult task, but it has to be done at the byte level of both structures. Every image is ultimately a set of bytes, so the task is to build a System.Drawing.Bitmap from the bytes of a cv::Mat. You will also need to do this conversion inside a C++/CLI project. Complete instructions can be found in the following two articles.
  1. Call OpenCV functions from C#
  2. Call OpenCV functions from C# with bitmap



A companion article describes how to convert a C# Bitmap to an OpenCV Mat:
Convert C# Bitmap to OpenCV Mat

If you already know how to create and configure a C++/CLI project in Visual Studio, simply use the following function in the C++/CLI project to convert an OpenCV Mat to a C# Bitmap.

System::Drawing::Bitmap^ MatToBitmap(Mat srcImg){
    int stride = srcImg.size().width * srcImg.channels();//calculate the stride
    int hDataCount = srcImg.size().height;

    System::Drawing::Bitmap^ retImg;

    System::IntPtr ptr(srcImg.data);

    //buffer that holds the padded pixel data when the stride has to be adjusted
    uchar *dataPro = NULL;

    if (stride % 4 != 0){//stride is not a multiple of 4
        //make it a multiple of 4 by filling an offset (padding) at the end of each row

        //allocate space to hold the processed (padded) data
        dataPro = new uchar[((srcImg.size().width * srcImg.channels() + 3) & -4) * hDataCount];

        uchar *data = srcImg.ptr();

        //current position on the data array
        int curPosition = 0;
        //current offset
        int curOffset = 0;

        int offsetCounter = 0;

        //iterate through all the bytes in the structure
        for (int r = 0; r < hDataCount; r++){
            //fill the data
            for (int c = 0; c < stride; c++){
                curPosition = (r * stride) + c;

                dataPro[curPosition + curOffset] = data[curPosition];
            }

            //reset offset counter
            offsetCounter = stride;

            //fill the offset
            do{
                curOffset += 1;
                dataPro[curPosition + curOffset] = 0;

                offsetCounter += 1;
            } while (offsetCounter % 4 != 0);
        }

        ptr = (System::IntPtr)dataPro;//point to the new, padded data array

        //round the stride up to the nearest multiple of 4
        stride = (srcImg.size().width * srcImg.channels() + 3) & -4;

        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
    else{

        //no need to add a padding or recalculate the stride
        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
   
    array<System::Byte>^ imageData;
    System::Drawing::Bitmap^ output;

    // Create the byte array.
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream();
        retImg->Save(ms, System::Drawing::Imaging::ImageFormat::Png);
        imageData = ms->ToArray();
        delete ms;
    }

    // Convert back to bitmap
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream(imageData);
        output = (System::Drawing::Bitmap^)System::Drawing::Bitmap::FromStream(ms);
    }

    //free the padded buffer (if one was allocated) now that the pixel data has been copied
    delete[] dataPro;

    return output;
}
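As a usage sketch (the wrapper class, method name, and file path below are hypothetical, not part of the original article), an exported C++/CLI method could load an image with OpenCV, process it, and return the result to C# as a Bitmap:

// requires: #include <msclr/marshal_cppstd.h> for the managed-to-native string conversion
public ref class ImageBridge
{
public:
    static System::Drawing::Bitmap^ LoadAndBlur(System::String^ path)
    {
        // convert the managed path to a std::string that cv::imread understands
        std::string nativePath = msclr::interop::marshal_as<std::string>(path);

        cv::Mat src = cv::imread(nativePath);      // 8-bit, 3-channel BGR image
        cv::Mat blurred;
        cv::GaussianBlur(src, blurred, cv::Size(5, 5), 0);

        // MatToBitmap() above expects a 3-channel 8-bit Mat (Format24bppRgb)
        return MatToBitmap(blurred);
    }
};

From C#, the compiled wrapper assembly can then be referenced and ImageBridge.LoadAndBlur() called like any other .NET method.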


Convert C# Bitmap to OpenCV Mat

Conversion from a C# Bitmap to an OpenCV Mat isn't a difficult task, but it has to be done at the byte level of both structures. Every image is ultimately a set of bytes, so the task is to build a cv::Mat from the bytes of a System.Drawing.Bitmap. You will also need to do this conversion inside a C++/CLI project. Complete instructions can be found in the following two articles.
  1. Call OpenCV functions from C#
  2. Call OpenCV functions from C# with bitmap

A companion article describes how to convert an OpenCV Mat to a C# Bitmap:
Convert OpenCV Mat to C# Bitmap

If you already know how to create and configure a C++/CLI project in Visual Studio, simply use the following function in the C++/CLI project to convert a C# Bitmap to an OpenCV Mat.

Mat BitmapToMat(System::Drawing::Bitmap^ bitmap)
{
    IplImage* tmp;

    System::Drawing::Imaging::BitmapData^ bmData = bitmap->LockBits(System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height), System::Drawing::Imaging::ImageLockMode::ReadWrite, bitmap->PixelFormat);
    if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format8bppIndexed)
    {
        //create only the image header and point it at the locked bitmap bytes (8 bit, 1 channel)
        tmp = cvCreateImageHeader(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 1);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }
    else if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format24bppRgb)
    {
        //create only the image header and point it at the locked bitmap bytes (8 bit, 3 channels)
        tmp = cvCreateImageHeader(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 3);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }
    else
    {
        //unsupported pixel format
        bitmap->UnlockBits(bmData);
        return Mat();
    }

    //copy the pixel data while the bitmap is still locked
    Mat result = cv::cvarrToMat(tmp, true);

    cvReleaseImageHeader(&tmp);
    bitmap->UnlockBits(bmData);

    return result;
}
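As a quick round-trip sketch (the function name and parameter values are illustrative, and it assumes both BitmapToMat() above and MatToBitmap() from the companion article are visible in the same C++/CLI file), a Bitmap can be converted, processed with OpenCV, and converted back:

System::Drawing::Bitmap^ ToEdges(System::Drawing::Bitmap^ input)
{
    cv::Mat src = BitmapToMat(input);            // expects a Format24bppRgb bitmap

    cv::Mat gray, edges, edgesBgr;
    cv::cvtColor(src, gray, CV_BGR2GRAY);        // OpenCV 2.x color-conversion constant
    cv::Canny(gray, edges, 50, 150);
    cv::cvtColor(edges, edgesBgr, CV_GRAY2BGR);  // back to 3 channels for Format24bppRgb

    return MatToBitmap(edgesBgr);
}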

OpenCV Filters - dilate

Dilates an image by using a specific structuring element.

C++: void dilate(InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )

Python: cv2.dilate(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) → dst


Parameters:
  • src – input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F, or CV_64F.
  • dst – output image of the same size and type as src.
  • kernel – structuring element used for dilation; if kernel=Mat(), a 3 x 3 rectangular structuring element is used.
  • anchor – position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
  • iterations – number of times dilation is applied.
  • borderType – pixel extrapolation method (see borderInterpolate() for details).
  • borderValue – border value in case of a constant border (see createMorphologyFilter() for details).


The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

    dst(x, y) = max { src(x + x', y + y') : element(x', y') != 0 }

The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.
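
As a minimal sketch of those two points (the file name is illustrative only): passing an empty kernel selects the default 3 x 3 rectangular element, the destination can be the source itself, and the iterations parameter repeats the dilation:

cv::Mat img = cv::imread("text.jpg");
// empty kernel -> default 3 x 3 rectangular structuring element; in-place, applied twice
cv::dilate(img, img, cv::Mat(), cv::Point(-1, -1), 2);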

Note: An example using the morphological dilate operation can be found at opencv_source_code/samples/cpp/morphology2.cpp

Example
This is sample code (C++), with images, for OpenCV dilate. Since dilation is commonly used on text in optical character recognition, the source image includes some text and shapes.

string imgFileName = "lenaWithText.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

//---------------- create kernels ----------------
//note: getStructuringElement() defines only three shapes (MORPH_RECT, MORPH_CROSS, MORPH_ELLIPSE);
//MORPH_ERODE, MORPH_DILATE and MORPH_OPEN are morphological operation codes that happen to share
//the same numeric values, so they are accepted here but simply reproduce those three shapes
int dilationSize = 2;
cv::Mat kernalMorphCross = cv::getStructuringElement(cv::MORPH_CROSS,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphDilate = cv::getStructuringElement(cv::MORPH_DILATE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphEllipse = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphErode = cv::getStructuringElement(cv::MORPH_ERODE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphOpen = cv::getStructuringElement(cv::MORPH_OPEN,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphRect = cv::getStructuringElement(cv::MORPH_RECT,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));

//---------------- apply dilation with each kernel to the source separately ----------------
   
cv::Mat dstMorphCross;
cv::dilate(src, dstMorphCross, kernalMorphCross);

cv::Mat dstMorphDilate;
cv::dilate(src, dstMorphDilate, kernalMorphDilate);

cv::Mat dstMorphEllipse;
cv::dilate(src, dstMorphEllipse, kernalMorphEllipse);

cv::Mat dstMorphErode;
cv::dilate(src, dstMorphErode, kernalMorphErode);

cv::Mat dstMorphOpen;
cv::dilate(src, dstMorphOpen, kernalMorphOpen);

cv::Mat dstMorphRect;
cv::dilate(src, dstMorphRect, kernalMorphRect);

//---------------- Show filtered images ----------------
cv::namedWindow("Source");
cv::namedWindow("DilateMorphCross");
cv::namedWindow("DilateMorphDilate");
cv::namedWindow("DilateMorphEllipse");
cv::namedWindow("DilateMorphErode");
cv::namedWindow("DilateMorphOpen");
cv::namedWindow("DilateMorphRect");

cv::imshow("Source", src);
cv::imshow("DilateMorphCross", dstMorphCross);
cv::imshow("DilateMorphDilate", dstMorphDilate);
cv::imshow("DilateMorphEllipse", dstMorphEllipse);
cv::imshow("DilateMorphErode", dstMorphErode);
cv::imshow("DilateMorphOpen", dstMorphOpen);
cv::imshow("DilateMorphRect", dstMorphRect);
cv::waitKey(0);

//---------------- Save filtered images ----------------
cv::imwrite("DilateMorphCross.jpg", dstMorphCross);
cv::imwrite("DilateMorphDilate.jpg", dstMorphDilate);
cv::imwrite("DilateMorphEllipse.jpg", dstMorphEllipse);
cv::imwrite("DilateMorphErode.jpg", dstMorphErode);
cv::imwrite("DilateMorphOpen.jpg", dstMorphOpen);
cv::imwrite("DilateMorphRect.jpg", dstMorphRect);

[Result images: the source image alongside the dilated output for each kernel shape - MORPH_CROSS, MORPH_DILATE, MORPH_ELLIPSE, MORPH_ERODE, MORPH_OPEN, and MORPH_RECT.]


Download complete Visual Studio project.


OpenCV Filters - copyMakeBorder

Forms a border around an image.


C++: void copyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, int borderType, const Scalar& value=Scalar() )

Python: cv2.copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value]]) → dst

C: void cvCopyMakeBorder(const CvArr* src, CvArr* dst, CvPoint offset, int bordertype, CvScalar value=cvScalarAll(0) )

Python: cv.CopyMakeBorder(src, dst, offset, bordertype, value=(0, 0, 0, 0)) → None

Parameters:
  • src – Source image.
  • dst – Destination image of the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom).
  • top
  • bottom
  • left
  • right – Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. For example, top=1, bottom=1, left=1, right=1 mean that 1 pixel-wide border needs to be built.
  • borderType – Border type. See borderInterpolate() for details.
  • value – Border value if borderType==BORDER_CONSTANT .


The function copies the source image into the middle of the destination image. The areas to the left, to the right, above and below the copied source image will be filled with extrapolated pixels. This is not what FilterEngine or filtering functions based on it do (they extrapolate pixels on the fly), but what other more complex functions, including your own, may do to simplify image boundary handling.

The function supports the mode when src is already in the middle of dst . In this case, the function does not copy src itself but simply constructs the border, for example:

// let border be the same in all directions
int border=2;
// constructs a larger image to fit both the image and the border
Mat gray_buf(rgb.rows + border*2, rgb.cols + border*2, rgb.depth());
// select the middle part of it w/o copying data
Mat gray(gray_buf, Rect(border, border, rgb.cols, rgb.rows));
// convert image from RGB to grayscale
cvtColor(rgb, gray, CV_RGB2GRAY);
// form a border in-place
copyMakeBorder(gray, gray_buf, border, border,
               border, border, BORDER_REPLICATE);
// now do some custom filtering ...
...



Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED.
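
A short sketch of that flag (the image name and ROI coordinates are illustrative only):

cv::Mat big = cv::imread("lena.jpg");
cv::Mat roi = big(cv::Rect(100, 100, 200, 200));   // src is a ROI of a bigger image

cv::Mat bordered;
// extrapolate from the ROI contents only, ignoring the surrounding pixels of 'big'
cv::copyMakeBorder(roi, bordered, 16, 16, 16, 16, cv::BORDER_REPLICATE | cv::BORDER_ISOLATED);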

Reference: OpenCV Documentation - copyMakeBorder


Example
This is sample code (C++), with images, for OpenCV copyMakeBorder.

string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

Mat dstBorderConstant;
copyMakeBorder(src, dstBorderConstant, 256, 256, 256, 256, BORDER_CONSTANT);

Mat dstBorderDefault;
copyMakeBorder(src, dstBorderDefault, 256, 256, 256, 256, BORDER_DEFAULT);

Mat dstBorderIsolate;
copyMakeBorder(src, dstBorderIsolate, 256, 256, 256, 256, BORDER_ISOLATED);

Mat dstBorderReflect;
copyMakeBorder(src, dstBorderReflect, 256, 256, 256, 256, BORDER_REFLECT);

Mat dstBorderReflect101;
copyMakeBorder(src, dstBorderReflect101, 256, 256, 256, 256, BORDER_REFLECT101);

Mat dstBorderReflect_101;
copyMakeBorder(src, dstBorderReflect_101, 256, 256, 256, 256, BORDER_REFLECT_101);

Mat dstBorderReplicate;
copyMakeBorder(src, dstBorderReplicate, 256, 256, 256, 256, BORDER_REPLICATE);

Mat dstBorderWrap;
copyMakeBorder(src, dstBorderWrap, 256, 256, 256, 256, BORDER_WRAP);

cv::namedWindow("Source", CV_WINDOW_FREERATIO);
   
cv::namedWindow("BorderConstant", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderDefault", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderIsolate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect_101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReplicate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderWrap", CV_WINDOW_FREERATIO);


cv::imshow("Source", src);

cv::imshow("BorderConstant", dstBorderConstant);
cv::imshow("BorderDefault", dstBorderDefault);
cv::imshow("BorderIsolate", dstBorderIsolate);
cv::imshow("BorderReflect", dstBorderReflect);
cv::imshow("BorderReflect101", dstBorderReflect101);
cv::imshow("BorderReflect_101", dstBorderReflect_101);
cv::imshow("BorderReplicate", dstBorderReplicate);
cv::imshow("BorderWrap", dstBorderWrap);
cv::waitKey(0);

cv::imwrite("BorderConstant.jpg", dstBorderConstant);
cv::imwrite("BorderDefault.jpg", dstBorderDefault);
cv::imwrite("BorderIsolate.jpg", dstBorderIsolate);
cv::imwrite("BorderReflect.jpg", dstBorderReflect);
cv::imwrite("BorderReflect101.jpg", dstBorderReflect101);
cv::imwrite("BorderReflect_101.jpg", dstBorderReflect_101);
cv::imwrite("BorderReplicate.jpg", dstBorderReplicate);
cv::imwrite("BorderWrap.jpg", dstBorderWrap);



[Result images: the source image alongside the bordered output for each borderType - BORDER_REFLECT, BORDER_REFLECT101, BORDER_REFLECT_101, BORDER_REPLICATE, BORDER_CONSTANT, BORDER_DEFAULT, BORDER_ISOLATED, and BORDER_WRAP.]


Download complete Visual Studio project.



OpenCV Filters - boxFilter

Blurs an image using the box filter.

C++: void boxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT )

Python: cv2.boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) → dst

Parameters:
  • src – input image.
  • dst – output image of the same size and type as src.
  • ddepth – the output image depth (-1 to use src.depth()).
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • normalize – flag, specifying whether the kernel is normalized by its area or not.
  • borderType – border mode used to extrapolate pixels outside of the image.

The function smoothes an image using the kernel:

    K = alpha * J

where J is a ksize.height x ksize.width matrix of ones, and alpha = 1/(ksize.width*ksize.height) when normalize=true, or alpha = 1 otherwise.

Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral().
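
To make the normalize flag concrete, here is a small sketch (the file name and kernel size are illustrative); the unnormalized sums need a wider output depth such as CV_32F so they don't saturate an 8-bit image:

cv::Mat src = cv::imread("lena.jpg");

cv::Mat mean, sum;
// normalized (default): each output pixel is the mean of its 5 x 5 neighborhood
cv::boxFilter(src, mean, -1, cv::Size(5, 5));
// unnormalized: each output pixel is the raw sum of its 5 x 5 neighborhood
cv::boxFilter(src, sum, CV_32F, cv::Size(5, 5), cv::Point(-1, -1), false);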

Reference: OpenCV Documentation - boxFilter


Example
This is sample code (C++), with images, for the OpenCV box filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::boxFilter(src, dst, -1, cv::Size(16, 16));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Box Filter.jpg", dst);

 return 0;


[Result images: the source image alongside the box-filtered output.]



Download complete Visual Studio project.


OpenCV Filters - buildPyramid

Constructs the Gaussian pyramid for an image.

C++: void buildPyramid(InputArray src, OutputArrayOfArrays dst, int maxlevel, int borderType=BORDER_DEFAULT )

Parameters:
  • src – Source image. Check pyrDown() for the list of supported types.
  • dst – Destination vector of maxlevel+1 images of the same type as src . dst[0] will be the same as src . dst[1] is the next pyramid layer, a smoothed and down-sized src , and so on.
  • maxlevel – 0-based index of the last (the smallest) pyramid layer. It must be non-negative.
  • borderType – Pixel extrapolation method (BORDER_CONSTANT is not supported). See borderInterpolate() for details.

The function constructs a vector of images and builds the Gaussian pyramid by recursively applying pyrDown() to the previously built pyramid layers, starting from dst[0]==src.
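
As a rough sketch of that recursion (illustrative only, not the library's internal implementation; assumes src is a cv::Mat and maxlevel an int):

std::vector<cv::Mat> dst;
dst.push_back(src);                   // dst[0] == src
for (int i = 1; i <= maxlevel; i++){
    cv::Mat down;
    cv::pyrDown(dst[i - 1], down);    // smooth and halve each dimension
    dst.push_back(down);              // dst[i] is the next, smaller pyramid layer
}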

Reference: OpenCV Documentation - buildPyramid


Example
This is sample code (C++), with images, for OpenCV buildPyramid.


string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

int maxVal = 4;
vector<cv::Mat> dstVect;
cv::buildPyramid(src, dstVect, maxVal);

cv::namedWindow("Source");
cv::imshow("Source", src);

string imgName;
for (int i = 0; i < maxVal + 1; i++){
    stringstream ss;
    ss << "Filtered " << i;
    imgName = ss.str();

    cv::namedWindow(imgName);
    cv::imshow(imgName, dstVect[i]);

    ss << ".jpg";
    imgName = ss.str();
    cv::imwrite(imgName, dstVect[i]);
}
cv::waitKey(0);

return 0;


[Result images: the source image (512 x 512) and the pyramid layers of sizes 256 x 256, 128 x 128, 64 x 64, and 32 x 32.]


Download complete Visual Studio project.


OpenCV Filters - bilateralFilter

Applies the bilateral filter to an image.

C++: void bilateralFilter(InputArray src, OutputArray dst, int d, double sigmaColor, double sigmaSpace, int borderType=BORDER_DEFAULT )

Python: cv2.bilateralFilter(src, d, sigmaColor, sigmaSpace[, dst[, borderType]]) → dst


Parameters:
  • src – Source 8-bit or floating-point, 1-channel or 3-channel image.
  • dst – Destination image of the same size and type as src .
  • d – Diameter of each pixel neighborhood that is used during filtering. If it is non-positive, it is computed from sigmaSpace .
  • sigmaColor – Filter sigma in the color space. A larger value of the parameter means that farther colors within the pixel neighborhood (see sigmaSpace ) will be mixed together, resulting in larger areas of semi-equal color.
  • sigmaSpace – Filter sigma in the coordinate space. A larger value of the parameter means that farther pixels will influence each other as long as their colors are close enough (see sigmaColor ). When d>0 , it specifies the neighborhood size regardless of sigmaSpace . Otherwise, d is proportional to sigmaSpace .

The function applies bilateral filtering to the input image, as described in http://www.dai.ed.ac.uk/CVonline/LOCAL_COPIES/MANDUCHI1/Bilateral_Filtering.html. bilateralFilter can reduce unwanted noise very well while keeping edges fairly sharp. However, it is very slow compared to most filters.

Sigma values: For simplicity, you can set the 2 sigma values to be the same. If they are small (< 10), the filter will not have much effect, whereas if they are large (> 150), they will have a very strong effect, making the image look “cartoonish”.

Filter size: Large filters (d > 5) are very slow, so it is recommended to use d=5 for real-time applications, and perhaps d=9 for offline applications that need heavy noise filtering.

This filter does not work in place.
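
A small sketch of those recommendations (the sigma values and file name are illustrative); note that the destination must be a separate Mat because the filter does not work in place:

cv::Mat src = cv::imread("lena.jpg");
cv::Mat dstRealtime, dstOffline;

// d = 5: small neighborhood, fast enough for real-time use
cv::bilateralFilter(src, dstRealtime, 5, 75, 75);

// d = 9 with larger sigmas: heavier smoothing for offline processing
cv::bilateralFilter(src, dstOffline, 9, 150, 150);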

Reference: OpenCV Documentation - bilateralFilter

Example
This is sample code (C++), with images, for OpenCV bilateralFilter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::bilateralFilter(src, dst, 20, 100, 100);

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("BilateralFilter.jpg", dst);

 return 0;

[Result images: the source image alongside the bilateral-filtered output.]




Download complete Visual Studio project.
