Simple AForge.net tutorial - getting started with AForge


AForge.NET is a powerful C# framework designed for the fields of Computer Vision and Artificial Intelligence: image processing, neural networks, genetic algorithms, machine learning, robotics, etc.

This is a simple and quick tutorial that describes how to set up the Visual Studio environment to work with AForge.net. At the end of this tutorial you will have a C# project with all the required dll files for AForge.net and a few lines of code that convert an RGB C# Bitmap image to a grayscale image.



Download complete Visual Studio project.



Step 1 - Create new project

Open the Visual Studio IDE and create a new project: go to File-->New-->Project

Create a C# WinForms application project by selecting Windows Forms Application from the list, then give it a proper name and a folder to save the project.

Step 2 - Add references for AForge.net

Open the Solution Explorer and right-click on your project, then select Manage NuGet Packages...
This will open a new window to manage your references via NuGet. NuGet is a nice way to keep your references organized, but it requires an internet connection. Therefore make sure your dev machine is connected to the internet before you try the next step.

Now, simply search for aforge in the search box and click the Install button on the AForge.Imaging library. It will take a few seconds to download the library and all of its dependencies into your project.

Note that the coolest thing about NuGet is that you do not need to worry about the dependencies of a library; it takes care of all the libraries required by the one you want to install.

After you are done with the installation, the References of your project will look like this:

Step 3 - Create winform

Now, let's build some UI to do our magic. Open the Toolbox and draw a button on the form. You can give it a proper name and text.

I set the text of my button to Process and named it btnProcess. Let's write some code in the next step.

Step 4 - Code it !!!

Double-click on the Process button in your WinForm designer; Visual Studio will open the code view and automatically add a Click event handler for your button. We will do our coding inside this click event, so the code executes when you click the Process button.

Since this code needs to convert an RGB image to grayscale, we will apply the AForge.net filter called Grayscale. Therefore we need to create the filter first.

Type Grayscale in the code and Visual Studio will give you an error saying "The type or namespace name 'Grayscale' could not be found (are you missing a using directive or an assembly reference?)". That means we have not imported the AForge namespace into this file yet.
To fix this error, you can either type using AForge.Imaging.Filters; at the top of the file, or right-click on Grayscale and select Resolve-->using AForge.Imaging.Filters;
In either case your code should look like this, and note that you will now have using AForge.Imaging.Filters; at the top of your file.

Now, type the following code to open an image, apply the grayscale filter, and save the result back to disk.
Grayscale filter = new Grayscale(0.2, 0.2, 0.2);

Bitmap img = (Bitmap)Image.FromFile("test.jpg");
Bitmap grayImage = filter.Apply(img);
grayImage.Save("grayImage.jpg");

Code Explained...
Grayscale filter = new Grayscale(0.2, 0.2, 0.2);
This creates the AForge filter named Grayscale with the red, green, and blue coefficients all set to 0.2. You can change these values and test; different coefficients give different results. (For a natural-looking grayscale the coefficients are usually chosen to sum to about 1, for example the BT.709 values 0.2125, 0.7154, and 0.0721.)



Bitmap img = (Bitmap)Image.FromFile("test.jpg");
Here, I am opening a jpg file into a System.Drawing.Bitmap object. Note that if your image is in a different location, you need to give the full path of the jpg file instead of test.jpg.


Bitmap grayImage = filter.Apply(img);
This is where the magic happens: img is our source image, and Apply returns a new Bitmap with the filter applied. So grayImage is the output image.


grayImage.Save("grayImage.jpg");
This will save the output image to the disk, so we can open it as a normal jpg image and view the result.

Now, you can build your project. Go to Build-->Build Solution from the main menu.

If everything went well without any errors in your code, you will see a success message in the Output window as follows:

Step 5 - One more before run...

Note that all the paths to our jpg files are relative paths, which means they are not full paths to the files. They are resolved relative to the directory where our exe file is located; in this case that is the bin/debug folder of the Visual Studio project. Therefore go to your project folder and put the test.jpg file into the bin/debug folder as shown here:

Step 6 - Run the code

OK, now your project is ready to run and test. Go to Debug-->Start Debugging or press F5 to run your project.

Then you will see your WinForm; click on the Process button.

If everything went well, you will see a jpg file named grayImage.jpg in your bin/debug folder; it is the grayscale version of the test.jpg image.



Download complete Visual Studio project.


(Images: the source image and the resulting grayscale image.)
Continue Reading...

Debug PHP code in Netbeans

Netbeans is a powerful IDE for PHP and many other development environments. Is it possible to debug your PHP project in Netbeans, run your PHP code line by line, and look at your variable values in real time? The answer is yes! You can even set multiple watch expressions to examine variable values while executing your PHP code line by line. You just need to follow these 4 steps to enable PHP debugging in your Netbeans IDE!

Note - Netbeans version 8.0.2 was used for this article


Step1: Set php.exe file location on your web server.

Click on Tools->Options to open the Options dialog box in Netbeans, then go to the PHP tab and set the php.exe file location. (I am using the xampp server, so my path is "F:\xampp\php\php.exe"; if you use xampp your path will be similar, and if you use the wamp server your path will look like "F:\wamp\bin\php\php5.4.12\php.exe".)


Step2: Set up debugging in Netbeans

Go to the Debugging tab of the Options dialog box and set the values as shown in the following screenshot. Then click Apply and OK to save your changes.

Step3: Set up the php.ini file

Copy the following lines to the end of your php.ini file.
Note - you can find this file at a location similar to "F:\xampp\php\php.ini" if you are using xampp, or "F:\wamp\bin\php\php5.4.12\php.ini" if you are using the wamp server.

[xdebug]
xdebug.remote_enable = on
xdebug.profiler_enable = off
xdebug.profiler_enable_trigger = off
xdebug.profiler_output_name = cachegrind.out.%t.%p
xdebug.profiler_output_dir = "F:\xampp\tmp"
xdebug.remote_handler = dbgp
xdebug.remote_host = localhost
xdebug.remote_port = 9000

Note - you can change xdebug.profiler_output_dir = "F:\xampp\tmp" to match your own folder

Step4: Restart your web server and Netbeans IDE to apply the changes that we made.



Now your IDE is ready to debug your project. Click on the debug icon or Debug->Debug Project to start debugging in Netbeans. Once you start debugging, your browser will open the web page; you can put breakpoints in your code, and these breakpoints will be hit depending on your actions on the web page. You can then examine your variable values and control program execution using the debugging icons in the Netbeans IDE (or find the same actions in the Debug menu of the IDE).

Continue Reading...

Convert Opencv Mat to C# Bitmap

Conversion from an Opencv Mat to a C# Bitmap isn't a difficult task, but the conversion needs to be done at the primitive level of both the Opencv Mat and the C# Bitmap. All images are ultimately just sets of bytes, so it is necessary to create the System.Drawing.Bitmap from the bytes of the cv::Mat (the stride arithmetic this involves is sketched just after the list below). You will also need to do this conversion in a CLI project. Complete instructions can be found in the following 2 articles.
  1. Call Opencv functions from C#
  2. Call Opencv functions from C# with bitmap



This companion article describes how to convert a C# Bitmap to an Opencv Mat:
Convert C# Bitmap to Opencv Mat

If you already know how to create and configure a CLI project in Visual Studio, simply use the following function in your CLI project to convert an Opencv Mat to a C# Bitmap.

System::Drawing::Bitmap^ MatToBitmap(Mat srcImg){
    int stride = srcImg.size().width * srcImg.channels();//calc the stride
    int hDataCount = srcImg.size().height;
   
    System::Drawing::Bitmap^ retImg;
       
    System::IntPtr ptr(srcImg.data);
   
    //create a pointer with Stride
    if (stride % 4 != 0){//is not stride a multiple of 4?
        //make it a multiple of 4 by filling an offset at the end of each row

        //to hold the processed (padded) data
        uchar *dataPro = new uchar[((srcImg.size().width * srcImg.channels() + 3) & -4) * hDataCount];

        uchar *data = srcImg.ptr();

        //current position on the data array
        int curPosition = 0;
        //current offset
        int curOffset = 0;

        int offsetCounter = 0;

        //iterate through all the bytes in the structure
        for (int r = 0; r < hDataCount; r++){
            //fill the data
            for (int c = 0; c < stride; c++){
                curPosition = (r * stride) + c;

                dataPro[curPosition + curOffset] = data[curPosition];
            }

            //reset offset counter
            offsetCounter = stride;

            //fill the offset
            do{
                curOffset += 1;
                dataPro[curPosition + curOffset] = 0;

                offsetCounter += 1;
            } while (offsetCounter % 4 != 0);
        }

        ptr = (System::IntPtr)dataPro;//set the data pointer to new/modified data array

        //round the stride up to the nearest multiple of 4
        stride = (srcImg.size().width * srcImg.channels() + 3) & -4;

        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
    else{

        //no need to add a padding or recalculate the stride
        retImg = gcnew System::Drawing::Bitmap(srcImg.size().width, srcImg.size().height,
            stride,
            System::Drawing::Imaging::PixelFormat::Format24bppRgb,
            ptr);
    }
   
    array<System::Byte>^ imageData;
    System::Drawing::Bitmap^ output;

    // Create the byte array.
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream();
        retImg->Save(ms, System::Drawing::Imaging::ImageFormat::Png);
        imageData = ms->ToArray();
        delete ms;
    }

    // Convert back to bitmap
    {
        System::IO::MemoryStream^ ms = gcnew System::IO::MemoryStream(imageData);
        output = (System::Drawing::Bitmap^)System::Drawing::Bitmap::FromStream(ms);
    }

    return output;
}
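
A minimal way to call this function from the same C++/CLI project could look like the sketch below. This is my own usage example, not part of the original article; the file names are assumptions.

//usage sketch: load an image with OpenCV, convert it, and save the Bitmap
cv::Mat src = cv::imread("test.jpg");   //8-bit, 3-channel BGR image
if (!src.empty())
{
    System::Drawing::Bitmap^ bmp = MatToBitmap(src);
    bmp->Save("testCopy.png", System::Drawing::Imaging::ImageFormat::Png);  //or hand it to a PictureBox, etc.
}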

Continue Reading...

Convert C# Bitmap to Opencv Mat

Conversion from a C# Bitmap to an Opencv Mat isn't a difficult task, but the conversion needs to be done at the primitive level of both the Opencv Mat and the C# Bitmap. All images are ultimately just sets of bytes, so it is necessary to create the cv::Mat from the bytes of the System.Drawing.Bitmap. You will also need to do this conversion in a CLI project. Complete instructions can be found in the following 2 articles.
  1. Call Opencv functions from C#
  2. Call Opencv functions from C# with bitmap

This companion article describes how to convert an Opencv Mat to a C# Bitmap:
Convert Opencv Mat to C# Bitmap

If you already know how to create and configure a CLI project in Visual Studio, simply use the following function in your CLI project to convert a C# Bitmap to an Opencv Mat.

Mat BitmapToMat(System::Drawing::Bitmap^ bitmap)
{
    IplImage* tmp = NULL; //stays NULL for unsupported pixel formats

    System::Drawing::Imaging::BitmapData^ bmData = bitmap->LockBits(System::Drawing::Rectangle(0, 0, bitmap->Width, bitmap->Height), System::Drawing::Imaging::ImageLockMode::ReadWrite, bitmap->PixelFormat);
    if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format8bppIndexed)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 1);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    else if (bitmap->PixelFormat == System::Drawing::Imaging::PixelFormat::Format24bppRgb)
    {
        tmp = cvCreateImage(cvSize(bitmap->Width, bitmap->Height), IPL_DEPTH_8U, 3);
        tmp->imageData = (char*)bmData->Scan0.ToPointer();
    }

    //copy the pixel data into its own Mat while the bitmap memory is still
    //locked; Mat(tmp) without the copy flag would only reference that memory
    Mat result = (tmp != NULL) ? Mat(tmp, true) : Mat();

    bitmap->UnlockBits(bmData);

    return result;
}
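
A minimal usage sketch (my own example, not from the original article; the file names are assumptions):

//usage sketch: load a 24bpp image as a Bitmap, convert it, and let OpenCV save it
System::Drawing::Bitmap^ bmp = gcnew System::Drawing::Bitmap("test.jpg");
cv::Mat mat = BitmapToMat(bmp);
if (!mat.empty())
    cv::imwrite("testFromBitmap.png", mat);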
Continue Reading...

OpenCV Filters - dilate

Dilates an image by using a specific structuring element.

C++: void dilate(InputArray src, OutputArray dst, InputArray kernel, Point anchor=Point(-1,-1), int iterations=1, int borderType=BORDER_CONSTANT, const Scalar& borderValue=morphologyDefaultBorderValue() )

Python: cv2.dilate(src, kernel[, dst[, anchor[, iterations[, borderType[, borderValue]]]]]) → dst


Parameters:
  • src – input image; the number of channels can be arbitrary, but the depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.
  • dst – output image of the same size and type as src.
  • kernel – structuring element used for dilation; if kernel=Mat(), a 3 x 3 rectangular structuring element is used.
  • anchor – position of the anchor within the element; default value (-1, -1) means that the anchor is at the element center.
  • iterations – number of times dilation is applied.
  • borderType – pixel extrapolation method (see borderInterpolate() for details).
  • borderValue – border value in case of a constant border (see createMorphologyFilter() for details).


The function dilates the source image using the specified structuring element that determines the shape of a pixel neighborhood over which the maximum is taken:

dst(x, y) = max over (x', y') with kernel(x', y') != 0 of src(x + x', y + y')

The function supports the in-place mode. Dilation can be applied several ( iterations ) times. In case of multi-channel images, each channel is processed independently.
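
For instance, a minimal in-place call (my own sketch; passing an empty Mat selects the default 3 x 3 rectangular element) looks like this:

//dilate img in place, applied twice, with the default 3 x 3 element
cv::Mat img = cv::imread("lenaWithText.jpg");
cv::dilate(img, img, cv::Mat(), cv::Point(-1, -1), 2);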

Note: An example using the morphological dilate operation can be found at opencv_source_code/samples/cpp/morphology2.cpp

Example
This is sample code (C++) with images for the opencv dilate function. Since dilation is commonly used when processing optical characters, the source image includes some text and shapes.

string imgFileName = "lenaWithText.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

//---------------- create kernels ----------------
int dilationSize = 2;
cv::Mat kernalMorphCross = cv::getStructuringElement(cv::MORPH_CROSS,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphDilate = cv::getStructuringElement(cv::MORPH_DILATE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphEllipse = cv::getStructuringElement(cv::MORPH_ELLIPSE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphErode = cv::getStructuringElement(cv::MORPH_ERODE,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphOpen = cv::getStructuringElement(cv::MORPH_OPEN,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));
cv::Mat kernalMorphRect = cv::getStructuringElement(cv::MORPH_RECT,
                        cv::Size(2 * dilationSize + 1, 2 * dilationSize + 1),
                        cv::Point(dilationSize, dilationSize));

//---------------- apply each dilate kernel to the source separately ----------------
   
cv::Mat dstMorphCross;
cv::dilate(src, dstMorphCross, kernalMorphCross);

cv::Mat dstMorphDilate;
cv::dilate(src, dstMorphDilate, kernalMorphDilate);

cv::Mat dstMorphEllipse;
cv::dilate(src, dstMorphEllipse, kernalMorphEllipse);

cv::Mat dstMorphErode;
cv::dilate(src, dstMorphErode, kernalMorphErode);

cv::Mat dstMorphOpen;
cv::dilate(src, dstMorphOpen, kernalMorphOpen);

cv::Mat dstMorphRect;
cv::dilate(src, dstMorphRect, kernalMorphRect);

//---------------- Show filtered images ----------------
cv::namedWindow("Source");
cv::namedWindow("DilateMorphCross");
cv::namedWindow("DilateMorphDilate");
cv::namedWindow("DilateMorphEllipse");
cv::namedWindow("DilateMorphErode");
cv::namedWindow("DilateMorphOpen");
cv::namedWindow("DilateMorphRect");

cv::imshow("Source", src);
cv::imshow("DilateMorphCross", dstMorphCross);
cv::imshow("DilateMorphDilate", dstMorphDilate);
cv::imshow("DilateMorphEllipse", dstMorphEllipse);
cv::imshow("DilateMorphErode", dstMorphErode);
cv::imshow("DilateMorphOpen", dstMorphOpen);
cv::imshow("DilateMorphRect", dstMorphRect);
cv::waitKey(0);

//---------------- Save filtered images ----------------
cv::imwrite("DilateMorphCross.jpg", dstMorphCross);
cv::imwrite("DilateMorphDilate.jpg", dstMorphDilate);
cv::imwrite("DilateMorphEllipse.jpg", dstMorphEllipse);
cv::imwrite("DilateMorphErode.jpg", dstMorphErode);
cv::imwrite("DilateMorphOpen.jpg", dstMorphOpen);
cv::imwrite("DilateMorphRect.jpg", dstMorphRect);

(Images: the source image and the dilated result for each kernel shape - MORPH_CROSS, MORPH_DILATE, MORPH_ELLIPSE, MORPH_ERODE, MORPH_OPEN, and MORPH_RECT.)

Download complete Visual Studio project.

Continue Reading...

OpenCV Filters - copyMakeBorder

Forms a border around an image.


C++: void copyMakeBorder(InputArray src, OutputArray dst, int top, int bottom, int left, int right, int borderType, const Scalar& value=Scalar() )

Python: cv2.copyMakeBorder(src, top, bottom, left, right, borderType[, dst[, value]]) → dst

C: void cvCopyMakeBorder(const CvArr* src, CvArr* dst, CvPoint offset, int bordertype, CvScalar value=cvScalarAll(0) )

Python: cv.CopyMakeBorder(src, dst, offset, bordertype, value=(0, 0, 0, 0)) → None

Parameters:
  • src – Source image.
  • dst – Destination image of the same type as src and the size Size(src.cols+left+right, src.rows+top+bottom).
  • top
  • bottom
  • left
  • right – Parameter specifying how many pixels in each direction from the source image rectangle to extrapolate. For example, top=1, bottom=1, left=1, right=1 mean that 1 pixel-wide border needs to be built.
  • borderType – Border type. See borderInterpolate() for details.
  • value – Border value if borderType==BORDER_CONSTANT .


The function copies the source image into the middle of the destination image. The areas to the left, to the right, above and below the copied source image will be filled with extrapolated pixels. This is not what FilterEngine or filtering functions based on it do (they extrapolate pixels on-fly), but what other more complex functions, including your own, may do to simplify image boundary handling.

The function supports the mode when src is already in the middle of dst . In this case, the function does not copy src itself but simply constructs the border, for example:

// let border be the same in all directions
int border=2;
// constructs a larger image to fit both the image and the border
Mat gray_buf(rgb.rows + border*2, rgb.cols + border*2, rgb.depth());
// select the middle part of it w/o copying data
Mat gray(gray_buf, Rect(border, border, rgb.cols, rgb.rows));
// convert image from RGB to grayscale
cvtColor(rgb, gray, CV_RGB2GRAY);
// form a border in-place
copyMakeBorder(gray, gray_buf, border, border,
               border, border, BORDER_REPLICATE);
// now do some custom filtering ...
...



Note: When the source image is a part (ROI) of a bigger image, the function will try to use the pixels outside of the ROI to form a border. To disable this feature and always do extrapolation, as if src was not a ROI, use borderType | BORDER_ISOLATED.
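
As a small illustration of that note (my own sketch, not from the original documentation), the two calls below form a border around a ROI of a bigger image, first reusing the surrounding pixels and then ignoring them:

//lena.jpg is 512 x 512, so this ROI lies well inside the image
cv::Mat big = cv::imread("lena.jpg");
cv::Mat roi = big(cv::Rect(64, 64, 128, 128));

cv::Mat withNeighbours, isolated;
//uses the real pixels that exist outside the ROI
cv::copyMakeBorder(roi, withNeighbours, 16, 16, 16, 16, cv::BORDER_REPLICATE);
//treats the ROI as a standalone image and extrapolates instead
cv::copyMakeBorder(roi, isolated, 16, 16, 16, 16, cv::BORDER_REPLICATE | cv::BORDER_ISOLATED);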

Reference: OpenCV Documentation - copyMakeBorder


Example
This is a sample code (C++) with images for opencv copyMakeBorder.

string imgFileName = "lena.jpg";

cv::Mat src = cv::imread(imgFileName);
if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
}

Mat dstBorderConstant;
copyMakeBorder(src, dstBorderConstant, 256, 256, 256, 256, BORDER_CONSTANT);

Mat dstBorderDefault;
copyMakeBorder(src, dstBorderDefault, 256, 256, 256, 256, BORDER_DEFAULT);

Mat dstBorderIsolate;
copyMakeBorder(src, dstBorderIsolate, 256, 256, 256, 256, BORDER_ISOLATED);

Mat dstBorderReflect;
copyMakeBorder(src, dstBorderReflect, 256, 256, 256, 256, BORDER_REFLECT);

Mat dstBorderReflect101;
copyMakeBorder(src, dstBorderReflect101, 256, 256, 256, 256, BORDER_REFLECT101);

Mat dstBorderReflect_101;
copyMakeBorder(src, dstBorderReflect_101, 256, 256, 256, 256, BORDER_REFLECT_101);

Mat dstBorderReplicate;
copyMakeBorder(src, dstBorderReplicate, 256, 256, 256, 256, BORDER_REPLICATE);

Mat dstBorderWrap;
copyMakeBorder(src, dstBorderWrap, 256, 256, 256, 256, BORDER_WRAP);

cv::namedWindow("Source", CV_WINDOW_FREERATIO);
   
cv::namedWindow("BorderConstant", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderDefault", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderIsolate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReflect_101", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderReplicate", CV_WINDOW_FREERATIO);
cv::namedWindow("BorderWrap", CV_WINDOW_FREERATIO);


cv::imshow("Source", src);

cv::imshow("BorderConstant", dstBorderConstant);
cv::imshow("BorderDefault", dstBorderDefault);
cv::imshow("BorderIsolate", dstBorderIsolate);
cv::imshow("BorderReflect", dstBorderReflect);
cv::imshow("BorderReflect101", dstBorderReflect101);
cv::imshow("BorderReflect_101", dstBorderReflect_101);
cv::imshow("BorderReplicate", dstBorderReplicate);
cv::imshow("BorderWrap", dstBorderWrap);
cv::waitKey(0);

cv::imwrite("BorderConstant.jpg", dstBorderConstant);
cv::imwrite("BorderDefault.jpg", dstBorderDefault);
cv::imwrite("BorderIsolate.jpg", dstBorderIsolate);
cv::imwrite("BorderReflect.jpg", dstBorderReflect);
cv::imwrite("BorderReflect101.jpg", dstBorderReflect101);
cv::imwrite("BorderReflect_101.jpg", dstBorderReflect_101);
cv::imwrite("BorderReplicate.jpg", dstBorderReplicate);
cv::imwrite("BorderWrap.jpg", dstBorderWrap);



(Images: the source image and the result for each borderType - BORDER_CONSTANT, BORDER_DEFAULT, BORDER_ISOLATED, BORDER_REFLECT, BORDER_REFLECT101, BORDER_REFLECT_101, BORDER_REPLICATE, and BORDER_WRAP.)

Download complete Visual Studio project.

Continue Reading...

OpenCV Image Filtering

Continue Reading...

OpenCV Filters - boxFilter

Blurs an image using the box filter.

C++: void boxFilter(InputArray src, OutputArray dst, int ddepth, Size ksize, Point anchor=Point(-1,-1), bool normalize=true, int borderType=BORDER_DEFAULT )

Python: cv2.boxFilter(src, ddepth, ksize[, dst[, anchor[, normalize[, borderType]]]]) → dst

Parameters:
  • src – input image.
  • dst – output image of the same size and type as src.
  • ddepth – the output image depth (-1 to use src.depth()).
  • ksize – blurring kernel size.
  • anchor – anchor point; default value Point(-1,-1) means that the anchor is at the kernel center.
  • normalize – flag, specifying whether the kernel is normalized by its area or not.
  • borderType – border mode used to extrapolate pixels outside of the image.

The function smoothes an image using a ksize.height x ksize.width kernel of equal weights:

K = alpha * ones(ksize.height, ksize.width)

where alpha = 1/(ksize.width*ksize.height) when normalize=true, and alpha = 1 otherwise.

Unnormalized box filter is useful for computing various integral characteristics over each pixel neighborhood, such as covariance matrices of image derivatives (used in dense optical flow algorithms, and so on). If you need to compute pixel sums over variable-size windows, use integral().
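
As a small illustration of that point (my own sketch; the 5 x 5 window size is arbitrary), an unnormalized box filter can produce per-pixel window sums if the destination is given a deeper depth:

//per-pixel sums over a 5 x 5 window; CV_32S output avoids 8-bit overflow
cv::Mat src = cv::imread("lena.jpg");
cv::Mat sums;
cv::boxFilter(src, sums, CV_32S, cv::Size(5, 5), cv::Point(-1, -1), false /*normalize*/);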

Reference: OpenCV Documentation - boxFilter


Example
This is a sample code (C++) with images for opencv box filter.

 string imgFileName = "lena.jpg";

 cv::Mat src = cv::imread(imgFileName);
 if (!src.data){
    cout << "Unable to open file" << endl;
    getchar();
    return 1;
 }

 cv::Mat dst;
 cv::boxFilter(src, dst, -1, cv::Size(16, 16));

 cv::namedWindow("Source");
 cv::namedWindow("Filtered");

 cv::imshow("Source", src);
 cv::imshow("Filtered", dst);
 cv::waitKey(0);

 cv::imwrite("Box Filter.jpg", dst);

 return 0;
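
As a side note (my own observation), cv::blur() is simply the normalized box filter, so the filtering line above could equally be written as:

 cv::blur(src, dst, cv::Size(16, 16));   //same result as the boxFilter call above (normalize defaults to true)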


(Images: the source image and the box-filtered result.)



Download complete Visual Studio project.

Continue Reading...