Saturday, May 10, 2014

Extract frame from video when user presses a key in Open CV

Capturing frames from a video using Open CV is very simple: we just need the cvSaveImage function from the Open CV library. But sometimes we need to capture frames interactively, for example when the user presses a key. For this we can use the function kbhit() (declared in conio.h as _kbhit()). This function returns a non-zero value when there is something in the keyboard buffer, so by using it we can extract frames interactively.

The following code accomplishes this.

#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include <conio.h>

void main()
{
IplImage * img;
CvCapture * v;
v = cvCreateFileCapture("E:/v.mp4");
int k = 0;
char text[10];
while(true)
{
if(_kbhit())
{
sprintf(text,"%s%d%s","E:/Image",k,".jpeg");
cvSaveImage(text,img);
getch();
}
else
{
img= cvQueryFrame(v);
cvWaitKey(0);
k++;
}
}
_getch();
}

The while loop continuously reads frames from the video. The if block executes when the user presses a key; otherwise the else block executes. In the else block we read the next frame from the video, and we also add a small delay with cvWaitKey, otherwise the computer would read all the frames within a fraction of a second.

To change the saved image file name every time we use a char buffer, and with sprintf we build a new file name for each capture. cvSaveImage saves the image at the path supplied as the first argument; the image to be saved is the second argument. _getch() is used to clear the keyboard buffer, otherwise the if block would execute continuously.
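
As a side note, the buffer that holds the file name must be large enough for the whole path. A minimal stand-alone sketch (the path E:/Image<k>.jpeg is just a placeholder) using snprintf, which truncates instead of overflowing; on older MSVC compilers _snprintf plays the same role:

#include <stdio.h>

int main()
{
    char text[64];      /* roomy buffer for the generated file name */
    int k = 7;

    /* snprintf never writes past the end of the buffer */
    snprintf(text, sizeof(text), "E:/Image%d.jpeg", k);
    printf("%s\n", text);
    return 0;
}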

Thursday, May 1, 2014

RGB to YUV format conversion using Open CV and C Language

You may wonder why we need to convert an image from one format to another. There are many advantages in converting formats for transmission as well as for display. Sometimes processing one format is easier than another. Images are often processed by algorithms in the YUV domain and then converted back to RGB for display. While capturing images the camera uses the RGB format, but for storage we use the YUV format because it compresses better; when we need to display the images we convert them back to RGB. YUV images generally require less bandwidth than RGB. Color correction is usually done in the RGB color space and contrast enhancement in the YUV color space. The Y component carries the brightness of the image and the remaining two components carry the color information.
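
For reference, the conversion used in the code below follows the commonly quoted BT.601 studio-range coefficients, where R, G and B are 8-bit values:

Y = 0.257*R + 0.504*G + 0.098*B + 16
U = -0.148*R - 0.291*G + 0.439*B + 128
V = 0.439*R - 0.368*G - 0.071*B + 128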

The following code shows how we can achieve this:

#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>

void main()
{
    IplImage *img=cvLoadImage("E:/test.jpg");
    IplImage *Dimg=cvCreateImage(cvSize(img->width,img->height),img->depth,img->nChannels);
    //VUY
    IplImage *Y=cvCreateImage(cvSize(img->width,img->height),img->depth,3);
    IplImage *U=cvCreateImage(cvSize(img->width,img->height),img->depth,3);
    IplImage *V=cvCreateImage(cvSize(img->width,img->height),img->depth,3);

    for(int i=0;i<img->width*img->height*3;i+=3)
    {
        int y=0.257*img->imageData[i+2]+0.504*img->imageData[i+1]+0.098*img->imageData[i]+16;
        int u=-0.148*img->imageData[i+2]-0.291*img->imageData[i+1]+0.439*img->imageData[i]+128;
        int v=img->imageData[i+2]*0.439-0.368*img->imageData[i+1]-0.071*img->imageData[i]+128;
        Dimg->imageData[i]=(v<255||v>0)?v:(v>255?255:0);
        Dimg->imageData[i+1]=(u<255||u>0)?u:(u>255?255:0);
        Dimg->imageData[i+2]=(y<255||y>0)?y:(y>255?255:0);

        Y->imageData[i+2]=Dimg->imageData[i+2];
        U->imageData[i+1]=Dimg->imageData[i+1];
        V->imageData[i]=Dimg->imageData[i];
    }

    cvNamedWindow("Y",0);
    cvResizeWindow("Y",300,300);
    cvNamedWindow("U",0);
    cvResizeWindow("U",300,300);
    cvNamedWindow("V",0);
    cvResizeWindow("V",300,300);
    cvShowImage("Y",Y);
    cvShowImage("U",U);
    cvShowImage("V",V);
    cvWaitKey(0);
    _getch();
}

While calculating the YUV components we will sometimes encounter underflow as well as overflow, so we have to make sure that the code does not break while running; these cases have to be trapped in the code, which is what the clamping to the 0..255 range does.
The code above loads the image with Open CV, extracts the individual channels, converts them to the YUV domain and stores the result in the newly created image. To see the difference between the YUV components, each channel is shown in a separate window.
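
A small stand-alone sketch of that clamping idea (clamp255 is just a hypothetical helper, not part of the Open CV API):

#include <stdio.h>

/* force a value into the 0..255 range of an 8-bit pixel */
static int clamp255(int value)
{
    if (value < 0)   return 0;
    if (value > 255) return 255;
    return value;
}

int main()
{
    printf("%d %d %d\n", clamp255(-7), clamp255(130), clamp255(300)); /* prints 0 130 255 */
    return 0;
}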

Wednesday, April 30, 2014

Splitting the Image into three channels with Open CV and C Language


In image processing we sometimes need to separate the channels of an image. This is useful for getting the required contrast, because removing one or more channels can give a better understanding of the image under consideration. In some cases we only have to process a few channels of the image rather than the entire image. Open CV provides functions to achieve this (a sketch using cvSplit follows below), but we can also do the same thing without using the Open CV functions.
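
As a rough sketch (the path E:/test.jpg is a placeholder for any 3-channel image), the built-in cvSplit copies each channel into its own single-channel image:

#include "cv.h"
#include "highgui.h"

int main()
{
    IplImage *src = cvLoadImage("E:/test.jpg");                  /* loaded as B,G,R */
    IplImage *b = cvCreateImage(cvGetSize(src), src->depth, 1);
    IplImage *g = cvCreateImage(cvGetSize(src), src->depth, 1);
    IplImage *r = cvCreateImage(cvGetSize(src), src->depth, 1);

    cvSplit(src, b, g, r, NULL);        /* one single-channel image per channel */

    cvShowImage("Blue", b);
    cvShowImage("Green", g);
    cvShowImage("Red", r);
    cvWaitKey(0);
    return 0;
}

Note that cvSplit produces single-channel (gray-looking) images, while the manual version below keeps three channels and zeroes the other two, so each window shows a tinted image.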

The following code snippet splits the image channels:

#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>

int main()
{
    IplImage *Simg;
    IplImage *Rimg;
    IplImage *Gimg;
    IplImage *Bimg;

    int nrows, ncols;

    Simg = cvLoadImage("E:/test.jpg");
    Rimg = cvCreateImage(cvSize(Simg->width, Simg->height), Simg->depth, 3);
    Gimg = cvCreateImage(cvSize(Simg->width, Simg->height), Simg->depth, 3);
    Bimg = cvCreateImage(cvSize(Simg->width, Simg->height), Simg->depth, 3);

    cvNamedWindow("Red", 0);
    cvNamedWindow("Green", 0);
    cvNamedWindow("Blue", 0);
    cvResizeWindow("Red", 320, 320);
    cvResizeWindow("Green", 320, 320);
    cvResizeWindow("Blue", 320, 320);

    nrows = Simg->height;
    ncols = Simg->width;

    /* Note: this indexing assumes widthStep == width*3, i.e. rows without padding */
    for (int i = 0; i < nrows * ncols * 3; i++)
    {
        /* start from black images */
        Rimg->imageData[i] = 0;
        Gimg->imageData[i] = 0;
        Bimg->imageData[i] = 0;
    }
    for (int i = 0; i < nrows * ncols * 3; i += 3)
    {
        /* pixels are stored as B,G,R: copy each byte into its own image */
        Rimg->imageData[i + 2] = Simg->imageData[i + 2];
        Gimg->imageData[i + 1] = Simg->imageData[i + 1];
        Bimg->imageData[i]     = Simg->imageData[i];
    }

    cvShowImage("Red", Rimg);
    cvShowImage("Green", Gimg);
    cvShowImage("Blue", Bimg);
    cvWaitKey(0);
    _getch();
    return 0;
}

For example, consider an image with three channels. Load that image into memory with the help of Open CV. Next, create three images with the same size, depth and number of channels. Fill them entirely with zeros, that is with black pixels. Now extract the individual channels from the original image and store them in the newly created images. Note that Open CV stores the pixels in B,G,R order instead of R,G,B.
The ->height and ->width members of an image in Open CV give the number of rows and columns of pixels. But for a color image each row actually occupies more bytes than the width: if the width is 20, a row takes 60 bytes, because each channel is stored in a separate byte and the channels are interleaved in memory. So to get the red values we read every third byte, starting at offset 2.
By running the above code on the below image:
  

we will get the following images:
 





LED flashing simulation in multisim

In real world applications LEDs play an important role. They are mainly used as indicators; in the embedded industry they indicate events. The embedded world mainly depends on two things: an embeddable processor and the software to run it. Every processor or microcontroller needs a little power to operate and some circuit connections to make it useful.
In this post I will explain how we can use an 8051 microcontroller to control the behavior of an LED. The simulation is done in Multisim.

We need to apply the proper voltages to the controller so that it can operate: it needs one power source and one ground. Any normal battery can be used as the supply, with its negative terminal as ground. Just connect the LED to any port pin of the controller, but avoid Port 0, because that port needs external pull-up resistors; the remaining ports have built-in pull-ups.
Coming to the actual intention of the post, we need to toggle the LED: it stays on for some time and then off for the same amount of time. This can be achieved by alternating the output of the port pin. The software can be written in assembly or embedded C. The code snippet is as follows:
#include <reg52.h>

sbit pin = P1^5;     /* LED connected to Port 1, pin 5 */
bit state;

void init(void);
void changeState(void);
void Wait(void);

void main()
{
    init();
    while (1)
    {
        changeState();
        Wait();
    }
}

void init(void)
{
    state = 0;
}

void changeState(void)
{
    if (state == 1)
    {
        state = 0;
        pin = 0;     /* LED off */
    }
    else
    {
        state = 1;
        pin = 1;     /* LED on */
    }
}

void Wait(void)
{
    /* simple software delay; increase the count for a visible blink */
    volatile unsigned int y;
    for (y = 0; y <= 100; y++);
}
Before loading this code into the microcontroller we need to convert it into a .hex file to program the controller; Keil can be used for this. We include the header reg52.h, which defines all the ports and special function registers. Pin 5 of Port 1 is configured to control the LED.
The program has three functions. init() sets the LED state to zero. Wait() creates a delay for some period of time, during which the LED remains in its current on/off state. changeState() flips the global state variable (and the pin) so that after the next call to Wait() the status of the LED changes. As usual, main() starts and runs the program.

Saturday, April 26, 2014

Color Image Rotation in C++ and with Open CV

The previous post explained how we can apply rotation to a gray scale image, that is a single channel image. In this post I will explain the same for multi-channel images, a.k.a. color images.

In real time scenarios we sometimes need to apply geometric transformations to images, such as rotation. This process rotates the entire image around its center. It is better to map the rotated image onto another, blank image. In digital computers images are treated as matrices, or two dimensional arrays.

In this process some image points fall outside the boundaries. There are several ways to deal with this problem: one is to drop those points, another is to plot the rotated image on a larger canvas.

In a digital computer this process becomes a simple matrix multiplication; in geometry it is also represented as a matrix operation. In image processing this type of transformation is called an affine transform.

Mathematically the whole process can be represented in two steps:

x2=cos(t)*(x1-x0)-sin(t)*(y1-y0)+x0
y2=sin(t)*(x1-x0)+cos(t)*(y1-y0)+y0

x2 is the new coordinate of the rotated image corresponding to x1 of the original image, and similarly y2.
t is the required angle of rotation. x0 and y0 are the center coordinates of the image.

The following code snippet, written with Open CV and C++, does exactly this.

 #include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include <conio.h>
#include <math.h>

using namespace cv;
using namespace std;

#define PI 3.14159265

void main()
{
    int angle;
    Mat img;
    img=imread("E:/test.jpg");
    Mat nimg((img.rows),(img.cols),CV_8UC3,Scalar(0));
    Mat tm(2,2,CV_32SC1);
    Mat nc(2,2,CV_32SC1);
    Mat oc(2,2,CV_32SC1);
    cout<<"Enter the Rotation angle\n";
    cin>>angle;
    float cosine=cos(angle*PI/180.0);
    float sine=sin(angle*PI/180.0);

    float cx=img.cols/2.0;
    float cy=img.rows/2.0;

    for(int i=0;i<img.rows;i++)
    {
        for(int j=0;j<img.cols;j++)
        {
            int nx=(cosine*(i-cx))-(sine*(j-cy))+cx;
            int ny=(sine*(i-cx))+(cosine*(j-cy))+cy;
            if((nx>=0)&&(ny>=0)&&(nx<=nimg.rows)&&(ny<=nimg.cols))
            {
                           for( int c = 0; c < 3; c++ )
                {
                nimg.at<Vec3b>(nx,ny)[c]=img.at<Vec3b>(i,j)[c];
                }
            }
        }
    }
    imshow("Image",nimg);
    waitKey(100);
     _getch();
}

The only difference from the gray scale version is that we iterate over the three channels. In Open CV the pixel of a color Mat is accessed with .at<Vec3b>(row, col), and indexing the returned vector gives the individual channel intensities.
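
A minimal sketch of this access pattern (the path and the coordinates are just placeholders; the image must be at least that large):

#include "cv.h"
#include "highgui.h"
#include <iostream>

using namespace cv;

int main()
{
    Mat img = imread("E:/test.jpg");        // 3-channel image, pixels stored as B,G,R

    Vec3b p = img.at<Vec3b>(10, 20);        // pixel at row 10, column 20
    std::cout << "B=" << (int)p[0] << " G=" << (int)p[1] << " R=" << (int)p[2] << "\n";

    img.at<Vec3b>(10, 20) = Vec3b(0, 0, 255);   // overwrite that pixel with pure red

    imshow("Pixel access", img);
    waitKey(0);
    return 0;
}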

Sunday, April 13, 2014

Image Rotation with Open CV and C++



In real time scenarios we sometimes need to apply geometric transformations to images, such as rotation. This process rotates the entire image around its center. It is better to map the rotated image onto another, blank image. In digital computers images are treated as matrices, or two dimensional arrays.

In this process some image points fall outside the boundaries. There are several ways to deal with this problem: one is to drop those points, another is to plot the rotated image on a larger canvas.

In a digital computer this process becomes a simple matrix multiplication; in geometry it is also represented as a matrix operation. In image processing this type of transformation is called an affine transform.

Mathematically the whole process can be represented in two steps:

x2=cos(t)*(x1-x0)-sin(t)*(y1-y0)+x0
y2=sin(t)*(x1-x0)+cos(t)*(y1-y0)+y0

x2 is the new coordinate of the rotated image corresponding to x1 of the original image, and similarly y2.
t is the required angle of rotation. x0 and y0 are the center coordinates of the image.

The following code snippet, written with Open CV and C++, does exactly this.

#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>
#include <math.h>

using namespace cv;
using namespace std;

#define PI 3.14159265

void main()
{
    int angle;
    Mat img;
    img=imread("E:/test.jpg",CV_LOAD_IMAGE_GRAYSCALE);
    Mat nimg(img.rows,img.cols,CV_8UC1,Scalar(0));
    Mat tm(2,2,CV_32SC1);
    Mat nc(2,2,CV_32SC1);
    Mat oc(2,2,CV_32SC1);
    cout<<"Enter the Rotation angle\n";
    cin>>angle;
    float cosine=cos(angle*PI/180.0);
    float sine=sin(angle*PI/180.0);

    float cx=img.cols/2.0;
    float cy=img.rows/2.0;


    for(int i=0;i<img.rows;i++)
    {
        for(int j=0;j<img.cols;j++)
        {
            int nx=(cosine*(i-cx))-(sine*(j-cy))+cx;
            int ny=(sine*(i-cx))+(cosine*(j-cy))+cy;
            if((nx>=0)&&(ny>=0)&&(nx<=img.rows)&&(ny<=img.cols))
            {
            nimg.at<uchar>(nx,ny)=img.at<uchar>(i,j);
            }
        }
    }
   imshow("Image",nimg);
   waitKey(100);
    _getch();
}

Saturday, March 8, 2014

Image thresholding using Open CV and C language with track bar

In the previous post we saw the image thresholding operation. There the user provides the threshold level before the operation starts, and if he wants to change it he has to restart the application. In this post I will explain how the user can provide the threshold level interactively. Open CV has several GUI tools, such as built-in support for windows, and we can add a track bar to the window to change the threshold value.

#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>
#include <string.h>

    int rows,cols,level,i,j,threshold_value = 0;

    IplImage * image;
    IplImage * img;

    char path[20];
    const int max_value=255;
    char* trackbar_value = "Value";
    char* imagedata;
    char * name="Threshold";
   

int main()
{
   
    puts("Enter the path of the image");
    gets(path);

    image=cvLoadImage(path,CV_LOAD_IMAGE_GRAYSCALE);
    rows=image->height;
    cols=image->width;
   
    img=cvLoadImage(path,CV_LOAD_IMAGE_GRAYSCALE);
       
    cvNamedWindow(name,0);
    cvResizeWindow(name,352,288);
    cvCreateTrackbar( trackbar_value,"Threshold", &threshold_value,max_value);
   
    while(true)
    {
    imagedata=(char*)image->imageData;
   
    for(i=0;i<rows*cols;i++)
    {
        img->imageData[i]=imagedata[i]>threshold_value?imagedata[i]:0;
    }
    cvShowImage(name,img);
    int c;
    c = cvWaitKey( 20 );
    if( (char)c == 27 )
    {
        cvReleaseImage(&image);
        cvReleaseImage(&img);
        cvDestroyWindow("Threshold");
          break;
    }
   }

    getch();
    return 0;
}

To add a track bar to a window we use the cvCreateTrackbar function from the Open CV library. The first argument is the name displayed beside the bar, the second is the name of the window on which the track bar must be shown, the third is a pointer to the value we want to change with the bar, the fourth is the maximum allowed value on the track bar, and the last is an optional callback that is called whenever the slider moves (NULL here, since we poll the value in the loop).
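
For reference, the same idea with the callback form of the API looks roughly like this (a sketch; the window name and image path are placeholders):

#include "cv.h"
#include "highgui.h"

IplImage *src;           /* original gray image */
IplImage *dst;           /* thresholded copy shown in the window */
int level = 128;

/* called by HighGUI every time the slider moves */
void on_trackbar(int pos)
{
    cvThreshold(src, dst, pos, 255, CV_THRESH_BINARY);
    cvShowImage("Threshold", dst);
}

int main()
{
    src = cvLoadImage("E:/test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    dst = cvCloneImage(src);

    cvNamedWindow("Threshold", 0);
    cvCreateTrackbar("Value", "Threshold", &level, 255, on_trackbar);

    on_trackbar(level);          /* draw once with the initial level */
    cvWaitKey(0);
    return 0;
}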

Thursday, March 6, 2014

Image Thresholding using Open CV and C

Thresholding is a very basic image processing technique. It splits the pixels of an image into two subsets: one set has the pixel values less than the threshold level and the other has the pixel values greater than or equal to it. In other words, thresholding produces binary images having only two pixel values.

If we draw the histogram of the image after thresholding we will get only two impulses in the histogram, because we have only two sets of pixel values.

Using Open CV with a programming language like C or C++ we can threshold images easily; it is a very simple operation. The following code snippet shows how to threshold an image.

The thresholded image looks like:



#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>
#include <string.h>

int main()
{
    IplImage * image;
    char path[20];
    int rows,cols,level,i;
    char* imagedata;

    puts("Enter the path of the image");
    gets(path);

    image=cvLoadImage(path,CV_LOAD_IMAGE_GRAYSCALE);
    rows=image->height;
    cols=image->width;

    puts("Enter the threshold level");
    scanf("%d",&level);
    imagedata=(char*)image->imageData;

    for(i=0;i<rows*cols;i++)
    {
        imagedata[i]=imagedata[i]>level?level:0;
    }

    cvShowImage("Threshold",image);
    if(cvWaitKey(1)==27)
    {
        exit(0);
    }

    getch();
    return 0;
}

I think by now you have gone through the code. As usual we start by including the most important headers, cv.h and highgui.h. At the top of main we have a char array that stores the path of the image, followed by four integer variables: two for the height and width of the image, one to iterate over the image data and one to hold the threshold value given by the user.

The code works like this: the user enters the path of the image, then the threshold value. Based on the threshold, the image is segmented into two separate pixel values. Here we modify the image in place; we change the pixel values directly instead of storing the result in another image.

We compare each pixel with the threshold entered by the user: if the pixel value is greater than the threshold we replace it with the threshold value, otherwise we replace it with zero, i.e. a black pixel.

Thresholding has several variations: threshold binary, threshold binary inverted, truncate, threshold to zero and threshold to zero inverted*. We can implement all of these just by changing the code line (the sketch after the list shows the equivalent built-in cvThreshold calls):

imagedata[i]=imagedata[i]>level?level:0;

for threshold binary use   imagedata[i]=imagedata[i]>level?255:0; 

for threshold binary inverted use   imagedata[i]=imagedata[i]>level?0:255; 


for truncate use   imagedata[i]=imagedata[i]>level?level:imagedata[i]; 


for threshold to zero use   imagedata[i]=imagedata[i]>level?imagedata[i]:0; 


for threshold to zero inverted use   imagedata[i]=imagedata[i]>level?0:imagedata[i];

* according to Open CV documentation.
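
For reference, Open CV also ships these variants as ready-made modes of cvThreshold; a rough sketch (the path and level are placeholders):

#include "cv.h"
#include "highgui.h"

int main()
{
    IplImage *src = cvLoadImage("E:/test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage *dst = cvCloneImage(src);
    int level = 128;

    /* pick one of the modes described above */
    cvThreshold(src, dst, level, 255, CV_THRESH_BINARY);          /* threshold binary           */
    /* cvThreshold(src, dst, level, 255, CV_THRESH_BINARY_INV);      threshold binary inverted  */
    /* cvThreshold(src, dst, level, 255, CV_THRESH_TRUNC);           truncate                   */
    /* cvThreshold(src, dst, level, 255, CV_THRESH_TOZERO);          threshold to zero          */
    /* cvThreshold(src, dst, level, 255, CV_THRESH_TOZERO_INV);      threshold to zero inverted */

    cvShowImage("Threshold", dst);
    cvWaitKey(0);
    return 0;
}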

Tuesday, March 4, 2014

Convert the video into images using Open CV and C

In a previous post we saw how we can use Open CV to show the images of a folder continuously so that the viewer feels like watching a video. We can also use Open CV to extract the frames of a video; after extracting, we can save them with the desired file extension. You can use the following code for that.

#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <conio.h>

int main()
{
    char  video_path[100];
    char  destination[100];
    char file_name[20];
    int count=100000;

    CvCapture *capture;
    IplImage * image;
   

    printf("Enter the path to the Video file");
    gets(video_path);

    printf("Enter the destinatio folder to save images");
    gets(destination);

    capture=cvCreateFileCapture(video_path);
    for(;;)
    {
        sprintf(file_name,"%s%d%s",destination,count,".jpg");
        image=cvQueryFrame(capture);
        cvSaveImage(file_name,image);
        printf("%s is saved.. \n",file_name);
        count++;
    }
    cvReleaseCapture(&capture);
    puts("Video is converted to images...");
    getch();
    return 0;
}

In this code we use two character arrays: one stores the path of the video, the other the path of the folder where the images are stored. You might also have noticed the integer variable; it is used to give the images sequential file names. file_name is another char array that temporarily holds the current file name.

With sprintf we append the running number and the .jpg extension to the destination path, and the frame extracted from the video is saved under the name contained in the file_name array; this name changes for every frame.
We use the very convenient function provided by Open CV to save an image on secondary memory, that is our HDD: cvSaveImage. It takes two arguments, the file name and the image data, typically a variable of type IplImage*.

We use an endless loop to get all the frames from the video; when there are no more frames, cvQueryFrame returns NULL and the loop breaks. Finally we release the resources used by Open CV with cvReleaseCapture.

Thursday, February 27, 2014

Displaying the video with Open CV and C

In previous posts you have seen how to load and display images with Open CV and the C language. Open CV can be used to process videos as well, and processing videos is as simple as processing images, because a video is made of a sequence of images called frames. Videos generally have a frame rate of 24-30 fps, where fps stands for frames per second.

With Open CV we grab each frame of the video, starting from the first one. After getting a frame into memory we display it like a normal image. If we want to process the grabbed frame, we can pass it to a processing function and get the result back.
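
As a rough sketch of that idea (the video path is a placeholder, and the grayscale conversion is just an example of a processing step):

#include "cv.h"
#include "highgui.h"

/* example processing step: convert the grabbed frame to gray scale */
IplImage *process_frame(IplImage *frame)
{
    IplImage *gray = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    cvCvtColor(frame, gray, CV_BGR2GRAY);
    return gray;
}

int main()
{
    CvCapture *capture = cvCreateFileCapture("E:/v.mp4");
    IplImage *frame;

    while ((frame = cvQueryFrame(capture)) != NULL)
    {
        IplImage *result = process_frame(frame);
        cvShowImage("Video", result);
        cvReleaseImage(&result);
        if (cvWaitKey(33) == 27)     /* ESC stops playback */
            break;
    }
    cvReleaseCapture(&capture);
    return 0;
}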

The following code snippet helps you understand this process.


#include "stdafx.h" 
#include "cv.h"
#include "highgui.h"
#include <string>
#include <dirent.h>
using namespace std;
CvCapture * capture;
    char* filename;
    string fullpath,path;
    DIR *dp;
    struct dirent *ep;
    const char* full_path;
class base
{
public:
virtual void input()=0;
virtual void process()=0;
virtual void play()=0;
virtual void destroy()=0;
};

class localfile:public base
{
public:
    localfile(char* file);
void input();
void process();
void play();
void destroy();
};
void localfile::input()
{
capture = cvCreateFileCapture(filename);
}
void localfile::process()
{
for ( ; ; )
    {
        play();
        cvWaitKey(33);
    }
}
void localfile::play()
{
 IplImage * frame = cvQueryFrame(capture);
 cvShowImage("Video",frame);
}
void localfile::destroy()
{
cvDestroyWindow("Video");
}
localfile::localfile(char* file)
{
filename=file;
}

int main(int argc,char** argv)
{
    char* path;
    printf("Enter the path to the Video");
    gets(path);
    localfile obj1(path);
    obj1.input();
    obj1.process();
    obj1.destroy();
    return 0;
}

When we run this code it asks for the path to the video file. Based on that path the CvCapture structure is initialized with the Open CV function cvCreateFileCapture. This structure then lets us get the frames as well as a lot of information (metadata) about the video.

Open CV provides a simple function called cvQueryFrame(). It takes the CvCapture variable as its argument and returns one frame per call, automatically advancing to the next frame of the video. After getting a frame we can display it as described in the previous posts.

Tuesday, February 25, 2014

Display Images as video with Open CV part-2

In the previous post we saw how to display a sequence of images as a video. But that approach has a problem which can break our code: the basic condition we imposed is that the folder contains only images and no other files that are not supported by Open CV.

In this post I will explain how we can get rid of this problem by using regular expressions, which are now available in C++ as well. To use this feature we need to include the regex header and use the namespace std::tr1.

#include "stdafx.h"
#include "cv.h"
#include <regex>
#include "highgui.h"
#include <string>
#include <dirent.h>

using namespace std;
using namespace std::tr1;

char* filename;
string fullpath,path;
DIR *dp;
struct dirent *ep;
const char* full_path;

class base
{
public:
    virtual void input()=0;
    virtual void process()=0;
    virtual void destroy()=0;
};

class imagetovideo:public base
{
public:
    imagetovideo(char* folderpath);
    void input();
    void process();
    void destroy();
};

void imagetovideo::input()
{
    const char* mpath=path.c_str();
    dp=opendir(mpath);
}

void imagetovideo::process()
{
    /* match file names that end with a known image extension */
    std::tr1::regex rgx("(\\.jpg$)|(\\.png$)|(\\.jpeg$)|(\\.bmp$)|(\\.gif$)");
    cmatch result;
    cvNamedWindow("Video",0);
    cvResizeWindow("Video",640,320);
    while(ep=readdir(dp))
    {
        regex_search(ep->d_name, result, rgx);
        if(!result.empty())
        {
            puts(ep->d_name);
            fullpath=path+(ep->d_name);
            full_path=fullpath.c_str();
            puts(full_path);
            IplImage* img=cvLoadImage(full_path);
            cvShowImage("Video",img);
            if(cvWaitKey(1000)==27)     /* show each image for 1 s; ESC stops */
            {
                cvReleaseImage(&img);
                cvDestroyWindow("Video");
                break;
            }
            cvReleaseImage(&img);
        }
    }
}

void imagetovideo::destroy()
{
    closedir (dp);
    cvDestroyWindow("Video");
}

imagetovideo::imagetovideo(char* folderpath)
{
    path=folderpath;
}

int main(int argc,char** argv)
{
    char path[100];                 /* gets needs a real buffer, not a dangling pointer */
    printf("Enter the path to the image folder: ");
    gets(path);
    imagetovideo obj1(path);
    obj1.input();
    obj1.process();
    obj1.destroy();
    return 0;
}

If you observe the above code, not much has changed from the previous post's snippet. We only added a few lines that control which file paths reach the cvShowImage function. Now the code is robust: it processes only image files, and the regular expression filters out files that are not images.

The input path must look like D:/x/, where D is the drive letter and x is the folder containing the images. If the folder path is not specified in this manner the code will not work.

In this code I used regular expressions to identify the file format; only if the file is an image is it passed to the cvShowImage function. Let me explain the regular expression lines used in this post.

std::tr1::regex rgx("(\\.jpg$)|(\\.png$)|(\\.jpeg$)|(\\.bmp$)|(\\.gif$)");

Observe the above line. You can see the image extensions, each with the dot escaped and ending with a dollar symbol, separated by the | symbol. The $ symbol means that .jpg, .png, .jpeg and so on must appear at the end of the string.

cmatch result;

Here result holds whatever comes out of the regex_search(ep->d_name, result, rgx) expression. This call processes the string passed as the first argument; the third argument is the regular expression and the second one is the cmatch object that receives the match.

If the method finds a match it stores it in the cmatch variable. So if result is not empty the file is an image file, otherwise it is not.
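
A small stand-alone sketch of this matching step (with C++11 the same code works with std::regex from <regex>; with older compilers use std::tr1::regex as in the post):

#include <regex>
#include <cstdio>

int main()
{
    std::regex rgx("(\\.jpg$)|(\\.png$)|(\\.jpeg$)|(\\.bmp$)|(\\.gif$)");
    std::cmatch result;

    const char *names[] = { "photo.jpg", "notes.txt", "scan.png" };
    for (int i = 0; i < 3; i++)
    {
        std::regex_search(names[i], result, rgx);
        /* an empty result means the name does not end with an image extension */
        printf("%s -> %s\n", names[i], result.empty() ? "skip" : "image");
    }
    return 0;
}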
