The previous post explained how to apply rotation to a grayscale image, that is, a single-channel image. In this post I will explain the same for multi-channel images, i.e. color images.
In real-world scenarios we sometimes need to apply geometric transformations, such as rotation, to images. This process involves rotating the entire image around its center. It is better to map the rotated image onto another image, such as a blank one. In digital computers, images are treated as matrices, i.e. two-dimensional arrays.
In this process some image points fall outside the boundaries. There are several ways to deal with this problem. One is to simply discard those points. Another is to plot the rotated image on a larger canvas, as sketched below.
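To illustrate the second option, the size of the larger canvas can be estimated from the bounding box of the rotated corners. Below is a minimal sketch of that calculation; the width, height and angle values are only example numbers, not taken from the rest of this post.

#include <math.h>
#include <stdio.h>

#define PI 3.14159265

int main()
{
    // Example values only: original width, height and rotation angle in degrees.
    double w = 640.0, h = 480.0;
    double t = 30.0 * PI / 180.0;

    // The rotated image fits inside a bounding box whose sides are the
    // projections of the original sides onto the rotated axes.
    double newW = fabs(w * cos(t)) + fabs(h * sin(t));
    double newH = fabs(w * sin(t)) + fabs(h * cos(t));

    printf("Canvas size: %.0f x %.0f\n", newW, newH);
    return 0;
}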
In a digital computer this process becomes a simple matrix multiplication. In geometry as well, the process is represented as a matrix operation. In image processing this type of transformation is called an affine transform.
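To make the matrix view concrete, the rotation of a single point can be written as p2 = R*(p1 - c) + c, where R is the 2x2 rotation matrix and c is the center. The sketch below shows this product with cv::Mat, assuming OpenCV 2.x headers; the point, center and angle values are made up purely for illustration.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main()
{
    double t = 45.0 * CV_PI / 180.0;   // example rotation angle in radians

    // 2x2 rotation matrix R.
    cv::Mat R = (cv::Mat_<double>(2, 2) << std::cos(t), -std::sin(t),
                                           std::sin(t),  std::cos(t));

    cv::Mat p1 = (cv::Mat_<double>(2, 1) << 10.0, 0.0);  // original point (x1, y1)
    cv::Mat c  = (cv::Mat_<double>(2, 1) <<  0.0, 0.0);  // center of rotation (x0, y0)

    // p2 = R * (p1 - c) + c : the two rotation formulas written as one matrix product.
    cv::Mat p2 = R * (p1 - c) + c;
    std::cout << p2 << std::endl;
    return 0;
}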
Mathematically the whole process can be represented in two steps:
x2=cos(t)*(x1-x0)-sin(t)*(y1-y0)+x0
y2=sin(t)*(x1-x0)+cos(t)*(y1-y0)+y0
x2 is the new coordinate in the rotated image corresponding to x1 in the original image, and similarly for y2.
t is the required angle of rotation, and x0 and y0 are the center coordinates of the image.
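For example, for t = 90 degrees we have cos(t) = 0 and sin(t) = 1, so the formulas reduce to
x2 = -(y1-y0)+x0
y2 = (x1-x0)+y0
A point one pixel to the right of the center, (x1, y1) = (x0+1, y0), therefore maps to (x2, y2) = (x0, y0+1): it has moved a quarter turn around the center, as expected.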
The following code snippet, written in OpenCV and C++, does exactly that.
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
#include <conio.h>
#include <math.h>
using namespace cv;
using namespace std;
#define PI 3.14159265
void main()
{
int angle;
Mat img;
img=imread("E:/test.jpg");
Mat nimg((img.rows),(img.cols),CV_8UC3,Scalar(0));
Mat tm(2,2,CV_32SC1);
Mat nc(2,2,CV_32SC1);
Mat oc(2,2,CV_32SC1);
cout<<"Enter the Rotation angle\n";
cin>>angle;
float cosine=cos(angle*PI/180.0);
float sine=sin(angle*PI/180.0);
float cx=img.cols/2.0;
float cy=img.rows/2.0;
for(int i=0;i<img.rows;i++)
{
for(int j=0;j<img.cols;j++)
{
int nx=(cosine*(i-cx))-(sine*(j-cy))+cx;
int ny=(sine*(i-cx))+(cosine*(j-cy))+cy;
if((nx>=0)&&(ny>=0)&&(nx<=nimg.rows)&&(ny<=nimg.cols))
{
for( int c = 0; c < 3; c++ )
{
nimg.at<Vec3b>(nx,ny)[c]=img.at<Vec3b>(i,j)[c];
}
}
}
}
imshow("Image",nimg);
waitKey(100);
_getch();
}
The only difference from the grayscale case is that we iterate over the three channels. In OpenCV we read the pixel intensities with .at<Vec3b>, and by indexing into the resulting vector we get the individual channel intensities.
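For completeness, OpenCV also has built-in affine-transform helpers that perform the same rotation (and handle the interpolation that this manual loop skips). A minimal sketch is below, assuming an OpenCV 2.x build; the angle and file path are only example values.

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("E:/test.jpg");
    if (img.empty())
        return -1;

    // 2x3 affine matrix for a rotation of 30 degrees about the image center,
    // with a scale factor of 1.0.
    cv::Point2f center(img.cols / 2.0f, img.rows / 2.0f);
    cv::Mat rot = cv::getRotationMatrix2D(center, 30.0, 1.0);

    // Apply the affine transform; the output keeps the original image size.
    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot, img.size());

    cv::imshow("Rotated", rotated);
    cv::waitKey(0);
    return 0;
}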