Working with FireflyMV camera from PointGrey in Ubuntu 13.04 and ROS Hydro

2 07 2014

I had a hard time trying to get the Firefly MV camera from Point Grey working with Ubuntu 13.04 and ROS Hydro. I finally ironed out all the bugs and wrote this tutorial, which takes you step by step through getting it running on your laptop.

I have tested it with the following environment setup:

  • Point Grey’s FireflyMV FMVU-13S2C (1328×1048 at 23 FPS, 640×480 at 60 FPS)
  • Ubuntu 13.04
  • ROS Hydro
  1. FireflyMV cameras implement the IIDC 1394-based Digital Camera Specification, so IIDC-over-USB support must be enabled in the libdc1394 library. If libdc1394 is properly set up with IIDC over USB enabled, the camera will show up in a program called coriander. We will install coriander in the next step.
  2. Installing Coriander to check camera output:
    $ cd
    $ sudo apt-get install coriander
    $ coriander

    This will open the coriander program, and if you have the FireflyMV connected, then you can view the output by clicking on the Display button under the Services tab of the program. You should see the following window:

  3. Note down the GUID of the Firefly camera, as we will need it to modify the parameter file in the ROS package that we will download later.
  4. Install the Firefly MV ROS package: Now we will download the Firefly MV ROS package into the home folder, in order to work with the sensor and publish the images as ROS topics. In a new terminal window, type:
    $ git clone https://github.com/sidahuja/firefly_pgr.git

    Once the download has finished, type the following in a terminal from your HOME directory:

    $ gedit ~/.bashrc

    Add the following line to the end of the file:

    export ROS_PACKAGE_PATH=~/firefly_pgr:$ROS_PACKAGE_PATH

    Save and exit.
    Next type the following in the terminal to reload the .bashrc file:

    $ source ~/.bashrc

    Next we will edit the parameter file for the camera.

    $ gedit ~/firefly_pgr/launch/ffmv.yaml

    Edit the guid: 00b09d0100c72638 line to whatever number coriander reported for your camera in the previous steps. DO NOT include the “0x” prefix or any dashes while entering the number. Save and exit.
    Let us launch the camera package now:

    $ roslaunch firefly_pgr firefly_pgr.launch

    If everything goes well, you will not see any errors. In a new terminal window, list the published topics:

    $ rostopic list

    You should see the following:

    /camera/camera_info
    /camera/image_color
    /camera/image_color/compressed
    /camera/image_color/compressed/parameter_descriptions
    /camera/image_color/compressed/parameter_updates
    /camera/image_color/compressedDepth
    /camera/image_color/compressedDepth/parameter_descriptions
    /camera/image_color/compressedDepth/parameter_updates
    /camera/image_color/theora
    /camera/image_color/theora/parameter_descriptions
    /camera/image_color/theora/parameter_updates
    /camera/image_mono
    /camera/image_mono/compressed
    /camera/image_mono/compressed/parameter_descriptions
    /camera/image_mono/compressed/parameter_updates
    /camera/image_mono/compressedDepth
    /camera/image_mono/compressedDepth/parameter_descriptions
    /camera/image_mono/compressedDepth/parameter_updates
    /camera/image_mono/theora
    /camera/image_mono/theora/parameter_descriptions
    /camera/image_mono/theora/parameter_updates
    /camera/image_raw
    /camera/image_raw/compressed
    /camera/image_raw/compressed/parameter_descriptions
    /camera/image_raw/compressed/parameter_updates
    /camera/image_raw/compressedDepth
    /camera/image_raw/compressedDepth/parameter_descriptions
    /camera/image_raw/compressedDepth/parameter_updates
    /camera/image_raw/theora
    /camera/image_raw/theora/parameter_descriptions
    /camera/image_raw/theora/parameter_updates
    /camera1394_nodelet/parameter_descriptions
    /camera1394_nodelet/parameter_updates
    /camera_nodelet_manager/bond
    /diagnostics
    /image_proc_debayer/parameter_descriptions
    /image_proc_debayer/parameter_updates
    /rosout
    /rosout_agg
    

    The launch file automatically opens up the image_view node to show you the raw image coming out of the camera.
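One small gotcha from the steps above: coriander may display the GUID with a leading “0x” and a dash, both of which must be stripped before the value goes into ffmv.yaml. A quick shell sketch of the cleanup (the raw value below is hypothetical, chosen to match the example GUID already in the yaml file):

```shell
# GUID roughly as coriander might display it (hypothetical example value)
guid_raw="0x00b09d01-00c72638"

# Strip the leading 0x and any dashes to get the form ffmv.yaml expects
guid=$(printf '%s' "$guid_raw" | sed -e 's/^0x//' -e 's/-//g')
printf '%s\n' "$guid"    # 00b09d0100c72638
```

Paste the resulting 16-digit hex string straight into the guid: line of ffmv.yaml.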





ASUS G53S Power up issue solved

11 04 2012

Much like a lot of people online, I have been experiencing the power-up problem with my ASUS G53S laptop. In short, when the lid is open and the laptop is already turned on, I can restart it without a problem. I can even shut it down and turn it on again after a while. However, if I close the lid, wait a while, and then try to power it up again, it just will not turn on! This got really frustrating. So this is what I tried next: updating the BIOS, taking out the battery and plugging it back in, and unplugging the power supply and plugging it back in; however, none of these seemed to work.

Then I found this forum post by the user sadozai online: http://forum.notebookreview.com/8035363-post11.html He recommended moving a light magnet over the bottom-right 2″×2″ area of the laptop, as, according to him, the problem lies with the magnetic sensor installed in that area that responds to the lid open/close action. This seemed like a plausible diagnosis, so I had to test it out. All I had to do now was find a light magnet in my office… hmm… how about a small motor? Motors have magnets in them, right? So I just put a piece of paper on the bottom-right corner and moved the motor’s butt end in circles over that area. Voila! As soon as I pressed the power button this time, the laptop turned on without a problem! All I need now is to carry a motor with me everywhere I take my laptop :p The other option is to send it back to the shop for repair/replacement, but what fun would that be?





Import .STL file into Autodesk Inventor Professional 2012 as an Assembly

6 03 2012

I spent a couple of hours today trying to figure out a way to import .STL files into Autodesk Inventor Professional 2012 as a part or an assembly. In this step-by-step tutorial, I will describe my way of accomplishing this. For those of you who are not familiar with the STL format, please refer to this link: http://en.wikipedia.org/wiki/STL_(file_format) . You can view and process mesh files using the popular open-source software MeshLab. I would highly recommend you download it and try it out.

For this tutorial you will need the following programs from Autodesk:

  • Autodesk 3ds Max 2012
  • Autodesk Inventor Professional 2012

Both of the above are free for students.

The following is a snapshot from MeshLab of the .STL file that I am trying to convert:

Steps to Convert:

  1. Open Autodesk 3ds Max 2012 and click on import.
  2. Select the .STL file and open it.
  3. Once you click on Open, the following window will pop up:
  4. Click OK. The software will now try to open the file and weld the mesh. This can take a very, very long time, so have some patience.
  5. Once successfully imported, you will see the following in the Autodesk 3ds Max window:
  6. Next, click on export.
  7. Save as type “ACIS SAT (*.SAT)” file and click Save.
  8. You will see the following options box. Make sure the options for “Export Basic 3ds Max Primitives (Box, Cyl, Cone, Sph, Torus)”, “Export 3ds Max NURBS Objects”, “Export Mesh Objects (Objects Collapsible to Mesh)” are checked. Click OK.
  9. The software will now attempt to save the file in the new .SAT format. This can also take a very, very long time, so have some patience.
  10. Once finished exporting, you can now open the .SAT file in Autodesk Inventor Professional. As before, it can take a while for Inventor to open it, so hang in there. Once done, you will see the following:
  11. Make sure to save the work in Inventor, and you should now have the full Assembly drawing.
  12. The downside is that a 2.11 MB .STL file resulted in a 77.8 MB .SAT file, and the resulting Inventor assembly files total 90.5 MB. So, be VERY VERY careful about what you are trying to convert. Make sure you have enough physical memory and hard disk space, as the resulting files are usually 35 to 45 times bigger. Try to reduce your mesh as much as you can before converting.
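On that last point, mesh reduction can even be scripted: MeshLab ships with a Quadric Edge Collapse Decimation filter that can be driven headlessly via meshlabserver. The filter-script sketch below is an assumption on my part — the parameter names and the 100000-face target are from memory and may differ between MeshLab versions — so treat it as a starting point, not a recipe:

```xml
<!DOCTYPE FilterScript>
<FilterScript>
  <!-- Decimate the mesh to roughly 100k faces before converting.
       Parameter names are assumptions and may vary by MeshLab version. -->
  <filter name="Quadric Edge Collapse Decimation">
    <Param type="RichInt" name="TargetFaceNum" value="100000"/>
    <Param type="RichBool" name="PreserveTopology" value="true"/>
  </filter>
</FilterScript>
```

You would then run something like meshlabserver -i model.stl -o reduced.stl -s reduce.mlx before attempting the 3ds Max import.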





ROS + OpenCV : Read a Video File using FFMPEG

15 08 2011

In ROS Diamondback, they disabled the video interface options. That means you cannot capture video from a file or a web camera using the standard OpenCV function calls. For more details, please visit: http://answers.ros.org/question/1128/opencv-videocapture-not-working-in-diamondback.

This frustrated me to no end, so I decided to write my own class based on FFMPEG to read a video file. This program shows how to capture frames from a video file using FFMPEG, store them in the OpenCV cv::Mat template, and display the result in a HighGUI window.

Way to run it

  1. Download the package into a folder: DOWNLOAD LINK , and make sure to add the folder to the ROS_PACKAGE_PATH. I have it in my HOME directory, so I added the following line at the end of my .bashrc:
    export ROS_PACKAGE_PATH=~/testROSOpenCVFFMPEG:$ROS_PACKAGE_PATH

    Make sure to restart the terminal window for it to take effect.

  2. I have included a sample video located in the samples sub-folder. Make sure to change the path of the input Video file on the line 350: fileName=”/home/sid/testROSOpenCVFFMPEG/samples/testVideo.avi”; to the full path and name of the file for your case.
  3. If you would like to save all the frames of the video as *.ppm files, make sure to un-comment the code from line 302 to 308.
  4. Open a new terminal window and type the following:
    $ sudo bash

    Provide your password, and you will then have root access.

    # roscd testROSOpenCVFFMPEG/
    # rosmake testROSOpenCVFFMPEG

    Make sure that the package built with no errors.
    To run it, type:

    # rosrun testROSOpenCVFFMPEG testROSOpenCVFFMPEG 
  5. You should now see an OpenCV window. The window waits for you to press a key before it grabs the next frame from the video file, so to step through the video, keep pressing any key. Have FUN with it! If you break it, let me know how you managed to do it, and I will try to fix it… maybe 🙂
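Step 3 above mentions dumping frames as *.ppm files. The binary PPM (P6) format that saveFrame() writes is nothing more than a small text header followed by raw RGB bytes, which makes it easy to sanity-check from the shell. A minimal sketch that writes a 2×2 all-red image (the /tmp/red.ppm path is arbitrary):

```shell
# Write a 2x2 all-red binary PPM (P6), the same layout saveFrame() produces.
w=2; h=2
{
  printf 'P6\n%d %d\n255\n' "$w" "$h"   # header: magic, width height, max channel value
  i=0
  while [ "$i" -lt $((w*h)) ]; do
    printf '\377\000\000'               # one pixel: R=255, G=0, B=0
    i=$((i+1))
  done
} > /tmp/red.ppm
wc -c < /tmp/red.ppm                    # 11 header bytes + 4*3 pixel bytes = 23
```

Any image viewer that understands PPM (eog, GIMP, even MeshLab’s texture loader) should open the result.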

testROSOpenCVFFMPEG Source Code:

/*
Copyright (c) 2011, Siddhant Ahuja (Sid)
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of the <organization> nor the
      names of its contributors may be used to endorse or promote products
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
This program shows how to capture frames from a video file using FFMPEG, store 
them in OpenCV cv::Mat template and display the result in a HighGUI Window.
*/

//Includes all the headers necessary to use the most common public pieces of the ROS system.
#include <ros/ros.h>
//Include all the FFMPEG header files.
extern "C" { 
	//Library containing decoders and encoders for audio/video codecs.
	#include <libavcodec/avcodec.h>
	//Library containing demuxers and muxers for multimedia container formats.
	#include <libavformat/avformat.h>
	//Library performing highly optimized image scaling and color space/pixel format conversion operations.
	#include <libswscale/swscale.h>
}
//Include headers for OpenCV Image processing.
#include <opencv2/imgproc/imgproc.hpp>
//Include headers for OpenCV GUI handling.
#include <opencv2/highgui/highgui.hpp>

using namespace std;

//Declare a string with the name of the window that we will create using OpenCV where processed images will be displayed.
static const char WINDOW[] = "Test ROS + OpenCV + FFMPEG";

//Create a Structure containing the OpenCV cv::Mat format and the status of error.
struct openCVFrameContext
{
	cv::Mat cvFrameBGR;
	bool errorStatus;
};

//This is the main class that contains all public and protected members.
class Capture_FFMPEG
{
	public:
		//Constructor.
		Capture_FFMPEG() {init();}
		
		//Deconstructor.
		~Capture_FFMPEG() { close();}
		
		//Function to open a Video File.
		virtual bool openVideoFile(const char* fileName);
		
		//Function to grab a frame and retrieve the data. This returns a structure that contains a matrix of type cv::Mat and the error status.
		virtual openCVFrameContext queryFrame();
		
		//Function to save the frame data as *.ppm file.
		virtual int saveFrame(AVFrame *pFrame, int width, int height, int iFrame);
		
	protected:
		//Function to initialize all protected variables.
		virtual void init();
		
		//Function to destroy all protected variables, structures and free up memory.
		virtual void close();
		
		char * fileName; //Variable to store the file name.
		int videoStream; //Variable to store the index of the Video Stream.
		int frameFinished; //Variable to determine if the frame data was fetched properly.
		AVFormatContext *pVFormatCtx;
		AVCodecContext *pVCodecCtx;
		AVCodec *pVCodec;
		AVFrame *pVFrame;
		AVFrame *pVFrameBGR;
		uint8_t *bufferBGR;
		AVPacket pVPacket;
		openCVFrameContext cvFrameContext;
		struct SwsContext *pVImgConvertCtx;
};

//Function to initialize all protected variables.
void Capture_FFMPEG::init()
{
	fileName=NULL;
	videoStream = -1;
	frameFinished = 0;
	//NULL out all pointers so that close() can safely test them even if
	//openVideoFile() was never called or failed part-way through.
	pVFormatCtx = NULL;
	pVCodecCtx = NULL;
	pVCodec = NULL;
	pVFrame = NULL;
	pVFrameBGR = NULL;
	bufferBGR = NULL;
	pVImgConvertCtx = NULL;
};

//Function to destroy all protected variables, structures and free up memory.
void Capture_FFMPEG::close()
{
	printf("close:Destroying Variables. \n");
	
	// Free the AVFrame
	if(pVFrame)
	{
		printf("close:Freeing AVFrame. \n");
		av_free(pVFrame);
	}
	
	// Free the AVFrame
	if(pVFrameBGR)
	{
		printf("close:Freeing AVFrameBGR. \n");
		av_free(pVFrameBGR);
	}
	
  	// Free the BGR conversion buffer.
  	if(bufferBGR)
  	{
  		printf("close:Freeing BGR Buffer. \n");
  		av_free(bufferBGR);
  	}

  	// Free the packet that was allocated by av_read_frame. Note that
  	// &pVPacket can never be NULL, so no check is needed here.
  	printf("close:Freeing Packet. \n");
  	av_free_packet(&pVPacket);
  	
  	// Free SwsContext.
  	if(pVImgConvertCtx)
  	{
  		printf("close:Freeing SWSContext. \n");
		sws_freeContext(pVImgConvertCtx);
	}
	
	// Close the codec.
	if(pVCodecCtx)
	{
		printf("close:Closing Codec. \n");
		avcodec_close(pVCodecCtx);
	}
	
	// Close the video file.
	if(pVFormatCtx)
	{
		printf("close:Closing Video File. \n");
  		av_close_input_file(pVFormatCtx);
  	}
};
//Function to open a Video File.
bool Capture_FFMPEG::openVideoFile(const char* fileName)
{
	bool errorStatus=false; //true: error.
		
	//Open the input file, and if unable to then return true. This function reads the file header and stores information about the file format in the AVFormatContext structure. The last three arguments are used to specify the file format, buffer size, and format options, but by setting this to NULL or 0, libavformat will auto-detect these.
	if(av_open_input_file(&pVFormatCtx, fileName, NULL, 0, NULL)!=0)
	{
		fprintf(stderr, "ERROR:openVideoFile:Could not open the video file\n");
		return errorStatus=true;
	}

	// Retrieve stream information. Populates pVFormatCtx->streams with the proper information.
	if(av_find_stream_info(pVFormatCtx)<0)
	{
		fprintf(stderr, "ERROR:openVideoFile:Could not open the stream\n");
		return errorStatus=true;
	}
	
	// Dump information about file onto standard error.
  	dump_format(pVFormatCtx, 0, fileName, 0);
	
	videoStream=-1;
	
	// Here we are only concerned with video streams, so find the index of the first video stream.
	for(int i=0; i< pVFormatCtx->nb_streams; i++)
	{
		if(pVFormatCtx->streams[i]->codec->codec_type==CODEC_TYPE_VIDEO) 
		{
			videoStream=i;
			break;
		}
	}
	
	//Check videoStream variable and if none found, throw error.
	if(videoStream<0)
	{
		fprintf(stderr, "ERROR:openVideoFile:Could not find a valid Video stream\n");
		return errorStatus=true;
	}
	
	//Get a pointer to the codec context for the video stream. The stream's information about the codec is in what we call the "codec context." This contains all the information about the codec that the stream is using, and now we have a pointer to it.
	pVCodecCtx=pVFormatCtx->streams[videoStream]->codec;
	
	// Find the actual codec and open it. Find the decoder for the video stream.
	pVCodec=avcodec_find_decoder(pVCodecCtx->codec_id);
	
	if(pVCodec==NULL) {
		fprintf(stderr, "ERROR:openVideoFile:Unsupported codec or codec not found!\n");
		return errorStatus=true;
	}
	printf("openVideoFile:Decoder: %s\n", pVCodec->name);
	
	// Open the codec.
	if(avcodec_open(pVCodecCtx, pVCodec)<0)
	{
		fprintf(stderr, "ERROR:openVideoFile:Could not open codec!\n");
		return errorStatus=true;
	}
	
	//Initialize a place to actually store the frame.
	pVFrame=avcodec_alloc_frame();
	
	// Allocate an AVFrame structure.
	pVFrameBGR=avcodec_alloc_frame();
	
	if(pVFrameBGR==NULL) {
		fprintf(stderr, "ERROR:openVideoFile:Could Not Allocate the frame!\n");
		return errorStatus=true;
	}
	
	//Allocate the place to put the raw data when we convert it. We use avpicture_get_size to get the size we need, and allocate the space manually.
	//Determine required buffer size.
	int numBytes=avpicture_get_size(PIX_FMT_BGR24, pVCodecCtx->width, pVCodecCtx->height);
	//printf("openVideoFile:Buffer Size: %d bytes\n", numBytes);
	
	//Allocate the Buffer.
	bufferBGR=(uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
	
	// Assign appropriate parts of buffer to image planes in pVFrameBGR. Note that pVFrameBGR is an AVFrame, but AVFrame is a superset of AVPicture.
	avpicture_fill((AVPicture *)pVFrameBGR, bufferBGR, PIX_FMT_BGR24, pVCodecCtx->width, pVCodecCtx->height);
	
	// Create an OpenCV Matrix with 3 channels.
	cvFrameContext.cvFrameBGR.create(pVCodecCtx->height, pVCodecCtx->width ,CV_8UC(3));
	
	//Final return of Error Status.
	return errorStatus;
};

//Function to grab a frame and retrieve the data. This returns a structure that contains a matrix of type cv::Mat and the error status.
openCVFrameContext Capture_FFMPEG::queryFrame()
{
	cvFrameContext.errorStatus=false; //true: error.
	
	//Return the next frame of a stream. The information is stored as a packet in pkt.
	if(av_read_frame(pVFormatCtx, &pVPacket)<0)
	{
		fprintf(stderr, "ERROR:queryFrame:Could not Read Frame!\n");
		cvFrameContext.errorStatus=true;
		return cvFrameContext; // Codec not found.
	}
	
	//Check if the packet belonged to the video stream.
	if(pVPacket.stream_index==videoStream) 
	{
		//printf("queryFrame:Decoding Video.\n");
		// Decode the video frame from the input buffer buf of size buf_size. To decode it, it makes use of the videocodec which was coupled with avctx using avcodec_open(). The resulting decoded frame is stored in picture.
		if(avcodec_decode_video(pVCodecCtx, pVFrame, &frameFinished, pVPacket.data, pVPacket.size)<0)
		{
			fprintf(stderr, "ERROR:queryFrame:Could Not Decode Video!\n");
			cvFrameContext.errorStatus=true;
			return cvFrameContext;
		}
		
		// Did we get a video frame?
		if(frameFinished) 
		{
			// printf("queryFrame:Querying Frame Number %d Finished. Now Converting Color Space and Scaling\n", pVCodecCtx->frame_number);
			// Convert the image from its native format to BGR.
			// Return an SwsContext to be used in sws_scale.
			if(pVImgConvertCtx==NULL)
			{
				pVImgConvertCtx = sws_getContext(pVCodecCtx->width, pVCodecCtx->height, pVCodecCtx->pix_fmt, pVCodecCtx->width, pVCodecCtx->height, PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL, NULL);
			}
			
			if(pVImgConvertCtx==NULL)
			{
				fprintf(stderr, "ERROR:queryFrame:Cannot initialize the conversion context!\n");
				cvFrameContext.errorStatus=true;
				return cvFrameContext;
			}
			
			// Scales the data in src according to our settings in our SwsContext.
			sws_scale(pVImgConvertCtx, pVFrame->data, pVFrame->linesize, 0, pVCodecCtx->height, pVFrameBGR->data, pVFrameBGR->linesize);
			
			// Populate the OpenCV Matrix.
			for(int y=0; y<pVCodecCtx->height; y++)
			{
        			for (int x=0; x<pVCodecCtx->width; x++)
        			{
					cvFrameContext.cvFrameBGR.at<cv::Vec3b>(y,x)[0]=pVFrameBGR->data[0][y * pVFrameBGR->linesize[0] + x * 3 + 0];
					cvFrameContext.cvFrameBGR.at<cv::Vec3b>(y,x)[1]=pVFrameBGR->data[0][y * pVFrameBGR->linesize[0] + x * 3 + 1];
					cvFrameContext.cvFrameBGR.at<cv::Vec3b>(y,x)[2]=pVFrameBGR->data[0][y * pVFrameBGR->linesize[0] + x * 3 + 2];
					
				}
			}			
			
			// Save Frame as *.ppm file.
			/*if(saveFrame(pVFrameBGR, pVCodecCtx->width, pVCodecCtx->height, pVCodecCtx->frame_number)<0)
			{
				fprintf(stderr, "ERROR:queryFrame:Could not Save Frame!\n");
				cvFrameContext.errorStatus=true;
				return cvFrameContext;
			}*/
			
		}
	}
	
	//Final return.
	return cvFrameContext;
};

//Function to save the frame data as *.ppm file.
int Capture_FFMPEG::saveFrame(AVFrame *pFrame, int width, int height, int iFrame) {

	FILE *pFile;
	char szFilename[32];
	int  y;

	// Open file.
	sprintf(szFilename, "frame%d.ppm", iFrame);
	pFile=fopen(szFilename, "wb");
	if(pFile==NULL)
		return -1;

	// Write header.
	fprintf(pFile, "P6\n%d %d\n255\n", width, height);
	
	printf("saveFrame: %s of size %d x %d \n", szFilename, width, height);
	
	// Write pixel data.
	for(y=0; y<height; y++)
		fwrite(pFrame->data[0]+y*pFrame->linesize[0], 1, width*3, pFile);

	// Close file.
	fclose(pFile);

	// Return success (the caller in queryFrame checks for a negative value).
	return 0;
};

int main(int argc, char **argv)
{
	//Registers all available file formats and codecs with the library so they will be used automatically when a file with the corresponding format/codec is opened.
	av_register_all();
	
	//Declare the Name of the File.
	string fileName;
	fileName="/home/sid/testROSOpenCVFFMPEG/samples/testVideo.avi";
	
	//Create and initialize an object of class Capture_FFMPEG.
	Capture_FFMPEG *capture = new Capture_FFMPEG;
	
	//Create a Structure that stores the return output of queryFrame();
	openCVFrameContext testCVFrameContext;
	
	//Initialize the errorStatus to false.
	testCVFrameContext.errorStatus=false;
	
	//Open the stream capture.
	testCVFrameContext.errorStatus=capture->openVideoFile(fileName.c_str());
	
	if(testCVFrameContext.errorStatus==true)
	{
		return 1;
	}
	
	//OpenCV HighGUI call to create a display window on start-up.
	cv::namedWindow(WINDOW, CV_WINDOW_AUTOSIZE);
	cvMoveWindow(WINDOW, 50, 50); 
	
	while(!testCVFrameContext.errorStatus)
	{
		//Query Frame
		testCVFrameContext=capture->queryFrame();
		
		//Display the Data in the OpenCV HighGUI Window.
		cv::imshow(WINDOW, testCVFrameContext.cvFrameBGR);
		
		//Wait for Keyboard input. The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active.
		cv::waitKey(0);
	}
	
	//Destroy the object
	delete capture;
	
	//Final Return 
	return 0;
}




ROS Packages for Color Thresholding an Image

11 08 2011

Just finished programming two packages needed to quickly determine the thresholds for an image:

  • colorThresholdReconfigure: GUI based on wxWidgets helps determine the parameters
  • colorThresholdProcess: Code that uses OpenCV to threshold image and show the result in a HighGUI window

When you run both the nodes, you will see the following GUI:

Way to run it

  1. Download both the packages into a folder: DOWNLOAD LINK , and make sure to add both folders to the ROS_PACKAGE_PATH. I have them in my HOME directory, so I added the following two lines at the end of my .bashrc:
    export ROS_PACKAGE_PATH=~/colorThresholdReconfigure:$ROS_PACKAGE_PATH
    export ROS_PACKAGE_PATH=~/colorThresholdProcess:$ROS_PACKAGE_PATH

    Make sure to restart the terminal windows for it to take effect.

  2. Next open a terminal window and run roscore.
  3. Open another terminal window and type the following:
    $ sudo bash

    Provide your password, and you will then have root access.

    # roscd colorThresholdReconfigure/
    # rosmake colorThresholdReconfigure

    Make sure that the package built with no errors.
    To run it, type:

    # rosrun colorThresholdReconfigure colorThresholdReconfigure 
  4. Open another terminal window and type the following:
    $ sudo bash

    Provide your password, and you will then have root access.

    # roscd colorThresholdProcess/
    # rosmake colorThresholdProcess

    Make sure that the package built with no errors.
    To run it, type:

    # rosrun colorThresholdProcess colorThresholdProcess 
  5. Have FUN with it! If you break it, let me know how you managed to do it, and I will try to fix it… maybe 🙂
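colorThresholdReconfigure pulls its initial state from the ROS parameter server (see the ros::param::get calls in the source below). A hypothetical parameter file you could load with rosparam load before running the node — the CTR_* names match what the code queries, but every value here is a placeholder you should tune yourself:

```yaml
# Hypothetical params file; indices follow the wxChoice order in the source.
CTR_mode: 1                  # 1 = Image
CTR_sourceColorSpace: 1      # 1 = BGR
CTR_convertToColorSpace: 4   # 4 = HSV
CTR_output: 2                # 2 = Binarized
CTR_minChannel_1: 0
CTR_maxChannel_1: 255
CTR_minChannel_2: 0
CTR_maxChannel_2: 255
CTR_minChannel_3: 0
CTR_maxChannel_3: 255
CTR_imagePath: /home/user/test.png
```

Loading this once (e.g. rosparam load params.yaml) saves you from re-entering the spin-control values every time the GUI starts.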

colorThresholdReconfigure Source Code:

/*
Copyright (c) 2011, Siddhant Ahuja (Sid)
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of the <organization> nor the
      names of its contributors may be used to endorse or promote products
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
This program provides a GUI interface for determining parameters needed to 
threshold an image.
*/

//Includes all the headers necessary to use the ROS system.
#include <ros/ros.h>
//Include headers for wxWidgets GUI handling.
#include <wx/wx.h>
#include <wx/aboutdlg.h>
#include <wx/spinctrl.h>

//Declare a list of unique IDs which would be used for event handlers.
enum
{
    	ID_About=1,
	ID_Mode, 
	ID_sourceColorSpace, 
	ID_convertToColorSpace, 
	ID_Output, 
	ID_minChannel_1, 
	ID_maxChannel_1, 
	ID_minChannel_2, 
	ID_maxChannel_2, 
	ID_minChannel_3, 
	ID_maxChannel_3
};


/*
Declare the App to set and get application-wide properties, implement the 
windowing system message or event loop, initiate application processing via 
wxApp::OnInit, and allow default processing of events not handled by other 
objects in the application.
*/

class CTRApp : public wxApp
{
public:
	//Initialization function.
	virtual bool OnInit();	
	//Exit function.
	int OnExit();

	//Global Variables.
	char** local_argv_;
};

/*
Declare macro IMPLEMENT_APP(appClass) to tell wxWidgets how to create an 
instance of the application class.
*/
IMPLEMENT_APP(CTRApp);
/*
Use DECLARE_APP(appClass) so that wxGetApp function (which returns a reference 
to your application object) is visible to other files.
*/
DECLARE_APP(CTRApp);

//Declare a frame-window class.
class CTRFrame: public wxFrame
{
public:
    	CTRFrame(const wxString& title, const wxPoint& pos, const wxSize& size);
protected: 
	//Called when the user clicks the About Option in the ToolBar.
    	void OnAbout(wxCommandEvent& event);
	void OnUpdateMode(wxCommandEvent& event);
	void OnUpdateSourceColorSpace(wxCommandEvent& event);
	void OnUpdateConvertToColorSpace(wxCommandEvent& event);
	void OnUpdateOutput(wxCommandEvent& event);
	void OnUpdateMinChannel_1(wxSpinEvent& event);
	void OnUpdateMaxChannel_1(wxSpinEvent& event);
	void OnUpdateMinChannel_2(wxSpinEvent& event);
	void OnUpdateMaxChannel_2(wxSpinEvent& event);
	void OnUpdateMinChannel_3(wxSpinEvent& event);
	void OnUpdateMaxChannel_3(wxSpinEvent& event);
	//Global Variables.
	wxString currImagePath;
private:
	//Every event handler needs its event table entry.
    	DECLARE_EVENT_TABLE();
};

//Declare Event Table.

BEGIN_EVENT_TABLE(CTRFrame, wxFrame)
	EVT_MENU(ID_About, CTRFrame::OnAbout)
	EVT_CHOICE(ID_Mode, CTRFrame::OnUpdateMode) 
	EVT_CHOICE(ID_sourceColorSpace, CTRFrame::OnUpdateSourceColorSpace) 
	EVT_CHOICE(ID_convertToColorSpace, CTRFrame::OnUpdateConvertToColorSpace) 
	EVT_CHOICE(ID_Output, CTRFrame::OnUpdateOutput)
	EVT_SPINCTRL(ID_minChannel_1, CTRFrame::OnUpdateMinChannel_1)
	EVT_SPINCTRL(ID_maxChannel_1, CTRFrame::OnUpdateMaxChannel_1)
	EVT_SPINCTRL(ID_minChannel_2, CTRFrame::OnUpdateMinChannel_2)
	EVT_SPINCTRL(ID_maxChannel_2, CTRFrame::OnUpdateMaxChannel_2)
	EVT_SPINCTRL(ID_minChannel_3, CTRFrame::OnUpdateMinChannel_3)
	EVT_SPINCTRL(ID_maxChannel_3, CTRFrame::OnUpdateMaxChannel_3)
END_EVENT_TABLE()

//Initialize the Frame.
CTRFrame::CTRFrame(const wxString& title, const wxPoint& pos, const wxSize& size): wxFrame( NULL, -1, title, pos, size )
{
	//Declare a File Menu.
	wxMenu *menuFile = new wxMenu;
	//Add Items to the array.
	menuFile->Append( ID_About, _("&About...") );
	//Declare the Menu Bar.
	wxMenuBar *menuBar = new wxMenuBar;
	//Add items from File Menu into the Menu Bar.
	menuBar->Append( menuFile, _("&File") );
	//Set the Menu Bar.
	SetMenuBar( menuBar );
	
	//Initialize All Local Variables
	int CTR_mode;
	int CTR_sourceColorSpace;
	int CTR_convertToColorSpace;
	int CTR_output;
	int CTR_minChannel_1;
	int CTR_maxChannel_1;
	int CTR_minChannel_2;
	int CTR_maxChannel_2;
	int CTR_minChannel_3;
	int CTR_maxChannel_3;
	std::string CTR_imagePath;

	//Populate all Local Variables
	ros::param::get("/CTR_mode", CTR_mode);	
	ros::param::get("/CTR_sourceColorSpace", CTR_sourceColorSpace);
	ros::param::get("/CTR_convertToColorSpace", CTR_convertToColorSpace);
	ros::param::get("/CTR_output", CTR_output);
	ros::param::get("/CTR_minChannel_1", CTR_minChannel_1);
	ros::param::get("/CTR_maxChannel_1", CTR_maxChannel_1);
	ros::param::get("/CTR_minChannel_2", CTR_minChannel_2);
	ros::param::get("/CTR_maxChannel_2", CTR_maxChannel_2);
	ros::param::get("/CTR_minChannel_3", CTR_minChannel_3);
	ros::param::get("/CTR_maxChannel_3", CTR_maxChannel_3);
	ros::param::get("/CTR_imagePath", CTR_imagePath);
	
	//Declare all Labels.
	
	wxStaticText *lblSelectMode = new wxStaticText(this, -1,_T("Mode"),wxPoint(10, 20), wxSize(100,25));	
	wxStaticText *lblSelectSourceColorSpace = new wxStaticText(this, -1,_T("Source Color Space"),wxPoint(10, 60), wxSize(155,25));
	wxStaticText *lblSelectConvertToColorSpace = new wxStaticText(this, -1,_T("Convert Color Space To"),wxPoint(10, 100), wxSize(155,25));
	wxStaticText *lblOutputType = new wxStaticText(this, -1,_T("Output Type"),wxPoint(10, 140), wxSize(100,25));	
	wxStaticText *lblMin = new wxStaticText(this, -1,_T("Min"),wxPoint(195, 180), wxSize(100,25));	
	wxStaticText *lblMax = new wxStaticText(this, -1,_T("Max"),wxPoint(320, 180), wxSize(100,25));
	wxStaticText *lblChannel_1 = new wxStaticText(this, -1,_T("Channel 1"),wxPoint(10, 220), wxSize(100,25));
	wxStaticText *lblChannel_2 = new wxStaticText(this, -1,_T("Channel 2"),wxPoint(10, 260), wxSize(100,25));
	wxStaticText *lblChannel_3 = new wxStaticText(this, -1,_T("Channel 3"),wxPoint(10, 300), wxSize(100,25));
	
	//Initialize all Controls
	
	wxChoice *modeSelect=new wxChoice(this, ID_Mode, wxPoint(195,15), wxSize(200,25));
	modeSelect->Append(_T("Select Mode"));
	modeSelect->Append(_T("Image"));
	modeSelect->SetSelection(CTR_mode);

	wxChoice *sourceColorSpaceSelect=new wxChoice(this, ID_sourceColorSpace, wxPoint(195,55), wxSize(200,25));
	sourceColorSpaceSelect->Append(_T("Select Color Space"));
	sourceColorSpaceSelect->Append(_T("BGR"));
	sourceColorSpaceSelect->Append(_T("RGB"));
	sourceColorSpaceSelect->SetSelection(CTR_sourceColorSpace);
	
	wxChoice *convertToColorSpaceSelect=new wxChoice(this, ID_convertToColorSpace, wxPoint(195,95), wxSize(200,25));
	convertToColorSpaceSelect->Append(_T("Select Color Space"));
	convertToColorSpaceSelect->Append(_T("BGR"));
	convertToColorSpaceSelect->Append(_T("RGB"));
	convertToColorSpaceSelect->Append(_T("Gray"));
	convertToColorSpaceSelect->Append(_T("HSV"));
	convertToColorSpaceSelect->Append(_T("HLS"));
	convertToColorSpaceSelect->Append(_T("CIE L*a*b*"));
	convertToColorSpaceSelect->Append(_T("CIE L*u*v*"));
	convertToColorSpaceSelect->Append(_T("CIE XYZ"));
	convertToColorSpaceSelect->Append(_T("YCrCb JPEG (a.k.a. YCC)"));
	convertToColorSpaceSelect->SetSelection(CTR_convertToColorSpace);

	wxChoice *outputTypeSelect=new wxChoice(this, ID_Output, wxPoint(195,135), wxSize(200,25));
	outputTypeSelect->Append(_T("Select Output Type"));
	outputTypeSelect->Append(_T("Color Preserve"));
	outputTypeSelect->Append(_T("Binarized"));
	outputTypeSelect->Append(_T("Binarized Inverted"));
	outputTypeSelect->SetSelection(CTR_output);

	wxSpinCtrl *minChannel_1 = new wxSpinCtrl(this, ID_minChannel_1, wxEmptyString, wxPoint(195,215), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_minChannel_1 , _T("spinCtrlMinChannel_1"));

	wxSpinCtrl *maxChannel_1 = new wxSpinCtrl(this, ID_maxChannel_1, wxEmptyString, wxPoint(320,215), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_maxChannel_1, _T("spinCtrlMaxChannel_1"));

	wxSpinCtrl *minChannel_2 = new wxSpinCtrl(this, ID_minChannel_2, wxEmptyString, wxPoint(195,255), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_minChannel_2, _T("spinCtrlMinChannel_2"));

	wxSpinCtrl *maxChannel_2 = new wxSpinCtrl(this, ID_maxChannel_2, wxEmptyString, wxPoint(320,255), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_maxChannel_2, _T("spinCtrlMaxChannel_2"));

	wxSpinCtrl *minChannel_3 = new wxSpinCtrl(this, ID_minChannel_3, wxEmptyString, wxPoint(195,295), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_minChannel_3, _T("spinCtrlMinChannel_3"));

	wxSpinCtrl *maxChannel_3 = new wxSpinCtrl(this, ID_maxChannel_3, wxEmptyString, wxPoint(320,295), wxSize(65, 25), wxSP_ARROW_KEYS, 0, 255, CTR_maxChannel_3, _T("spinCtrlMaxChannel_3"));
	
	//Create the Status Bar
	CreateStatusBar();
	//Set Status Bar Text
	SetStatusText(wxT("Welcome to Color Threshold Reconfigure App"));
};

//Application Initialization function.
bool CTRApp::OnInit()
{
	ROS_INFO("Color Threshold Reconfigure App::Initializing");
	// Create local copy of argv, with regular char*s.
	local_argv_ =  new char*[ argc ];
	for ( int i = 0; i < argc; ++i )
	{
		local_argv_[ i ] = strdup( wxString( argv[ i ] ).mb_str() );
	}
	
	/**
	* The ros::init() function needs to see argc and argv so that it can perform
	* any ROS arguments and name remapping that were provided at the command line. For programmatic
	* remappings you can use a different version of init() which takes remappings
	* directly, but for most command-line programs, passing argc and argv is the easiest
	* way to do it.  The third argument to init() is the name of the node. Node names must be unique in a running system.
	* The name used here must be a base name, ie. it cannot have a / in it.
	* You must call one of the versions of ros::init() before using any other
	* part of the ROS system.
	*/
	ros::init(argc, local_argv_, "colorThresholdReconfigure");

	//Initialize all Session Wide Parameters

	ros::param::set("CTR_mode", 0);
	ros::param::set("CTR_sourceColorSpace", 0);
	ros::param::set("CTR_convertToColorSpace", 0);
	ros::param::set("CTR_output", 0);
	ros::param::set("CTR_minChannel_1", 0);
	ros::param::set("CTR_maxChannel_1", 255);
	ros::param::set("CTR_minChannel_2", 0);
	ros::param::set("CTR_maxChannel_2", 255);
	ros::param::set("CTR_minChannel_3", 0);
	ros::param::set("CTR_maxChannel_3", 255);
	ros::param::set("CTR_imagePath", "");
	
	//Declare the frame window
	CTRFrame *frame = new CTRFrame(_("Color Threshold Reconfigure App"), wxPoint(50, 50), wxSize(410,400) );
	//Show the frame
	frame->Show(TRUE);
	//Set it as the top most window
	SetTopWindow(frame);
	
	return TRUE;
};
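The local_argv_ copy in OnInit() exists because wxWidgets hands the application its arguments as wxStrings (wide characters on most builds), while ros::init() expects plain char* pointers that stay valid for the node's lifetime. The copy/free idiom can be sketched in isolation like this (helper names are mine, not part of the app):

```cpp
#include <cstdlib>
#include <cstring>
#include <string>
#include <vector>

// Make a heap-allocated char* copy of each argument, mirroring what
// OnInit() does with strdup() before calling ros::init().
std::vector<char*> copy_args(const std::vector<std::string>& args)
{
	std::vector<char*> local;
	local.reserve(args.size());
	for (const std::string& a : args)
		local.push_back(strdup(a.c_str()));   // caller must free() each entry
	return local;
}

// Release the copies, matching OnExit()'s free()/delete[] pairing.
void free_args(std::vector<char*>& local)
{
	for (char* p : local)
		free(p);                              // strdup() allocates with malloc()
	local.clear();
}
```

Note that free() (not delete) must be used, because strdup() allocates with malloc(); the original OnExit() gets this right.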

//Application Exit function.
int CTRApp::OnExit()
{
	ROS_INFO("Color Threshold Reconfigure App::Exiting");
	for ( int i = 0; i < argc; ++i )
	{
		free( local_argv_[ i ] );
	}
	delete [] local_argv_;

	//Delete All Parameters
	ros::param::del("CTR_mode");
	ros::param::del("CTR_sourceColorSpace");
	ros::param::del("CTR_convertToColorSpace");
	ros::param::del("CTR_output");
	ros::param::del("CTR_minChannel_1");
	ros::param::del("CTR_maxChannel_1");
	ros::param::del("CTR_minChannel_2");
	ros::param::del("CTR_maxChannel_2");
	ros::param::del("CTR_minChannel_3");
	ros::param::del("CTR_maxChannel_3");
	ros::param::del("CTR_imagePath");
	return 0;
};

//Called when user clicks the About item in the menu
void CTRFrame::OnAbout(wxCommandEvent& WXUNUSED(event))
{
	ROS_INFO("Color Threshold Reconfigure App::Menu::About");
	wxAboutDialogInfo info;
	info.SetName(_("Color Threshold Reconfigure App"));
	info.SetVersion(_("1.0.0"));
	info.SetDescription(_("This program provides a GUI interface for determining the parameters needed to threshold an image"));
	info.SetCopyright(_T("(C) 2011 Siddhant Ahuja (Sid) <siddahuja@gmail.com>"));
	wxAboutBox(info);
};

void CTRFrame::OnUpdateMode(wxCommandEvent& event)
{	
	wxChoice* choice= dynamic_cast<wxChoice*>(event.GetEventObject());
	int currModeValue=choice->GetCurrentSelection();
	wxString currModeString=choice->GetStringSelection();
	//Set Parameter
	ros::param::set("CTR_mode", currModeValue);
	ROS_INFO("Color Threshold Reconfigure App::Current Mode Selected < Value , String > : < %d, %s >",currModeValue, (const char*)currModeString.mb_str());

	switch (currModeValue)
	{
		case 0:
			{
				//Reset All Params
				ros::param::set("CTR_imagePath", "");
			}
			break;
		case 1:
			{
				ROS_INFO("Color Threshold Reconfigure App::Select Image");	
				wxFileDialog *imageOpen=new wxFileDialog(this, _T("Choose a file"), wxEmptyString, wxEmptyString, _T("All Supported Image Formats|*.bmp;*.dib;*.jpeg;*.jpg;*.jpe;*.jp2;*.png;*.pbm;*.pgm;*.ppm;*.sr;*.ras;*.tiff;*.tif"), wxFD_DEFAULT_STYLE, wxDefaultPosition, wxDefaultSize, _T("Open an Image"));
				// Creates a Open Image Dialog
				if (imageOpen->ShowModal() == wxID_OK) 
				{
					//ROS_INFO("Color Threshold Reconfigure App::Select Image::OK");	
					//Get Image Path
					wxString currImagePath=imageOpen->GetPath();
					ROS_INFO("Color Threshold Reconfigure App::Select Image::Current Image Path : %s", (const char*)currImagePath.mb_str());
					//Set Parameter
					ros::param::set("CTR_imagePath", (const char*)currImagePath.mb_str());
				}
				else
				{
					//Set Parameter
					ros::param::set("CTR_imagePath", "");
				}
				imageOpen->Destroy();
			}
			break;
		default:
			break;
	}
};

void CTRFrame::OnUpdateSourceColorSpace(wxCommandEvent& event)
{	
	wxChoice* choice= dynamic_cast<wxChoice*>(event.GetEventObject());
	int currColorSpaceValue=choice->GetCurrentSelection();
	wxString currColorSpaceString=choice->GetStringSelection();
	//Set Parameter
	ros::param::set("CTR_sourceColorSpace", currColorSpaceValue);
	ROS_INFO("Color Threshold Reconfigure App::Current Source Color Space Selected <Value,String> : < %d, %s >",currColorSpaceValue, (const char*)currColorSpaceString.mb_str());
};

void CTRFrame::OnUpdateConvertToColorSpace(wxCommandEvent& event)
{	
	wxChoice* choice= dynamic_cast<wxChoice*>(event.GetEventObject());
	int currColorSpaceValue=choice->GetCurrentSelection();
	wxString currColorSpaceString=choice->GetStringSelection();
	//Set Parameter
	ros::param::set("CTR_convertToColorSpace", currColorSpaceValue);
	ROS_INFO("Color Threshold Reconfigure App::Current Convert To Color Space Selected <Value,String> : < %d, %s >",currColorSpaceValue, (const char*)currColorSpaceString.mb_str());
};

void CTRFrame::OnUpdateOutput(wxCommandEvent& event)
{	
	wxChoice* choice= dynamic_cast<wxChoice*>(event.GetEventObject());
	int currOutputValue=choice->GetCurrentSelection();
	wxString currOutputString=choice->GetStringSelection();
	//Set Parameter
	ros::param::set("CTR_output", currOutputValue);
	ROS_INFO("Color Threshold Reconfigure App::Current Output Selected <Value,String> : < %d, %s >",currOutputValue, (const char*)currOutputString.mb_str());
};

void CTRFrame::OnUpdateMinChannel_1(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMinChannel_1_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_minChannel_1", currMinChannel_1_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Min Channel 1 Selected <Value> : < %d >",currMinChannel_1_Value);
};

void CTRFrame::OnUpdateMaxChannel_1(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMaxChannel_1_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_maxChannel_1", currMaxChannel_1_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Max Channel 1 Selected <Value> : < %d >",currMaxChannel_1_Value);
};

void CTRFrame::OnUpdateMinChannel_2(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMinChannel_2_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_minChannel_2", currMinChannel_2_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Min Channel 2 Selected <Value> : < %d >",currMinChannel_2_Value);
};

void CTRFrame::OnUpdateMaxChannel_2(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMaxChannel_2_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_maxChannel_2", currMaxChannel_2_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Max Channel 2 Selected <Value> : < %d >",currMaxChannel_2_Value);
};

void CTRFrame::OnUpdateMinChannel_3(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMinChannel_3_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_minChannel_3", currMinChannel_3_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Min Channel 3 Selected <Value> : < %d >",currMinChannel_3_Value);
};

void CTRFrame::OnUpdateMaxChannel_3(wxSpinEvent& event)
{	
	wxSpinCtrl* spinCtrl= dynamic_cast<wxSpinCtrl*>(event.GetEventObject());
	int currMaxChannel_3_Value=spinCtrl->GetValue();
	//Set Parameter
	ros::param::set("CTR_maxChannel_3", currMaxChannel_3_Value);
	ROS_INFO("Color Threshold Reconfigure App::Current Max Channel 3 Selected <Value> : < %d >",currMaxChannel_3_Value);
};

colorThresholdProcess Source Code:

/*
Copyright (c) 2011, Siddhant Ahuja (Sid)
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in the
      documentation and/or other materials provided with the distribution.
    * Neither the name of the <organization> nor the
      names of its contributors may be used to endorse or promote products
      derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*
This program gets the system parameters saved from the GUI interface from 
package colorThresholdReconfigure and then uses OpenCV to threshold an image. 
The resultant thresholded image is displayed in a window
and also published as a topic.
*/

//Includes all the headers necessary to use the most common public pieces of the ROS system.
#include <ros/ros.h>
//Include headers for OpenCV Image processing
#include <opencv2/imgproc/imgproc.hpp>
//Include headers for OpenCV GUI handling
#include <opencv2/highgui/highgui.hpp>
using namespace std;

//Declare a string with the name of the window that we will create using OpenCV where processed images will be displayed.
static const char WINDOW[] = "Color Threshold Process";

//Initialize All Global Variables
int CTR_mode;
int CTR_sourceColorSpace;
int CTR_convertToColorSpace;
int CTR_output;
int CTR_minChannel_1;
int CTR_maxChannel_1;
int CTR_minChannel_2;
int CTR_maxChannel_2;
int CTR_minChannel_3;
int CTR_maxChannel_3;
std::string CTR_imagePath;

//Declare a 3 channel matrix that stores the input
cv::Mat inputImg;
//Declare a 3 channel matrix that stores the output
cv::Mat outputImg;
//Create a Matrix of Zeros to be shown by default in the OpenCV Window
cv::Mat imgZeros;
//Declare the Lookup Table that will store the color space conversion constants
cv::Mat LUT_ColorSpaceConvert(2, 9, CV_8UC1);

void imgConvertColorSpace()
{
	ROS_INFO("Color Threshold Process App::Convert Color Space");
	//ROS_INFO("Color Threshold Process App::Convert Color Space::Source Color Space : < value > : < %d >", CTR_sourceColorSpace);
	//ROS_INFO("Color Threshold Process App::Convert Color Space::Convert To Color Space : < value > : < %d >", CTR_convertToColorSpace);
	if(CTR_sourceColorSpace!=CTR_convertToColorSpace && CTR_sourceColorSpace!=0 && CTR_convertToColorSpace!=0)
	{
		//ROS_INFO("Color Threshold Process App::Convert Color Space::Source and convert to color spaces DO NOT match. Attempting to convert color space.");
		//Search in the Lookup Table to find the corresponding color conversion code
		unsigned char colorConversionCode = LUT_ColorSpaceConvert.at<unsigned char>(CTR_sourceColorSpace-1,CTR_convertToColorSpace-1);
		//ROS_INFO("Color Threshold Process App::Convert Color Space::Color Conversion Code Located: < value > : < %d >", (int)colorConversionCode);

		switch (colorConversionCode)
		{
			case 0: //Conversion between same color space
				ROS_WARN("Color Threshold Process App::Convert Color Space::Attempt to convert between same color spaces.");
				break;
			case 1: //No direct conversion available. Convert via an intermediate space like BGR.
				ROS_WARN("Color Threshold Process App::Convert Color Space::No direct conversion available. Convert via an intermediate space like BGR.");
				break;
			default: //Convert color space and store the result
				cv::cvtColor(inputImg, outputImg, colorConversionCode, 0);
				break;
		}
	}
	else
	{
		//ROS_INFO("Color Threshold Process App::Convert Color Space::Source and convert to Color spaces match <or> One of color spaces not selected. Do nothing.");
	}
};

void imgThreshold()
{
	ROS_INFO("Color Threshold Process App::Image Threshold");
	//Declare common variables
	unsigned char ch1, ch2, ch3; //stores the channel 1, channel 2, and channel 3 data
	bool dstValue;

	//Perform threshold
	for (int i=0; i< outputImg.rows; i++)
	{
		for (int j = 0; j< outputImg.cols; j++)
		{
			if(outputImg.channels()>1)
			{
				//Populate color values
				ch1 = outputImg.at<cv::Vec3b>(i,j)[0];
				ch2 = outputImg.at<cv::Vec3b>(i,j)[1];
				ch3 = outputImg.at<cv::Vec3b>(i,j)[2];
				//check range
				if(ch1>=CTR_minChannel_1 && ch1<=CTR_maxChannel_1 && ch2>=CTR_minChannel_2 && ch2<=CTR_maxChannel_2 && ch3>=CTR_minChannel_3 && ch3<=CTR_maxChannel_3)
				{
					dstValue=true;
				}
				else
				{
					dstValue=false;
				}
				if(dstValue)
				{
					switch(CTR_output)
					{
						case 0: //do nothing
							break;
						case 1: //Color Preserve
							{
								outputImg.at<cv::Vec3b>(i,j)[0]=ch1;	
								outputImg.at<cv::Vec3b>(i,j)[1]=ch2;
								outputImg.at<cv::Vec3b>(i,j)[2]=ch3;
							}
							break;
						case 2: //Binarized
							{
								outputImg.at<cv::Vec3b>(i,j)[0]=255;
								outputImg.at<cv::Vec3b>(i,j)[1]=255;
								outputImg.at<cv::Vec3b>(i,j)[2]=255;
							}
							break;	
						case 3: //Binarized Inverted
							{
								outputImg.at<cv::Vec3b>(i,j)[0]=0;
								outputImg.at<cv::Vec3b>(i,j)[1]=0;
								outputImg.at<cv::Vec3b>(i,j)[2]=0;
							}
							break;	
						default:
							break;
					}
				}
				else
				{
					switch(CTR_output)
					{
						case 0: //do nothing
							break;
						case 1: //Color Preserve
							{
								outputImg.at<cv::Vec3b>(i,j)[0]=0;
								outputImg.at<cv::Vec3b>(i,j)[1]=0;
								outputImg.at<cv::Vec3b>(i,j)[2]=0;
							}
							break;
						case 2: //Binarized
							{
								outputImg.at<cv::Vec3b>(i,j)[0]=0;
								outputImg.at<cv::Vec3b>(i,j)[1]=0;
								outputImg.at<cv::Vec3b>(i,j)[2]=0;
							}
							break;	
						case 3: //Binarized Inverted
							{	
								outputImg.at<cv::Vec3b>(i,j)[0]=255;
								outputImg.at<cv::Vec3b>(i,j)[1]=255;
								outputImg.at<cv::Vec3b>(i,j)[2]=255;
							}
							break;	
						default:
							break;
					}
				}
			}
			else
			{
				//Gray Scale Image
				//Populate color values
				ch1 = outputImg.at<unsigned char>(i,j);
				//check range
				if(ch1>=CTR_minChannel_1 && ch1<=CTR_maxChannel_1)
				{
					dstValue=true;
				}
				else
				{
					dstValue=false;
				}
				if(dstValue)
				{
					switch(CTR_output)
					{
						case 0: //do nothing
							break;
						case 1: //Color Preserve
							{
								outputImg.at<unsigned char>(i,j)=ch1;
							}
							break;
						case 2: //Binarized
							{
								outputImg.at<unsigned char>(i,j)=255;
							}
							break;	
						case 3: //Binarized Inverted
							{
								outputImg.at<unsigned char>(i,j)=0;
							}
							break;	
						default:
							break;
					}
				}
				else
				{
					switch(CTR_output)
					{
						case 0: //do nothing
							break;
						case 1: //Color Preserve
							{
								outputImg.at<unsigned char>(i,j)=0;
							}
							break;
						case 2: //Binarized
							{
								outputImg.at<unsigned char>(i,j)=0;
							}
							break;	
						case 3: //Binarized Inverted
							{	
								outputImg.at<unsigned char>(i,j)=255;
							}
							break;	
						default:
							break;
					}
				}
			}
		}
	}
	
};
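The nested loops above reduce to a per-channel range test on each pixel followed by one of three write-back policies (Color Preserve, Binarized, Binarized Inverted). The decision logic, stripped of the OpenCV types, can be sketched for a single 3-channel pixel like this (names and enum values are mine, chosen to match the GUI's choice indices):

```cpp
#include <array>

// Output modes, matching the "Output Type" choice indices in the GUI.
enum Output { ColorPreserve = 1, Binarized = 2, BinarizedInverted = 3 };

// Decide one pixel's output value, mirroring the three cases above.
std::array<unsigned char, 3> threshold_pixel(std::array<unsigned char, 3> px,
                                             const unsigned char lo[3],
                                             const unsigned char hi[3],
                                             int output)
{
	// In range only if every channel lies within its [min, max] band.
	bool in = true;
	for (int c = 0; c < 3; ++c)
		in = in && px[c] >= lo[c] && px[c] <= hi[c];

	switch (output)
	{
		case ColorPreserve:     // keep the pixel if in range, else black
			return in ? px : std::array<unsigned char, 3>{0, 0, 0};
		case Binarized:         // white if in range, else black
			return in ? std::array<unsigned char, 3>{255, 255, 255}
			          : std::array<unsigned char, 3>{0, 0, 0};
		case BinarizedInverted: // black if in range, else white
			return in ? std::array<unsigned char, 3>{0, 0, 0}
			          : std::array<unsigned char, 3>{255, 255, 255};
		default:                // mode 0: leave the pixel untouched
			return px;
	}
}
```

For the Binarized case specifically, OpenCV's cv::inRange() computes the same whole-image mask in one call and would avoid the per-pixel at<>() accesses, which are slow in a double loop.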

void populateLUTColorSpaceConvert()
{
	ROS_INFO("Color Threshold Process App::Populating Color Space Conversion Lookup Table");
	//Row/Col order: BGR, RGB, Gray, HSV, HLS, Lab, Luv, XYZ, YCC
	//0: Conversion between same color space
	//1: No direct conversion available. Convert via an intermediate space like BGR
	
	//BGR to BGR
	LUT_ColorSpaceConvert.at<unsigned char>(0,0) = 0;
	//BGR to RGB
	LUT_ColorSpaceConvert.at<unsigned char>(0,1) = CV_BGR2RGB;
	//BGR to Gray
	LUT_ColorSpaceConvert.at<unsigned char>(0,2) = CV_BGR2GRAY;
	//BGR to HSV
	LUT_ColorSpaceConvert.at<unsigned char>(0,3) = CV_BGR2HSV;
	//BGR to HLS
	LUT_ColorSpaceConvert.at<unsigned char>(0,4) = CV_BGR2HLS;
	//BGR to Lab
	LUT_ColorSpaceConvert.at<unsigned char>(0,5) = CV_BGR2Lab;
	//BGR to Luv
	LUT_ColorSpaceConvert.at<unsigned char>(0,6) = CV_BGR2Luv;
	//BGR to XYZ
	LUT_ColorSpaceConvert.at<unsigned char>(0,7) = CV_BGR2XYZ;
	//BGR to YCC
	LUT_ColorSpaceConvert.at<unsigned char>(0,8) = CV_BGR2YCrCb;
	
	//RGB to BGR
	LUT_ColorSpaceConvert.at<unsigned char>(1,0) = CV_RGB2BGR;
	//RGB to RGB
	LUT_ColorSpaceConvert.at<unsigned char>(1,1) = 0;
	//RGB to Gray
	LUT_ColorSpaceConvert.at<unsigned char>(1,2) = CV_RGB2GRAY;
	//RGB to HSV
	LUT_ColorSpaceConvert.at<unsigned char>(1,3) = CV_RGB2HSV;
	//RGB to HLS
	LUT_ColorSpaceConvert.at<unsigned char>(1,4) = CV_RGB2HLS;
	//RGB to Lab
	LUT_ColorSpaceConvert.at<unsigned char>(1,5) = CV_RGB2Lab;
	//RGB to Luv
	LUT_ColorSpaceConvert.at<unsigned char>(1,6) = CV_RGB2Luv;
	//RGB to XYZ
	LUT_ColorSpaceConvert.at<unsigned char>(1,7) = CV_RGB2XYZ;
	//RGB to YCC
	LUT_ColorSpaceConvert.at<unsigned char>(1,8) = CV_RGB2YCrCb;
};
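The lookup table above is just a 2×9 grid: rows are the supported source spaces (BGR, RGB), columns the nine destinations, and each cell holds a cvtColor conversion code (or a sentinel). The indexing scheme, with the OpenCV constants replaced by placeholder integers (the values below are illustrative only, not the real cvtColor codes), can be sketched as:

```cpp
// 2 source rows (BGR, RGB) x 9 destination columns, as in the app.
const int kRows = 2;
const int kCols = 9;

int lut[kRows][kCols];

// Fill the grid; 0 on the "same space" diagonal, placeholder codes elsewhere.
void populate_lut()
{
	for (int r = 0; r < kRows; ++r)
		for (int c = 0; c < kCols; ++c)
			lut[r][c] = (r == c) ? 0 : 10 * r + c + 1;  // placeholder codes
}

// src/dst are the 1-based GUI choice indices; 0 means "not selected".
// Returns -1 when there is nothing to convert, matching the guard in
// imgConvertColorSpace() before the table lookup.
int conversion_code(int src, int dst)
{
	if (src == 0 || dst == 0 || src == dst)
		return -1;
	return lut[src - 1][dst - 1];  // same shift as LUT_ColorSpaceConvert.at(src-1, dst-1)
}
```

The -1 shift matters: the GUI reserves index 0 for the "Select Color Space" placeholder entry, so the table rows and columns start from choice index 1.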

/**
* This tutorial demonstrates simple image conversion between ROS image message and OpenCV formats and image processing
*/
int main(int argc, char **argv)
{
	ROS_INFO("Color Threshold Process App::Initializing");
	
	/**
	* The ros::init() function needs to see argc and argv so that it can perform
	* any ROS arguments and name remapping that were provided at the command line. For programmatic
	* remappings you can use a different version of init() which takes remappings
	* directly, but for most command-line programs, passing argc and argv is the easiest
	* way to do it.  The third argument to init() is the name of the node. Node names must be unique in a running system.
	* The name used here must be a base name, ie. it cannot have a / in it.
	* You must call one of the versions of ros::init() before using any other
	* part of the ROS system.
	*/
	ros::init(argc, argv, "colorThresholdProcess");

	/**
	* NodeHandle is the main access point to communications with the ROS system.
	* The first NodeHandle constructed will fully initialize this node, and the last
	* NodeHandle destructed will close down the node.
	*/
	ros::NodeHandle nh;

	ros::Rate loop_rate(10);
       
	//OpenCV HighGUI call to create a display window on start-up.
	cv::namedWindow(WINDOW, CV_WINDOW_AUTOSIZE);
	cvMoveWindow(WINDOW, 460, 50); 
	
	//Create a Matrix of Zeros to be shown by default in the OpenCV Window
	imgZeros=cv::Mat::zeros(240, 320, CV_8UC3);
	
	//Populate the Lookup Table
	populateLUTColorSpaceConvert();

	while (ros::ok())
	{
		//Populate all Local Variables
		ros::param::get("/CTR_mode", CTR_mode);	
		ros::param::get("/CTR_sourceColorSpace", CTR_sourceColorSpace);
		ros::param::get("/CTR_convertToColorSpace", CTR_convertToColorSpace);
		ros::param::get("/CTR_output", CTR_output);
		ros::param::get("/CTR_minChannel_1", CTR_minChannel_1);
		ros::param::get("/CTR_maxChannel_1", CTR_maxChannel_1);
		ros::param::get("/CTR_minChannel_2", CTR_minChannel_2);
		ros::param::get("/CTR_maxChannel_2", CTR_maxChannel_2);
		ros::param::get("/CTR_minChannel_3", CTR_minChannel_3);
		ros::param::get("/CTR_maxChannel_3", CTR_maxChannel_3);
		ros::param::get("/CTR_imagePath", CTR_imagePath);
		
		switch(CTR_mode)
		{
			case 0: //No option selected
				{
					//ROS_INFO("Color Threshold Process App::CTR_mode < value > : < %d > ::Clear Window", CTR_mode);
					//Display a Blank Screen
					cv::imshow(WINDOW, imgZeros);
				}
				break;
			case 1: //Process an Image
				{
					
					ROS_INFO("Color Threshold Process App::CTR_mode < value > : < %d > ::Process an Image", CTR_mode);
					if(!CTR_imagePath.empty())
					{
						//ROS_INFO("Color Threshold Process App::CTR_imagePath < string > : < %s >", CTR_imagePath.c_str());
						
						inputImg = cv::imread(CTR_imagePath.c_str(),1);
						if(inputImg.empty())
						{
							ROS_ERROR("Color Threshold Process App::Could not read image!");
							return -1;
						}
						outputImg = inputImg.clone();
						imgConvertColorSpace();
						//ROS_INFO("Color Threshold Process App::Number of Channels < value > : < %d >", outputImg.channels());
						imgThreshold();
						cv::imshow(WINDOW, outputImg);
					}
					else
					{
						//Display a Blank Screen
						cv::imshow(WINDOW, imgZeros);
					}
						
				}
				break;
			default:
				break;
		}
		
		//Wait a few milliseconds so HighGUI can process window events. The function only works if there is at least one HighGUI window created and the window is active. If there are several HighGUI windows, any of them can be active.
		cv::waitKey(3);	
		
		ros::spinOnce();
		loop_rate.sleep();
	}
	//OpenCV HighGUI call to destroy a display window on shut-down.
	cv::destroyWindow(WINDOW);
	
	//ROS_INFO is the replacement for printf/cout.
	ROS_INFO("Color Threshold Process::No error.");
	return 0;
}




ROS and FFMPEG talking to each other

9 08 2011

So after a long time of struggling to get video capture devices to work with ROS, I decided to just write my own code based on ffmpeg. For those of you not familiar with it, visit: ffmpeg.org. Little did I know that I would run into even more errors trying to get my code working with ffmpeg.

The first problem was finding the libavcodec, libavformat and libavutil libraries and linking them into the ROS project. To do this, you must edit the CMakeLists.txt file and use the target_link_libraries() function to link them. Here is what I have in mine:

cmake_minimum_required(VERSION 2.4.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)

# Set the build type.  Options are:
#  Coverage       : w/ debug symbols, w/o optimization, w/ code-coverage
#  Debug          : w/ debug symbols, w/o optimization
#  Release        : w/o debug symbols, w/ optimization
#  RelWithDebInfo : w/ debug symbols, w/ optimization
#  MinSizeRel     : w/o debug symbols, w/ optimization, stripped binaries
#set(ROS_BUILD_TYPE RelWithDebInfo)

rosbuild_init()

#set the default path for built executables to the "bin" directory
set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
#set the default path for built libraries to the "lib" directory
set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

#uncomment if you have defined messages
#rosbuild_genmsg()
#uncomment if you have defined services
#rosbuild_gensrv()

set(SEARCH_LIBRARY_PATHS /usr/lib /usr/local/lib /usr/lib32 /usr/lib64)

FIND_LIBRARY(LIBAVFORMAT_LIBRARY NAMES avformat PATHS ${SEARCH_LIBRARY_PATHS})
if(NOT LIBAVFORMAT_LIBRARY)
  message(FATAL_ERROR "libavformat library is missing")
else()
  message(STATUS "libavformat library found : " ${LIBAVFORMAT_LIBRARY})
endif()

FIND_LIBRARY(LIBAVCODEC_LIBRARY NAMES avcodec PATHS ${SEARCH_LIBRARY_PATHS})
if(NOT LIBAVCODEC_LIBRARY)
  message(FATAL_ERROR "libavcodec library is missing")
else()
  message(STATUS "libavcodec library found : " ${LIBAVCODEC_LIBRARY})
endif()

FIND_LIBRARY(LIBAVUTIL_LIBRARY NAMES avutil PATHS ${SEARCH_LIBRARY_PATHS})
if(NOT LIBAVUTIL_LIBRARY)
  message(FATAL_ERROR "libavutil library is missing")
else()
  message(STATUS "libavutil library found : " ${LIBAVUTIL_LIBRARY})
endif()
#common commands for building c++ executables and libraries
#rosbuild_add_library(${PROJECT_NAME} src/example.cpp)
#target_link_libraries(${PROJECT_NAME} another_library)
#rosbuild_add_boost_directories()
#rosbuild_link_boost(${PROJECT_NAME} thread)
rosbuild_add_executable(${PROJECT_NAME} src/main.cpp)

target_link_libraries(${PROJECT_NAME} ${LIBAVFORMAT_LIBRARY} ${LIBAVCODEC_LIBRARY} ${LIBAVUTIL_LIBRARY} )

Next, in Ubuntu 11.04, installing ffmpeg places the header files in the following directories: /usr/include/libavcodec/ , /usr/include/libavformat/ and /usr/include/libavutil/

So, in your ROS code, you must provide the relative paths:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>

Since all of the above headers and libraries are written in C, to use them from C++ you must wrap the includes in extern "C" { }, like the following:

extern "C" { 
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
}

I also kept getting “error: ‘’ was not declared in this scope”. To resolve these errors, add the following to your code:

using namespace std;

Now you can use rosmake to compile your program and it should work!





Waterloo robot tears up the track

29 07 2011

Source: Waterloo Daily Bulletin

Last weekend, July 23, the University of Waterloo’s Team Wavelab Robotics returned to the International Autonomous Robot Racing Competition at the University of British Columbia, with a redesigned version of the vehicle that crashed last year just before the competition and had to be rebuilt at lightning speed. This time, it was a clean sweep for the Waterloo team.

They won the Grand Award, and also captured first place in the Design, Drag Race, and Circuit competitions. Videos of the Circuit Race Training are here and here; a video of the Drag Race Competition is here.

Earlier this year, the joint UW Robotics and WAVELab Robotics team also did well at the International Ground Vehicles Competition at Oakland University in Rochester, with the best showing of any Canadian entry. The team is advised by mechanical and mechatronics engineering professors Steve Waslander and Sanjeev Bedi.