Wednesday, February 18, 2009

Boosting in Matlab

Boris gave me his Matlab boosting code. It is an implementation of AdaBoost using haar-like features. Each weak classifier is just a decision stump (it looks at a single feature and thresholds it).

The idea behind boosting is that many weak classifiers combine into one strong classifier. The weak classifiers used in this implementation are extremely simple, and their individual accuracy is not much better than 0.5. However, as we add more weak classifiers to the ensemble, the overall accuracy improves, as shown in the following graph:


What is a haar feature?
In this implementation, a random set of rectangles is generated, each with its own weight. Note that all of the training images must be the same size. These haar features are chosen without looking at the images. To evaluate a feature on an image, the pixels inside each rectangle are summed, and the sums are combined using the rectangle weights. Each haar feature therefore produces a single scalar value per image.
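As a rough sketch of what evaluating one of these features on a single image could look like (my own illustration of the idea, not Boris's actual code; the rectangle-plus-weight layout is an assumption):

function v = eval_haar_feature(img, rects)
% img: a grayscale training image (all images are the same size).
% rects: k x 5 matrix, one rectangle per row: [top row, left col, height, width, weight].
v = 0;
for k = 1:size(rects, 1)
    r = rects(k, 1); c = rects(k, 2);
    h = rects(k, 3); w = rects(k, 4);
    patch = img(r:r+h-1, c:c+w-1);
    v = v + rects(k, 5) * sum(patch(:));   % weighted sum of the pixels in this rectangle
end
end

A real implementation would normally compute these rectangle sums from an integral image so that each sum takes constant time, but the value that comes out is the same.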

Example of haar features:


What you'll need for an 'X' classifier:
1. A pool of images of 'X' (positive examples)
2. A pool of images that do not contain 'X' in them (negative examples)

Set aside part of each of the above pools for testing; use the remainder for training.

How it will go down:
1. Choose the number of haar features to create, and call this nh.
2. Apply all nh haar features to all of the training images, both positive and negative. As a result, for each image, we will have a feature vector of length nh.
3. Choose the number of weak classifiers desired, and call this nwc. Note that nwc <= nh.
4. Choose nwc of the nh haar features. Ideally, these are the most discriminative features in the pool. Associate a threshold with each of the nwc chosen features.
5. For each of the test images, find the nwc features. We now have a feature vector of length nwc for each of the test images.
6. Given the threshold for each of the nwc features, come up with a confidence score for each test image (see the sketch after this list).
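To make steps 3 through 6 concrete, here is a minimal sketch of how the stump selection (training) and the confidence score could look. This is my own illustration, not Boris's code: the stump representation (feature index, threshold, polarity) and the normalization of the confidence are assumptions.

function [stumps, alphas] = adaboost_stumps(F, y, nwc)
% F: n x nh matrix of haar feature values, y: n x 1 labels in {-1, +1},
% nwc: number of weak classifiers (boosting rounds).
[n, nh] = size(F);
w = ones(n, 1) / n;                          % example weights, start out uniform
stumps = zeros(nwc, 3);                      % each row: [feature index, threshold, polarity]
alphas = zeros(nwc, 1);
for t = 1:nwc
    best_err = inf;
    for j = 1:nh                             % try every feature as a stump
        for thr = unique(F(:, j))'
            for pol = [1, -1]
                pred = pol * sign(F(:, j) - thr);
                pred(pred == 0) = pol;
                err = sum(w(pred ~= y));     % weighted training error of this stump
                if err < best_err
                    best_err = err;
                    stumps(t, :) = [j, thr, pol];
                    best_pred = pred;
                end
            end
        end
    end
    alphas(t) = 0.5 * log((1 - best_err) / max(best_err, eps));
    w = w .* exp(-alphas(t) * y .* best_pred);   % misclassified examples gain weight
    w = w / sum(w);
end
end

function conf = boosted_score(f, stumps, alphas)
% f: feature vector for one test image, indexed the same way as the columns of F.
conf = 0;
for t = 1:length(alphas)
    j = stumps(t, 1); thr = stumps(t, 2); pol = stumps(t, 3);
    h = pol * sign(f(j) - thr);              % this stump's vote in {-1, +1}
    if h == 0, h = pol; end
    conf = conf + alphas(t) * h;             % weighted vote
end
conf = conf / sum(alphas);                   % normalize the weighted vote to roughly [-1, 1]
end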

I created an 'a' classifier for my project. I resized all of my training images to 24 by 24 pixels. I used a portion of the images of 'a' as the positive training examples and kept the remainder for testing. I used a combination of images of the letters 'b' through 'z' as the negative training examples, with a separate portion of those held out for testing. Using a threshold of 0.5, there were 194/1350 false positives, a rate of about 14%, and the true positive rate was 100%. Here is the ROC curve:


How this will apply to my project: Given an image of a letter, I will apply all 26 classifiers to it. I will then have a score for each letter, and can use this instead of the nearest neighbor mechanism I was using before.
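One simple way to do that, assuming each trained model can be asked for a scalar score (classifiers and score_letter below are hypothetical names, not an existing API):

scores = zeros(26, 1);
for k = 1:26
    scores(k) = score_letter(classifiers{k}, img);   % hypothetical per-letter confidence
end
[best_score, best] = max(scores);
predicted_letter = char('a' + best - 1);             % index 1 -> 'a', ..., 26 -> 'z'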

Monday, February 9, 2009

Boosting with OpenCV

My next step is to apply OpenCV's boosting framework to my problem. I have OpenCV installed, and after a bunch of linking issues, I can at least get things to compile. I have spent this weekend and today figuring out how to use it. I have a much better handle on it now and will hopefully get it working soon.

I have been referencing this tutorial, which is ok.

I plan to train 26 different classifiers, one for each letter.

Random fact: OpenCV either lets you give it one training image, from which it takes random samples, or lets you give it a bunch of images, which it leaves untouched. The tutorial mentioned above, however, describes a way to combine the two approaches.

Monday, February 2, 2009

Trying PCA

I decided to use PCA on my training data. I used Matlab's princomp function: I run princomp on my n x p matrix (n row vectors, each with p values) and get back a p x p matrix D of principal component coefficients. I decided to keep the first 100 components, so I multiplied my n x p data matrix by D(:,1:100). Sometimes the results are better, sometimes they are worse. I will do more experimentation and report back.
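For reference, a minimal sketch of that projection (D above corresponds to coeff below). One thing to keep in mind is that princomp centers the data internally, so its second output already gives the projected training data; a held-out test matrix (Xtest here is assumed) needs the training mean subtracted before projecting:

[coeff, score] = princomp(X);              % X is the n x p training matrix; coeff is p x p
Xtrain_reduced = score(:, 1:100);          % training data in the first 100 components
mu = mean(X, 1);                           % training mean, needed to project new data
Xtest_reduced = (Xtest - repmat(mu, size(Xtest, 1), 1)) * coeff(:, 1:100);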

Monday, January 26, 2009

Fixing the cropping issue

I've spent some time now fixing up my cropping function. I've been spending my time on this because I think it's really important to have quality training data. If my training data is crap, then nothing will work. I want to be able to cleanly cut out letters. So, I changed my cropping algorithm: first, cut the original image into four pieces, as follows:


Here is the left piece:


Right piece:


Top piece:


Bottom piece:


I run the Hough transform on all four pieces. In each piece, I know where a line would be, if one exists. I threshold on whether a line is actually there: if the maximum number of votes is higher than a certain percentage of the length of the image, I treat it as a line and cut the appropriate piece out.
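For the left and right pieces, for example, the border line is roughly vertical, so the check looks something like the sketch below. This is just the idea, written with the Image Processing Toolbox's hough/houghpeaks instead of my own Hough function, and the vote-threshold fraction frac is an assumed parameter:

bw = ~im2bw(piece, graythresh(piece));             % dark ink becomes true
[H, theta, rho] = hough(bw, 'Theta', -5:0.5:5);    % only look for near-vertical lines
peak = houghpeaks(H, 1);                           % strongest candidate line
votes = H(peak(1), peak(2));
if votes > frac * size(piece, 1)                   % enough votes relative to the piece height
    line_rho = rho(peak(1));                       % where the line sits; cut the piece off there
end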

The performance isn't as good as expected. I think I need to fix up my Hough transform function, because sometimes, when there appears to be a dark line, it won't find it (the number of votes is low).

Here are the overall results anyway:


The letters straight out of the training sheet look like this, with no postprocessing:


The next steps for me are to fix my Hough transform function, get the training data looking good, retry nearest neighbor, and then move on to OpenCV (boosting).

Wednesday, January 14, 2009

Cropping

I get better accuracy now when I try to classify my test data. However, I noticed that after running my "cropping" tool, the test data can get pretty deformed. For example, take a look at a before-and-after sequence:

Before:


After:


I decreased the dimensionality of the test data by a lot. The new test data characters are 32 by 32 pixels; before this, they were 120 by 123 pixels. This reduction in dimensionality really brought down the running time of classification. I think I need to tweak my cropping tool, though.
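The resize itself is just a call to imresize from the Image Processing Toolbox (the variable name cropped is assumed):

small = imresize(cropped, [32 32]);    % shrink the cropped character to 32 x 32 pixels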

Cropping tool = bad. Exhibit A:

When I change it so that only 2% of the pixels can be all white (instead of 5%), the cut-out changes to:


Now, the same test set looks like this:

Monday, January 12, 2009

Fixing up training data

I noticed that the letters in my training data take up only a small area of the image. This seems bad, so I fixed up my code to minimize the amount of white space surrounding each letter.
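A minimal sketch of that trimming, assuming grayscale images with dark ink on a light background (the darkness threshold is an arbitrary choice here):

ink = img < 0.9 * max(img(:));                         % pixels dark enough to count as letter
rows = find(any(ink, 2));                              % rows containing some ink
cols = find(any(ink, 1));                              % columns containing some ink
trimmed = img(rows(1):rows(end), cols(1):cols(end));   % crop to the ink bounding box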

Example of old training data:



Example of new training data:

Saturday, January 3, 2009

I'm baaaack

Didn't get enough last year? Miss me? Well, I'm back for more. I will be continuing the same project. Here is a refresher of how the project works:



The first thing I will be doing this week is getting back to the point I was at by the end of winter 2008. I will then research machine learning methods to apply to the problem, to replace the nearest neighbor approach.