Monday, March 9, 2009

Jittering Data

After my results came out really bad, I decided that my training data was the problem, both quality- and quantity-wise. It is difficult to get more training data quickly, so Serge recommended using what I already have to make more. I used Piotr Dollar's jitterImage function to take a single image and apply transformations to it to generate new examples.
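Here's a rough sketch of the idea in Python (this is not Piotr's actual MATLAB interface; the function and parameter names below are my own): take one character image and generate a grid of perturbed copies by applying small rotations and translations.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def jitter_image(img, max_angle=10.0, max_shift=2, n_angles=5, n_shifts=3):
    """Generate jittered copies of one grayscale character image by
    applying a small grid of rotations and translations."""
    copies = []
    angles = np.linspace(-max_angle, max_angle, n_angles)
    offsets = np.linspace(-max_shift, max_shift, n_shifts)
    for angle in angles:
        rotated = rotate(img, angle, reshape=False, mode='nearest')
        for dy in offsets:
            for dx in offsets:
                copies.append(shift(rotated, (dy, dx), mode='nearest'))
    return np.stack(copies)  # shape: (n_angles * n_shifts**2, H, W)
```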

Exaggeration: [image not shown]
Piotr's function applies both rotational and translational transformations to images. Serge also recommended playing with stroke thickness by raising an image to a power greater than 1, and separately to a power between 0 and 1.
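The thickness trick works because, for an image normalized to [0, 1], exponentiation pushes the soft edge pixels of a stroke toward one extreme or the other. A minimal sketch, assuming the character is drawn bright on a dark background (flip the interpretation if the polarity is reversed):

```python
import numpy as np

def vary_thickness(img, gamma):
    """Vary apparent stroke thickness by raising pixel values to a power.

    Assumes a float image in [0, 1] with the character bright on a dark
    background: gamma > 1 darkens the soft edge pixels (thinner stroke),
    while 0 < gamma < 1 brightens them (thicker stroke).
    """
    img = np.clip(img.astype(float), 0.0, 1.0)
    return img ** gamma

# e.g. thin = vary_thickness(example, 2.0); thick = vary_thickness(example, 0.5)
```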

I did all this and started out with around 50,000 examples of each character instead of 135. I quickly found out that I ran out of memory with that many examples. I eventually had to bring it down to 625 per character, and training still took forever.

The first thing I noticed was that the error on the training data was much higher than when I only had 135 examples. I think this is because there is much more variation in the data now, and it's hard to cover all of the cases with only 200 features. When I only had 135 examples, the training error eventually went down to 0. This is the error with the 625 examples:



Also, here are the ROC curves plotted on top of each other. When I only had 135 examples, the ROC curve was a right angle.



Here is half of the 625 training examples for 'a':


To cut to the chase, my algorithm still performs poorly, and probably even worse now. D:
