Last week I attended an interesting lecture on computational neuroscience at work, which covered the models we use to describe neurons mathematically. I have been writing a few articles based on ideas from it over the last few days; this one, however, is about the state of computers recognising hand-drawn symbols.
Machine learning is a huge area of research at the moment, and it comes in three forms. Unsupervised learning is the most ambitious: a program is left to explore data for patterns in its own way. The computer will look for some sort of structure (even if it isn't the sort of structure we expected) and will get better and better as it learns. This explanation is quite vague because unsupervised learning can be applied to so many different problems.
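To make the idea of finding structure without being told what to look for a little more concrete, here is a minimal sketch of one unsupervised technique, k-means clustering (my choice of example; the post doesn't name a specific algorithm). The algorithm is never given labels: it simply groups nearby numbers together.

```python
# A minimal sketch of unsupervised learning: 1-D k-means clustering.
# The algorithm is never told what the groups are -- it finds them itself.

def kmeans_1d(points, k=2, iterations=10):
    # Start with the first k points as provisional cluster centres.
    centres = points[:k]
    for _ in range(iterations):
        # Assign each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

# Two obvious groups, around 1 and around 10:
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans_1d(data, k=2))  # centres converge near 1.0 and 10.0
```

Nobody told the program "there are small numbers and big numbers"; it discovered that split on its own, which is the essence of unsupervised learning.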
A second form uses trial and error to improve a program. The artificial intelligence has some criterion of success and failure: success is rewarded, making it repeat that behaviour more, while failure is punished, prompting it to use that behaviour less. For an example of this type of learning, you can see a computer I made out of M&Ms which learns to beat me at a simple game called Hexapawn in this video:
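The reward-and-punish idea can be sketched in a few lines of code. This is my own simplification of a matchbox-style learner, not the actual Hexapawn machine from the video, and the game (always beating an opponent who plays rock) is made up for illustration: each move starts with an equal number of sweets in a bag, winning moves gain a sweet, and losing moves lose one, so good moves get drawn more often over time.

```python
import random

# A toy sweets-in-a-bag learner (my simplification, not the M&M Hexapawn
# machine itself). Winning moves are rewarded with an extra sweet; losing
# moves are punished by removing one.

random.seed(0)

moves = ["rock", "paper", "scissors"]
bag = {m: 10 for m in moves}          # start with no preference

def beats(a, b):
    return (a, b) in [("rock", "scissors"),
                      ("paper", "rock"),
                      ("scissors", "paper")]

def pick():
    # Draw a move with probability proportional to its sweet count.
    choices = [m for m in moves for _ in range(bag[m])]
    return random.choice(choices)

# Train against an opponent who always plays rock.
for _ in range(500):
    move = pick()
    if beats(move, "rock"):
        bag[move] += 1                # reward: one more sweet
    elif bag[move] > 1:
        bag[move] -= 1                # punish: one fewer sweet

print(max(bag, key=bag.get))          # the learner now strongly favours "paper"
```

Because "paper" is the only move that is ever rewarded here, its sweet count grows while the others shrink, and the learner ends up playing it almost every time.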
However, the third type of machine learning sits somewhere between the other two and is called supervised learning. One major application is getting computers to recognise human handwriting. You, as a human, are quite good at telling the difference between the letter a and the letter d. However, the amount of variation in both means that one person's a can look very similar to another person's d, though you can usually tell the difference from context. This is much harder for a computer, and hand-coding rules (that lines have to be at this angle, this thickness, this length and so on) is prone to failure on edge cases.
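Supervised learning sidesteps hand-coded rules by learning from labelled examples instead. Here is a minimal sketch using a 1-nearest-neighbour classifier; the two features (ascender height and loop width) and all the numbers are made up for illustration, as real handwriting systems extract far richer features from the pen strokes.

```python
# A minimal sketch of supervised learning: 1-nearest-neighbour classification.
# Labelled examples of 'a' and 'd', described by two invented features.

training_data = [
    # (ascender_height, loop_width, label)
    (0.20, 0.90, "a"),
    (0.30, 1.00, "a"),
    (0.25, 0.80, "a"),
    (0.90, 0.90, "d"),   # 'd' has a tall ascender
    (1.00, 1.00, "d"),
    (0.95, 0.85, "d"),
]

def classify(height, width):
    # Label a new letter with the label of the closest training example.
    def distance(example):
        h, w, _ = example
        return (h - height) ** 2 + (w - width) ** 2
    return min(training_data, key=distance)[2]

print(classify(0.28, 0.95))   # short ascender -> "a"
print(classify(0.97, 0.90))   # tall ascender  -> "d"
```

The program is never told "a tall ascender means d"; it infers that from the labelled examples, and adding more examples (including odd edge cases) makes it better, which is exactly why the systems below improve with human help.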
So the software used by companies such as the post office to read addresses on envelopes is designed to learn as it goes along. Some basic rules are programmed in, but from there it improves with help from humans.
In a similar project, the entire back catalogue of the New York Times has been digitised. This was easy for most words, but there were always some words that the computer couldn't read. That is where reCAPTCHA comes in. You know those pop-up windows where you have to prove that you are human?
Images of words that the computer couldn't read are used. Once a few humans have given the same answer, the computer learns the word. Usually in a reCAPTCHA you are given two words: the computer knows one of them (which is used to test you), and the other is one it is genuinely curious about. In 2011 reCAPTCHA finished transcribing the New York Times archive and has since moved on to Google Books.
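The consensus step described above (only accepting a word once several humans independently agree) can be sketched like this. This is my simplification of the logic, not Google's actual implementation, and the agreement threshold of three is an assumption for illustration.

```python
from collections import Counter

# A sketch of the reCAPTCHA consensus idea (simplified, not the real system):
# an unknown word is accepted only once enough humans give the same answer.

def accept_transcription(answers, required_agreement=3):
    """Return the agreed word once enough humans give the same answer,
    or None if there is no consensus yet."""
    word, count = Counter(answers).most_common(1)[0]
    return word if count >= required_agreement else None

print(accept_transcription(["morning", "morning", "mourning"]))
# -> None (only two people agree so far)
print(accept_transcription(["morning", "morning", "mourning", "morning"]))
# -> "morning" (three people agree)
```

Requiring several independent matches is what lets the system trust anonymous strangers: one person's typo is outvoted by everyone else's answer.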