The Green Medium is an Emerald Award-winning, youth-run blog that seeks to innovate how we discuss and inform ourselves on environmental concerns.

Why robots will help you - A brief history of AI

Facebook captcha test to see if the user is a robot or not. Source: BeSpecular

When did artificial intelligence become something the media talks about? Where did it start?

We know for a fact that AI is being used all around us and that technology is advancing thanks to it. But when did artificial intelligence get its start? What is its origin?

The term AI was coined by John McCarthy and his colleagues in their 1955 proposal for a summer workshop on artificial intelligence. The workshop, held at Dartmouth College in 1956, set out to explore ways to teach machines to reason, think abstractly, solve problems, and improve themselves. Scientists then began to discuss consciousness, intelligence, and the abilities of machines. A few years earlier, in 1950, Alan Turing had proposed a way to measure a machine's intelligence: the Turing Test, also known as the Imitation Game, which gained a fair amount of attention when it was first published.

In the 1960s the field of artificial intelligence grew with the help of new programming languages. Additionally, the rise in popularity of AI was attributed to movies like "2001: A Space Odyssey", in which a spacecraft controlled by HAL (Heuristically programmed ALgorithmic computer) malfunctions. In the movie, the crew members of the ship interacted with HAL as if it were another crew member. Also during this time, intelligent industrial robots and other automatons were introduced to the workplace. This caused the loss of many factory jobs and led to people becoming wary of the inclusion of technology in everyday life.

Robots and overly eager engineers, however, do not understand human fear, so AI research pressed on in the fields of robotics and automation. But as the public's fear of machines grew and the novelty of AI wore off, government funding for AI research was cut. Although popular films like Star Wars came to the big screen featuring intelligent robots like C-3PO and R2-D2, AI was pushed into the world of sci-fi.

Furthermore, the limited uses and high error rates of AI programs led to a decline in popularity and a further reduction in government funding in the 1980s, 90s, and early 2000s. Programs used single-layer perceptrons (single-layer neural networks) for classification problems, and the results were barely satisfactory. At this point, even the most advanced computer algorithms could not tell handwritten digits apart with high accuracy. Still, robots kept trickling into everyday life with toys and gadgets like Furby (1998), the Robo-chi Pet (2000), and the Roomba (2002).
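To get a feel for why single-layer perceptrons were "barely satisfactory", here is a minimal sketch (my own illustration, not from any specific system of the era). A single-layer perceptron can only draw one straight decision boundary, so it learns a linearly separable function like AND just fine but can never get XOR-style data right, no matter how long it trains:

```python
# Minimal single-layer perceptron (illustrative sketch).
# It learns linearly separable functions like AND, but no choice of
# weights can ever separate XOR, since one layer = one straight boundary.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred          # perceptron update rule
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

and_w = train_perceptron(AND)
xor_w = train_perceptron(XOR)

# AND is learned perfectly; XOR never gets all four cases right.
and_acc = sum(predict(and_w, x1, x2) == y for (x1, x2), y in AND) / 4
xor_acc = sum(predict(xor_w, x1, x2) == y for (x1, x2), y in XOR) / 4
```

Real image classifiers of the time had many more inputs than two, but the same limitation applied: a single layer simply cannot represent problems whose classes are not linearly separable.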

From left to right: Furby, Robo-chi Pet dog, Roomba. Source: Etsy, The Old Robot Dogs, Pinterest

Many researchers, like Geoffrey Hinton at the University of Toronto, suggested using multi-layer perceptrons (deep neural networks) modeled after animal brains, and having machines learn the same way living creatures learn… by experience. However, the lack of labeled data, the limited memory and storage, and the overall state of the technology meant that computers could not obtain the needed "experience".
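A quick sketch of why the extra layers matter (again my own illustration): with just one hidden layer, a network can represent XOR exactly, which no single-layer perceptron can do. Here the hidden weights are hand-wired rather than learned, purely to show the representation exists:

```python
# Hand-wired two-layer network that computes XOR (illustrative sketch).
# Hidden unit h1 acts like OR, hidden unit h2 like NAND; the output unit
# ANDs them together: XOR(a, b) = OR(a, b) AND NAND(a, b).

def step(z):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)       # OR of the inputs
    h2 = step(-x1 - x2 + 1.5)      # NAND of the inputs
    return step(h1 + h2 - 1.5)     # AND of the two hidden units

results = [xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
# results comes out as [0, 1, 1, 0] — exactly XOR.
```

What Hinton and others wanted was for networks like this, but with thousands of units, to *learn* such internal features from data instead of having them wired by hand, and that is precisely what required the data and hardware that did not yet exist.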

That changed in the 2000s, as digital data became available en masse and hardware quickly caught up. Memory and storage reached billions of bytes, and super fast processors like GPUs (Graphics Processing Units) became widely available. In 2006, Hinton published his paper on deep learning with neural networks, and the field of AI saw a sudden renewal of interest.

Soon after deep learning re-entered the field of AI, databases like ImageNet were made widely available. Created by Fei-Fei Li (professor at Stanford University and former vice president of Google Cloud) and her team, ImageNet is a database of labeled images sorted into thousands of categories. The ImageNet team then started a competition, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), where research teams pit their deep models against each other to see which yields the highest image-classification accuracy. Human classification of items in images is around 95% accurate, meaning we have a roughly 5% error rate. For a long time, humans were better at classifying images than machines, meaning that those "Are you a robot?" tests truly did test how well you could read those captcha images.
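For context on how these percentages are scored: the headline ILSVRC number is usually top-5 error, where a prediction counts as correct if the true label appears anywhere in the model's five highest-ranked guesses. A minimal sketch of that metric (the function and data names here are mine, for illustration):

```python
# Top-5 error: fraction of images whose true label is NOT among the
# model's five highest-ranked guesses (illustrative sketch).

def top5_error(predictions, labels):
    """predictions: one ranked list of guessed labels per image.
    labels: the true label for each image."""
    misses = sum(1 for ranked, truth in zip(predictions, labels)
                 if truth not in ranked[:5])
    return misses / len(labels)

# Toy example with two images: the first model guess list contains the
# true label "dog", the second misses "plane" entirely.
preds = [["cat", "dog", "fox", "car", "cup"],
         ["ship", "car", "bus", "train", "bike"]]
truth = ["dog", "plane"]

err = top5_error(preds, truth)   # 1 miss out of 2 images -> 0.5
```

So a "5% error rate" means the correct label was missing from the top five guesses for about 1 in 20 images.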

However, in 2015, the Residual Neural Network (He et al.), more commonly referred to as ResNet, overtook human performance and classified images with a 3.57% error rate. Since then, the competitiveness of the ILSVRC has dwindled.

Graph of the error rate of the ILSVRC winner from 2010 until 2017.

To this day, many AI researchers around the world focus on the study of deep learning. AI is now used to develop self-driving cars, voice assistants, customized advertisements, and video games. We can detect faces and put filters on them to alter how we look – like Snapchat filters. Airport security has started using face recognition in e-passport lanes. Some people have even thought of using face recognition to stop underage drinkers at bars and clubs. Moreover, if you have trouble hearing or have grown too attached to subtitles, you can click the closed-caption button on YouTube and get a pretty good set of machine-generated subtitles.

Some of the ongoing research in AI involves the analysis of language and human voices. One interesting project, led by Rita Singh at Carnegie Mellon University, profiles people based on voice alone. According to her, your voice is as distinctive as your fingerprint. She aims to detect the age, gender, race, height, weight, social status, and medical conditions of a person from voice alone. Her research was used to track down a prankster who made repeated hoax distress calls to the US Coast Guard in 2014.

After the introduction of deep learning in 2006, we went from computers barely able to understand what we are saying to having voice assistants like Siri and Alexa right in our hands. Given the speed at which technology has advanced in this short period of time, it is completely normal to feel a bit afraid of it. Technology that feels like it came out of nowhere took years of research and development behind the scenes. You may feel that machines can do better than humans in every aspect, and in some cases that is true, but we have to remember that even if robots can recognize weirdly deformed letters and numbers in captcha images, they cannot create memes with them

… yet

Basic of the computer brain - neural networks

Why robots will help you rather than try to take over the world