A Very Brief History of Artificial Intelligence


For as long as we can remember, science fiction has been captivated by Artificial Intelligence (AI). Many a novel, artwork, and apocalyptic Hollywood blockbuster has imagined sentient machines, and science is now closer than ever to creating them. Sure, we’re not quite at the point where we have a JARVIS (or even a Rosie, for that matter) around to indulge our every whim, but that doesn’t mean AI isn’t already impacting our lives every day. Google’s search predictions, Siri’s voice recognition, and the suggestions companies like Amazon and Netflix make based on your past preferences are all examples of AI. Even the autocorrect on my word processor as I type this blog is an example of AI’s influence on everyday life. This is all made possible by algorithms designed to let the AI take in information and then respond in real time.

Although AI is in no way a new idea (its earliest roots go as far back as ancient Greece), it was the technological revolution of the 20th century that really allowed it to transition from the stuff of science fiction to a real possibility. A major breakthrough came in 1950, when Alan Turing, an English computer scientist and mathematician, proposed the idea of creating machines that think. He also devised the Turing test, which is still regarded as a benchmark for evaluating a machine’s ability to ‘think’.

Marvin Minsky, an American cognitive scientist, was the next leading mind in the field of AI, co-founding the Massachusetts Institute of Technology’s AI laboratory in 1959 and remaining a prominent voice in the field for the next two decades or so. Minsky was also among those who coined the term ‘artificial intelligence’, as a co-author of the 1955 proposal for a “2 month, 10 man study of artificial intelligence.” In 1968, he advised Stanley Kubrick on “2001: A Space Odyssey,” which delivered HAL 9000, still one of the most iconic portrayals of AI.

In the 1980s, the advent of personal computers increased mainstream interest in AI, but its true potential went unrecognized for a few decades more. Today, AI is a focal point for high-profile figures like Elon Musk and Stephen Hawking, and for researchers working towards AI technology that could drastically improve human lives and, indeed, the course of human history.

AI technology is based on the ability of a machine to learn from new data as it flows in. Therefore, the more data it collects, the more refined its algorithms can get, and the better the machine operates. By the early 21st century, AI was having an unprecedented impact on the modern office: with new tools to manage workflow, predict trends, and even make advertising decisions, it now influences business decisions made every day. So much so that a program called Vital was actually made a board member at a venture capital firm for its ability to identify trends and inform investment decisions.
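
To make that “learning as data flows in” idea concrete, here is a minimal sketch in Python of online learning: a toy perceptron that nudges its weights each time a new labelled example arrives. The data and names are illustrative assumptions for this post, not drawn from any of the products mentioned above.

```python
# A toy "online learning" loop: the model refines itself one example at a
# time, which is why more incoming data generally means better predictions.

def update(weights, bias, features, label, lr=0.1):
    """Nudge the perceptron's weights whenever its prediction is wrong."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation > 0 else 0
    error = label - prediction  # 0 when correct, +1 or -1 when wrong
    weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights, bias + lr * error

# A simulated stream of (features, label) pairs arriving one at a time.
stream = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.3], 1), ([0.2, 0.8], 0)]

weights, bias = [0.0, 0.0], 0.0
for features, label in stream:
    weights, bias = update(weights, bias, features, label)

print(weights, bias)  # the model has refined itself from the stream
```

Production systems apply the same principle in far more sophisticated forms, such as stochastic gradient descent over streaming data, but the core loop is the same: each new example refines the model a little.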

Big data can be hugely beneficial to companies, but organizing it is no easy feat, and that difficulty is probably the main factor driving these advancements in AI. Because AI can organize and analyze large amounts of data faster than any human ever could, it boosts efficiency and significantly reduces the margin of error. It can detect anomalies like payment fraud, filter out spam mail, scan the Web for potential buyers whose patterns resemble those of existing customers, and even be trained to handle customer support calls.
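
As a rough illustration of the anomaly-detection idea, here is a minimal Python sketch that flags payments sitting far outside the pattern of amounts seen so far. The data and the threshold are invented for the example; real fraud-detection systems use far richer models.

```python
# A minimal sketch of statistical anomaly detection: flag any payment whose
# amount is unusually far (in standard deviations) from the typical amount.
import statistics

def flag_anomalies(amounts, z_threshold=2.0):
    """Return amounts more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > z_threshold]

payments = [12.50, 9.99, 15.00, 11.25, 14.10, 13.75, 980.00]  # made-up data
print(flag_anomalies(payments))  # -> [980.0], the likely fraud candidate
```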

The potential for AI development is endless. As ubiquitous as it already is in the office, it will soon be an integral part of other aspects of our lives too, from self-driving cars and space exploration to the thermostat on your wall. It can improve the detection of cyber fraud, help healthcare providers predict and prevent illnesses, and likely much more besides.

But even with all this promise, refining machine learning is a slow and tedious process. Humans are still an integral part of the puzzle, as machines lack the intuition to make decisions they haven’t been programmed to make. So while machines are cleverly changing the way we live our lives every day, the doomsday scenarios and tinfoil hats can be put away, at least for the time being.
