Artificial Intelligence is Older than You Think!

Visualwebz
6 min read · Sep 18, 2023

Most of us already know that AI is an ever-changing, evolving, complex technology, but AI is not something new!

AI stands for Artificial Intelligence, which refers to machines capable of imitating human intelligence. This has been achieved by computer programs and algorithms designed to learn and make data-based decisions with little to no human interference.

(Image source: https://data-flair.training/blogs/history-of-artificial-intelligence/)

When was AI Coined?

The term Artificial Intelligence was not coined until 1955, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon used it in a proposal they wrote for “a 2 month, 10 man study of artificial intelligence.” However, the idea of AI has been around much longer, with some of the earliest concepts dating back hundreds of years. One example, given in the Forbes article “A Very Short History of Artificial Intelligence (AI),” is from 1308, when the Catalan poet and theologian Ramon Llull published Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.

While the term Artificial Intelligence had not yet been coined, the field started to take off in the 1940s, sparked by the Second World War and the desire to understand how the functioning of machines and humans could be brought together.

In 1943, Warren Sturgis McCulloch and Walter Pitts created the formal design for Turing-complete “artificial neurons,” now generally recognized as the first work of AI. In 1950, Alan Turing wrote a paper called “Computing Machinery and Intelligence,” in which he proposed a test to determine whether a machine could imitate human thought well enough to be indistinguishable from a person. The test, called the imitation game (later known as the Turing test), involved a human interrogator, a computer, and a human interviewee. Alan Turing is often thought of as the founding father of AI.

In 1951, Christopher Strachey wrote the earliest successful AI program at the University of Manchester in England. The program was known as Strachey’s Checkers, and by the summer of 1952, the program was able to complete a game of checkers at a reasonable speed. Also, in 1952, Arthur Samuel wrote and ran the first AI program in the United States, a checkers program. Samuel took over the essentials of Strachey’s checkers program and extended it over the years.

Arthur Samuel added features that enabled the program to learn from experience. These included mechanisms for rote learning and generalization. In 1962, due to these enhancements, Samuel’s checkers program won a game against a former Connecticut checkers champion.

A computer program called “Logic Theorist” was written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. Often considered the first artificial intelligence program, it was the first program deliberately engineered to perform automated reasoning. It would eventually prove 38 of the first 52 theorems in chapter 2 of Alfred North Whitehead and Bertrand Russell’s Principia Mathematica, a three-volume work on the foundations of mathematics. Newell and Simon went on to found one of the first AI laboratories at the Carnegie Institute of Technology and to develop several influential artificial intelligence programs and ideas, such as the General Problem Solver (GPS), Soar, and their unified theory of cognition.

Problems in AI

Like any other program, AI can run into problems when carrying out tasks. Issues tend to arise when it is asked to complete specific tasks that fall outside what it was built for, and as of yet, AI cannot cope well with confusing prompts. One example is the difficulty of using common sense to answer questions. A study looked at a version of AI called GPT-3, software that can assess and respond to questions.

The model was asked, “Which is heavier, a toaster or a pencil?” and responded, “A pencil is heavier than a toaster.” The answer is interesting because if a human were asked this question, the obvious answer would be a toaster. But the program has no picture of what a pencil weighs; for all it knows, the pencil could weigh 100 pounds. Without that kind of sense for what a question actually means, problems like this will continue to trip up AI.
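For readers who want to try this kind of experiment themselves, here is a minimal sketch of how one might pose the same question to a language model through the OpenAI Python client. This assumes the openai package (version 1.x) is installed and an OPENAI_API_KEY environment variable is set; the model name is our illustrative choice, not the model from the study described above.

```python
# A minimal sketch: posing the toaster-vs-pencil question to a language model.
# Assumes the `openai` package (v1.x) is installed and OPENAI_API_KEY is set.
# The model name is an illustrative assumption, not the one used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Which is heavier, a toaster or a pencil?"}
    ],
)
print(response.choices[0].message.content)
```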

Another challenge for AI is the need to constantly learn and adapt. When an AI runs to fulfill a task, it uses its current data pool to solve the problem, so it can only output a correct answer if that answer can be reached from the data it already holds. This is why AI needs to be “trained,” which means expanding the data it holds through updates, and it is why an offline AI cannot report on real-time events.
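A toy sketch can make this concrete. The miniature “model” below is our own illustration, not how any real system is built: it can only answer questions that appear in its data pool, and “training” it simply means adding to that pool.

```python
# A toy illustration of an AI limited to its training data pool.
# This is our own minimal sketch, not how production systems work.

# The "training data": question/answer pairs the model has already seen.
data_pool = {
    "capital of france": "Paris",
    "largest planet": "Jupiter",
}

def answer(question: str) -> str:
    """Answer only if the question can be reached from the data pool."""
    key = question.lower().strip("?")
    return data_pool.get(key, "I don't know -- that is not in my data.")

print(answer("Capital of France?"))            # Paris
print(answer("Who won the game last night?"))  # outside the pool

# "Training" the model means expanding its data pool in an update:
data_pool["who won the game last night"] = "Team A"
print(answer("Who won the game last night?"))  # now reachable
```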

AI is written just like any other software, and it all starts with the language it is written in. AI is typically written in Python or Java, each with its own pros and cons, though Python is the most commonly used, especially for larger-scale AI programs. The logic behind AI is expressed as algorithms that computers can execute, and the code is modified and adapted as a project progresses, making its overall capability broader and more effective. One dividing line between implementations is the choice between interpreted and compiled languages: interpreted languages are seen as a way to develop AI quickly, while compiled languages are the first choice when performance is essential. With AI getting more of the spotlight, the code behind it is becoming more accessible, and the knowledge of how to build this kind of software is spreading.
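To give a flavor of what “the coding behind AI” looks like in Python, here is a minimal sketch of one of the oldest learning algorithms, a perceptron trained on the logical AND function. This is our own illustrative example, not code from any program mentioned in this article.

```python
# A minimal perceptron that learns the logical AND function.
# An illustrative sketch of a classic learning algorithm, not code
# from any system mentioned in the article.

# Training data: inputs and the desired output (1 only when both are 1).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

# Repeatedly nudge the weights toward the correct answers.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

for x, target in examples:
    print(x, "->", predict(x), "(expected", target, ")")
```

The point of the sketch is the loop at the bottom: the program improves by adjusting its numbers in response to its mistakes, which is the basic idea behind “learning from experience” that Arthur Samuel built into his checkers program decades ago.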

Progress of AI

The differences are massive when today’s AI is compared to AI as it was first developed; the gap in efficiency between the two is night and day. A notable example is GPS navigation. When it started, the AI behind GPS chose any directions that took you from where you were to where you wanted to go. Now, AI can take your location and any criteria you want to meet as input and produce several different sets of directions, each offering something different. The ability to handle more variables shows how much stronger AI’s overall capability has become compared to the past.

The coding behind it starts by considering your location, destination, distance, any stops, and, lastly, any restrictions. Take, for example, a trip from Tacoma, Washington, to Seattle, Washington, with a stop at a gas station and no highways as a restriction. The AI uses all of these inputs to create an efficient way to get from place to place, whereas the earlier software would simply have routed you through any place with a gas station, regardless of efficiency.
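As a rough sketch of how such routing logic might work, here is a toy route planner in Python. The road graph, the distances, and the “avoid highways” rule are all illustrative assumptions, not data or code from any real navigation system; the algorithm is the classic Dijkstra shortest-path search, with restricted roads pruned away.

```python
# A toy route planner: shortest path with a restriction filter.
# The road graph, distances, and "no highways" rule are illustrative
# assumptions, not data from any real navigation system.
import heapq

# Graph: node -> list of (neighbor, miles, is_highway)
roads = {
    "Tacoma":      [("I-5 North", 25, True), ("Pacific Ave", 8, False)],
    "I-5 North":   [("Seattle", 10, True)],
    "Pacific Ave": [("Gas Station", 4, False)],
    "Gas Station": [("Rainier Ave", 20, False)],
    "Rainier Ave": [("Seattle", 6, False)],
    "Seattle":     [],
}

def shortest_route(start, goal, avoid_highways=False):
    """Dijkstra's shortest-path search, skipping highways if restricted."""
    queue = [(0, start, [start])]  # (total miles, node, path so far)
    visited = set()
    while queue:
        miles, node, path = heapq.heappop(queue)
        if node == goal:
            return miles, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist, is_highway in roads[node]:
            if avoid_highways and is_highway:
                continue  # the restriction prunes this road
            heapq.heappush(queue, (miles + dist, neighbor, path + [neighbor]))
    return None  # no route satisfies the restrictions

print(shortest_route("Tacoma", "Seattle"))                       # fastest route
print(shortest_route("Tacoma", "Seattle", avoid_highways=True))  # via surface streets
```

With the highway allowed, the planner picks the 35-mile I-5 route; with the restriction on, it falls back to the 38-mile surface-street route past the gas station, which is exactly the kind of trade-off modern navigation software weighs for you.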

In recent years, AI has seen several developments that are making waves in their respective fields, with current technologies pushing the bar on what AI can achieve. An example is the popular chatbot ChatGPT, which can understand information at a level unparalleled by other AI software, allowing it to output results that earlier software was incapable of producing. It can create reports, educate on almost any topic, or even converse with people.

AI is like an empty book: whatever data it takes in, it will be able to use later. This does have its drawbacks. The output it produces is not always accurate, which can lead to problems down the line. Another task AI still cannot perform is deductive reasoning, or “common sense,” which is one of the reasons it cannot replicate certain tasks. It is also somewhat of a black box: researchers still cannot fully explain why a trained model produces the answers it does.

Takeaway on AI

Misconceptions around this topic are one of the reasons an overview like this is necessary. After researching the topic, we believe that AI will be a fundamental part of technology in the future.


Visualwebz

A Seattle web design and online marketing agency that delivers high-end websites, with a passion for web development and SEO.