A Historical Insight into OpenAI

Visualwebz
9 min read · May 27, 2023


Artificial intelligence (AI) is a technology that combines numerous sciences, theories, and techniques to imitate human cognitive abilities in a machine. As automation becomes more and more prevalent in modern technology, it is only a matter of time before AI becomes the next step in innovation. Today, OpenAI is a company that continues to push the boundaries of what artificial intelligence is capable of.

https://www.techtarget.com/searchenterpriseai/definition/OpenAI

History of Artificial Intelligence

Before we discuss OpenAI, we should delve deeper into what AI is, or, in this case, was. Artificial intelligence wasn’t completely unheard of in the first half of the 20th century. The world was already familiar with the idea through science fiction, whose worlds were full of advanced robots and tech. By the 1950s, there was already a growing “generation of [curious] scientists, mathematicians, and philosophers with the concept of artificial intelligence culturally assimilated in their minds”. One such person was a young British polymath named Alan Turing, who “suggested that humans use available information as well as reason to solve problems and make decisions, so why can’t machines do the same thing?” This was the thought process that inspired him to write the 1950 paper “Computing Machinery and Intelligence,” in which he discussed building intelligent machines and introduced what he called the Imitation Game.

Despite all the questions and theories Turing and many others had about AI, none of it was possible right then and there. Computers at the time simply weren’t advanced enough, and fundamental changes were necessary before anyone could implement or test these ideas. Computing was also costly, reaching upwards of $200,000 a month to lease a machine. Even with access to a computer, results were far from guaranteed, as the entire endeavor was uncharted waters. This led to an unfortunate dilemma: producing a proof of concept was needed to get any funding, but developing that proof of concept was impossible without funding in the first place.

Bateman, C. (2016, November 12). First functional electronic computer in Canada [Photo]. Spacing Toronto. http://spacing.ca/toronto/2016/11/12/first-computer-canada/

It took five years, but the semblance of a proof of concept began to take shape in a program named the “Logic Theorist”. This program was “designed to mimic the problem-solving skills of a human and was funded by the Research and Development (RAND) Corporation”. As a significant step forward for the field, the program was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy in 1956. McCarthy, whom many consider one of the fathers of AI, imagined a great collaborative turn in the scientific community’s thinking about AI. To his dismay, however, the conference fell short of his expectations, as there wasn’t enough discussion about setting definitive standards for the field. Despite that, the conference held far more significance than McCarthy initially thought: it was the starting point for a shared belief that artificial intelligence is achievable, a belief that is still upheld today.

The Turing Test

The Imitation Game, also known as the Turing Test, was Turing’s way of redefining what the word intelligence meant. He introduced a practical test for computer intelligence involving three participants: a computer, a human interrogator, and a human foil. With communication restricted to a keyboard and a display screen, the interrogator is tasked with discovering which of the other two participants is the computer, using questions “as penetrating and wide-ranging as he or she likes….” The computer, meanwhile, is permitted to do anything it can to elicit an incorrect identification. “For instance, the computer might answer, ‘No,’ in response to, ‘Are you a computer?’ and might follow a request to multiply one large number by another with a long pause and an incorrect answer”. The foil’s job is to help the interrogator make the correct identification.

If the test is performed enough times and a sufficient proportion of the interrogators are unable to tell which participant is the computer, the computer is considered an intelligent, thinking entity. Even with thousands of dollars at stake and the prospect of creating the first true artificial intelligence, no one has ever built a computer that could pass the Turing Test since its conception. With the sudden advent of ChatGPT, there is once again a conversation about whether the missing puzzle pieces already exist or are being developed as we speak.
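To make the setup concrete, here is a minimal Python sketch of the three-role structure described above. The `interrogate`, `computer`, and `human_foil` callables and the pass threshold are hypothetical simplifications for illustration, not anything taken from Turing’s paper.

```python
import random

def run_turing_test(interrogate, computer, human_foil, rounds=30, threshold=0.3):
    """Toy sketch of the imitation game: the interrogator questions each hidden
    respondent and labels it "computer" or "human". The machine "passes" if it
    is misidentified often enough (the threshold here is arbitrary)."""
    fooled = 0
    for _ in range(rounds):
        for respondent in (computer, human_foil):
            verdict = interrogate(respondent)
            if respondent is computer and verdict == "human":
                fooled += 1  # interrogator mistook the machine for a person
    return fooled / rounds >= threshold

# Hypothetical participants: a trivially detectable "computer" that never passes.
computer = lambda question: "0xDEADBEEF"
human_foil = lambda question: "I'm the human, honestly."
interrogator = lambda respond: "computer" if "0x" in respond("Are you a computer?") else "human"

print(run_turing_test(interrogator, computer, human_foil))  # -> False
```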

The History of OpenAI

Since the 1950s, technology has done nothing but take enormous strides in development and innovation, and artificial intelligence has been no exception. In late 2015, Elon Musk and Sam Altman, with the support of several prominent Silicon Valley figures, pledged $1 billion to start a non-profit company named OpenAI. From the start, the group was dedicated to “[advancing] digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return”.

Musk has called AI “one of the biggest risks to the future of civilization”. The founders weren’t alone in that sentiment, as Stephen Hawking had issued similar warnings a year before OpenAI’s founding. OpenAI was their answer to that risk: by researching how AI works, how we can use it, and how to control it, we can use it to help us thrive as a society rather than have it be the thing that makes our civilization crumble.

After its founding, OpenAI first focused on what artificial intelligence could do in video games, releasing its first open-source reinforcement learning toolkits, OpenAI Gym and Universe. In the two years that followed, OpenAI pivoted its research toward what it is known for today: the development of artificial general intelligence (AGI). In 2018, the company released the paper “Improving Language Understanding by Generative Pre-Training,” introducing the very first concept of a GPT.

That same paper accompanied the first version of a Generative Pre-trained Transformer, GPT-1, released in 2018. GPT-1 was their first language model, trained on BookCorpus, a free dataset containing thousands of books and stories, and it had roughly 117 million parameters. In time, this model evolved into its second version, GPT-2. GPT-2 was a more powerful model, trained on 8 million different web pages, and contained over ten times the number of parameters, coming in at a staggering 1.5 billion. GPT-2 was the version that made convincing text prediction and generation possible.
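GPT-2’s weights were eventually released publicly, so its text prediction can be tried directly. One minimal way to sample from them is through the Hugging Face `transformers` library (this is third-party tooling around the released checkpoint, not OpenAI’s original training code, and the default `gpt2` checkpoint is the small 124-million-parameter variant rather than the full 1.5-billion-parameter model):

```python
# Sample a continuation from the publicly released GPT-2 weights.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small 124M-parameter checkpoint
result = generator("Artificial intelligence is", max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```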

Despite this being their most significant achievement yet, they veered from their original philosophy and initially did not release the full GPT-2 model to the public, citing concerns about misuse. Elon Musk, who had left the board in 2018, expressed concern that OpenAI was not focusing enough on the actual risks AI could bring but was instead prioritizing commercial applications of the technology, contradicting the reason for its existence. In a controversial decision, the company then transitioned to a “capped-profit” organization in 2019. Be that as it may, this decision set the stage for what has become their most significant achievement to date.

What has OpenAI Achieved?

OpenAI has achieved numerous groundbreaking feats in artificial intelligence with its various AI-powered programs and machine learning algorithms.

OpenAI can generate convincing synthetic images with its tool DALL-E. DALL-E can handle a variety of creative tasks, interpreting written prompts provided by the user and generating original images from them. This technology can potentially revolutionize industries such as advertising and entertainment by providing a cost-effective and efficient way to create high-quality content. However, there are concerns about the ethical implications of using AI-generated content without proper attribution or consent, and of using AI images in art.
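For readers curious what prompt-to-image generation looks like in practice, here is a rough sketch using the image endpoint of the 2023-era `openai` Python package (pre-1.0 interface). The prompt and size are arbitrary examples, and the call assumes an API key with image-generation access:

```python
# Minimal prompt-to-image request against OpenAI's image endpoint
# (openai Python package, pre-1.0 style). Requires OPENAI_API_KEY to be set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a watercolor painting of a robot reading a book",  # example prompt
    n=1,
    size="512x512",
)
print(response["data"][0]["url"])  # URL of the generated image
```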

AI can also revolutionize how people enjoy or create music. OpenAI has developed a model named MuseNet that allows musicians and non-musicians alike to generate complex, original pieces of music in various styles, ranging from classical to pop, jazz, and even video game soundtracks. The model uses deep learning algorithms to analyze and learn from existing music, allowing it to create new compositions that sound like they were made by human composers. It can generate pieces in the styles of artists like Mozart, Elvis, and Katy Perry.

Through its work, OpenAI has created algorithms and programs that outperform humans crowned the best at complex games like Dota 2. OpenAI Five learns via self-play, playing the equivalent of 180 years’ worth of games against itself every day. OpenAI Five has changed how professional esports players train and learn: it has beaten some of the world’s best Dota 2 players, and new strategies and techniques have been developed by studying the bot’s gameplay. Additionally, OpenAI Five has sparked discussions about the future of AI in esports and its potential impact on the industry.
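The core self-play idea, an agent improving by playing against past copies of itself, can be illustrated with a toy example. The sketch below uses rock-paper-scissors and a simple probability-nudging update; it is an illustration of the concept only, not OpenAI Five’s actual large-scale PPO training system.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}  # key beats value

def self_play(iterations=100, games_per_iter=500, lr=0.1, explore=0.1):
    """Toy self-play loop: the current policy plays random past copies of itself
    and nudges its move probabilities toward whatever beat those copies.
    (Illustration only; OpenAI Five actually used large-scale PPO.)"""
    policy = [1.0, 0.0, 0.0]            # start by always playing rock
    pool = [list(policy)]                # pool of past selves to train against
    for _ in range(iterations):
        wins = [1e-6, 1e-6, 1e-6]        # tiny prior so no move is frozen at zero
        for _ in range(games_per_iter):
            opponent = random.choice(pool)
            # Occasional random exploration lets untried moves be discovered.
            if random.random() < explore:
                move = random.randrange(3)
            else:
                move = random.choices(range(3), weights=policy)[0]
            opp_move = random.choices(MOVES, weights=opponent)[0]
            if BEATS[MOVES[move]] == opp_move:
                wins[move] += 1
        # Shift probability mass toward the moves that won most often.
        total = sum(wins)
        policy = [(1 - lr) * p + lr * (w / total) for p, w in zip(policy, wins)]
        pool.append(list(policy))
    return dict(zip(MOVES, (round(p, 3) for p in policy)))

print(self_play())  # the strategy drifts away from pure rock as the pool grows
```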

What They Are Researching Now

OpenAI plans to continue its gaming voyage by creating an AI that can play Minecraft, training a neural network with a technique called Video PreTraining (VPT). The project aims to create an AI that can learn to play Minecraft with minimal human intervention, and OpenAI hopes this will lead to more advanced AI systems capable of solving complex problems in various fields. The AI observes tens of thousands of hours of Minecraft gameplay and uses that footage to learn multiple tasks, like crafting a diamond pickaxe, which takes most human players nearly 20 minutes to achieve. Video PreTraining is a promising approach for AI to learn from human behavior and improve its performance across a wide range of tasks; it could change how we train AI models and accelerate their development.

OpenAI’s Minecraft agent (VPT)
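As a very rough illustration of learning to act from video, the sketch below does behavioral cloning on stand-in data: random tensors play the role of encoded frames, and the action labels stand in for what VPT actually obtains from a separately trained inverse dynamics model. This is an assumption-laden toy, not OpenAI’s VPT code.

```python
# Toy behavioral-cloning sketch in the spirit of Video PreTraining (VPT).
# Random tensors stand in for encoded video frames; in the real system the
# action labels come from a separately trained inverse dynamics model.
# Requires: pip install torch
import torch
from torch import nn

frames = torch.randn(1024, 64)           # stand-in for encoded gameplay frames
actions = torch.randint(0, 8, (1024,))   # stand-in for inferred keyboard/mouse actions

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    logits = policy(frames)              # predict the human's action for each frame
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```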

OpenAI also plans to add ways to filter images generated through DALL-E, using pre-training mitigations to set up guardrails that prevent generated pictures “that are not G-rated, or that could cause harm.” This will enhance the safety of DALL-E-generated images, making them more suitable for a broader range of applications and audiences, including those that require strict adherence to ethical and moral standards. It will also help reduce the risk of inappropriate or harmful content being generated through the platform and make it harder to use the tool for destructive or dangerous purposes.

The Future of AI

With OpenAI spearheading the development of AI technology, it is interesting to see where it will lead the field next. The recent mass interest in its work on GPTs has created a worldwide trend of companies trying to match or surpass the GPT-3 model. Despite that, OpenAI seems intent on continuing to release more and more advanced research, and its next step is already being echoed throughout the tech world. ChatGPT has taken the world by storm with its showcase of artificial conversational abilities, yet it is based only on an updated version of GPT-3. Its successor, GPT-4, confirmed by Sam Altman himself, was released in March 2023.

Comparing the versions before GPT-4, there is a clear pattern of each one being more advanced than the last. So, if GPT-3 was already so advanced, how much more advanced is GPT-4? The version number has increased every few years, and with it, the parameter count has grown roughly tenfold or more each time, with GPT-3 reaching 175 billion parameters. Rumors that GPT-4 would go as far as 100 trillion parameters floated around the conversation, but Altman dismissed them as “complete bullshit”. Regardless, it is safe to assume that GPT-4 takes us one step further into an artificial intelligence-filled future.

One of the most impressive features of ChatGPT has been its ability to generate not just human languages but also computer languages, including Java, C++, and Python. GPT-4 pushes that ceiling even higher. If it lives up to expectations, it could lead to more powerful AI programming tools that, in turn, accelerate further development.
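Code generation with these models is usually accessed through OpenAI’s API rather than the ChatGPT web interface. Below is a minimal sketch using the chat-completions interface of the 2023-era `openai` Python package (pre-1.0 style); the model name and prompt are just examples:

```python
# Ask a GPT model to write code via the chat-completions endpoint
# (openai Python package, pre-1.0 style). Requires OPENAI_API_KEY to be set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "user",
         "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
)
print(response["choices"][0]["message"]["content"])
```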

OpenAI and Microsoft

At the beginning of 2023, OpenAI and Microsoft announced an extension of their partnership to accelerate AI breakthroughs worldwide. With the resources Microsoft can make available to OpenAI, their opportunities reach much farther than they would alone, and scaling up specialized supercomputing for this work will, in its own right, enable further AI development.

Microsoft has also been working on ways to reinvent internet search. Building on the technology behind ChatGPT, it launched an all-new AI-powered Bing search engine and Edge browser. This opens a new door to information gathering, akin to the machines and robots we’ve seen in science fiction.

Takeaway

Artificial intelligence was a topic found only in science fiction for the better part of the past century, but as technology advances, it is no longer just a concept. Breakthroughs have been happening year after year, allowing us to attain what we once thought impossible. To the ordinary person, OpenAI may seem to have come out of nowhere. Still, with the company leading one of the most innovative technologies out there, it is only a matter of time before artificial intelligence affects the entire world.



Written by Visualwebz

A Seattle web design and online marketing agency that delivers high-end websites. A passion for web development and SEO.