ChatGPT in Numbers: 20 Impressive Statistics and Insights

ChatGPT Statistics and Facts

Did you know that before ChatGPT, there was an InstructGPT?

Even though ChatGPT may seem like an overnight success, crossing 100 million users in January 2023 just two months after its launch on November 30, 2022, by San Francisco–based OpenAI Inc., it is built on decades of hard work and research. In fact, just 5 days after launch, ChatGPT crossed 1 million users, a record of its own.

ChatGPT is a large language model (LLM) designed for conversation. It is capable of natural language processing and can handle rhetorical questions and sarcasm. As a generative AI, it can also learn from feedback on its responses. The reasons behind ChatGPT's success are many. Firstly, it is incredibly easy to use: anyone can type in a question or request, and ChatGPT will respond naturally and informatively, almost as if you were talking to a human. Secondly, it is incredibly versatile, handling tasks from customer service and education to entertainment and creative content writing, amongst many other things. And to top it all off, it is constantly improving.

ChatGPT is built on top of OpenAI's GPT-3.5 and GPT-4 foundational large language models (LLMs), which have been fine-tuned using supervised and reinforcement learning techniques. But before ChatGPT, there was InstructGPT, created by OpenAI in response to the growing concern that large language models like GPT-3 could be used to generate harmful or misleading content, since they were not trained to follow user instructions. In short, InstructGPT aligned language models to follow instructions rather than simply continue text by predicting the next word.

OpenAI says, “ChatGPT is a sibling model to InstructGPT.”

How ChatGPT Came To Be

As mentioned before, ChatGPT seems like an overnight hit, but years of research have gone into its development. Innovators and engineers have been working on generative AI for decades. Though generative AI has made its most visible progress over the past few years, researchers began exploring its potential as early as the 1980s and 1990s. One of the earliest breakthroughs came with the development of recurrent neural networks (RNNs), a type of neural network that can process sequential data such as text and speech. Backpropagation, the pivotal technique used to train such networks, was popularized in a 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. By the 1990s, scientists had developed RNNs capable of generating text, and researchers such as Yoshua Bengio and Yann LeCun advanced neural approaches to language and sequence modeling. These results demonstrated that machines could be taught to generate text resembling what humans produce, and they spurred further research and hard work in the field of generative AI.

Success Story of OpenAI, the Maker of ChatGPT
OpenAI offers services for AI development and implementation. It is the creator of the famous ChatGPT. This is the story of how it all started.

The next leap came in 2017, when researchers at Google introduced the Transformer, a neural network architecture that dispenses with recurrence in favor of a self-attention mechanism and is particularly well suited for natural language processing tasks. It was first described in the paper "Attention Is All You Need" by Ashish Vaswani et al. (2017). Earlier, at Google Brain, Ilya Sutskever, Oriol Vinyals, and Quoc V. Le had developed the sequence-to-sequence learning approach on which much of this later work built.

Fun Fact: In 2015, Ilya Sutskever left Google to co-found OpenAI, where he served as research director and became the chief scientist in 2018.

The Transformer architecture is now used widely for natural language processing tasks and has achieved state-of-the-art results on multiple tasks, including machine translation, text summarization, and question answering.

Recent years have seen the development of increasingly sophisticated models, such as OpenAI's GPT-2 (2019) and GPT-3 (2020), capable of generating text nearly indistinguishable from that written by humans. OpenAI trained GPT-2, a model with 1.5 billion parameters, on a massive dataset of text and code approximately 40GB in size. GPT-3 was bigger still: 175 billion parameters trained on roughly 570GB of text. So, this is what powers ChatGPT: decades of hard work and research by a host of scientists and engineers passionate about artificial intelligence.

GPT-4 vs GPT-3.5: 5 Key Differences Explained
OpenAI claims its new, upgraded GPT-4 is much more capable than GPT-3.5. Let's look at the five key differences between GPT-4 and its predecessor in this article.

Some Interesting ChatGPT Statistics and Facts Are As Follows:

  1. GPT-3 (GPT is short for Generative Pre-trained Transformer), the predecessor of the models powering ChatGPT, was not trained to follow instructions from users.
  2. Before ChatGPT, OpenAI developed InstructGPT.
  3. ChatGPT was trained on a dataset of approximately 570GB.
  4. ChatGPT was launched on November 30, 2022, by San Francisco–based OpenAI, the creator of DALL·E 2 and Whisper AI.
  5. The platform was initially free to the public, and the company had plans to monetize the service later.
  6. ChatGPT doesn’t cite sources for its responses unless specifically prompted.
  7. In January 2023, ChatGPT reached over 100 million users, making it the fastest-growing consumer application to date.
  8. The service works best in English but can also function in other languages like Spanish, French, German, Italian, and Portuguese to varying degrees of accuracy.
  9. No official peer-reviewed technical paper on ChatGPT has been published.
  10. The training process for ChatGPT took several months and involved hundreds of GPUs.
  11. API access to ChatGPT is available in free and paid tiers, with paid usage priced from $0.000048 to $0.00072 per token.
  12. It can answer questions with an accuracy of 66% on the SuperGLUE benchmark.
  13. OpenAI acknowledges that the chatbot can give wrong answers too.
  14. The largest share of ChatGPT users falls in the 25-34 age group, which accounts for roughly 35.3% of total users.
  15. Its knowledge is limited to events up to 2021, the cutoff of its training dataset.
  16. Stack Overflow, the popular Q&A website for developers, has banned ChatGPT-generated answers on its platform.
  17. After the United States (19.01%), India (5.42%) accounts for the second-largest share of traffic to the website.
  18. According to SimilarWeb, organic traffic accounts for 88.43% of the traffic to the ChatGPT website, meaning 88.43% of visitors arrive by clicking an unpaid link in a search engine result. The remaining 11.57% of the traffic comes from other sources, such as direct visits, social media, referrals, and email.
  19. Netflix took 3.5 years, Facebook took 10 months, Instagram took 2.5 months, whereas ChatGPT took just five days to cross 1 million users.
  20. On February 1, 2023, OpenAI also launched ChatGPT Plus, a premium version of ChatGPT available through a USD 20 per month subscription plan. OpenAI hopes to generate USD 200 million in revenue from the platform by the end of 2023.
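To put the per-token pricing in point 11 in perspective, here is a minimal sketch of how API costs scale with usage. The per-token rates are the ones quoted in this article, not official OpenAI pricing, and the function name is purely illustrative:

```python
# Rough API cost estimate using the per-token price range quoted
# in this article ($0.000048 to $0.00072 per token). These rates
# are illustrative only; check OpenAI's pricing page for actuals.

def estimate_cost(tokens: int, price_per_token: float) -> float:
    """Return the estimated cost in USD for a given token count."""
    return tokens * price_per_token

# A 1,000-token exchange (roughly 750 English words) at each end
# of the quoted range:
low = estimate_cost(1_000, 0.000048)   # cheapest quoted rate
high = estimate_cost(1_000, 0.00072)   # most expensive quoted rate
print(f"1,000 tokens: ${low:.4f} to ${high:.4f}")
```

Even at the top of the quoted range, a thousand tokens costs well under a dollar, which helps explain how quickly API usage can scale.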

ChatGPT's success is a testament to the power of AI to produce tools that genuinely benefit humans. As AI evolves, we will likely see even more powerful and innovative tools like ChatGPT and ChatGPT Plus. Google might launch Bard AI for all users in May at its annual Google I/O event. Microsoft has already integrated a sophisticated version of AI, based on the GPT-4 model, into its Bing search engine, and the search experience has become noticeably more capable since. One wonders what Google can achieve by integrating Bard AI into Google Search!

FAQs

What is Google's Bard?

Bard is an artificial intelligence chatbot developed by Google, based on LaMDA (Language Model for Dialogue Applications).

On what techniques is ChatGPT built?

ChatGPT is built on top of OpenAI's GPT-3.5 and GPT-4 foundational large language models (LLMs), which have been fine-tuned using supervised and reinforcement learning techniques.

