
OpenAI Introduces Chat-GPT4 Turbo – What’s New?

Posted on November 10, 2023 by user in AI News, Chat-GPT, OpenAI

OpenAI DevDay November 6th, 2023

This year at DevDay, OpenAI released a new version of Chat-GPT4 called ‘Turbo’. Turbo’s biggest feature is its 128K context window. In addition, the Turbo model is both more capable and cheaper than previous Chat-GPT versions.

A list of some of the new features rolled out in Chat-GPT4 Turbo:

  • 128k context window size
  • trained on more data and world events up to April 2023
  • more optimized performance
  • improved JSON responses
  • improved instruction following
  • released to all paying developers on a limited basis

128K Context Window – What does that mean?

Context window refers to the amount of information an LLM (Large Language Model) can accept as input.

A context window of 128,000 tokens is equivalent to over 300 pages of text.
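A quick back-of-envelope check of that figure, assuming the common rules of thumb of roughly 0.75 English words per token and roughly 300 words per printed page (both are approximations, not exact values):

```python
# Rough estimate: how many pages of text fit in a 128K-token context window.
# Assumptions (rules of thumb, not exact): ~0.75 English words per token,
# ~300 words per printed page.
TOKENS = 128_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

words = TOKENS * WORDS_PER_TOKEN   # ~96,000 words
pages = words / WORDS_PER_PAGE     # ~320 pages

print(f"{words:,.0f} words is roughly {pages:.0f} pages")
```

With those assumptions, the arithmetic lands at around 320 pages, which is consistent with the "over 300 pages" claim.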

If you imagine Chat-GPT or Google Bard as a black box, we are only concerned with the amount of information we can put into it and how much information comes out of it.

The model itself does not contain any memory. It simply takes an input and produces an output. The way OpenAI and Google get the models to act as though they have memory is by feeding in the entire conversational history each time you send a new prompt.

The conversational history includes your prompts and the model’s (Chat-GPT, Bard, etc.) responses. Together, they provide the ‘context’ of the conversation.

As a conversation with Chat-GPT or Bard gets longer, the conversational history grows. Eventually, the conversational history can grow longer than the context window. If this happens, then parts of the conversational history get excluded (it ‘forgets’).
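That trimming behavior can be sketched in a few lines. This is a hypothetical illustration, not OpenAI’s or Google’s actual code; real chat backends use proper tokenizers rather than the rough 4-characters-per-token estimate used here:

```python
# Sketch: keep a chat history within a fixed context window by dropping
# the oldest messages first. Hypothetical illustration only.

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def trim_history(history: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest messages until the history fits in max_tokens."""
    kept: list[dict] = []
    total = 0
    # Walk newest-to-oldest so the most recent context survives.
    for message in reversed(history):
        cost = estimate_tokens(message["content"])
        if total + cost > max_tokens:
            break  # everything older than this point is 'forgotten'
        kept.append(message)
        total += cost
    kept.reverse()  # restore chronological order
    return kept

history = [
    {"role": "user", "content": "First question about context windows."},
    {"role": "assistant", "content": "First answer, fairly long text here."},
    {"role": "user", "content": "Follow-up question."},
]
print(trim_history(history, max_tokens=15))
```

With a budget of 15 tokens, the oldest message no longer fits and is dropped, which is exactly the ‘forgetting’ described above.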

By increasing the context window to 128K tokens or more, the models are able to take in more information and provide coherent, relevant responses. This also allows the models to examine larger documents (as text inputs) from which to generate responses.

Google has not released the context window length for Bard.

Optimized Performance = Lower Cost

AI models can get big. Large language models (LLMs) are LARGE for a reason: as the models are trained on more data, they grow in size, requiring more nodes (parameters) to encode relationships.

When a user prompts the model with a question, extremely fast computers process the data in the model to generate the output. As models get larger, they require more processing power, which consumes more electricity, which raises cost.

Models can be reduced in size while still maintaining a high intelligence level. Different techniques exist to reduce model size: model distillation, model pruning, model quantization, and dataset distillation.

The goal is to reduce the number of nodes in a model, which in turn reduces the memory footprint. This all translates into reduced power consumption.
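Of the techniques listed, quantization is the simplest to illustrate. Here is a toy sketch of symmetric 8-bit weight quantization; production systems use dedicated libraries with per-channel scales and calibration, so treat this only as an illustration of the idea:

```python
# Sketch: symmetric 8-bit weight quantization — one of the size-reduction
# techniques mentioned above. Illustrative toy only.

def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Map float weights to int8 values plus one float scale factor."""
    scale = max(abs(w) for w in weights) / 127  # int8 range is [-127, 127]
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.89]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each weight now needs 1 byte instead of 4 (float32), at a small
# accuracy cost: the rounding error per weight is at most scale / 2.
```

Storing one byte per weight instead of four cuts the memory footprint by roughly 75%, which is why quantization is such a popular way to shrink models.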

In the end, if OpenAI is able to create more efficient models, they can pass those savings on to the users.

More Up-To-Date Knowledge

AI models like Bard and Chat-GPT are regularly retrained on new data to keep their knowledge relevant. Chat-GPT Turbo has the most up-to-date training data of all the GPT models, including Chat-GPT 4.

For More Information

  • https://openai.com/blog/introducing-gpts
  • https://openai.com/blog/new-models-and-developer-products-announced-at-devday


Copyright © 2025 Automate Bard.
