Ever since ChatGPT creator OpenAI released its latest GPT-4 language model, the world of AI has been waiting with bated breath for news of a successor. But even as competitors like Google and Meta started to catch up, OpenAI maintained that it wasn’t working on GPT-5 just yet. This led many to speculate that the company would incrementally improve its existing models for efficiency and speed before developing a brand-new one. Fast forward a few months and that indeed looks to be the case, as OpenAI has released GPT-4 Turbo, a major refinement of its latest language model.
GPT-4 Turbo introduces several new features, from an increased context window to improved knowledge of recent events. However, it won’t come to all ChatGPT users anytime soon. So in this article, let’s break down what GPT-4 Turbo brings to the table and why it’s such a big deal.
In a hurry? Here’s a quick summary of GPT-4 Turbo’s new features:
- As the name suggests, you can expect faster responses from GPT-4 Turbo compared to its predecessor.
- GPT-4 Turbo supports longer inputs, up to 128K tokens in length.
- While previous models didn’t know about events that took place after September 2021, GPT-4 Turbo has been trained on a much more recent dataset.
- The latest model is significantly cheaper for developers to integrate into their own apps.
- OpenAI will also let developers use the new model’s vision, text-to-speech, and AI image generation features via code.
- GPT-4 Turbo’s new features are making their way to all ChatGPT Plus users, meaning it will require a monthly subscription.
- In ChatGPT, you can now build your own GPTs with customized instructions for specialized tasks. Likewise, you will be able to download existing ones from the GPT store.
Keep reading to learn more about the features included within GPT-4 Turbo and how it compares to previous OpenAI models.
What is GPT-4 Turbo?
According to OpenAI, GPT-4 Turbo is the company’s “next-generation model”. Primarily, it can now retain more information in a single conversation and has knowledge of events that occurred up to April 2023. That’s a big jump from prior GPT generations, which had a pretty restrictive knowledge cut-off of September 2021. OpenAI offered a way to overcome that limitation by letting ChatGPT browse the internet, but that workaround didn’t help developers who wanted to use GPT-4 without relying on external plugins or sources.
Information retention is another area where GPT-4 Turbo is leaps and bounds ahead of previous models. It boasts a context window of 128K tokens, which OpenAI says is roughly equivalent to 300 pages of text. This can come in handy if you need the language model to analyze a long document or remember a lot of information. For context, the previous model only supported context windows of 8K tokens (or 32K in some limited cases).
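To get a feel for what a “token” actually is, you can count them yourself with tiktoken, OpenAI’s open-source tokenizer. This is purely an illustrative sketch; it assumes the “cl100k_base” encoding, which is the one used by the GPT-4 family.

```python
# Illustrative sketch: counting tokens with tiktoken (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by the GPT-4 family
sample = "GPT-4 Turbo supports a context window of 128,000 tokens."
print(len(enc.encode(sample)))  # a short sentence is only a dozen or so tokens

# At roughly 0.75 words per token, 128,000 tokens works out to on the order of
# 96,000 words, which lines up with OpenAI's "about 300 pages" figure.
```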
GPT-4 Turbo is simultaneously more capable and cheaper.
GPT-4 Turbo also offers a massive cost reduction to developers. The new model’s rates are two to three times lower than its predecessor’s. Having said that, GPT-4 Turbo still costs an order of magnitude more than GPT-3.5 Turbo, the model that was released alongside ChatGPT.
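If you’re curious what integrating the new model looks like in practice, here’s a minimal sketch using OpenAI’s official Python SDK. The model identifier “gpt-4-1106-preview” was the preview name at launch and may have changed since, and the per-token rates in the comment assume the launch pricing mentioned above, so treat both as assumptions to verify against OpenAI’s current documentation.

```python
# Minimal sketch: calling GPT-4 Turbo with the openai Python SDK (v1.x).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # launch-era preview name; may have changed
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what GPT-4 Turbo changes in two sentences."},
    ],
)
print(response.choices[0].message.content)

# Back-of-envelope cost at the launch rates ($0.01 per 1K input tokens,
# $0.03 per 1K output tokens); verify current pricing before relying on this.
usage = response.usage
cost = usage.prompt_tokens / 1000 * 0.01 + usage.completion_tokens / 1000 * 0.03
print(f"~${cost:.4f} for this request")
```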
Unfortunately, you’ll have to shell out $20 each month for a ChatGPT Plus subscription in order to access GPT-4 Turbo. Free users won’t get to enjoy the now-older vanilla GPT-4 model either, presumably because of its high operating costs. On the plus side, however, Bing Chat should switch over to GPT-4 Turbo in the near future. I’ve almost exclusively used Microsoft’s free chatbot over ChatGPT, as it uses OpenAI’s latest language model and can search the internet as an added bonus.
GPT-4 Turbo with Vision
When OpenAI first unveiled GPT-4 in early 2023, it made a big deal about the model’s multimodal capabilities. In short, GPT-4 was designed to handle different kinds of input beyond text, like audio, images, and even video. While this capability didn’t debut alongside the model’s release, OpenAI started allowing image inputs in September 2023.
GPT-4 Turbo with Vision allows the language model to understand images and other non-text inputs.
GPT-4 with Vision allows you to upload an image and have the language model describe or explain it in words. Whether it’s a complex math problem or a strange food that needs identifying, the model can likely analyze enough about it to spit out an answer. I’ve personally used the feature in ChatGPT to translate restaurant menus while abroad and found that it works much better than Google Lens or Translate.
With GPT-4 Turbo, developers can now access the model’s vision features via an API. Pricing is pegged at $0.00765 per 1080×1080 image. This affordability is good news as it means more apps could add the feature going forward.
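For developers wondering what that looks like in code, the snippet below is a rough sketch of an image request against the vision-enabled model. The model name “gpt-4-vision-preview” reflects the preview identifier at launch, and the image URL is a placeholder, so adjust both for your own setup.

```python
# Rough sketch: asking the vision-enabled model to describe an image.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # launch-era preview name; may have changed
    max_tokens=300,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Translate this restaurant menu into English."},
                # Placeholder URL; a base64 data URL also works for local images.
                {"type": "image_url", "image_url": {"url": "https://example.com/menu.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```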
GPT-4 Turbo vs GPT-4 and previous OpenAI models: What’s new?
GPT-4 Turbo is an iterative upgrade compared to GPT-4, but it still brings a handful of compelling features. Luckily, existing GPT-4 users don’t have to do anything, as it’s an automatic upgrade. However, if you’re still using GPT-3.5 or the free version of ChatGPT, the latest GPT-4 Turbo release is quite a big jump. Here’s how the three models compare:
| | GPT-3.5 | GPT-4 | GPT-4 Turbo |
|---|---|---|---|
| Release date | November 2022 | March 2023 | November 2023 |
| Context window | 4,096 tokens (currently 16,385 tokens) | 8,192 tokens | 128,000 tokens |
| Knowledge cut-off | September 2021 | September 2021 | April 2023 |
| Cost to developers (per 1,000 tokens) | Input: $0.001, Output: $0.002 | Discontinued | Input: $0.01, Output: $0.03 |
| Vision (image input) | Not available, text-only | Available | Available |
| Image generation | None | Yes, via DALL-E 3 | Yes, via DALL-E 3 |
| Availability | All ChatGPT users | ChatGPT Plus only | ChatGPT Plus only |
How to use GPT-4 Turbo
OpenAI has opened up access to GPT-4 Turbo to all ChatGPT Plus users, meaning you can try the new model immediately — no waitlist signup required. However, it’s unclear if the context window has increased for ChatGPT users yet. Many have reported faster responses with the new update, though.
It’s worth noting that GPT-4 Turbo via ChatGPT Plus will still have input or character limits. To access the latest model without any restrictions, simply head over to the OpenAI Playground page and log into your account. Then, look for the dropdown menu next to the word “Playground” and change the mode to Chat. Finally, change the model to GPT-4 Turbo (preview). If you don’t see models newer than GPT-3.5, you’ll have to add a payment method to your billing account.
Most users won’t want to pay for each response, however, so I’d recommend using GPT-4 Turbo via ChatGPT Plus instead. While Plus users likely won’t benefit from the massive 128,000-token context window, the upgrade still offers other features like a more recent knowledge cut-off, image generation, plugin support, and GPT-4 Vision.
FAQs
Is GPT-4 Turbo free to use?
No, GPT-4 Turbo requires a ChatGPT Plus subscription. Developers pay per token via the API, at $0.01 per 1,000 input tokens and $0.03 per 1,000 output tokens (1,000 tokens is roughly 750 words).
Can GPT-4 Turbo generate images?
Yes, GPT-4 Turbo can generate images via OpenAI’s DALL-E 3 image creator.
Can GPT-4 Turbo access the internet?
No, GPT-4 Turbo is a large language model that simply analyzes and generates text. However, you can use ChatGPT’s browsing plugin or Bing Chat to connect the model to the internet.
Can GPT-4 Turbo read PDFs?
GPT-4 Turbo can read PDFs via ChatGPT’s Code Interpreter or Plugins features. This will require a paid subscription.