OpenAI made several noteworthy announcements during its first-ever developer conference, including a preview of GPT-4 Turbo, an enhanced version of its flagship large language model. GPT-4 Turbo brings a number of upgrades that improve the model's capability and usability.
One of the standout improvements is the model's larger context window. While the previous version topped out at roughly 50 pages of text per prompt, GPT-4 Turbo accepts inputs of up to about 300 pages, thanks to a 128,000-token context window. This expanded capacity allows for more comprehensive and complex prompts, potentially leading to more meaningful and informative responses.
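For developers, the longer context is reached through the familiar chat completions endpoint. Here is a minimal sketch, assuming the official `openai` Python package (v1.x), the published `gpt-4-1106-preview` preview identifier, and a purely illustrative file name:

```python
# Minimal sketch: passing a very long document as a single prompt.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

with open("long_report.txt") as f:   # hypothetical document, potentially hundreds of pages
    document = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",      # GPT-4 Turbo preview model identifier
    messages=[
        {"role": "system", "content": "Summarize the key points of this document."},
        {"role": "user", "content": document},
    ],
)
print(response.choices[0].message.content)
```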
OpenAI has also refreshed the data on which GPT-4 Turbo is trained: the model's knowledge of the world now extends to April 2023, up from the previous cutoff of September 2021. In addition, the non-Turbo version of GPT-4 gained the ability to browse the internet for real-time information.
Additionally, GPT-4 Turbo introduces support for image prompts. Users can now directly input images into the chat box, and the model can generate captions or descriptions of the image’s content. This feature expands the model’s versatility and makes it more capable of handling multimedia inputs.
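As a rough sketch of how image prompts look on the API side (the vision-capable preview model is exposed separately as `gpt-4-vision-preview`; the image URL here is a placeholder):

```python
# Minimal sketch: asking the vision-capable preview model to describe an image.
# Assumes the openai Python package (v1.x); the image URL is illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",    # vision-capable preview model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=300,                  # the vision preview benefits from an explicit output cap
)
print(response.choices[0].message.content)
```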
Another notable addition is text-to-speech support. GPT-4 Turbo can now accept and fulfill text-to-speech requests, so written text can be converted into spoken words directly within the chat interface. This enhances the user experience and opens up new possibilities for voice-based interaction with the model.
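On the developer side, the same capability is exposed through OpenAI's audio endpoint. A minimal sketch, assuming the `openai` Python package (v1.x), the `tts-1` model, and the `alloy` preset voice; the output file name is illustrative:

```python
# Minimal sketch: converting text to spoken audio with the text-to-speech endpoint.
# Assumes the openai Python package (v1.x); the output path is illustrative.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",        # standard text-to-speech model
    voice="alloy",        # one of the preset voices
    input="GPT-4 Turbo was previewed at OpenAI's first developer conference.",
)
speech.stream_to_file("announcement.mp3")   # save the generated audio to disk
```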
Furthermore, OpenAI has addressed the need for document analysis by allowing users to upload documents directly to GPT-4 Turbo for analysis. This puts the model on par with other chatbots, such as Anthropic's Claude, which have offered document analysis for some time, and it expands the range of tasks the model can perform, making it a more comprehensive tool for users.
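The upload feature described above lives in ChatGPT's interface; for developers, the closest equivalent announced at the conference is file upload combined with the Assistants API's retrieval tool. The sketch below assumes that mapping, along with the `purpose="assistants"` upload flag and `retrieval` tool name from the first-generation Assistants API:

```python
# Sketch: uploading a document and attaching it to an assistant for analysis.
# Assumes the openai Python package (v1.x) and the first-generation Assistants API;
# the file name and instructions are illustrative.
from openai import OpenAI

client = OpenAI()

uploaded = client.files.create(
    file=open("quarterly_report.pdf", "rb"),   # hypothetical document
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Answer questions about the uploaded report.",
    tools=[{"type": "retrieval"}],             # retrieval over attached files
    file_ids=[uploaded.id],
)
print(assistant.id)
```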
Developers, in particular, benefit from GPT-4 Turbo's new pricing. OpenAI has cut the cost of using the model substantially: input tokens are three times cheaper than GPT-4's, at $0.01 per 1,000 tokens, and output tokens are half the price, at $0.03 per 1,000 tokens. The lower prices make the model more accessible and encourage broader adoption by developers.
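To make the difference concrete, here is a small back-of-the-envelope comparison at the published per-1,000-token rates (the token counts are made up for illustration):

```python
# Back-of-the-envelope cost comparison at the announced per-1K-token rates:
# GPT-4 (8K): $0.03 input / $0.06 output; GPT-4 Turbo preview: $0.01 input / $0.03 output.
def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Dollar cost of one request at per-1,000-token rates."""
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

tokens_in, tokens_out = 4_000, 1_000   # illustrative request size

gpt4_cost  = request_cost(tokens_in, tokens_out, 0.03, 0.06)   # $0.18
turbo_cost = request_cost(tokens_in, tokens_out, 0.01, 0.03)   # $0.07
print(f"GPT-4: ${gpt4_cost:.2f} | GPT-4 Turbo: ${turbo_cost:.2f}")
```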
In addition to the GPT-4 Turbo preview, OpenAI shared impressive statistics about its popular product, ChatGPT. The company announced that ChatGPT has garnered over 100 million weekly active users worldwide and is utilized by more than 92 percent of Fortune 500 companies. This widespread adoption highlights the trust and reliance placed on OpenAI’s language model by businesses globally.
OpenAI also unveiled "GPTs" during the conference: single-purpose mini versions of ChatGPT focused on specific tasks, which can be built without any coding knowledge. Community-created GPTs can be easily shared, and OpenAI plans to launch a GPT Store where creations from verified builders will be available to a wider audience. The initiative aims to foster collaboration and empower users to create specialized GPTs for a variety of applications.
While OpenAI did not provide a specific date for GPT-4 Turbo's general release, the model is already available in preview to paying developers through the API, and consumer access to GPT-4 remains part of the $20-per-month ChatGPT Plus subscription. The company's focus on continuous improvement and innovation in language models demonstrates its commitment to pushing the boundaries of natural language processing technology. As GPT-4 Turbo matures and becomes more widely available, it is poised to reshape the way we interact with AI-powered language models.