
OpenAI launches GPT-4o
, by Sukanya Eiampinit, 1 min reading time
GPT-4o is our latest model. It delivers GPT-4-level intelligence but is faster, with improved capabilities across text, speech, and images.
Today, GPT-4o is better at understanding and discussing the images you share than any other model currently available. For example, you can take a photo of a menu in a foreign language and talk to GPT-4o to translate it, learn about the history and significance of a dish, and get recommendations. In the future, improvements will enable real-time voice and video conversations with ChatGPT; for example, you could show ChatGPT a live sports game and ask it to explain the rules. We plan to launch a new Voice Mode with these capabilities in alpha in the coming weeks, with Plus users getting early access as we roll out more widely.
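As a sketch, the menu-photo example above maps onto the OpenAI Chat Completions API's multimodal message format. The request shape below follows the public API; the image URL and prompt text are placeholders for illustration, and sending the request would additionally require an API key and the `openai` client library.

```python
import json

def build_menu_request(image_url: str) -> dict:
    """Build a chat-completion payload asking GPT-4o to translate a menu photo.

    The image URL passed in is a placeholder; in practice it would point to
    your uploaded or publicly reachable menu photo.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                # A multimodal user message mixes text parts and image parts.
                "content": [
                    {
                        "type": "text",
                        "text": "Translate this menu into English and recommend a dish.",
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": image_url},
                    },
                ],
            }
        ],
    }

payload = build_menu_request("https://example.com/menu.jpg")
print(json.dumps(payload, indent=2))
```

The same payload could be sent via `client.chat.completions.create(**payload)` with the official `openai` Python package, but that call is omitted here since it needs network access and credentials.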
To make advanced AI more accessible and useful around the world, GPT-4o's language capabilities have been improved in both quality and speed. ChatGPT now also supports more than 50 languages across sign-up, login, user settings, and more.
We're starting to roll out GPT-4o to ChatGPT Plus and Team users, with availability for Enterprise users coming soon; rollout to ChatGPT Free users, with usage limits, also begins today. Plus users will have a message limit up to five times higher than Free users, and Team and Enterprise users will have even higher limits.
An example of using GPT-4o in ChatGPT, as it will become available over the coming weeks:
https://www.youtube.com/watch?v=MirzFk_DSiI
In the next article, we will put AI to work with TensorFlow on a Raspberry Pi 5.