OpenAI unveils o1-preview with enhanced reasoning capabilities

More topics: AI improves by re-reading questions, and Google introduces DataGemma to address AI hallucinations


5 min read


Hi AI Enthusiasts,

We are back from our summer break, and a lot has happened in the world of AI! We've rounded up the top AI news of the week for you. If you've been feeling overwhelmed this week, this is for you. Stay curious! 😎

This week's Magic AI tool helps you to write, plan, and organize with the help of AI.

Let's explore this week's AI news together. 👇🏽


In today's Magic AI:

  • OpenAI unveils o1-preview with enhanced reasoning capabilities

  • AI improves by re-reading questions

  • Google introduces DataGemma to address AI hallucinations

  • Magic AI tool of the week

  • Hand-picked articles of the week


🤖 Check out our hand-picked AI tool list


Top AI news of the week

💭 OpenAI unveils o1-preview with enhanced reasoning capabilities

OpenAI has released a new model called o1-preview with advanced reasoning capabilities. According to OpenAI, it handles complex tasks in science, programming, and mathematics better than previous models.

The details:

  • The model is trained with reinforcement learning and "thinks" before it responds.

  • In benchmark tests in physics, chemistry, and biology, the model performs similarly to PhD students.

  • In a qualifying exam for the International Mathematical Olympiad (IMO), o1-preview solved 83% of the problems correctly, while GPT-4o achieved only 13%.

  • Unlike GPT-4o, the model currently has no internet access.

  • Two versions are available: o1-preview and o1-mini.

Example: programming a Snake game

Our thoughts

The new capabilities could mean that the model behaves more like a human. It first "thinks" about how to solve a problem. This could lead to more accurate responses and opens the doors for more complex tasks in science, coding, and more.

However, an AI model cannot really think like a human. It only processes information; describing a model as "thinking" is largely a marketing approach.

More information


🤖 AI improves by re-reading questions

Researchers have shown that AI systems that "re-read" a question before answering significantly improve their reasoning and problem-solving abilities across various tasks.

The details:

  • The researchers call the prompting method RE2 (Re-Reading the question as input).

  • The technique works best if the question is entered twice as input. More than two repetitions can reduce the model performance.

  • It works with various AI models and can be combined with other prompting methods.

If you want to dive deeper into this topic, we recommend reading the full paper.
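The idea is simple enough to sketch in a few lines. Below is a minimal illustration of building a RE2-style prompt; the "Read the question again:" phrasing follows the template reported in the paper, while the surrounding Q/A wrapper and the example question are our own illustrative choices, not the authors' exact code:

```python
def re2_prompt(question: str) -> str:
    """Build a Re-Reading (RE2) prompt: the question is stated twice,
    so the model processes it a second time before answering."""
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

# Example usage with a classic trick question
prompt = re2_prompt("A farmer has 17 sheep. All but 9 run away. How many are left?")
print(prompt)
```

Note that the question appears exactly twice; per the paper's findings, adding further repetitions tends to hurt rather than help.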

Our thoughts

There are many prompting techniques, and some can significantly improve the output of large language models. The presented method is easy to apply and appears to improve results.

There is a lot of research on prompting techniques. In our opinion, the most accurate outputs come from prompts that are clear and specific. It is worth experimenting with several prompts and techniques to find what works best for your task.

We have also written a comprehensive article about Prompt Engineering for Developers. Check it out if you are interested!

More information


💬 Google introduces DataGemma to address AI hallucinations

DataGemma models are open models designed to address the challenges of hallucination. The models use real-world statistical data from Google's Data Commons.

The details:

  • DataGemma expands the capabilities of Gemma models using two distinct approaches: RIG (Retrieval-Interleaved Generation) and RAG (Retrieval-Augmented Generation).

  • RIG (Retrieval-Interleaved Generation) enhances the language model, Gemma 2, by querying trusted sources and fact-checking with Data Commons.

  • RAG (Retrieval-Augmented Generation) allows language models to use additional information beyond their training data. The DataGemma models retrieve relevant data from Data Commons before generating responses, reducing hallucinations and improving accuracy.

  • The models are available on Hugging Face.
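The retrieve-then-generate flow behind RAG can be sketched with a toy retriever standing in for Data Commons. Everything here is an illustrative assumption, not DataGemma's actual API: the in-memory fact store, the naive keyword matching, and the prompt wrapper are all placeholders for the real retrieval pipeline:

```python
import re

# Toy stand-in for a trusted statistical source such as Data Commons.
FACT_STORE = {
    "population of france": "France had about 68 million inhabitants in 2023.",
    "gdp of japan": "Japan's GDP was roughly 4.2 trillion USD in 2023.",
}

def _keywords(text: str) -> set[str]:
    """Lowercase words longer than three characters (naive stopword filter)."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if len(w) > 3}

def retrieve(query: str) -> list[str]:
    """Return stored facts whose key shares a keyword with the query."""
    terms = _keywords(query)
    return [fact for key, fact in FACT_STORE.items() if terms & _keywords(key)]

def build_rag_prompt(question: str) -> str:
    """Prepend retrieved facts so the model grounds its answer in them."""
    context = "\n".join(retrieve(question)) or "No facts found."
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_rag_prompt("What is the population of France?"))
```

The key design point is the ordering: relevant data is fetched *before* generation, so the model can condition its answer on retrieved facts instead of relying solely on its training data.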

Our thoughts

As impressive as the capabilities of state-of-the-art LLMs are, they sometimes confidently present information that is not correct. This phenomenon is known as "hallucination" and is a key challenge in generative AI.

Fact-checking against a trustworthy data set makes sense, but more research is needed on this topic. It's good to see Google publishing its research here; we will follow up on this.

More information


Magic AI tool of the week

Today, it is essential to work in a structured and organized manner. There are many tools that you can use to increase your productivity at work. However, it can be overwhelming to find the ideal tool for your needs.

One of the best tools we have ever used is Notion in combination with its comprehensive AI functionalities. Notion combines the functions of a note-taking app, a document editor, a project management tool, and an AI assistant.

AI will help you complete your tasks faster and more efficiently. This tool increases your productivity from day one - we promise.

๐Ÿ‘‰๐Ÿฝ Try Notion for free!*


Articles of the week

💡
Do you enjoy our content and want to read super-detailed articles about AI? If so, subscribe to our Tinz Twins Hub and get our popular data science cheat sheets for FREE.

Thanks for reading, and see you next time.

- Tinz Twins


* Disclosure: The links are affiliate links, which means we will receive a commission if you purchase through these links. There are no additional costs for you.
