AI News: OpenAI Launches New Benchmark

ORIGINAL ARTICLE from CoinGape by Godfrey Benjamin, October 31, 2024

The situation has reached a point where these models produce false outputs or give answers without substantial evidence, a problem generally referred to as “hallucination.” Consequently, users are gravitating towards the models that provide more accurate responses with fewer hallucinations.

In response, OpenAI has introduced the SimpleQA benchmark, which measures the factuality of language models. As the firm noted, this is a difficult goal to pursue because factuality is hard to measure. SimpleQA is designed to focus on short, fact-seeking queries; narrowing the scope in this way makes measuring factuality much more tractable.
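To illustrate the idea, the sketch below shows how a SimpleQA-style evaluation might score a model on short, fact-seeking questions, counting each answer as correct, incorrect, or not attempted. The sample items, the exact-match grader, and the function names are illustrative assumptions, not OpenAI's actual implementation.

```python
# Illustrative sketch (not OpenAI's code): scoring a model on
# SimpleQA-style short, fact-seeking questions. The items, grade labels,
# and exact-match grading rule are simplified assumptions.

from dataclasses import dataclass


@dataclass
class QAItem:
    question: str
    reference_answer: str


# Hypothetical fact-seeking items in the spirit of the benchmark.
ITEMS = [
    QAItem("In what year was the Eiffel Tower completed?", "1889"),
    QAItem("What is the chemical symbol for gold?", "Au"),
]


def grade(model_answer: str, reference: str) -> str:
    """Toy grader: exact match on normalized text.
    A real benchmark would use a far more robust grading scheme."""
    if not model_answer.strip():
        return "not_attempted"
    if model_answer.strip().lower() == reference.strip().lower():
        return "correct"
    return "incorrect"


def evaluate(answers: list[str]) -> dict:
    """Return the fraction of answers in each grade category."""
    counts = {"correct": 0, "incorrect": 0, "not_attempted": 0}
    for item, answer in zip(ITEMS, answers):
        counts[grade(answer, item.reference_answer)] += 1
    total = len(ITEMS)
    return {label: n / total for label, n in counts.items()}


if __name__ == "__main__":
    # Pretend these answers came from a language model under test.
    model_answers = ["1889", ""]
    print(evaluate(model_answers))
```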

