The launch of ChatGPT in November 2022 sparked a new AI era, popularizing the technology. Since then, many competitors have entered the market, developing large language models (LLMs), chatbots, image generators, and more.
Fast forward to 2025, and nearly every major tech company has launched AI products. The technology is also increasingly built into hardware, with AI features integrated into most smartphones, laptops, and tablets.
Also: The best AI for coding in 2025 (and what not to use)
As AI becomes omnipresent, it is important to remember that LLMs are still emerging technologies. As a result, in-depth evaluations of different models, services, and products are more important than ever. Those evaluations are our goal at ZDNET.
How we test AI in 2025
To test an AI product, whether it is a model, a feature, a chatbot, an image generator, or a device (think the Rabbit R1), our experts perform hands-on tests, evaluating the product's overall performance along with other contributing factors, such as everyday use cases and cost.
Since generative AI is trained on huge amounts of data, including user inputs, privacy is also a major component of our overall assessments. Finally, we consider the safeguards that protect users from deepfakes and copyright infringement.
Also: Why Canvas is the best ChatGPT productivity feature for power users
Here is a general overview of our AI testing methodology. It should help you better understand how an AI product earns the ZDNET Recommended title, and how you can apply some of these assessments when making your own decisions.
What makes an AI product ZDNET Recommended?
Performance
To measure performance, we examine how well the product handles tasks. Factors include the speed and quality of the output. We also weigh performance against the price and against what other competitors on the market offer.
The performance evaluation methodology varies depending on the AI product being tested. However, our tests center on how effectively the AI performs its tasks.
Also: What is Perplexity Deep Research, and how do you use it?
For example, when evaluating an image generator, we assess performance based on how quickly it renders images, how many images it generates from a single prompt, how closely the output matches the prompt (prompt fidelity), and the quality of the images.
When evaluating a text generator, we look for some of the same factors, such as speed and quality. However, we also consider other elements, including internet access, chat history settings, and the ability to create custom assistants.
Usefulness
With so many companies rushing to develop AI features and products, AI is sometimes just a buzzword applied to a product that offers little or no real value to the user.
At ZDNET, we are especially mindful of this problem, making sure that any AI product we recommend genuinely improves the user's experience in some way.
Also: I mapped my iPhone's Camera Control button to ChatGPT - here are 5 ways to use it every day
To measure usefulness, we consider the everyday use cases in which the AI would be helpful, the time it could save a user in their daily workflow, and the overall return on investment, in both time and money.
Price
There are so many flashy AI subscriptions on the market that it can be tempting to spend a lot of money across the different offerings. The truth is, you may only need to subscribe to a single model, if any at all.
Also: Is ChatGPT Plus or Pro worth it? Here's how they compare to the free version
We test subscriptions, add-ons, and AI devices to determine which are worth your money. We also identify budget-friendly or free alternatives. If a model can do something well for free, we will always recommend it.
Safety/privacy
It is undeniable that AI models can add value to people's lives. However, there are tradeoffs to using these models, and we want to help keep them to a minimum for our readers. As a result, we prioritize transparency around training practices so users can control how their data is used.
AI model training practices also matter for the integrity of the output. To ensure that the original creators of a work receive proper attribution, AI companies should train their models only on work they have permission to use. We always highlight commercially safe options that take this approach.
Generative AI models can produce highly realistic text, photos, videos, and more. As a result, companies must include safeguards to prevent the creation of harmful content. Our reviews examine how a company builds in protections so users understand the risks.
Ultimately, we are more inclined to recommend AI products with guardrails in place, and when we recommend one that lacks them, we will say so explicitly and explain why.
Here are some of our AI roundups