by John Licato, University of South Florida, [This article first appeared in The Conversation, republished with permission]
I study the intersection of artificial intelligence, natural language processing and human reasoning as director of the Advancing Human and Machine Reasoning lab at the University of South Florida. I also commercialize this research in an AI startup that provides a vulnerability scanner for language models.
From that vantage point, I observed significant developments in the field of AI language models in 2024, both in research and in industry.
Perhaps the most interesting of these are the capabilities of smaller language models, support for addressing AI hallucinations, and frameworks for developing AI agents.
Small AIs are making a splash
At the heart of commercially available generative AI products like ChatGPT are large language models, or LLMs, which are trained on vast amounts of text and produce convincing, humanlike language. Their size is generally measured in parameters, which are the numerical values a model derives from its training data. The larger models, like those from the major AI companies, have hundreds of billions of parameters.
There is an iterative interaction between large language models and smaller language models, one that seems to have accelerated in 2024.
First, organizations with the most computational resources experiment with and train increasingly large and powerful language models. Those yield new large language model capabilities, benchmarks, training sets, and training or prompting tricks. In turn, those are used to make smaller language models – in the range of 3 billion parameters or less – which can be run on more affordable computer setups, require less energy and memory to train, and can be fine-tuned with less data.
So it’s no surprise that developers have released a host of powerful smaller language models – although the definition of small keeps changing: Phi-3 and Phi-4 from Microsoft, Llama-3.2 1B and 3B, and Qwen2-VL-2B are just a few examples.
These smaller language models can be specialized for more specific tasks, such as rapidly summarizing a set of comments or fact-checking against a specific reference. They can also work with their larger cousins to produce increasingly powerful hybrid systems.
Wider access
Increased access to highly capable language models, large and small, can be a mixed blessing. Language models can give malicious users the ability to mass-produce social media posts and deceptively influence public opinion. There was a great deal of concern about this threat in 2024, given that it was an election year in many countries.
And indeed, a robocall faking President Joe Biden’s voice asked Democratic voters in the New Hampshire primary to stay home. OpenAI had to intervene to disrupt more than 20 deceptive operations and networks that tried to use its models for deceptive campaigns. Fake videos and memes were created and shared with the help of AI tools.
Despite the anxiety surrounding AI misinformation, it is not yet clear what effect these efforts actually had on public opinion and the U.S. election. Nevertheless, U.S. states passed a large amount of legislation in 2024 governing the use of AI in elections and campaigns.
Misbehaving bots
Google started including AI-generated overviews in its search results, yielding some results that were hilariously and obviously wrong – unless you enjoy glue in your pizza. Other results, however, could have been dangerously wrong, such as when it suggested mixing bleach and vinegar to clean your clothes.
Large language models, as they are most commonly implemented, are prone to hallucinations. This means that they can state things that are false or misleading, often with confident language. Even though I and others continually beat the drum about this, 2024 still saw many organizations learning about the dangers of AI hallucination the hard way.
Despite significant testing, a chatbot playing the role of a Catholic priest advocated for baptism via Gatorade. A chatbot advising on New York City laws and regulations incorrectly said it was “legal for an employer to fire a worker who complains of sexual harassment, does not disclose a pregnancy, or refuses to cut her dreadlocks.” And OpenAI’s speech-capable model forgot whose turn it was to speak and responded to a human in her own voice.
Fortunately, 2024 also saw new ways to mitigate and live with AI hallucinations. Companies and researchers are developing tools for making sure AI systems follow given rules before deployment, as well as environments to evaluate them. So-called guardrail frameworks inspect large language model inputs and outputs in real time, although often by using another layer of large language models.
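The idea behind a guardrail layer can be illustrated with a minimal sketch. Everything here is hypothetical: `model` is a stand-in for any language model call, and the rule check is a toy keyword filter, whereas real guardrail frameworks typically delegate that check to a second language model.

```python
# Minimal sketch of a guardrail layer wrapping a language model.
# All names here are illustrative; real frameworks are far more
# sophisticated and often use another LLM as the checker.

BLOCKED_PHRASES = ["mix bleach and vinegar", "baptism via gatorade"]

def model(prompt: str) -> str:
    # Stand-in for an actual language model call.
    return f"Echoing: {prompt}"

def passes_rules(text: str) -> bool:
    """Toy rule check: reject text containing a blocked phrase."""
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

def guarded_model(prompt: str) -> str:
    # Inspect the input before it reaches the model...
    if not passes_rules(prompt):
        return "Sorry, I can't help with that."
    answer = model(prompt)
    # ...and inspect the output before it reaches the user.
    if not passes_rules(answer):
        return "Sorry, I can't share that answer."
    return answer

print(guarded_model("Should I mix bleach and vinegar to clean clothes?"))
# Blocked by the input check.
```

The key design point is that the wrapper inspects both directions of the exchange, so even a model that produces an unsafe answer to a safe prompt gets caught on the way out.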
And the conversation about AI regulation accelerated, leading the big players in the large language model space to update their policies on responsibly scaling and harnessing AI.
But even as researchers continue to find ways to reduce hallucinations, research in 2024 convincingly showed that AI hallucinations are always going to exist in some form. This may be a fundamental feature of what happens when an entity has finite computational and information resources. After all, even human beings are known to confidently misremember and state falsehoods from time to time.
The rise of agents
Large language models, particularly those powered by variants of the transformer architecture, are still driving the most significant advances in AI. For example, developers are using large language models not only to create chatbots, but to serve as the basis of AI agents. The term “agentic AI” rose to prominence in 2024, with some pundits even calling it the third wave of AI.
To understand what an AI agent is, think of a chatbot expanded in two ways: First, give it access to tools that provide the ability to take actions. This might be the ability to query an external search engine, book a flight or use a calculator. Second, give it increased autonomy, or the ability to make more decisions on its own.
For example, a travel AI chatbot might be able to perform a search of flights based on information you give it, but a tool-equipped travel agent might plan out an entire trip itinerary, including finding events, booking reservations and adding them to your calendar.
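The two expansions above – tools plus autonomy – can be sketched as a loop in which the system repeatedly decides which tool to call next until it judges the task done. This is a toy illustration with made-up function names; in a real agent framework, the decision step would be a call to a language model.

```python
# Minimal sketch of a tool-using agent loop. All names are
# illustrative; real frameworks (LangGraph, CrewAI, AutoGen, ...)
# replace decide_next_step with a language model call.

def search_flights(destination: str) -> str:
    # Stand-in for a real flight-search API.
    return f"Flight found to {destination}"

def add_to_calendar(item: str) -> str:
    # Stand-in for a real calendar API.
    return f"Added to calendar: {item}"

TOOLS = {"search_flights": search_flights,
         "add_to_calendar": add_to_calendar}

def decide_next_step(goal: str, history: list) -> tuple:
    # Toy stand-in for the model's decision-making: a fixed
    # two-step plan (find a flight, then put it on the calendar).
    if not history:
        return ("search_flights", goal)
    if len(history) == 1:
        return ("add_to_calendar", history[0])
    return ("done", None)

def run_agent(goal: str) -> list:
    """Let the agent choose and call tools until it decides it's done."""
    history = []
    while True:
        tool_name, argument = decide_next_step(goal, history)
        if tool_name == "done":
            return history
        history.append(TOOLS[tool_name](argument))

print(run_agent("Paris"))
```

The autonomy lives in the loop: the program, not the user, decides how many steps to take and which tool each step uses.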
AI agents can perform multiple steps of a task themselves.
In 2024, new frameworks for developing AI agents emerged. To name just a few, LangGraph, CrewAI, PhiData and AutoGen/Magentic-One were released or improved in 2024.
Companies are just beginning to adopt AI agents. Frameworks for developing AI agents are new and rapidly evolving. Furthermore, security, privacy and hallucination risks remain a concern.
But global market analysts expect this to change: 82% of organizations surveyed plan to use agents within one to three years, and 25% of all companies currently using generative AI are likely to adopt AI agents in 2025.
John Licato, associate professor of computer science and director of the AMHR Lab, University of South Florida
This article is republished from The Conversation under a Creative Commons license. Read the original article.