10 top AI and machine learning trends for 2024
Explore the most important AI and machine learning trends for 2024, from customized enterprise models and open source AI to multimodal systems, and how they stand to reshape the field.
With the release of ChatGPT in November 2022, 2023 became a turning point in the history of AI. The developments of the past year, from a thriving open source community to sophisticated multimodal models, have set the stage for significant advances in AI.
Yet even as generative AI remains the talk of the tech world, perspectives are maturing as organizations shift their focus from experimentation to real-world initiatives. This year’s trends reflect a deepening sophistication and caution in AI development and deployment strategies, with an eye toward ethics, safety and the evolving regulatory landscape.
Here are the ten most important AI and machine learning trends for 2024.
1. Multimodal AI
Multimodal AI models can process text, images and audio together, emulating the human ability to integrate different sensory inputs.
“The interfaces of the world are multimodal,” Mark Chen, OpenAI’s head of frontiers research, told the audience at EmTech MIT in November 2023. “We want our models to see what we see and hear what we hear, and we want them to also generate content that appeals to more than one of our senses.”
OpenAI’s GPT-4, for example, can handle visual as well as auditory input. In his session, Chen suggested taking photos of the inside of a fridge and asking ChatGPT to propose a recipe using the ingredients pictured. Making the request in ChatGPT’s voice mode brings audio into the interaction as well.
Mark Chen, OpenAI frontiers research leader, addresses EmTech MIT attendees.
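As a rough sketch of what such a multimodal request looks like in code, the snippet below sends a fridge photo alongside a text prompt using OpenAI’s Python SDK. The model name, image URL and parameters are illustrative and may vary across SDK versions; treat this as a sketch rather than a definitive recipe.

```python
# Sketch: a multimodal (text + image) request with OpenAI's Python SDK.
# Model name and image URL are placeholders; check current API docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # a vision-capable GPT-4 variant
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Here's a photo of my fridge. Suggest a dish I could cook."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/fridge.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)
```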
Although most generative AI initiatives today are text-based, “the real power of these capabilities is going to be when you can marry up text and conversation with images and video, cross-pollinate all three of those, and apply those to a variety of businesses,” said Matt Barrington, emerging technologies leader at EY Americas.
Real-world applications of multimodal AI are diverse and growing. In healthcare, for instance, multimodal models can analyze medical images in light of patient history and genetic information to improve diagnosis. At the job-function level, multimodal models can give workers without design or coding backgrounds basic design and coding capabilities.
“I can’t draw to save my life,” Barrington added. “Now I can. I’m good at language, so I can tap into [image creation] and have AI create some of my concepts that I couldn’t draw.”
Multimodal capabilities may also improve models by providing additional data to learn from. “As our models get better and better at modeling language and start to hit the limits of what they can learn from language, we want to provide the models with raw inputs from the world so that they can perceive the world on their own and draw their own inferences from things like video or audio data,” said Chen.
2. Agentic AI
Agentic AI marks a significant shift from reactive to proactive AI. AI agents are advanced systems that exhibit autonomy, proactivity and the ability to act independently. Whereas traditional AI systems respond to user inputs and follow predetermined programming, AI agents are designed to understand their environment, set goals and act to achieve them without direct human intervention.
In environmental monitoring, for example, an agent could be trained to collect data, analyze patterns and take preventive action, such as flagging the early signs of a forest fire. A financial AI agent, likewise, could actively manage an investment portfolio, adapting to changing market conditions in real time.
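To make the contrast with reactive systems concrete, here is a minimal sketch of the sense-decide-act loop behind the forest fire example. All of the helpers (read_sensors, detect_fire_risk, dispatch_alert) are hypothetical stand-ins for whatever perception and action interfaces a real deployment would provide.

```python
import time

# Skeleton of a proactive agent loop for the forest fire example.
# read_sensors, detect_fire_risk and dispatch_alert are hypothetical
# stand-ins for real perception and action interfaces.

def read_sensors() -> dict:
    """Gather environmental data (temperature, humidity, smoke levels)."""
    ...

def detect_fire_risk(readings: dict) -> float:
    """Score the current fire risk from 0.0 to 1.0."""
    ...

def dispatch_alert(readings: dict) -> None:
    """Take a preventive action, e.g. notify a ranger station."""
    ...

RISK_THRESHOLD = 0.8

def agent_loop(poll_seconds: int = 60) -> None:
    # Unlike a chatbot, the agent acts on its own schedule,
    # not in response to a user prompt.
    while True:
        readings = read_sensors()
        if detect_fire_risk(readings) >= RISK_THRESHOLD:
            dispatch_alert(readings)
        time.sleep(poll_seconds)
```

In a production agent, an LLM or other model would typically sit behind detect_fire_risk, and the action step might plan multi-step responses rather than fire a single alert.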
“2023 was the year of being able to chat with an AI,” Peter Norvig, a fellow at Stanford’s Human-Centered AI Institute, wrote in a blog post. “In 2024, agents will do things for you. Schedule travel, link to other services.”
Combining agentic and multimodal AI could also open up new possibilities. In his presentation, Chen demonstrated an application that identifies the contents of an image. Building such an application once meant training and deploying a purpose-built image recognition model; with multimodal, agentic models, it can be done through natural language prompting alone.
“I really think that multimodal together with GPTs will open up the no-code development of computer vision applications, just in the same way that prompting opened up the no-code development of a lot of text-based applications,” said Chen.
3. Open source AI
Language models and other sophisticated generative AI systems are expensive to build, requiring massive amounts of compute and data. Open source, by contrast, lets developers build on one another’s work, cutting costs and broadening access to AI. Because it is freely available, open source AI enables businesses and researchers alike to contribute to and build on existing code.
GitHub data from the past year shows tremendous growth in developer engagement with AI, particularly generative AI. Generative AI projects such as Stable Diffusion and AutoGPT drew thousands of first-time contributors in 2023, placing them among the code hosting platform’s top 10 projects.
Early in the year, open source generative models were relatively scarce and typically lagged behind proprietary options such as ChatGPT. But the landscape broadened over the course of 2023 to include powerful open source contenders such as Meta’s Llama 2 and Mistral AI’s Mixtral models. That could shift the AI landscape in 2024 by giving smaller, less resourced organizations access to sophisticated AI models and tools.
“It gives everyone easy, fairly democratized access, and it’s great for experimentation and exploration,” Barrington said.
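As a sense of how low the barrier to entry has become, the sketch below loads an openly licensed instruct model through Hugging Face’s transformers library. The checkpoint name is illustrative; some open models, such as Llama 2, are gated behind a license agreement, and models of this size still need substantial GPU memory.

```python
# Sketch: running an open source chat model with Hugging Face transformers.
# The checkpoint is illustrative; some models are gated and require
# accepting a license, and large models need significant GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires the accelerate package
)

prompt = "[INST] Summarize the benefits of open source AI. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```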
Open source development also promotes transparency and ethical development, since more eyes on the code means more chances to catch biases, bugs and security flaws. But researchers worry that open source AI could be misused, for instance to spread misinformation and other harmful content. And building and maintaining open source is difficult even for conventional software, let alone complex, compute-intensive AI models.
4. Retrieval-augmented generation
Despite their growing popularity in 2023, generative AI tools still suffer from hallucinations: plausible-sounding but incorrect answers to users’ queries. That flaw has been a roadblock to enterprise adoption, since hallucinations in business-critical or customer-facing scenarios could be catastrophic. Retrieval-augmented generation (RAG) has emerged as a technique for reducing hallucinations, with potentially profound implications for enterprise AI adoption.
RAG blends text generation with information retrieval to improve the accuracy and relevance of AI-generated content. It gives LLMs access to external information, helping them produce more accurate and contextually aware responses. And because the model no longer needs to store all of that knowledge internally, it can be smaller, which increases speed and lowers costs.
“You can use RAG to go gather a ton of unstructured information, documents, etc., [and] feed it into a model without having to fine-tune or custom-train a model,” added Barrington.
These capabilities are appealing for enterprise applications where up-to-date factual knowledge is crucial. Companies can use RAG with foundation models, for example, to build more efficient and informative chatbots and virtual assistants.
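In outline, a RAG pipeline has three steps: embed the document collection once, retrieve the passages most similar to the user’s question, and fold them into the prompt. The sketch below shows that flow, with hypothetical embed() and generate() stubs standing in for whichever embedding model and LLM a team actually uses.

```python
import numpy as np

# Outline of retrieval-augmented generation. embed() and generate() are
# hypothetical stubs for a real embedding model and LLM API.

def embed(text: str) -> np.ndarray:
    """Return a vector representation of the text."""
    ...

def generate(prompt: str) -> str:
    """Call the underlying LLM with the assembled prompt."""
    ...

def build_index(documents: list[str]) -> np.ndarray:
    # One-time step: embed every document in the knowledge base.
    return np.stack([embed(doc) for doc in documents])

def answer(question: str, documents: list[str],
           index: np.ndarray, k: int = 3) -> str:
    # Retrieve: rank documents by cosine similarity to the question.
    q = embed(question)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    top_docs = [documents[i] for i in np.argsort(scores)[::-1][:k]]

    # Augment: ground the model in retrieved passages to curb hallucination.
    context = "\n\n".join(top_docs)
    prompt = ("Answer using only the context below.\n\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return generate(prompt)
```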
5. Customized enterprise generative AI models
Massive, general-purpose tools such as Midjourney and ChatGPT have attracted the most attention among generative AI users. But for business use cases, smaller, narrowly focused models may prove to have the greatest staying power, driven by growing demand for AI systems that can meet specialized requirements.
Building a new model from scratch is possible, but many organizations lack the resources to do so. Instead, most businesses that customize generative AI adapt an existing model, tweaking its architecture or fine-tuning it on a domain-specific data set. This can be cheaper than building a new model from the ground up or relying on API calls to a public LLM.
“Calls to GPT-4 as an API, for example, are very expensive, both in cost and latency — how long it takes to return a result,” said Shane Luke, Workday’s vice president of AI and machine learning. “We are trying to have the same capabilities, but it’s highly focused and particular. So it may be a smaller, more manageable model.”
A fundamental benefit of customized generative AI models is their flexibility to serve specific markets and user demands. Generative AI solutions may be customized for customer service, supply chain management, and document inspection. This is particularly important in highly specialized fields like healthcare, finance, and law.
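As one common recipe for the fine-tuning approach described above, parameter-efficient techniques such as LoRA adapt an open base model to a domain data set without retraining all of its weights. The sketch below uses the Hugging Face peft library; the base checkpoint, target modules and hyperparameters are placeholders, not a production configuration.

```python
# Sketch: parameter-efficient fine-tuning (LoRA) of an open base model
# on domain-specific text. Checkpoint and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # illustrative base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# LoRA trains small low-rank adapter matrices instead of the full model,
# one reason customization can cost far less than training from scratch.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here, train with a standard loop or transformers.Trainer on the
# domain data set, then save only the small adapter weights.
```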
For many business use cases, the largest LLMs are overkill. ChatGPT may be the best consumer-facing chatbot for handling any question that comes its way, but “it’s not the state of the art for smaller enterprise applications,” Luke said.
As AI developers’ capabilities converge, Barrington expects businesses to explore a wider range of models in the coming year. “We’re expecting, over the next year or two, for there to be a much higher degree of parity across the models — and that’s a good thing,” he said.
Luke has seen a similar dynamic at Workday, which makes AI services available to employees for internal testing. Staff started out largely using OpenAI’s services, he said, but now use a mix that also includes models from Google and AWS.
Customizing a model rather than using a public tool also gives businesses greater control over their data, improving privacy and security. Luke cited the example of building models for Workday processes that handle sensitive personal data, such as disability status and health history. “Those aren’t things that we’re going to want to send out to a third party,” he said. “Our customers generally wouldn’t be comfortable with that.”
Because of these privacy and security advantages, stricter AI regulation in the coming years could push companies toward proprietary models, said Gillian Crossan, risk advisory partner and global technology sector leader at Deloitte.
“It’s going to encourage enterprises to focus more on private models that are proprietary, that are domain-specific, rather than focus on these large language models that are trained with data from all over the internet and everything that brings with it,” she said.
6. Need for AI and machine learning talent
Designing, training and testing a machine learning model is no easy feat, let alone deploying and maintaining one within a complex enterprise IT environment. As a result, demand for AI and machine learning talent is likely to surge through 2024 and beyond.
“The market is still really hot around talent,” Luke said. “It’s very easy to get a job in this space.”
As AI and machine learning become more deeply woven into business operations, there is a growing need for practitioners who can bridge theory and practice. That calls for MLOps (machine learning operations) skills: the ability to deploy, manage and maintain AI systems in real-world settings.
A recent O’Reilly report found that respondents’ organizations most needed AI programming, data analysis and analytics, and AI and machine learning operations skills for generative AI projects. Those skills, however, are in short supply. “That’s going to be one of the challenges around AI — to be able to have the talent readily available,” Crossan said.
And it won’t just be tech companies that need this talent in 2024. With IT and data now pervasive across business functions and AI initiatives proliferating, building internal AI and machine learning capabilities is shaping up to be the next stage of digital transformation.
Crossan also stressed the importance of diversity in AI initiatives at every level, from the technical teams building models up to the board. “One of the big issues with AI and the public models is the amount of bias that exists in the training data,” she said. “And unless you have that diverse team within your organization that is challenging the results and challenging what you see, you are going to potentially end up in a worse place than you were before AI.”
7. Shadow AI
As employees across job functions become interested in generative AI, organizations are facing the issue of shadow AI: use of AI within an organization without explicit approval or oversight from the IT department. This trend is becoming increasingly prevalent as AI becomes more accessible, enabling even nontechnical workers to use it independently.
Shadow AI typically arises when employees need quick solutions to a problem or want to explore new technology faster than official channels allow. This is especially common for easy-to-use AI chatbots, which employees can try out in their web browsers with little difficulty — without going through IT review and approval processes.
On the plus side, exploring ways to use these emerging technologies evinces a proactive, innovative spirit. But it also carries risk, since end users often lack relevant information on security, data privacy and compliance. For example, a user might feed trade secrets into a public-facing LLM without realizing that doing so exposes that sensitive information to third parties.
“Once something gets out into these public models, you cannot pull it back,” Barrington said. “So there’s a bit of a fear factor and risk angle that’s appropriate for most enterprises, regardless of sector, to think through.”
Shadow AI is just one facet of the larger phenomenon of shadow IT.
In 2024, organizations will need to take steps to manage shadow AI through governance frameworks that balance supporting innovation with protecting privacy and security. This could include setting clear acceptable AI use policies and providing approved platforms, as well as encouraging collaboration between IT and business leaders to understand how various departments want to use AI.
“The reality is, everybody’s using it,” Barrington said, in reference to recent EY research finding that 90% of respondents used AI at work. “Whether you like it or not, your people are using it today, so you should figure out how to align them to ethical and responsible use of it.”
8. A generative AI reality check
As organizations progress from the initial excitement surrounding generative AI to actual adoption and integration, they’re likely to face a reality check in 2024 — a phase often referred to as the “trough of disillusionment” in the Gartner Hype Cycle.
“We’re definitely seeing a rapid shift from what we’ve been calling this experimentation phase into [asking], ‘How do I run this at scale across my enterprise?'” Barrington said.
As early enthusiasm begins to wane, organizations are confronting generative AI’s limitations, such as output quality, security and ethics concerns, and integration difficulties with existing systems and workflows. The complexity of implementing and scaling AI in a business environment is often underestimated, and tasks such as ensuring data quality, training models and maintaining AI systems in production can be more challenging than initially anticipated.
“It’s actually not very easy to build a generative AI application and put it into production in a real product setting,” Luke said.
The silver lining is that these growing pains, while unpleasant in the short term, could result in a healthier, more tempered outlook in the long run. Moving past this phase will require setting realistic expectations for AI and developing a more nuanced understanding of what AI can and can’t do. AI projects should be clearly tied to business goals and practical use cases, with a clear plan in place for measuring outcomes.
“If you have very loose use cases that are not clearly defined, that’s probably what’s going to hold you up the most,” Crossan said.
9. Increased attention to AI ethics and security risks
The proliferation of deepfakes and sophisticated AI-generated content is raising alarms about the potential for misinformation and manipulation in media and politics, as well as identity theft and other types of fraud. AI can also enhance the efficacy of ransomware and phishing attacks, making them more convincing, more adaptable and harder to detect.
Although efforts are underway to develop technologies for detecting AI-generated content, doing so remains challenging. Current AI watermarking techniques are relatively easy to circumvent, and existing AI detection software can be prone to false positives.
The increasing ubiquity of AI systems also highlights the importance of ensuring that they are transparent and fair — for example, by carefully vetting training data and algorithms for bias. Crossan emphasized that these ethics and compliance considerations should be interwoven throughout the process of developing an AI strategy.
“You have to be thinking about, as an enterprise … implementing AI, what are the controls that you’re going to need?” she said. “And that starts to help you plan a bit for the regulation so that you’re doing it together. You’re not doing all of this experimentation with AI and then [realizing], ‘Oh, now we need to think about the controls.’ You do it at the same time.”
Safety and ethics can also be another reason to look at smaller, more narrowly tailored models, Luke pointed out. “These smaller, tuned, domain-specific models are just far less capable than the really big ones — and we want that,” he said. “They’re less likely to be able to output something that you don’t want because they’re just not capable of as many things.”
10. Evolving AI regulation
Unsurprisingly, given these ethics and security concerns, 2024 is shaping up to be a pivotal year for AI regulation, with laws, policies and industry frameworks rapidly evolving in the U.S. and globally. Organizations will need to stay informed and adaptable in the coming year, as shifting compliance requirements could have significant implications for global operations and AI development strategies.
The EU’s AI Act, on which members of the EU’s Parliament and Council recently reached a provisional agreement, represents the world’s first comprehensive AI law. If adopted, it would ban certain uses of AI, impose obligations for developers of high-risk AI systems and require transparency from companies using generative AI, with noncompliance potentially resulting in multimillion-dollar fines. And it’s not just new legislation that could have an effect in 2024.
“Interestingly enough, the regulatory issue that I see could have the biggest impact is GDPR — good old-fashioned GDPR — because of the need for rectification and erasure, the right to be forgotten, with public large language models,” Crossan said. “How do you control that when they’re learning from massive amounts of data, and how can you assure that you’ve been forgotten?”
Together with the GDPR, the AI Act could position the EU as a global AI regulator, potentially influencing AI use and development standards worldwide. “They’re certainly ahead of where we are in the U.S. from an AI regulatory perspective,” Crossan said.
The U.S. doesn’t yet have comprehensive federal legislation comparable to the EU’s AI Act, but experts encourage organizations not to wait to think about compliance until formal requirements are in force. At EY, for example, “we’re engaging with our clients to get ahead of it,” Barrington said. Otherwise, businesses could find themselves playing catch-up when regulations do come into effect.
Beyond the ripple effects of European policy, recent activity in the U.S. executive branch also suggests how AI regulation could play out stateside. President Joe Biden’s October executive order implemented new mandates, such as requiring AI developers to share safety test results with the U.S. government and imposing restrictions to protect against the risks of AI in engineering dangerous biological materials. Various federal agencies have also issued guidance targeting specific sectors, such as NIST’s AI Risk Management Framework and the Federal Trade Commission’s statement warning businesses against making false claims about their products’ AI use.
Further complicating matters, 2024 is an election year in the U.S., and the current slate of presidential candidates shows a wide range of positions on tech policy questions. A new administration could theoretically change the executive branch’s approach to AI oversight by reversing or revising Biden’s executive order and nonbinding agency guidance.