The future of AI: Built with Llama

Large Language Model
December 19, 2024 • 8 minute read

Takeaways

• Llama has quickly become the most adopted model, with more than 650 million downloads of Llama and its derivatives, twice as many downloads as we had three months ago.
• Meta AI is on track to be the world's most used AI assistant by the end of the year, with nearly 600 million monthly active users.
• Demand for Llama continues to surge around the world, with license approvals more than doubling in the past six months.

An incredible year for Llama

The growth of Llama, our open large language model, was exponential this year, thanks to a rapid drumbeat of innovation and the open approach we take to sharing updates with the AI community. We started the year by introducing Llama 3, the next generation of our state-of-the-art open large language model. We followed that in July with Llama 3.1, which included the release of 405B, the first frontier-level open AI model. Keeping up the pace of innovation, we announced Llama 3.2 at Connect 2024, sharing our first ever multimodal models (small and medium-sized vision models) as well as lightweight, text-only models that fit onto edge and mobile devices. And to close out the year, we released Llama 3.3 70B, a text-only model that offers performance similar to that of Llama 3.1 405B at a fraction of the serving cost.

As Meta Founder & CEO Mark Zuckerberg shared, Llama has quickly become the most adopted model, with more than 650 million downloads of Llama and its derivatives, twice as many downloads as we had three months ago. To put that in perspective, Llama models have now been downloaded an average of one million times a day since our first release in February 2023. Meeting the growing demand for Llama would not be possible without our roster of partners across the hardware and software ecosystem, including Amazon Web Services (AWS), AMD, Microsoft Azure, Databricks, Dell, Google Cloud, Groq, NVIDIA, IBM watsonx, Oracle Cloud, Scale AI, Snowflake, and more. This growing set of partners represents the best of the AI technology ecosystem and ensures Llama is optimized to run in virtually any environment and in any form, including on device and on premises, as well as through managed service APIs from our cloud partners. Llama usage keeps climbing as well, with monthly token volume across key cloud partners growing over 50% month over month in September.

Outside the US, Llama became a global phenomenon this year, with strong appetite for our models among developers around the world and an accelerated pace of adoption following the launch of the Llama 3 model collection. Llama license approvals have more than doubled in the past six months overall, with notable growth in emerging markets and downloads surging across Latin America, the Asia-Pacific region, and Europe. Beyond the high demand for Llama itself, we've been excited to see the success our partners have had this year iterating on our work. The open source community has published more than 85,000 Llama derivatives on Hugging Face alone, more than a 5x increase since the start of the year.
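For developers, picking up one of these checkpoints takes only a few lines of code. The sketch below uses the Hugging Face transformers library; the model ID is an illustrative choice, and Llama repositories on Hugging Face are gated, so you first need to accept the license and authenticate with an access token.

```python
# Minimal sketch: load a Llama checkpoint from Hugging Face and generate text.
# The model ID is illustrative; Llama repos are gated, so accept the license
# on Hugging Face and run `huggingface-cli login` before this will work.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # small enough for a single GPU

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Instruction-tuned Llama models expect prompts rendered with the chat template.
messages = [{"role": "user", "content": "Explain what an open-weights model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```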
This engagement and these contributions from the community have helped inform product decisions at Meta, shaping our next wave of foundation models and the features we release within Meta AI, and ultimately flowing back to the community.

Growing adoption by enterprises and governments

As more people turn to our open models, we've released new features that make building on Llama a more standardized experience. This year, we developed Llama Stack, an interface of canonical toolchain components for customizing Llama models and building agentic applications. We believe that offering simple, standardized tooling for building with Llama will only accelerate the incredible adoption we've already witnessed across sectors.

Building on our track record of partnering to advance open AI innovation, we worked with IBM to offer Llama as part of its watsonx.ai model catalog, a next-generation enterprise studio where AI builders worldwide can train, validate, tune, and deploy AI models. Through this partnership, Llama is already being used by local governments, major telecommunications companies, and even a professional soccer team looking to identify potential new recruits.

Block is integrating Llama into the customer support systems behind Cash App. Because Llama is open source, the company can rapidly experiment and customize the model for each of its use cases while preserving the privacy of its customer data.

Accenture turned to Llama in 2024 when it received a request from a leading intergovernmental body to create a chatbot that would be the organization's first large-scale, public-facing generative AI application. Built with Llama 3.1, the chatbot runs on AWS and employs a variety of tools and services during customization and inference to ensure scalability and robustness.

Spotify uses Llama to deliver contextualized recommendations that boost artist discovery and create an even richer user experience. By combining Llama's broad world knowledge and versatility with Spotify's deep expertise in audio content, Spotify has created explanations that offer users personalized insights into their recommended content. The team has also built a way for subscribers to receive personalized narratives about recommended new releases, along with culturally relevant commentary from its English- and Spanish-speaking AI DJs.

LinkedIn recently shared Liger-Kernel, an open source library designed to enable more efficient training of LLMs. Building on this scalable infrastructure, LinkedIn explored a variety of LLMs to fine-tune for tasks specific to its social network. For some applications, it found that Llama matched or exceeded the quality of state-of-the-art commercial foundation models, at significantly lower cost and latency.

As open models continue to improve at unprecedented speed, in some cases already exceeding closed models on certain capabilities, 2024 was the year many enterprise users made the switch. We saw momentum on AWS as customers seeking choice, customization, and cost efficiency turned to Llama to build, deploy, and scale generative AI applications. In one case, Arcee AI enabled its customers to fine-tune Llama models on their own data, resulting in a 47% reduction in total cost of ownership compared with closed LLMs.
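Open weights are what make this kind of customization practical: adapters can be trained on a single GPU, and the data never has to leave the customer's infrastructure. The sketch below shows parameter-efficient (LoRA) fine-tuning with the Hugging Face transformers, peft, and datasets libraries; it is not Arcee AI's actual pipeline, and the model ID, dataset, and hyperparameters are placeholder assumptions.

```python
# Illustrative LoRA fine-tuning of a Llama checkpoint with transformers + peft.
# Not any partner's real pipeline: model, dataset, and hyperparameters are
# placeholders; swap in your own corpus and tune from there.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "meta-llama/Llama-3.2-1B"  # placeholder; gated on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# LoRA trains small low-rank adapters on the attention projections instead of
# all of the base weights, which is what keeps customization cheap.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Placeholder data: any dataset with a "text" column works the same way.
data = load_dataset("imdb", split="train[:1000]")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama-lora-out",
                           per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    # mlm=False makes the collator build causal-LM labels from the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama-lora-out")  # saves only the adapters, a few MB
```

Because only the adapter weights are trained and stored, many customized variants of the same base model can be kept and served side by side at low cost.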
Beyond enterprises, demand for Llama from governments around the world also grew. This year, we worked to make Llama available for use by the US government. Because large language models can process vast amounts of data, reason, and generate usable insights, they are well positioned to drive efficiency and help government workers improve the delivery of public services. In India, the Ministry of Skill Development and Entrepreneurship is building on Llama with the goal of improving learning outcomes and student support, while in Argentina, the government recently announced it will use a WhatsApp chatbot built with Llama to streamline the delivery of national public services.

The world's most accessible AI assistant and a new class of social experiences

The rapid drumbeat of model innovation we've delivered over the past year is also having a ripple effect in our products. Built with Llama, Meta AI is on track to be the most used AI assistant in the world by the end of 2024, with almost 600 million monthly active users. This year, we expanded access to Meta AI to more countries and new languages across WhatsApp, Instagram, Facebook, Messenger, and the web. By the end of this year, we anticipate that Meta AI will be available in 43 countries and a dozen languages, and we look forward to bringing the assistant to more people and places.

On WhatsApp, we continue to see promising signs of retention and engagement, which have coincided with India and Mexico becoming two of our largest markets for Meta AI usage. There are also signs that Meta AI is helping people use our apps in new ways, whether they're sharing images with Meta AI to learn about the world around them or using the assistant as a coach to help them pursue their goals.

In July, we launched AI Studio, which has become the go-to destination for creators to make AIs that help them connect with their audiences in fun and useful new ways. Since launch, we've seen hundreds of thousands of AIs created, offering cooking tips, memes, affirmations, and more. We recently expanded access to AI characters to more countries and languages, including India, Pakistan, Mexico, Ecuador, Peru, Colombia, Argentina, and Chile. In 2025, our goal is for AI Studio to be the world's leading destination for AI character creation.

On our hugely popular Ray-Ban Meta glasses, a custom Llama model enables Meta AI to help people get the information they need without ever having to pick up a smartphone. Last month, we announced that Meta AI is rolling out on Ray-Ban Meta glasses in France, Italy, Ireland, and Spain, giving more people the opportunity to get things done, feel inspired, and connect with the people and things they care about, right from their glasses.

Across our platforms, Llama is also helping businesses: our Advantage+ creative text generation tool produces ad text variations at scale, while additional models power Advantage+ creative image and video generation, helping businesses create eye-catching ads that reach the right audience. Many advertisers are seeing strong results heading into the holiday season. ObjectsHQ, a small business with an e-commerce platform for modern furniture, saw a 60% increase in return on ad spend when testing the text generation feature in Advantage+ creative campaigns.

2025 and the path ahead

As we look to 2025, the pace of innovation will only increase as we work to make Llama the industry standard for building on AI.
Llama 4 will have multiple releases, driving major advancements across the board and enabling a host of new product innovations in areas like speech and reasoning.

We believe AI experiences will increasingly move away from text and become voice-based as speech models grow more natural, conversational, and, most importantly, helpful. We introduced voice for Meta AI this past fall across our apps, and we have significant plans to advance these capabilities in the first half of next year to bring more utility and capability to our AI products for consumers across our apps and devices.

In October, we announced Meta Movie Gen, our breakthrough set of research models for AI video generation and editing. We see incredible new possibilities for bringing these experiences to our apps, lowering the barriers to entry and raising the ceiling on what's possible to create and edit with AI video.

We also see significant opportunities next year in agentic AI systems with advanced reasoning. We're testing business agents that can talk to customers, provide support, and facilitate commerce, and we're encouraged by the interest we're seeing on our own messaging platforms. These agentic systems will also benefit consumers as we build AI assistants that are more task-oriented and can do things on your behalf, moving from a virtual to a personal experience.

We're excited to carry this momentum into the new year. We'll continue to rapidly innovate and share Llama updates that enable more people to build with the most capable technology to date, along with a rapidly iterating and evolving set of products. All of this work supports our ultimate goal of building the future of human connection and the technology that makes it possible.

Written by Ahmad Al-Dahle, VP of GenAI