Byte Latent Transformer: Patches Scale Better Than Tokens

December 12, 2024

Abstract

We introduce the Byte Latent Transformer (BLT), a new byte-level LLM architecture that, for the first time, matches tokenization-based LLM performance at scale, with significant improvements in inference efficiency and robustness. BLT encodes bytes into dynamically sized patches, which serve as the primary units of computation. Patches are segmented dynamically based on the entropy of the next byte, allocating more compute and model capacity where increased data complexity demands it. We present the first FLOP-controlled scaling study of byte-level models up to 8B parameters with 4T training bytes. Our results demonstrate the feasibility of scaling models trained on raw bytes without a fixed vocabulary. Both training and inference efficiency improve due to dynamically selecting long patches when data is predictable, along with qualitative improvements in reasoning and long-tail generalization. Overall, for fixed inference costs, BLT shows significantly better scaling than tokenization-based models by simultaneously growing both patch and model size.

AUTHORS

Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, Srini Iyer

Publisher

arXiv

Research Topics

Natural Language Processing (NLP)
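To illustrate the entropy-based patching described in the abstract, here is a minimal sketch. It is not the released BLT implementation: the unigram_entropy_model function is a toy stand-in for the paper's trained byte-level entropy model, and the threshold and max_patch_len values are illustrative assumptions. The sketch only shows the core idea of extending a patch while predicted next-byte entropy stays low and cutting a new patch when it spikes.

```python
import math
from collections import Counter
from typing import Callable, List


def unigram_entropy_model(context: bytes) -> float:
    """Toy stand-in for a trained byte-level entropy model: estimate the
    next-byte entropy from the empirical byte distribution of the recent
    context window. BLT itself uses a small byte LM for this estimate."""
    window = context[-64:] or b"\x00"
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def segment_into_patches(
    data: bytes,
    entropy_fn: Callable[[bytes], float] = unigram_entropy_model,
    threshold: float = 2.0,      # illustrative entropy threshold (bits)
    max_patch_len: int = 16,     # illustrative cap on patch length
) -> List[bytes]:
    """Greedy entropy-based patching: keep extending the current patch while
    the estimated next-byte entropy stays below `threshold`; start a new
    patch (i.e., allocate another latent-transformer step) when it spikes
    or the patch grows too long."""
    patches, start = [], 0
    for i in range(1, len(data)):
        high_entropy = entropy_fn(data[:i]) > threshold
        too_long = (i - start) >= max_patch_len
        if high_entropy or too_long:
            patches.append(data[start:i])
            start = i
    patches.append(data[start:])
    return patches


if __name__ == "__main__":
    text = b"aaaaaaaaaaaa The Byte Latent Transformer groups predictable bytes."
    for patch in segment_into_patches(text):
        print(patch)
```

Running the sketch on the example string shows the intended behavior: the highly predictable run of repeated bytes is grouped into long patches, while the more varied natural-language suffix is split into shorter ones, so compute is concentrated where the data is harder to predict.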