Introducing Meta Llama 3: the next generation of our state-of-the-art open source large language model — and the most capable openly available LLM to date. These next-generation models demonstrate SOTA performance on a wide range of industry benchmarks and offer new capabilities such as improved reasoning.

Details in the full announcement ➡️ https://go.fb.me/a24u0h
Download the models ➡️ https://go.fb.me/q8yhmh
Experience Llama 3 with Meta AI ➡️ https://meta.ai

Llama 3 8B & 70B deliver a major leap over Llama 2 and establish a new SOTA for models of their sizes. While we’re releasing these first two models today, we’re working to release even more for Llama 3, including multiple models with capabilities such as multimodality, multilinguality and longer context windows. Our largest models are over 400B parameters, and while they’re still in active development, we’re very excited about how they’re trending.

Across the stack, we want to kickstart the next wave of innovation in AI. We believe these are the best open source models of their class, period — we can’t wait to see what you build and look forward to your feedback.
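If you want to try the models right away, here is a minimal sketch of loading the 8B Instruct variant with the Hugging Face transformers library. It assumes you have accepted the license for the gated meta-llama repo and have transformers >= 4.40 installed; the prompt is just an example.

```python
# Minimal sketch: load Llama 3 8B Instruct and generate a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # gated repo; accept the license first

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly 16 GB of weights in bf16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what Llama 3 is in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```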
AI at Meta
Research Services
Menlo Park, California · 807,484 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas, and to enable the community to build safe and responsible solutions to address some of the world’s greatest challenges.
- Website: https://ai.meta.com/
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Menlo Park, California
- Specialties: research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
Last week we released Meta Chameleon: a new mixed-modal research model from Meta FAIR.

Get the models ➡️ https://go.fb.me/hrkkgf
Research paper ➡️ https://go.fb.me/u75dq8

The new safety-tuned 7B and 34B models we’ve released for research use can take combinations of text and images as input and produce text outputs. Chameleon is one of the first publicly released models to use a single unified architecture for both encoding and decoding, via an early fusion approach. We’ve shared details on the full modeling approach and training in the new research paper, and we hope this release will help further democratize access to foundational mixed-modal models and empower others to streamline and scale their work in this space.
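To make "early fusion" concrete, here is a hypothetical sketch, not the released Chameleon code: images are quantized into discrete codes and spliced into the text token stream, so a single transformer models both modalities from the very first layer. The tokenizer objects and marker ids are assumed interfaces for this illustration.

```python
# Hypothetical early-fusion sequence construction (illustrative only).
def build_mixed_modal_sequence(segments, text_tokenizer, image_tokenizer,
                               boi_id, eoi_id):
    """segments: ordered list of ("text", str) or ("image", pixels) pieces.
    text_tokenizer/image_tokenizer and the begin/end-of-image marker ids
    (boi_id/eoi_id) are assumptions for this sketch."""
    tokens = []
    for kind, content in segments:
        if kind == "text":
            tokens.extend(text_tokenizer.encode(content))
        else:
            # Image codes are wrapped in marker tokens so the model can tell
            # where image content starts and ends inside the unified stream.
            tokens.append(boi_id)
            tokens.extend(image_tokenizer.encode(content))
            tokens.append(eoi_id)
    return tokens
```

The design point is that there is no separate image encoder bolted onto a text model: one vocabulary and one sequence carry everything.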
📣 Just announced in Paris: Together with Hugging Face & Scaleway, we're launching a new accelerator for AI startups in Europe! Accepting applications until August 16 ➡️ https://go.fb.me/1ixbz0

Based at STATION F in Paris, the aim is to accelerate innovation, drive business growth and strengthen the European tech ecosystem. The selected startups will benefit from technical mentoring by research teams at Meta FAIR, access to Hugging Face’s platform and tools, and access to Scaleway’s computing power to support their work on open source AI.
Six papers to add to your reading list from AI researchers at Meta at #CVPR2024:
• PlatoNeRF: Discerning Reality in Plato's Cave from Single-View Two Bounce Time of Flight ➡️ https://go.fb.me/tju5fo
• Nymeria: A Massive Collection of Multimodal Egocentric Daily Motion in the Wild ➡️ https://go.fb.me/0wcu84
• Relightable Gaussian Codec Avatars ➡️ https://go.fb.me/gdtkjm
• URHand: Universal Relightable Hands ➡️ https://go.fb.me/1lmv7o
• RoHM: Robust Human Motion Reconstruction via Diffusion ➡️ https://go.fb.me/ogm92y
• HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces ➡️ https://go.fb.me/tzik3j
With the release of Meta Llama 2, we developed a technique called Ghost Attention (GAtt), which uses RLHF to fine-tune model responses so that they keep the initial instructions in mind, resulting in AI models that are much better at retaining initial instructions across multi-turn conversations. This approach caught the attention of SAIF CHECK, who went on to build a model evaluation system using Meta Llama 3 to help companies navigate the challenges of risk and compliance with local laws wherever their technology is used.
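For context, here is a rough sketch of the GAtt data-construction idea as described in the Llama 2 paper: an instruction that should persist across turns is attached to every user message while sampling responses, but kept only in the first user turn of the final fine-tuning sample. The sample_fn interface and dialogue structure are assumptions for illustration.

```python
# Rough sketch of GAtt-style training-sample construction (illustrative).
def build_gatt_sample(instruction, dialogue, sample_fn):
    """instruction: a directive that should persist across turns (e.g. a persona).
    dialogue: list of user messages.
    sample_fn: assumed callable that samples an assistant reply given a
    list of (role, text) turns."""
    # Step 1: attach the instruction to EVERY user turn while sampling, so the
    # sampled assistant replies actually respect it throughout the dialogue.
    context, turns = [], []
    for user_msg in dialogue:
        augmented = instruction + "\n" + user_msg
        reply = sample_fn(context + [("user", augmented)])
        context += [("user", augmented), ("assistant", reply)]
        turns.append((user_msg, reply))

    # Step 2: build the fine-tuning sample with the instruction kept only in
    # the first user turn, so the model must learn to carry it forward itself.
    # (The paper also zeroes the training loss on tokens from earlier turns.)
    sample = []
    for i, (user_msg, reply) in enumerate(turns):
        user_text = instruction + "\n" + user_msg if i == 0 else user_msg
        sample += [("user", user_text), ("assistant", reply)]
    return sample
```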
Meta FAIR researchers have introduced JASCO, a generative text-to-music model. It differs from other text-to-music approaches in its ability to accept various conditioning inputs, such as chords or beats, for improved controllability over the outputs it creates. You can find more details in the research paper, available now, and we plan to release inference code and a pretrained model under a CC-BY-NC license soon.

Details and samples ➡️ https://go.fb.me/tkp5fz
We're in Seattle for #CVPR2024 this week! Stop by our booth to chat with the teams behind some of our recent work or learn more about our latest research.

Things to know:
📍 Find us @ Booth 1423
📝 We're sharing 60+ publications & supporting 20+ panels and workshops.
💻 Stop by our booth to see showcases of Ego-Exo4D, V-JEPA, Segment Anything, SceneScript, Relightable Gaussian Codec Avatars, OpenEQA and more.
Today is a good day for open science. As part of our continued commitment to the growth and development of an open ecosystem, today at Meta FAIR we’re announcing four new publicly available AI models and additional research artifacts to inspire innovation in the community and help advance AI in a responsible way. More in the video from Joelle Pineau.

Details and access to everything released by FAIR today ➡️ https://go.fb.me/ba73w2

What we’re releasing:
• Meta Chameleon: 7B & 34B safety-tuned models that support mixed-modal input and text-only outputs.
• Pretrained code completion models using Meta Multi-Token Prediction (a minimal sketch of the idea follows below).
• Meta JASCO: text-to-music models capable of accepting various conditioning inputs for greater controllability — paper available today with a pretrained model coming soon.
• Meta AudioSeal: an audio watermarking model that we believe is the first designed specifically for the localized detection of AI-generated speech, available under a commercial license.
• More research, data and code to better support geographical and cultural diversity for AI systems.
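As promised above, here is a minimal sketch of the core multi-token prediction idea from the accompanying research; it is not Meta's released code, and the dimensions are illustrative assumptions. A shared transformer trunk feeds several independent output heads, where head i is trained to predict the token i+1 positions ahead.

```python
# Minimal sketch of multi-token prediction heads over a shared trunk.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenHeads(nn.Module):
    def __init__(self, d_model=512, vocab_size=32000, n_future=4):
        super().__init__()
        # One linear head per future offset, all sharing the trunk's features.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def loss(self, trunk_hidden, tokens):
        """trunk_hidden: (batch, seq, d_model) features from the shared trunk.
        tokens: (batch, seq) token ids; head i gets targets shifted i+1 ahead."""
        total = trunk_hidden.new_zeros(())
        for i, head in enumerate(self.heads):
            offset = i + 1
            logits = head(trunk_hidden[:, :-offset])  # predictions for position t+offset
            targets = tokens[:, offset:]
            total = total + F.cross_entropy(
                logits.reshape(-1, logits.size(-1)), targets.reshape(-1)
            )
        return total / len(self.heads)
```

At inference, the extra heads can be dropped for standard next-token decoding or used to draft several tokens at once.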
Are you building something cool with Meta Llama models? Already launched something that's having an impact in your community? We'd love to hear about it! Share your Llama story ➡️ https://go.fb.me/s8ojyi
Together with Cerebral Valley, we recently hosted the first-ever Meta Llama 3 Hackathon. After receiving over 1,200 applications, we brought together 350+ attendees who built some incredible projects in just 24 hours. See the full recap and top projects here ➡️ https://go.fb.me/6dg1ib