AI at Meta’s Post


Today is a good day for open science. As part of our continued commitment to the growth and development of an open ecosystem, today at Meta FAIR we’re announcing four new publicly available AI models and additional research artifacts to inspire innovation in the community and help advance AI in a responsible way. More in the video from Joelle Pineau. Details and access to everything released by FAIR today ➡️ https://go.fb.me/ba73w2

What we’re releasing:
• Meta Chameleon 7B & 34B safety-tuned models that support mixed-modal input and text-only outputs.
• Pretrained code completion models using Meta Multi-Token Prediction.
• Meta JASCO text-to-music models capable of accepting various conditioning inputs for greater controllability — paper available today with a pretrained model coming soon.
• Meta AudioSeal, an audio watermarking model that we believe is the first designed specifically for the localized detection of AI-generated speech, available under a commercial license.
• More research, data and code to better support geographical and cultural diversity for AI systems.

Robert San Diego

CEO/Founder, Binary Ideas: AI Chatbot, Your New Best Employee: Never Sleeps, Never Asks for Raises (Just Results) - Chatbots for SMBs

3w

This is fantastic news! 🎉 The release of these AI models and research artifacts by Meta FAIR is a significant step towards fostering an open and collaborative AI ecosystem. I'm particularly intrigued by the Meta JASCO text-to-music models and the Meta AudioSeal for detecting AI-generated speech. These innovations could have a profound impact on various industries. Kudos to the team for prioritizing responsible AI development and supporting geographical and cultural diversity! Can't wait to dive into the details. 🚀

Bryan Brownlie

Emerald Strategy Group: Strategic Advisory - M&A - Transaction & Project Financing - Due Diligence - Private Equity - Renewable Energy

3w

The text-to-music generation here is phenomenal. I don't really have a use case for it, but it was great to listen to. The most exciting thing here is the multi-modal capability for image and text.

This is a fantastic step forward for the AI community! We are really excited about the release of these new open-source AI models, and we can't wait to implement them to build incredible solutions. Kudos to Meta FAIR for their continued commitment to an open ecosystem.

Shubham Saboo

AI Product Management at Tenstorrent | 3x Author of books on GPT-3 & Neural Search in production | Microsoft MVP | X (50k+)

3w

Meta is killing it in the open-source LLM world. The irony is, who could have imagined that a few years ago? But honestly, I love the Llama series of models.

Pablo Lecavalier-Ruiz, CPA

AI Consultant | Virtual Bookkeeping | Transforming Accounting | QuickBooks Online Specialist | MLOps

2w

One to watch. I wonder what the motivation behind this open-source effort is, because we aren't in a community paradigm like Linux's. It's as if Facebook were trying to boost open source in order to outpace the AI companies that are focusing their efforts on regulation to create legal barriers to entry.

James Bentley

AI and Strategy Director @ Awin (Axel Springer)

3w

I listened to an interview with Joelle Pineau on the How AI Happens podcast; I highly recommend it: https://www.howaihappens.com/episodes/meta-vp-of-ai-research-joelle-pineau

Dhiraj Patra

Software Architect | Lead | Engineer for AI, ML, Cloud, IoT

3w

We should work on distributed fine-tuning and training of LLMs and foundation models, so that we can develop and run customized models across networked devices connected over the internet. That way we can reduce the need for big GPUs and systems.

