FuriosaAI has been identified as a Sample Vendor in Gartner’s 2024 Emerging Tech: Adoption Trends for Energy-Efficient Semiconductors. 📋 Power consumption is a crucial issue for the AI industry. Simply put, the machines running GenAI models today use a truly astonishing amount of energy, and that’s neither sustainable nor workable for the businesses that have to pay those power bills. Read more: https://lnkd.in/dKzNYqDr RNGD (pronounced "Renegade") is sampling with customers now. #GenAI #Efficiency
FuriosaAI
Semiconductor Manufacturing
Our mission is to make AI computing sustainable so everyone on Earth has access to powerful AI.
About
FuriosaAI designs and develops data center accelerators for the most advanced AI models and applications. Our mission is to make AI computing sustainable so everyone on Earth has access to powerful AI.

Our Background: Three misfit engineers, one each from the hardware, software, and algorithm fields, who had previously worked at AMD, Qualcomm, and Samsung, got together and founded FuriosaAI in 2017 to build the world’s best AI chips. The company has raised more than $100 million, with investments from DSC Investment, Korea Development Bank, and Naver, the largest internet provider in Korea. We have partnered on our first two products with a wide range of industry leaders, including TSMC, ASUS, SK Hynix, GUC, and Samsung. FuriosaAI now has over 140 employees across Seoul, Silicon Valley, and Europe.

Our Approach: We are building full-stack solutions that offer the optimal combination of programmability, efficiency, and ease of use. We achieve this through a “first principles” approach to engineering: we start with the core problem, which is how to accelerate.
- Website: https://www.furiosa.ai
- Industry: Semiconductor Manufacturing
- Company size: 51-200 employees
- Headquarters: Seoul
- Type: Privately held
- Founded: 2017
Locations
FuriosaAI employees
- Auro Tripathy: Solving the AI last mile; fast & efficient deployment. Let's get your AI creation into users' hands!
- Oliver Libaw: Editorial/Content @FuriosaAI (Ex-Meta, Ex-Yahoo, Ex-ABC News, Ex-some other places)
- Nuno Lopes: Associate Prof at U. Lisbon (IST-UL)
- Sander Arts: Venture Advisor | CMO | Board Member | Author | Investor
Updates
- Who’s going to Web Summit and Semicon Europa (Electronica) this week? Our Lisbon- and Munich-based team members will be there, so DM us if you're attending and would like to talk about LLM inference, AI chip startups, or what it's like to join our team! 👋 #websummit #semicon #electronica
- This article from Generative Value, written by Eric Flaningam and Austin L., gives a great overview of today's AI semiconductor ecosystem and where it might be heading. 🔍 We recommend giving them a follow to learn more! 👉 https://lnkd.in/d9J2cFp2 #ai #semiconductor #ecosystem
- In case you missed it, here’s a look back at the RNGD unveiling at the 🔥 Hot Chips Symposium in Palo Alto in August. 📽 https://lnkd.in/drq2-ThF Since then, a lot has happened, and we’re excited to share more RNGD updates soon. Sign up to learn the latest on benchmarking, the RNGD compiler and software stack, product availability, and more: https://furiosa.ai/signup 🗒 RNGD (pronounced "Renegade") is Furiosa’s second-gen AI chip for data center inference with #LLMs and multimodal models. It delivers performance, programmability, and power efficiency, and it’s sampling with customers now.
Hot Chips 2024: The global unveiling of RNGD
- In this blog post, our CTO and HW engineering leads candidly discuss how we overcame some tricky challenges to bring High Bandwidth Memory 3 (HBM3) to our second-gen chip. TL;DR: HBM3 is an important part of what makes RNGD (pronounced “Renegade”) a great solution for inference with multimodal models and LLMs. But it was also a pain in the ASIC to implement. 🤯 📄 Read the round-table article: https://lnkd.in/gvHUXzrz #HBM3 #AIhardware
- RNGD sampling is underway. We’ve delivered servers to three customers so far, with more to come. 📄 https://lnkd.in/gDeukDZc This is one of the most exciting steps in launching RNGD (“Renegade”), which is designed for high-performance, highly efficient inference in data centers with large language models (LLMs) and multimodal models. Sampling gives us a chance to validate performance, test our software ecosystem, and collect valuable real-world feedback on the product. We’ll share more about what we’ve learned from our first RNGD customers soon. Learn more about RNGD: https://furiosa.ai/rngd
- We're excited to see 21 new companies* joining the AI Platform Alliance. They join FuriosaAI and others in this important industry consortium, working towards more open, economical, and sustainable AI compute solutions. The future of AI depends on building a thriving, diverse ecosystem of hardware and software companies. You can learn more about the AI Platform Alliance on their site: https://lnkd.in/gvuiy757 *Extra shoutout to Wallaroo.AI for also going the "that's a slightly unusual name for an AI startup" route 👏
- It has been a busy week for research paper announcements at FuriosaAI 📚 ✍ 📖 💪. We're proud to announce that our paper "RNGD: A 5nm Tensor Contraction Processor for Power-Efficient Inference on Large Language Models" has been accepted as a Regular paper at the International Solid-State Circuits Conference (ISSCC) 2025, often called the “Chip Olympics” in our industry. Being selected for the “Invited Industry” session at this highly competitive IEEE conference is a significant recognition of our team's hard work and innovation. We're looking forward to presenting our work to leaders in the field this February in San Francisco. Learn more about RNGD (pronounced "Renegade"), which is in production now and sampling with customers: https://furiosa.ai/rngd #IEEE #ISSCC2025 #ISSCC #LLM
- Can MLLMs perform text-to-image in-context learning? Our engineers Wonjun Kang and HYUNG IL KOO collaborated with researchers at the University of Wisconsin-Madison to publish a paper on multimodal large language models’ ability to perform in-context learning, where the model is given both text and images as input and then told to extrapolate to generate a suitable new image. Abilities like this will be vital as the field continues to move from text-only generative algorithms to #multimodal models. The paper was accepted at the Conference on Language Modeling (#COLM), which took place last week at the University of Pennsylvania in Philadelphia. 📄 Read their paper here: https://lnkd.in/gwCw5THb 💻 Get the code and novel dataset here: https://lnkd.in/gvEUyw8Y And learn more about RNGD here: https://furiosa.ai/rngd #LLM #AI #Multimodal
- At #ECCV 2024 in Milan, Furiosa engineers Wonjun Kang, Kevin Galim, and HYUNG IL KOO presented a novel technique to enhance diffusion-based image editing models. Their approach significantly outperforms existing strategies, preserving key details from the input image and accurately following instructions provided in text prompts. Read their paper and try the code yourself. 📄 Paper: https://lnkd.in/gBitivRX 💻 Code: https://lnkd.in/gxazsbzf 📹 A short presentation: https://lnkd.in/giZdw8WV #ECCV2024 #ComputerVision