Yuyang (Bernie) Wang’s Post

Principal ML Scientist at AWS AI

🎉 AI/ML Festival Happening in Vienna! #ICML2024 Unfortunately, I'm not there this year. Crazy to think the last time I attended ICML was already five years ago in Long Beach! Here's our team's presence that I want to highlight. Do check it out if you're around!

[AutoML] The week kicked off with an AutoGluon tutorial presented by Boran Han and Su Zhou. If you want to solve your predictive ML problems with just 3 lines of code, whether tabular, multi-modal (text and image), or time series, and you also want state-of-the-art accuracy and efficiency, try us out: https://lnkd.in/guHHhbtK (a minimal sketch of those 3 lines is at the end of this post). Speaking of which, with the latest 1.1.1 release you can use our foundation models for #timeseries #forecasting, the #Chronos family (https://lnkd.in/gPxRKF_c), ranging from tiny to large. Chronos-t5-tiny alone has over 4 million downloads in the last month, currently ranking in the top 50 by download count among 780K+ models on Hugging Face 🤗. Interested? Check out the blog post on Amazon Science: https://lnkd.in/gc8sDMuN. One more thing on #AutoML: if 3 lines are 3 too many and you want a 0-line managed service, check out Amazon SageMaker Canvas! Powered by AutoGluon, it lets you create highly accurate machine learning models without any machine learning experience or writing a single line of code.

Now onto the papers, but first, a big shout-out to our great collaborators Shuai Zhang, Boran Han, Danielle Maddix Robinson, Gaurav Gupta, Shima Alizadeh, Andrew Gordon Wilson, Michael Mahoney, and Andrew Stuart, and especially our amazing interns Dyah Adila, Shikai Qiu, and S. Chandra Mouli 🌟

[LLM] How do we transfer knowledge from LLMs to smaller models tailored to downstream tasks? Instead of reusing pre-trained weights tied to a particular model structure, we propose transferring (latent) features, which opens the door to any downstream model. https://lnkd.in/gsHw6aXe

[LLM] How can we steer #LLMs to mitigate the bias present in their training data? We introduce SteerFair, which finds the bias direction in the latent space and steers activation values away from it during inference. https://lnkd.in/gnhxUWxQ

[Scientific Machine Learning (#SciML)] Data-driven approaches such as neural operators (NOs) have been disrupting many fields of scientific computing where classical numerical solvers reign. We explore diverse ensembles built on top of NOs, with constraints, using the framework of our previous work, ProbConserv (Hansen et al., 2023), to improve their out-of-domain performance: https://lnkd.in/gfTXdsbh

And did I mention that Boran will give a talk on "Towards Foundation Models for Earth Monitoring and Forecasting" this Friday at 9am CEST in the Foundation Models in the Wild workshop? 😀
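For those curious what the "3 lines" look like in practice, here is a minimal sketch of the AutoGluon tabular workflow. The file names and the "target" label column are placeholders for your own data:

    from autogluon.tabular import TabularDataset, TabularPredictor

    # "train.csv"/"test.csv" and label column "target" are placeholders
    train = TabularDataset("train.csv")
    predictor = TabularPredictor(label="target").fit(train)
    predictions = predictor.predict(TabularDataset("test.csv"))

The same fit/predict pattern carries over to the multimodal and time series predictors; the tutorial link above has the full tour.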
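And a sketch of zero-shot forecasting with a Chronos model, based on my reading of the chronos-forecasting package; the toy history values and horizon here are made up, so check the repo linked above for the current API:

    import torch
    from chronos import ChronosPipeline

    pipeline = ChronosPipeline.from_pretrained(
        "amazon/chronos-t5-tiny",   # smallest member of the Chronos family
        device_map="cpu",
        torch_dtype=torch.float32,
    )
    # 1-D tensor of past observations (toy example)
    context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
    forecast = pipeline.predict(context, prediction_length=8)
    # forecast shape: [num_series, num_samples, prediction_length];
    # take the sample median as a point forecast
    median = forecast.quantile(0.5, dim=1)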
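On the feature-transfer paper: I won't reproduce the method here, but the general idea of reusing a frozen LLM's latent features to feed an arbitrary downstream model looks roughly like this (gpt2 and mean-pooling are stand-ins for illustration, not the paper's choices):

    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in LLM
    tok.pad_token = tok.eos_token                 # gpt2 has no pad token
    llm = AutoModel.from_pretrained("gpt2").eval()

    def latent_features(texts):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            hidden = llm(**batch).last_hidden_state   # [batch, seq, dim]
        return hidden.mean(dim=1)                     # mean-pool to [batch, dim]

    # Any downstream model can consume these features, e.g.:
    # from sklearn.linear_model import LogisticRegression
    # LogisticRegression().fit(latent_features(train_texts).numpy(), train_labels)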
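On SteerFair: the paper has the exact recipe, but generic activation steering (my simplification, not the authors' algorithm) boils down to estimating a bias direction from collected activations and removing that component at inference time:

    import torch

    def bias_direction(acts_a, acts_b):
        # acts_a, acts_b: [N, dim] activations from two contrasting groups
        d = acts_a.mean(dim=0) - acts_b.mean(dim=0)
        return d / d.norm()

    def steer_away(hidden, direction, alpha=1.0):
        # subtract alpha times the component of hidden along the bias direction
        coeff = (hidden @ direction).unsqueeze(-1)   # [batch, 1]
        return hidden - alpha * coeff * direction    # broadcasts to [batch, dim]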

Alexis Roos

Sr Manager, Science and ML at Amazon

3mo

Great event and work from your team!

Elizabeth Rozet

Employer Brand @Amazon | Empowering Future Top Voices | Chronic Illness Awareness

3mo

So cool, Yuyang (Bernie) Wang! Love this recap!!
