Run local LLMs on any device. Open-source
A high-throughput and memory-efficient inference and serving engine
Database system for building simpler and faster AI-powered applications
LLM training code for MosaicML foundation models
Operating LLMs in production
Replace OpenAI GPT with another LLM in your app
An unofficial Python package that returns responses from Google Bard
A framework dedicated to neural data processing
Low-latency REST API for serving text embeddings
Phi-3.5 for Mac: Locally-run Vision and Language Models
Visual Instruction Tuning: Large Language-and-Vision Assistant
State-of-the-art Parameter-Efficient Fine-Tuning
PyTorch library of curated Transformer models and their components
A high-performance ML model serving framework that offers dynamic batching
Tensor search for humans
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere
Run 100B+ language models at home, BitTorrent-style
Implementation of "Tree of Thoughts"
Implementation of model parallel autoregressive transformers on GPUs