Run local LLMs on any device; open source
A high-throughput and memory-efficient inference and serving engine
LLM training code for MosaicML foundation models
Replace OpenAI GPT with another LLM in your app
Unofficial Python package that returns responses from Google Bard
Database system for building simpler and faster AI-powered applications
Operating LLMs in production
Tensor search for humans
Phi-3.5 for Mac: Locally-run Vision and Language Models
A high-performance ML model-serving framework with dynamic batching
Low-latency REST API for serving text-embeddings
Visual Instruction Tuning: Large Language-and-Vision Assistant
State-of-the-art Parameter-Efficient Fine-Tuning
PyTorch library of curated Transformer models and their components
Framework dedicated to neural data processing
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere
Run 100B+ language models at home, BitTorrent-style
Implementation of "Tree of Thoughts"
Implementation of model parallel autoregressive transformers on GPUs