Open-source tool for running local LLMs on any device
A high-throughput and memory-efficient inference and serving engine for LLMs
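This description matches vLLM's tagline; purely as an illustrative sketch (assuming this entry is vLLM and using its offline batch API), usage looks roughly like:

```python
from vllm import LLM, SamplingParams

# Load a small model; vLLM manages paged KV-cache memory internally.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Prompts are submitted together so the engine can batch them for throughput.
outputs = llm.generate(["What is quantization?", "Explain KV caching."], params)
for out in outputs:
    print(out.outputs[0].text)
```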
Operating LLMs in production
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
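A rough sketch of LMDeploy's high-level pipeline API (the model ID is illustrative; any supported backend model can be substituted):

```python
from lmdeploy import pipeline

# Build an inference pipeline; prompts go in as a list, responses come back
# as a list of objects carrying the generated text.
pipe = pipeline("internlm/internlm2-chat-7b")
responses = pipe(["Summarize what a KV cache does."])
print(responses[0].text)
```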
Replace OpenAI GPT with another LLM in your app
Large Language Model Text Generation Inference
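This is the tagline of Hugging Face's text-generation-inference (TGI) server; assuming a TGI container is already listening on localhost:8080 (an illustrative address), a request might look like:

```python
import requests

# Call the server's /generate endpoint with a prompt and decoding parameters.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is speculative decoding?",
        "parameters": {"max_new_tokens": 64},
    },
)
print(resp.json()["generated_text"])
```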
Visual Instruction Tuning: Large Language-and-Vision Assistant
Sparsity-aware deep learning inference runtime for CPUs
Phi-3.5 for Mac: Locally-run Vision and Language Models
Open platform for training, serving, and evaluating language models
Framework dedicated to making neural data processing pipelines simple and fast
Neural Network Compression Framework for enhanced OpenVINO inference
A Unified Library for Parameter-Efficient Learning
Database system for building simpler and faster AI-powered applications
LLM training code for MosaicML foundation models
Low-latency REST API for serving text embeddings
Bring the notion of Model-as-a-Service to life
An easy-to-use LLM quantization package with user-friendly APIs
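The tagline matches AutoGPTQ; as a hedged sketch of that library's quantize-and-save flow (the model ID and calibration text are illustrative):

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

model_id = "facebook/opt-125m"  # illustrative small model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit weights with group size 128 is a common GPTQ configuration.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# GPTQ calibrates on a few tokenized examples to minimize quantization error.
examples = [tokenizer("Quantization trades a little accuracy for a lot of memory.",
                      return_tensors="pt")]
model.quantize(examples)
model.save_quantized("opt-125m-4bit")
```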
OpenAI-style API for open large language models
Libraries for applying sparsification recipes to neural networks
State-of-the-art Parameter-Efficient Fine-Tuning
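This matches Hugging Face's PEFT library; a minimal LoRA sketch, assuming PEFT and an illustrative base model:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# LoRA injects small trainable low-rank matrices; the base weights stay frozen.
config = LoraConfig(r=8, lora_alpha=16,
                    target_modules=["q_proj", "v_proj"], lora_dropout=0.05)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```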
DoWhy is a Python library for causal inference
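A minimal sketch of DoWhy's model → identify → estimate → refute workflow on one of its synthetic datasets:

```python
import dowhy.datasets
from dowhy import CausalModel

# Synthetic data with a known true effect (beta) and three confounders.
data = dowhy.datasets.linear_dataset(beta=10, num_common_causes=3,
                                     num_samples=1000, treatment_is_binary=True)
model = CausalModel(data=data["df"],
                    treatment=data["treatment_name"],
                    outcome=data["outcome_name"],
                    graph=data["gml_graph"])

estimand = model.identify_effect()  # identify the causal estimand
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
refutation = model.refute_estimate(estimand, estimate,
                                   method_name="placebo_treatment_refuter")
print(estimate.value, refutation)
```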
Optimizing inference proxy for LLMs
Efficient few-shot learning with Sentence Transformers
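The tagline matches SetFit; a hedged sketch using the classic SetFitTrainer API (renamed Trainer in setfit 1.0), with an illustrative sentence-transformer backbone:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# A handful of labeled examples per class is often enough for SetFit.
train_ds = Dataset.from_dict({
    "text": ["great movie", "terrible plot", "loved it", "waste of time"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
trainer = SetFitTrainer(model=model, train_dataset=train_ds)
trainer.train()
print(model.predict(["a masterpiece", "utterly boring"]))
```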
PyTorch library of curated Transformer models and their components