@doytsujin.com
- Toronto, Canada
Languages
- Assembly
- C
- C#
- C++
- CSS
- CoffeeScript
- Dart
- Dockerfile
- Go
- HCL
- HTML
- Haskell
- Java
- JavaScript
- Julia
- Jupyter Notebook
- Kotlin
- Lua
- MDX
- Makefile
- Markdown
- OCaml
- Objective-C++
- PHP
- PowerShell
- Processing
- Python
- R
- Ruby
- Rust
- SCSS
- Scala
- Shell
- Solidity
- Svelte
- Swift
- TeX
- TypeScript
- Verilog
- Vim Script
- YARA
- Zig
Starred repositories
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
Interact with your documents using the power of GPT, 100% privately, no data leaks
Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
ChatGLM-6B: An Open Bilingual Dialogue Language Model
Making large AI models cheaper, faster and more accessible
The simplest, fastest repository for training/finetuning medium-sized GPTs.
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
Code and documentation to train Stanford's Alpaca models, and generate the data.
Platform for building AI that can learn and answer questions over federated data.
The official Python library for the OpenAI API
The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.
Stable Diffusion with Core ML on Apple Silicon
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
Geometric Computer Vision Library for Spatial AI
A collection of libraries to optimise AI model performance
Implementation of RLHF (Reinforcement Learning with Human Feedback) on top of the PaLM architecture. Basically ChatGPT but with PaLM
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
🐫 CAMEL: Finding the Scaling Law of Agents. The first and best multi-agent framework. https://www.camel-ai.org
Example projects using the AWS CDK