llm-inference
Here are 433 public repositories matching this topic...
A programming framework for agentic AI. Discord: https://aka.ms/autogen-dc. Roadmap: https://aka.ms/autogen-roadmap
Updated Jun 13, 2024 - Jupyter Notebook
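As a rough illustration of the two-agent pattern AutoGen is built around, here is a minimal sketch using the `pyautogen` Python package; the model name, placeholder API key, and `llm_config` values are assumptions for illustration, not taken from this listing.

```python
# Minimal two-agent sketch with the pyautogen package (pip install pyautogen).
# Model name and API key are illustrative assumptions.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o-mini", "api_key": "sk-..."}]}  # placeholder key

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",     # run fully automated for this demo
    code_execution_config=False,  # no local code execution in this sketch
)

# The user proxy drives the conversation; the assistant replies via the configured LLM.
user_proxy.initiate_chat(assistant, message="Summarize what LLM inference serving involves.")
```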
Run any open-source LLM, such as Llama 2 or Mistral, as an OpenAI-compatible API endpoint in the cloud.
Updated Jun 11, 2024 - Python
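Because the server exposes an OpenAI-compatible endpoint, any OpenAI client can talk to it. A minimal sketch, assuming the server runs locally on port 3000 and serves a Llama 2 model (both the address and the model name are assumptions):

```python
# Query an OpenAI-compatible endpoint with the official openai client (pip install openai).
# Base URL, API key, and model name are assumptions; substitute whatever your server reports.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed-for-local")

response = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",
    messages=[{"role": "user", "content": "Explain KV caching in one sentence."}],
)
print(response.choices[0].message.content)
```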
Official inference library for Mistral models
Updated Jun 10, 2024 - Jupyter Notebook
Pretrain, finetune, and deploy 20+ LLMs on your own data. Uses state-of-the-art techniques: flash attention, FSDP, 4-bit quantization, LoRA, and more.
Updated Jun 12, 2024 - Python
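The LoRA technique named here can be illustrated independently of the project's own API: a frozen weight matrix gets a trainable low-rank update. The sketch below is a generic PyTorch toy, not code from the repository, and the dimensions and hyperparameters are assumptions.

```python
# Toy LoRA layer: y = base(x) + (alpha/r) * B(A(x)), with the base weight frozen
# and only the low-rank A/B matrices trained. All sizes are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)                 # frozen pretrained weight
        self.lora_a = nn.Linear(in_features, r, bias=False)    # low-rank "down" projection
        self.lora_b = nn.Linear(r, out_features, bias=False)   # low-rank "up" projection
        nn.init.zeros_(self.lora_b.weight)                     # start as a zero update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_b(self.lora_a(x))

layer = LoRALinear(4096, 4096)
out = layer(torch.randn(2, 4096))   # only lora_a / lora_b receive gradients
print(out.shape)
```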
High-speed Large Language Model Serving on PCs with Consumer-grade GPUs
Updated Jun 11, 2024 - C++
The easiest way to serve AI/ML models in production: build model inference services, LLM APIs, multi-model inference graphs/pipelines, LLM/RAG apps, and more!
Updated Jun 13, 2024 - Python
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
Updated Jun 13, 2024 - C++
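A minimal sketch of the OpenVINO Python workflow this describes (read a model, compile it for a device, run inference); the model path and input shape are placeholders, and details may differ between OpenVINO releases.

```python
# Read, compile, and run a model with the OpenVINO runtime (pip install openvino).
# "model.xml" and the input shape are placeholders for illustration.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # IR/ONNX model exported beforehand
compiled = core.compile_model(model, "CPU")   # or "GPU", "AUTO", ...

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(input_tensor)[compiled.output(0)]   # compiled models are callable
print(result.shape)
```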
🔮 SuperDuperDB: Bring AI to your database! Build, deploy, and manage any AI application directly with your existing data infrastructure, without moving your data, including streaming inference, scalable model training, and vector search.
Updated Jun 12, 2024 - Python
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
Updated Jun 13, 2024 - Python
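A minimal sketch of the pipeline-style usage LMDeploy advertises; the model identifier is an assumption and the API surface may differ by version.

```python
# High-level pipeline API (pip install lmdeploy); the model ID is an illustrative assumption.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")
responses = pipe(["What is continuous batching?", "Why quantize LLM weights?"])
for r in responses:
    print(r.text)
```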
Sparsity-aware deep learning inference runtime for CPUs
Updated Jun 6, 2024 - Python
Code examples and resources for DBRX, a large language model developed by Databricks
Updated May 1, 2024 - Python
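The repository's own examples aren't reproduced here, but loading DBRX through Hugging Face Transformers looks roughly like the sketch below; it is a generic Transformers invocation rather than the repo's code, the model is a large MoE that needs several high-memory GPUs, and the access-token and device settings are assumptions.

```python
# Generic Hugging Face Transformers invocation for DBRX (not the repo's own example code).
# Assumes access to the weights on the Hub and a multi-GPU machine.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dbrx-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dbrx-instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # shard across available GPUs
)

inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is DBRX?"}],
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```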
⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel platforms. ⚡
Updated Jun 13, 2024 - Python
Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads
Updated May 19, 2024 - Jupyter Notebook
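The core idea, extra decoding heads propose several future tokens that the base model then verifies in one pass, can be sketched as a toy accept/reject loop. This is illustrative logic only, not the project's actual API; `propose` and `verify` are hypothetical stand-ins.

```python
# Toy draft-and-verify step in the spirit of multi-head speculative decoding.
# `propose` and `verify` are hypothetical stand-ins for the extra heads and the base model.
from typing import Callable, List

def speculative_step(
    context: List[int],
    propose: Callable[[List[int]], List[int]],  # extra heads: guess the next k tokens
    verify: Callable[[List[int]], List[int]],   # base model: greedy next token after each prefix
) -> List[int]:
    draft = propose(context)              # k candidate tokens from the decoding heads
    checked = verify(context + draft)     # one base-model pass over context + draft
    accepted: List[int] = []
    for i, token in enumerate(draft):
        # keep drafted tokens only while they match what the base model would have produced
        if checked[len(context) + i - 1] == token:
            accepted.append(token)
        else:
            break
    if not accepted:                      # always make progress with one verified token
        accepted.append(checked[len(context) - 1])
    return context + accepted

# Tiny smoke test with fake "models" that always predict token 7.
ctx = [1, 2, 3]
print(speculative_step(ctx, propose=lambda c: [7, 7], verify=lambda s: [7] * len(s)))
```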
Run any Llama 2 model locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local Llama 2 backend for generative agents/apps.
Updated Mar 22, 2024 - Jupyter Notebook
AICI: Prompts as (Wasm) Programs
Updated Jun 11, 2024 - Rust
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
Updated Jun 13, 2024 - Python
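The point of a multi-LoRA server is that each request can name which fine-tuned adapter to apply on top of the shared base model. A sketch of such a request is below, assuming a locally running server with a text-generation-style REST endpoint; the URL, payload shape, and adapter ID are assumptions rather than a documented contract.

```python
# Hypothetical request to a multi-LoRA server: the adapter is chosen per request.
# Endpoint path, payload fields, and adapter name are assumptions for illustration.
import requests

payload = {
    "inputs": "Classify the sentiment: 'The latency dropped by half.'",
    "parameters": {
        "max_new_tokens": 32,
        "adapter_id": "my-org/sentiment-lora",  # which fine-tuned adapter to apply
    },
}
resp = requests.post("http://localhost:8080/generate", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())
```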
Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
Updated Jun 6, 2024 - Python
📖 A curated list of awesome LLM inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, continuous batching, FlashAttention, PagedAttention, etc.
Updated Jun 12, 2024
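One technique the list covers, PagedAttention, manages the KV cache in fixed-size blocks much like a virtual-memory page table. The toy allocator below illustrates only that bookkeeping idea; it is not from any of the listed projects, and the block size and capacity are assumptions.

```python
# Toy block allocator in the spirit of PagedAttention: each sequence's KV cache
# lives in fixed-size blocks handed out from a shared free pool (illustrative only).
BLOCK_SIZE = 16  # tokens per KV-cache block (assumed)

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))     # shared pool of physical block ids
        self.block_tables: dict[int, list[int]] = {}   # seq_id -> physical blocks
        self.lengths: dict[int, int] = {}              # seq_id -> tokens stored

    def append_token(self, seq_id: int) -> None:
        """Reserve KV-cache space for one more token of sequence `seq_id`."""
        length = self.lengths.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:                   # current block is full (or none yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted; a real server would preempt or evict")
            self.block_tables.setdefault(seq_id, []).append(self.free_blocks.pop())
        self.lengths[seq_id] = length + 1

    def free(self, seq_id: int) -> None:
        """Return all blocks of a finished sequence to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)

cache = PagedKVCache(num_blocks=4)
for _ in range(20):                # 20 tokens -> ceil(20 / 16) = 2 blocks for sequence 0
    cache.append_token(seq_id=0)
print(cache.block_tables[0], len(cache.free_blocks))   # two blocks used, two still free
```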