b4rtaz/distributed-llama
Inference Engines
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.
GitHub Metrics
Stars
2.8k
Forks
212
Open Issues
45
Watchers
50
Contributors
13
Weekly Commits
0
Language
C++
License
MIT
Last Commit
Feb 10, 2026
Created
Dec 5, 2023
Latest Release
v0.16.5
Release Date
Feb 2, 2026
Synced: Mar 3, 2026
Quality Scores
Documentation Quality (weight: 20%)
0.0
Community Health (weight: 20%)
0.0
Maintenance Velocity (weight: 15%)
0.0
API Design & DX (weight: 20%)
0.0
Production Readiness (weight: 15%)
0.0
Ecosystem Integration (weight: 10%)
0.0
Tags
distributed-computing, distributed-llm, llama2, llama3, llm, llm-inference, llms, neural-network, open-llm
Radar chart: no scores yet