ExternalVendorCode/LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2025-06-24 01:03:28 +00:00
Files: LocalAI/pkg (libmtmd branch)
Latest commit: 7437d0c9ca "WIP" by Ettore Di Giacinto, 2025-05-14 20:11:06 +02:00
| Directory | Last commit | Date |
| --- | --- | --- |
| assets | fix: use rice when embedding large binaries (#5309) | 2025-05-04 16:42:42 +02:00 |
| concurrency | chore: update jobresult_test.go (#4124) | 2024-11-12 08:52:18 +01:00 |
| downloader | chore(downloader): support hf.co and hf:// URIs (#4677) | 2025-01-24 08:27:22 +01:00 |
| functions | chore(deps): update llama.cpp and sync with upstream changes (#4950) | 2025-03-06 00:40:58 +01:00 |
| grpc | feat(video-gen): add endpoint for video generation (#5247) | 2025-04-26 18:05:01 +02:00 |
| langchain | feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232) | 2024-05-04 17:56:12 +02:00 |
| library | fix: use rice when embedding large binaries (#5309) | 2025-05-04 16:42:42 +02:00 |
| model | feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243) | 2025-04-26 00:04:47 +02:00 |
| oci | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| startup | chore: drop embedded models (#4715) | 2025-01-30 00:03:01 +01:00 |
| store | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| templates | WIP | 2025-05-14 20:11:06 +02:00 |
| utils | feat(tts): Implement naive response_format for tts endpoint (#4035) | 2024-11-02 19:13:35 +00:00 |
| xsync | chore: fix go.mod module (#2635) | 2024-06-23 08:24:36 +00:00 |
| xsysinfo | fix(gpu): do not assume gpu being returned has node and mem (#5310) | 2025-05-03 19:00:24 +02:00 |