ExternalVendorCode/LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2025-06-14 13:08:08 +00:00
Files at commit e65e3253a3210aafffa62be72c696ac89bcb92f4
Path: LocalAI/backend/cpp/llama
Latest commit: Ettore Di Giacinto, 6257e2f510 — chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 (#3793)
This also adapts to upstream changes.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-12 01:25:03 +02:00
Name             Last commit                                                                                         Date
..
patches/         chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00
CMakeLists.txt   deps(llama.cpp): update, support Gemma models (#1734)                                               2024-02-21 17:23:38 +01:00
grpc-server.cpp  chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 (#3793)                     2024-10-12 01:25:03 +02:00
json.hpp         🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)   2023-11-11 13:14:59 +01:00
Makefile         fix: speedup git submodule update with --single-branch (#2847)                                      2024-07-13 22:32:25 +02:00
prepare.sh       chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00
utils.hpp        chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00
Powered by Gitea Version: 1.24.0