LocalAI/backend/cpp/llama
Latest commit: 404ca3cc23 by Ettore Di Giacinto, 2024-11-26 11:12:57 +01:00
chore(deps): bump llama.cpp to 47f931c8f9a26c072d71224bc8013cc66ea9e445 (#4263)

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File             Last commit                                                                                         Date
patches          chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00
CMakeLists.txt   deps(llama.cpp): update, support Gemma models (#1734)                                               2024-02-21 17:23:38 +01:00
grpc-server.cpp  chore(deps): bump llama.cpp to 47f931c8f9a26c072d71224bc8013cc66ea9e445 (#4263)                     2024-11-26 11:12:57 +01:00
json.hpp         🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)   2023-11-11 13:14:59 +01:00
Makefile         chore(deps): bump llama-cpp to ae8de6d50a09d49545e0afab2e50cc4acfb280e2 (#4157)                     2024-11-15 12:51:43 +01:00
prepare.sh       chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00
utils.hpp        chore(deps): update llama.cpp (#3497)                                                               2024-09-12 20:55:27 +02:00