ExternalVendorCode / LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2025-02-11 21:25:19 +00:00
Directory: LocalAI/backend/cpp/llama
Latest commit: c047c19145 by cryptk — fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build (#2697)
Signed-off-by: Chris Jowett <421501+cryptk@users.noreply.github.com>
2024-07-02 08:46:59 +02:00
CMakeLists.txt    deps(llama.cpp): update, support Gemma models (#1734)                                               2024-02-21 17:23:38 +01:00
grpc-server.cpp   feat(options): add repeat_last_n (#2660)                                                            2024-06-26 14:58:50 +02:00
json.hpp          🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)   2023-11-11 13:14:59 +01:00
Makefile          fix: make sure the GNUMake jobserver is passed to cmake for the llama.cpp build (#2697)            2024-07-02 08:46:59 +02:00
prepare.sh        feat(llama.cpp): do not specify backends to autoload and add llama.cpp variants (#2232)            2024-05-04 17:56:12 +02:00
utils.hpp         feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)                                    2024-02-01 19:21:52 +01:00
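The Makefile fix in commit #2697 concerns GNU Make's jobserver protocol: when a top-level `make -j N` drives a nested cmake-generated build, the child only shares the parent's job slots if the recipe is marked as recursive, either by invoking `$(MAKE)` directly or by prefixing the command with `+`. The following is a minimal illustration of that technique, with hypothetical target and path names, not the repository's actual Makefile:

```make
# Configure step: an ordinary shell command, no jobserver involvement.
build/Makefile:
	cmake -S llama.cpp -B build

# Build step: invoking $(MAKE) (or prefixing the line with '+') tells GNU Make
# this recipe runs a sub-make, so the jobserver file descriptors and MAKEFLAGS
# are passed through, and `make -j8` at the top level parallelizes the
# cmake-generated build as well.
build: build/Makefile
	$(MAKE) -C build
```

Without this, the child make prints "warning: jobserver unavailable: using -j1" and the llama.cpp build runs serially even under a parallel top-level make.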