ExternalVendorCode / LocalAI
mirror of https://github.com/mudler/LocalAI.git (synced 2025-03-23 20:45:20 +00:00)
LocalAI / backend / cpp / llama
Latest commit 697c769b64 by Ettore Di Giacinto, 2024-01-21 14:59:48 +01:00:
fix(llama.cpp): enable cont batching when parallel is set (#1622)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
File            | Last commit                                                                                        | Date
CMakeLists.txt  | Fix: Set proper Homebrew install location for x86 Macs (#1510)                                     | 2023-12-30 12:37:26 +01:00
grpc-server.cpp | fix(llama.cpp): enable cont batching when parallel is set (#1622)                                  | 2024-01-21 14:59:48 +01:00
json.hpp        | 🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254) | 2023-11-11 13:14:59 +01:00
Makefile        | update(llama.cpp): update server, correctly propagate LLAMA_VERSION (#1440)                        | 2023-12-15 08:26:48 +01:00
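
The #1622 change listed above ties continuous batching in grpc-server.cpp to the parallel-slot setting. As a rough sketch only (the struct and helper below are hypothetical stand-ins, not the actual LocalAI or llama.cpp code, though the field names follow llama.cpp's n_parallel / cont_batching naming), the idea is to force continuous batching on whenever more than one parallel sequence slot is requested:

#include <cstdio>

// Hypothetical stand-in for the relevant server parameters; not the real struct.
struct server_params {
    int  n_parallel    = 1;     // number of concurrent sequence slots
    bool cont_batching = false; // continuous batching is off by default
};

// Sketch of the fix: when more than one parallel slot is requested,
// enable continuous batching so idle slots keep receiving work.
void apply_parallel_setting(server_params &params, int requested_parallel) {
    if (requested_parallel > 1) {
        params.n_parallel    = requested_parallel;
        params.cont_batching = true;
    }
}

int main() {
    server_params params;
    apply_parallel_setting(params, 4);
    std::printf("n_parallel=%d cont_batching=%d\n", params.n_parallel, params.cont_batching);
    return 0;
}

Without this coupling, requesting several parallel slots while leaving continuous batching disabled would leave all but one slot idle, which is the behavior the commit message describes fixing.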