ExternalVendorCode / LocalAI
Mirror of https://github.com/mudler/LocalAI.git (synced 2025-06-19 15:33:45 +00:00)
Files at commit 3733250b3c33a2f5446d02205054d16f648a1aa2
Path: LocalAI / backend / cpp

History
Latest commit: 697c769b64 by Ettore Di Giacinto
fix(llama.cpp): enable cont batching when parallel is set (#1622)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-01-21 14:59:48 +01:00
..
grpc     move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build (#1576)     2024-01-13 10:08:26 +01:00
llama    fix(llama.cpp): enable cont batching when parallel is set (#1622)     2024-01-21 14:59:48 +01:00
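The "cont batching" fix above refers to llama.cpp's continuous batching mode, which lets multiple parallel request slots share a single decode batch. As a hedged sketch (the flag names below come from llama.cpp's server CLI, not from this listing, and the model path is hypothetical), the relationship the commit message describes looks like:

```shell
# Hypothetical llama.cpp server invocation: when several parallel slots
# are requested (-np / --parallel), continuous batching (-cb / --cont-batching)
# should also be enabled so concurrent requests are decoded together.
./server -m models/model.gguf --parallel 4 --cont-batching
```

The commit's point is that enabling parallel slots without continuous batching is rarely what the user wants, so LocalAI turns the latter on automatically when the former is set.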