ExternalVendorCode / LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2025-02-05 02:29:48 +00:00
LocalAI / backend / cpp / llama

Latest commit e843d7df0e by Ettore Di Giacinto: feat(grpc): return consumed token count and update response accordingly (#2035). Fixes: #1920. 2024-04-15 19:47:11 +02:00
File            | Last commit                                                                                         | Date
CMakeLists.txt  | deps(llama.cpp): update, support Gemma models (#1734)                                               | 2024-02-21 17:23:38 +01:00
grpc-server.cpp | feat(grpc): return consumed token count and update response accordingly (#2035)                     | 2024-04-15 19:47:11 +02:00
json.hpp        | 🔥 add LLaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254) | 2023-11-11 13:14:59 +01:00
Makefile        | test/fix: OSX Test Repair (#1843)                                                                   | 2024-03-18 19:19:43 +01:00
utils.hpp       | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)                                    | 2024-02-01 19:21:52 +01:00