LocalAI/backend/cpp/llama

Latest commit: f84b55d1ef by siddimore, 2024-10-01 14:41:20 +02:00
feat: Add Get Token Metrics to GRPC server (#3687)

* Add Get Token Metrics to GRPC server
* Expose LocalAI endpoint

Signed-off-by: Siddharth More <siddimore@gmail.com>
patches          chore(deps): update llama.cpp (#3497)                                                                    2024-09-12 20:55:27 +02:00
CMakeLists.txt   deps(llama.cpp): update, support Gemma models (#1734)                                                    2024-02-21 17:23:38 +01:00
grpc-server.cpp  feat: Add Get Token Metrics to GRPC server (#3687)                                                       2024-10-01 14:41:20 +02:00
json.hpp         🔥 add LaVA support and GPT vision API, Multiple requests for llama.cpp, return JSON types (#1254)       2023-11-11 13:14:59 +01:00
Makefile         fix: speedup git submodule update with --single-branch (#2847)                                           2024-07-13 22:32:25 +02:00
prepare.sh       chore(deps): update llama.cpp (#3497)                                                                    2024-09-12 20:55:27 +02:00
utils.hpp        chore(deps): update llama.cpp (#3497)                                                                    2024-09-12 20:55:27 +02:00