LocalAI/backend

Latest commit 6257e2f510 by Ettore Di Giacinto:
chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 (#3793)

This also adapts to upstream changes.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-10-12 01:25:03 +02:00
Contents:

cpp            chore(deps): bump llama-cpp to 96776405a17034dcfd53d3ddf5d142d34bdbb657 (#3793)   (2024-10-12 01:25:03 +02:00)
go             fix: untangle pkg/grpc and core/schema for Transcription (#3419)                  (2024-09-02 15:48:53 +02:00)
python         feat(transformers): Use downloaded model for Transformers backend if it already exists. (#3777)   (2024-10-10 08:42:59 +00:00)
backend.proto  feat: Add Get Token Metrics to GRPC server (#3687)                                (2024-10-01 14:41:20 +02:00)