LocalAI/backend

Latest commit 98ad93d53e by Ettore Di Giacinto, 2024-02-04 13:15:51 +01:00:
Drop ggml-based gpt2 and starcoder (supported by llama.cpp) (#1679)

* Drop ggml-based gpt2 and starcoder (supported by llama.cpp)
* Update compatibility table
| Name               | Latest commit                                                       | Date                       |
|--------------------|---------------------------------------------------------------------|----------------------------|
| cpp                | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)    | 2024-02-01 19:21:52 +01:00 |
| go                 | Drop ggml-based gpt2 and starcoder (supported by llama.cpp) (#1679) | 2024-02-04 13:15:51 +01:00 |
| python             | transformers: correctly load automodels (#1643)                     | 2024-01-26 00:13:21 +01:00 |
| backend_grpc.pb.go | transformers: correctly load automodels (#1643)                     | 2024-01-26 00:13:21 +01:00 |
| backend.proto      | transformers: correctly load automodels (#1643)                     | 2024-01-26 00:13:21 +01:00 |
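backend.proto defines the gRPC contract between the LocalAI core and its backends, and backend_grpc.pb.go is the generated Go stub for that service. The snippet below is a minimal sketch of how Go code typically consumes such generated stubs; the identifiers pb.NewBackendClient and pb.HealthMessage, the import alias, and the port are illustrative assumptions following the usual protoc-gen-go-grpc naming convention, not names verified against the generated file.

```go
// Minimal sketch, assuming a conventional protoc-gen-go / protoc-gen-go-grpc
// layout for backend.proto; pb.NewBackendClient and pb.HealthMessage are
// illustrative assumptions, not verified identifiers.
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to a backend process exposing its gRPC service locally.
	conn, err := grpc.Dial("127.0.0.1:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial backend: %v", err)
	}
	defer conn.Close()

	// With the generated package imported (e.g. as pb), a call would look like:
	//   client := pb.NewBackendClient(conn)                                     // assumed constructor
	//   reply, err := client.Health(context.Background(), &pb.HealthMessage{})  // assumed RPC
	log.Println("connected; the stubs in backend_grpc.pb.go drive the actual RPCs")
}
```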