LocalAI/pkg
Ettore Di Giacinto 1120847f72
feat: bump llama.cpp, add gguf support ()
**Description**

This PR syncs up the `llama` backend to use `gguf`
(https://github.com/go-skynet/go-llama.cpp/pull/180). It also adds
`llama-stable` to the targets so we can still load ggml models. It adapts
the current tests to use the `llama-backend` for ggml and uses a `gguf`
model to run the tests on the new backend.

In order to consume the new version of go-llama.cpp, it also bumps Go to
1.21 (images, pipelines, etc.).

---------

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-24 01:18:58 +02:00
| Directory | Latest commit | Date |
| --- | --- | --- |
| assets | feat: Update gpt4all, support multiple implementations in runtime () | 2023-06-01 23:38:52 +02:00 |
| backend | feat: bump llama.cpp, add gguf support () | 2023-08-24 01:18:58 +02:00 |
| gallery | fix: match lowercase of the input, not of the model | 2023-08-08 00:46:22 +02:00 |
| grammar | feat: update integer, number and string rules - allow primitives as root types () | 2023-08-03 23:32:30 +02:00 |
| grpc | Feat: rwkv improvements: () | 2023-08-22 18:48:06 +02:00 |
| langchain | feat: add LangChainGo Huggingface backend () | 2023-06-01 12:00:06 +02:00 |
| model | feat: backend monitor shutdown endpoint, process based () | 2023-08-23 18:38:37 +02:00 |
| stablediffusion | feat: support upscaled image generation with esrgan () | 2023-06-05 17:21:38 +02:00 |
| utils | fix: do not break on newlines on function returns () | 2023-08-04 21:46:36 +02:00 |