LocalAI/backend/go/llm
Ettore Di Giacinto 8814b31805
chore: drop gpt4all.cpp (#3106)
chore: drop gpt4all

gpt4all is already supported in llama.cpp - the backend was kept only to
maintain compatibility with old gpt4all models (prior to the gguf format).

It is now a good time to clean it up and remove it to slim down the
compilation process.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 23:35:55 +02:00
bert        chore: fix go.mod module (#2635)                    2024-06-23 08:24:36 +00:00
langchain   chore: fix go.mod module (#2635)                    2024-06-23 08:24:36 +00:00
llama       build: fix go.mod - don't import ourself (#2896)    2024-07-16 22:49:43 +02:00
llama-ggml  chore: fix go.mod module (#2635)                    2024-06-23 08:24:36 +00:00
rwkv        rf: centralize base64 image handling (#2595)        2024-06-24 08:34:36 +02:00