LocalAI/core
Latest commit 8814b31805 by Ettore Di Giacinto (2024-08-07 23:35:55 +02:00)

chore: drop gpt4all.cpp (#3106)

gpt4all is already supported in llama.cpp; the backend was kept only to
maintain compatibility with old gpt4all models (prior to the gguf format).

Now is a good time to clean up and remove it to slim down the compilation
process.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Name                   Last commit                                                                Date
backend                feat(llama.cpp): support embeddings endpoints (#2871)                      2024-07-15 22:54:16 +02:00
cli                    chore: drop gpt4all.cpp (#3106)                                            2024-08-07 23:35:55 +02:00
clients                feat(store): add Golang client (#1977)                                     2024-04-16 15:54:14 +02:00
config                 feat(p2p): allow to run multiple clusters in the same p2p network (#3128)  2024-08-07 23:35:44 +02:00
dependencies_manager   fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
gallery                fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
http                   chore: drop gpt4all.cpp (#3106)                                            2024-08-07 23:35:55 +02:00
p2p                    feat(p2p): allow to run multiple clusters in the same p2p network (#3128)  2024-08-07 23:35:44 +02:00
schema                 feat(openai): add json_schema format type and strict mode (#3193)          2024-08-07 15:27:02 -04:00
services               feat(model-list): be consistent, skip known files from listing (#2760)     2024-07-10 15:28:39 +02:00
startup                chore: drop gpt4all.cpp (#3106)                                            2024-08-07 23:35:55 +02:00
application.go         feat(model-list): be consistent, skip known files from listing (#2760)     2024-07-10 15:28:39 +02:00