LocalAI/core/cli
Ettore Di Giacinto 8814b31805
chore: drop gpt4all.cpp (#3106)

gpt4all is already supported in llama.cpp; the dedicated backend was kept only for
compatibility with old gpt4all models (prior to the gguf format).

It is a good time now to clean up and remove it in order to slim down the
compilation process.

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-08-07 23:35:55 +02:00
context/        feat(llama.cpp): Totally decentralized, private, distributed, p2p inference (#2343)  2024-05-20 19:17:59 +02:00
worker/         chore: drop gpt4all.cpp (#3106)  2024-08-07 23:35:55 +02:00
cli.go          feat(p2p): Federation and AI swarms (#2723)  2024-07-08 22:04:06 +02:00
federated.go    feat(p2p): allow to run multiple clusters in the same p2p network (#3128)  2024-08-07 23:35:44 +02:00
models.go       fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
run.go          feat(p2p): allow to run multiple clusters in the same p2p network (#3128)  2024-08-07 23:35:44 +02:00
transcript.go   fix(cli): remove duplicate alias (#2654)  2024-06-25 10:08:13 +02:00
tts.go          chore: fix go.mod module (#2635)  2024-06-23 08:24:36 +00:00
util.go         fix: be consistent in downloading files, check for scanner errors (#3108)  2024-08-02 20:06:25 +02:00
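The file names in this listing suggest a one-subcommand-per-file layout (run, models, tts, transcript, federated, worker, util). Below is a minimal sketch of that layout, assuming a kong-style declarative CLI; the command names, flags, and struct fields are illustrative assumptions, not the actual contents of cli.go or the other files.

// Sketch of a subcommand-per-file CLI layout using github.com/alecthomas/kong.
// The commands and flags below are illustrative assumptions only.
package main

import (
	"fmt"

	"github.com/alecthomas/kong"
)

// RunCmd would live in a file like run.go: starts the API server.
type RunCmd struct {
	Address string `default:":8080" help:"Bind address for the API server."`
}

func (r *RunCmd) Run() error {
	fmt.Println("starting server on", r.Address)
	return nil
}

// ModelsCmd would live in a file like models.go: manages local models.
type ModelsCmd struct {
	List bool `help:"List the available models."`
}

func (m *ModelsCmd) Run() error {
	fmt.Println("models: list =", m.List)
	return nil
}

// FederatedCmd would live in a file like federated.go: joins a p2p federation.
type FederatedCmd struct {
	Token string `help:"Shared token of the p2p network to join."`
}

func (f *FederatedCmd) Run() error {
	fmt.Println("joining federation")
	return nil
}

// cli mirrors the one-file-per-subcommand layout suggested by the listing above.
var cli struct {
	Run       RunCmd       `cmd:"" help:"Start the API server."`
	Models    ModelsCmd    `cmd:"" help:"Manage models."`
	Federated FederatedCmd `cmd:"" help:"Run in federated p2p mode."`
}

func main() {
	ctx := kong.Parse(&cli)
	// Dispatch to the Run() method of the selected subcommand.
	ctx.FatalIfErrorf(ctx.Run())
}

Under the same assumption, the context/ and worker/ subdirectories would hold shared CLI state and the worker subcommand group respectively, kept out of the flat list of command files.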