Mirror of https://github.com/mudler/LocalAI.git, synced 2024-12-18 20:27:57 +00:00
Commit 3c9544b023

* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
* fixup path
* fixup sources
* fixups sd
* update SD
* fixup
* fixup: create piper libdir also when not built
* fix make target on linux test

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
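The "Makefile: get sources in sources/" change above moves the third-party backend checkouts under a sources/ directory, which is why /sources/ and the get-sources entry appear in the ignore file below. A minimal sketch of that pattern, assuming hypothetical clone commands (the URLs and destination paths are illustrative, not the project's actual Makefile recipes):

# Sketch only: third-party backend sources are fetched into sources/,
# so git can ignore the whole directory.
mkdir -p sources
git clone https://github.com/ggerganov/llama.cpp sources/llama.cpp
git clone https://github.com/ggerganov/whisper.cpp sources/whisper.cpp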
39 lines · 525 B · Plaintext
# go-llama build artifacts
/sources/
__pycache__/
*.a
get-sources
/backend/cpp/llama/grpc-server
/backend/cpp/llama/llama.cpp

go-ggml-transformers
go-gpt2
go-rwkv
whisper.cpp
/bloomz
go-bert

# LocalAI build binary
LocalAI
local-ai
# prevent above rules from omitting the helm chart
!charts/*
# prevent above rules from omitting the api/localai folder
!api/localai

# Ignore models
models/*
test-models/
test-dir/

release/

# just in case
.DS_Store
.idea

# Generated during build
backend-assets/
prepare
/ggml-metal.metal
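To see which rule applies to a given path, git check-ignore -v (a standard git command, run from the repository root) prints the matching pattern and its line number; the path below is illustrative:

# Illustrative query: which pattern ignores sources/whisper.cpp?
$ git check-ignore -v sources/whisper.cpp
.gitignore:2:/sources/	sources/whisper.cpp

The !charts/* and !api/localai entries are negation patterns: they re-include those paths so that earlier unanchored rules (for example, local-ai, which would otherwise match a chart directory of the same name anywhere in the tree) do not exclude the helm chart or the api/localai folder.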