LocalAI [bot]
a7fc89c207
⬆️ Update ggerganov/whisper.cpp ( #1927 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-29 22:29:50 +01:00
Ettore Di Giacinto
123a5a2e16
feat(swagger): Add swagger API doc ( #1926 )
...
* makefile(build): add minimal and api build target
* feat(swagger): Add swagger
2024-03-29 22:29:33 +01:00
LocalAI [bot]
ab2f403dd0
⬆️ Update ggerganov/whisper.cpp ( #1924 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-29 00:13:59 +01:00
LocalAI [bot]
b9c5e14e2c
⬆️ Update ggerganov/llama.cpp ( #1923 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-29 00:13:38 +01:00
LocalAI [bot]
07c49ee4b8
⬆️ Update ggerganov/whisper.cpp ( #1914 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-27 22:53:13 +00:00
LocalAI [bot]
07c4bdda7c
⬆️ Update ggerganov/llama.cpp ( #1913 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-27 21:57:59 +00:00
cryptk
0c0efc871c
fix(build): better CI logging and correct some build failure modes in Makefile ( #1899 )
...
* feat: group make output by target when running parallelized builds in CI
* fix: quote GO_TAGS in makefile to fix handling of whitespace in value
* fix: set CPATH to find opencv2 in its commonly installed location
* fix: add missing go mod dropreplace for go-llama.cpp
* chore: remove opencv symlink from github workflows
2024-03-27 21:12:19 +01:00
Gianluca Boiano
7ef5f3b473
⬆️ Update M0Rf30/go-tiny-dream ( #1911 )
2024-03-27 21:12:04 +01:00
LocalAI [bot]
b500ceaf73
⬆️ Update ggerganov/llama.cpp ( #1904 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-26 23:21:54 +00:00
LocalAI [bot]
1395e505cd
⬆️ Update ggerganov/llama.cpp ( #1897 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-26 00:34:10 +01:00
LocalAI [bot]
42a4c86dca
⬆️ Update ggerganov/whisper.cpp ( #1896 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-26 00:33:46 +01:00
LocalAI [bot]
3e293f1465
⬆️ Update ggerganov/llama.cpp ( #1889 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-24 21:12:18 +00:00
LocalAI [bot]
0106c58181
⬆️ Update ggerganov/llama.cpp ( #1885 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-24 14:54:01 +01:00
LocalAI [bot]
a922119c41
⬆️ Update ggerganov/llama.cpp ( #1881 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-23 09:23:28 +01:00
Richard Palethorpe
643d85d2cc
feat(stores): Vector store backend ( #1795 )
...
Add simple vector store backend
Signed-off-by: Richard Palethorpe <io@richiejp.com>
2024-03-22 21:14:04 +01:00
Ettore Di Giacinto
4b1ee0c170
feat(aio): add tests, update model definitions ( #1880 )
2024-03-22 21:13:11 +01:00
LocalAI [bot]
dd84c29a3d
⬆️ Update ggerganov/whisper.cpp ( #1875 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-22 09:14:56 +01:00
LocalAI [bot]
07468c8786
⬆️ Update ggerganov/llama.cpp ( #1874 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-22 09:14:42 +01:00
Ettore Di Giacinto
abc9360dc6
feat(aio): entrypoint, update workflows ( #1872 )
2024-03-21 22:09:04 +01:00
Ettore Di Giacinto
e533dcf506
feat(functions/aio): all-in-one images, function template enhancements ( #1862 )
...
* feat(startup): allow to specify models from local files
* feat(aio): add Dockerfile, make targets, aio profiles
* feat(template): add Function and LastMessage
* add hermes2-pro-mistral
* update hermes2 definition
* feat(template): add sprig
* feat(template): expose FunctionCall
* feat(aio): switch llm for text
2024-03-21 01:12:20 +01:00
LocalAI [bot]
eeaf8c7ccd
⬆️ Update ggerganov/whisper.cpp ( #1867 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-20 22:26:29 +00:00
LocalAI [bot]
7e34dfdae7
⬆️ Update ggerganov/llama.cpp ( #1866 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-20 22:13:29 +00:00
LocalAI [bot]
e4bf51d5bd
⬆️ Update ggerganov/llama.cpp ( #1864 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-20 09:05:53 +01:00
LocalAI [bot]
ead61bf9d5
⬆️ Update ggerganov/llama.cpp ( #1857 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-19 00:03:17 +00:00
LocalAI [bot]
621541a92f
⬆️ Update ggerganov/whisper.cpp ( #1508 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-19 00:44:23 +01:00
Dave
ed5734ae25
test/fix: OSX Test Repair ( #1843 )
...
* test with gguf instead of ggml; update testPrompt to match; add a debugging line to the Dockerfile that I've found helpful recently
* fix testPrompt slightly
* Sad Experiment: Test GH runner without metal?
* break apart CGO_LDFLAGS
* switch runner
* upstream llama.cpp disables Metal on GitHub CI!
* missed a dir from clean-tests
* CGO_LDFLAGS
* tmate failure + NO_ACCELERATE
* whisper.cpp has a metal fix
* do the exact opposite of the name of this branch, but keep it around for unrelated fixes?
* add back newlines
* add tmate to linux for testing
* update fixtures
* timeout for tmate
2024-03-18 19:19:43 +01:00
Ettore Di Giacinto
b202bfaaa0
deps(whisper.cpp): update, fix cublas build ( #1846 )
...
fix(whisper.cpp): Add stubs and -lcuda
2024-03-18 15:56:53 +01:00
LocalAI [bot]
0eb0ac7dd0
⬆️ Update ggerganov/llama.cpp ( #1848 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-18 08:57:58 +01:00
cryptk
020ce29cd8
fix(make): allow to parallelize jobs ( #1845 )
...
* fix: clean up Makefile dependencies to allow for parallel builds
* refactor: remove old unused backend from Makefile
* fix: finish removing legacy backend, update piper
* fix: I broke llama... I fixed llama
* feat: give the tests and builds a few threads
* fix: ensure libraries are replaced before build, add dropreplace target
* Fix image build workflows
2024-03-17 15:39:20 +01:00
LocalAI [bot]
8967ed1601
⬆️ Update ggerganov/llama.cpp ( #1840 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-16 11:25:41 +00:00
LocalAI [bot]
5826fb8e6d
⬆️ Update mudler/go-piper ( #1844 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-15 23:51:03 +00:00
Dave
db199f61da
fix: osx build default.metallib ( #1837 )
...
* port osx fix from refactor pr to slim pr
* manually bump llama.cpp version to unstick CI?
2024-03-15 08:18:58 +00:00
LocalAI [bot]
44adbd2c75
⬆️ Update go-skynet/go-llama.cpp ( #1835 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-14 23:06:42 +00:00
Dave
45d520f913
fix: OSX Build Files for llama.cpp ( #1836 )
...
bot ate my changes, separate branch
2024-03-14 23:07:47 +01:00
LocalAI [bot]
f82065703d
⬆️ Update ggerganov/llama.cpp ( #1827 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-14 08:39:39 +01:00
LocalAI [bot]
5c5f07c1e7
⬆️ Update ggerganov/llama.cpp ( #1821 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-13 10:05:46 +01:00
LocalAI [bot]
8e57f4df31
⬆️ Update ggerganov/llama.cpp ( #1818 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-11 00:02:37 +01:00
LocalAI [bot]
a08cc5adbb
⬆️ Update ggerganov/llama.cpp ( #1816 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-10 09:32:09 +01:00
LocalAI [bot]
595a73fce4
⬆️ Update ggerganov/llama.cpp ( #1813 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-09 09:27:06 +01:00
LocalAI [bot]
dc919e08e8
⬆️ Update ggerganov/llama.cpp ( #1811 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-08 08:21:25 +01:00
Ettore Di Giacinto
5d1018495f
feat(intel): add diffusers/transformers support ( #1746 )
...
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back to calling python, fixed at the Dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
2024-03-07 14:37:45 +01:00
LocalAI [bot]
ad6fd7a991
⬆️ Update ggerganov/llama.cpp ( #1805 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-06 23:28:31 +01:00
LocalAI [bot]
e022b5959e
⬆️ Update mudler/go-stable-diffusion ( #1802 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 23:39:57 +00:00
LocalAI [bot]
db7f4955a1
⬆️ Update ggerganov/llama.cpp ( #1801 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 21:50:27 +00:00
LocalAI [bot]
c8e29033c2
⬆️ Update ggerganov/llama.cpp ( #1794 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 08:59:09 +01:00
LocalAI [bot]
d0bd961bde
⬆️ Update ggerganov/llama.cpp ( #1791 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-04 09:44:21 +01:00
LocalAI [bot]
b60a3fc879
⬆️ Update ggerganov/llama.cpp ( #1789 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-03 08:49:23 +01:00
LocalAI [bot]
daa0b8741c
⬆️ Update ggerganov/llama.cpp ( #1785 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-01 22:38:24 +00:00
Dave
1c312685aa
refactor: move remaining api packages to core ( #1731 )
...
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?
2024-03-01 16:19:53 +01:00
LocalAI [bot]
316de82f51
⬆️ Update ggerganov/llama.cpp ( #1779 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-29 22:33:30 +00:00