LocalAI [bot]
5c5f07c1e7
⬆️ Update ggerganov/llama.cpp ( #1821 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-13 10:05:46 +01:00
LocalAI [bot]
8e57f4df31
⬆️ Update ggerganov/llama.cpp ( #1818 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-11 00:02:37 +01:00
LocalAI [bot]
a08cc5adbb
⬆️ Update ggerganov/llama.cpp ( #1816 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-10 09:32:09 +01:00
LocalAI [bot]
595a73fce4
⬆️ Update ggerganov/llama.cpp ( #1813 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-09 09:27:06 +01:00
LocalAI [bot]
dc919e08e8
⬆️ Update ggerganov/llama.cpp ( #1811 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-08 08:21:25 +01:00
Ettore Di Giacinto
5d1018495f
feat(intel): add diffusers/transformers support ( #1746 )
...
* feat(intel): add diffusers support
* try to consume upstream container image
* Debug
* Manually install deps
* Map transformers/hf cache dir to modelpath if not specified
* fix(compel): update initialization, pass by all gRPC options
* fix: add dependencies, implement transformers for xpu
* base it from the oneapi image
* Add pillow
* set threads if specified when launching the API
* Skip conda install if intel
* defaults to non-intel
* ci: add to pipelines
* prepare compel only if enabled
* Skip conda install if intel
* fix cleanup
* Disable compel by default
* Install torch 2.1.0 with Intel
* Skip conda on some setups
* Detect python
* Quiet output
* Do not override system python with conda
* Prefer python3
* Fixups
* exllama2: do not install without conda (overrides pytorch version)
* exllama/exllama2: do not install if not using cuda
* Add missing dataset dependency
* Small fixups, symlink to python, add requirements
* Add neural_speed to the deps
* correctly handle model offloading
* fix: device_map == xpu
* go back to calling python, fixed at the dockerfile level
* Exllama2 restricted to only nvidia gpus
* Tokenizer to xpu
2024-03-07 14:37:45 +01:00
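A rough sketch of the cache-dir mapping mentioned in the bullets above ("Map transformers/hf cache dir to modelpath if not specified"). The helper name is hypothetical and not the actual LocalAI code; it only illustrates the idea using the standard Hugging Face environment variables.
```python
import os

def ensure_hf_cache(model_path: str) -> None:
    # Hypothetical helper: if the user did not configure the Hugging Face
    # cache location explicitly, point it at the model path so downloads
    # land next to the other models.
    if "HF_HOME" not in os.environ and "TRANSFORMERS_CACHE" not in os.environ:
        os.environ["HF_HOME"] = model_path
```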
LocalAI [bot]
ad6fd7a991
⬆️ Update ggerganov/llama.cpp ( #1805 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-06 23:28:31 +01:00
LocalAI [bot]
e022b5959e
⬆️ Update mudler/go-stable-diffusion ( #1802 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 23:39:57 +00:00
LocalAI [bot]
db7f4955a1
⬆️ Update ggerganov/llama.cpp ( #1801 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 21:50:27 +00:00
LocalAI [bot]
c8e29033c2
⬆️ Update ggerganov/llama.cpp ( #1794 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-05 08:59:09 +01:00
LocalAI [bot]
d0bd961bde
⬆️ Update ggerganov/llama.cpp ( #1791 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-04 09:44:21 +01:00
LocalAI [bot]
b60a3fc879
⬆️ Update ggerganov/llama.cpp ( #1789 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-03 08:49:23 +01:00
LocalAI [bot]
daa0b8741c
⬆️ Update ggerganov/llama.cpp ( #1785 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-03-01 22:38:24 +00:00
Dave
1c312685aa
refactor: move remaining api packages to core ( #1731 )
...
* core 1
* api/openai/files fix
* core 2 - core/config
* move over core api.go and tests to the start of core/http
* move over localai specific endpoints to core/http, begin the service/endpoint split there
* refactor big chunk on the plane
* refactor chunk 2 on plane, next step: port and modify changes to request.go
* easy fixes for request.go, major changes not done yet
* lintfix
* json tag lintfix?
* gitignore and .keep files
* strange fix attempt: rename the config dir?
2024-03-01 16:19:53 +01:00
LocalAI [bot]
316de82f51
⬆️ Update ggerganov/llama.cpp ( #1779 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-29 22:33:30 +00:00
LocalAI [bot]
c665898652
⬆️ Update donomii/go-rwkv.cpp ( #1771 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-28 23:50:27 +00:00
LocalAI [bot]
f651a660aa
⬆️ Update ggerganov/llama.cpp ( #1772 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-28 23:02:30 +01:00
LocalAI [bot]
c7e08813a5
⬆️ Update ggerganov/llama.cpp ( #1767 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-27 23:12:51 +01:00
LocalAI [bot]
d21a6b33ab
⬆️ Update ggerganov/llama.cpp ( #1756 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-27 18:07:51 +00:00
Ettore Di Giacinto
d6cf82aba3
fix(tests): re-enable tests after code move ( #1764 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-27 15:04:19 +01:00
Ettore Di Giacinto
bc5f5aa538
deps(llama.cpp): update ( #1759 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-26 13:18:44 +01:00
Sertaç Özercan
7f72a61104
ci: add stablediffusion to release ( #1757 )
...
Signed-off-by: Sertac Ozercan <sozercan@gmail.com>
2024-02-25 23:06:18 +00:00
LocalAI [bot]
8e45d47740
⬆️ Update ggerganov/llama.cpp ( #1753 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-25 10:03:19 +01:00
LocalAI [bot]
ff88c390bb
⬆️ Update ggerganov/llama.cpp ( #1750 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-24 00:06:46 +01:00
LocalAI [bot]
d825821a22
⬆️ Update ggerganov/llama.cpp ( #1740 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-23 00:07:15 +01:00
LocalAI [bot]
6fc122fa1a
⬆️ Update ggerganov/llama.cpp ( #1705 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-22 09:33:23 +00:00
Ettore Di Giacinto
8292781045
deps(llama.cpp): update, support Gemma models ( #1734 )
...
deps(llama.cpp): update
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-21 17:23:38 +01:00
Ettore Di Giacinto
54ec6348fa
deps(llama.cpp): update ( #1714 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-21 11:35:44 +01:00
fenfir
fb0a4c5d9a
Build docker container for ROCm ( #1595 )
...
* Dockerfile changes to build for ROCm
* Adjust linker flags for ROCm
* Update conda env for diffusers and transformers to use ROCm pytorch
* Update transformers conda env for ROCm
* ci: build hipblas images
* fixup rebase
* use self-hosted
Signed-off-by: mudler <mudler@localai.io>
* specify LD_LIBRARY_PATH only when BUILD_TYPE=hipblas
---------
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
2024-02-16 15:08:50 +01:00
Ettore Di Giacinto
5e155fb081
fix(python): pin exllama2 ( #1711 )
...
fix(python): pin python deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-14 21:44:12 +01:00
Ettore Di Giacinto
39a6b562cf
fix(llama.cpp): downgrade to a known working version ( #1706 )
...
sycl support is broken otherwise.
See upstream issue: https://github.com/ggerganov/llama.cpp/issues/5469
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-02-14 10:28:06 +01:00
LocalAI [bot]
02f6e18adc
⬆️ Update ggerganov/llama.cpp ( #1700 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-12 21:43:33 +00:00
LocalAI [bot]
4436e62cf1
⬆️ Update ggerganov/llama.cpp ( #1698 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-12 09:56:04 +01:00
LocalAI [bot]
58cdf97361
⬆️ Update ggerganov/llama.cpp ( #1694 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-11 10:01:11 +01:00
LocalAI [bot]
ef1306f703
⬆️ Update mudler/go-stable-diffusion ( #1674 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-09 21:59:15 +00:00
LocalAI [bot]
3196967995
⬆️ Update ggerganov/llama.cpp ( #1691 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-09 21:50:34 +00:00
LocalAI [bot]
fc8423392f
⬆️ Update ggerganov/llama.cpp ( #1688 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-09 00:02:23 +01:00
Ettore Di Giacinto
ddd21f1644
feat: Use ubuntu as base for container images, drop deprecated ggml-transformers backends ( #1689 )
...
* cleanup backends
* switch image to ubuntu 22.04
* adapt commands for ubuntu
* transformers cleanup
* no contrib on ubuntu
* Change test model to gguf
* ci: disable bark tests (too cpu-intensive)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* refinements
* use intel base image
* Makefile: Add docker targets
* Change test model
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-08 20:12:51 +01:00
Ettore Di Giacinto
e0632f2ce2
fix(llama.cpp): downgrade to fix sycl build
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-02-07 00:16:52 +01:00
LocalAI [bot]
d8b17795d7
⬆️ Update ggerganov/llama.cpp ( #1683 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-06 09:26:01 +01:00
LocalAI [bot]
8ace0a9ba7
⬆️ Update ggerganov/llama.cpp ( #1681 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-04 21:59:14 +00:00
Ettore Di Giacinto
98ad93d53e
Drop ggml-based gpt2 and starcoder (supported by llama.cpp) ( #1679 )
...
* Drop ggml-based gpt2 and starcoder (supported by llama.cpp)
* Update compatibility table
2024-02-04 13:15:51 +01:00
LocalAI [bot]
38e4ec0b2a
⬆️ Update ggerganov/llama.cpp ( #1678 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-04 00:55:12 +01:00
Ettore Di Giacinto
df13ba655c
Drop old falcon backend (deprecated) ( #1675 )
...
Drop old falcon backend
2024-02-03 13:01:13 +01:00
LocalAI [bot]
7678b25755
⬆️ Update ggerganov/llama.cpp ( #1673 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-02 21:46:26 +00:00
LocalAI [bot]
c87ca4f320
⬆️ Update ggerganov/llama.cpp ( #1669 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-02-02 19:14:03 +01:00
Ettore Di Giacinto
1c57f8d077
feat(sycl): Add support for Intel GPUs with sycl ( #1647 ) ( #1660 )
...
* feat(sycl): Add sycl support (#1647 )
* onekit: install without prompts
* set cmake args only in grpc-server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* fixup sycl source env
* Cleanup docs
* ci: runs on self-hosted
* fix typo
* bump llama.cpp
* llama.cpp: update server
* adapt to upstream changes
* adapt to upstream changes
* docs: add sycl
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-02-01 19:21:52 +01:00
LocalAI [bot]
16cebf0390
⬆️ Update ggerganov/llama.cpp ( #1665 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-30 23:38:05 +00:00
LocalAI [bot]
c1bae1ee81
⬆️ Update ggerganov/llama.cpp ( #1656 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-30 00:43:36 +01:00
LocalAI [bot]
abd678e147
⬆️ Update ggerganov/llama.cpp ( #1655 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-28 09:24:44 +01:00
LocalAI [bot]
f928899338
⬆️ Update ggerganov/llama.cpp ( #1652 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-27 00:13:38 +01:00
LocalAI [bot]
ac19998e5e
⬆️ Update ggerganov/llama.cpp ( #1644 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-26 00:13:39 +01:00
LocalAI [bot]
3733250b3c
⬆️ Update ggerganov/llama.cpp ( #1642 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-24 22:51:59 +01:00
LocalAI [bot]
7690caf020
⬆️ Update ggerganov/llama.cpp ( #1632 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-23 23:07:51 +01:00
LocalAI [bot]
efe2883c5d
⬆️ Update ggerganov/llama.cpp ( #1626 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-22 23:22:01 +01:00
LocalAI [bot]
47237c7c3c
⬆️ Update ggerganov/llama.cpp ( #1623 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-22 08:54:06 +01:00
LocalAI [bot]
6a88b030ea
⬆️ Update ggerganov/llama.cpp ( #1620 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-20 23:34:46 +01:00
LocalAI [bot]
b2dc5fbd7e
⬆️ Update ggerganov/llama.cpp ( #1612 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-20 00:38:14 +01:00
Ettore Di Giacinto
9e653d6abe
feat: 🐍 add mamba support ( #1589 )
...
feat(mamba): Initial import
This is a first iteration of the mamba backend, loosely based on
mamba-chat (https://github.com/havenhq/mamba-chat ).
2024-01-19 23:42:50 +01:00
Ettore Di Giacinto
3a253c6cd7
Makefile: allow to build without GRPC_BACKENDS ( #1607 )
2024-01-19 15:38:43 +01:00
LocalAI [bot]
23d64ac53a
⬆️ Update ggerganov/llama.cpp ( #1604 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-18 21:20:50 +00:00
LocalAI [bot]
b5c93f176a
⬆️ Update ggerganov/llama.cpp ( #1599 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-18 14:39:30 +01:00
LocalAI [bot]
1aaf88098d
⬆️ Update ggerganov/llama.cpp ( #1597 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-17 09:27:02 +01:00
LocalAI [bot]
dfb7c3b1aa
⬆️ Update ggerganov/llama.cpp ( #1594 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-16 14:47:57 +01:00
Dionysius
b41eb5e1f3
prepend built binaries in PATH for BUILD_GRPC_FOR_BACKEND_LLAMA ( #1593 )
...
prepend built binaries in PATH
2024-01-16 14:47:47 +01:00
LocalAI [bot]
9c2d264979
⬆️ Update ggerganov/llama.cpp ( #1590 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-15 09:01:07 +01:00
LocalAI [bot]
b996c3198c
⬆️ Update ggerganov/llama.cpp ( #1587 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-14 09:46:47 +00:00
Dionysius
441e2965ff
move BUILD_GRPC_FOR_BACKEND_LLAMA logic to makefile: errors in this section now immediately fail the build ( #1576 )
...
* move BUILD_GRPC_FOR_BACKEND_LLAMA option to makefile
* review: oversight, fixup cmake_args
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Signed-off-by: Dionysius <1341084+dionysius@users.noreply.github.com>
---------
Signed-off-by: Dionysius <1341084+dionysius@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2024-01-13 10:08:26 +01:00
LocalAI [bot]
cbe9a03e3c
⬆️ Update ggerganov/llama.cpp ( #1583 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-12 23:04:04 +01:00
LocalAI [bot]
4ee7e73d00
⬆️ Update ggerganov/llama.cpp ( #1578 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-12 16:04:33 +01:00
LocalAI [bot]
faf7c1c325
⬆️ Update ggerganov/llama.cpp ( #1573 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-11 08:41:32 +01:00
LocalAI [bot]
58288494d6
⬆️ Update ggerganov/llama.cpp ( #1568 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-10 10:18:57 +01:00
Dionysius
72283dc744
minor: replace shell pwd in Makefile with CURDIR for better windows compatibility ( #1571 )
...
replace shell pwd in Makefile with CURDIR
2024-01-10 08:39:50 +00:00
LocalAI [bot]
2e890b3838
⬆️ Update ggerganov/llama.cpp ( #1563 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-09 08:48:40 +01:00
LocalAI [bot]
574fa67bdc
⬆️ Update ggerganov/llama.cpp ( #1558 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-08 00:38:03 +01:00
LocalAI [bot]
0a06c80801
⬆️ Update ggerganov/llama.cpp ( #1547 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-05 23:27:51 +01:00
LocalAI [bot]
d48faf35ab
⬆️ Update ggerganov/llama.cpp ( #1544 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-04 00:08:03 +01:00
LocalAI [bot]
7e1d8c489b
⬆️ Update ggerganov/llama.cpp ( #1533 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-03 08:43:35 +01:00
LocalAI [bot]
de28867374
⬆️ Update ggerganov/llama.cpp ( #1531 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2024-01-02 00:28:22 +00:00
Ettore Di Giacinto
fd48cb6506
deps(llama.cpp): update and sync grpc server ( #1527 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2024-01-01 14:39:31 +01:00
LocalAI [bot]
27686ff20b
⬆️ Update ggerganov/llama.cpp ( #1518 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-31 00:19:08 +00:00
LocalAI [bot]
5b0dc20e4c
⬆️ Update ggerganov/llama.cpp ( #1509 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-30 09:19:07 +00:00
LocalAI [bot]
6428003c3b
⬆️ Update ggerganov/llama.cpp ( #1503 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-28 22:44:50 +01:00
LocalAI [bot]
2eac4f93bb
⬆️ Update ggerganov/llama.cpp ( #1501 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-28 00:51:29 +00:00
LocalAI [bot]
c45f581c47
⬆️ Update ggerganov/llama.cpp ( #1496 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-26 19:15:58 -05:00
LocalAI [bot]
4ca649154d
⬆️ Update ggerganov/llama.cpp ( #1495 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-26 17:53:59 +00:00
LocalAI [bot]
9789f5a96a
⬆️ Update ggerganov/llama.cpp ( #1492 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-25 02:43:35 -05:00
Gianluca Boiano
cae7b197ec
feat: add tiny dream stable diffusion support ( #1283 )
...
Signed-off-by: Gianluca Boiano <morf3089@gmail.com>
2023-12-24 19:27:24 +00:00
Ettore Di Giacinto
95eb72bfd3
feat: add 🐸 coqui ( #1489 )
...
* feat: add coqui
* docs: update news
2023-12-24 19:38:54 +01:00
LocalAI [bot]
eaa899df63
⬆️ Update ggerganov/whisper.cpp ( #1483 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-24 02:53:29 -05:00
LocalAI [bot]
16ed0bd0c5
⬆️ Update ggerganov/llama.cpp ( #1482 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-24 02:53:12 -05:00
LocalAI [bot]
51215d480a
⬆️ Update ggerganov/whisper.cpp ( #1480 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-23 09:11:40 +00:00
LocalAI [bot]
987f0041d3
⬆️ Update ggerganov/llama.cpp ( #1469 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-23 00:05:56 +00:00
LocalAI [bot]
a29de9bf50
⬆️ Update donomii/go-rwkv.cpp ( #1478 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-22 15:02:32 +01:00
LocalAI [bot]
9bd5831fda
⬆️ Update ggerganov/whisper.cpp ( #1479 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-22 08:26:39 +01:00
Ettore Di Giacinto
9ae47d37e9
pin go-rwkv
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-12-21 08:42:40 +01:00
Ettore Di Giacinto
2b3ad7f41c
Revert " ⬆️ Update donomii/go-rwkv.cpp" ( #1474 )
...
Revert "⬆️ Update donomii/go-rwkv.cpp (#1470 )"
This reverts commit 51db10b18f.
2023-12-21 08:38:50 +01:00
LocalAI [bot]
51db10b18f
⬆️ Update donomii/go-rwkv.cpp ( #1470 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-21 08:35:31 +01:00
LocalAI [bot]
23eced1644
⬆️ Update ggerganov/llama.cpp ( #1461 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-20 18:02:52 +01:00
LocalAI [bot]
7741a6e75d
⬆️ Update ggerganov/whisper.cpp ( #1462 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-20 00:21:49 +00:00
LocalAI [bot]
d4210db0c9
⬆️ Update ggerganov/llama.cpp ( #1457 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-19 00:42:19 +01:00
LocalAI [bot]
64a8471dd5
⬆️ Update ggerganov/llama.cpp ( #1455 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-18 08:55:29 +01:00
LocalAI [bot]
86a8df1c8b
⬆️ Update ggerganov/llama.cpp ( #1450 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-17 19:02:28 +01:00
LocalAI [bot]
2f7beb6744
⬆️ Update ggerganov/whisper.cpp ( #1434 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-16 09:22:28 +01:00
LocalAI [bot]
ab0370a0b9
⬆️ Update ggerganov/llama.cpp ( #1429 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-16 09:22:13 +01:00
LocalAI [bot]
3f9a41684a
⬆️ Update mudler/go-piper ( #1441 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-16 09:21:56 +01:00
Ettore Di Giacinto
fb6a5bc620
update(llama.cpp): update server, correctly propagate LLAMA_VERSION ( #1440 )
...
* fix(Makefile): correctly propagate LLAMA_VERSION
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
* update grpc-server.cpp
---------
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-12-15 08:26:48 +01:00
Ettore Di Giacinto
7641f92cde
feat(diffusers): update, add autopipeline, controlnet ( #1432 )
...
* feat(diffusers): update, add autopipeline, controlnet
* tests with AutoPipeline
* simplify logic
2023-12-13 19:20:22 +01:00
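For context, this is roughly what the AutoPipeline convenience referenced above looks like in diffusers. A minimal sketch only; the model id and prompt are placeholders, not what LocalAI ships.
```python
from diffusers import AutoPipelineForText2Image

# AutoPipeline resolves the correct pipeline class for the checkpoint,
# which is the convenience the entry above adds support for.
pipe = AutoPipelineForText2Image.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe("an astronaut riding a horse").images[0]
image.save("out.png")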
LocalAI [bot]
72325fd0a3
⬆️ Update ggerganov/whisper.cpp ( #1430 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-13 08:37:02 +01:00
LocalAI [bot]
86fac272d8
⬆️ Update ggerganov/llama.cpp ( #1391 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-12 18:22:48 +01:00
LocalAI [bot]
4a965e1b0e
⬆️ Update ggerganov/whisper.cpp ( #1418 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-11 08:24:48 +01:00
Ettore Di Giacinto
48e5380e45
tests: add diffusers tests ( #1419 )
2023-12-11 08:20:34 +01:00
LocalAI [bot]
831418612b
⬆️ Update mudler/go-piper ( #1400 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-10 08:50:26 +01:00
LocalAI [bot]
89ff12309d
⬆️ Update ggerganov/whisper.cpp ( #1390 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-09 09:23:40 +01:00
Ettore Di Giacinto
887b3dff04
feat: cuda transformers ( #1401 )
...
* Use cuda in transformers if available
tensorflow probably needs a different check.
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
* feat: expose CUDA at top level
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* tests: add to tests and create workflow for py extra backends
* doc: update note on how to use core images
---------
Signed-off-by: Erich Schubert <kno10@users.noreply.github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Erich Schubert <kno10@users.noreply.github.com>
2023-12-08 15:45:04 +01:00
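The "Use cuda in transformers if available" change boils down to a device check like the following. Illustrative sketch; the model name is a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM

# Prefer the GPU when one is present, otherwise stay on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
```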
Dave
8b6e601405
Feat: new backend: transformers-musicgen ( #1387 )
...
Transformers-MusicGen
---------
Signed-off-by: Dave <dave@gray101.com>
2023-12-08 10:01:02 +01:00
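The new backend presumably wraps the MusicGen models exposed by transformers; a minimal sketch of that upstream API (model id and prompt are examples only, not backend code).
```python
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# Generate a short audio waveform from a text prompt.
inputs = processor(text=["lo-fi chill beat"], padding=True, return_tensors="pt")
audio_values = model.generate(**inputs, max_new_tokens=256)
```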
Ettore Di Giacinto
6011911746
fix(piper): pin petals, phonemize and espeak ( #1393 )
...
* fix: pin phonemize and espeak
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: pin petals deps
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-12-07 22:58:41 +01:00
LocalAI [bot]
997119c27a
⬆️ Update ggerganov/llama.cpp ( #1385 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-05 15:44:24 +01:00
Ettore Di Giacinto
2b2d6673ff
exllama(v2): fix exllamav1, add exllamav2 ( #1384 )
...
* fix(exllama): fix exllama deps with anaconda
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(exllamav2): add exllamav2 backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-12-05 08:15:37 +01:00
LocalAI [bot]
67966b623c
⬆️ Update ggerganov/llama.cpp ( #1379 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-04 18:36:34 +01:00
LocalAI [bot]
9fc3fd04be
⬆️ Update ggerganov/whisper.cpp ( #1378 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-04 18:36:22 +01:00
LocalAI [bot]
3d71bc9b64
⬆️ Update ggerganov/whisper.cpp ( #1227 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-03 01:16:07 +01:00
Felix Erkinger
3923024d84
update whisper_cpp with CUBLAS, HIPBLAS, METAL, OPENBLAS, CLBLAST support ( #1302 )
...
update whisper_cpp to 1.5.1 with OPENBLAS, METAL, HIPBLAS, CUBLAS, CLBLAST support
2023-12-02 10:10:18 +00:00
LocalAI [bot]
42a80d1b8b
⬆️ Update ggerganov/llama.cpp ( #1375 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-12-02 00:09:48 +00:00
Dave
e94a34be8c
fix: OSX Build Fix Part 1: Metal ( #1365 )
...
* Make Metal the default on OSX, simplify osx-specific code, and fix the file copy error.
* fix endif / comment
2023-11-30 19:50:50 +01:00
LocalAI [bot]
9f708ff318
⬆️ Update ggerganov/llama.cpp ( #1363 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-30 00:06:28 +01:00
LocalAI [bot]
519285bf38
⬆️ Update ggerganov/llama.cpp ( #1351 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-29 08:29:03 +01:00
Gianluca Boiano
687730a7f5
fix: go-piper add libucd at linking time ( #1357 )
...
Signed-off-by: Gianluca Boiano <morf3089@gmail.com>
2023-11-28 19:55:09 +00:00
Ettore Di Giacinto
b7821361c3
feat(petals): add backend ( #1350 )
...
* feat(petals): add backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-28 09:01:46 +01:00
LocalAI [bot]
63e1f8fffd
⬆️ Update ggerganov/llama.cpp ( #1345 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-27 09:02:19 +01:00
LocalAI [bot]
9482acfdfc
⬆️ Update ggerganov/llama.cpp ( #1340 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-26 09:27:42 +01:00
Ettore Di Giacinto
6f34e8f044
fix: propagate CMAKE_ARGS when building grpc ( #1334 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-25 13:53:51 +01:00
Ettore Di Giacinto
6d187af643
fix: handle grpc and llama-cpp with REBUILD=true ( #1328 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-25 08:48:24 +01:00
LocalAI [bot]
97e9598c79
⬆️ Update ggerganov/llama.cpp ( #1330 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-24 23:45:05 +01:00
LocalAI [bot]
b1a20effde
⬆️ Update ggerganov/llama.cpp ( #1323 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-24 08:32:36 +01:00
Dave
69f53211a1
Feat: OSX Local Codesigning ( #1319 )
...
* stage makefile
* OSX local code signing and entitlements file to fix incoming connections prompt
2023-11-23 15:22:54 +01:00
LocalAI [bot]
763f94ca80
⬆️ Update ggerganov/llama.cpp ( #1313 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-22 08:37:11 +01:00
LocalAI [bot]
480b14c8dc
⬆️ Update ggerganov/llama.cpp ( #1310 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-21 00:20:37 +01:00
Ettore Di Giacinto
92cbc4d516
feat(transformers): add embeddings with Automodel ( #1308 )
...
* Update huggingface.py
Switch from SentenceTransformer to AutoModel in order to set trust_remote_code, which is needed to use the encode method with embedding models like jina-v2
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
* feat(transformers): split in separate backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Co-authored-by: Lucas Hänke de Cansino <lhc@next-boss.eu>
2023-11-20 21:21:17 +01:00
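A minimal sketch of the switch described above, assuming an embedding model such as jina-embeddings-v2 whose custom encode method requires trust_remote_code; the model id is just an example.
```python
from transformers import AutoModel

# AutoModel lets us pass trust_remote_code, which such embedding models
# need in order to load their custom encode() implementation.
model = AutoModel.from_pretrained(
    "jinaai/jina-embeddings-v2-base-en", trust_remote_code=True
)
embeddings = model.encode(["LocalAI computes embeddings locally"])
```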
LocalAI [bot]
ff9afdb0fe
⬆️ Update ggerganov/llama.cpp ( #1306 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-20 08:16:00 +01:00
LocalAI [bot]
3e35b20a02
⬆️ Update mudler/go-piper ( #1305 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-19 09:01:40 +01:00
LocalAI [bot]
9ea371d6cd
⬆️ Update ggerganov/llama.cpp ( #1304 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-19 08:49:05 +01:00
LocalAI [bot]
b5af87fc6c
⬆️ Update ggerganov/llama.cpp ( #1300 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-18 08:19:10 +01:00
Ettore Di Giacinto
3c9544b023
refactor: rename llama-stable to llama-ggml ( #1287 )
...
* refactor: rename llama-stable to llama-ggml
* Makefile: get sources in sources/
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup path
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixup sources
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixups sd
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update SD
* fixup
* fixup: create piper libdir also when not built
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix make target on linux test
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-18 08:18:43 +01:00
LocalAI [bot]
8c5436cbed
⬆️ Update ggerganov/llama.cpp ( #1297 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-17 08:45:22 +01:00
LocalAI [bot]
2addb9f99a
⬆️ Update ggerganov/llama.cpp ( #1291 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-16 08:20:26 +01:00
LocalAI [bot]
733b612eb2
⬆️ Update ggerganov/llama.cpp ( #1288 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-15 18:41:09 +01:00
LocalAI [bot]
991ecce004
⬆️ Update ggerganov/llama.cpp ( #1285 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-14 18:23:09 +01:00
Ettore Di Giacinto
ad0e30bca5
refactor: move backends into the backends directory ( #1279 )
...
* refactor: move backends into the backends directory
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* refactor: move main close to implementation for every backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-13 22:40:16 +01:00
LocalAI [bot]
55461188a4
⬆️ Update ggerganov/llama.cpp ( #1282 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-13 00:48:26 +00:00
LocalAI [bot]
5d2405fdef
⬆️ Update ggerganov/llama.cpp ( #1280 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-11 23:26:54 +00:00
LocalAI [bot]
e9f1268225
⬆️ Update ggerganov/llama.cpp ( #1272 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-11 20:00:28 +00:00
Gianluca Boiano
bde87d00b9
deps(go-piper): update to 2023.11.6-3 ( #1257 )
...
Signed-off-by: Gianluca Boiano <morf3089@gmail.com>
2023-11-11 18:40:26 +01:00
LocalAI [bot]
3b4c5d54d8
⬆️ Update ggerganov/llama.cpp ( #1265 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-10 08:50:42 +01:00
LocalAI [bot]
4e16bc2f13
⬆️ Update ggerganov/llama.cpp ( #1256 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-08 08:21:12 +01:00
LocalAI [bot]
562ac62f59
⬆️ Update ggerganov/llama.cpp ( #1242 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-07 08:37:55 +01:00
Diego
e7fa2e06f8
Fixes the bug 1196 ( #1232 )
...
* Current state of the branch.
* Now gRPC is built only when the BUILD_GRPC_FOR_BACKEND_LLAMA variable is defined.
* Now the local compilation of gRPC is executed on BUILD_GRPC_FOR_BACKEND_LLAMA.
* Revised the Makefile.
* Removed replace directives in go.mod.
---------
Signed-off-by: Diego <38375572+diego-minguzzi@users.noreply.github.com>
Co-authored-by: lunamidori5 <118759930+lunamidori5@users.noreply.github.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-11-06 19:07:46 +01:00
Ettore Di Giacinto
622aaa9f7d
dockerfile: avoid pushing a big layer
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-05 10:31:33 +01:00
Ettore Di Giacinto
7b1ee203ce
tests: re-add flake-attempts
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-11-05 09:01:03 +01:00
Ettore Di Giacinto
f347e51927
feat(conda): conda environments ( #1144 )
...
* feat(autogptq): add a separate conda environment for autogptq (#1137 )
**Description**
This PR is related to #1117
**Notes for Reviewers**
Here we lock down the versions of the dependencies to make sure the backend keeps working even if newer versions of those dependencies are released.
I changed the import order according to pylint, without changing the code logic, so it should be fine.
I will investigate writing test cases for every backend. I can run the service in my environment, but there is currently no way to test it, so I am not fully confident in it.
Add a README.md in the `grpc` root with the common commands for
creating the `conda` environment; it can also be used as a reference
when documenting additional gRPC backends.
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* [Extra backend] Add separate environment for ttsbark (#1141 )
**Description**
This PR relates to #1117
**Notes for Reviewers**
Same as the latest PR:
* The code is also changed, but only the import order; some code comments are also added.
* Add a configuration for the `conda` environment.
* Add a simple test case to check that the service can start up in the
current `conda` environment. It succeeds in VS Code, but it does not work
out of the box in the terminal, so it is hard to say how useful the test
case really is.
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): add make target and entrypoints for the dockerfile
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): Add separate conda env for diffusers (#1145 )
**Description**
This PR relates to #1117
**Notes for Reviewers**
* Add `conda` env `diffusers.yml`
* Add Makefile to create it automatically
* Add `run.sh` to support running as a extra backend
* Also adding it to the main Dockerfile
* Add make command in the root Makefile
* Testing the server, it can start up under the env
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): Add separate env for vllm (#1148 )
**Description**
This PR is related to #1117
**Notes for Reviewers**
* The gRPC server can be started as normal
* The test case can be triggered in VSCode
* Same as the other PRs of this kind: add `vllm.yml` and a Makefile, add
`run.sh` to the main Dockerfile, and add the command to the main Makefile
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): Add separate env for huggingface (#1146 )
**Description**
This PR is related to #1117
**Notes for Reviewers**
* Add conda env `huggingface.yml`
* Change the import order, and also remove the unused packages
* Add `run.sh` and `make command` to the main Dockerfile and Makefile
* Add test cases for it. They can be triggered and succeed under the VSCode
Python extension, but they hang when run with `python -m unittest
test_huggingface.py` in the terminal
```
Running tests (unittest): /workspaces/LocalAI/extra/grpc/huggingface
Running tests: /workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_embedding
/workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_load_model
/workspaces/LocalAI/extra/grpc/huggingface/test_huggingface.py::TestBackendServicer::test_server_startup
./test_huggingface.py::TestBackendServicer::test_embedding Passed
./test_huggingface.py::TestBackendServicer::test_load_model Passed
./test_huggingface.py::TestBackendServicer::test_server_startup Passed
Total number of tests expected to run: 3
Total number of tests run: 3
Total number of tests passed: 3
Total number of tests failed: 0
Total number of tests failed with errors: 0
Total number of tests skipped: 0
Finished running tests!
```
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): Add the separate conda env for VALL-E X (#1147 )
**Description**
This PR is related to #1117
**Notes for Reviewers**
* The gRPC server cannot start up
```
(ttsvalle) @Aisuko ➜ /workspaces/LocalAI (feat/vall-e-x) $ /opt/conda/envs/ttsvalle/bin/python /workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py
Traceback (most recent call last):
File "/workspaces/LocalAI/extra/grpc/vall-e-x/ttsvalle.py", line 14, in <module>
from utils.generation import SAMPLE_RATE, generate_audio, preload_models
ModuleNotFoundError: No module named 'utils'
```
The installation steps follow
https://github.com/Plachtaa/VALL-E-X#-installation below:
* Under the `ttsvalle` conda env
```
git clone https://github.com/Plachtaa/VALL-E-X.git
cd VALL-E-X
pip install -r requirements.txt
```
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: set image type
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* feat(conda): Add separate conda env for exllama (#1149 )
Add separate env for exllama
Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Setup conda
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Set image_type arg
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: prepare only conda env in tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Dockerfile: comment manual pip calls
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* conda: add conda to PATH
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fixes
* add shebang
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* file perms
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
* Install new conda in the worker
* Disable GPU tests for now until the worker is back
* Rename workflows
* debug
* Fixup conda install
* fixup(wrapper): pass args
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: Aisuko <urakiny@gmail.com>
2023-11-04 15:30:32 +01:00
LocalAI [bot]
9b17af18b3
⬆️ Update ggerganov/llama.cpp ( #1236 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-03 19:23:53 +01:00
LocalAI [bot]
5b596ea605
⬆️ Update ggerganov/llama.cpp ( #1231 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-11-01 12:44:34 +00:00
LocalAI [bot]
6ef7ea2635
⬆️ Update ggerganov/llama.cpp ( #1207 )
...
Signed-off-by: GitHub <noreply@github.com>
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-30 08:00:36 +00:00
Ettore Di Giacinto
d9a42cc4c5
ci: run only cublas on selfhosted ( #1224 )
...
* ci: run only cublas on selfhosted
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update git
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* change testing embeddings model link
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-29 22:04:43 +01:00
Ettore Di Giacinto
c62504ac92
cleanup: drop bloomz and ggllm as now supported by llama.cpp ( #1217 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-26 07:43:31 +02:00
Ettore Di Giacinto
f227e918f9
feat(llama.cpp): Bump llama.cpp, adapt grpc server ( #1211 )
...
* feat(llama.cpp): Bump llama.cpp, adapt grpc server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-25 20:56:25 +02:00
LocalAI [bot]
9196583651
⬆️ Update ggerganov/llama.cpp ( #1204 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-23 19:06:39 +02:00
LocalAI [bot]
c377e61ff0
⬆️ Update go-skynet/go-llama.cpp ( #1156 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-22 08:55:44 +02:00
Ettore Di Giacinto
1a7be035d3
fix(Makefile): build all backends if none is specified
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 11:34:59 +02:00
Ettore Di Giacinto
004baaa30f
feat(llama.cpp): update
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-21 11:04:03 +02:00
Ettore Di Giacinto
432513c3ba
ci: add GPU tests ( #1095 )
...
* ci: test GPU
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: show logs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* Debug
* debug
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* split extra/core images
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* split extra/core images
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* consider runner host dir
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-19 13:50:40 +02:00
Ettore Di Giacinto
128694213f
feat: llama.cpp gRPC C++ backend ( #1170 )
...
* wip: llama.cpp c++ gRPC server
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* make it work, attach it to the build process
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* update deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* fix: add protobuf dep
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* try fix protobuf on cmake
* cmake: workarounds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* add packages
* cmake: use fixed version of grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cmake(grpc): install locally
* install grpc
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* install required deps for grpc on debian bullseye
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* debug
* debug
* Fixups
* no need to install cmake manually
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* ci: fixup macOS
* use brew whenever possible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* macOS fixups
* debug
* fix container build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* workaround
* try mac
https://stackoverflow.com/questions/23905661/on-mac-g-clang-fails-to-search-usr-local-include-and-usr-local-lib-by-def
* Disable temp. arm64 docker image builds
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-10-16 21:46:29 +02:00
LocalAI [bot]
07249c0446
⬆️ Update go-skynet/go-llama.cpp ( #1136 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-05 17:35:21 +02:00
LocalAI [bot]
e660721a0c
⬆️ Update go-skynet/go-llama.cpp ( #1130 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-10-04 16:54:20 +02:00
LocalAI [bot]
46660a16a0
⬆️ Update go-skynet/go-llama.cpp ( #1106 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-29 23:55:12 +00:00
65a
55e38fea0e
feat(llama.cpp): enable ROCm/HIPBLAS support ( #1100 )
...
**Description**
This PR fixes lack of HIPBLAS support in LocalAI.
**Notes for Reviewers**
This PR builds on https://github.com/go-skynet/go-llama.cpp/pull/235 to
enable ROCm/HIPBLAS support for gguf models running under the llama.cpp
backend (not the stable ggml one). It can be enabled by using
BUILD_TYPE=hipblas. This was tested on a gfx1100 card, but should work
for gfx900, gfx1030, and other cards. Card support can be set with the
AMDGPU_TARGETS environment variable.
---------
Signed-off-by: 65a <65a@63bit.net>
2023-09-28 21:42:20 +02:00
Ettore Di Giacinto
601e54000d
fix(llama.cpp): update, run go mod tidy ( #1088 )
...
**Description**
This PR supersedes #1086
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-22 00:45:02 +02:00
ci-robbot [bot]
7bdf707dd3
⬆️ Update go-skynet/go-llama.cpp ( #1084 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-20 19:48:38 +02:00
ci-robbot [bot]
a8fb4d23f8
⬆️ Update go-skynet/go-llama.cpp ( #1062 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-17 08:38:28 +02:00
ci-robbot [bot]
8590f5a599
⬆️ Update go-skynet/go-llama.cpp ( #1048 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-14 10:40:36 +02:00
ci-robbot [bot]
0b28220f2b
⬆️ Update go-skynet/go-llama.cpp ( #1043 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-13 09:16:33 +02:00
ci-robbot [bot]
255c31bddf
⬆️ Update go-skynet/go-llama.cpp ( #1027 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-11 09:42:54 +02:00
Ettore Di Giacinto
c0bb5c4bf6
feat(vllm): Initial vllm backend implementation
...
Related to: https://github.com/go-skynet/LocalAI/issues/1015
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-09 17:03:23 +02:00
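For reference, this is the upstream vLLM API that an initial backend like this would sit on top of. A sketch under assumed defaults; the model id and prompt are placeholders.
```python
from vllm import LLM, SamplingParams

# Load a model and run a single batched generation call.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)
outputs = llm.generate(["LocalAI is"], params)
print(outputs[0].outputs[0].text)
```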
Ettore Di Giacinto
cc74fc93b4
feat(llama.cpp): update ( #1024 )
...
**Description**
This PR fixes #
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-08 18:38:22 +02:00
Ettore Di Giacinto
dc307a1cc0
feat: add vall-e-x ( #1007 )
...
**Description**
This PR fixes #985
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-09-04 19:25:23 +02:00
ci-robbot [bot]
b3eb5c860b
⬆️ Update go-skynet/go-llama.cpp ( #1005 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-04 19:11:41 +02:00
Bo-Yi Wu
1c2f7409e3
chore(deps): remove unused package ( #1003 )
...
**Description**
Just remove an unused Golang package and update the formatting in the Makefile
Signed-off-by: appleboy <appleboy.tw@gmail.com>
2023-09-04 19:11:28 +02:00
ci-robbot [bot]
0e7e8eec53
⬆️ Update go-skynet/go-llama.cpp ( #1002 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-03 10:00:01 +02:00
ci-robbot [bot]
c332499252
⬆️ Update go-skynet/go-llama.cpp ( #996 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-09-02 09:54:50 +02:00
Ettore Di Giacinto
1ff30034e8
fix(deps): update go-llama.cpp ( #980 )
...
**Description**
This PR bumps llama.cpp (adding support for gguf v2) and changes the
default test model.
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-30 23:01:55 +02:00
ci-robbot [bot]
cc84dfd50f
⬆️ Update go-skynet/go-llama.cpp ( #968 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-28 08:23:51 +02:00
Ettore Di Giacinto
44bc7aa3d0
feat: Allow to load lora adapters for llama.cpp ( #955 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-25 21:58:46 +02:00
ci-robbot [bot]
7f0c88ed3e
⬆️ Update go-skynet/go-llama.cpp ( #954 )
...
Bump of go-skynet/go-llama.cpp version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-25 18:45:40 +02:00
ci-robbot [bot]
d15508f52c
⬆️ Update nomic-ai/gpt4all ( #953 )
...
Bump of nomic-ai/gpt4all version
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-25 01:19:48 +02:00
Ettore Di Giacinto
1120847f72
feat: bump llama.cpp, add gguf support ( #943 )
...
**Description**
This PR syncs up the `llama` backend to use `gguf`
(https://github.com/go-skynet/go-llama.cpp/pull/180). It also adds
`llama-stable` to the targets so we can still load ggml models. It adapts the
current tests to use the `llama-backend` for ggml and uses a `gguf`
model to run tests on the new backend.
To consume the new version of go-llama.cpp, it also bumps Go to
1.21 (images, pipelines, etc.).
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-24 01:18:58 +02:00
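A minimal sketch of what the gguf switch looks like from a client's point of view: querying a gguf-backed model through the OpenAI-compatible chat endpoint. The listen address, the `/v1/chat/completions` path, and the model name `my-gguf-model` are assumptions made for illustration only.

```go
// Hedged sketch: chat completion against a model assumed to be configured
// to load a .gguf file via the llama backend. Not the project's own client.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model": "my-gguf-model", // hypothetical model name pointing at a .gguf file
		"messages": []map[string]string{
			{"role": "user", "content": "Hello from the gguf-backed llama backend"},
		},
	})
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // prints the raw JSON response
}
```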
Ettore Di Giacinto
ab5b75eb01
feat: add llama-stable backend ( #932 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-20 16:35:42 +02:00
ci-robbot [bot]
dbb1f86455
⬆️ Update nomic-ai/gpt4all ( #911 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-19 10:17:41 +02:00
Dave
8cb1061c11
Usage Features ( #863 )
2023-08-18 21:23:14 +02:00
ci-robbot [bot]
0c73a637f1
⬆️ Update nomic-ai/gpt4all ( #899 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-16 01:11:54 +02:00
ci-robbot [bot]
63d91af555
⬆️ Update nomic-ai/gpt4all ( #878 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-15 09:25:10 +02:00
Ettore Di Giacinto
77e1ae3d70
feat(Makefile): allow to restrict backend builds ( #890 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-13 20:04:08 +02:00
Ettore Di Giacinto
c81e9d8d1f
fix: add exllama to protogen
2023-08-11 01:02:31 +02:00
Ettore Di Giacinto
8c781a6a44
feat: Add Diffusers ( #874 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-09 08:38:51 +02:00
ci-robbot [bot]
0e4f93c5cf
⬆️ Update nomic-ai/gpt4all ( #870 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-08 21:57:01 +02:00
Ettore Di Giacinto
433605e282
feat: add initial Bark backend implementation
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-07 22:53:28 +02:00
Ettore Di Giacinto
a843e64fc2
feat: add initial AutoGPTQ backend implementation
2023-08-07 22:53:28 +02:00
ci-robbot [bot]
6b900e28cd
⬆️ Update nomic-ai/gpt4all ( #859 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-03 19:07:53 +02:00
Ettore Di Giacinto
5ca21ee398
feat: add ngqa and RMSNormEps parameters ( #860 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-08-03 00:51:08 +02:00
Ettore Di Giacinto
1e37ec727d
Revert " ⬆️ Update go-skynet/go-llama.cpp" ( #850 )
2023-08-01 19:09:18 +02:00
ci-robbot [bot]
ae36bae59d
⬆️ Update nomic-ai/gpt4all ( #847 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-08-01 00:48:10 +02:00
ci-robbot [bot]
a0324245f1
⬆️ Update nomic-ai/gpt4all ( #841 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-31 19:14:56 +02:00
ci-robbot [bot]
18e1cb9c92
⬆️ Update nomic-ai/gpt4all ( #825 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-30 09:48:30 +02:00
ci-robbot [bot]
e7ceb9e8f5
⬆️ Update go-skynet/go-llama.cpp ( #824 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-30 09:48:10 +02:00
Ettore Di Giacinto
096d98c3d9
fix: add rope settings during model load, fix CUDA ( #821 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-27 21:56:05 +02:00
ci-robbot [bot]
90ae35e2e4
⬆️ Update nomic-ai/gpt4all ( #814 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-27 18:41:15 +02:00
ci-robbot [bot]
c79ddd6fc4
⬆️ Update nomic-ai/gpt4all ( #807 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-25 23:03:02 +02:00
Dave
ae58fb8821
fix: update gitignore and make clean ( #798 )
...
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-07-25 23:02:46 +02:00
Ettore Di Giacinto
569c1d1163
feat: add rope settings and negative prompt, drop grammar backend ( #797 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-25 19:05:27 +02:00
ci-robbot [bot]
bed9570e48
⬆️ Update nomic-ai/gpt4all ( #785 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-23 09:51:42 +02:00
ci-robbot [bot]
5ee186b8e5
⬆️ Update go-skynet/go-llama.cpp ( #723 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-22 00:55:33 +02:00
Ettore Di Giacinto
0eac0402e1
feat: backends improvements ( #778 )
2023-07-21 20:55:49 +02:00
Ettore Di Giacinto
982a7e86a8
feat: add huggingface embeddings backend
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 22:10:42 +02:00
Ettore Di Giacinto
5ce5f87a26
fix: move metal file to grpcs assets ( #777 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 22:00:07 +02:00
ci-robbot [bot]
71ac331f90
⬆️ Update nomic-ai/gpt4all ( #775 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-20 01:22:44 +02:00
Ettore Di Giacinto
3feb632eb4
refactor: rename "llama-master" and "llama" ( #776 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-20 00:36:16 +02:00
ci-robbot [bot]
a38dc497b2
⬆️ Update go-skynet/go-llama.cpp ( #770 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-19 19:44:33 +02:00
ci-robbot [bot]
28ed52fa94
⬆️ Update nomic-ai/gpt4all ( #769 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-19 19:44:21 +02:00
Enzo Einhorn
e995b95c94
[build] pass build type to cmake on libtransformers.a build ( #741 )
...
Co-authored-by: Enzo Einhorn <enzo.einhorn@hiventive.com>
2023-07-18 19:04:19 +02:00
ci-robbot [bot]
3c6b798522
⬆️ Update nomic-ai/gpt4all ( #759 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-17 23:58:40 +02:00
ci-robbot [bot]
c18770a61a
⬆️ Update go-skynet/go-bert.cpp ( #758 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-17 23:58:25 +02:00
Ettore Di Giacinto
6352448b72
feat: add llama-master backend ( #752 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-17 23:58:15 +02:00
ci-robbot [bot]
27ef8b1eb7
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #711 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-16 09:57:16 +02:00
ci-robbot [bot]
c00435d72b
⬆️ Update nomic-ai/gpt4all ( #735 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-16 09:57:00 +02:00
ci-robbot [bot]
accd9f9044
⬆️ Update donomii/go-rwkv.cpp ( #750 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-15 22:52:45 +02:00
Ettore Di Giacinto
f193f56564
fix: fix copy
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
c0a91ab548
fix: fix LDFLAGS for rwkv.cpp
...
Previously these libraries were only pulled in by other dependencies, which
happened to make the linker add them as well (by chance).
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
26e510bf28
fix: Makefile
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
7f3de3ca4a
fix: fix makefile error
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
1d0ed95a54
feat: move other backends to grpc
...
This finally makes everything more consistent
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
f2f1d7fe72
feat: use gRPC for transformers
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
ae533cadef
feat: move gpt4all to a grpc service
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
58f6aab637
feat: move llama to a grpc
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
Ettore Di Giacinto
b816009db0
feat: add falcon ggllm via grpc client
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2023-07-15 01:19:43 +02:00
ci-robbot [bot]
a84dee1be1
⬆️ Update nomic-ai/gpt4all ( #705 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-07-09 16:55:56 +02:00
mudler
c4495ad8f2
invoke go mod clean before rebuilds
2023-07-05 18:24:55 +02:00
mudler
1668489b00
Add comments
2023-07-04 19:02:02 +02:00
mudler
7dd292cbb3
feat: add a way to test grammar from forks
2023-07-04 18:58:19 +02:00
mudler
a5b64b6a41
wip: test go-llama.cpp version
...
It also needs a llama.cpp build from the grammar branch, rebased on current
master.
2023-07-04 18:58:19 +02:00
mudler
6d19a8bdb5
fix: copy git to correctly display version in /version
2023-07-04 18:58:19 +02:00
Ettore Di Giacinto
70674d3c58
fix(deps): bump go-llama.cpp ( #719 )
...
Signed-off-by: mudler <mudler@localai.io>
2023-07-03 00:17:48 +02:00
ci-robbot [bot]
3829aba869
⬆️ Update nomic-ai/gpt4all ( #704 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-30 10:30:39 +02:00
ci-robbot [bot]
e3db6496d7
⬆️ Update go-skynet/go-llama.cpp ( #697 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-28 23:43:29 +02:00
ci-robbot [bot]
1e6542a5ca
⬆️ Update ggerganov/whisper.cpp ( #696 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-28 23:42:57 +02:00
ci-robbot [bot]
218e7bc8df
⬆️ Update nomic-ai/gpt4all ( #691 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-28 23:42:46 +02:00
ci-robbot [bot]
69367a7948
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #692 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-27 23:54:51 +02:00
ci-robbot [bot]
85a38a8122
⬆️ Update go-skynet/go-llama.cpp ( #690 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-27 23:48:52 +02:00
ci-robbot [bot]
85eea1189e
⬆️ Update ggerganov/whisper.cpp ( #682 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-27 09:01:09 +02:00
ci-robbot [bot]
ed2344ab9b
⬆️ Update nomic-ai/gpt4all ( #681 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-27 09:00:57 +02:00
Ettore Di Giacinto
3593cb0c87
feat: update llama, enable NUMA ( #684 )
2023-06-27 09:00:10 +02:00
Samuel Maynard
e130b208ab
Docker preserve sources ( #658 )
2023-06-26 22:34:03 +02:00
Ettore Di Giacinto
d3a486a4f8
feat: Add '/version' endpoint and display it in the CLI ( #679 )
2023-06-26 15:12:43 +02:00
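A quick sketch of querying the `/version` endpoint added here from Go; the listen address is an assumption for illustration.

```go
// Hedged sketch: fetch the server build version over HTTP.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes a LocalAI instance listening on localhost:8080.
	resp, err := http.Get("http://localhost:8080/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected to contain the build version
}
```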
ci-robbot [bot]
a1ed6fbd96
⬆️ Update ggerganov/whisper.cpp ( #672 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-26 12:26:02 +02:00
ci-robbot [bot]
0ba94bf33f
⬆️ Update nomic-ai/gpt4all ( #668 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-25 09:26:17 +02:00
ci-robbot [bot]
be1667c387
⬆️ Update nomic-ai/gpt4all ( #657 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-24 08:33:52 +02:00
ci-robbot [bot]
eb39d908d0
⬆️ Update go-skynet/go-llama.cpp ( #634 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-24 08:33:40 +02:00
ci-robbot [bot]
55cf9d5792
⬆️ Update nomic-ai/gpt4all ( #650 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-22 17:53:32 +02:00
Ettore Di Giacinto
a7bb029d23
feat: add tts with go-piper ( #649 )
...
Signed-off-by: mudler <mudler@localai.io>
2023-06-22 17:53:10 +02:00
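A hedged sketch of driving a piper voice through a text-to-speech request. The `/tts` path, the `model` and `input` fields, and the voice file name are assumptions for illustration; the exact request contract should be checked against the server's API documentation.

```go
// Hedged sketch: request synthesised speech and save the returned audio bytes.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

func main() {
	payload, _ := json.Marshal(map[string]string{
		"model": "en-us-example-voice.onnx", // hypothetical piper voice file
		"input": "Text to be synthesised",
	})
	resp, err := http.Post("http://localhost:8080/tts",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	audio, _ := io.ReadAll(resp.Body)
	// Assumes the endpoint returns raw audio bytes (e.g. a WAV file).
	os.WriteFile("out.wav", audio, 0o644)
}
```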
ci-robbot [bot]
cc31c58235
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #644 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-21 08:58:20 +02:00
ci-robbot [bot]
445067f6ad
⬆️ Update donomii/go-rwkv.cpp ( #600 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-21 08:57:15 +02:00
ci-robbot [bot]
11bfd0de76
⬆️ Update nomic-ai/gpt4all ( #635 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-21 08:56:41 +02:00
ci-robbot [bot]
d0025a7483
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #633 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-20 08:47:21 +02:00
ci-robbot [bot]
db0b29be51
⬆️ Update nomic-ai/gpt4all ( #628 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-20 00:12:24 +02:00
ci-robbot [bot]
1766de814c
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #619 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-18 23:49:38 +02:00
ci-robbot [bot]
0b351d6da2
⬆️ Update nomic-ai/gpt4all ( #613 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-18 23:48:07 +02:00
Ettore Di Giacinto
d3d3187e51
feat: fix CUDA images and update go-llama to use full GPU offloading ( #618 )
...
Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: mudler <mudler@localai.io>
2023-06-18 08:27:29 +02:00
Ettore Di Giacinto
6c94f3cd67
Revert "Docker preserve sources" ( #620 )
2023-06-17 23:22:04 +02:00
Ettore Di Giacinto
1b7990d5d9
deps: switch back to nomic-ai/gpt4all ( #595 )
2023-06-14 18:07:05 +02:00
Samuel Maynard
7b9dcb05d4
Docker preserve sources ( #590 )
2023-06-14 13:26:27 +02:00
Ettore Di Giacinto
e37361985c
deps: update gpt4all bindings, fix search path on new versions ( #592 )
2023-06-14 13:24:53 +02:00
ci-robbot [bot]
467e88d305
⬆️ Update donomii/go-rwkv.cpp ( #527 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-14 12:56:20 +02:00
ci-robbot [bot]
f98680a18a
⬆️ Update go-skynet/go-llama.cpp ( #584 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-13 23:05:03 +02:00
ci-robbot [bot]
6306885fe7
⬆️ Update go-skynet/go-llama.cpp ( #561 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-11 15:44:06 +02:00
Ettore Di Giacinto
2a11f16c0f
fix: copy metal file from build ( #564 )
2023-06-11 01:07:39 +02:00
ci-robbot [bot]
897ac6e4e5
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #562 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-11 01:01:46 +02:00
ci-robbot [bot]
e6c8ebb65c
⬆️ Update go-skynet/go-llama.cpp ( #554 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-10 01:35:58 +02:00
ci-robbot [bot]
437f563128
⬆️ Update go-skynet/go-bert.cpp ( #540 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-10 00:09:14 +02:00
ci-robbot [bot]
6bb562272d
⬆️ Update go-skynet/go-llama.cpp ( #546 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-09 01:13:15 +02:00
ci-robbot [bot]
806e4c3a63
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #539 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-09 00:19:58 +02:00
Ettore Di Giacinto
c9bbba4872
tests: add llama tests with openllama ( #538 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-08 00:36:11 +02:00
Ettore Di Giacinto
5abbb134d9
feat: extend model configuration for llama.cpp ( #536 )
2023-06-07 21:46:19 +02:00
ci-robbot [bot]
2630e251ce
⬆️ Update ggerganov/whisper.cpp ( #520 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-06 19:16:42 +02:00
ci-robbot [bot]
0909a0637e
feat: update llama.cpp to support k-quants ( #521 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-06 18:15:17 +02:00
Ettore Di Giacinto
d62aef2016
feat: add experimental support for falcon-7b ( #516 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-06 17:23:19 +02:00
ci-robbot [bot]
25e9483add
⬆️ Update donomii/go-rwkv.cpp ( #511 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-06 16:02:09 +02:00
ci-robbot [bot]
2e916abe15
⬆️ Update go-skynet/go-llama.cpp ( #512 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-06 00:35:01 +02:00
Ettore Di Giacinto
b447a2a719
feat: support upscaled image generation with esrgan ( #509 )
2023-06-05 17:21:38 +02:00
Ettore Di Giacinto
ec4fd1d219
fix gpt4all, add metal GPU support ( #507 )
2023-06-05 14:26:20 +02:00
Ettore Di Giacinto
b503725dc7
fix: downgrade gpt4all ( #503 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-05 09:42:50 +02:00
ci-robbot [bot]
e873fc7b71
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #501 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-05 00:07:48 +02:00
Ettore Di Giacinto
4ddc956462
deps: update rwkv, switch back to upstream ( #494 )
2023-06-04 17:25:35 +02:00
ci-robbot [bot]
b64c1d8ac1
⬆️ Update nomic-ai/gpt4all ( #488 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-04 01:56:59 +02:00
ci-robbot [bot]
05edf59c91
⬆️ Update nomic-ai/gpt4all ( #483 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-03 18:30:30 +02:00
ci-robbot [bot]
b9f1f85433
⬆️ Update go-skynet/go-llama.cpp ( #482 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-03 18:30:18 +02:00
ci-robbot [bot]
29856f7527
⬆️ Update nomic-ai/gpt4all ( #479 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-03 14:25:42 +02:00
Ettore Di Giacinto
e875c1f64a
fix: fix the make run target ( #476 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-02 09:57:01 +02:00
Ettore Di Giacinto
19f92d7d55
fix: Bump and fix rwkv build ( #475 )
2023-06-02 08:53:57 +02:00
ci-robbot [bot]
a63d6f6364
⬆️ Update ggerganov/whisper.cpp ( #473 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-01 23:44:05 +02:00
ci-robbot [bot]
4422ca2235
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #459 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-01 23:43:15 +02:00
Ettore Di Giacinto
78ad4813df
feat: Update gpt4all, support multiple implementations in runtime ( #472 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-06-01 23:38:52 +02:00
ci-robbot [bot]
5c018c0437
⬆️ Update ggerganov/whisper.cpp ( #468 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-01 16:23:16 +02:00
ci-robbot [bot]
c5cb2ff268
⬆️ Update go-skynet/go-bert.cpp ( #463 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-06-01 16:21:13 +02:00
Aisuko
c8a4a4f4e9
feat: Add new test cases for LoadConfigs ( #447 )
...
Signed-off-by: Aisuko <urakiny@gmail.com>
2023-06-01 16:20:45 +02:00
ci-robbot [bot]
275c124701
⬆️ Update go-skynet/go-llama.cpp ( #458 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-31 22:59:02 +02:00
ci-robbot [bot]
87a6bbd251
⬆️ Update ggerganov/whisper.cpp ( #462 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-31 22:58:44 +02:00
ci-robbot [bot]
5623a7c331
⬆️ Update go-skynet/go-bert.cpp ( #418 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-31 00:45:07 +02:00
ci-robbot [bot]
9e3ca6d1a3
⬆️ Update nomic-ai/gpt4all ( #422 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-31 00:44:52 +02:00
ci-robbot [bot]
fa58965bbc
⬆️ Update ggerganov/whisper.cpp ( #419 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-30 23:04:53 +02:00
ci-robbot [bot]
f711d35377
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #442 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-30 23:04:10 +02:00
ci-robbot [bot]
abd3c62194
⬆️ Update go-skynet/go-llama.cpp ( #443 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-30 23:03:48 +02:00
Aisuko
49ce24984c
feat: Add more test-cases and remove dev container ( #433 )
...
Signed-off-by: Aisuko <urakiny@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2023-05-30 13:01:55 +02:00
Ettore Di Giacinto
f401181cb5
fix: switch back to upstream for rwkv bindings ( #432 )
2023-05-30 12:35:32 +02:00
ci-robbot [bot]
04d6bd7922
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #421 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-29 23:10:43 +02:00
ci-robbot [bot]
2abdac7003
⬆️ Update go-skynet/bloomz.cpp ( #417 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-29 23:09:42 +02:00
Ettore Di Giacinto
f5146bde18
feat: add clblast support ( #412 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-29 15:17:38 +02:00
Ettore Di Giacinto
65d06285d8
Bump rwkv ( #402 )
2023-05-28 22:59:25 +02:00
ci-robbot [bot]
425beea6c5
⬆️ Update ggerganov/whisper.cpp ( #398 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-27 22:30:24 +02:00
ci-robbot [bot]
cdfb930a69
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #385 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-27 22:30:11 +02:00
Ettore Di Giacinto
217dbb448e
feat: allow to set a prompt cache path and enable saving state ( #395 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-27 14:29:11 +02:00
ci-robbot [bot]
835a20610b
⬆️ Update ggerganov/whisper.cpp ( #372 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-26 22:43:11 +02:00
ci-robbot [bot]
74e808b8c3
⬆️ Update nomic-ai/gpt4all ( #389 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-26 22:28:14 +02:00
Ettore Di Giacinto
a44c8e9b4e
ci: set flakeAttempts ( #386 )
2023-05-26 15:28:26 +02:00
ci-robbot [bot]
320e430c7f
⬆️ Update nomic-ai/gpt4all ( #384 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-26 09:57:03 +02:00
ci-robbot [bot]
e891a46740
⬆️ Update nomic-ai/gpt4all ( #362 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-25 22:46:44 +02:00
ci-robbot [bot]
babbd23744
⬆️ Update go-skynet/go-ggml-transformers.cpp ( #363 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-25 00:37:36 +02:00
ci-robbot [bot]
eee41cbe2b
⬆️ Update go-skynet/go-llama.cpp ( #373 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-25 00:36:57 +02:00
Ettore Di Giacinto
c8cc197ddd
feat: add static builds ( #370 )
2023-05-24 16:42:24 +02:00
ci-robbot [bot]
e969604d75
⬆️ Update go-skynet/go-llama.cpp ( #365 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-23 23:10:06 +02:00
ci-robbot [bot]
c822e18f0d
⬆️ Update ggerganov/whisper.cpp ( #364 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-23 23:09:48 +02:00
Ettore Di Giacinto
9decd0813c
feat: update go-gpt2 ( #359 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-23 21:47:47 +02:00
Ettore Di Giacinto
43d3fb3eba
ci: add binary releases pipelines ( #358 )
2023-05-23 17:12:48 +02:00
ci-robbot [bot]
9e5cd0f10b
⬆️ Update nomic-ai/gpt4all ( #348 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-23 09:16:56 +02:00
ci-robbot [bot]
1cbe6a7067
⬆️ Update nomic-ai/gpt4all ( #345 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-22 19:02:56 +02:00
ci-robbot [bot]
482a83886e
⬆️ Update ggerganov/whisper.cpp ( #332 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-21 00:40:17 +02:00
ci-robbot [bot]
864aaf8c4d
⬆️ Update go-skynet/go-llama.cpp ( #327 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-20 20:42:29 +02:00
ci-robbot [bot]
93cc8569c3
⬆️ Update ggerganov/whisper.cpp ( #326 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-20 19:50:01 +02:00
ci-robbot [bot]
9609e4392b
⬆️ Update go-skynet/go-llama.cpp ( #321 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-20 10:53:22 +02:00
Ettore Di Giacinto
4e381cbe92
feat: support shorter urls for github repositories ( #314 )
2023-05-20 09:06:30 +02:00
ci-robbot [bot]
465a3b755d
⬆️ Update nomic-ai/gpt4all ( #312 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-20 00:30:36 +02:00
ci-robbot [bot]
91fc52bfb7
⬆️ Update go-skynet/go-llama.cpp ( #296 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-20 00:27:13 +02:00
Ettore Di Giacinto
bf3d936aea
fix: add LLAMA_CUBLAS on BUILD_TYPE=cublas ( #310 )
2023-05-19 17:11:28 +02:00
ci-robbot [bot]
837ce2cb31
⬆️ Update nomic-ai/gpt4all ( #295 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-19 10:37:12 +02:00
Ettore Di Giacinto
cc9aa9eb3f
feat: add /models/apply endpoint to prepare models ( #286 )
2023-05-18 15:59:03 +02:00
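A hedged sketch of asking the server to prepare a model through the `/models/apply` endpoint introduced here. The request shape (a JSON body with a `url` field pointing at a model definition) and the placeholder URL are assumptions for illustration.

```go
// Hedged sketch: trigger model preparation via /models/apply.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]string{
		// Hypothetical model definition URL; substitute a real gallery entry.
		"url": "https://example.com/gallery/my-model.yaml",
	})
	resp, err := http.Post("http://localhost:8080/models/apply",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The response is expected to describe the apply job; poll or inspect it as needed.
	fmt.Println(string(body))
}
```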
ci-robbot [bot]
5617e50ebc
⬆️ Update go-skynet/go-llama.cpp ( #256 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-18 09:52:48 +02:00
ci-robbot [bot]
b83e8b950d
⬆️ Update nomic-ai/gpt4all ( #252 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-18 09:52:35 +02:00
ci-robbot [bot]
7e4616646f
⬆️ Update go-skynet/go-bert.cpp ( #274 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-17 11:56:32 +02:00
ci-robbot [bot]
76be06ed56
⬆️ Update go-skynet/go-gpt2.cpp ( #253 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-17 01:47:31 +02:00
ci-robbot [bot]
41de6efca9
⬆️ Update ggerganov/whisper.cpp ( #265 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-17 01:04:14 +02:00
Ettore Di Giacinto
9d051c5d4f
feat: add image generation with ncnn-stablediffusion ( #272 )
2023-05-16 19:32:53 +02:00
Ettore Di Giacinto
acd03d15f2
feat: add support for cublas/openblas in the llama.cpp backend ( #258 )
2023-05-16 16:26:25 +02:00
Ettore Di Giacinto
a035de2fdd
tests: add rwkv ( #261 )
2023-05-15 08:15:01 +02:00
Ettore Di Giacinto
76a1267799
bump: update whisper.cpp ( #260 )
2023-05-15 01:00:16 +02:00
ci-robbot [bot]
b82bbbfc6b
⬆️ Update ggerganov/whisper.cpp ( #218 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-14 10:03:55 +02:00
ci-robbot [bot]
6c9ddff8e9
⬆️ Update go-skynet/go-llama.cpp ( #245 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-13 22:07:43 +02:00
ci-robbot [bot]
c5318587b8
⬆️ Update go-skynet/go-bert.cpp ( #247 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-13 14:36:01 +02:00
Ettore Di Giacinto
de36a48861
Update gpt4all to fix thread counts ( #249 )
2023-05-13 09:37:46 +02:00
Ettore Di Giacinto
2488c445b6
feat: bert.cpp token embeddings ( #241 )
2023-05-12 17:16:49 +02:00
Ettore Di Giacinto
8250391e49
Add support for gptneox/replit ( #238 )
2023-05-12 11:36:35 +02:00
Ettore Di Giacinto
fd1df4e971
whisper: add tests and allow to set upload size ( #237 )
2023-05-12 10:04:20 +02:00
ci-robbot [bot]
5115b2faa3
⬆️ Update go-skynet/go-llama.cpp ( #219 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-11 23:43:55 +02:00
ci-robbot [bot]
93e82a8bf4
⬆️ Update go-skynet/go-gpt2.cpp ( #220 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-11 23:43:44 +02:00
Ettore Di Giacinto
4413defca5
feat: add starcoder ( #236 )
2023-05-11 20:20:07 +02:00
Ettore Di Giacinto
f359e1c6c4
fix: dolly/rp ( #235 )
2023-05-11 19:38:27 +02:00
Ettore Di Giacinto
59e3c02002
make use of new bindings for gpt4all ( #232 )
2023-05-11 14:31:19 +02:00
ci-robbot [bot]
d6d7391da8
⬆️ Update donomii/go-rwkv.cpp ( #225 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-11 01:13:28 +02:00
Ettore Di Giacinto
11675932ac
feat: add dolly/redpajama/bloomz models support ( #214 )
2023-05-11 01:12:58 +02:00
Ettore Di Giacinto
f8ee20991c
feat: add bert.cpp embeddings ( #222 )
2023-05-10 15:20:21 +02:00
Ettore Di Giacinto
9f426578cf
feat: add transcript endpoint ( #211 )
2023-05-09 11:43:50 +02:00
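A hedged sketch of calling the new transcription endpoint with a whisper-style model. The OpenAI-style `/v1/audio/transcriptions` path, the `file` and `model` form fields, and the model name are assumptions for illustration.

```go
// Hedged sketch: upload an audio file for transcription via multipart form data.
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)

	// Attach the audio sample to transcribe.
	f, err := os.Open("sample.wav")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	part, _ := w.CreateFormFile("file", "sample.wav")
	io.Copy(part, f)

	// Hypothetical name of the whisper model configured on the server.
	w.WriteField("model", "whisper-1")
	w.Close()

	resp, err := http.Post("http://localhost:8080/v1/audio/transcriptions",
		w.FormDataContentType(), &buf)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // expected to contain the transcript
}
```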
ci-robbot [bot]
9d01b695a8
⬆️ Update go-skynet/go-llama.cpp ( #209 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-08 22:37:16 +02:00
Ettore Di Giacinto
89dfa0f5fc
feat: add experimental support for embeddings as arrays ( #207 )
2023-05-08 19:31:18 +02:00
ci-robbot [bot]
cbdcc839f3
⬆️ Update go-skynet/go-llama.cpp ( #201 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-06 22:49:44 +02:00
ci-robbot [bot]
38d7e0b43c
⬆️ Update go-skynet/go-llama.cpp ( #198 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-06 00:21:48 +02:00
mudler
75b25297fd
tests: run with ginkgo
2023-05-05 22:51:30 +02:00
ci-robbot [bot]
91db3d4d5c
⬆️ Update go-skynet/go-llama.cpp ( #194 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-05 13:45:50 +02:00
Ettore Di Giacinto
c839b334eb
feat: add embeddings for go-llama.cpp backend ( #190 )
2023-05-05 11:20:06 +02:00
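A hedged sketch of requesting embeddings from the go-llama.cpp backend via the OpenAI-style `/v1/embeddings` endpoint; the model name is hypothetical and assumes a model configured with embeddings enabled.

```go
// Hedged sketch: compute an embedding vector for a piece of text.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	payload, _ := json.Marshal(map[string]any{
		"model": "my-embedding-model", // hypothetical model with embeddings enabled
		"input": "LocalAI computes this vector locally",
	})
	resp, err := http.Post("http://localhost:8080/v1/embeddings",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // the JSON response is expected to carry the embedding array
}
```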
ci-robbot [bot]
eabdc5042a
⬆️ Update go-skynet/go-llama.cpp ( #184 )
...
Signed-off-by: GitHub <noreply@github.com>
Co-authored-by: mudler <mudler@users.noreply.github.com>
2023-05-04 18:28:49 +02:00
mudler
885642915f
ci: add renovate suffix
2023-05-04 12:26:59 +02:00
Ettore Di Giacinto
3fe11fe24d
ci: attempt to configure renovate with custom regexes ( #178 )
2023-05-04 11:55:14 +02:00
Ettore Di Giacinto
4eae570ef5
Update docs ( #163 )
2023-05-03 15:51:54 +02:00
Ettore Di Giacinto
751b7eca62
feat: add rwkv support ( #158 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-05-03 11:45:22 +02:00
Ettore Di Giacinto
156e15a4fa
Bump llama.cpp, downgrade gpt4all-j ( #149 )
2023-05-02 16:07:18 +02:00
Ettore Di Giacinto
92452d46da
feat: add new gpt4all-j binding ( #142 )
2023-05-01 20:00:15 +02:00
Ettore Di Giacinto
16773e2a35
feat: make images to build sources on start ( #124 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-29 20:38:37 +02:00
Ettore Di Giacinto
a330c9cee5
update: bump llama.cpp to 7f15c5c ( #122 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-29 15:20:50 +02:00
Ettore Di Giacinto
ff0867996e
tests: increase timeout ( #121 )
2023-04-29 14:56:00 +02:00
Ettore Di Giacinto
b8533428bc
bump: update llama.cpp ( #117 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
2023-04-28 19:24:28 +02:00
Ettore Di Giacinto
c806eae0de
feat: config files and SSE ( #83 )
...
Signed-off-by: mudler <mudler@mocaccino.org>
Signed-off-by: Tyler Gillson <tyler.gillson@gmail.com>
Co-authored-by: Tyler Gillson <tyler.gillson@gmail.com>
2023-04-26 21:18:18 -07:00
Ettore Di Giacinto
b9011bda59
feat: automatic updates with renovate, docs updates ( #76 )
2023-04-24 18:10:58 +02:00
Ettore Di Giacinto
2b2f5fa36a
feat: update llama.cpp ( #72 )
2023-04-24 14:15:49 +02:00
Ettore Di Giacinto
676e15f785
fix: make MacOS builds work ( #61 )
2023-04-22 11:05:23 +02:00
Marc R Kellerman
3e71c90949
feature: add devcontainer for live debugging ( #60 )
2023-04-22 01:20:03 +02:00