Diego Devesa | 9c817edb48 | ggml : move CPU backend to a separate file (llama/10144) | 2024-11-15 15:21:04 +02:00
Diego Devesa | 1d48457aa6 | llama : refactor model loader with backend registry (llama/10026) | 2024-11-15 15:21:04 +02:00
Radoslav Gerganov | 25f9fee6fb | rpc : pack only RPC structs (llama/9959) | 2024-11-01 10:19:05 +02:00
Radoslav Gerganov | 4078e4c388 | rpc : backend refactoring (llama/9912) | 2024-11-01 10:19:05 +02:00
    * rpc : refactor backend
      Use structs for RPC request/response messages
    * rpc : refactor server
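The two rpc entries above revolve around using plain structs as the wire format. A minimal sketch of the idea, with hypothetical message names rather than the actual ggml-rpc definitions: only the request/response structs are packed, because they must have an identical byte layout on every client and server platform, while internal structs keep natural alignment.

```c
#include <stdint.h>

// Hypothetical wire-format messages (names are illustrative only).
// #pragma pack removes padding so the layout is ABI-independent.
#pragma pack(push, 1)
typedef struct {
    uint64_t size;        // requested buffer size in bytes
} rpc_alloc_buffer_req;

typedef struct {
    uint64_t remote_ptr;  // opaque handle to the buffer on the server
    uint64_t remote_size; // size actually allocated
} rpc_alloc_buffer_rsp;
#pragma pack(pop)
```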
Diego Devesa | c313723860 | rpc : add backend registry / device interfaces (llama/9812) | 2024-11-01 10:19:05 +02:00
    * rpc : add backend registry / device interfaces
    * llama : add llama_supports_rpc API
    * ggml_backend_rpc_start_rpc_server -> ggml_backend_rpc_start_server
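A minimal sketch of the llama_supports_rpc API named in this entry: a runtime check for whether the current build of llama.cpp includes the RPC backend.

```c
#include <stdio.h>
#include "llama.h"

int main(void) {
    // llama_supports_rpc() reports whether RPC support was compiled in
    if (llama_supports_rpc()) {
        printf("RPC backend available\n");
    } else {
        printf("built without RPC support\n");
    }
    return 0;
}
```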
Diego Devesa | 1acfadb721 | ggml-backend : add device and backend reg interfaces (llama/9707) | 2024-10-05 15:23:51 +03:00
    Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
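A sketch of what the device interface added here enables: enumerating every registered device at runtime. The enumeration helpers below are assumptions based on the registry this entry describes, not guaranteed signatures.

```c
#include <stdio.h>
#include "ggml-backend.h"

int main(void) {
    // walk the global device registry and print each device's identity
    for (size_t i = 0; i < ggml_backend_dev_count(); i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("device %zu: %s (%s)\n",
               i, ggml_backend_dev_name(dev), ggml_backend_dev_description(dev));
    }
    return 0;
}
```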
Georgi Gerganov | 34291099fb | ggml : refactoring (llama/#0) | 2024-09-24 19:45:08 +03:00
    - d6a04f87
    - 23e0d70b
Radoslav Gerganov | 0677293503 | rpc : fix segfault with nkvo (llama/9389) | 2024-09-24 19:45:08 +03:00
    * rpc : fix nkvo
    * rpc : buf_size must not be static
    ref: #9337
    Co-authored-by: slaren <slarengh@gmail.com>
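The "buf_size must not be static" note above points at a classic C pitfall. A self-contained illustration with a hypothetical function, not the actual rpc code: a static local is initialized once, so a size derived from the first call's input is silently reused for every later call.

```c
#include <stddef.h>

size_t required_buf_size_bad(size_t n_tensors) {
    static size_t buf_size = 0;          // BUG: frozen after the first call
    if (buf_size == 0) {
        buf_size = 64 + n_tensors * 256;
    }
    return buf_size;                     // stale for any later n_tensors
}

size_t required_buf_size_good(size_t n_tensors) {
    return 64 + n_tensors * 256;         // recomputed for every call
}
```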
Johannes Gäßler | c7515b0995 | ggml/examples: add backend support for numerical optimization (ggml/949) | 2024-09-24 19:45:08 +03:00
    * CUDA eval works
    * stochastic gradient descent op
    * Adam except decay
    * CUDA CROSS_ENTROPY_LOSS_BACK
    * CUDA mnist-fc training works
    * backend CLI arg
    * refactor gguf load
    * remove sched from opt_step_adam
    * implement l1 regularization (weight decay)
    * extra call to add optimizer
    * initialize gradients with ggml_graph_reset
    * gradient accumulation
    * increment iter per eval instead of epoch
    * adjust backend interfaces
    * fix ggml_graph_reset without backend
    * fix ggml graph export/import
    * fixup
    * rename
    * revert ggml_opt changes
    * more general CUDA repeat_back
    * update documentation, fix CNN
    * validation split
    * add clarifying comment
    * optimize PyTorch training
    * adjust buffer size, thread count
    * fix 0.0f validation split
    * Update examples/mnist/mnist-common.cpp
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * fix gradient accumulation
    * tensor flag for accumulators -> tensor hash set
    * Update include/ggml.h
      Co-authored-by: slaren <slarengh@gmail.com>
    * Update tests/test-backend-ops.cpp
      Co-authored-by: slaren <slarengh@gmail.com>
    * Update tests/test-backend-ops.cpp
      Co-authored-by: slaren <slarengh@gmail.com>
    * fix test prints
    * Update src/ggml-backend.c
      Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    * better CUDA support for noncontiguous out_prod
    * add comment
    Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
    Co-authored-by: slaren <slarengh@gmail.com>
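Several bullets in this entry ("Adam except decay", "gradient accumulation") refer to standard optimizer machinery. For reference, the textbook Adam update for a single parameter, as a sketch only, not the ggml_opt implementation:

```c
#include <math.h>

// m/v are the running first/second moments, t is the 1-based
// iteration counter used for bias correction.
void adam_step(float * w, float g, float * m, float * v, int t,
               float alpha, float beta1, float beta2, float eps) {
    *m = beta1 * (*m) + (1.0f - beta1) * g;             // first moment (mean)
    *v = beta2 * (*v) + (1.0f - beta2) * g * g;         // second moment (variance)
    const float mhat = *m / (1.0f - powf(beta1, (float) t));
    const float vhat = *v / (1.0f - powf(beta2, (float) t));
    *w -= alpha * mhat / (sqrtf(vhat) + eps);           // parameter update
}
```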
Radoslav Gerganov | 7e59afa1e0 | rpc : print error message when failing to connect to endpoint (llama/9042) | 2024-08-28 13:22:20 +03:00
Radoslav Gerganov | 5ac022140e | rpc : prevent crashes on invalid input (llama/9040) | 2024-08-28 13:22:20 +03:00
    Add more checks to prevent the RPC server from crashing when it
    receives invalid input from a client.
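A sketch of the kind of check this entry adds, using a hypothetical helper rather than the actual server code: validate the size of a client message before reading from it, instead of trusting the wire data.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

// Read a u64 from a client-supplied buffer, rejecting truncated input
// instead of reading out of bounds.
static bool read_u64(const uint8_t * input, size_t input_size, uint64_t * out) {
    if (input == NULL || input_size < sizeof(uint64_t)) {
        return false;
    }
    memcpy(out, input, sizeof(uint64_t));
    return true;
}
```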
Georgi Gerganov | ad37d26983 | rpc : sanitize tensor data + warnings (llama/0) | 2024-08-12 11:58:46 +03:00
    Co-authored-by: slaren <slarengh@gmail.com>
Georgi Gerganov | e30c679928 | whisper : reorganize source code + improve CMake (#2256) | 2024-06-26 19:34:09 +03:00
    * scripts : update sync [no ci]
    * files : reorganize [no ci]
    * sync : llama.cpp
    * cmake : link math library
    * cmake : build normal ggml library
    * files : move headers to include
    * objc : fix path to ggml-metal.h
    * ci : fix WHISPER_CUDA -> GGML_CUDA
    * scripts : sync LICENSE [no ci]