2470 Commits

Author SHA1 Message Date
Chenguang Li
be42a19eab CANN: Add 310P operator support check (llama/12962) 2025-04-24 20:39:16 +03:00
Georgi Gerganov
b8755670ca metal : add FA-vec kernels for head size 96 (llama/12952)
ggml-ci
2025-04-24 20:39:16 +03:00
hipudding
483eecae62 CANN: Add x86 build ci (llama/12950)
* CANN: Add x86 build ci

* CANN: fix code format
2025-04-24 20:39:16 +03:00
David Huang
43e3d25d93 CUDA/HIP: Share the same unified memory allocation logic. (llama/12934)
Replace compile-time `GGML_HIP_UMA` with environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY`. This unifies the usage on NVIDIA and AMD GPUs, and allows a single binary to be shared between integrated and dedicated GPUs.
2025-04-24 20:39:16 +03:00
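A minimal sketch (not the llama.cpp code) of the runtime switch described in the commit above: a backend could pick its allocation path from the GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable. The helper name and the "any non-"0" value enables it" convention are assumptions.
```cpp
// Sketch: choose unified vs. device memory from an environment variable.
#include <cstdlib>
#include <cstring>

enum class alloc_kind { device, unified };

static alloc_kind pick_alloc_kind() {
    const char * v = std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY");
    if (v != nullptr && std::strcmp(v, "0") != 0) {
        return alloc_kind::unified;  // cudaMallocManaged / hipMallocManaged path
    }
    return alloc_kind::device;       // cudaMalloc / hipMalloc path
}
```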
Akarshan Biswas
e1dbf9a42e SYCL: Add ROPE vision kernel (llama/12887)
* SYCL: Add ROPE vision kernel

* Add comment about rope mode
2025-04-24 20:39:16 +03:00
Srihari-mcw
ee0013865d ggml : Add AVX512 implementation of GEMM - Q4_Kx8 (llama/12829)
* Add AVX512 implementation of GEMM - q4kx8

* Update changes to remove unnecessary whitespaces
2025-04-24 20:39:16 +03:00
Chenguang Li
32a407166b CANN: Opt ROPE optimization (llama/12865)
* [CANN]Opt ROPE optimization

* [CANN]Codestyle adjustment

* [CANN]Fix the ROPE precision issue

* [CANN]codestyle fix

* [CANN]add ROPE unsupported case

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Xinpeng Dou
622f981853 CANN: Optimize CANN buffer pool memory management (llama/12875)
Multiple optional memory pools are provided for CANN, including VMM,
priority queue-based, and traditional memory pools.
1. When the memory pool is available and GGML_CANN_DISABLE_VMM_POOL
   is not defined, the VMM pool is selected by default.
2. Otherwise, if GGML_CANN_ENABLE_BUF_PRIO_POOL is defined,
   the priority queue-based memory pool is used.
3. If neither condition is met, the default memory pool is used.
2025-04-24 20:39:16 +03:00
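The selection precedence described in the commit above could look roughly like the following sketch; the enum, function name, and flag handling are illustrative assumptions, not the actual ggml-cann implementation.
```cpp
// Sketch of the pool-selection precedence: VMM pool first (unless disabled),
// then the priority queue-based pool (if enabled), else the traditional pool.
enum class cann_pool_kind { vmm, prio_queue, legacy };

static cann_pool_kind select_cann_pool(bool vmm_pool_available) {
#if !defined(GGML_CANN_DISABLE_VMM_POOL)
    if (vmm_pool_available) {
        return cann_pool_kind::vmm;       // 1. VMM pool by default
    }
#endif
#if defined(GGML_CANN_ENABLE_BUF_PRIO_POOL)
    return cann_pool_kind::prio_queue;    // 2. priority queue-based pool
#else
    return cann_pool_kind::legacy;        // 3. traditional memory pool
#endif
}
```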
Akarshan Biswas
d049d67065 SYCL: Fix im2col (llama/12910)
* SYCL: Fix im2col

* restore local workgroup size adjustments for large inputs

* restore format
2025-04-24 20:39:16 +03:00
Radoslav Gerganov
877308838e rpc : use ggml_context_ptr (llama/12938) 2025-04-24 20:39:16 +03:00
Acly
d87dfcf7c0 ggml : Depthwise 2D convolution (ggml/1152)
* ggml-cpu : kernels for faster depthwise 2D convolution

* fix compile: remove static after moving to ops.cpp

* add dilation for depthwise_conv_2d

* review: rename to ggml_conv_2d_dw_direct, remove redundant struct keywords, pass by ref, whitespace

* review: rename depthwise_conv_2d -> conv_2d_dw everywhere
2025-04-24 20:39:16 +03:00
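For reference, a minimal CPU-side sketch of what a depthwise 2D convolution computes (one filter per channel, no cross-channel mixing, stride 1, no padding or dilation); the layout and naming are illustrative and do not mirror ggml's conv_2d_dw kernels.
```cpp
#include <vector>

// x: [C][H][W], k: [C][KH][KW], both flattened row-major.
static std::vector<float> conv_2d_dw(const std::vector<float> & x, int C, int H, int W,
                                     const std::vector<float> & k, int KH, int KW) {
    const int OH = H - KH + 1, OW = W - KW + 1;
    std::vector<float> y((size_t) C * OH * OW, 0.0f);
    for (int c = 0; c < C; ++c)
        for (int oy = 0; oy < OH; ++oy)
            for (int ox = 0; ox < OW; ++ox) {
                float acc = 0.0f;
                for (int ky = 0; ky < KH; ++ky)
                    for (int kx = 0; kx < KW; ++kx)
                        acc += x[((size_t) c * H + oy + ky) * W + ox + kx] *
                               k[((size_t) c * KH + ky) * KW + kx];   // same channel only
                y[((size_t) c * OH + oy) * OW + ox] = acc;
            }
    return y;
}
```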
SXX
915c14ef10 ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register (llama/12773)
* ggml: use _mm[512/256]_dpbusd[_avx]_epi32 to directly accumulate into the result register

* simplifies the codebase by removing redundant functions
2025-04-24 20:39:16 +03:00
Alan Gray
5d33d3c929 ggml: disable CUDA graphs for unsupported DUP and CONT node types (llama/12891)
Fixes #12798
2025-04-24 20:39:16 +03:00
Jeff Bolz
751e42b21e vulkan: use aligned loads for flash attention mask (llama/12853)
Rewrite the stride logic for the mask tensor in the FA shader to force the
stride to be aligned, to allow using more efficient loads.
2025-04-24 20:39:16 +03:00
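A minimal sketch of the general idea behind the commit above: round a row stride up to a load-friendly multiple so that vector loads stay aligned and fully in-bounds. The numbers here are illustrative, not the shader's actual constants.
```cpp
#include <cstddef>

// Round `stride` (in elements) up to the next multiple of `align` elements.
constexpr size_t round_up(size_t stride, size_t align) {
    return (stride + align - 1) / align * align;
}

// Example: a 100-element f16 mask row padded so 8-element (16-byte)
// vector loads never straddle the row boundary.
static_assert(round_up(100, 8) == 104, "padded stride");
```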
Ewan Crawford
e8ee32d12d sycl: Support sycl_ext_oneapi_limited_graph (llama/12873)
The current usage of the SYCL-Graph extension checks for
the `sycl_ext_oneapi_graph` device aspect. However, it is also
possible to support `sycl_ext_oneapi_limited_graph` devices that
don't support graph update.
2025-04-24 20:39:16 +03:00
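A hedged sketch of accepting either graph aspect; the aspect enumerator names follow my reading of the sycl_ext_oneapi_graph extension and should be treated as assumptions rather than a quote of the ggml-sycl code.
```cpp
#include <sycl/sycl.hpp>

static bool device_supports_any_graph(const sycl::device & dev) {
    // full graph support, including executable-graph update
    if (dev.has(sycl::aspect::ext_oneapi_graph)) return true;
    // limited support: graphs can be recorded and executed, but not updated
    return dev.has(sycl::aspect::ext_oneapi_limited_graph);
}
```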
Akarshan Biswas
e9ce285135 SYCL: Add fp16 type support to unary op kernels (llama/12788)
* SYCL: Add fp16 support to some elementwise OP kernels

* remove comment

ggml-ci

* Use static_cast directly

* remove not needed cast from tanh

* Use static cast and remove unneeded castings

* Adjust device_support_op for unary OPs

* Use cast_data and typed_data struct to deduplicate casting code
2025-04-24 20:39:16 +03:00
Aaron Teo
b942f451b6 ggml: fix compilation error s390x (llama/12848)
* ggml: fixes #12846 compilation error

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: add documentation for code change

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: refactor to type-cast and update documentation

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

* ggml: update documentation to provide full issue link

Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>

---------

Co-authored-by: Aleksei Nikiforov <aleksei.nikiforov@ibm.com>
2025-04-24 20:39:16 +03:00
cmdr2
e6410faf99 cpu: fix cpu backend's supports-op for GET_ROWS_BACK. fixes a fatal error when running test-backend-ops with only the CPU backend (ggml/1190) 2025-04-24 20:39:16 +03:00
Chenguang Li
182df69384 CANN: Support more ops (llama/12841)
* [CANN]Support Opt LOG && MEAN && PAD_REFLECT_1D

* [CANN]Support COUNT_EQUAL && STEP && SGN

* [CANN]codestyle adjustment

* [CANN]codestyle adjustment

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Prajwal B Mehendarkar
3bf9691dfd Fixes #12823 (llama/12830)
* Including limits file on AIX

* Fixes #12823
2025-04-24 20:39:16 +03:00
Piotr Kubaj
ba444e9c23 ggml-cpu-impl.h: do not redefine bool on POWER9 (llama/12856)
error: unknown type name '_Bool'
2025-04-24 20:39:16 +03:00
Piotr Kubaj
c6caf8eef2 ggml-impl.h: fix build on POWER9 (llama/12855)
error: ISO C++17 does not allow 'register' storage class specifier
2025-04-24 20:39:16 +03:00
Chenguang Li
6cae79a1d7 CANN: Support Opt CONV_TRANSPOSE_1D and ELU (llama/12786)
* [CANN] Support ELU and CONV_TRANSPOSE_1D

* [CANN]Modification review comments

* [CANN]Modification review comments

* [CANN]name adjustment

* [CANN]remove lambda used in template

* [CANN]Use std::func instead of template

* [CANN]Modify the code according to the review comments

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Jeff Bolz
b9bfe0c693 vulkan: In coopmat2 mmq, load q4_k/q5_k scales through shared memory (llama/12833)
q4_k and q5_k had a lot of redundant global loads where the same 16B of
scale information is repeatedly loaded and decoded during each loop iteration.
This change restructures the loops to more explicitly iterate over whole
blocks in the outer loop (with unrolled inner loop) and to copy/decode the
scale data into shared memory once at the start of each outer loop. The copy
is pipelined so the scale load from global memory is relatively cheap.

This improves q4_k/q5_k model prompt processing performance by around 5-7%.
I briefly tried applying this to q6_k and q4_0, and it didn't help for q6_k
and hurt for q4_0.

The big "else" path in mul_mm_cm2.comp that had all the clamped/unclamped
variants isn't used as often as it originally was (e.g. due to the padded_N
change), so I trimmed it down to offset some of the new complexity of the
semi-manual loop unrolling.
2025-04-24 20:39:16 +03:00
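A CPU-side analogue (not the actual mul_mm_cm2.comp shader) of the restructuring described above: decode each block's scale data once per outer-loop iteration into a local cache and reuse it across the inner loop, instead of reloading and decoding it every iteration. Layout and decode are placeholders.
```cpp
#include <array>
#include <cstdint>
#include <vector>

struct block_scales { std::array<float, 8> s; };          // illustrative layout

static block_scales decode_scales(const uint8_t * packed) {
    block_scales out{};
    for (int i = 0; i < 8; ++i) out.s[i] = packed[i] * 0.5f;  // placeholder decode
    return out;
}

static float accumulate(const std::vector<uint8_t> & packed_scales,
                        const std::vector<float> & x, int block_size) {
    float acc = 0.0f;
    const int n_blocks = (int) x.size() / block_size;
    for (int b = 0; b < n_blocks; ++b) {
        // analogue of the shared-memory copy: decode once per outer (block) loop
        const block_scales cache = decode_scales(&packed_scales[(size_t) b * 8]);
        for (int i = 0; i < block_size; ++i) {             // unrolled in the shader
            acc += cache.s[i % 8] * x[(size_t) b * block_size + i];
        }
    }
    return acc;
}
```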
Jeff Bolz
1d50c6ac22 vulkan: Use fp16 for the flash attention P*V multiplication (llama/12783)
This is consistent with the ggml-cuda behavior and the mul_mat fallback.
2025-04-24 20:39:16 +03:00
Sigbjørn Skjæret
79f23d9132 cuda : add f32 to bf16 copy op (llama/12806)
This allows BF16 KV-cache on CUDA.
2025-04-24 20:39:16 +03:00
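For context, a plain C++ sketch of a round-to-nearest-even f32 -> bf16 conversion, which is the kind of per-element conversion such a copy op performs; this is illustrative and not necessarily the exact CUDA kernel added here.
```cpp
#include <cstdint>
#include <cstring>

static uint16_t f32_to_bf16(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    if ((bits & 0x7FFFFFFF) > 0x7F800000) {        // NaN: keep a quiet NaN payload
        return (uint16_t) ((bits >> 16) | 0x0040);
    }
    bits += 0x7FFF + ((bits >> 16) & 1);           // round to nearest even
    return (uint16_t) (bits >> 16);                // keep the top 16 bits
}
```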
Georgi Gerganov
ee2cbeeb74 llama : fix FA when KV cache is not used (i.e. embeddings) (llama/12825)
* ggml : FA supports F32 V

* graph : cast KV to F16 when the KV cache is not used

ggml-ci

* server : add test that exercises embeddings with FA enabled

ggml-ci
2025-04-24 20:39:16 +03:00
cmdr2
868a5ce310 ggml: don't include arm_neon.h when using CUDA 12 with ARM Neon (ggml/1187)
fix #1186
2025-04-24 20:39:16 +03:00
Diego Devesa
b9c71fae5a ggml : add bilinear upscale support (ggml/1185) 2025-04-24 20:39:16 +03:00
Diego Devesa
6d67c6d93d ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
* ggml : add more generic ggml_custom op

* ggml : remove deprecated custom ops
2025-04-24 20:39:16 +03:00
Neo Zhang Jianyu
12cade118e Revert "sycl:remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor" (llama/12812)
* Revert "sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_s…"

This reverts commit 518a01480eb3a7c80a4951b430db9dee55428310.

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* Update ggml/src/ggml-sycl/ggml-sycl.cpp

* rm tail space
2025-04-24 20:39:16 +03:00
lhez
fd1c725e65 opencl: better identify Adreno GPU (llama/12760) 2025-04-24 20:39:16 +03:00
Georgi Gerganov
d33fd00cfe cuda : fix HIP and MUSA BF16 (llama/0)
ggml-ci
2025-04-24 20:39:16 +03:00
zhouwg
3e0d89782a sycl: remove redundant memcopy in function ggml_backend_sycl_buffer_set_tensor (llama/12734) 2025-04-24 20:39:16 +03:00
zhouwg
7074b622eb CANN: fix typo in ggml-cann (llama/12733) 2025-04-24 20:39:16 +03:00
hipudding
b8d3e45342 CANN: Refactor to reduce duplicate code (llama/12731)
* CANN: Refactor to reduce duplicate code

* CANN: fix review comment
2025-04-24 20:39:16 +03:00
R0CKSTAR
1901505138 musa: fix compilation warnings in mp_22/31 (llama/12780)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-04-24 20:39:16 +03:00
Jeff Bolz
3c26dd3353 vulkan: fix NaN issue in flash attention shader (llama/12776)
Use -FLT_MAX/2 rather than -inf as the initial value for computing the maximum.
2025-04-24 20:39:16 +03:00
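A small sketch of the assumed rationale: with a -inf seed, a fully masked row keeps a maximum of -inf, so the subsequent x - max becomes -inf - (-inf) = NaN, whereas a finite seed such as -FLT_MAX/2 keeps every exponent well defined.
```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    const float masked    = -INFINITY;       // a fully masked attention score
    const float bad_seed  = -INFINITY;
    const float safe_seed = -FLT_MAX / 2.0f;

    // running maximum over a row containing only masked entries
    const float m_bad  = std::fmax(bad_seed,  masked);   // stays -inf
    const float m_safe = std::fmax(safe_seed, masked);   // stays -FLT_MAX/2

    std::printf("%f %f\n", std::exp(masked - m_bad),     // NaN: -inf - (-inf)
                           std::exp(masked - m_safe));   // 0:   -inf - finite
    return 0;
}
```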
Jeff Bolz
d792d2a2dc vulkan: Use unclamped loads for flash attention mask (llama/12720)
nem1 must be a multiple of GGML_KQ_MASK_PAD, and GGML_KQ_MASK_PAD is a multiple
of the number of rows in the matrix. The KV dim is a multiple of the number of
columns for the aligned shader.
2025-04-24 20:39:16 +03:00
0cc4m
8add58aa5e Vulkan: Tune Vulkan mmq int dot shader for performance (llama/12767) 2025-04-24 20:39:16 +03:00
Nicolò Scipione
8f8ede1b12 sycl: allow ggml-sycl configuration and compilation using Visual Studio project/solution (llama/12625) 2025-04-24 20:39:16 +03:00
Ronny Brendel
3a6fe8d767 cmake: fix ggml-shaders-gen compiler paths containing spaces (llama/12747)
fixes error for compiler paths with spaces
2025-04-24 20:39:16 +03:00
Jeff Bolz
76231bda56 vulkan: Hybrid waitForFences/getFenceStatus to reduce fence latency (llama/12630)
There seems to be a bubble waking up from waitForFences, which costs a few
percent of performance and also increases variance in performance. This change
inserts an "almost_ready" fence when the graph is about 80% complete; we
waitForFences for the almost_ready fence and then spin (with _mm_pause) waiting
for the final fence to be signaled.
2025-04-24 20:39:16 +03:00
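A generic C++ analogue (not the actual Vulkan code) of the hybrid strategy above: block cheaply until the work is almost complete, then busy-spin with a CPU pause hint for the short remainder to avoid the wake-up bubble. The pause hint assumes x86; other targets fall back to a no-op here.
```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>
#define CPU_PAUSE() _mm_pause()
#else
#define CPU_PAUSE() ((void) 0)
#endif

struct hybrid_fence {
    std::mutex              m;
    std::condition_variable cv;
    bool                    almost_ready = false;   // ~80% of the work done
    std::atomic<bool>       done{false};            // final fence signaled

    void wait() {
        {   // cheap blocking wait until the work is almost complete
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return almost_ready; });
        }
        // short spin for the tail, analogous to polling vkGetFenceStatus
        while (!done.load(std::memory_order_acquire)) {
            CPU_PAUSE();
        }
    }
};
```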
Jeff Bolz
785437c253 vulkan: set cmake minimum and project name in vulkan-shaders (llama/12744) 2025-04-24 20:39:16 +03:00
Gaurav Garg
2f0612cb1c CUDA: Prefer vector flash decoding kernel for Gemma models (llama/12738)
* Prefer vector flash decoding kernel for Gemma models

Vector flash decoding kernel was not being picked for models with head dimension 256. Gemma models are in this category.
Removing this limit improves e2e performance by up to 12% in generation-phase throughput for Gemma models.

* Update ggml/src/ggml-cuda/fattn.cu

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

---------

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-04-24 20:39:16 +03:00
Jeff Bolz
e944065d5b vulkan: Fix missing cmake logic for dot product extension (llama/12721) 2025-04-24 20:39:16 +03:00
a3sh
ccc7b5df0b fix MUSA compiler warning (llama/12704)
* fix MUSA compiler warning

* replace (void) with GGML_UNUSED
2025-04-24 20:39:16 +03:00
Chenguang Li
fbed36851e CANN: Support operator SIN COS ARGMAX (llama/12709)
* [CANN]support sin cos argmax

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]codestyle adjustment

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

* [CANN]Remove redundant code

Signed-off-by: noemotiovon <noemotiovon@gmail.com>

---------

Signed-off-by: noemotiovon <noemotiovon@gmail.com>
Co-authored-by: noemotiovon <noemotiovon@gmail.com>
2025-04-24 20:39:16 +03:00
Alan Gray
d1d847f184 Simplify and improve CUDA graphs through use of indirect copy pointers (llama/9017)
* CUDA: Simplify and improve CUDA graphs through use of indirect copy pointers

Previously there was complexity in the CUDA graphs implementation due to
frequently changing parameters to copy kernels associated with K and V
cache pointers. This patch simplifies by using indirection to avoid
such parameters frequently changing, avoiding the need for frequent
graph updates.

Fixes #12152

* Addressed comments

* fix HIP builds

* properly sync to stream

* removed ggml_cuda_cpy_fn_ptrs

* move stream sync before free

* guard to only use indirection with graphs

* style fixes

* check for errors

---------

Co-authored-by: slaren <slarengh@gmail.com>
2025-04-24 20:39:16 +03:00
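A host-side C++ sketch of the general indirection technique (an assumption about the approach, not the llama.cpp implementation): the recorded node keeps a fixed pointer to a slot, and only the slot's contents change between runs, so the captured graph never needs updating.
```cpp
#include <cstdio>

struct copy_params {
    const float ** src_slot;   // fixed address baked into the recorded "graph"
    float       ** dst_slot;
    int            n;
};

// Stands in for the captured copy kernel: reads through the indirection.
static void run_copy(const copy_params & p) {
    for (int i = 0; i < p.n; ++i) (*p.dst_slot)[i] = (*p.src_slot)[i];
}

int main() {
    float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[4] = {};
    const float * src = a;
    float       * dst = out;
    const copy_params node = { &src, &dst, 4 };   // "recorded" once

    run_copy(node);                 // first launch copies from a
    src = b;                        // retarget by updating the slot only
    run_copy(node);                 // same node now copies from b
    std::printf("%g %g\n", out[0], out[3]);       // prints: 5 8
    return 0;
}
```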
hipudding
337f91d4a6 CANN: Fix failed test cases (llama/12708)
* CANN: Fix memory waste in aclnn_tensor

* CANN: fix backend ops fail

* CANN: fix acl_tensor memory alloc.

* CANN: format

* CANN: remove trailing whitespace
2025-04-24 20:39:16 +03:00