Aaron Teo
82e04e7670
ggml-cpu: Support s390x SIMD Instruction Set (llama/12019)
...
* ggml: add s390x ARCH_FLAGS for compilation
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add SIMD for s390x using vector intrinsics
SIMD is activated for:
* ggml_vec_dot_f32
* ggml_vec_dot_f16
* ggml_vec_mad_f32
* ggml_vec_mad_f16
* ggml_vec_mad_f32_unroll
* ggml_vec_scale_f32
* ggml_vec_scale_f16
SIMD is NOT activated for:
* ggml_vec_dot_f16_unroll (pending bugfix)
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
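For readers unfamiliar with the s390x vector extension, a minimal sketch of what such a vectorized f32 dot product looks like with <vecintrin.h> intrinsics (illustration only, not the ggml kernel; assumes n is a multiple of 4 and a build flag such as -mzvector or -march=z13):
```c
#include <vecintrin.h>

// Illustration only, not the ggml kernel: f32 dot product with z/Architecture
// vector intrinsics. Assumes n is a multiple of 4.
float vec_dot_f32_vxe(const int n, const float *x, const float *y) {
    __vector float acc = vec_splats(0.0f);
    for (int i = 0; i < n; i += 4) {
        const __vector float vx = vec_xl(0, x + i);  // load 4 floats
        const __vector float vy = vec_xl(0, y + i);
        acc = vec_madd(vx, vy, acc);                 // acc += vx * vy
    }
    // horizontal reduction of the 4 partial sums
    return acc[0] + acc[1] + acc[2] + acc[3];
}
```
The fused multiply-add keeps four partial sums per iteration; the horizontal add at the end corresponds to the reduction step handled by GGML_F32x4_REDUCE mentioned below.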
* ggml: fix missing escape character in GGML_F32x4_REDUCE
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add temporary patch for GGML_F32_ARR and GGML_F16_ARR
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix s390x GGML_F32x4_REDUCE
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: full SIMD activation for F32,F16 s390x
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add option to disable s390x VXE/VXE2
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: change vecintrin.h include to ggml-cpu-impl
* add __VXE__ and __VXE2__ macros
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* cmake: add s390x target detection for VX/VXE/VXE2
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: move s390x vector intrinsics to ggml-cpu-impl.h
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x Q8_0 SIMD
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: correct documentation for Q8_0
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x reduce code complexity Q8_0
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x bugfix typo Q8_0
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activated for Q4_1
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x inline vec_reve
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for Q4_0
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add VXE backend feature
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: remove test.py
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for quantize_row_q8_0
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
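As context for the quantize_row commits above and below, a hedged scalar reference of Q8_0 row quantization (block layout follows ggml's Q8_0: 32 values, one scale, 32 signed 8-bit quants; the commit vectorizes the per-block amax/scale/round loops with s390x intrinsics; the scale is stored as fp16 in ggml, kept as float here for brevity):
```c
#include <math.h>
#include <stdint.h>

#define QK8_0 32

typedef struct {
    float  d;          // per-block scale (fp16 in ggml; float here for brevity)
    int8_t qs[QK8_0];  // quantized values
} block_q8_0_ref;

// Scalar reference of Q8_0 quantization. Assumes k is a multiple of QK8_0.
static void quantize_row_q8_0_ref(const float *x, block_q8_0_ref *y, int k) {
    const int nb = k / QK8_0;
    for (int i = 0; i < nb; ++i) {
        float amax = 0.0f;  // absolute maximum of the block
        for (int j = 0; j < QK8_0; ++j) {
            const float v = fabsf(x[i*QK8_0 + j]);
            if (v > amax) amax = v;
        }
        const float d  = amax / 127.0f;
        const float id = d != 0.0f ? 1.0f/d : 0.0f;
        y[i].d = d;
        for (int j = 0; j < QK8_0; ++j) {
            y[i].qs[j] = (int8_t) roundf(x[i*QK8_0 + j] * id);
        }
    }
}
```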
* ggml: s390x SIMD activation for quantize_row_q8_1
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for iq4_xs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: bugfix iq4_xs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for iq4_nl
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add float, double, and long vector data types
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
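A hedged guess at what such typedefs look like with the GCC/Clang vector extension; the names below are placeholders, not the ones used in ggml-cpu-impl.h:
```c
// Placeholder names: 16-byte vectors holding 4 floats, 2 doubles or 2 longs
// (long is 64-bit on s390x Linux).
typedef float  vec_f32x4 __attribute__((vector_size(16)));
typedef double vec_f64x2 __attribute__((vector_size(16)));
typedef long   vec_i64x2 __attribute__((vector_size(16)));
```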
* ggml: clean up iq4_xs SIMD
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix improper use of restrict keyword
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: update warning message for ggml_vec_tbl
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: untested implementation of ggml_vec_dot_iq2_xxs_q8_K
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: update ggml_vec_dot_q4_1_q8_1 to use typedefs
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: switch to restrict for iq4_nl
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: slight dot product speed improvement for q4_1_q8_1
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for q6_K
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add missing `_t` to ggml_int8x16x4_t
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix missing `_t` for ggml_vec_xl_s8x4
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix more missing `_t`
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add unroll and prefetch to Q8_0
increase of 3.86% for prompt processing and 32.22% for token generation
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
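A minimal sketch of the general unroll-plus-prefetch pattern that commit applies (placeholder names and plain float data instead of Q8_0 blocks): two blocks are processed per iteration while the next pair is prefetched so memory loads overlap with compute.
```c
// Placeholder names, not the ggml kernel. Assumes nb is even.
static float dot_blocks_unrolled(int nb, int block_size,
                                 const float *x, const float *y) {
    float sumf = 0.0f;
    for (int i = 0; i < nb; i += 2) {
        if (i + 2 < nb) {
            // hint the next pair of blocks into cache (read-only, low locality)
            __builtin_prefetch(x + (i + 2) * block_size, 0, 1);
            __builtin_prefetch(y + (i + 2) * block_size, 0, 1);
        }
        for (int j = 0; j < block_size; ++j) {
            sumf += x[(i + 0) * block_size + j] * y[(i + 0) * block_size + j];
            sumf += x[(i + 1) * block_size + j] * y[(i + 1) * block_size + j];
        }
    }
    return sumf;
}
```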
* ggml: patch Q8_0 to use proper vector sizes
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: optimise Q8_0 dot prod compute kernel further
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: add unroll and prefetch to Q4_1
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: refactor Q6_K variable naming for readability
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix Q6_K typos
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for Q5_K
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix wrong char*x16_t naming
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix Q5_K y0 wrong signedness
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix Q5_K invalid uchar type
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix Q5_K invalid uchar type
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: s390x SIMD activation for Q4_K
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: fix Q4_K invalid vector intrinsics
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: simplify ggml_padd_s16 compute kernel
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: correct ggml-cpu vxe wording
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: change ggml_aligned_malloc alignment to 256
256 is the cache line size for s390x platforms
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
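A minimal sketch of the idea (not ggml's ggml_aligned_malloc itself): request 256-byte alignment via posix_memalign so hot buffers start on an s390x cache-line boundary.
```c
#include <stdlib.h>

#define TENSOR_ALIGNMENT 256  // s390x cache line size

// posix_memalign needs a power-of-two alignment that is a multiple of
// sizeof(void *); 256 satisfies both.
static void * aligned_malloc_256(size_t size) {
    void *ptr = NULL;
    if (posix_memalign(&ptr, TENSOR_ALIGNMENT, size) != 0) {
        return NULL;
    }
    return ptr;
}
```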
* ggml: resolve pr merge via cherry-pick 225bbbf
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml : fix LoongArch compile error with 128-bit SIMD (llama/11701)
* ggml: resolve pr merge via cherry-pick 4571953
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
* ggml: cmake remove fork when determining s390x machine type
thank you @ericcurtin
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
---------
Signed-off-by: Aaron Teo <aaron.teo1@ibm.com>
Co-authored-by: Jinyang He <hejinyang@loongson.cn>
Co-authored-by: junchao-zhao <68935141+junchao-loongson@users.noreply.github.com>
2025-02-27 08:55:36 +02:00
Johannes Gäßler
38ac47cd4d
CUDA: add option to compile without FlashAttention (llama/12025)
2025-02-27 08:55:36 +02:00
Johannes Gäßler
2d70cd36d7
CUDA: optimize FA for GQA + large batches (llama/12014)
2025-02-27 08:55:36 +02:00
Gian-Carlo Pascutto
98dab49b9a
cuda: Add Q5_1, Q5_0, Q4_1 and Q4_0 to F32 conversion support. (llama/12000)
2025-02-27 08:55:36 +02:00
PureJourney
b1385e9aa9
CUDA: correct the lowest Maxwell supported by CUDA 12 (llama/11984)
...
* CUDA: correct the lowest Maxwell supported by CUDA 12
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
2025-02-27 08:55:36 +02:00
Bodhi
48f5e893f5
MUSA: support ARM64 and enable dp4a, etc. (llama/11843)
...
* MUSA: support ARM64 and enable __dp4a, etc.
* fix cross entropy loss op for musa
* update
* add cc info log for musa
* add comment for the MUSA .cc calculation block
---------
Co-authored-by: Bodhi Hu <huaishun.hu@mthreads.com>
2025-02-27 08:55:36 +02:00
Charles Xu
dc21871fcb
ggml-cpu: Add CPU backend support for KleidiAI library (llama/11390)
...
* ggml-cpu: Add CPU backend support for KleidiAI library
* Add environmental variable GGML_KLEIDIAI_SME
* Add support for multithread LHS conversion
* Switch kernel selection order to dotprod and i8mm
* updates for review comments
* More updates for review comments
* Reorganize and rename KleidiAI files
* Move ggml-cpu-traits.h to source file
* Update cmake for SME build and add alignment for SME
* Remove append GGML_USE_CPU_KLEIDIAI to the GGML_CDEF_PUBLIC list
2025-02-27 08:55:36 +02:00
Prashant Vithule
64a430bc81
ggml: aarch64: implement SVE kernels for q3_K_q8_K vector dot (llama/11917)
...
* Added SVE Implementation for Q3_K Kernel in ggml-cpu-quants.c file
* Improved formatting of code in ggml-cpu-quants.c file
* style : minor fixes
* style : less whitespace
* style : ptr spacing
---------
Co-authored-by: vithulep <p.m.vithule1517@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-02-27 08:55:36 +02:00
Johannes Gäßler
51a3580c79
CUDA: use async data loading for FlashAttention (llama/11894)
...
* CUDA: use async data loading for FlashAttention
---------
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-02-27 08:55:36 +02:00
Rémy O
37a21dd43d
vulkan: implement several ops relevant for ggml_opt (llama/11769)
...
* vulkan: support memset_tensor
* vulkan: support GGML_OP_SUM
* vulkan: implement GGML_OP_ARGMAX
* vulkan: implement GGML_OP_SUB
* vulkan: implement GGML_OP_COUNT_EQUAL
* vulkan: implement GGML_OP_OPT_STEP_ADAMW
* vulkan: fix check_results RWKV_WKV6 crash and memory leaks
* vulkan: implement GGML_OP_REPEAT_BACK
* tests: remove invalid test-backend-ops REPEAT_BACK tests
* vulkan: fix COUNT_EQUAL memset using a fillBuffer command
2025-02-27 08:55:36 +02:00
Jeff Bolz
8a22a8b17f
vulkan: support multi/vision rope, and noncontiguous rope (llama/11902)
2025-02-27 08:55:36 +02:00
Hale Chan
fcbcad0c90
metal : fix the crash caused by the lack of residency set support on Intel Macs. (llama/11904)
2025-02-27 08:55:36 +02:00
Adrian Kretz
4444db7360
metal : optimize dequant q6_K kernel (llama/11892)
2025-02-27 08:55:36 +02:00
Georgi Gerganov
a7fc1038ca
repo : update links to new url (llama/11886)
...
* repo : update links to new url
ggml-ci
* cont : more urls
ggml-ci
2025-02-27 08:55:36 +02:00
Rémy O
1689aaf854
vulkan: initial support for IQ1_S and IQ1_M quantizations (llama/11528)
...
* vulkan: initial support for IQ1_S and IQ1_M quantizations
* vulkan: define MMV kernels for IQ1 quantizations
* devops: increase timeout of Vulkan tests again
* vulkan: simplify ifdef for init_iq_shmem
2025-02-27 08:55:36 +02:00
lhez
4b48fe449a
opencl: Fix rope and softmax (llama/11833)
...
* opencl: fix `ROPE`
* opencl: fix `SOFT_MAX`
* Add fp16 variant
* opencl: enforce subgroup size for `soft_max`
2025-02-27 08:55:36 +02:00
Diego Devesa
47cc043e69
cuda : add ampere to the list of default architectures (llama/11870)
2025-02-27 08:55:36 +02:00
Jinyang He
e3d9ffb98b
ggml: optimize some vec dot functions for LoongArch ASX (llama/11842)
...
* Optimize ggml_vec_dot_q3_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q4_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q6_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q5_K_q8_K for LoongArch ASX
* Optimize ggml_vec_dot_q2_K_q8_K for LoongArch ASX
* Optimize mul_sum_i8_pairs_float for LoongArch ASX
* Optimize ggml_vec_dot_iq4_xs_q8_K for LoongArch ASX
2025-02-27 08:55:36 +02:00
Eve
e22d69839d
vulkan: linux builds + small subgroup size fixes (llama/11767)
...
* mm subgroup size
* upload vulkan x86 builds
2025-02-27 08:55:36 +02:00
Jeffrey Morgan
defe731263
llamafile: use member variable instead of constant for iq4nlt (llama/11780)
2025-02-27 08:55:36 +02:00
R0CKSTAR
4e07957bf9
musa: bump MUSA SDK version to rc3.1.1 (llama/11822)
...
* musa: Update MUSA SDK version to rc3.1.1
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
* musa: Remove workaround in PR #10042
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
---------
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2025-02-27 08:55:36 +02:00
Diego Devesa
d2c5154bb5
ggml-cpu : add chunking support to mul_mat_id (llama/11666)
...
* ggml-cpu : add chunking support to mul_mat_id
* allocate chunk counter in wdata
parallelize src1 quantization by column to allow parallelization even when there is only one row
* disable for arm
* cleanup
* better way to disable for arm
* fix uninitialized counter when using 1 thread only
* revert test-backend-ops changes
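A hedged sketch of the chunked work distribution this describes (names are placeholders, not the ggml code): a shared atomic counter, in ggml kept in wdata, hands out chunk indices so any number of threads can pull work even when src1 has a single row and the split is by column.
```c
#include <stdatomic.h>

// Placeholder names, not the ggml implementation.
typedef struct {
    atomic_int current_chunk;  // next chunk index to be claimed
    int        n_chunks;       // total number of chunks
} chunk_state;

static void chunk_worker(chunk_state *st, void (*process_chunk)(int chunk)) {
    for (;;) {
        const int chunk = atomic_fetch_add(&st->current_chunk, 1);
        if (chunk >= st->n_chunks) {
            break;  // all chunks claimed
        }
        process_chunk(chunk);
    }
}
```
Because threads claim chunks dynamically, a thread that finishes early simply grabs the next index instead of idling.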
2025-02-27 08:55:36 +02:00
Xuan-Son Nguyen
4fac43fe00
ggml : x2 speed for WASM by optimizing SIMD (llama/11453)
...
* ggml : x2 speed for WASM by optimizing SIMD
* fix bad merging
* rm trailing spaces
* rm redundant clamp
* better quantize_row_q8_K
Co-authored-by: camel-cdr <camel-cdr@protonmail.com>
* remove memset that causes buffer overflow
Co-authored-by: camel-cdr <camel-cdr@protonmail.com>
---------
Co-authored-by: camel-cdr <camel-cdr@protonmail.com>
2025-02-27 08:55:36 +02:00
uvos
3be9670f17
HIP: Remove GCN from list of devices that avoid MMQ (llama/11831)
2025-02-27 08:55:36 +02:00
uvos
86729fcd6d
HIP: Switch to std::vector in rocblas version check (llama/11820)
2025-02-27 08:55:36 +02:00
bandoti
7fbca6304e
cleanup: fix compile warnings associated with gnu_printf (llama/11811)
2025-02-27 08:55:36 +02:00
Richard
d597f83e1a
ggml : fix multi-threaded clamp_f32 (llama/11824)
...
* Bug fix for clamp_f32
When using tensors larger than 1D, the clamp operation does not work due to the restriction of returning early when ith is not 0.
* Bug fix for clamp_f32
* Bug fix for clamp_f32
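A minimal sketch of the row-split pattern implied by the fix (not the actual ggml_compute_forward_clamp): every thread ith of nth clamps its own strided subset of rows instead of only thread 0 doing the work.
```c
#include <stddef.h>

// Thread ith of nth handles rows ith, ith+nth, ... so all threads contribute.
static void clamp_rows(float *data, int nrows, int ncols,
                       float min, float max, int ith, int nth) {
    for (int r = ith; r < nrows; r += nth) {
        float *row = data + (size_t) r * ncols;
        for (int c = 0; c < ncols; ++c) {
            row[c] = row[c] < min ? min : (row[c] > max ? max : row[c]);
        }
    }
}
```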
2025-02-27 08:55:36 +02:00
Weizhao Ouyang
e5edcc6259
ggml-cpu: Fix duplicate MATMUL_INT8 (llama/11817)
...
Signed-off-by: Weizhao Ouyang <o451686892@gmail.com>
2025-02-27 08:55:36 +02:00
Johannes Gäßler
556f773d53
CUDA: fix CUDART_VERSION checks (llama/11821)
2025-02-27 08:55:36 +02:00
Sheldon Robinson
91d02de332
Fix #11802 : Compile bug - RegQueryValueExA changed to RegQueryValueEx (llama/11803)
...
* Fix #11802 : Compile bug - RegQueryValueExA changed to RegQueryValueEx
* Fix #11802 : PR #11803 - keep RegQueryValueExA, remove TEXT macro, description needs to be ANSI string
2025-02-27 08:55:36 +02:00
Johannes Gäßler
1b67d72f87
CUDA: use arch list for compatibility check (llama/11775)
...
* CUDA: use arch list for feature availability check
---------
Co-authored-by: Diego Devesa <slarengh@gmail.com>
2025-02-27 08:55:36 +02:00
Maxim Evtush
14d7c0368d
fix: typos in documentation files (llama/11791)
...
* Update ggml.c
* Update arg.cpp
* Update speculative.h
2025-02-27 08:55:36 +02:00
Danny Milosavljevic
db6e19188a
vulkan: Make Vulkan optional at runtime (ggml/11493). (llama/11494)
...
Co-authored-by: Jeff Bolz <jbolz@nvidia.com>
2025-02-27 08:55:36 +02:00
Wagner Bruna
b4b063a5c9
vulkan: add environment variable GGML_VK_PREFER_HOST_MEMORY to avoid VRAM allocation (llama/11592)
2025-02-27 08:55:36 +02:00
Jeff Bolz
930b739e7a
vulkan: account for lookup tables when checking shared memory size (llama/11502)
2025-02-27 08:55:36 +02:00
Karol Kontny
5981352bb5
ggml: Fix data race in ggml threadpool (llama/11736)
...
After the barrier in the last iteration is executed, the loop termination
condition is still evaluated. However, the main thread may already have
destroyed the cgraph object and its nodes, so another thread then accesses
memory that is already gone. Trouble can also happen when n_nodes == 0 or
abort is called, but I'm not sure whether the former situation is possible.
The last synchronization should be done after the loop to ensure the
cgraph/cplan won't be accessed after the main thread exits from the function.
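A simplified, self-contained illustration of that "final synchronization after the loop" pattern (this is not the ggml threadpool code; pthread barriers stand in for ggml's internal barrier): the loop condition reads the graph, so every thread, the main one included, passes one extra barrier after the loop before the graph may be freed.
```c
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdlib.h>

#define N_THREADS 4

typedef struct { int n_nodes; } graph_t;   // stands in for the cgraph

static pthread_barrier_t barrier;
static graph_t *graph;                     // freed by the main thread

static void compute(void) {
    for (int i = 0; i < graph->n_nodes; ++i) {   // condition reads the graph
        /* ... each thread computes its slice of node i ... */
        pthread_barrier_wait(&barrier);          // per-node synchronization
    }
    // final synchronization AFTER the loop: every thread has evaluated the
    // loop condition for the last time before anyone passes this point
    pthread_barrier_wait(&barrier);
}

static void * worker(void *arg) { (void) arg; compute(); return NULL; }

int main(void) {
    graph = malloc(sizeof *graph);
    graph->n_nodes = 8;
    pthread_barrier_init(&barrier, NULL, N_THREADS);

    pthread_t th[N_THREADS - 1];
    for (int t = 0; t < N_THREADS - 1; ++t) {
        pthread_create(&th[t], NULL, worker, NULL);
    }
    compute();    // the main thread participates in the computation
    free(graph);  // safe only because of the final barrier in compute()

    for (int t = 0; t < N_THREADS - 1; ++t) {
        pthread_join(th[t], NULL);
    }
    pthread_barrier_destroy(&barrier);
    return 0;
}
```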
2025-02-27 08:55:36 +02:00
Johannes Gäßler
7561da244e
CUDA: fix min. version for movmatrix (llama/11751)
2025-02-27 08:55:36 +02:00
Jeff Bolz
be83f342fb
vulkan: print shared memory size (llama/11719)
2025-02-27 08:55:36 +02:00
Akarshan Biswas
fd369871f7
SYCL: remove XMX info from print devices (llama/11712)
2025-02-27 08:55:36 +02:00
Jinyang He
bbd8364f5e
ggml : optimize and build warning fix for LoongArch (llama/11709)
...
* ggml : optimize convert f32<->f16 for loongarch_asx
* ggml : optimize loongarch_asx extend i16,i8,u8 to i32,i16
* ggml : fix warnings when running CPU CI locally on LoongArch
2025-02-27 08:55:36 +02:00
Akarshan Biswas
e4102440ef
SYCL: Adjust support condition for norm operators (llama/11674)
...
SYCL does not support non-contiguous tensors for norm operations
2025-02-27 08:55:36 +02:00
junchao-zhao
f8242ec483
ggml : fix LoongArch compile error with 128-bit SIMD (llama/11701)
2025-02-27 08:55:36 +02:00
Jeff Bolz
ef51b4cba4
vulkan: optimize coopmat2 iq2/iq3 callbacks (llama/11521)
...
* vulkan: optimize coopmat2 iq2/iq3 callbacks
* build: trigger CI on GLSL compute shader changes
2025-02-27 08:55:36 +02:00
Rémy O
6f08b24146
vulkan: initial support for IQ4_XS quantization (llama/11501)
2025-02-27 08:55:36 +02:00
Jeff Bolz
7c165d7fa8
vulkan: use smaller combined allocations to avoid fragmentation (llama/11551)
2025-02-27 08:55:36 +02:00
Charles Duffy
2f0cf44915
metal : avoid breaking build when metal API predates TARGET_OS_VISION (llama/11690)
...
Avoids breakage in nix flake build introduced by b0569130c5e9c671152c913d82803b7c2f014ff9
2025-02-27 08:55:36 +02:00
Georgi Gerganov
b9c972fd0d
metal : adjust support conditions for norm operators (llama/11671)
...
cont #11659
ggml-ci
2025-02-27 08:55:36 +02:00
Johannes Gäßler
01c9aafbfd
CUDA: support for mat. mul. with ne03 != ne13 (llama/11656)
2025-02-27 08:55:36 +02:00
Johannes Gäßler
bae6bbf487
CUDA: non-contiguous (RMS) norm support (llama/11659)
...
* CUDA: non-contiguous (RMS) norm support
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2025-02-27 08:55:36 +02:00
fxzjshm
c310272fa0
HIP: force max threads per block to be 1024 (llama/11621)
...
Some old or vendor-forked versions of llvm still use 256. Explicitly set it to 1024 to align with upstream llvm.
Signed-off-by: fxzjshm <fxzjshm@163.com>
2025-02-27 08:55:36 +02:00