whisper.cpp/ggml-cuda
| Name | Last commit message | Date |
|---|---|---|
| template-instances | CUDA: refactor mmq, dmmv, mmvq (llama/7716) | 2024-06-16 18:19:48 +03:00 |
| acc.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| acc.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| arange.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| arange.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| argsort.cu | ggml : mul_mat_id use the same tensor for all the experts (llama/6387) | 2024-04-07 16:15:57 +03:00 |
| argsort.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| binbcast.cu | ggml : group all experts in a single ggml_mul_mat_id (llama/6505) | 2024-05-13 11:02:26 +03:00 |
| binbcast.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| clamp.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| clamp.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| common.cuh | CUDA: use tensor cores for MMQ (llama/7676) | 2024-06-16 18:19:48 +03:00 |
| concat.cu | cuda : non-cont concat support (llama/7610) | 2024-06-16 18:19:48 +03:00 |
| concat.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| convert.cu | ggml : drop support for QK_K=64 (llama/7473) | 2024-06-16 18:19:48 +03:00 |
| convert.cuh | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| cpy.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| cpy.cuh | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| dequantize.cuh | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| diagmask.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| diagmask.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| dmmv.cu | CUDA: refactor mmq, dmmv, mmvq (llama/7716) | 2024-06-16 18:19:48 +03:00 |
| dmmv.cuh | sync : llama.cpp (skip) | 2024-04-07 16:15:57 +03:00 |
| fattn-common.cuh | CUDA: use tensor cores for MMQ (llama/7676) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f16.cu | CUDA: use tensor cores for MMQ (llama/7676) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f16.cuh | CUDA: faster large batch FA without tensor cores (llama/7314) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f32.cu | CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (llama/7681) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f32.cuh | CUDA: faster large batch FA without tensor cores (llama/7314) | 2024-06-16 18:19:48 +03:00 |
| fattn-vec-f16.cu | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn-vec-f16.cuh | CUDA: use tensor cores for MMQ (llama/7676) | 2024-06-16 18:19:48 +03:00 |
| fattn-vec-f32.cu | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn-vec-f32.cuh | CUDA: fix broken oob check for FA vec f32 kernel (llama/7904) | 2024-06-16 18:19:48 +03:00 |
| fattn-wmma-f16.cuh | CUDA: use tensor cores for MMQ (llama/7676) | 2024-06-16 18:19:48 +03:00 |
| fattn.cu | CUDA: fix Pascal FA, deq. KV to FP16 for batch > 8 (llama/7681) | 2024-06-16 18:19:48 +03:00 |
| fattn.cuh | ggml : add Flash Attention (llama/5021) | 2024-05-13 11:02:26 +03:00 |
| getrows.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| getrows.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| im2col.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| im2col.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| mma.cuh | CUDA: int8 tensor cores for MMQ (q4_K, q5_K, q6_K) (llama/7860) | 2024-06-16 18:19:48 +03:00 |
| mmq.cu | CUDA: revise q8_1 data layout for mul_mat_q (llama/7824) | 2024-06-16 18:19:48 +03:00 |
| mmq.cuh | CUDA: int8 tensor cores for MMQ (q4_K, q5_K, q6_K) (llama/7860) | 2024-06-16 18:19:48 +03:00 |
| mmvq.cu | CUDA: refactor mmq, dmmv, mmvq (llama/7716) | 2024-06-16 18:19:48 +03:00 |
| mmvq.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| norm.cu | ggml : fix YARN + add tests + add asserts (llama/7617) | 2024-06-16 18:19:48 +03:00 |
| norm.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pad.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pad.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pool2d.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pool2d.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| quantize.cu | CUDA: revise q8_1 data layout for mul_mat_q (llama/7824) | 2024-06-16 18:19:48 +03:00 |
| quantize.cuh | CUDA: revise q8_1 data layout for mul_mat_q (llama/7824) | 2024-06-16 18:19:48 +03:00 |
| rope.cu | ggml : refactor rope norm/neox (llama/7634) | 2024-06-16 18:19:48 +03:00 |
| rope.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| scale.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| scale.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| softmax.cu | CUDA: deduplicate FlashAttention code (llama/7352) | 2024-06-16 18:19:48 +03:00 |
| softmax.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| sumrows.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| sumrows.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| tsembd.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| tsembd.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| unary.cu | tests : add non-cont unary tests (llama/7857) | 2024-06-16 18:19:48 +03:00 |
| unary.cuh | feat: implemented sigmoid function (ggml/806) | 2024-05-13 11:02:26 +03:00 |
| upscale.cu | ggml : add ggml_upscale_ext (ggml/814) | 2024-06-16 18:19:48 +03:00 |
| upscale.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| vecdotq.cuh | CUDA: refactor mmq, dmmv, mmvq (llama/7716) | 2024-06-16 18:19:48 +03:00 |
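The layout above follows a one-op-per-file pattern: each operation (acc, clamp, scale, softmax, ...) gets a `.cu` file holding its kernel and a matching `.cuh` header for its entry point. As a minimal sketch of what one such per-op kernel looks like, here is a self-contained elementwise scale kernel; the function names and signatures are illustrative assumptions, not the actual ggml-cuda code in `scale.cu`.

```cuda
// Illustrative sketch of the one-op-per-file pattern; NOT the real scale.cu.
#include <cuda_runtime.h>
#include <cstdio>

// Elementwise kernel: dst[i] = x[i] * scale for the first k elements.
__global__ void scale_f32(const float * x, float * dst, float scale, int k) {
    const int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i >= k) {
        return; // out-of-bounds guard for the last partial block
    }
    dst[i] = x[i] * scale;
}

int main() {
    const int k = 1024;
    float * x;
    float * dst;
    cudaMallocManaged(&x,   k * sizeof(float)); // unified memory for brevity
    cudaMallocManaged(&dst, k * sizeof(float));
    for (int i = 0; i < k; ++i) {
        x[i] = 1.0f;
    }

    const int block = 256;
    const int grid  = (k + block - 1) / block; // round up to cover all k
    scale_f32<<<grid, block>>>(x, dst, 2.0f, k);
    cudaDeviceSynchronize();

    printf("dst[0] = %f\n", dst[0]); // expect 2.000000

    cudaFree(x);
    cudaFree(dst);
    return 0;
}
```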