| File | Last commit | Date |
|------|-------------|------|
| acc.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| acc.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| arange.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| arange.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| argsort.cu | ggml : mul_mat_id use the same tensor for all the experts (llama/6387) | 2024-04-07 16:15:57 +03:00 |
| argsort.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| binbcast.cu | ggml : group all experts in a single ggml_mul_mat_id (llama/6505) | 2024-05-13 11:02:26 +03:00 |
| binbcast.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| clamp.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| clamp.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| common.cuh | update HIP_UMA #7399 (llama/7414) | 2024-06-16 18:19:48 +03:00 |
| concat.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| concat.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| convert.cu | ggml : drop support for QK_K=64 (llama/7473) | 2024-06-16 18:19:48 +03:00 |
| convert.cuh | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| cpy.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| cpy.cuh | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| dequantize.cuh | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| diagmask.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| diagmask.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| dmmv.cu | ggml : drop support for QK_K=64 (llama/7473) | 2024-06-16 18:19:48 +03:00 |
| dmmv.cuh | sync : llama.cpp (skip) | 2024-04-07 16:15:57 +03:00 |
| fattn-common.cuh | CUDA: deduplicate FlashAttention code (llama/7352) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f16.cu | CUDA: fix FA out-of-bounds reads (llama/7479) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f16.cuh | CUDA: faster large batch FA without tensor cores (llama/7314) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f32.cu | CUDA: fix FA out-of-bounds reads (llama/7479) | 2024-06-16 18:19:48 +03:00 |
| fattn-tile-f32.cuh | CUDA: faster large batch FA without tensor cores (llama/7314) | 2024-06-16 18:19:48 +03:00 |
| fattn-vec-f16.cu | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn-vec-f16.cuh | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn-vec-f32.cu | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn-vec-f32.cuh | CUDA: add FP32 FlashAttention vector kernel (llama/7188) | 2024-05-14 19:16:29 +03:00 |
| fattn.cu | CUDA: deduplicate FlashAttention code (llama/7352) | 2024-06-16 18:19:48 +03:00 |
| fattn.cuh | ggml : add Flash Attention (llama/5021) | 2024-05-13 11:02:26 +03:00 |
| getrows.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| getrows.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| im2col.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| im2col.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| mmq.cu | ggml : drop support for QK_K=64 (llama/7473) | 2024-06-16 18:19:48 +03:00 |
| mmq.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| mmvq.cu | cuda : fix bounds check for src0 rows in MMVQ kernel (#2231) | 2024-06-11 17:39:01 +03:00 |
| mmvq.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| norm.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| norm.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pad.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pad.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pool2d.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| pool2d.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| quantize.cu | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| quantize.cuh | llama : add Command R Plus support (llama/6491) | 2024-04-09 20:26:18 +03:00 |
| rope.cu | cuda : fix rope + add tests (llama/7452) | 2024-06-16 18:19:48 +03:00 |
| rope.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| scale.cu | Introduction of CUDA Graphs to LLama.cpp (llama/6766) | 2024-05-13 11:02:26 +03:00 |
| scale.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| softmax.cu | CUDA: deduplicate FlashAttention code (llama/7352) | 2024-06-16 18:19:48 +03:00 |
| softmax.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| sumrows.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| sumrows.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| tsembd.cu | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| tsembd.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| unary.cu | feat: implemented sigmoid function (ggml/806) | 2024-05-13 11:02:26 +03:00 |
| unary.cuh | feat: implemented sigmoid function (ggml/806) | 2024-05-13 11:02:26 +03:00 |
| upscale.cu | ggml : add ggml_upscale_ext (ggml/814) | 2024-06-16 18:19:48 +03:00 |
| upscale.cuh | sync : ggml (#2001) | 2024-03-27 18:55:10 +02:00 |
| vecdotq.cuh | ggml : drop support for QK_K=64 (llama/7473) | 2024-06-16 18:19:48 +03:00 |