Commit Graph

1458 Commits

Author SHA1 Message Date
a6b0950916 ggml : compute forward no longer pass src tensors (ggml/729)
* refactored compute forward to not pass in the src tensors each time

* fix merge issues with flags

* missed one place in the last commit to fix the is_param / flags issue

* minor spacing fix

* fixed some variable assignments so all tests locally are passing

* new change after merge fix

---------

Co-authored-by: siddharthvader <siddharth@coinlist.co>
2024-02-22 15:12:35 +02:00
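
A minimal sketch of the refactor described in the commit above: the compute-forward helpers now take only the destination tensor and read their inputs from `dst->src[]` instead of receiving explicit src arguments. The function name is hypothetical, not the exact ggml symbol.

```cpp
#include "ggml.h"

struct ggml_compute_params; // ggml's internal per-thread compute params

static void example_compute_forward_add(
        const struct ggml_compute_params * params,
        struct ggml_tensor * dst) {
    // src tensors are no longer passed in; they are read from the dst tensor
    const struct ggml_tensor * src0 = dst->src[0];
    const struct ggml_tensor * src1 = dst->src[1];

    GGML_ASSERT(ggml_are_same_shape(src0, src1) && ggml_are_same_shape(src0, dst));

    // ... element-wise add of src0 and src1 into dst, partitioned across threads ...
    (void) params;
}
```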
d352dbd163 ggml : fix conv_2d batch mode (ggml/737)
Co-authored-by: bssrdf <bssrdf@gmail.com>
2024-02-22 15:12:32 +02:00
eb23f4ef16 openvino : fix convert-whisper-to-openvino.py (#1890)
Fix issue: Conversion from Whisper to OpenVINO failed #1870

convert-whisper-to-openvino.py stopped working with OpenVINO version 2023.0.0-10926-b4452d56304-releases/2023/0.

Error was: TypeError: load(): incompatible function arguments. The following argument types are supported:
    1. (self: openvino._pyopenvino.FrontEnd, path: object) -> ov::frontend::InputModel

Tested successfully with a large-v3 conversion.

Co-authored-by: Stefan Grundmann <grundmanns@sandiego.gov>
2024-02-22 15:11:35 +02:00
c56344b509 main : fix file existence check in main.cpp (#1889)
In commit dda4b0e of PR #1872, I introduced a check for the
existence of files before loading the model. However, I had not
considered the case where whisper.cpp might also read from stdin;
in that case, the check should ignore the "-" argument, as it
does not represent a regular file.

Additionally, this commit removes the usage of 'stat()' in favor of
the recently introduced function 'is_file_exist()' in common.cpp from
PR #1871.

Apologies for the bug introduced in the previous PR and any
inconvenience it may have caused.
2024-02-22 15:01:08 +02:00
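
A minimal sketch of the check described in the commit above, assuming a helper like common.cpp's `is_file_exist()`; the exact main.cpp wiring may differ. The "-" argument means "read from stdin" and is exempt from the check.

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// simple existence check, in the spirit of common.cpp's is_file_exist()
static bool is_file_exist(const char * filename) {
    std::ifstream infile(filename);
    return infile.good();
}

static bool check_input_file(const std::string & fname) {
    if (fname == "-") {
        return true; // stdin, not a regular file: skip the existence check
    }
    if (!is_file_exist(fname.c_str())) {
        fprintf(stderr, "error: input file not found '%s'\n", fname.c_str());
        return false;
    }
    return true;
}
```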
59119f4f20 talk-llama : sync llama.cpp 2024-02-20 12:09:57 +02:00
276615d708 make : fix CUBLAS link with WSL (#1878) 2024-02-20 12:05:38 +02:00
b602819b6e sync : ggml 2024-02-19 15:54:25 +02:00
c2c606f05b ggml : resolve merge conflicts (ggml/0)
ggml-ci
2024-02-19 15:53:25 +02:00
83afebe872 common : add IQ1_S (ggml/0)
ggml-ci
2024-02-19 15:53:25 +02:00
a4d8f9d559 ci : enable -Werror for CUDA builds (llama/5579)
* cmake : pass -Werror through -Xcompiler

ggml-ci

* make, cmake : enable CUDA errors on warnings

ggml-ci
2024-02-19 15:53:24 +02:00
5ec1e0edfa cuda, metal : fix nans in soft_max (llama/5574)
* cuda : fix nans in soft_max

* metal : fix nans in soft_max

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 15:53:24 +02:00
30a11b1ab8 ggml : android and old glibc NUMA incompatibility bugfixes (llama/5557)
* #ifdef out some code NUMA blocks for Android due to lack of support

* added some __ANDROID__ #ifdef gates around the NUMA code and forced glibc prior to 2.29 to use a syscall for getcpu instead of the wrapper

* changed the gates on the NUMA platform-specific code to __gnu_linux__ to skip any platform without glibc

* harmonized the #if defined blocks for the NUMA code on __gnu_linux__, since that is the only model being followed anyway

---------

Co-authored-by: root <root@nenya.lothlorien.ca>
2024-02-19 15:53:24 +02:00
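
A sketch of the glibc gating described above, assuming the wrapper/syscall split looks roughly like this (the helper name is hypothetical): glibc gained a `getcpu()` wrapper only in 2.29, so older versions fall back to the raw syscall, and everything stays behind `__gnu_linux__` so platforms without glibc skip the NUMA code entirely.

```cpp
#if defined(__gnu_linux__)
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>

static int example_getcpu(unsigned * cpu, unsigned * node) {
#if defined(__GLIBC__) && (__GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 29))
    return getcpu(cpu, node);                          // glibc >= 2.29: use the wrapper
#else
    return (int) syscall(SYS_getcpu, cpu, node, NULL); // older glibc: raw syscall
#endif
}
#endif // __gnu_linux__
```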
f04e6b87d7 ggml : restore vec dot stride arg names (llama/5453) 2024-02-19 15:53:24 +02:00
0c33928b55 ci : fix wikitext url + compile warnings (llama/5569)
ggml-ci
2024-02-19 15:53:24 +02:00
0775374750 metal : fix unused warnings (llama/0) 2024-02-19 15:53:24 +02:00
7d90bb035b ggml, common, examples, tests : fixed type arguments in printf (llama/5528) 2024-02-19 15:53:24 +02:00
2c1ad21ba8 1.5 bit quantization (llama/5453)
* iq1_s: WIP basics

* iq1_s: CUDA is working

* iq1_s: scalar CPU dot product

* iq1_s: WIP AVX2 dot product - something is not right

* Fix tests

* Fix shadow warnings

* Fix after merge with latest master

* iq1_s: AVX2 finally works

* iq1_s: ARM_NEON dot product. Works, but not very fast

* iq1_s: better grid

* iq1_s: use IQ2_XXS for attn_output

At a cost of 0.04 extra bpw this gives a big improvement in PPL.

* iq1_s: Metal basics

Dequantize works, but not dot product

* iq1_s: Metal works, but quite slow

As usual, Apple Silicon does not like the code I write.

* iq1_s: Tests

* iq1_s: slightly faster dot product

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-19 15:53:23 +02:00
eca5ff9868 ggml : add ALiBi support for ggml_soft_max_ext (llama/5488) 2024-02-19 15:53:23 +02:00
1b25d2fa0a ci : add an option to fail on compile warning (llama/3952)
* feat(ci): add an option to fail on compile warning

* Update CMakeLists.txt

* minor : fix compile warnings

ggml-ci

* ggml : fix unreachable code warnings

ggml-ci

* ci : disable fatal warnings for windows, ios and tvos

* ggml : fix strncpy warning

* ci : disable fatal warnings for MPI build

* ci : add fatal warnings to ggml-ci

ggml-ci

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 15:53:23 +02:00
74a6acc999 cmake : fix VULKAN and ROCm builds (llama/5525)
* cmake : fix VULKAN and ROCm builds

* cmake : fix (cont)

* vulkan : fix compile warnings

ggml-ci

* cmake : fix

ggml-ci

* cmake : minor

ggml-ci
2024-02-19 15:53:23 +02:00
a4ed8a0821 ggml : add numa options (llama/5377)
* Added numa options to allow finer-grained control as well as plumbing for a new mirror mode that will require numa.h

* Reverted Makefile

* Fixed include

* Removed sched.h from ggml.h, moved ggml_get_numa_affinity into ggml.c, removed trailing whitespace and fixed up a few inconsistent variables

* removed trailing whitespace

* Added numa options to allow finer-grained control as well as plumbing for a new mirror mode that will require numa.h

* Reverting Makefile

* Fixed a number of issues with the move from BOOL to ggml_numa_strategies. Added a note about mirror mode not being implemented yet

* Removing MIRROR_MODE code for this PR

* Removing last bit of MIRROR_MODE code for this PR

* Removed an unneeded branch in the server.cpp example, moved get_numa_affinity, and made it static

* Fixed lingering init_llama_backend() bool calls in tests and examples

* Removed enum llama_numa_strategies

* Revert bad merge with dynatemp flags

* add missing enum ggml_numa_strategies declaration and revert sync problem with master

* add missing enum ggml_numa_strategies declaration

* fixed ggml_init_numa variable

* Update ggml.h

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* Update READMEs with info about numa flags, change INTERLEAVE strategy name to DISTRIBUTE everywhere, implement the improved distribution strategy from @rankaiyx, fix a spelling mistake and un-merge some bad merges

* split numa init out from llama_backend_init and created llama_numa_init. Updated all code paths and samples

* Fix up some boolean vs enum comparisons

* Added #ifdefs for non-Linux OSes that don't have the cpu_set_t datatype

* Update ggml.h

Align enum values

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml.c

Remove whitespace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml.c

align parameters

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update examples/server/server.cpp

remove whitespace and align brace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update common/common.cpp

Remove whitespace and align brace

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* unified ggml_numa_strategy enum and fixed text alignment in server.cpp example

* Update ggml.c

simplified return for platforms without NUMA support

Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>

* removed redundant else from cli argument processing of --numa

* whitespace

---------

Co-authored-by: root <root@nenya.lothlorien.ca>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-02-19 15:53:23 +02:00
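
A minimal usage sketch of the split described above, assuming the post-change API: `llama_backend_init()` no longer takes a NUMA flag, and NUMA is configured separately via `llama_numa_init()` with a `ggml_numa_strategy` value (the former INTERLEAVE strategy is now called DISTRIBUTE).

```cpp
#include "llama.h"

int main() {
    llama_backend_init();                              // no NUMA flag anymore
    llama_numa_init(GGML_NUMA_STRATEGY_DISTRIBUTE);    // NUMA configured separately

    // ... load a model and run inference ...

    llama_backend_free();
    return 0;
}
```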
9f675e021c cuda : print message when initialization fails (llama/5512)
* cuda : print message when initialization fails

* use CUDA_NAME both times
2024-02-19 15:53:23 +02:00
a38efcb9fd vulkan: Find optimal memory type but with fallback (llama/5381)
* @0cc4m feedback

* More feedback @0cc4m
2024-02-19 15:53:22 +02:00
31591649a0 Early return for zero size calls to get_tensor. (llama/5482)
* Early return for zero size calls to get_tensor.

Signed-off-by: Adam Treat <treat.adam@gmail.com>

* Update ggml-kompute.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Update ggml-kompute.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Add an early return to the get/set tensor functions when the size is zero.

Signed-off-by: Adam Treat <treat.adam@gmail.com>

* Early return after the assertions.

Signed-off-by: Adam Treat <treat.adam@gmail.com>

* Since we now do the early return in the generic backend, there is no reason to do so here as well.

Signed-off-by: Adam Treat <treat.adam@gmail.com>

---------

Signed-off-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-19 15:53:22 +02:00
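
A sketch of the pattern described above (names are illustrative, not the actual ggml-backend symbols): validate the request first, then return early when the size is zero so individual backends never have to handle zero-byte copies.

```cpp
#include "ggml.h"
#include <cstring>

static void example_backend_get_tensor(const struct ggml_tensor * tensor,
                                       void * data, size_t offset, size_t size) {
    // assertions first, as in the commit: validate even zero-size requests
    GGML_ASSERT(tensor->data != NULL);
    GGML_ASSERT(offset + size <= ggml_nbytes(tensor));

    if (size == 0) {
        return; // nothing to read
    }

    memcpy(data, (const char *) tensor->data + offset, size);
}
```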
4f5c46a84f ggml-quants : fix compiler warnings (shadow variable) (llama/5472)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-02-19 15:53:22 +02:00
462ffc58db ggml-sycl: Replace 3d ops with macro (llama/5458)
* use macro

* use macro

* fix format
2024-02-19 15:53:21 +02:00
65faae0b6a build : update CBLAS flags + fix unused var warning (#0) 2024-02-19 14:44:46 +02:00
dda4b0ed06 main : check if input files exist before proceeding (#1872)
Until the most recent commit (3d42463), the main.cpp example did not
check whether the input files exist. Consequently, the model was
loaded first, before any failure to process a file was reported. In
environments with an HDD, this can take about 50 seconds or more,
depending on the loaded model.

This commit addresses this issue by checking in advance whether the
input files exist or not.
2024-02-19 10:51:26 +02:00
07d04280be examples : clean up common code (#1871)
move some utility functions into common.h
2024-02-19 10:50:15 +02:00
917c56ded4 models : fix openvino setup info (#1874) 2024-02-19 02:19:47 +00:00
3d42463845 models : add update py requirements 2024-02-13 11:51:32 +02:00
3ffc83d90a swift : package no longer use ggml dependency (#1861)
* Revert "swift : update Package.swift to use ggml as package dependency (#1701)"

This reverts commit 993acb5d41.

* spm : add ggml.h
2024-02-12 19:54:11 +02:00
e3c5e2cba8 whisper : fix external encoder (#1860) 2024-02-12 19:53:51 +02:00
b742f13e70 sync : ggml 2024-02-12 19:07:56 +02:00
52c529eeb1 ggml-alloc : allocate all leafs as if they were inputs (ggml/731)
* ggml-alloc : allocate all leafs as if they were inputs

* ensure static leafs are allocated

* gpt-2-backend : remove unnecessary ggml_new_tensor

* update other gpt-2 examples to remove ggml_new_tensor calls in the graph
2024-02-12 19:07:38 +02:00
551529290d talk-llama : sync llama.cpp 2024-02-12 10:39:58 +02:00
25a90ffa38 sync : ggml 2024-02-12 09:32:15 +02:00
866b67ca93 ggml-backend : sync remnant 2024-02-12 09:31:12 +02:00
d7e9f58f7f CUDA: mul_mat_vec_q tiling, refactor mul mat logic (llama/5434)
* CUDA: mul_mat_vec_q tiling, refactor mul mat logic

Co-authored-by: slaren <slarengh@gmail.com>

---------

Co-authored-by: slaren <slarengh@gmail.com>
2024-02-12 09:31:12 +02:00
04839bae22 vulkan: only use M-sized matmul on Apple GPUs (llama/5412)
* vulkan: refactor guess_matmul_pipeline for vendor

Refactor ggml_vk_guess_matmul_pipeline to simplify adding per-vendor
conditionals.

Signed-off-by: Sergio Lopez <slp@redhat.com>

* vulkan: only use M-sized matmul on Apple GPUs

L-sized and S-sized matmuls are broken on Apple GPUs, so force the
M-sized pipeline with this vendor.

Signed-off-by: Sergio Lopez <slp@redhat.com>

---------

Signed-off-by: Sergio Lopez <slp@redhat.com>
2024-02-12 09:31:12 +02:00
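
An illustrative sketch of the per-vendor pipeline choice described above. The enum, function name, and size thresholds are hypothetical, not the actual ggml-vulkan API; 0x106B is the PCI vendor ID reported by Apple GPUs.

```cpp
#include <cstdint>

enum matmul_pipeline { MATMUL_S, MATMUL_M, MATMUL_L };

static matmul_pipeline guess_matmul_pipeline(uint32_t vendor_id, int m, int n) {
    if (vendor_id == 0x106B) {
        // L- and S-sized shaders misbehave on Apple GPUs, so always pick M
        return MATMUL_M;
    }
    // hypothetical size heuristics for other vendors
    if (m <= 32 || n <= 32) {
        return MATMUL_S;
    }
    if (m <= 64 || n <= 64) {
        return MATMUL_M;
    }
    return MATMUL_L;
}
```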
3cc6e04a52 ggml : fix compile warnings (unused vars) (llama/4966) 2024-02-12 09:31:11 +02:00
b7ef178b9c ggml : add mmla kernels for quantized GEMM (llama/4966)
* ggml: aarch64: implement smmla kernel for q8_0_q8_0 quantized gemm

armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. This commit adds an mmla kernel for
q8_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8".

On AWS Graviton3 processors this kernel resulted in up to a 1.5x
improvement in prompt evaluation throughput compared to the
default sdot kernel.

* ggml: aarch64: implement smmla kernel for q4_0_q8_0 quantized gemm

armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. This commit adds an mmla kernel for
q4_0_q8_0 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8".

On AWS Graviton3 processors this kernel resulted in up to a 1.5x
improvement in prompt evaluation throughput compared to the
default sdot kernel.

* ggml: aarch64: implement smmla kernel for q4_1_q8_1 quantized gemm

armv8.2-a and above supports MMLA instructions that have higher
throughput than DOT. This commit adds an mmla kernel for
q4_1_q8_1 gemm. The feature is enabled if the platform supports
"__ARM_FEATURE_MATMUL_INT8".

On AWS Graviton3 processors this kernel resulted in up to a 1.5x
improvement in prompt evaluation throughput compared to the
default sdot kernel.

* ggml: update unit tests for the new vec_dot interface

* llama.cpp: add MATMUL_INT8 capability to system_info
2024-02-12 09:31:11 +02:00
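
A minimal sketch of the i8mm SMMLA instruction used by these kernels, gated on `__ARM_FEATURE_MATMUL_INT8` as the commit describes. This shows only the core accumulate step (the helper name is hypothetical), not the actual ggml vec_dot kernel.

```cpp
#if defined(__ARM_FEATURE_MATMUL_INT8)
#include <arm_neon.h>

// Multiply-accumulate a 2x8 int8 block by an 8x2 int8 block into a 2x2 int32
// accumulator; SMMLA offers higher throughput than the SDOT-based path.
static inline int32x4_t mmla_acc_2x2(int32x4_t acc, int8x16_t a, int8x16_t b) {
    return vmmlaq_s32(acc, a, b);
}
#endif // __ARM_FEATURE_MATMUL_INT8
```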
47dfe9d4db metal : use autoreleasepool to avoid memory leaks (llama/5437)
There appears to be a known memory leak when using
`MTLCommandBuffer`. It is suggested to use `@autoreleasepool` in
[1,2]

[1] https://developer.apple.com/forums/thread/662721
[2] https://forums.developer.apple.com/forums/thread/120931

This change-set wraps `ggml_metal_graph_compute` in an
`@autoreleasepool` block.

This commit addresses https://github.com/ggerganov/llama.cpp/issues/5436
2024-02-12 09:31:11 +02:00
1d3270cc8f ggml-alloc : v3 (ggml/727)
* ggml-alloc v3

ggml-ci

* fix ci

ggml-ci

* whisper : check for backend buffer allocation failures

* whisper : avoid leaks when initialization fails

* cleanup

ggml-ci

* style fixes

ggml-ci
2024-02-12 09:31:11 +02:00
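
A sketch of the whisper-side failure check mentioned above (illustrative, not the exact whisper.cpp code): with ggml-alloc v3, a failed backend buffer allocation is reported and propagated instead of being left to crash later.

```cpp
#include "ggml-alloc.h"
#include "ggml-backend.h"
#include <cstdio>

static bool example_alloc_model_tensors(struct ggml_context * ctx, ggml_backend_t backend) {
    ggml_backend_buffer_t buf = ggml_backend_alloc_ctx_tensors(ctx, backend);
    if (buf == NULL) {
        fprintf(stderr, "%s: failed to allocate backend buffer\n", __func__);
        return false; // propagate the failure to the caller
    }
    return true;
}
```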
a6fb6ab597 examples : added audio_ctx argument to main and server (#1857)
* added audio_ctx argument to main and server examples

* Better default value

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* better default value (again)

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
2024-02-12 09:19:07 +02:00
163e74b6c3 metal : option to embed MSL source into compiled binary (#1842)
* ggml : embed Metal library source (ggml-metal.metal) into binary

enable by setting WHISPER_EMBED_METAL_LIBRARY

* rename the build option

* rename the preprocessor directive

* generate the Metal library embedding assembly on the fly during the build process
2024-02-11 16:41:41 +02:00
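
An illustrative sketch of how an embedded Metal library can be consumed from the C/C++ side, assuming the generated assembly exports start/end symbols for the library data; the symbol names are hypothetical, and the macro reflects the option name as initially described in the commit (it was later renamed).

```cpp
#include <cstddef>

#ifdef WHISPER_EMBED_METAL_LIBRARY
// symbols provided by the generated embedding assembly (hypothetical names)
extern "C" const char ggml_metallib_start[];
extern "C" const char ggml_metallib_end[];

static size_t embedded_metallib_size(void) {
    return (size_t) (ggml_metallib_end - ggml_metallib_start);
}
#endif
```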
f273e66dc6 examples : initialize context params properly (#1852) 2024-02-11 16:39:12 +02:00
02b4c52c12 talk-llama : sync llama.cpp 2024-02-10 10:10:59 +02:00
518199c09e sync : ggml 2024-02-10 09:56:47 +02:00
8b17a2f776 src : relocate new backend sources 2024-02-10 09:55:47 +02:00