Georgi Gerganov | 114df388fe | talk-llama : increase context to 2048 | 2023-04-10 23:09:15 +03:00

Georgi Gerganov | ea36831459 | talk-llama : update to latest llama.cpp (improved performance) | 2023-04-10 22:59:13 +03:00

InconsolableCellist | 5e6e2187a3 | talk-llama : fixing usage message for talk-llama (#687) | 2023-03-30 00:10:20 +03:00
    "-ml" instead of "-mg" for specifying the llama file

Evan Jones | a47e812a54 | talk-llama : add alpaca support (#668) | 2023-03-29 23:01:14 +03:00

Georgi Gerganov | e5c197d8aa | talk-llama : add discussion link | 2023-03-28 10:11:34 +03:00

Georgi Gerganov | 7cd1d3bc34 | talk-llama : try to fix windows build .. | 2023-03-27 22:40:59 +03:00

Georgi Gerganov | 4a0deb8b1e | talk-llama : add new example + sync ggml from llama.cpp (#664) | 2023-03-27 21:00:32 +03:00
    * talk-llama : talk with LLaMA AI
    * talk.llama : disable EOS token
    * talk-llama : add README instructions
    * ggml : fix build in debug