Georgi Gerganov | 114df388fe | 2023-04-10 23:09:15 +03:00
talk-llama : increase context to 2048

InconsolableCellist | 5e6e2187a3 | 2023-03-30 00:10:20 +03:00
talk-llama : fixing usage message for talk-llama (#687)
"-ml" instead of "-mg" for specifying the llama file

Evan Jones | a47e812a54 | 2023-03-29 23:01:14 +03:00
talk-llama : add alpaca support (#668)

Georgi Gerganov | 4a0deb8b1e | 2023-03-27 21:00:32 +03:00
talk-llama : add new example + sync ggml from llama.cpp (#664)
* talk-llama : talk with LLaMA AI
* talk.llama : disable EOS token
* talk-llama : add README instructions
* ggml : fix build in debug