mirror of
https://github.com/ParisNeo/lollms-webui.git
synced 2024-12-18 20:17:50 +00:00
fixed some bugs
This commit is contained in:
parent
48d6c46da7
commit
bae059f334
@@ -14,7 +14,7 @@ That's why I am an advocate of opensource models. At least, we can probe them, w
AI is like Pandora's box: it has been opened and can't be closed. At least, I fail to find any possible way to contain it. If you can't contain it, then let people access these things and teach them to use their brains to discern truth from lies. People should be trained to think critically, not just be passive consumers.
The models we have today are not conscious; they are just function calls. They don't have lives, and they stop thinking the moment you stop talking to them. But with increasing context sizes, this may change. Recurrent transformers now promise contexts as large as 2 million tokens. Think of the context as the lifeline of a conversational AI: through interaction, the AI shapes its personality. With a small context size, it can't live long. But with a big one, and with new multimodal LLMs, AI can see, hear, talk and, most importantly, think.
At some point, we will need to forbid these things from starting to think on their own. But projects like Auto-GPT and LangChain are giving more control to the AI. The human is still in control, but less and less so. At least for now, bad things still come from humans, not from the AI itself.
@@ -371,13 +371,15 @@ class ModelProcess:
        while True:
            command = self.cancel_queue.get()
            if command is not None:
                self._cancel_generation()
                print("Stop generation received")

    def _check_clear_queue(self):
        while True:
            command = self.clear_queue_queue.get()
            if command is not None:
                self._clear_queue()
                print("Clear received")

    def _check_set_config_queue(self):
        while True:
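The watcher methods in this hunk all share one pattern: block on a queue and act on each non-None command. Below is a minimal standalone sketch of that pattern (names are hypothetical; a None shutdown sentinel is added here so the example can terminate, which the loops above deliberately do not do):

```python
import queue
import threading

def watch(q, on_command):
    # Block on the queue; treat None as a shutdown sentinel,
    # otherwise hand the command to the callback.
    while True:
        command = q.get()
        if command is None:
            break
        on_command(command)

# Usage: run the watcher on a background thread, as ModelProcess does.
q = queue.Queue()
received = []
t = threading.Thread(target=watch, args=(q, received.append), daemon=True)
t.start()
q.put("cancel")
q.put(None)  # shut the watcher down
t.join(timeout=5)
print(received)
```

Because `queue.Queue.get` blocks, the thread sleeps until a command arrives, so the loop costs nothing while idle.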
1
app.py
@@ -592,7 +592,6 @@ class Gpt4AllWebUI(GPT4AllAPI):
    def stop_gen(self):
        self.cancel_gen = True
        self.process.cancel_generation()
        print("Stop generation received")
        return jsonify({"status": "ok"})
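`stop_gen` works by raising a flag (`cancel_gen`) that the running generation loop is expected to poll. A minimal sketch of that cooperative-cancellation pattern, with hypothetical names (only the `{"status": "ok"}` payload comes from the route above):

```python
import threading

class GeneratorSketch:
    # Cooperative cancellation: the endpoint sets a flag,
    # and the generation loop checks it on every step.
    def __init__(self):
        self.cancel_gen = threading.Event()

    def stop_gen(self):
        self.cancel_gen.set()
        return {"status": "ok"}  # what the Flask route would jsonify

    def generate(self, steps):
        produced = []
        for i in range(steps):
            if self.cancel_gen.is_set():
                break  # stop producing as soon as cancellation is requested
            produced.append(i)
        return produced
```

The generation loop is never interrupted forcibly; it simply stops at the next step after the flag is set, which keeps shared state consistent.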
@@ -16,6 +16,10 @@ I have built this ui to explore new things and build on top of it. I am not buil
I thank all the contributors to this project and hope more people come and share their expertise. This help is vital to enhancing the tool for all mankind.
Before installing this tool, you need Python 3.10 or higher as well as Git. Make sure the Python installation is on your PATH and that you can call it from a terminal. To verify your Python version, type python --version. If you get an error or the version is lower than 3.10, please install a newer version and try again. If you use conda, you can create a conda virtual environment, install the contents of requirements.txt, and just run the application with python app.py. From here on, we assume that you have a regular Python installation and just want to use the tool.
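The Python 3.10 requirement can also be checked from code rather than by eyeballing python --version; a small sketch (the helper name is made up):

```python
import sys

def python_version_ok(minimum=(3, 10)):
    # Compare the running interpreter's (major, minor) to the requirement.
    return sys.version_info[:2] >= minimum

if not python_version_ok():
    print("Please install Python 3.10 or higher.")
```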
Now let's cut to the chase. Let's start by installing the tool.
First, go to the GitHub repository page at github.com/ParisNeo/gpt4all-ui and press the latest release button. Depending on your platform, download webui.bat for Windows or webui.sh for Linux.
@@ -35,7 +39,7 @@ To do this, go to settings. Then open the models zoo tab.
You need to select a binding from the list, for example llama-cpp-official. The first time you select a binding, you have to wait while it is installed. You can follow the progress in the console.
Once the installation is done, install a model by pressing Install and waiting for it to finish.
This may take some time.
Once the model is installed, you can select it and press Apply modifications.
Notice that applying modifications does not save the configuration, so you need to press the save button and confirm.
File diff suppressed because one or more lines are too long
2
web/dist/index.html
vendored
@@ -6,7 +6,7 @@
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>GPT4All - WEBUI</title>
    <script type="module" crossorigin src="/assets/index-2548679d.js"></script>
    <script type="module" crossorigin src="/assets/index-aeb494ac.js"></script>
    <link rel="stylesheet" href="/assets/index-2bd2bbf7.css">
  </head>
  <body>
@@ -555,7 +555,12 @@ export default {
            if (response.status === 'progress') {
              console.log(`Progress = ${response.progress}`);
              model_object.progress = response.progress
              if(model_object.progress==100){
                this.models[index].isInstalled = true;
                this.showProgress = false;
              }
            } else if (response.status === 'succeeded') {
              console.log("Received succeeded")
              socket.off('install_progress', progressListener);
              console.log("Installed successfully")
              // Update the isInstalled property of the corresponding model
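The branch above implies a simple message contract for the install_progress socket event: `{status: 'progress', progress: 0..100}` updates followed by a final `{status: 'succeeded'}`. A server-agnostic Python sketch of the same update logic (field names taken from the snippet, everything else hypothetical):

```python
def apply_install_message(model, msg):
    # Mirror the client-side branch: progress updates mark the model
    # installed at 100%, and 'succeeded' marks it installed outright.
    if msg["status"] == "progress":
        model["progress"] = msg["progress"]
        if msg["progress"] == 100:
            model["isInstalled"] = True
    elif msg["status"] == "succeeded":
        model["isInstalled"] = True
    return model

# Usage: replay a short install sequence against a model record.
model = {"progress": 0, "isInstalled": False}
apply_install_message(model, {"status": "progress", "progress": 50})
apply_install_message(model, {"status": "succeeded"})
print(model)  # {'progress': 50, 'isInstalled': True}
```

Handling 'succeeded' separately matters because, as in the Vue code, the final message carries no progress field and the listener is detached once it arrives.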