This commit is contained in:
Saifeddine ALOUI 2024-04-11 00:30:39 +02:00
parent 116cfa4a5e
commit 6a4f3e0127
14 changed files with 332 additions and 36 deletions

Binary file not shown.


@@ -0,0 +1,130 @@
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{enumitem}
\title{lollms-webui and Settings Documentation}
\author{ParisNeo}
\date{}
\begin{document}
\maketitle
\section{lollms-webui}
\subsection{Discussion Page}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{discussion_page.png}
\caption{Discussion Page illustration}
\end{figure}
This is the first page of the lollms webui interface. This interface is designed to resemble a discussion space where users can interact with AI. We can break down this page into three main zones:
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{Left side panel:} This panel contains a list of all the discussions currently stored in the database, along with buttons to manage these discussions. Users can create a new discussion, select multiple discussions to export or delete, reset the database, change the current database, import additional discussions, and search through existing discussions.
\item \textbf{Message display:} This zone displays all the messages of a selected discussion. Each message contains the name of the user or AI, the date and time of posting, the message text, as well as information about the AI used to respond (if applicable). Users can edit, copy, delete, rate, and listen to any message.
\item \textbf{Chat bar:} This section is located at the bottom of the page and contains various buttons for interacting with the AI. Users can select the AI model, choose a personality (if available), enter text to send, use additional features such as sending documents, photos, or URLs, and send messages as a user or AI.
\end{itemize}
\subsection{Playground Page}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{playground_page.png}
\caption{Playground Page illustration}
\end{figure}
This is the second page of the lollms interface. Here, you can freely experiment with LLMs (Large Language Models) and improve your text in various ways. You will find a text input field in the center of the page. Simply write your text in this field, then click the pen button to complete it. The following buttons are available at the top:
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{Completion button}
\item \textbf{Completion by keyword (@<generation\_placeholder>) button}
\item \textbf{Show tokens button}
\item \textbf{Microphone} for dictating your text
\item \textbf{Speaker} for reading the generated text
\item \textbf{Upload a voice} for uploading a voice sample to be used for speech synthesis
\item \textbf{Audio to audio} for audio-to-audio tasks such as real-time translation
\item \textbf{Start recording} that starts recording and transcribing the speech into text
\item \textbf{Recorder} that generates an audio file from the entered text
\end{itemize}
The interface also includes:
\begin{itemize}[noitemsep,topsep=0pt]
\item Two tabs for exporting and importing text
\item \textbf{Configuration panel} on the right where you can select the model to use and presets (pre-configured texts for specific tasks)
\item \textbf{Import} and \textbf{Export} buttons
\end{itemize}
You can improve your text using specific keywords:
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{@<Language:all\_programming\_language\_options>@} for displaying a language selection interface
\item \textbf{@<put your code here>@} for asking the user to enter their code in a text field
\item \textbf{@<demande>@} for asking users questions and gathering additional information
\end{itemize}
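As an illustration, a preset for code translation could combine these keywords in a single prompt template. The example below is invented for this documentation and is not one of the built-in presets:
\begin{verbatim}
Translate the following code to @<Language:all_programming_language_options>@:
@<put your code here>@
Extra instructions: @<demande>@
@<generation_placeholder>
\end{verbatim}
When the preset is run, each placeholder is turned into the corresponding input field or selection box before the text is sent to the model.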
Finally, you can adjust the generation parameters such as temperature, top\_p, top\_k, etc., and set a seed for reproducibility.
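To make these parameters concrete, here is a short, purely illustrative Python snippet describing what each one controls. The names mirror the sliders in the interface; the dictionary itself is not part of the lollms API:
\begin{verbatim}
# Illustrative sampling parameters (not actual lollms code)
generation_params = {
    "temperature": 0.7,  # higher values give more creative, less predictable text
    "top_p": 0.95,       # nucleus sampling: sample from the smallest set of tokens
                         # whose cumulative probability reaches 95%
    "top_k": 50,         # restrict sampling to the 50 most likely tokens
    "seed": 42,          # fixing the seed makes a generation reproducible
}
\end{verbatim}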
\section{Settings}
\subsection{Settings Page}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{settings_page.png}
\caption{Settings Page illustration}
\end{figure}
This is the tool setup page, which consists of several sections:
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{System status}
\item \textbf{Main configurations}
\item \textbf{Server configuration}
\item \textbf{Bindings zoo}
\item \textbf{Models zoo}
\item \textbf{Personalities zoo}
\end{itemize}
\subsection{System status}
First, you will find the system status section, which displays available resources such as memory, disk space, CPU load, and GPU memory.
\subsection{Main configurations}
This section is divided into several subsections. The general information subsection lets you specify the type of hardware used (CPU, GPU, etc.), the server address, the current database, whether the interface should open automatically (for example, if you prefer to use the tool only as a server without a user interface), and automatic options for saving settings, updating the tool, and generating discussion titles.
The next subsection concerns your personal information, which can be added for a more personalized experience. You can upload a photo as an avatar, provide information about your preferences so the AI can better meet your expectations, and set a context management parameter: the minimum number of tokens reserved for the AI's response.
The following subsection concerns data vectorization, which controls how documents and web pages sent to the AI are vectorized, indexed, and stored. You can choose the vectorization method (e.g., tf-idf or embeddings), as well as the number of text chunks to inject into the discussion (top k) and their size (in tokens).
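To give an idea of what this does behind the scenes, here is a minimal, self-contained sketch of tf-idf chunk retrieval; the actual lollms implementation, its chunking rules, and its storage format may differ:
\begin{verbatim}
# Minimal sketch of tf-idf based chunk retrieval (illustrative only)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_chunks(document, query, chunk_size=512, top_k=3):
    # split the document into fixed-size pieces (real chunking works on tokens)
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    vectorizer = TfidfVectorizer()
    chunk_vectors = vectorizer.fit_transform(chunks)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, chunk_vectors)[0]
    # keep the top_k most relevant chunks to inject into the discussion context
    best = scores.argsort()[::-1][:top_k]
    return [chunks[i] for i in best]
\end{verbatim}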
You will then find a subsection dedicated to LaTeX, where you can specify the path to your pdflatex executable. This allows personalities that generate LaTeX to compile their output into PDF.
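For reference, compiling a generated file essentially amounts to invoking pdflatex on it, as in this hedged sketch; the file name is hypothetical and the exact invocation used by lollms may differ:
\begin{verbatim}
# Illustrative only: compile a generated .tex file into a PDF
import subprocess
subprocess.run(
    ["pdflatex", "-interaction=nonstopmode", "generated_report.tex"],
    check=True,
)
\end{verbatim}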
\subsection{Boost}
The boost subsection allows you to customize the boost text used to improve the AI's responses. You can, for example, tell the AI that it will be rewarded for answering correctly or penalized for returning false information that could cause harm. You can also force the language of the AI's responses and enable fun mode to make the AI more playful.
Finally, the last subsection lets you control Audio IN and Audio OUT by selecting the AI's language and voice.
\subsection{Server configuration}
This section is dedicated to advanced server configurations, offering options for installing different types of servers. These services can be used by the AI, such as Stable Diffusion for generating high-quality images or XTTS for producing high-quality speech.
You can also install text generation servers, such as Ollama or Petals, which let you offload generation to other machines for fast and efficient text generation.
\subsection{Bindings zoo}
This third section is dedicated to bindings, which are modules executing the generation models. Each module can be installed and configured independently by clicking on the \textbf{Install} and \textbf{Settings} buttons. The configuration settings of each binding may vary, but some common settings include context size or server address. These settings are used when the binding calls a remote service, such as Ollama, OpenAI, Mistral AI, Google, Gemini, or Open Router.
In some cases, an API key is required to access the service. Even with the Ollama binding, depending on the server used, you may need to provide a key, particularly if an authentication proxy is in place.
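As a rough illustration of what such a remote binding does with the key, the sketch below sends a prompt to a hosted, OpenAI-compatible endpoint. The URL, model name, and environment variable are placeholders, not the actual lollms code:
\begin{verbatim}
# Illustrative only: calling a remote, OpenAI-compatible service with an API key
import os
import requests

response = requests.post(
    "https://api.example.com/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ['SERVICE_API_KEY']}"},
    json={
        "model": "some-remote-model",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
\end{verbatim}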
\subsection{Models zoo}
This section allows you to select pre-trained models for text generation. Available models vary based on your needs, ranging from simple to complex. For example, if you need to translate text from one language to another, you can choose a specialized translation model. If you want to generate creative text for a novel or song, you should choose an advanced language model.
\subsubsection{Add models for binding}
If the pre-trained models do not meet your specific needs, you can add your custom models using this section. Simply provide a link to your local or cloud-based model. This allows you to use models created by other developers or train your own models based on your unique needs.
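For instance, a direct download link to a quantized model file hosted on a model hub could be pasted in; the URL below is purely illustrative:
\begin{verbatim}
https://huggingface.co/SomeUser/SomeModel-GGUF/resolve/main/somemodel.Q4_K_M.gguf
\end{verbatim}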
\subsection{Personalities zoo}
This section is dedicated to personality selection. Personalities are simulation layers that condition the AI's behavior and direct its responses. There are over 500 personalities available in 49 categories (as of version 9.0 of Lollms). To install and use a personality, click the \textbf{Mount} button in the menu located in the personality tab. When you select a category, the personality cards appear, and you can read each personality's description, author, and other details. You can then mount a personality to add it to the list of personalities visible in the discussion interface. You can even make multiple personalities communicate with each other and collaborate.
There are two types of personalities: unscripted ones, composed only of text describing how they should behave, and scripted ones, containing Python code that uses logic to perform tasks and offer choices to the AI. For example, Artbot has a multi-step image generation process where the AI makes choices and performs operations.
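As a purely conceptual sketch (this is not the actual lollms personality API), a scripted personality can be thought of as a small workflow that drives the model through several steps:
\begin{verbatim}
# Conceptual sketch only; the real scripted-personality API differs
def run_image_workflow(llm, user_request):
    # step 1: let the model pick an artistic style
    style = llm.generate("Choose a painting style for: " + user_request)
    # step 2: build a detailed image prompt from that choice
    prompt = llm.generate("Write an image prompt in the style '" + style
                          + "' for: " + user_request)
    # step 3: hand the prompt to the image generation backend
    return generate_image(prompt)  # hypothetical helper
\end{verbatim}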
\end{document}


@@ -0,0 +1,16 @@
Hi there! Today, I'm thrilled to share some exciting news about a major update to our open router binding. If you're interested in AI, robotics, or just love exploring new technologies, then you're in the right place!
As many of you may know, working with AI models can sometimes be challenging, especially if you don't have access to high-end GPUs or the budget for paid AI services. We believe that everyone should have the opportunity to experiment and benefit from AI, regardless of their resources, which led us to develop lollms and our open router.
In a recent update, we've introduced an upgraded open router binding that significantly expands the model zoo, now offering a total of 117 models! This upgrade includes popular options like Claude, GPT-4, DBRX, and Command-R, along with many new and exciting models to try.
To make AI even more accessible, we've included eight free models in the update, which can be easily found by typing "free" into the models zoo search box. These free models serve as an excellent starting point for those new to AI, while the paid models cater to users seeking more advanced capabilities.
The lollms open router's primary strength lies in its versatility and accessibility. Our system supports a wide range of models and offers free options, breaking down barriers for users with varying needs and resources.
Here's a quick overview of the key benefits:
Expanded Model Zoo: With 117 models available, you'll have no shortage of options to explore and utilize.
Accessible AI: The eight free models allow users with limited resources to work with AI, fostering innovation and learning.
Easy Discovery: Typing "free" into the search box helps you quickly find the free models, saving you time and effort.
Versatility: The lollms open router supports a wide variety of models, enabling you to find the perfect fit for your needs.
I hope you're as excited about this update as I am! By continuously expanding the model zoo and offering free options, we're committed to making AI more approachable for everyone. To stay connected and learn more about lollms, don't forget to follow me on X @ParisNeo_AI, join our Discord channel, subscribe to our subreddit (r/lollms), and follow our Instagram. All links are in the description.
Thank you for joining me today, and I look forward to sharing more updates and insights with you in the future! See ya!

@@ -1 +1 @@
Subproject commit 1c4bc70a45c1767d1e7acaf27ede00822a3fe52e
Subproject commit ad1c1e0bb5eb9080698ed474379bbb87139988a4


@@ -0,0 +1,150 @@
@echo off
@rem This script will install miniconda and git with all dependencies for this project
@rem This enables a user to install this project without manually installing conda and git.
echo " ___ ___ ___ ___ ___ ___ "
echo " /\__\ /\ \ /\__\ /\__\ /\__\ /\ \ "
echo " /:/ / /::\ \ /:/ / /:/ / /::| | /::\ \ "
echo " /:/ / /:/\:\ \ /:/ / /:/ / /:|:| | /:/\ \ \ "
echo " /:/ / /:/ \:\ \ /:/ / /:/ / /:/|:|__|__ _\:\~\ \ \ "
echo " /:/__/ /:/__/ \:\__\ /:/__/ /:/__/ /:/ |::::\__\ /\ \:\ \ \__\ "
echo " \:\ \ \:\ \ /:/ / \:\ \ \:\ \ \/__/~~/:/ / \:\ \:\ \/__/ "
echo " \:\ \ \:\ /:/ / \:\ \ \:\ \ /:/ / \:\ \:\__\ "
echo " \:\ \ \:\/:/ / \:\ \ \:\ \ /:/ / \:\/:/ / "
echo " \:\__\ \::/ / \:\__\ \:\__\ /:/ / \::/ / "
echo " \/__/ \/__/ \/__/ \/__/ \/__/ \/__/ "
echo V9.5
echo -----------------
echo By ParisNeo
echo -----------------
@rem workaround for broken Windows installs
set PATH=%PATH%;%SystemRoot%\system32
cd /D "%~dp0"
echo "%cd%"| findstr /C:" " >nul && call :PrintBigMessage "This script relies on Miniconda which can not be silently installed under a path with spaces. Please put it in a path without spaces and try again" && goto failed
call :PrintBigMessage "WARNING: This script relies on Miniconda which will fail to install if the path is too long."
set "SPCHARMESSAGE="WARNING: Special characters were detected in the installation path!" " This can cause the installation to fail!""
echo "%CD%"| findstr /R /C:"[!#\$%&()\*+,;<=>?@\[\]\^`{|}~]" >nul && (
call :PrintBigMessage %SPCHARMESSAGE%
)
set SPCHARMESSAGE=
pause
cls
@rem create the temp folder used by the installer (TEMP/TMP are redirected below)
md installer_files\temp 2>nul
@rem better isolation for virtual environment
SET "CONDA_SHLVL="
SET PYTHONNOUSERSITE=1
SET "PYTHONPATH="
SET "PYTHONHOME="
SET "TEMP=%cd%\installer_files\temp"
SET "TMP=%cd%\installer_files\temp"
set MINICONDA_DIR=%cd%\installer_files\miniconda3
set INSTALL_ENV_DIR=%cd%\installer_files\lollms_env
set MINICONDA_DOWNLOAD_URL=https://repo.anaconda.com/miniconda/Miniconda3-latest-Windows-x86_64.exe
set REPO_URL=https://github.com/ParisNeo/lollms-webui.git
set "PACKAGES_TO_INSTALL=python=3.11 git pip"
if not exist "%MINICONDA_DIR%\Scripts\conda.exe" (
@rem download miniconda
echo Downloading Miniconda installer from %MINICONDA_DOWNLOAD_URL%
call curl -LO "%MINICONDA_DOWNLOAD_URL%"
@rem install miniconda
echo. && echo Installing Miniconda To "%MINICONDA_DIR%" && echo Please Wait... && echo.
start "" /W /D "%cd%" "Miniconda3-latest-Windows-x86_64.exe" /InstallationType=JustMe /NoShortcuts=1 /AddToPath=0 /RegisterPython=0 /NoRegistry=1 /S /D=%MINICONDA_DIR% || ( echo. && echo Miniconda installer not found. && goto failed )
del /q "Miniconda3-latest-Windows-x86_64.exe"
if not exist "%MINICONDA_DIR%\Scripts\activate.bat" ( echo. && echo Miniconda install failed. && goto end )
)
@rem activate miniconda
call "%MINICONDA_DIR%\Scripts\activate.bat" || ( echo Miniconda hook not found. && goto end )
@rem create the installer env
if not exist "%INSTALL_ENV_DIR%" (
echo Packages to install: %PACKAGES_TO_INSTALL%
call conda create --no-shortcuts -y -k -p "%INSTALL_ENV_DIR%" %CHANNEL% %PACKAGES_TO_INSTALL% || ( echo. && echo Conda environment creation failed. && goto end )
)
@rem check if conda environment was actually created
if not exist "%INSTALL_ENV_DIR%\python.exe" ( echo. && echo Conda environment is empty. && goto end )
@rem activate installer env
call conda activate "%INSTALL_ENV_DIR%" || ( echo. && echo Conda environment activation failed. && goto end )
@rem install conda library
call conda install conda -y
@rem clone the repository
if exist lollms-webui\ (
cd lollms-webui
git pull
git submodule update --init --recursive
cd
cd lollms_core
pip install -e .
cd ..
cd utilities\safe_store
pip install -e .
cd ..\..
) else (
git clone --depth 1 --recurse-submodules https://github.com/ParisNeo/lollms-webui.git
git submodule update --init --recursive
cd lollms-webui\lollms_core
pip install -e .
cd ..
cd utilities\safe_store
pip install -e .
cd ..\..
)
pip install -r requirements.txt
@rem create launcher
if exist ..\win_run.bat (
echo Win run found
) else (
copy scripts\windows\win_run.bat ..\
)
if exist ..\win_conda_session.bat (
echo win conda session script found
) else (
copy scripts\windows\win_conda_session.bat ..\
)
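@rem run the open_router binding initialization script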
call python zoos/bindings_zoo/open_router/__init__.py
setlocal enabledelayedexpansion
endlocal
goto end
:PrintBigMessage
echo. && echo.
echo *******************************************************************
for %%M in (%*) do echo * %%~M
echo *******************************************************************
echo. && echo.
exit /b
goto end
:failed
echo Install failed
goto endend
:end
echo Installation complete.
:endend
pause

@@ -1 +1 @@
Subproject commit 36215afcf65d6999b016925aec4f6d2b87a49c24
Subproject commit 74d31ac77992bb992968c5414ce8273a6f2ca16e

web/dist/assets/index-994f0cca.css vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

web/dist/index.html vendored

@@ -6,8 +6,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LoLLMS WebUI - Welcome</title>
<script type="module" crossorigin src="/assets/index-e963f26c.js"></script>
<link rel="stylesheet" href="/assets/index-fbc70e53.css">
<script type="module" crossorigin src="/assets/index-9f8988b9.js"></script>
<link rel="stylesheet" href="/assets/index-994f0cca.css">
</head>
<body>
<div id="app"></div>


@@ -1,7 +1,7 @@
<template>
<div class="container bg-bg-light dark:bg-bg-dark shadow-lg overflow-y-auto scrollbar-thin scrollbar-track-bg-light-tone scrollbar-thumb-bg-light-tone-panel hover:scrollbar-thumb-primary dark:scrollbar-track-bg-dark-tone dark:scrollbar-thumb-bg-dark-tone-panel dark:hover:scrollbar-thumb-primary active:scrollbar-thumb-secondary">
<div class="container flex flex-row m-2">
<div class="flex-grow m-2">
<div class="flex-grow max-w-[900px] m-2">
<div class="flex gap-3 flex-1 items-center flex-grow flex-row m-2 p-2 border border-blue-300 rounded-md border-2 border-blue-300 m-2 p-4">
<button v-show="!generating" id="generate-button" title="Generate from current cursor position" @click="generate" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="pen-tool"></i></button>
<button v-show="!generating" id="generate-next-button" title="Generate from next place holder" @click="generate_in_placeholder" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="archive"></i></button>
@@ -148,7 +148,7 @@
</div>
</div>
</div>
<Card title="settings" class="slider-container ml-0 mr-0 max-width" :isHorizontal="false" :disableHoverAnimation="true" :disableFocus="true">
<Card title="settings" class="slider-container ml-0 mr-0" :isHorizontal="false" :disableHoverAnimation="true" :disableFocus="true">
<Card title="Model" class="slider-container ml-0 mr-0" :is_subcard="true" :isHorizontal="false" :disableHoverAnimation="true" :disableFocus="true">
<select v-model="this.$store.state.selectedModel" @change="setModel" class="bg-white dark:bg-black m-0 border-2 rounded-md shadow-sm w-full">
<option v-for="model in models" :key="model" :value="model">
@@ -613,23 +613,23 @@ export default {
let ss =this.$refs.mdTextarea.selectionStart
let se =this.$refs.mdTextarea.selectionEnd
if(ss==se){
if(speechSynthesis==0 || this.message.content[ss-1]=="\n"){
this.message.content = this.message.content.slice(0, ss) + "```"+bloc_name+"\n\n```\n" + this.message.content.slice(ss)
if(speechSynthesis==0 || this.text[ss-1]=="\n"){
this.text = this.text.slice(0, ss) + "```"+bloc_name+"\n\n```\n" + this.text.slice(ss)
ss = ss+4+bloc_name.length
}
else{
this.message.content = this.message.content.slice(0, ss) + "\n```"+bloc_name+"\n\n```\n" + this.message.content.slice(ss)
this.text = this.text.slice(0, ss) + "\n```"+bloc_name+"\n\n```\n" + this.text.slice(ss)
ss = ss+3+bloc_name.length
}
}
else{
if(speechSynthesis==0 || this.message.content[ss-1]=="\n"){
this.message.content = this.message.content.slice(0, ss) + "```"+bloc_name+"\n"+this.message.content.slice(ss, se)+"\n```\n" + this.message.content.slice(se)
if(speechSynthesis==0 || this.text[ss-1]=="\n"){
this.text = this.text.slice(0, ss) + "```"+bloc_name+"\n"+this.text.slice(ss, se)+"\n```\n" + this.text.slice(se)
ss = ss+4+bloc_name.length
}
else{
this.message.content = this.message.content.slice(0, ss) + "\n```"+bloc_name+"\n"+this.message.content.slice(ss, se)+"\n```\n" + this.message.content.slice(se)
p = p+3+bloc_name.length
this.text = this.text.slice(0, ss) + "\n```"+bloc_name+"\n"+this.text.slice(ss, se)+"\n```\n" + this.text.slice(se)
ss = ss+3+bloc_name.length
}
}


@@ -433,7 +433,7 @@
required
v-model="configFile.user_description"
@change="settingsChanged=true"
class="w-full w-full mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
class="min-h-[500px] w-full mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
></textarea>
</td>

@@ -1 +1 @@
Subproject commit 9d3e685c9ef092a16cd0591c87d31bc5b190518b
Subproject commit 635652afc335c72fca18b5a3ffc0972f2ba057c4

@@ -1 +1 @@
Subproject commit d58620a9484f34867c29ac221c36ba2aa54e2ba2
Subproject commit b186083ec6b54e88c8c49e96e79d4706494bbcab