This commit is contained in:
Saifeddine ALOUI 2024-02-28 22:56:34 +01:00
parent be34aaebbf
commit e7cdefcf94
23 changed files with 377 additions and 316 deletions

View File

@ -0,0 +1,28 @@
Hi there! Today, we're diving into the future of artificial intelligence integration with an exciting tool called LOLLMS, the Lord of Large Language and Multimodal Systems. Whether you're a developer, a content creator, or just curious about the possibilities of AI, this video will give you a comprehensive look at a platform that's shaping the way we interact with AI systems of all kinds. So, let's get started!
As you see here, we begin with the core of LOLLMS, a clean slate ready to be filled with endless possibilities. It's the foundation upon which all the magic happens.
If you have used lollms, you have probably come across the term bindings. Bindings are Python modules that serve as the link between lollms and the models themselves, whether those models are reached through web queries or Python libraries. This is what lets lollms tap into a diverse array of models, regardless of their form or location, and connect seamlessly with both local and remote services. Because all bindings follow the same patterns and expose consistent methods, lollms can remain model agnostic while maximizing its capabilities.
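To make that concrete, here is a rough sketch of what a binding looks like conceptually. The class name and method signatures below are illustrative assumptions, not the exact lollms binding API; the point is simply that every binding exposes the same small surface:

# Illustrative sketch only: names and signatures are simplified assumptions,
# not the real lollms binding base class.
class ExampleBinding:
    def __init__(self, config: dict):
        # config could hold an API key for a remote service or a local model path
        self.config = config

    def tokenize(self, text: str) -> list:
        # convert text into model tokens (backend specific)
        raise NotImplementedError

    def detokenize(self, tokens: list) -> str:
        # convert tokens back into text
        raise NotImplementedError

    def generate(self, prompt: str, n_predict: int = 128, callback=None) -> str:
        # a remote binding would issue a web query here,
        # a local binding would run inference through a Python library
        raise NotImplementedError

Because every binding exposes this same surface, the rest of lollms never needs to know whether a reply came from a local model file or a hosted API.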
Alright, let's talk about the next piece of the puzzle: services. These are additional servers created by third-party developers and tailored for use with lollms. What's great is that all of these services are open source and come with permissive licenses. They offer a range of functionalities, from LLM services like ollama, vllm, and text generation, to innovative options like my new petals server. There are even services dedicated to image generation, such as AUTOMATIC1111's stable diffusion webui, and to text-to-speech, such as daswer123's XTTS server. The best part? Users can install these services with a single click and customize their settings directly within lollms.
Moving on to the next exciting topic: generation engines. These engines leverage the bindings to unlock lollms' potential for generating text, images, and audio. Beyond orchestrating intelligent interactions with the bindings, they also support executing code in various programming languages through a unified library of execution engines, which lets the AI create, run, and test code efficiently. In short, the generation engines are what allow lollms to turn the raw power of the bindings into a wide range of engaging and diverse outputs.
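As a rough idea of how that unified execution library routes code, here is a simplified dispatcher. The execute_* helper names match functions that appear in this commit's diffs further down, but they are passed in as a mapping so the sketch stays self-contained; the real endpoint also handles latex and bash, which need extra context such as a client object:

# Simplified sketch of language dispatch; the real execute_code endpoint in this
# commit does the same job with an if/elif chain plus latex and bash handling.
def run_code(language: str, code: str, engines: dict) -> dict:
    # engines maps a canonical language name to its execution function, e.g.
    # {"javascript": execute_javascript, "html": execute_html,
    #  "mermaid": execute_mermaid, "graphviz": execute_graphviz}
    aliases = {"html5": "html", "svg": "html", "dot": "graphviz"}
    run = engines.get(aliases.get(language, language))
    if run is None:
        return {"status": False, "error": "Unsupported language", "execution_time": 0}
    return run(code)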
The personalities engine is where LOLLMS truly shines. It allows the creation of distinct agents with unique characteristics, whether through simple text conditioning or custom Python code, enabling a multitude of applications. The engine ships with many useful helper methods: a yes/no method that lets the AI ask itself yes-or-no questions about the prompt, a multichoice Q&A method that lets it pick from pre-crafted choices, code extraction tools that ask the model to build code and then extract it for integration into the current code, direct access to RAG and internet search, workflow-style generation that lets a developer automate data manipulation or even coding tasks, and function calls that let the AI interact with the PC.
A state machine interface rounds out the toolkit, so developers can fully leverage lollms when crafting dynamic and interactive content. Personalities are carefully categorized, spanning everything from fun tools and games to professional personas capable of handling a significant workload, freeing up your time for more engaging pursuits. With over 500 personas developed over the past year and new ones created weekly, the potential of lollms personalities is practically limitless.
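To make these helpers a bit more tangible, here is a small hypothetical example. The method names echo the ones described above (yes/no self-questioning, multichoice Q&A), but the exact lollms signatures are an assumption, so read this as a sketch rather than the real personality API:

# Hypothetical personality logic; `helpers` stands in for the personality
# engine's toolbox, and the method signatures are assumptions, not the real API.
class TranslatorPersona:
    def __init__(self, helpers):
        self.h = helpers

    def run(self, prompt: str) -> str:
        # let the AI ask itself a yes/no question about the prompt
        if self.h.yes_no("Is the user asking for a translation?", context=prompt):
            # pick from pre-crafted choices instead of free-form output
            target = self.h.multichoice_qna(
                "Which target language is requested?",
                ["english", "french", "arabic", "other"],
                context=prompt,
            )
            return self.h.generate(f"Translate the following text to {target}:\n{prompt}")
        # fall back to plain generation when the question does not apply
        return self.h.generate(prompt)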
Let's now explore the dynamic capabilities of the RAG engine and the Extensions engine within lollms. These components add both depth and extensibility, transforming lollms from a mere tool into a thriving ecosystem. The RAG engine, short for Retrieval Augmented Generation, lets lollms analyze your documents or websites and carry out tasks with that extra knowledge. It can even cite its sources, which boosts confidence in its responses and helps mitigate hallucinations. The Extensions engine further enriches lollms' functionality, offering a platform for continuous growth and innovation. Together, these engines elevate lollms' capabilities and contribute to its evolution as a versatile and reliable resource.
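If you are wondering what retrieval augmented generation actually means in practice, here is a deliberately generic illustration. It is not the lollms implementation, just the core idea of retrieving relevant chunks and citing their sources:

# Generic RAG sketch: chunk documents, score chunks against the question,
# and prepend the best matches (with their sources) to the prompt.
def retrieve(question: str, documents: dict, top_k: int = 3):
    """documents maps a source name to its text; scoring here is naive word
    overlap, where a real system would use embeddings and a vector database."""
    q_words = set(question.lower().split())
    scored = []
    for source, text in documents.items():
        for chunk in text.split("\n\n"):  # crude paragraph-level chunking
            overlap = len(q_words & set(chunk.lower().split()))
            scored.append((overlap, source, chunk))
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:top_k]

def build_rag_prompt(question: str, documents: dict) -> str:
    context = "\n".join(
        f"[{source}] {chunk}" for _, source, chunk in retrieve(question, documents)
    )
    # the model is asked to answer from the retrieved context and cite the
    # [source] tags, which is what enables source display and fewer hallucinations
    return (
        "Use the documents below to answer the question, citing sources.\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )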
Let's now shine a spotlight on the vibrant world of personalities within the platform. These personalities breathe life into the AI, offering a personalized and engaging interaction experience, and each one is tailored to a different application, making interaction with the AI not only functional but also enjoyable. Some personalities are built by me and others by third parties, and users can also create their own with the personality maker tool, which can craft a full persona from a simple prompt or let you manually adjust existing personas to suit your needs. All 500 personas available in the zoo are free to use, with the only requirement being to keep the authorship credit. Users can modify these personas and even share them with others, fostering a collaborative and creative community.
Now, let's turn our attention to the heart of the operation: the LOLLMS Elf server. This server, with its RESTful interface powered by FastAPI and a socket.io connection for the WebUI, acts as the central hub for all communication between the different components. The Elf server is versatile: it can be configured to serve the webui, or to run as a headless text generation server. In the headless configuration, it can serve a variety of applications, including other lollms systems as well as clients built for OpenAI, MistralAI, Gemini, Ollama, and vLLM compatible APIs, letting them generate text. The generation can be raw, or it can be routed through a personality to improve the quality and relevance of the output.
Now, let's explore how the Elf server and bindings work together to make lollms a versatile switch, allowing any client to use another service even if the two were never designed to be compatible. For instance, imagine you have a client designed for the OpenAI interface, but you want to use Google Gemini instead. No problem! Simply select the Google Gemini binding and point your OpenAI-compatible client at lollms. This works in every direction, so clients that only speak to API services can also be used with local models. With lollms, the possibilities are endless: it breaks down compatibility barriers and unlocks new potential for all kinds of clients and services.
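Here is a minimal sketch of that trick from the client side, using the official openai Python package. The base URL, port, and model name are assumptions that depend on how your Elf server and binding are configured:

# Point an OpenAI-style client at the lollms Elf server instead of api.openai.com.
# Host, port, and any /v1 prefix are assumptions: adjust them to your own setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9600/v1",   # your lollms Elf server
    api_key="anything",                    # a local server typically ignores this
)

response = client.chat.completions.create(
    model="whatever-your-binding-serves",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello from lollms!"}],
)
print(response.choices[0].message.content)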
Now, let's talk about the development of LOLLMS. It's primarily a one-man show, with occasional support from the community. I work tirelessly on it during my nights, weekends, and vacations to bring you the best possible tool. However, I kindly ask for your patience when it comes to bugs or issues, especially with bindings that frequently change and require constant updates. As an open-source project, LOLLMS welcomes any help in maintaining and improving it. Your assistance, particularly in keeping track of the evolving bindings, would be greatly appreciated. Together, we can make LOLLMS even better!
And that's a wrap, folks! You've just been introduced to the amazing world of LOLLMS and its powerful components. But remember, this is just the tip of the iceberg. There's so much more to explore and discover with this fantastic tool. So, stay tuned for more in-depth tutorials and guides on how to maximize your experience with LOLLMS. Together, we'll unlock its full potential and create something truly extraordinary. Until next time, happy creating!
Thanks for watching, and don't forget to hit that subscribe button for more content on the cutting edge of technology. Drop a like if you're excited about the future of AI, and share your thoughts in the comments below. Until next time, keep innovating! See ya!

View File

@ -95,28 +95,28 @@ async def execute_code(request: CodeRequest):
if language=="javascript":
ASCIIColors.info("Executing javascript code:")
ASCIIColors.yellow(code)
return execute_javascript(code, discussion_id, message_id)
return execute_javascript(code)
if language in ["html","html5","svg"]:
ASCIIColors.info("Executing javascript code:")
ASCIIColors.yellow(code)
return execute_html(code, discussion_id, message_id)
return execute_html(code)
elif language=="latex":
ASCIIColors.info("Executing latex code:")
ASCIIColors.yellow(code)
return execute_latex(code, discussion_id, message_id)
return execute_latex(code, client, message_id)
elif language in ["bash","shell","cmd","powershell"]:
ASCIIColors.info("Executing shell code:")
ASCIIColors.yellow(code)
return execute_bash(code, discussion_id, message_id)
return execute_bash(code, client)
elif language in ["mermaid"]:
ASCIIColors.info("Executing mermaid code:")
ASCIIColors.yellow(code)
return execute_mermaid(code, discussion_id, message_id)
return execute_mermaid(code)
elif language in ["graphviz","dot"]:
ASCIIColors.info("Executing graphviz code:")
ASCIIColors.yellow(code)
return execute_graphviz(code, discussion_id, message_id)
return execute_graphviz(code)
return {"status": False, "error": "Unsupported language", "execution_time": 0}
except Exception as ex:
trace_exception(ex)

@ -1 +1 @@
Subproject commit f7bc693b355706a20acd792d0fbd45975cbbf82e
Subproject commit 8e0acd35156536c20194f4f587e0f316e0868eec

View File

@ -68,6 +68,6 @@ def build_graphviz_output(code, ifram_name="unnamed"):
execution_time = time.time() - start_time
return {"output": rendered, "execution_time": execution_time}
def execute_graphviz(code, discussion_id, message_id):
def execute_graphviz(code):
return build_graphviz_output(code)

View File

@ -40,5 +40,5 @@ def build_html_output(code, ifram_name="unnamed"):
execution_time = time.time() - start_time
return {"output": rendered, "execution_time": execution_time}
def execute_html(code, discussion_id, message_id):
def execute_html(code):
return build_html_output(code)

View File

@ -49,5 +49,5 @@ def build_javascript_output(code, ifram_name="unnamed"):
execution_time = time.time() - start_time
return {"output": rendered, "execution_time": execution_time}
def execute_javascript(code, discussion_id, message_id):
def execute_javascript(code):
return build_javascript_output(code)

View File

@ -79,6 +79,6 @@ def execute_latex(code, client:Client, message_id):
host = lollmsElfServer.config.host
url = f"{host}:{lollmsElfServer.config.port}/{discussion_path_2_url(pdf_file)}"
output_json = {"output": f"Pdf file generated at: {pdf_file}\n<a href='{url}'>Click here to show</a>", "execution_time": execution_time}
output_json = {"output": f"Pdf file generated at: {pdf_file}\n<a href='{url}' target='_blank'>Click here to show</a>", "execution_time": execution_time}
return output_json
return spawn_process(code)

View File

@ -82,6 +82,6 @@ def build_mermaid_output(code, ifram_name="unnamed"):
def execute_mermaid(code, discussion_id, message_id):
def execute_mermaid(code):
return build_mermaid_output(code)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,7 +0,0 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="25" fill="black">
</circle>
</svg>

web/dist/assets/rec_off-2c08e836.svg vendored Normal file
View File

@ -0,0 +1,5 @@
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="24" fill="white" stroke="black" stroke-width="2"/>
<circle cx="25" cy="25" r="20" fill="red"/>
</svg>

web/dist/assets/rec_on-3b37b566.svg vendored Normal file
View File

@ -0,0 +1,6 @@
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="24" fill="white" stroke="black" stroke-width="2"/>
<circle id="heartbeat" cx="25" cy="25" r="20" fill="red">
<animate attributeName="r" dur="1s" repeatCount="indefinite" keyTimes="0;0.25;0.5;0.75;1" values="20;24;20;22;20"/>
</circle>
</svg>

View File

@ -1,8 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="22" fill="red">
<animate attributeName="r" values="15;22;15" dur="1s" repeatCount="indefinite" />
</circle>
</svg>

View File

@ -1,4 +1,4 @@
<svg viewBox="0 0 50 50" xmlns="http://www.w3.org/2000/svg">
<circle cx="25" cy="25" r="25" fill="deepskyblue"/>
<text x="25" y="32" font-size="12" text-anchor="middle" fill="white">T</text>
<text x="25" y="37" font-size="36" text-anchor="middle" fill="white" font-weight="bold">T</text>
</svg>

web/dist/index.html vendored
View File

@ -6,8 +6,8 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LoLLMS WebUI - Welcome</title>
<script type="module" crossorigin src="/assets/index-36f2b02c.js"></script>
<link rel="stylesheet" href="/assets/index-13bf9073.css">
<script type="module" crossorigin src="/assets/index-fd262646.js"></script>
<link rel="stylesheet" href="/assets/index-a12915cf.css">
</head>
<body>
<div id="app"></div>

View File

@ -1,7 +1,5 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="25" fill="black">
</circle>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="24" fill="white" stroke="black" stroke-width="2"/>
<circle cx="25" cy="25" r="20" fill="red"/>
</svg>

View File

@ -1,8 +1,6 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="22" fill="red">
<animate attributeName="r" values="15;22;15" dur="1s" repeatCount="indefinite" />
</circle>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="50" height="50" viewBox="0 0 50 50">
<circle cx="25" cy="25" r="24" fill="white" stroke="black" stroke-width="2"/>
<circle id="heartbeat" cx="25" cy="25" r="20" fill="red">
<animate attributeName="r" dur="1s" repeatCount="indefinite" keyTimes="0;0.25;0.5;0.75;1" values="20;24;20;22;20"/>
</circle>
</svg>

View File

@ -622,7 +622,6 @@ export default {
},
getImgUrl() {
if (this.avatar) {
console.log("Avatar:",bUrl + this.avatar)
return bUrl + this.avatar
}
console.log("No avatar found")

View File

@ -4,11 +4,23 @@
<span :style="{ backgroundColor: colors[index % colors.length] }">{{ token[0] }}</span>
</span>
</div>
<div>
<span v-for="(token, index) in namedTokens" :key="index">
<span :style="{ backgroundColor: colors[index % colors.length] }">{{ token[1] }}</span>
</span>
</div>
</template>
<script>
export default {
props: ['namedTokens'],
name: "TokensHilighter",
props: {
namedTokens: {
type: Object,
required: true
}
},
data() {
return {
colors: [

View File

@ -142,6 +142,7 @@ import feather from 'feather-icons'
import static_info from "../assets/static_info.svg"
import animated_info from "../assets/animated_info.svg"
import { useRouter } from 'vue-router'
</script>
<script>
@ -276,9 +277,16 @@ export default {
setTimeout(()=>{
window.close();
},2000)
},
},
refreshPage() {
window.location.href = "/";
const hostnameParts = window.location.href.split('/');
if(hostnameParts.length > 4){
window.location.href='/'
}
else{
window.location.reload(true);
}
},
handleOk(inputText) {
console.log("Input text:", inputText);

View File

@ -3,18 +3,28 @@
<div class="container flex flex-row m-2">
<div class="flex-grow m-2">
<div class="flex gap-3 flex-1 items-center flex-grow flex-row m-2 p-2 border border-blue-300 rounded-md border-2 border-blue-300 m-2 p-4">
<button v-show="!generating" id="generate-button" @click="generate" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="pen-tool"></i></button>
<button v-show="!generating" id="generate-next-button" @click="generate_in_placeholder" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="archive"></i></button>
<button v-show="!generating" id="generate-button" title="Generate from current cursor position" @click="generate" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="pen-tool"></i></button>
<button v-show="!generating" id="generate-next-button" title="Generate from next place holder" @click="generate_in_placeholder" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="archive"></i></button>
<button v-show="!generating" id="tokenize" title="Tokenize text" @click="tokenize_text" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><img width="25" height="25" :src="tokenize_icon"></button>
<span class="w-80"></span>
<button v-show="generating" id="stop-button" @click="stopGeneration" class="w-6 ml-2 hover:text-secondary duration-75 active:scale-90 cursor-pointer"><i data-feather="x"></i></button>
<button
type="button"
title="Dictate (using your browser for transcription)"
@click="startSpeechRecognition"
:class="{ 'text-red-500': isLesteningToVoice }"
class="w-6 hover:text-secondary duration-75 active:scale-90 cursor-pointer"
>
<i data-feather="mic"></i>
</button>
<button
title="convert text to audio (not saved and uses your browser tts service)"
@click.stop="speak()"
:class="{ 'text-red-500': isTalking }"
class="w-6 hover:text-secondary duration-75 active:scale-90 cursor-pointer">
<i data-feather="volume-2"></i>
</button>
<button
type="button"
title="Start audio to audio"
@ -28,6 +38,7 @@
<button
type="button"
title="Start recording audio"
@click="startRecording"
:class="{ 'text-green-500': isLesteningToVoice }"
class="w-6 hover:text-secondary duration-75 active:scale-90 cursor-pointer text-red-500"
@ -36,15 +47,8 @@
<img v-if="pending" :src="loading_icon" height="25">
</button>
<button
title="speak"
@click.stop="speak()"
:class="{ 'text-red-500': isTalking }"
class="w-6 hover:text-secondary duration-75 active:scale-90 cursor-pointer">
<i data-feather="volume-2"></i>
</button>
<button v-if="!isSynthesizingVoice"
title="read"
title="generate audio from the text"
@click.stop="read()"
class="w-6 hover:text-secondary duration-75 active:scale-90 cursor-pointer">
<i data-feather="voicemail"></i>
@ -79,41 +83,40 @@
<div class="flex-grow m-2 p-2 border border-blue-300 rounded-md border-2 border-blue-300 m-2 p-4" :class="{ 'border-red-500': generating }">
<div v-if="tab_id === 'source'">
<div class="flex flex-row justify-end mx-2">
<div v-if="editMsgMode" class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add generic block" @click.stop="addBlock('')">
<img :src="code_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add python block" @click.stop="addBlock('python')">
<img :src="python_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add javascript block" @click.stop="addBlock('javascript')">
<img :src="javascript_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add json block" @click.stop="addBlock('json')">
<img :src="json_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add c++ block" @click.stop="addBlock('c++')">
<img :src="cpp_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add html block" @click.stop="addBlock('html')">
<img :src="html5_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add LaTex block" @click.stop="addBlock('latex')">
<img :src="LaTeX_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Add bash block" @click.stop="addBlock('bash')">
<img :src="bash_block" width="25" height="25">
</div>
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer"
<div class="text-lg hover:text-secondary duration-75 active:scale-90 p-2 cursor-pointer hover:border-2"
title="Copy message to clipboard" @click.stop="copyContentToClipboard()">
<i data-feather="copy"></i>
</div>
@ -134,7 +137,9 @@
<source :src="audio_url" type="audio/wav" ref="audio_player">
Your browser does not support the audio element.
</audio>
<tokens-hilighter :namedTokens="namedTokens">
</tokens-hilighter>
<div v-if="tab_id === 'render'">
<MarkdownRenderer ref="mdRender" :markdown-text="text" class="mt-4 p-2 rounded shadow-lg dark:bg-bg-dark">
</MarkdownRenderer>
@ -234,6 +239,7 @@ import socket from '@/services/websocket.js'
import Toast from '../components/Toast.vue'
import MarkdownRenderer from '../components/MarkdownRenderer.vue';
import ClipBoardTextInput from "@/components/ClipBoardTextInput.vue";
import TokensHilighter from "@/components/TokensHilighter.vue"
import Card from "@/components/Card.vue"
import { nextTick, TransitionGroup } from 'vue'
const bUrl = import.meta.env.VITE_LOLLMS_API_BASEURL
@ -243,11 +249,14 @@ import python_block from '@/assets/python_block.png';
import javascript_block from '@/assets/javascript_block.svg';
import json_block from '@/assets/json_block.png';
import cpp_block from '@/assets/cpp_block.png';
import html5_block from '@/assets/html5_block.png';
import LaTeX_block from '@/assets/LaTeX_block.png';
import bash_block from '@/assets/bash_block.png';
import tokenize_icon from '@/assets/tokenize_icon.svg';
import deaf_on from '@/assets/deaf_on.svg';
@ -405,6 +414,10 @@ export default {
name: 'PlayGroundView',
data() {
return {
posts_headers : {
'accept': 'application/json',
'Content-Type': 'application/json'
},
pending:false,
is_recording:false,
is_deaf_transcribing:false,
@ -418,6 +431,8 @@ export default {
python_block:python_block,
bash_block:bash_block,
tokenize_icon:tokenize_icon,
deaf_off:deaf_off,
deaf_on:deaf_on,
@ -436,7 +451,8 @@ export default {
isLesteningToVoice:false,
presets:[],
selectedPreset: '',
cursorPosition:0,
cursorPosition:0,
namedTokens:[],
text:"",
pre_text:"",
post_text:"",
@ -456,6 +472,7 @@ export default {
Toast,
MarkdownRenderer,
ClipBoardTextInput,
TokensHilighter,
Card
},
mounted() {
@ -768,6 +785,11 @@ export default {
// Toggle button visibility
this.generating=true
},
async tokenize_text(){
const output = await axios.post("/lollms_tokenize", {"prompt": this.text}, {headers: this.posts_headers});
console.log(output.data.named_tokens)
this.namedTokens = output.data.named_tokens
},
generate(){
console.log("Finding cursor position")
this.pre_text = this.text.substring(0,this.getCursorPosition())

View File

@ -425,14 +425,14 @@
</tr>
<tr>
<td style="min-width: 200px;">
<label for="user_description" class="text-sm font-bold" style="margin-right: 1rem;">Use user description in discussion:</label>
<label for="use_user_informations_in_discussion" class="text-sm font-bold" style="margin-right: 1rem;">Use user description in discussion:</label>
</td>
<td style="width: 100%;">
<input
type="checkbox"
id="override_personality_model_parameters"
id="use_user_informations_in_discussion"
required
v-model="configFile.override_personality_model_parameters"
v-model="configFile.use_user_informations_in_discussion"
@change="settingsChanged=true"
class="mt-1 px-2 py-1 border border-gray-300 rounded dark:bg-gray-600"
>