Lollms Feather v13 alpha version

This commit is contained in:
Saifeddine ALOUI 2024-09-27 01:42:31 +02:00
parent 75e2b77170
commit ad35000de3
28 changed files with 673 additions and 667 deletions

View File

@ -4,3 +4,66 @@ Today, we're diving into the magical world of lollms personalities, and boy, do
Alright, the new lollms is too strawberry, and I get it. Can you count all the hidden strawberries? Leave a number in the comments.
Now let's mount the Apps Maker personality from the personalities zoo. As you can see, the zoo now has its own page, and you can sort the apps by multiple criteria. You can find the lollms Apps Maker in the lollms category.
Wow! That's an exciting introduction to the new lollms Apps Maker personality! Let's dive deeper into this powerful tool and expand on its capabilities.
The lollms Apps Maker is a game-changing addition to the lollms ecosystem, designed to empower users to create web applications with ease. Here's an enhanced and expanded overview of its features:
Intelligent Coding Assistant:
The Apps Maker allows users to code efficiently by leveraging its vast knowledge base.
It can generate, explain, and debug code in various programming languages.
Customizable Documentation Context:
Users can select specific documentation in the settings to manage the context.
Different documentations can be activated at various stages of the coding process, ensuring relevant information is always at hand.
Project Card Generation:
The Apps Maker creates a comprehensive project card that the lollms UI renders (a rough sketch of such a card appears after this feature list).
This card serves as a quick reference and overview of the project.
Detailed Planning:
Optionally, it can create an in-depth plan with extensive details.
This plan serves as a roadmap for the development process.
App Building:
Using the project description and plan, the Apps Maker constructs the application.
It can generate necessary files, including HTML, CSS, JavaScript, and backend code if required.
Custom Icon Creation:
If activated, the Apps Maker can design a unique icon for the application.
This feature adds a personal touch to each project.
If not activated, a generic icon is used as a placeholder.
Testing and Iteration:
Users can test the generated app directly within the lollms environment.
They can then return to the Apps Maker to request additional features, enhancements, or style changes.
Continuous Improvement:
The Apps Maker supports an iterative development process.
Users can refine their applications over time, adding complexity and polish.
Multi-language Support:
It can work with various programming languages and frameworks, adapting to the user's preferences and project requirements.
UI/UX Suggestions:
The Apps Maker can provide recommendations for improving the user interface and experience of the application.
Performance Optimization:
It can suggest and implement optimizations to ensure the app runs smoothly.
Responsive Design:
The Apps Maker can create applications that work well on various devices and screen sizes.
Integration Capabilities:
It can suggest and implement integrations with external APIs and services to extend the app's functionality.
Version Control Guidance:
The Apps Maker can provide advice on using version control systems like Git to manage the project effectively.
Documentation Generation:
It can automatically generate documentation for the created application, including README files and inline code comments.
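To make the project card idea from the list above more tangible, here is a rough sketch of the kind of metadata such a card could bundle. The field names and values below are assumptions for illustration, not the actual format used by the Apps Maker or the lollms UI.
```python
# Hypothetical project card: field names and values are illustrative assumptions only.
project_card = {
    "name": "Todo List App",
    "description": "A small web app for managing daily tasks.",
    "author": "Generated with the lollms Apps Maker",
    "version": "0.1.0",
    "entry_point": "index.html",                     # main file the UI would open
    "files": ["index.html", "style.css", "app.js"],  # files the Apps Maker generated
    "icon": "icon.png",                              # custom icon if that option is activated
}

print(f"{project_card['name']} v{project_card['version']}: {project_card['description']}")
```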
With the lollms Apps Maker, users have a powerful tool at their fingertips that streamlines the web application development process. Whether you're a seasoned developer looking to prototype quickly or a beginner taking your first steps into coding, this personality is designed to make app creation an exciting and accessible journey.
Remember, the magic of lollms is in its ability to understand and adapt to your needs. So don't hesitate to ask for clarifications, modifications, or entirely new features as you build your dream application!

View File

@ -0,0 +1,105 @@
Title: "Function Calls in LoLLMs: Empowering AI Personalities"
1. Introduction
- Brief overview of LoLLMs
- Importance of function calls in enhancing AI capabilities
2. What are Function Calls?
- Definition and purpose
- Examples of functions (image generation, internet search, hardware interaction)
3. How Function Calls Work in LoLLMs
Step 1: Personality Accesses Functions
Step 2: LLM Generates JSON Code
Step 3: LoLLMs Interprets the JSON
Step 4: Function Execution
Step 5: (Optional) Additional Text Generation
4. Visual Demonstration
- Flowchart of the process
- Example scenarios
5. Benefits of Function Calls in LoLLMs
- Enhanced capabilities
- Flexibility and extensibility
6. Conclusion
- Recap of key points
- Future possibilities
Now, let's create a detailed script with visuals:
```mermaid
sequenceDiagram
participant User
participant LLM
participant LoLLMs System
participant Function
User->>LLM: Request
LLM->>LLM: Analyze request
LLM->>LoLLMs System: Generate JSON function call
LoLLMs System->>Function: Execute function
Function-->>LoLLMs System: Return result
LoLLMs System-->>LLM: Provide result
LLM->>User: Generate response
```
Script:
1. Introduction
[Show LoLLMs logo]
Narrator: "Welcome to LoLLMs, the Lord of Large Language Multimodal Systems. Today, we're diving into one of LoLLMs' most powerful features: function calls."
2. What are Function Calls?
[Show icons representing different functions]
Narrator: "Function calls allow AI personalities to perform specific tasks beyond text generation. These can include image creation, internet searches, and even hardware interactions."
3. How Function Calls Work in LoLLMs
[Display the flowchart as each step is explained]
Narrator: "Let's break down the process:"
Step 1: Personality Accesses Functions
"AI personalities in LoLLMs have access to one or multiple functions, depending on their configuration."
Step 2: LLM Generates JSON Code
"When a specific function is needed, the LLM generates a JSON code to trigger it."
Step 3: LoLLMs Interprets the JSON
"The LoLLMs system interprets this JSON code."
Step 4: Function Execution
"Based on the interpretation, LoLLMs executes the appropriate function."
Step 5: (Optional) Additional Text Generation
"Depending on the function type, another round of text generation might be required to process the results."
4. Visual Demonstration
[Show example scenarios with code snippets]
Narrator: "Let's look at some examples:"
Example 1: Image Generation
[Display JSON code for image generation and a resulting image]
"Here, the LLM generates JSON to create an image, and LoLLMs processes it to produce the visual."
Example 2: Internet Search
[Show JSON for an internet search and search results]
"In this case, the function call triggers an internet search, providing the LLM with up-to-date information."
5. Benefits of Function Calls in LoLLMs
[List benefits with animated icons]
Narrator: "Function calls in LoLLMs offer:
- Enhanced capabilities beyond text generation
- Flexibility to add new functions as needed
- Real-time interaction with external systems and data"
6. Conclusion
[Show LoLLMs logo again]
Narrator: "Function calls are what make LoLLMs truly powerful, allowing AI personalities to interact with the world in meaningful ways. As we continue to develop new functions, the possibilities are endless."
To create this video, we would need to:
1. Record the narration
2. Create or source appropriate visuals (logos, icons, flowcharts)
3. Animate the flowchart and examples
4. Add background music and sound effects
5. Edit everything together into a cohesive video

View File

@ -0,0 +1,129 @@
Here is a comprehensive, consolidated overview of lollms, covering its history, features, and significance:
# lollms: The Ever-Evolving AI Ecosystem That Rules Them All
## The Visionary Behind lollms
lollms, short for "Lord of Large Language & Multimodal Systems," is the brainchild of ParisNeo, a visionary expert in AI and robotics. ParisNeo's journey in coding began at the tender age of 11, igniting a lifelong passion that has never waned. As the creator of lollms, ParisNeo's dream was to develop an accessible, free-of-charge tool that could "rule them all" - a sentiment that echoes throughout the project's philosophy.
## The Evolution of lollms: From Google Chrome Plugin to Powerhouse
The journey of lollms is a testament to rapid innovation and adaptability in the fast-paced world of AI development:
### Early Beginnings: The ChatGPT Personality Selector Chrome Plugin
- ParisNeo's foray into AI tools began with a Chrome plugin for ChatGPT called ChatGPT Personality Selector, which adds buttons to the ChatGPT interface to condition the AI to adopt any personality from a list he developed.
- Developed just months after ChatGPT's release
- Demonstrated ParisNeo's quick recognition of AI's potential and his ability to build upon emerging technologies
### The Standalone Application: GPT4All WebUI
- Inspired by the release of LLaMA and early versions of GPT4All
- ParisNeo saw the need for a more versatile, standalone application
- Initially named "GPT4All WebUI," reflecting its origins and primary interface
### The Birth of lollms
- As the project grew in scope and capability, a new name was needed
- Renamed to "lollms - Lord of Large Language Models"
- The new name reflected the tool's expanding compatibility with various AI systems
- Emphasized the project's ambition to be a comprehensive solution for language models
### Embracing Multimodality
- With the emergence of multimodal AI systems, lollms evolved further
- The name was expanded to "Lord of Large Language and Multimodal Systems"
- This change signified lollms' growth beyond text-based AI, incorporating image, speech, and other modalities
This evolution highlights lollms' adaptability, vision, rapid development, and forward-thinking approach, positioning it uniquely in the AI landscape.
## The Expansive lollms Ecosystem
lollms is not just an AI system; it's a comprehensive ecosystem that pushes the boundaries of what's possible in artificial intelligence:
### 1. Core Features
- Released under the Apache 2.0 license, ensuring versatility in various applications
- Offers bindings to connect with multiple AI systems
- Boasts a robust personality system with over 500 distinct personas
### 2. Service Suite
lollms provides a wide array of services, including:
- Text-to-text
- Text-to-image
- Image-to-text
- Image-to-image
- Speech-to-text
- Text-to-speech
- Text-to-music
- Text-to-video
### 3. The lollms Zoo
#### Application Zoo
- Features hundreds of applications
- Diverse range of tools and utilities built on the lollms framework
- Enables users to leverage AI capabilities for various specific tasks
#### Models Zoo
- Houses thousands of AI models
- Covers a wide spectrum of AI capabilities and specializations
- Each binding (connection to different AI systems) has its own set of compatible models
#### Personalities Zoo
- Over 500 distinct AI personas
- Allows for highly customizable AI interactions
- Enables lollms to adapt to different contexts and user needs
## Daily Innovation and Ethical Development
What sets lollms apart is not just its impressive capabilities, but the passion and ethical considerations driving its development:
### Rapid Development Cycle
- New version released almost every day
- Continuous improvements and expansions to the ecosystem
- Driven by ParisNeo's unwavering passion for AI and coding
### Ethical Considerations and Community Focus
- All developments are shared openly with the community
- Motivated by a genuine desire to help people through technology
- Promotes transparency and collaboration in AI development
## How to Pronounce "lollms"
ParisNeo envisioned a pronunciation that's both easy and distinctive. Here are two approved ways to pronounce lollms:
### Option 1: The Smooth Blend
Pronunciation: "lahms" (rhymes with "palms")
### Option 2: The Fluid Fusion
Pronunciation: "lolmz" (rhymes with "holmes")
## The Impact of Daily Development
The daily release cycle of lollms has significant implications:
- Rapid adaptation to new AI breakthroughs
- Quick integration of community feedback
- Constant refinement of existing features
- Regular introduction of new capabilities
## Community Engagement and Ethical AI
ParisNeo's commitment to sharing lollms with the community goes beyond just open-sourcing the code:
- Encourages collaborative improvement of the ecosystem
- Facilitates discussions on ethical AI development
- Provides a platform for exploring AI's potential to benefit society
- Demonstrates how passion-driven projects can have a significant impact in the tech world
## The Future of lollms: A Dream Realized
As lollms continues to evolve and expand its ecosystem, it raises intriguing questions about the future of AI:
- How will the vast array of applications in the lollms zoo shape various industries?
- Can the thousands of models in the models zoo lead to breakthroughs in AI research and application?
- Will the personalities zoo redefine how we interact with AI in our daily lives?
- Does this level of AI versatility and power lead us towards a utopian future or potential dystopian concerns?
- How will the rapid development cycle of lollms influence the broader AI industry?
- Can this model of passionate, ethical, and community-focused development become a standard in AI research?
- What new applications and use cases will emerge from the constantly expanding lollms ecosystem?
lollms stands as a testament to what can be achieved when cutting-edge technology is driven by passion, ethical considerations, and a genuine desire to help people. It embodies the idea that AI development can be both innovative and responsible, pushing the boundaries of what's possible while remaining grounded in community values.
For podcasters, tech enthusiasts, researchers, and anyone interested in the future of AI, lollms offers a fascinating glimpse into a world where artificial intelligence is not just powerful, but also ethically developed and community-driven. As it continues to evolve daily, lollms remains true to its slogan: "One tool to rule them all" - not through dominance, but through innovation, collaboration, and a steadfast commitment to ethical AI development.
Whether lollms leads us to utopia or raises important questions about potential dystopian futures, one thing is certain: it's a name we'll be hearing - and pronouncing - a lot more in the years to come. It represents not just the culmination of a childhood dream, but a powerful vision for the future of AI that has the potential to shape our world in profound and exciting ways.

View File

@ -0,0 +1,48 @@
Title: "LoLLMs Smart Router: Optimizing AI Performance and Efficiency"
[Intro animation with LoLLMs logo]
[Upbeat background music]
Host: "Welcome to LoLLMs, where we're revolutionizing the world of AI with our smart router technology. Today, we're diving into how our innovative system optimizes AI model selection for maximum efficiency and cost-effectiveness."
[Animated diagram showing the LoLLMs smart router concept]
Host: "The LoLLMs smart router is a game-changer in AI task management. Here's how it works:"
Activation:
[Animation of a user turning on the smart router]
"When activated, users first need to define a router model. This is a small, efficient Language Model that acts as the brain of our system."
Task Analysis:
[Animation of gears turning in the router's "brain"]
"The router model's job is to analyze the complexity of each user task. It examines the query and determines the level of sophistication required to handle it effectively."
Model Setup:
[Animation showing a lineup of AI models of increasing size/power]
"Users set up a range of AI models, arranged by increasing power or price. This creates a spectrum of options for the router to choose from."
Intelligent Routing:
[Animation of the router directing a task to the appropriate model]
"Based on its analysis, the smart router selects the most suitable model for the task. It aims to find the perfect balance between capability and efficiency."
Optimization:
[Split screen showing energy usage and cost decreasing]
"By choosing the right model for each task, the smart router optimizes the generation process. This minimizes both energy requirements and costs."
Host: "The benefits of our smart router are clear:"
[Bullet points appear on screen]
Improved efficiency
Reduced energy consumption
Lower operational costs
Faster response times
Optimal use of AI resources
[Closing animation]
Host: "With LoLLMs smart router, we're not just making AI smarter we're making it smarter about how it uses its intelligence. One tool to rule them all, indeed!"
[Call to action]
"Want to learn more about how LoLLMs can revolutionize your AI operations? Visit our website or contact us today for a demo."
[LoLLMs logo and contact information]

@ -1 +1 @@
Subproject commit ae87383034323bfba1157e85ef735570c7097e06
Subproject commit 73f175d90b549765cefeea6ca0f41f9c7f22fe1c

View File

@ -71,7 +71,7 @@ def terminate_thread(thread):
else:
ASCIIColors.yellow("Canceled successfully")
# The current version of the webui
lollms_webui_version="12 (🍓)"
lollms_webui_version="13 alpha ( code name feather 🪶)"

View File

@ -1,7 +1,7 @@
@echo off
echo "LoLLMS: Lord of Large Language and Multimodal Systems"
echo V12 Strawberry
echo V13 Feather
echo -----------------
echo By ParisNeo
echo -----------------

View File

@ -3,8 +3,8 @@
@rem This script will install miniconda and git with all dependencies for this project
@rem This enables a user to install this project without manually installing conda and git.
echo "L🍓LLMS: Lord of Large Language and Multimodal Systems"
echo V12 Strawberry
echo "L🪶LLMS: Lord of Large Language and Multimodal Systems"
echo V13 Feather
echo -----------------
echo By ParisNeo
echo -----------------

File diff suppressed because one or more lines are too long

web/dist/assets/index-86a84335.css vendored Normal file

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

web/dist/assets/logo-360e020b.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 727 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 445 KiB

web/dist/index.html vendored
View File

@ -6,8 +6,8 @@
<script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-svg.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LoLLMS WebUI</title>
<script type="module" crossorigin src="/assets/index-e85b972b.js"></script>
<link rel="stylesheet" href="/assets/index-966be503.css">
<script type="module" crossorigin src="/assets/index-4ce3ce46.js"></script>
<link rel="stylesheet" href="/assets/index-86a84335.css">
</head>
<body>
<div id="app"></div>

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.1 MiB

After

Width:  |  Height:  |  Size: 727 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.1 MiB

After

Width:  |  Height:  |  Size: 727 KiB

View File

@ -257,7 +257,7 @@ body {
0%, 100% { transform: translateY(0); }
50% { transform: translateY(-5px); }
}
.strawberry-emoji {
.feather-emoji {
display: inline-block;
margin-left: 5px;
animation: bounce 2s infinite;

View File

@ -344,7 +344,7 @@
title="Real-time audio mode"
>
<template #icon>
🍓
🪶
</template>
</ChatBarButton>

View File

@ -11,7 +11,7 @@
ref="menuItems"
>
{{ link.text }}
<span v-if="isRouteActive(link.route)" class="strawberry-emoji">🍓</span>
<span v-if="isRouteActive(link.route)" class="feather-emoji">🪶</span>
</RouterLink>
</nav>
</div>

View File

@ -9,9 +9,9 @@
alt="Logo" title="LoLLMS WebUI">
</div>
<div class="flex flex-col justify-center">
<div class="text-2xl md:text-2xl font-bold text-red-600 mb-2"
<div class="text-6xl md:text-2xl font-bold text-amber-500 mb-2"
style="text-shadow: 2px 2px 0px white, -2px -2px 0px white, 2px -2px 0px white, -2px 2px 0px white;">
L🍓LLMS
L🪶LLMS
</div>
<p class="text-gray-400 text-sm">One tool to rule them all</p>
</div>

View File

@ -11,7 +11,7 @@
</div>
<div class="flex flex-col items-start">
<h1 class="text-6xl font-bold text-transparent bg-clip-text bg-gradient-to-r from-indigo-600 to-purple-600 dark:from-indigo-400 dark:to-purple-400">
L🍓LLMS
L🪶LLMS
</h1>
<p class="text-2xl text-gray-600 dark:text-gray-300 italic mt-2">
Lord of Large Language And Multimodal Systems
@ -21,14 +21,14 @@
<div class="space-y-8 animate-fade-in-up">
<h2 class="text-4xl font-semibold text-gray-800 dark:text-gray-200">
Welcome to L🍓LLMS WebUI
Welcome to L🪶LLMS WebUI
</h2>
<p class="text-xl text-gray-600 dark:text-gray-300 max-w-3xl mx-auto">
Embark on a journey through the realm of advanced AI with L🍓LLMS, your ultimate companion for intelligent conversations and multimodal interactions. Unleash the power of large language models and explore new frontiers in artificial intelligence.
Embark on a journey through the realm of advanced AI with L🪶LLMS, your ultimate companion for intelligent conversations and multimodal interactions. Unleash the power of large language models and explore new frontiers in artificial intelligence.
</p>
<div class="mt-12 space-y-6">
<p class="text-lg text-gray-700 dark:text-gray-300">
Discover the capabilities of L🍓LLMS:
Discover the capabilities of L🪶LLMS:
</p>
<ul class="text-left list-disc list-inside text-gray-600 dark:text-gray-300 space-y-2">
<li>Engage in natural language conversations</li>

View File

@ -10,20 +10,17 @@
animationDuration: `${3 + Math.random() * 7}s`,
animationDelay: `${Math.random() * 5}s`
}">
<img
src="@/assets/strawberry.png"
alt="Falling Strawberry"
class="w-6 h-6"
/>
🪶
</div>
</div>
<div class="flex flex-col items-center text-center max-w-4xl w-full px-4 relative z-10">
<div class="mb-8 w-full">
<div class="text-6xl md:text-7xl font-bold text-red-600 mb-2"
style="text-shadow: 2px 2px 0px white, -2px -2px 0px white, 2px -2px 0px white, -2px 2px 0px white;">
L🍓LLMS
</div>
<div class="text-6xl md:text-7xl font-bold text-amber-500 mb-2"
style="text-shadow: 2px 2px 0px white, -2px -2px 0px white, 2px -2px 0px white, -2px 2px 0px white;">
L🪶LLMS
</div>
<p class="text-2xl text-gray-600 dark:text-gray-300 italic">
One tool to rule them all
</p>
@ -34,15 +31,10 @@
{{ version_info }}
</p>
<div class="w-full h-24 relative overflow-hidden bg-gradient-to-r from-blue-200 to-purple-200 dark:from-blue-800 dark:to-purple-800 rounded-full shadow-lg">
<img
class="w-24 h-24 rounded-full absolute top-0 transition-all duration-300 ease-linear"
:style="{ left: `calc(${loading_progress}% - 3rem)` }"
title="L🍓LLMS WebUI"
src="@/assets/strawberry.png"
alt="Strawberry Logo"
>
<div class="w-full h-24 relative overflow-hidden bg-gradient-to-r from-blue-200 to-purple-200 dark:from-blue-800 dark:to-purple-800 rounded-full shadow-lg flex items-center justify-center">
<p style="font-size: 48px; line-height: 1;">🪶</p>
</div>
</div>
<div class="w-full max-w-2xl">
@ -1927,14 +1919,14 @@ export default {
if (item) {
if (item.id) {
const realTitle = item.title ? item.title === "untitled" ? "New discussion" : item.title : "New discussion"
document.title = 'L🍓LLMS WebUI - ' + realTitle
document.title = 'L🪶LLMS WebUI - ' + realTitle
} else {
const title = item || "Welcome"
document.title = 'L🍓LLMS WebUI - ' + title
document.title = 'L🪶LLMS WebUI - ' + title
}
} else {
const title = item || "Welcome"
document.title = 'L🍓LLMS WebUI - ' + title
document.title = 'L🪶LLMS WebUI - ' + title
}
},

View File

@ -1,401 +1,67 @@
<template>
<div class="min-h-screen w-full bg-gradient-to-br from-blue-100 to-purple-100 dark:from-blue-900 dark:to-purple-900 overflow-y-auto">
<div class="container mx-auto px-4 py-8 relative z-10">
<header class="text-center mb-12 sticky top-0 bg-white dark:bg-gray-800 bg-opacity-90 dark:bg-opacity-90 backdrop-filter backdrop-blur-lg p-4 rounded-b-lg shadow-md">
<h1 class="text-5xl md:text-6xl font-bold text-transparent bg-clip-text bg-gradient-to-r from-blue-600 to-purple-600 dark:from-blue-400 dark:to-purple-400 mb-2 animate-glow">
LoLLMs Help Documentation
</h1>
<p class="text-2xl text-gray-600 dark:text-gray-300 italic">
"One tool to rule them all"
</p>
</header>
<div class="help-view">
<h1>Welcome to the Lollms help</h1>
<nav class="bg-white dark:bg-gray-800 shadow-md rounded-lg p-6 mb-8 animate-fade-in sticky top-32 max-h-[calc(100vh-8rem)] overflow-y-auto">
<h2 class="text-3xl font-semibold mb-4 text-gray-800 dark:text-gray-200">Table of Contents</h2>
<ul class="space-y-2">
<li v-for="section in sections" :key="section.id" class="ml-4">
<a :href="`#${section.id}`" @click="scrollToSection(section.id)" class="text-blue-600 dark:text-blue-400 hover:text-blue-800 dark:hover:text-blue-300 hover:underline transition-colors duration-200">{{ section.title }}</a>
<ul v-if="section.subsections" class="ml-4 mt-2 space-y-1">
<li v-for="subsection in section.subsections" :key="subsection.id">
<a :href="`#${subsection.id}`" @click="scrollToSection(subsection.id)" class="text-blue-500 dark:text-blue-300 hover:text-blue-700 dark:hover:text-blue-200 hover:underline transition-colors duration-200">{{ subsection.title }}</a>
</li>
</ul>
</li>
<p>
**Lollms** (**L**ord **o**f **L**arge **L**anguage and **M**ultimodal **S**ystems), pronounced "lahms", is a powerful tool that lets you interact with a wide variety of language and multimodal models. It was created by **ParisNeo**, an AI and robotics expert who has been passionate about programming since childhood, with the goal of making AI models accessible to everyone, free of charge. Its slogan: **"One tool to rule them all"**. [1, 2]
</p>
<h2>Key components of Lollms</h2>
<ul>
<li>
<b>Bindings:</b> Essentially Python code, bindings are intermediate modules that allow Lollms to interact with different AI models. They act as bridges between the Lollms user interface and the software libraries that run those models. [3-5]
</li>
<li>
<b>Services:</b> Lollms relies on additional services created by third-party developers. These services, often open source, extend Lollms' capabilities. Among them are: [6]
<ul>
<li>**LLM services:** such as ollama and vllm, dedicated to text generation.</li>
<li>**Image generation:** such as Stable Diffusion, to create images from text descriptions.</li>
<li>**Speech synthesis:** such as Xtts, to convert text into speech.</li>
</ul>
</nav>
</li>
<li>
<b>Models:</b> At the heart of Lollms are the language models. Trained on huge text datasets, these models can: [6-8]
<ul>
<li>Generate coherent, context-aware text.</li>
<li>Translate languages with remarkable accuracy.</li>
<li>Produce different kinds of creative content, such as poems, code, and scripts.</li>
<li>Provide informative answers to your questions.</li>
</ul>
A model's size is a key factor in its performance: the larger the model (that is, the more parameters it has), the more advanced its capabilities generally are. [8, 9]
</li>
<li>
<b>Personalities:</b> Bring Lollms to life with personalities. These virtual agents, each with unique traits and communication styles, are created through two main methods: [10, 11]
<ul>
<li>**Text conditioning:** The model is given example conversations representative of the desired personality, training it to imitate that style.</li>
<li>**Custom Python code:** Python scripts define more complex behaviors and integrate features specific to a personality.</li>
</ul>
With more than 250 personalities available, covering domains from science to fiction, you are bound to find one that fits your needs. [10, 12, 13]
</li>
</ul>
<main class="bg-white dark:bg-gray-800 shadow-md rounded-lg p-6 animate-fade-in">
<section v-for="section in sections" :key="section.id" :id="section.id" class="mb-12">
<h2 class="text-4xl font-semibold mb-6 text-transparent bg-clip-text bg-gradient-to-r from-blue-600 to-purple-600 dark:from-blue-400 dark:to-purple-400">{{ section.title }}</h2>
<div v-html="section.content" class="prose dark:prose-invert max-w-none"></div>
<h2>Lollms features</h2>
<div v-if="section.subsections" class="mt-8">
<section v-for="subsection in section.subsections" :key="subsection.id" :id="subsection.id" class="mb-8">
<h3 class="text-3xl font-semibold mb-4 text-gray-700 dark:text-gray-300">{{ subsection.title }}</h3>
<div v-html="subsection.content" class="prose dark:prose-invert max-w-none"></div>
</section>
</div>
</section>
</main>
<p>
Lollms offers an impressive range of features:
</p>
<footer class="mt-12 pt-8 border-t border-gray-300 dark:border-gray-700 animate-fade-in">
<h2 class="text-3xl font-semibold mb-6 text-center text-transparent bg-clip-text bg-gradient-to-r from-blue-600 to-purple-600 dark:from-blue-400 dark:to-purple-400">Contact</h2>
<div class="flex flex-wrap justify-center gap-6 mb-8">
<a v-for="(link, index) in contactLinks" :key="index" :href="link.url" target="_blank" class="text-blue-600 dark:text-blue-400 hover:text-blue-800 dark:hover:text-blue-300 hover:underline transition-colors duration-200">
{{ link.text }}
</a>
</div>
<p class="text-center font-bold text-2xl text-gray-700 dark:text-gray-300">See ya!</p>
</footer>
</div>
<!-- Falling stars background -->
<div class="fixed inset-0 pointer-events-none overflow-hidden">
<div v-for="n in 50" :key="n" class="absolute animate-fall"
:style="{
left: `${Math.random() * 100}%`,
top: `-20px`,
animationDuration: `${3 + Math.random() * 7}s`,
animationDelay: `${Math.random() * 5}s`
}">
<svg class="w-2 h-2 text-yellow-300" fill="currentColor" viewBox="0 0 20 20">
<path d="M9.049 2.927c.3-.921 1.603-.921 1.902 0l1.07 3.292a1 1 0 00.95.69h3.462c.969 0 1.371 1.24.588 1.81l-2.8 2.034a1 1 0 00-.364 1.118l1.07 3.292c.3.921-.755 1.688-1.54 1.118l-2.8-2.034a1 1 0 00-1.175 0l-2.8 2.034c-.784.57-1.838-.197-1.539-1.118l1.07-3.292a1 1 0 00-.364-1.118L2.98 8.72c-.783-.57-.38-1.81.588-1.81h3.461a1 1 0 00.951-.69l1.07-3.292z" />
</svg>
</div>
</div>
<ul>
<li>**Text generation:** Write stories, articles, poems, and more.</li>
<li>**Language translation:** Break down language barriers by translating text into different languages.</li>
<li>**Creative writing:** Explore your artistic side by generating scripts, poems, and song lyrics.</li>
<li>**Question answering:** Get answers to your questions by drawing on the knowledge of the language models.</li>
<li>**Image generation:** Create unique images from text descriptions.</li>
<li>**Speech synthesis:** Give your texts a voice with text-to-speech.</li>
<li>**Code execution:** Generate and run code in different programming languages.</li>
<li>**Web integration:** Integrate Lollms into your web projects for dynamic features.</li>
<li>**And much more!** Lollms is constantly evolving, with new features added regularly.</li>
</ul>
</div>
</template>
<script>
export default {
data() {
return {
sections: [
{
id: 'introduction',
title: 'Introduction',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">LoLLMs (Lord of Large Language Multimodal Systems) is a powerful and versatile AI system designed to handle a wide range of tasks. Developed by ParisNeo, a computer geek passionate about AI, LoLLMs aims to be the ultimate tool for AI-assisted work and creativity.</p>
<p class="mb-4 text-gray-700 dark:text-gray-300">With its advanced capabilities in natural language processing, multimodal understanding, and code interpretation, LoLLMs can assist users in various domains, from content creation to complex problem-solving.</p>
`
},
{
id: 'key-features',
title: 'Key Features',
content: `
<ul class="list-disc list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Advanced language understanding and generation</li>
<li>Multimodal capabilities (text, images, and more)</li>
<li>Built-in code interpreter for various programming languages</li>
<li>Internet search integration for up-to-date information</li>
<li>Customizable personalities for specialized tasks</li>
<li>File handling and analysis capabilities</li>
</ul>
`
},
{
id: 'getting-started',
title: 'Getting Started',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">To get started with LoLLMs, follow these steps:</p>
<ol class="list-decimal list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Install LoLLMs on your system (refer to the installation guide)</li>
<li>Configure your preferences and API keys if necessary</li>
<li>Choose a personality or mode that fits your task</li>
<li>Start interacting with LoLLMs through the chat interface</li>
</ol>
<p class="mb-4 text-gray-700 dark:text-gray-300">For detailed installation instructions, visit our <a href="#" class="text-blue-600 dark:text-blue-400 hover:underline">installation guide</a>.</p>
`
},
{
id: 'personalities',
title: 'Personalities',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">LoLLMs offers various personalities to cater to different tasks and user needs. Each personality is optimized for specific use cases, ensuring the best possible assistance.</p>
`,
subsections: [
{
id: 'document-summarization',
title: 'Document Summarization',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">The Document Summarization personality, also known as <code class="bg-gray-200 dark:bg-gray-700 rounded px-1">docs_zipper</code>, specializes in condensing large documents into concise summaries while maintaining context and key information.</p>
<p class="mb-4 text-gray-700 dark:text-gray-300">Key features:</p>
<ul class="list-disc list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Handles multiple document formats (PDF, DOCX, TXT, etc.)</li>
<li>Provides customizable summary lengths</li>
<li>Extracts main ideas and key points</li>
<li>Maintains document structure in summaries</li>
</ul>
<p><a href="/help/personalities/documents summary/index.html" class="text-blue-600 dark:text-blue-400 hover:text-blue-800 dark:hover:text-blue-300 hover:underline transition-colors duration-200">Learn more about Document Summarization</a></p>
`
},
{
id: 'code-interpreter',
title: 'Code Interpreter',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">The Code Interpreter personality allows LoLLMs to understand, analyze, and execute code in various programming languages.</p>
<p class="mb-4 text-gray-700 dark:text-gray-300">Key features:</p>
<ul class="list-disc list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Supports multiple programming languages</li>
<li>Provides code explanations and suggestions</li>
<li>Assists in debugging and optimizing code</li>
<li>Can generate code snippets based on natural language descriptions</li>
</ul>
`
}
]
},
{
id: 'advanced-features',
title: 'Advanced Features',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">Explore the advanced capabilities of LoLLMs:</p>
<ul class="list-disc list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Internet Search Integration: Access up-to-date information by using the "send message with internet search" feature</li>
<li>File Handling: Upload and analyze various file types, including images and documents</li>
<li>SVG and Diagram Generation: Create visual representations using SVG, Graphviz, and Mermaid</li>
<li>HTML and JavaScript Generation: Develop interactive web content directly through LoLLMs</li>
<li>Python Code Execution: Run Python scripts for data analysis and more</li>
</ul>
`
},
{
id: 'use-cases',
title: 'Use Cases',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">LoLLMs can be applied to a wide range of tasks, including:</p>
<ul class="list-disc list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Content Creation and Editing</li>
<li>Data Analysis and Visualization</li>
<li>Code Development and Debugging</li>
<li>Research and Information Gathering</li>
<li>Language Translation and Localization</li>
<li>Creative Writing and Brainstorming</li>
<li>Technical Documentation</li>
<li>Educational Support and Tutoring</li>
</ul>
`
},
{
id: 'troubleshooting',
title: 'Troubleshooting',
content: `
<p class="mb-4 text-gray-700 dark:text-gray-300">If you encounter any issues while using LoLLMs, try the following steps:</p>
<ol class="list-decimal list-inside mb-4 text-gray-700 dark:text-gray-300">
<li>Check your internet connection</li>
<li>Verify that you're using the latest version of LoLLMs</li>
<li>Clear your browser cache (if using the web interface)</li>
<li>Review the error messages for specific information</li>
<li>Consult the <a href="#" class="text-blue-600 dark:text-blue-400 hover:underline">FAQ section</a> for common issues and solutions</li>
</ol>
<p class="mb-4 text-gray-700 dark:text-gray-300">If problems persist, please reach out to our support team through one of the contact methods listed below.</p>
`
}
],
contactLinks: [
{ text: 'Email', url: 'mailto:parisneoai@gmail.com' },
{ text: 'Twitter', url: 'https://twitter.com/ParisNeo_AI' },
{ text: 'Discord', url: 'https://discord.gg/BDxacQmv' },
{ text: 'Sub-Reddit', url: 'https://www.reddit.com/r/lollms' },
{ text: 'Instagram', url: 'https://www.instagram.com/spacenerduino/' }
]
};
},
methods: {
scrollToSection(sectionId) {
const element = document.getElementById(sectionId);
if (element) {
element.scrollIntoView({ behavior: 'smooth', block: 'start' });
}
}
}
name: 'HelpView',
};
</script>
<style>
@keyframes glow {
0%, 100% { text-shadow: 0 0 5px rgba(59, 130, 246, 0.5), 0 0 10px rgba(147, 51, 234, 0.5); }
50% { text-shadow: 0 0 20px rgba(59, 130, 246, 0.75), 0 0 30px rgba(147, 51, 234, 0.75); }
}
.animate-glow {
animation: glow 3s ease-in-out infinite;
}
@keyframes fade-in {
from { opacity: 0; transform: translateY(20px); }
to { opacity: 1; transform: translateY(0); }
}
.animate-fade-in {
animation: fade-in 1s ease-out;
}
@keyframes fall {
from { transform: translateY(-20px) rotate(0deg); opacity: 1; }
to { transform: translateY(100vh) rotate(360deg); opacity: 0; }
}
.animate-fall {
animation: fall linear infinite;
}
/* Smooth scrolling for the whole page */
html {
scroll-behavior: smooth;
}
/* Custom scrollbar styles */
::-webkit-scrollbar {
width: 10px;
}
::-webkit-scrollbar-track {
background: #f1f1f1;
}
::-webkit-scrollbar-thumb {
background: #888;
border-radius: 5px;
}
::-webkit-scrollbar-thumb:hover {
background: #555;
}
/* Dark mode scrollbar */
.dark ::-webkit-scrollbar-track {
background: #2d3748;
}
.dark ::-webkit-scrollbar-thumb {
background: #4a5568;
}
.dark ::-webkit-scrollbar-thumb:hover {
background: #718096;
}
/* Improved typography */
body {
font-family: 'Inter', sans-serif;
line-height: 1.6;
}
h1, h2, h3, h4, h5, h6 {
font-family: 'Poppins', sans-serif;
line-height: 1.2;
}
/* Add some hover effects to links */
a {
transition: all 0.3s ease;
}
a:hover {
transform: translateY(-2px);
}
/* Add a subtle shadow to main content sections */
main section {
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
border-radius: 8px;
transition: all 0.3s ease;
}
main section:hover {
box-shadow: 0 8px 15px rgba(0, 0, 0, 0.1);
transform: translateY(-2px);
}
/* Improve code block styling */
code {
font-family: 'Fira Code', monospace;
padding: 2px 4px;
border-radius: 4px;
font-size: 0.9em;
}
/* Add a subtle gradient background to the page */
.bg-gradient-to-br {
background-size: 400% 400%;
animation: gradientBG 15s ease infinite;
}
@keyframes gradientBG {
0% {
background-position: 0% 50%;
}
50% {
background-position: 100% 50%;
}
100% {
background-position: 0% 50%;
}
}
/* Improve accessibility with focus styles */
a:focus, button:focus {
outline: 2px solid #3b82f6;
outline-offset: 2px;
}
/* Add a pulsing effect to the header */
header h1 {
animation: pulse 2s infinite;
}
@keyframes pulse {
0% {
transform: scale(1);
}
50% {
transform: scale(1.05);
}
100% {
transform: scale(1);
}
}
/* Improve mobile responsiveness */
@media (max-width: 640px) {
header h1 {
font-size: 2rem;
}
nav {
position: static;
max-height: none;
}
}
/* Add a nice transition effect when switching between light and dark mode */
body {
transition: background-color 0.3s ease, color 0.3s ease;
}
/* Improve the appearance of lists */
ul, ol {
padding-left: 1.5rem;
}
li {
margin-bottom: 0.5rem;
}
/* Add a subtle animation to the footer */
footer {
animation: fadeInUp 1s ease-out;
}
@keyframes fadeInUp {
from {
opacity: 0;
transform: translateY(20px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
</style>

View File

@ -412,26 +412,29 @@ export default {
}
},
async mount_personality(pers) {
if (!pers) { return { 'status': false, 'error': 'no personality - mount_personality' } }
this.$store.state.messageBox.showMessage("Loading personality")
try {
const obj = {
client_id: this.$store.state.client_id,
language: pers.language?pers.language:"",
category: pers.category?pers.category:"",
folder: pers.folder?pers.folder:"",
}
const res = await axios.post('/mount_personality', obj, {headers: this.posts_headers});
if (!pers) { return { 'status': false, 'error': 'no personality - mount_personality' } }
if (res) {
try {
const obj = {
client_id: this.$store.state.client_id,
language: pers.language?pers.language:"",
category: pers.category?pers.category:"",
folder: pers.folder?pers.folder:"",
}
const res = await axios.post('/mount_personality', obj, {headers: this.posts_headers});
return res.data
if (res) {
}
} catch (error) {
console.log(error.message, 'mount_personality - settings')
return
}
return res.data
}
} catch (error) {
console.log(error.message, 'mount_personality - settings')
return
}
this.$store.state.messageBox.hideMessage()
},

View File

@ -46,7 +46,7 @@
<ChatBarButton @click="speak" :class="{ 'text-red-500': isTalking }" title="Convert text to audio (not saved, uses your browser's TTS service)">
<template #icon>
🍓
🪶
</template>
</ChatBarButton>

@ -1 +1 @@
Subproject commit b3ede8b00a216c0ea9fbd885c44085a12999bbc7
Subproject commit 75f71bf7de75222e972dab1119edc0206b42ceb2

@ -1 +1 @@
Subproject commit c16a875dbddc1e8a3d5ee98a7244bc2759ef6c98
Subproject commit 642a5f424fee5473a00cc7a8f31076b7148bee29