Hi there, welcome back to our channel, where we dive into the fascinating world of AI and robotics. Today, we have a very short but exciting video for you. I'm going to show you how to install lollms with the ollama binding, a very simple and easy way to get into lollms. So, let's not waste any time and get right into it.
First things first, head over to the lollms-webui GitHub page. Here, you'll find different installation files based on your operating system. For Windows users, look for the 'win_install_ollama.bat' file. For those on Linux or macOS, you'll find 'linux_install_ollama.sh' and 'macos_install_ollama.sh' respectively. Choose the right one for you and download it to a folder. Remember, the folder path should not contain any spaces to avoid any installation issues.
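If you'd rather do this step from a terminal on Linux, here is a minimal sketch. The exact download URL is an assumption (it assumes the script sits at the root of the ParisNeo/lollms-webui repository on the main branch), so double-check the path on the GitHub page:

```bash
# Create a folder whose path contains no spaces
mkdir -p ~/lollms_install && cd ~/lollms_install

# Download the Linux installer (adjust the URL if the file has moved or the branch differs)
curl -LO https://raw.githubusercontent.com/ParisNeo/lollms-webui/main/linux_install_ollama.sh
chmod +x linux_install_ollama.sh
```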
After downloading, it's time to run the file. Simply double-click on it (on Linux and macOS you can also run it from a terminal) and let the magic happen. The installation process will kick off. You'll be prompted to select your personal folder path; make sure to do so and then accept the installation of ollama. This might take a little while as it downloads and then installs everything you need.
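For Linux and macOS users launching it from a terminal, that same step looks roughly like this (just a sketch; the prompts are the ones described above):

```bash
# Run the installer and answer the prompts:
# pick your personal folder path, then confirm the ollama installation
bash linux_install_ollama.sh
```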
Once the installation is complete, you'll find a 'win_run.bat' file in your folder. For Linux and macOS users, look for 'linux_run.sh' and 'macos_run.sh' respectively. Run this file to start lollms. You're almost there!
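From a terminal, starting it is a one-liner. Once it's up, lollms serves its web interface in your browser, typically at http://localhost:9600 by default (your port may differ if you changed the configuration):

```bash
# Start lollms with the ollama binding
bash linux_run.sh
```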
Now, in the settings, you'll need to install a model. Browse through the available options, select one that fits your needs, and apply the settings. This is where the real fun begins.
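As an optional sanity check, since the installer set up ollama for you, you can also inspect and pull models straight from the ollama CLI in a terminal; models pulled this way should be visible to the ollama binding, though the settings page shown in the video is the recommended route:

```bash
# List the models ollama already has locally
ollama list

# Pull a model manually, e.g. mistral from the ollama library
ollama pull mistral
```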
And there you have it! You're now ready to converse with your AI. On my RTX 3060, I get around 50 tokens per second, which is pretty decent. And remember, some models are multimodal, allowing you to send an image and ask questions about it. Just make sure to install the right model, like llava or bakllava.
Before we wrap up, a quick tip: new models are constantly being added to this tool, and you can also use custom models. I highly recommend checking out their documentation for more details and to stay updated on the latest features.
Before we close, let's quickly demonstrate how to send an image to the AI. First, ensure you have a multimodal model like llava or bakllava installed and activated. You can do this by going to the settings, selecting the model, and installing it if you haven't already. Once installed, you're ready to interact with images.
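If you prefer the terminal, the same multimodal models are available in the ollama library and can be pulled directly; this is just an alternative to the settings page, assuming the ollama binding picks up locally pulled models:

```bash
# Pull a vision-capable model for image questions
ollama pull llava
# or
ollama pull bakllava
```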
Let's head back to the discussion view. Now, you can easily send an image file to the AI by clicking on the add file icon. Go ahead, give it a try!
Once the image is attached, you can ask the AI a question related to that image. For example, you can ask it to describe what's in the image or any other questions that involve visual context. The AI will then analyze the image and generate a response based on its analysis—pretty amazing, right?
And that's it for this quick demo! Enjoy exploring the vast universe of AI and robotics with lollms and the ollama binding. Be sure to share your experiences, creations, and discoveries with the community, and I'll see you again in the next video where we'll dive deeper into the ever-evolving world of technology. Remember to like and subscribe for more exciting content, and always keep innovating!
See ya