lollms-webui/docs/petals.md
2023-09-21 03:28:58 +02:00


Thank you very much. I actually only managed to make it run on Linux. On Windows, there is one dependency that makes this very difficult: uvloop. This dependency explicitly refuses to install on Windows. There is active work to make it Windows-friendly, but the pull requests have not yet been accepted and they don't seem to be fully working.

This means that my best shot at doing this is to use WSL.

It works like a charm with WSL, with CUDA and everything:

[Screenshots: the node running under WSL with CUDA.] The node is visible from the https://health.petals.dev/ site. So everything is running fine.

To sum up, I've built a simple .bat file that installs an Ubuntu WSL system, installs Python and pip, then installs petals and runs the server.
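Under the hood, those installer steps boil down to a handful of commands run through WSL. Here is a hypothetical sketch of them driven from Python; the distro name "Ubuntu" and the apt package list are assumptions, while `petals.cli.run_server` is petals' documented server entry point:

```python
import subprocess

# The setup steps the .bat installer automates, expressed as commands
# executed inside the WSL distro (distro name "Ubuntu" is an assumption):
STEPS = [
    ["wsl", "--install", "-d", "Ubuntu"],
    ["wsl", "-d", "Ubuntu", "--", "sudo", "apt-get", "update"],
    ["wsl", "-d", "Ubuntu", "--", "sudo", "apt-get", "install", "-y",
     "python3", "python3-pip"],
    ["wsl", "-d", "Ubuntu", "--", "python3", "-m", "pip", "install", "petals"],
    ["wsl", "-d", "Ubuntu", "--", "python3", "-m",
     "petals.cli.run_server", "petals-team/StableBeluga2"],
]

def run_steps(steps, dry_run=True):
    """Execute each setup step in order; with dry_run=True, just print them."""
    for cmd in steps:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
```

With `dry_run=True` you can preview the exact command lines before letting the script touch the system.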

But that won't be acceptable if I understand the rules of this challenge correctly. So I am integrating the installation directly into the lollms binding installation procedure. On Linux, the binding installs petals and runs the node from Python with the right models. On Windows, I'll test for the platform and use WSL instead.
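A minimal sketch of that platform test, assuming the node is launched as a subprocess (the function names here are mine, not lollms' actual API):

```python
import platform
import subprocess

def build_node_command(model="petals-team/StableBeluga2"):
    """Build the command line that starts a Petals node for the given model."""
    cmd = ["python3", "-m", "petals.cli.run_server", model]
    if platform.system() == "Windows":
        # On Windows, run the same command inside the WSL Ubuntu distro
        # (distro name is an assumption).
        cmd = ["wsl", "-d", "Ubuntu", "--"] + cmd
    return cmd

def start_node(model="petals-team/StableBeluga2"):
    """Launch the node in the background and return the process handle."""
    return subprocess.Popen(build_node_command(model))
```

On Linux the command runs directly; on Windows the same Python code transparently delegates to WSL.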


Now with this, when you run lollms it starts the node, but I still need to code a bridge so that the node is usable for text generation. I may go with a client that uses socketio to communicate with lollms.
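Such a bridge could look roughly like this. This is only a sketch assuming the python-socketio package; the event names, payload fields, and port are assumptions, not lollms' actual protocol:

```python
def build_payload(prompt, n_predict=128):
    """Build a generation request (field names are assumptions)."""
    return {"prompt": prompt, "n_predict": n_predict}

def make_bridge(url="http://localhost:9600"):
    """Connect a socketio client to lollms and print generated text."""
    import socketio  # requires the python-socketio package

    sio = socketio.Client()

    @sio.on("text_generated")  # assumed event name
    def on_text(data):
        print(data.get("text", ""))

    sio.connect(url)
    return sio

def request_generation(sio, prompt):
    """Send a generation request over the bridge and return the payload."""
    payload = build_payload(prompt)
    sio.emit("generate_text", payload)  # assumed event name
    return payload
```

The client stays connected and receives generated text asynchronously through the event handler, which is what makes socketio a good fit for streaming generation.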

The other solution is to install lollms itself inside WSL, which would remove the need for a bridge entirely. I think I'll go with that solution, as it would save me some time.

I'll make a version of lollms that runs on WSL and uses petals by default.

DONE!

Now lollms can be installed with WSL support, and it works. Next, petals gets installed.

It automatically installs CUDA and the other dependencies.

Now it is using petals.

To finish, I created an .exe installer.

Once installed, you will have three new icons:

- **lollms with petals**: launches lollms with petals support
- **petals server**: runs a petals-team/StableBeluga2 server
- **ubuntu**: a terminal to interact with WSL

You can learn more about lollms in my YouTube videos (https://www.youtube.com/results?search_query=lollms) or directly from the GitHub repo: https://github.com/ParisNeo/lollms-webui

It is a multi-binding UI for text generation that provides personalities to chat with and a playground for experimenting with text generation tasks, along with multiple presets for many applications. It also supports image and video generation as well as music generation. All in one :)