Mirror of https://github.com/ParisNeo/lollms-webui.git (synced 2024-12-20 21:03:07 +00:00)

Merge pull request #29 from NJannasch/feature/pipeline

Updates for docker-compose usage and some minimal CI pipelines

This commit is contained in: commit 23c079005b
.github/ISSUE_TEMPLATE.md (vendored, new file, 21 lines)
@@ -0,0 +1,21 @@
## Expected Behavior
Please describe the behavior you are expecting.

## Current Behavior
Please describe the behavior you are currently experiencing.

## Steps to Reproduce
Please provide detailed steps to reproduce the issue.

1. Step 1
2. Step 2
3. Step 3

## Possible Solution
If you have any suggestions on how to fix the issue, please describe them here.

## Context
Please provide any additional context about the issue.

## Screenshots
If applicable, add screenshots to help explain the issue.
.github/PULL_REQUEST_TEMPLATE.md (vendored, new file, 25 lines)
@@ -0,0 +1,25 @@
## Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

## Type of change
Please delete options that are not relevant.

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Checklist:
Please put an `x` in the boxes that apply. You can also fill these out after creating the PR.

- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have tested this code locally, and it is working as intended
- [ ] I have updated the documentation accordingly

## Screenshots
If applicable, add screenshots to help explain your changes.
.github/dependabot.yml (vendored, new file, 7 lines)
@@ -0,0 +1,7 @@
---
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
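Not part of the diff: the configuration above only tracks the GitHub Actions used in the workflows. A sketch of how a second `updates` entry could additionally watch the Python packages in requirements.txt, reusing the same weekly schedule (an assumed extension, not included in this PR):

```yaml
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  # Hypothetical extension: also watch requirements.txt in the repo root
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"
```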
.github/workflows/docker.yaml (vendored, new file, 31 lines)
@@ -0,0 +1,31 @@
name: Docker Build and Lint

on:
  push:
    branches:
      - main
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Build Docker Image
        uses: docker/build-push-action@v2
        with:
          context: .
          push: false
          tags: gpt4all-ui:latest

  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Hadolint
        run: |
          docker run --rm -i -v $PWD/.hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint < Dockerfile
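Not part of the diff: both jobs can be reproduced locally before pushing. A minimal sketch, assuming Docker is installed and the commands are run from the repository root:

```bash
# Build the image the same way the "build" job does (no push)
docker build -t gpt4all-ui:latest .

# Lint the Dockerfile the same way the "lint" job does,
# mounting the repo's .hadolint.yaml so its ignore list applies
docker run --rm -i -v "$PWD/.hadolint.yaml:/.config/hadolint.yaml" hadolint/hadolint < Dockerfile
```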
.gitignore (vendored, 1 line changed)
@@ -136,6 +136,7 @@ dmypy.json

# models
models/
!models/.keep
!models/README.md

# Temporary files
.hadolint.yaml (new file, 2 lines)
@@ -0,0 +1,2 @@
ignored:
  - SC1091
Dockerfile

@@ -3,12 +3,12 @@ FROM python:3.10
 WORKDIR /srv
 COPY ./requirements.txt .

-RUN python3.10 -m venv env
-RUN . env/bin/activate
-RUN python3.10 -m pip install -r requirements.txt --upgrade pip
+RUN python3 -m venv venv && . venv/bin/activate
+RUN python3 -m pip install --no-cache-dir -r requirements.txt --upgrade pip

 COPY ./app.py /srv/app.py
 COPY ./static /srv/static
 COPY ./templates /srv/templates

-CMD ["python", "app.py", "--host", "0.0.0.0", "--port", "4685", "--db_path", "data/database.db"]
+# COPY ./models /srv/models # Mounting model is more efficient
+CMD ["python", "app.py", "--host", "0.0.0.0", "--port", "9600", "--db_path", "data/database.db"]
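Not part of the diff: with the models no longer copied into the image (see the commented-out COPY line above), the container is meant to receive them at runtime. A minimal sketch of building and running the image directly with docker, using the volume layout and port from docker-compose.yml; the image tag is an arbitrary choice:

```bash
# Build the image from the repository root
docker build -t gpt4all-ui .

# Run it with the model and data directories mounted, exposing the new default port 9600
docker run --rm -p 9600:9600 \
  -v "$PWD/models:/srv/models" \
  -v "$PWD/data:/srv/data" \
  gpt4all-ui
```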
README.md (29 lines changed)
@@ -33,21 +33,24 @@ To install the app, follow these steps:
 git clone https://github.com/nomic-ai/gpt4all-ui
 ```

+### Manual setup
+Hint: Scroll down for docker-compose setup
+
 1. Navigate to the project directory:

-```
+```bash
 cd gpt4all-ui
 ```

-1. Run the appropriate installation script for your platform:
+2. Run the appropriate installation script for your platform:

 On Windows :
-```
+```cmd
 install.bat
 ```
 - On linux/ Mac os

-```
+```bash
 bash ./install.sh
 ```

@@ -55,6 +58,7 @@ On Linux/MacOS, if you have issues, refer more details are presented [here](docs
These scripts will create a Python virtual environment and install the required dependencies. It will also download the models and install them.

Now you're ready to work!

## Usage
For simple newbies on Windows:
```cmd
@@ -66,7 +70,6 @@ For simple newbies on Linux/MacOsX:
bash run.sh
```


if you want more control on your launch, you can activate your environment:

On Windows:
@@ -107,6 +110,22 @@ Once the server is running, open your web browser and navigate to http://localho

 Make sure to adjust the default values and descriptions of the options to match your specific application.

+### Docker Compose Setup
+Make sure to have the `gpt4all-lora-quantized-ggml.bin` inside the `models` directory.
+After that you can simply use docker-compose or podman-compose to build and start the application:
+
+Build
+```bash
+docker-compose -f docker-compose.yml build
+```
+
+Start
+```bash
+docker-compose -f docker-compose.yml up
+```
+
+After that you can open the application in your browser on http://localhost:9600
+
 ## Contribute

 This is an open-source project by the community for the community. Our chatbot is a UI wrapper for Nomic AI's model, which enables natural language processing and machine learning capabilities.
docker-compose.yml

@@ -8,5 +8,6 @@ services:
     volumes:
       - ./data:/srv/data
       - ./data/.nomic:/root/.nomic/
+      - ./models:/srv/models
     ports:
-      - "4685:4685"
+      - "9600:9600"
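Not part of the diff: a quick way to check that the service answers on the new host port after these changes, assuming the compose file lives in the repository root and the model is already in `./models`:

```bash
# Start in the background, then confirm the web UI responds on the new port
docker-compose -f docker-compose.yml up -d
curl -I http://localhost:9600
```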
models/.keep (new file, empty)
requirements.txt

@@ -1,4 +1,4 @@
 flask
 nomic
 pytest
-pyllamacpp
+pyllamacpp
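Not part of the diff: outside of Docker, the same four dependencies can be installed into a local virtual environment, mirroring the RUN lines of the updated Dockerfile (a minimal sketch):

```bash
# Create and activate a virtual environment, then install the requirements
python3 -m venv venv
. venv/bin/activate
python3 -m pip install --no-cache-dir -r requirements.txt
```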