Example: Continue (dev) (#940)
This commit is contained in: parent 6583eed6b2, commit 0d6165e481
@@ -157,6 +157,16 @@ Allows to run any LocalAI-compatible model as a backend on the servers of https://runpod.io

[Check it out here](https://runpod.io/gsc?template=uv9mtqnrd0&ref=984wlcra)

### Continue

<img src="continue/img/screen.png" width="600" height="200" alt="Screenshot">

_by [@gruberdev](https://github.com/gruberdev)_

Demonstrates how to integrate an open-source copilot alternative that enhances code analysis, completion, and improvements. This approach seamlessly integrates with any LocalAI model, offering a more user-friendly experience.

[Check it out here](https://github.com/go-skynet/LocalAI/tree/master/examples/continue/)

## Want to contribute?

Create an issue, and put `Example: <description>` in the title! We will post your examples here.

examples/continue/README.md (new file, 56 lines)
@@ -0,0 +1,56 @@

# Continue

![logo](https://continue.dev/docs/assets/images/continue-cover-logo-aa135cc83fe8a14af480d1633ed74eb5.png)

This document presents an example of integration with [continuedev/continue](https://github.com/continuedev/continue).

![Screenshot](https://continue.dev/docs/assets/images/continue-screenshot-1f36b99467817f755739d7f4c4c08fe3.png)

For a live demonstration, please click on the link below:

- [How it works (Video demonstration)](https://www.youtube.com/watch?v=3Ocrc-WX4iQ)

## Integration Setup Walkthrough

1. [As outlined in `continue`'s documentation](https://continue.dev/docs/getting-started), install the [Visual Studio Code extension from the marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue) and open it.
2. In this example, LocalAI will download the `gpt4all-j` model and expose it as `gpt-3.5-turbo`. Refer to the `docker-compose.yml` file for details.

```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI/examples/continue

# Start with docker-compose
docker-compose up --build -d
```
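
Before continuing, you may want to wait until LocalAI has finished downloading and loading the model. A minimal readiness check (a sketch that relies on the `/readyz` endpoint the example's healthcheck already uses, on the default port 8080):

```bash
# Exits successfully once the API is ready to serve requests
curl -f http://localhost:8080/readyz
```
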
3. Type `/config` within Continue's VSCode extension, or edit the file located at `~/.continue/config.py` on your system with the following configuration:

```py
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(
            api_key="my-api-key",
            model="gpt-3.5-turbo",
            openai_server_info=OpenAIServerInfo(
                api_base="http://localhost:8080",
                model="gpt-3.5-turbo"
            )
        )
    ),
)
```

This setup enables you to make queries directly to your model running in the Docker container. Note that the `api_key` does not need to be properly set up; it is included here as a placeholder.

If editing the configuration seems confusing, you may copy and paste the provided default `config.py` file over the existing one in `~/.continue/config.py` after initializing the extension in the VSCode IDE.
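
To verify the wiring end to end, you can also query LocalAI directly, outside of the extension. A minimal sketch against the OpenAI-compatible chat completions endpoint (the model name matches the `gpt-3.5-turbo` alias preloaded by the example's `docker-compose.yml`):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "How are you?"}]
  }'
```
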

## Additional Resources

- [Official Continue documentation](https://continue.dev/docs/intro)
- [Documentation page on using self-hosted models](https://continue.dev/docs/customization#self-hosting-an-open-source-model)
- [Official extension link](https://marketplace.visualstudio.com/items?itemName=Continue.continue)

examples/continue/config.py (new file, 148 lines)
@@ -0,0 +1,148 @@

```py
"""
This is the Continue configuration file.

See https://continue.dev/docs/customization to learn more.
"""

import subprocess

from continuedev.src.continuedev.core.main import Step
from continuedev.src.continuedev.core.sdk import ContinueSDK
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider
from continuedev.src.continuedev.plugins.context_providers.google import GoogleContextProvider
from continuedev.src.continuedev.plugins.policies.default import DefaultPolicy
from continuedev.src.continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo
from continuedev.src.continuedev.libs.llm.ggml import GGML

from continuedev.src.continuedev.plugins.steps.open_config import OpenConfigStep
from continuedev.src.continuedev.plugins.steps.clear_history import ClearHistoryStep
from continuedev.src.continuedev.plugins.steps.feedback import FeedbackStep
from continuedev.src.continuedev.plugins.steps.comment_code import CommentCodeStep
from continuedev.src.continuedev.plugins.steps.share_session import ShareSessionStep
from continuedev.src.continuedev.plugins.steps.main import EditHighlightedCodeStep
from continuedev.src.continuedev.plugins.context_providers.search import SearchContextProvider
from continuedev.src.continuedev.plugins.context_providers.diff import DiffContextProvider
from continuedev.src.continuedev.plugins.context_providers.url import URLContextProvider


class CommitMessageStep(Step):
    """
    This is a Step, the building block of Continue.
    It can be used below as a slash command, so that
    run will be called when you type '/commit'.
    """

    async def run(self, sdk: ContinueSDK):
        # Get the root directory of the workspace
        dir = sdk.ide.workspace_directory

        # Run git diff in that directory
        diff = subprocess.check_output(
            ["git", "diff"], cwd=dir).decode("utf-8")

        # Ask the LLM to write a commit message,
        # and set it as the description of this step
        self.description = await sdk.models.default.complete(
            f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:")


config = ContinueConfig(

    # If set to False, we will not collect any usage data
    # See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry
    allow_anonymous_telemetry=True,

    models=Models(
        default=OpenAI(
            api_key="my-api-key",
            model="gpt-3.5-turbo",
            openai_server_info=OpenAIServerInfo(
                api_base="http://localhost:8080",
                model="gpt-3.5-turbo"
            )
        )
    ),

    # Set a system message with information that the LLM should always keep in mind
    # E.g. "Please give concise answers. Always respond in Spanish."
    system_message=None,

    # Set temperature to any value between 0 and 1. Higher values will make the LLM
    # more creative, while lower values will make it more predictable.
    temperature=0.5,

    # Custom commands let you map a prompt to a shortened slash command
    # They are like slash commands, but more easily defined - write just a prompt instead of a Step class
    # Their output will always be in chat form
    custom_commands=[
        # CustomCommand(
        #     name="test",
        #     description="Write unit tests for the highlighted code",
        #     prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
        # )
    ],

    # Slash commands let you run a Step from a slash command
    slash_commands=[
        # SlashCommand(
        #     name="commit",
        #     description="This is an example slash command. Use /config to edit it and create more",
        #     step=CommitMessageStep,
        # )
        SlashCommand(
            name="edit",
            description="Edit code in the current file or the highlighted code",
            step=EditHighlightedCodeStep,
        ),
        SlashCommand(
            name="config",
            description="Customize Continue - slash commands, LLMs, system message, etc.",
            step=OpenConfigStep,
        ),
        SlashCommand(
            name="comment",
            description="Write comments for the current file or highlighted code",
            step=CommentCodeStep,
        ),
        SlashCommand(
            name="feedback",
            description="Send feedback to improve Continue",
            step=FeedbackStep,
        ),
        SlashCommand(
            name="clear",
            description="Clear step history",
            step=ClearHistoryStep,
        ),
        SlashCommand(
            name="share",
            description="Download and share the session transcript",
            step=ShareSessionStep,
        )
    ],

    # Context providers let you quickly select context by typing '@'
    # Uncomment the following to
    # - quickly reference GitHub issues
    # - show Google search results to the LLM
    context_providers=[
        # GitHubIssuesContextProvider(
        #     repo_name="<your github username or organization>/<your repo name>",
        #     auth_token="<your github auth token>"
        # ),
        # GoogleContextProvider(
        #     serper_api_key="<your serper.dev api key>"
        # )
        SearchContextProvider(),
        DiffContextProvider(),
        URLContextProvider(
            preset_urls=[
                # Add any common urls you reference here so they appear in autocomplete
            ]
        )
    ],

    # Policies hold the main logic that decides which Step to take next
    # You can use them to design agents, or deeply customize Continue
    policy=DefaultPolicy()
)
```

examples/continue/docker-compose.yml (new file, 27 lines)
@@ -0,0 +1,27 @@

```yaml
version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    # On first start, LocalAI downloads the models defined in PRELOAD_MODELS,
    # so you might need to tweak the healthcheck values here according to your network connection.
    # Here we give a timespan of 20m to download all the required files.
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 1m
      timeout: 20m
      retries: 20
    build:
      context: ../../
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
      - MODELS_PATH=/models
      # You can preload different models here as well.
      # See: https://github.com/go-skynet/model-gallery
      - 'PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]'
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai"]
```
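
Because the first start downloads the preloaded model, it can take a while before the healthcheck passes. A typical way to follow the download progress (assuming the `api` service name defined above):

```bash
docker-compose logs -f api
```
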

examples/continue/img/screen.png (new executable binary file, 196 KiB; binary content not shown)