# Continue


This document presents an example of integrating LocalAI with continuedev/continue.

[Screenshot]

For a live demonstration, please click on the link below:

## Integration Setup Walkthrough

  1. As outlined in Continue's documentation, install the Visual Studio Code extension from the marketplace and open it.

  2. In this example, LocalAI will download the gpt4all model and set it up as "gpt-3.5-turbo". Refer to the docker-compose.yml file for details.

    # Clone LocalAI
    git clone https://github.com/go-skynet/LocalAI
    
    cd LocalAI/examples/continue
    
    # Start with docker-compose
    docker-compose up --build -d
    
  3. Type /config within Continue's VSCode extension, or edit the file located at ~/.continue/config.py on your system with the following configuration:

    from continuedev.src.continuedev.core.config import ContinueConfig
    from continuedev.src.continuedev.core.models import Models
    from continuedev.src.continuedev.libs.llm.openai import OpenAI

    config = ContinueConfig(
        ...
        models=Models(
            default=OpenAI(
                api_key="my-api-key",
                model="gpt-3.5-turbo",
                api_base="http://localhost:8080",
            )
        ),
    )
    

This setup enables you to send queries directly to the model running in the Docker container. Note that the api_key does not need to be a valid key; it is included only as a placeholder.
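If you want to sanity-check the endpoint outside of the editor, LocalAI's OpenAI-compatible API can be exercised with curl. This is a minimal sketch assuming the default port mapping of 8080 used throughout this example:

    # List the registered models; "gpt-3.5-turbo" should appear once the
    # gpt4all download from step 2 has completed.
    curl http://localhost:8080/v1/models

    # Send a minimal chat completion request to the same endpoint Continue uses.
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'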

If editing the configuration by hand seems confusing, you can instead copy the config.py file provided in this example over the existing one at ~/.continue/config.py after the extension has been initialized in VS Code, as shown below.
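From the command line, that amounts to a plain file copy. This sketch assumes you are still in the LocalAI/examples/continue directory from step 2 and that ~/.continue/config.py has already been generated by the extension:

    # Back up the generated config, then replace it with the example one.
    cp ~/.continue/config.py ~/.continue/config.py.bak
    cp config.py ~/.continue/config.py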

## Additional Resources