ianarawjo b33397930b
TypeScript backend, HuggingFace models, JavaScript evaluators, Comment Nodes, and more ()
* Beginning to convert Python backend to TypeScript

* Change all fetch() calls to a fetch_from_backend switcher (sketched below, after this list)

* wip converting query.py to query.ts

* wip started utils.js conversion. Tested that OpenAI API call works

* more progress on converting utils.py to Typescript

* Jest tests for query, utils, template.ts. Confirmed PromptPipeline works.

* wip converting queryLLM in flask_app to TS

* Tested queryLLM and StorageCache compressed saving/loading

* wip execute() in backend.ts

* Added execute() and tested with a concrete function. Need to test eval()

* Added craco for optional webpack config. Configured for TypeScript, with Node.js packages browserified

* Execute JS code in an iframe sandbox (sketched below, after this list)

* Tested and working JS Evaluator execution.

* wip swapping backends

* Tested TypeScript backend! :) woot

* Added fetchEnvironAPIKeys to Flask server to fetch os.environ keys when running locally (sketched below, after this list)

* Route Anthropic calls through Flask when running locally

* Added info button to Eval nodes. Rebuilt React

* Edits to info modal on Eval node

* Remove/error out on Python eval nodes when not running locally.

* Check browser compatibility and display an error if not supported (sketched below, after this list)

* Changed all example flows to use JS. Bug fix in query.ts

* Refactored to LLMProvider to streamline model additions (sketched below, after this list)

* Added HuggingFace models API

* Added back Dalai call support, routing through Flask

* Remove Flask app calls and the socketio server that are no longer used

* Added Comment Nodes. Rebuilt React.

* Fix PaLM temp=0 bug, update package versions, and rebuild React
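
The fetch_from_backend switcher, as a minimal sketch: one function that every call site uses, routing either to the in-browser TypeScript backend or to the local Flask server. The route names, the USE_FLASK_BACKEND flag, and the tsBackend stub are illustrative assumptions, not ChainForge's exact API:

// Stub standing in for the converted TypeScript backend (backend.ts); illustrative only.
const tsBackend = {
  queryLLM: async (params: object): Promise<object> => ({ responses: [] }),
  executeJS: async (params: object): Promise<object> => ({ logs: [] }),
};

const USE_FLASK_BACKEND = false; // e.g., true when served locally by the Flask app

// Route a named backend call to the right place, so call sites
// never hard-code fetch() against a particular server.
async function fetch_from_backend(route: string, params: object): Promise<object> {
  if (USE_FLASK_BACKEND) {
    const res = await fetch(`http://localhost:8000/app/${route}`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(params),
    });
    return res.json();
  }
  switch (route) {
    case 'queryllm': return tsBackend.queryLLM(params);
    case 'execute':  return tsBackend.executeJS(params);
    default: throw new Error(`Unknown backend route: ${route}`);
  }
}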
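
A hedged sketch of running a JS evaluator in a sandboxed iframe: the user's code executes behind sandbox="allow-scripts" (an opaque origin, so it cannot touch the host page), and the result comes back over postMessage. The message shape, and the convention that user code defines evaluate(response), are assumptions:

function executeInSandbox(userCode: string, response: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const iframe = document.createElement('iframe');
    iframe.setAttribute('sandbox', 'allow-scripts'); // opaque origin: no access to the host page
    iframe.style.display = 'none';

    // The iframe builds the user's evaluate() from the posted source and replies with its result.
    iframe.srcdoc = `<script>
      window.addEventListener('message', (event) => {
        try {
          const fn = new Function('response', event.data.code + '\\nreturn evaluate(response);');
          event.source.postMessage({ result: fn(event.data.response) }, '*');
        } catch (err) {
          event.source.postMessage({ error: String(err) }, '*');
        }
      });
    <\/script>`;

    const onMessage = (event: MessageEvent) => {
      if (event.source !== iframe.contentWindow) return; // ignore unrelated messages
      window.removeEventListener('message', onMessage);
      iframe.remove();
      if (event.data.error) reject(new Error(event.data.error));
      else resolve(event.data.result);
    };
    window.addEventListener('message', onMessage);

    iframe.onload = () =>
      iframe.contentWindow?.postMessage({ code: userCode, response }, '*');
    document.body.appendChild(iframe);
  });
}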
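
The client side of the fetchEnvironAPIKeys round-trip might look like the following; the URL and response shape are assumptions:

async function fetchEnvironAPIKeys(): Promise<Record<string, string>> {
  // Only meaningful when the app is served by the local Flask server,
  // which can read os.environ; a hosted web version has no such route.
  const res = await fetch('http://localhost:8000/app/fetchEnvironAPIKeys', { method: 'POST' });
  if (!res.ok) throw new Error(`Flask server returned ${res.status}`);
  return res.json(); // e.g., { OPENAI_API_KEY: '...', ANTHROPIC_API_KEY: '...' }
}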
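
One plausible browser-compatibility gate; which features are actually checked is an assumption:

function isBrowserSupported(): boolean {
  // APIs the in-browser backend relies on; absent in very old browsers.
  return typeof window.fetch === 'function'
      && typeof window.localStorage === 'object'
      && typeof window.Worker === 'function';
}

if (!isBrowserSupported()) {
  // Surface a visible error instead of failing silently mid-flow.
  document.body.textContent =
    'Your browser is not supported. Please use a recent version of Chrome, Firefox, or Edge.';
}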
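
The LLMProvider refactor, sketched as one enum plus one model-to-provider lookup, so adding a provider (HuggingFace, Dalai) extends a single table instead of many call sites; the names and prefixes are illustrative:

enum LLMProvider {
  OpenAI = 'openai',
  Anthropic = 'anthropic',
  Google = 'google',       // PaLM
  HuggingFace = 'hf',
  Dalai = 'dalai',
}

// Map a model name to its provider; new providers extend this one function.
function getProvider(model: string): LLMProvider {
  if (model.startsWith('gpt-')) return LLMProvider.OpenAI;
  if (model.startsWith('claude')) return LLMProvider.Anthropic;
  if (model.includes('bison') || model.includes('palm')) return LLMProvider.Google;
  if (model.startsWith('dalai.')) return LLMProvider.Dalai;
  return LLMProvider.HuggingFace; // arbitrary HF hub ids fall through
}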
2023-06-30 15:11:20 -04:00

37 lines
1.5 KiB
Python

import argparse
from chainforge.flask_app import run_server

# Main Chainforge start
def main():
    parser = argparse.ArgumentParser(description='Chainforge command line tool')

    # Serve command
    subparsers = parser.add_subparsers(dest='serve')
    serve_parser = subparsers.add_parser('serve', help='Start Chainforge server')

    # TODO: Add this back
    # Turn on to disable all outbound LLM API calls and replace them with dummy calls
    # that return random strings of ASCII characters. Useful for testing interface without wasting $$.
    # serve_parser.add_argument('--dummy-responses',
    #                           help="""Disables queries to LLMs, replacing them with spoofed responses composed of random ASCII characters.
    #                                   Produces each dummy response at random intervals between 0.1 and 3 seconds.""",
    #                           dest='dummy_responses',
    #                           action='store_true')

    # TODO: Reimplement this where the React server is given the backend's port before loading.
    # serve_parser.add_argument('--port', help='The port to run the server on. Defaults to 8000.', type=int, default=8000, nargs='?')

    args = parser.parse_args()

    # Currently only support the 'serve' command...
    if not args.serve:
        parser.print_help()
        exit(0)

    port = 8000  # args.port if args.port else 8000

    print(f"Serving Flask server on port {port}...")
    run_server(host="localhost", port=port, cmd_args=args)

if __name__ == "__main__":
    main()
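
For reference: assuming the package's console-script entry point maps the "chainforge" command to main() above, running "chainforge serve" hits the 'serve' subcommand and starts the Flask server on localhost:8000.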