149 Commits

Author SHA1 Message Date
ianarawjo
f6e1bfa38a
Add Together.ai and update Bedrock (#283)
* feat(bedrock_llama3): added support for Llama3 (#270)

- added also Claude 3 Opus to the list of models
- replaced hardcoded model Id strings with refs to NativeLLM enum

* chore: bump @mirai73/bedrock-fm library (#277)

- the new version adds source code to facilitate debugging

Co-authored-by: ianarawjo <fatso784@gmail.com>

* Adding together.ai support (#280)


---------

Co-authored-by: ianarawjo <fatso784@gmail.com>

* Add Together.ai and update Bedrock models

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
Co-authored-by: Can Bal <canbal@users.noreply.github.com>
2024-05-17 20:17:18 -10:00
Ian Arawjo
e3259ecc1b Add new OpenAI models 2024-05-14 07:21:48 -10:00
Ian Arawjo
735268e331 Fix Claude carrying system message issue and bug with OpenAI_BaseURL loading 2024-04-29 21:17:49 -04:00
Ian Arawjo
af7f53f76e Fix bug loading OPENAI_BASE_URL from environ var 2024-04-28 14:29:58 -04:00
Ian Arawjo
4fa4b7bcc0 Escape braces in LLM scorer. Add OpenAI_BaseURL setting. 2024-04-26 07:24:31 -04:00
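The brace escaping referenced in this commit (and un-escaped again in the later "Clean escaped braces before eval" commit) can be sketched roughly as follows. This is an illustrative TypeScript sketch only; `escapeBraces` and `cleanEscapedBraces` are assumed names for exposition, not ChainForge's actual helpers:

```typescript
// Hypothetical sketch of brace escaping for template-safe LLM scorer prompts.
function escapeBraces(s: string): string {
  // Double every brace so a {var}-style template engine treats it as a literal.
  return s.replace(/[{}]/g, (m) => m + m);
}

function cleanEscapedBraces(s: string): string {
  // Undo the escaping before handing the text to an evaluator.
  return s.replace(/{{/g, "{").replace(/}}/g, "}");
}
```

Doubling braces is the common convention for marking a literal brace in `{var}`-style template syntax, so un-escaping before evaluation restores the original text unchanged.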
ianarawjo
6fa3092cd0
Add Multi-Eval node (#265)
* Port over and type MultiEvalNode code from the `multi-eval` branch

* Merge css changes from `multi-eval`

* Merge changes to inspector table view from `multi-eval`

* Criteria progress rings

* Debounce renders on text edits

* Add sandbox toggle to Python evals inside MultiEval

* Add uids to evals in MultiEval, for correct cache ids not dependent on name

* <Stack> scores

* Add debounce to editing code or prompts in eval UI

* Update package version
2024-04-25 13:51:25 -04:00
Ian Arawjo
2998c99f08 Bug fix for loading example flows in web version 2024-04-19 19:50:32 -04:00
Ian Arawjo
7126f4f4d4 Fix typing error and update package vers 2024-04-17 19:19:54 -04:00
Massimiliano Angelino
ffd947e636
Update to Bedrock integration (#258)
* fix(aws credentials): correct check for credentials

* chore(bedrock): bump @mirai73/bedrock-fm library

* feat(bedrock): updating library and adding new mistral large model

- fix stop_sequences
2024-04-15 12:55:17 -04:00
Ian Arawjo
6b65d96369 Fix bug w/ non-updating custom providers in model list 2024-04-02 12:16:35 -04:00
yipengfei
5d4d196260
Fixed a bug that prevented custom models from appearing in model list. (#255) 2024-04-02 12:03:37 -04:00
ianarawjo
583ea6506f
Convert entire front-end to TypeScript. Add image model support. (And more!) (#252)
* Refactor: modularize response boxes into a separate component

* Type store.js. Change info to vars. NOTE: This may break backwards compat.

* Refactor addNodes in App.tsx to be simpler.

* Turn AlertModal into a Provider with useContext

* Remove fetch_from_backend.

* Add build/ to gitignore

* Add support for image models and add Dall-E models.

* Better rate limiting with Bottleneck

* Fix new Chrome bug with file import readers not appearing as arrays; and fix bug with exportCache

* Add ability to add custom right-click context menu items per node

* Convert to/from TF and Items nodes

* Add lazyloader for images

* Add compression to images by default before storing in cache

* Add image compression toggle in Global Settings

* Move Alert Provider to top level of index.js
2024-03-30 22:43:40 -04:00
ianarawjo
eb51d1cee9
Add Amazon Bedrock models to main (#251)
* Adding support for Amazon Bedrock models (#247)

* Create global setting for GenAI features provider, to support Bedrock (Anthropic) models as an alternative

* Reformats dropdown in PromptNode to use Mantine ContextMenu with a nested menu, to save space. 

* Remove build folder from git

* Fix context menu to close on click-off. Refactor context menu array code.

* Ensure context menu is positioned below the Add+ button, like a proper dropdown. 

* Toggle context menu off when clicking btn again.

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
2024-03-30 17:59:17 -04:00
Ian Arawjo
ad84cfdecc Add Claude Haiku 2024-03-21 21:33:16 -04:00
Ian Arawjo
b832b7ba21 Rebuild 2024-03-17 00:01:58 -04:00
Ian Arawjo
91afd61c6c Bug fix toggle visibility
It appears the visibility toggle was broken by an eslint autofix code rewrite.
2024-03-16 23:59:47 -04:00
Ian Arawjo
fac5579eee Rebuild 2024-03-15 19:59:05 -04:00
Ian Arawjo
f4e06fd00d Bug fix fill templated settings vars 2024-03-15 19:56:19 -04:00
ianarawjo
bdfeb5c26f
Add human ratings to inspectors (#244)
* Add human ratings to inspectors

* Store human labels in cache, not resp objs

* Change rating UI to pull from Zustand store

* Lazy load inspectors

* Update version and rebuild app
2024-03-14 13:02:47 -04:00
Ian Arawjo
677073ef62 Clean escaped braces before eval 2024-03-07 15:36:40 -05:00
ianarawjo
4cb97d87f7
Bug fix code on save (#241) 2024-03-07 14:31:55 -05:00
ianarawjo
0f4275bc75
Add Claude 3 and Pyodide sandboxing (#237)
Adds a pyodide WebWorker to run Python scripts, thanks to an idea by Shreya.

* Add sandbox option to Python eval nodes.

* Add new Anthropic models

* Disable guards for Python evals on server

* Fix bug with detecting async func in runOverResponses

---------

Co-authored-by: Shreya Shankar <ss.shankar505@gmail.com>
2024-03-05 23:15:35 -05:00
ianarawjo
0a45383b95
Generate code evaluators (#231)
* Adds a purple GenAI button to Code Evaluator Nodes, to allow easier creation of evaluation functions. (NOTE: This, like the TextFields and Items Nodes GenAI features, is experimental and requires an OpenAI API key to access.)

* Adds a drop-down to LLM evaluators

* Ensures LLM evaluators load cached responses on load

* Fixes a bug where right-clicking in pop-up Inspectors would bring up the node context menu.

* Internally, refactors evaluator nodes to have inner components that take care of running evaluations, in preparation for multi-eval and running evals elsewhere
2024-02-27 20:27:41 -05:00
Ian Arawjo
5d666643bd Fix TypeScript type errors 2024-02-24 14:31:36 -05:00
ianarawjo
bd35ecddb2
Add Prettier and ESLint formatting (#227)
* Add and run prettier

* Add eslint and code fixes after formatting (#223)

* chore(formatting): config files and packages

* chore(formatting): package.json

* chore(formatting): applying formatting

changes obtained by applying the previous commit and running `npx prettier -w .`

* chore(formatting): added formatting and linting to react app

* chore(formatting): fixes

* chore(eslint): apply fixes to utils.ts

* rebuild

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>

* Rebuild

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
2024-02-24 11:32:05 -05:00
ianarawjo
d8d02424e2
Add new OpenAI models. (#225) 2024-02-17 12:35:38 -05:00
Ian Arawjo
625619c0b0 Rebuild app and update package version 2024-02-11 16:25:08 -05:00
Ian Arawjo
cff7a470bc Export metavars to excel 2024-02-11 16:22:55 -05:00
Ian Arawjo
08047d2c55 Bug fix join node 2024-01-22 12:56:34 -05:00
ianarawjo
7e1f43688f
"Small" changes (#213)
* Remove notification dots

* Add batch uids to response objects.

* Regroup responses by batch ids in inspectors. Add batch ids to resp objs. Update examples.

* Bug fix: clear RF state first before loading a flow

* Add random sample toggle to Tabular Data node

* Make sample UI location conditional on the number of columns, so it fits more nicely into the whitespace

* Adds 'settings template vars' to parametrize on model settings.

* Typecast settings vars params

* Rebuild app and update version
2024-01-19 20:23:24 -05:00
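The batch-uid regrouping this PR describes ("Regroup responses by batch ids in inspectors") can be sketched as below. The `batch_id` field name and the response-object shape are assumptions for illustration, not ChainForge's actual types:

```typescript
// Sketch: group response objects by their batch uid, preserving insertion order.
interface ResponseObject {
  batch_id: string;
  text: string;
}

function groupByBatchId(resps: ResponseObject[]): Map<string, ResponseObject[]> {
  const groups = new Map<string, ResponseObject[]>();
  for (const r of resps) {
    const group = groups.get(r.batch_id);
    if (group) group.push(r);
    else groups.set(r.batch_id, [r]);
  }
  return groups;
}
```

A `Map` (rather than a plain object) keeps batches in first-seen order, which matters if an inspector renders groups in the order responses arrived.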
Kayla Z
3d15bc9d17
Add stop button to cancel pending queries (#211)
* Add Stop button

* Replaced QueryTracker stop checks in _prompt_llm in query.ts. Modified _prompt_llm and *gen_responses to take in node id for checking purposes. Added new css class for stopping status.

* Used callback function instead of passing id to the backend, renamed QueryStopper and some of its functions, made custom error

* Added semicolons and one more UserForcedPrematureExit check

* Revise canceler to never clear id, and use unique id Date.now instead

* Make cancel go into call_llm funcs

* Cleanup console logs

* Rebuild app and update package version

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Ian Arawjo <fatso784@gmail.com>
2024-01-13 18:22:08 -05:00
ianarawjo
b92c03afb2
Replace Dalai with Ollama (#209)
* Add basic Ollama support (#208)

* Remove trapFocus warning when no OpenAI key set

* Ensure Ollama is only visible in providers list if running locally. 

* Remove Dalai.

* Fix ollama support to include chat models and pass chat history correctly

* Fix bug with debounce on progress bar updates in Prompt/Chat nodes

* Rebuild app and update package version

---------

Co-authored-by: Laurent Huberdeau <16990250+laurenthuberdeau@users.noreply.github.com>
2024-01-08 18:33:13 -05:00
ianarawjo
5acdfc0677
Add Search Bar to Inspectors (#206)
* Add search bar to Response Inspector

* Added search text highlights using mark tags

* Add filter and case sensitive toggles

* Fixed inspector UI for wide and non-wide formats, to include Find bar

* Escape search string before RegExp. Fix longstanding refresh issue when template var is removed.

* Fix styling inconsistency w border width when displaying LLM responses on Firefox

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
2024-01-05 22:19:50 -05:00
ianarawjo
48f1314d23
Debounce re-renders in prompt node progress listener (#204)
* Debounce rerenders in prompt node progress listener

* Fix debounce lag in genAI for Items nodes

* Rebuild app and update version
2024-01-02 18:33:39 -05:00
ianarawjo
32c62225d2
Debounce text edit callbacks to optimize performance in TFs and Items nodes (#203) 2024-01-02 15:43:13 -05:00
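The debouncing applied in this and the preceding commit is the standard trailing-edge pattern: defer expensive work (re-renders, cache writes) until keystrokes pause. A minimal sketch, with illustrative names rather than the project's actual code:

```typescript
// Trailing-edge debounce: only the last call within `waitMs` actually fires.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Wrapping a text-edit callback this way means a burst of keystrokes triggers one downstream update instead of one per key.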
Ian Arawjo
4a45bd6b75 Fix bug with template vars when generating prompt templates w genAI 2024-01-02 11:12:46 -05:00
Ian Arawjo
965b96e451 Fix bug when creating dirs for custom providers 2024-01-02 10:36:43 -05:00
ianarawjo
0af7bdaedd
Bug fix to visibility on TF nodes (#201)
* removed 'source-code-pro' from code css to fix cursor accuracy in code editor (#199)

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Refactor duplicate code (#198)

* Refactor common const from JoinNode.js, LLMResponseInspector.js, SplitNode.js and VisNode.js into utils.ts

* Un-factor same-named constants with different definitions; fix syntax for multiple imports

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Bug fix to update visibility on TF fields

* rebuild react and update version

---------

Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
2023-12-29 11:18:21 -05:00
ianarawjo
69ff3a1452
Bug fix (#197) 2023-12-20 14:35:38 -05:00
ianarawjo
d6e850e724
Gemini model support and raise error when detecting duplicate var names (v0.2.8.1) (#195)
* Raise error after detecting duplicate variable names (#190)

* Raise error for duplicate variable name

* Created base error class

* Simplified error classes. Made just one `DuplicateVariableNameError` that takes in the variable name to produce a hard-coded error message

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Adding support for Google's Gemini-Pro model. (#194)

* Refined duplicate var error check code

* Tidy up duplicate var name alerts and error handling, and err message

* Rebuild react and update package version

---------

Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Priyan Vaithilingam <priyanmuthu@gmail.com>
2023-12-19 16:14:34 -05:00
ianarawjo
ce583a216c
AI for ChainForge BETA: TextFields, Items (#191)
* Implement autofill backend

* Add autofill to ui

* Add argument to getUID to force recalculation of UID's on every call

* Add command fill

* Move popover to the right

* Merge autofill-ui into autofill

* Add minimum rows requirement for autofilling

* Rename local variable in autofill system

* Rename autofill.ts to ai.ts

* Implement generate and replace backend function

* Add purple AI button

* Add ai popover

* Add tabs to ai popover

* Cosmetic changes to AI popover

* Move command fill UI to purple button popover

* Add 'creative' toggle to generateAndReplace

* Generate and replace UI

* Call backend for generate and replace

* Change creative to unconventional in generate and replace system

* Fix generate and replace

* Add loading states

* Cosmetic changes

* Use sparkle icon

* Cosmetic changes

* Add a clarifying sentence to the prompt when the user asks for a prompt

* Change to markdown

* Add error handling to AI system

* Improve prompt prompt

* Remove 'suggestions loading' message

* Change 'pattern' to 'generate a list of' and fix a bug where I forgot to specify an unordered Markdown list

* Limit output to n in decode()

* Fix bug in error handling

* TEMP: try to fix autofill

* TEMP: disable autofill

* Finally fix autofill's debouncing

* Improve autofill prompt to handle commands

* Fix typo with semicolon

* Refactor the AI Popover into a new component

* Refactor the autofill functionality into two backend files

* Minor refactoring and styling fixes

* Parse markdown using markdown library

* Add no_cache flag support in backend to ignore cache for AI popover

* trim quotation marks and escape braces in AI autofill

* Add AI Support Tab in Global Settings pane.

* Convert Jinja braces

* Fix typo in AiPopover import

* Handle template variables with Extend and Autocomplete + Check template variable correctness in outputs

* Escape the braces of generate and replace prompts

* Update prompts to strengthen AI support for multiple template variables

* Log the system message

* Reduce minimum rows required to 1 for autocomplete to begin generating

* Reduce min rows to extend to 1 and add warning below 2

* Create a defaultdict utility

* Consider null values as nonexistent in defaultdict

* Make placeholders stick to their assigned text field without using defaultdict

* Make placeholder logic more readable

* Cache rendering of text fields to avoid expensive computation

* Calculate whether to refresh suggestions based on expected suggestions instead of previous suggestions

* Fix bug where LLM was returning templates in generate and replace where none was requested

* Force re-render of text fields on Extend

* Add Sean Yang to README

* Add GenAI support to Items Node

* Pass front-end API keys to AI support features

* Escape braces on Items Node outputs

* Update package to 0.2.8

* Disable autosaving if it takes 1 second or longer to save to localStorage

* Skip autosave when browser tab is inactive

* Fetch environment API keys only once upon load

* Check for OpenAI API key in AIPopover. If not present, display Alert.

---------

Co-authored-by: Sean Yang <53060248+shawseanyang@users.noreply.github.com>
2023-12-13 11:58:07 -05:00
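The defaultdict utility mentioned mid-PR (creating a defaultdict that treats null values as nonexistent) might look roughly like the Proxy-based sketch below. This is a guess at the shape, not the actual implementation:

```typescript
// Sketch: a Python-style defaultdict. Reading a missing (or null) key lazily
// fills it with a fresh default value instead of returning undefined.
function defaultdict<V>(makeDefault: () => V): Record<string, V> {
  return new Proxy({} as Record<string, V>, {
    get(target, prop) {
      const key = String(prop);
      // Per the follow-up commit, null counts as nonexistent too.
      if (target[key] === undefined || target[key] === null)
        target[key] = makeDefault();
      return target[key];
    },
  });
}
```

With this, code like `counts[field] += 1` works without first checking whether `field` has been seen, which simplifies the placeholder bookkeeping the PR describes.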
Ian Arawjo
bacf61be18 Increase max Num Responses per Prompt to 999 2023-12-11 15:37:04 -05:00
ianarawjo
ec8fbde392
Add Inspect Drawer (to Prompt and Eval Nodes) (#189)
* Show metavars in Table View

* Remove showing metavar col when var is col-plotted

* Add collapseable drawer to prompt node

* Add inspect drawer to eval nodes

* Rebuild app and package version

* Revise CSS so text in inspect drawer is selectable
2023-12-06 21:40:34 -05:00
ianarawjo
1f5e0207c9
Disable delete key, use relative paths in FLASK_BASE_URL, rename CSVNode to ItemsNode (#185)
* Disable delete key deleting a node in RF

* Change FLASK_BASE_URL to use relative path except in dev mode

* Rename CSVNode to ItemsNode and replace its icon.

* Update package version and rebuild react

* Add new Claude models
2023-12-04 18:53:57 -05:00
Ian Arawjo
821950d959 Rebuild react and update version 2023-11-27 20:18:32 -05:00
Ian Arawjo
622509188f Update cache for join and split nodes.
Use StorageCache in App.js as safe interface to localStorage.
2023-11-27 20:14:34 -05:00
Ian Arawjo
fddc63338f Fix bug with LLM objs in response inspectors 2023-11-27 19:51:40 -05:00
ianarawjo
7223735b7f
Add Code Processor nodes (#180)
* Add Code Processor nodes.

* Renamed EvaluatorNode to CodeEvaluatorNode.

* Changed the way evaluator nodes pull responses (from grabResponses to pullInputData).

* Fix SimpleEvalNode to be consistent with CodeEvaluatorNode

* Fix Vis Node for the case where no eval responses are connected, but responses are.

* Rebuild react and update package version
2023-11-27 13:57:27 -05:00
Ian Arawjo
a861695c87 Make response reordering more efficient 2023-11-26 21:48:36 -05:00
Ian Arawjo
eb6c39947b Remove log 2023-11-26 21:25:56 -05:00