102 Commits

Author SHA1 Message Date
ianarawjo
9ec7a3a4fc
Add prompt previews and cancel button to LLM scorer (#319)
Fix bug with default model not showing the selected one when adding new LLM model from the dropdown list.
2024-12-29 11:31:47 -05:00
Ian Arawjo
f5882768ba Update Google Gemini models 2024-12-28 17:56:51 -05:00
ianarawjo
ff813c7255
Revamped Example Flows (#316)
* Fix Try Me button spacing

* Added new examples

* Updated package version
2024-12-27 18:47:32 -05:00
ianarawjo
7e86f19aac
GenAI Data Synthesis for Tabular Data Node (#315)
* TabularDataNode supports Replace and Extend for AiGen (#312)

* Testing Values

* Fixed typing issue with Models in fromModelId

* TabularDataNode now supports table generation.
modified:   src/AiPopover.tsx
            Added support for table replacement
            and future support for extension.
modified:   src/TabularDataNode.tsx
            Added the AiPopover button and
            functionality for table replacement.
modified:   src/backend/ai.ts
            Added specific prompts and decoding
            for markdown table generation.
new file:   src/backend/tableUtils.ts
            Separated the parsing for tables into
            a separate utility file for better
            organization and future extensibility.

* Added Extend Functionality to Table Popover.
modified:   src/AiPopover.tsx
            Removed unnecessary import.
            Changed handleCommandFill to work with
            autofillTable function in ai.ts.
modified:   src/TabularDataNode.tsx
            Removed Skeleton from Popover.
            Changed addMultipleRows such that it
            now renders the new rows correctly
            and removes the blank row.
modified:   src/backend/ai.ts
            Added autofillTable function and
            changed decodeTable so that they
            are flexible with both proper and
            improper markdown tables.
            Added new system message prompt
            specific to autofillTable.
            Removed unnecessary log statements.
removed:    src/backend/utils.ts
            Removed change.

* Added "add column" prompt & button in TablePopover

modified:   src/AiPopover.tsx
            Added handleGenerateColumn so that
            a column can be generated given
            a prompt.
            Added changes to the TablePopover UI.
            Now Extend is divided into AddRow
            and AddColumn sections.
modified:   src/TabularDataNode.tsx
            Modified addColumns so that it's safer.
            Added optional pass of rowValue to
            support generateColumn.
modified:   src/backend/ai.ts
            Added generateColumn and its
            corresponding system message.
Cleaned up some comments and added missing commas.

* Generate Columns now considers item-by-item
processing of the rows for generating the
new column values.

modified:   src/AiPopover.tsx
            Fixed the key issue for onAddColumn.
modified:   src/TabularDataNode.tsx
            Changed addColumns to filter out
            previously added columns.
modified:   src/backend/ai.ts
            Changed generateColumns to process
            item-by-item to generate new columns.

* Fix bugs. Change OpenAI small model for GenAI features to GPT-4o.

* Update package version

* Remove gen diverse outputs switch in genAI for table

---------

Co-authored-by: Kraft-Cheese <114844630+Kraft-Cheese@users.noreply.github.com>
2024-12-19 15:38:23 -05:00
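
The table-generation commit above adds markdown-table generation and decoding (src/backend/tableUtils.ts). A minimal sketch of that kind of decoder is shown below; the function name and return shape are assumptions for illustration, not ChainForge's actual API, and it tolerates tables with or without a separator row, as the commit describes.

```typescript
// Hypothetical sketch of a markdown-table decoder in the spirit of
// src/backend/tableUtils.ts. Names and types are illustrative assumptions.
interface DecodedTable {
  headers: string[];
  rows: string[][];
}

export function decodeMarkdownTable(md: string): DecodedTable {
  // Keep only lines that look like table rows (i.e., contain a pipe).
  const lines = md
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l.includes("|"));

  // Drop separator rows like "| --- | :---: |"; "improper" tables may omit them.
  const rowLines = lines.filter((l) => !/^\|?\s*:?-{2,}/.test(l));

  const toCells = (line: string): string[] =>
    line.replace(/^\|/, "").replace(/\|$/, "").split("|").map((c) => c.trim());

  const [headerLine, ...bodyLines] = rowLines;
  return {
    headers: headerLine ? toCells(headerLine) : [],
    rows: bodyLines.map(toCells),
  };
}
```
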
ianarawjo
1641abe975
Structured outputs support for Ollama, OpenAI, and Anthropic models (#313)
* Add structured outputs support for OpenAI and Ollama

* Extract outputs from tool_calls and refusal in OpenAI API responses

* Add tool use for Anthropic API. Add new Anthropic models.

* Add num_ctx to Ollama API call

* Update package version

* Update function calling example
2024-12-16 16:24:55 -05:00
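
The structured-outputs commit above constrains model replies to a JSON schema for OpenAI and Ollama, and uses tool calls for Anthropic. A rough sketch of the OpenAI side, using the raw Chat Completions REST endpoint, is below; the schema, model string, and helper name are assumptions for illustration, not ChainForge's actual request builder.

```typescript
// Illustrative sketch: request a JSON-schema-constrained (structured) output
// from the OpenAI Chat Completions API. Schema and helper name are assumptions.
async function fetchStructuredOutput(apiKey: string, prompt: string) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-2024-08-06",
      messages: [{ role: "user", content: prompt }],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "answer",
          strict: true,
          schema: {
            type: "object",
            properties: { answer: { type: "string" } },
            required: ["answer"],
            additionalProperties: false,
          },
        },
      },
    }),
  });
  const data = await res.json();
  // The structured reply arrives as a JSON string in message.content.
  return JSON.parse(data.choices[0].message.content);
}
```
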
Ian Arawjo
dd28754959 Remove rate limiting ceiling in bottleneck. Fix eslint to <9.0. 2024-12-11 10:17:04 -05:00
Sam
f6565537fa
chore(deps): bump deps for Docker, python versions (#305) 2024-10-29 13:33:01 -04:00
Ian Arawjo
98b140b5fa Add newest OpenAI and Anthropic models 2024-08-15 22:06:13 -04:00
ianarawjo
f6e1bfa38a
Add Together.ai and update Bedrock (#283)
* feat(bedrock_llama3): added support for Llama3 (#270)

- added also Claude 3 Opus to the list of models
- replaced hardcoded model Id strings with refs to NativeLLM enum

* chore: bump @mirai73/bedrock-fm library (#277)

- the new version adds source code to facilitate debugging

Co-authored-by: ianarawjo <fatso784@gmail.com>

* Adding together.ai support (#280)


---------

Co-authored-by: ianarawjo <fatso784@gmail.com>

* Add Together.ai and update Bedrock models

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
Co-authored-by: Can Bal <canbal@users.noreply.github.com>
2024-05-17 20:17:18 -10:00
Ian Arawjo
e3259ecc1b Add new OpenAI models 2024-05-14 07:21:48 -10:00
Ian Arawjo
735268e331 Fix Claude carrying system message issue and bug with OpenAI_BaseURL loading 2024-04-29 21:17:49 -04:00
Ian Arawjo
af7f53f76e Fix bug loading OPENAI_BASE_URL from environ var 2024-04-28 14:29:58 -04:00
Ian Arawjo
4fa4b7bcc0 Escape braces in LLM scorer. Add OpenAI_BaseURL setting. 2024-04-26 07:24:31 -04:00
ianarawjo
6fa3092cd0
Add Multi-Eval node (#265)
* Port over and type MultiEvalNode code from the `multi-eval` branch

* Merge css changes from `multi-eval`

* Merge changes to inspector table view from `multi-eval`

* Criteria progress rings

* Debounce renders on text edits

* Add sandbox toggle to Python evals inside MultiEval

* Add uids to evals in MultiEval, for correct cache ids not dependent on name

* <Stack> scores

* Add debounce to editing code or prompts in eval UI

* Update package version
2024-04-25 13:51:25 -04:00
Ian Arawjo
7126f4f4d4 Fix typing error and update package vers 2024-04-17 19:19:54 -04:00
Ian Arawjo
6b65d96369 Fix bug w/ non-updating custom providers in model list 2024-04-02 12:16:35 -04:00
ianarawjo
583ea6506f
Convert entire front-end to TypeScript. Add image model support. (And more!) (#252)
* Refactor: modularize response boxes into a separate component

* Type store.js. Change info to vars. NOTE: This may break backwards compat.

* Refactor addNodes in App.tsx to be simpler.

* Turn AlertModal into a Provider with useContext

* Remove fetch_from_backend.

* Add build/ to gitignore

* Add support for image models and add Dall-E models.

* Better rate limiting with Bottleneck

* Fix new Chrome bug with file import readers not appearing as arrays; and fix bug with exportCache

* Add ability to add custom right-click context menu items per node

* Convert to/from TF and Items nodes

* Add lazyloader for images

* Add compression to images by default before storing in cache

* Add image compression toggle in Global Settings

* Move Alert Provider to top level of index.js
2024-03-30 22:43:40 -04:00
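
One bullet in the commit above mentions better rate limiting with Bottleneck. A minimal sketch of how the Bottleneck library throttles concurrent LLM calls follows; the limits and the wrapped call are illustrative assumptions, not the values ChainForge actually uses.

```typescript
// Illustrative sketch of throttling API calls with the Bottleneck library.
// The limits and the wrapped call are assumptions, not ChainForge's settings.
import Bottleneck from "bottleneck";

const limiter = new Bottleneck({
  maxConcurrent: 4, // at most 4 requests in flight at once
  minTime: 250, // and at least 250 ms between request starts
});

async function callModel(prompt: string): Promise<string> {
  // A provider-specific API call would go here.
  return `response to: ${prompt}`;
}

// Every queued call shares the limiter, so bursts are smoothed out.
const throttledCall = (prompt: string) => limiter.schedule(() => callModel(prompt));
```
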
ianarawjo
eb51d1cee9
Add Amazon Bedrock models to main (#251)
* Adding support for Amazon Bedrock models (#247)

* Create global setting for GenAI features provider, to support Bedrock (Anthropic) models as an alternative

* Reformats dropdown in PromptNode to use Mantine ContextMenu with a nested menu, to save space. 

* Remove build folder from git

* Fix context menu to close on click-off. Refactor context menu array code.

* Ensure context menu is positioned below the Add+ button, like a proper dropdown. 

* Toggle context menu off when clicking btn again.

---------

Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
2024-03-30 17:59:17 -04:00
Ian Arawjo
ad84cfdecc Add Claude Haiku 2024-03-21 21:33:16 -04:00
Ian Arawjo
b832b7ba21 Rebuild 2024-03-17 00:01:58 -04:00
Ian Arawjo
fac5579eee Rebuild 2024-03-15 19:59:05 -04:00
ianarawjo
bdfeb5c26f
Add human ratings to inspectors (#244)
* Add human ratings to inspectors

* Store human labels in cache, not resp objs

* Change rating UI to pull from Zustand store

* Lazy load inspectors

* Update version and rebuild app
2024-03-14 13:02:47 -04:00
Ian Arawjo
677073ef62 Clean escaped braces before eval 2024-03-07 15:36:40 -05:00
ianarawjo
4cb97d87f7
Bug fix code on save (#241) 2024-03-07 14:31:55 -05:00
ianarawjo
0f4275bc75
Add Claude 3 and Pyodide sandboxing (#237)
Adds a Pyodide WebWorker to run Python scripts, thanks to an idea by Shreya.

* Add sandbox option to Python eval nodes.

* Add new Anthropic models

* Disable guards for Python evals on server

* Fix bug with detecting async func in runOverResponses

---------

Co-authored-by: Shreya Shankar <ss.shankar505@gmail.com>
2024-03-05 23:15:35 -05:00
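
The commit above sandboxes Python evaluators by running them in a Pyodide WebWorker. A worker-side sketch of that pattern is below, assuming a CDN-hosted Pyodide build; the version, URL, and message shape are assumptions for illustration rather than ChainForge's actual protocol.

```typescript
// Worker-side sketch of sandboxed Python evaluation with Pyodide.
// The Pyodide CDN URL/version and the message shape are assumptions.
declare function importScripts(...urls: string[]): void; // WebWorker global
declare function loadPyodide(): Promise<any>; // provided globally by pyodide.js

importScripts("https://cdn.jsdelivr.net/pyodide/v0.24.1/full/pyodide.js");
const pyodideReady = loadPyodide();

self.onmessage = async (event: MessageEvent<{ code: string }>) => {
  const pyodide = await pyodideReady;
  try {
    // Run the user's evaluator code inside the WebAssembly sandbox.
    const result = await pyodide.runPythonAsync(event.data.code);
    (self as any).postMessage({ ok: true, result });
  } catch (err) {
    (self as any).postMessage({ ok: false, error: String(err) });
  }
};
```
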
ianarawjo
0a45383b95
Generate code evaluators (#231)
* Adds a purple GenAI button to Code Evaluator Nodes, to allow easier creation of evaluation functions. (NOTE: This, like the TextFields and Items Nodes GenAI features, is experimental and requires an OpenAI API key to access.)

* Adds a drop-down to LLM evaluators

* Ensures LLM evaluators load cached responses on load

* Fixes a bug where right-clicking in pop-up Inspectors would bring up the node context menu.

* Internally, refactors evaluator nodes to have inner components that take care of running evaluations, in preparation for multi-eval and running evals elsewhere
2024-02-27 20:27:41 -05:00
ianarawjo
d8d02424e2
Add new OpenAI models. (#225) 2024-02-17 12:35:38 -05:00
Ian Arawjo
625619c0b0 Rebuild app and update package version 2024-02-11 16:25:08 -05:00
Ian Arawjo
08047d2c55 Bug fix join node 2024-01-22 12:56:34 -05:00
ianarawjo
7e1f43688f
"Small" changes (#213)
* Remove notification dots

* Add batch uids to response objects.

* Regroup responses by batch ids in inspectors. Add batch ids to resp objs. Update examples.

* Bug fix: clear RF state first before loading a flow

* Add random sample toggle to Tabular Data node

* Make sample UI loc conditional on num cols and fit nicer into whitespace

* Adds 'settings template vars' to parametrize on model settings.

* Typecast settings vars params

* Rebuild app and update version
2024-01-19 20:23:24 -05:00
Kayla Z
3d15bc9d17
Add stop button to cancel pending queries (#211)
* Add Stop button

* Replaced QueryTracker stop checks in _prompt_llm in query.ts. Modified _prompt_llm and *gen_responses to take in node id for checking purposes. Added new css class for stopping status.

* Used callback function instead of passing id to the backend, renamed QueryStopper and some of its functions, made custom error

* Added semicolons and one more UserForcedPrematureExit check

* Revise canceler to never clear id, and use unique id Date.now instead

* Make cancel go into call_llm funcs

* Cleanup console logs

* Rebuild app and update package version

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Ian Arawjo <fatso784@gmail.com>
2024-01-13 18:22:08 -05:00
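
The stop-button commit above threads a cancellation check into the query loop and raises a custom UserForcedPrematureExit error when the user aborts. A simplified sketch of that shape follows; the function signature is an illustrative assumption, not the exact one in query.ts.

```typescript
// Simplified sketch of user-initiated cancellation of pending LLM queries.
// The generateResponses signature is an assumption; the real logic lives in query.ts.
class UserForcedPrematureExit extends Error {
  constructor() {
    super("Query was stopped by the user.");
    this.name = "UserForcedPrematureExit";
  }
}

async function generateResponses(
  prompts: string[],
  callLLM: (p: string) => Promise<string>,
  shouldStop: () => boolean, // callback flips to true when the Stop button is pressed
): Promise<string[]> {
  const out: string[] = [];
  for (const p of prompts) {
    // Check before each call so a pending run can exit early.
    if (shouldStop()) throw new UserForcedPrematureExit();
    out.push(await callLLM(p));
  }
  return out;
}
```
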
ianarawjo
b92c03afb2
Replace Dalai with Ollama (#209)
* Add basic Ollama support (#208)

* Remove trapFocus warning when no OpenAI key set

* Ensure Ollama is only visible in providers list if running locally. 

* Remove Dalai.

* Fix ollama support to include chat models and pass chat history correctly

* Fix bug with debounce on progress bar updates in Prompt/Chat nodes

* Rebuilt app and update package version

---------

Co-authored-by: Laurent Huberdeau <16990250+laurenthuberdeau@users.noreply.github.com>
2024-01-08 18:33:13 -05:00
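
The Ollama commit above switches to chat models and passes the chat history through on each call. A minimal sketch of a non-streaming request to Ollama's local /api/chat endpoint is below; the host, model name, and types are assumptions for illustration.

```typescript
// Illustrative sketch of calling a locally running Ollama chat model with history.
// Host, model name, and types are assumptions, not ChainForge's code.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

async function ollamaChat(model: string, history: ChatMessage[]): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model, // e.g. "llama3"
      messages: history, // the full chat history, not just the latest turn
      stream: false,
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```
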
ianarawjo
5acdfc0677
Add Search Bar to Inspectors (#206)
* Add search bar to Response Inspector

* Added search text highlights using mark tags

* Add filter and case sensitive toggles

* Fixed inspector UI for wide and non-wide formats, to include Find bar

* Escape search string before RegExp. Fix longstanding refresh issue when template var is removed.

* Fix styling inconsistency w border width when displaying LLM responses on Firefox

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
2024-01-05 22:19:50 -05:00
ianarawjo
48f1314d23
Debounce re-renders in prompt node progress listener (#204)
* Debounce rerenders in prompt node progress listener

* Fix debounce lag in genAI for Items nodes

* Rebuild app and update version
2024-01-02 18:33:39 -05:00
ianarawjo
32c62225d2
Debounce text edit callbacks to optimize performance in TFs and Items nodes (#203) 2024-01-02 15:43:13 -05:00
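
Several commits above debounce text-edit callbacks and progress-bar re-renders to cut down on wasted work. A generic sketch of the debounce pattern is below; ChainForge may rely on a library helper instead of this hand-rolled version.

```typescript
// Generic trailing-edge debounce: collapse a burst of calls into one call.
// A hand-rolled sketch; a library helper (e.g., lodash.debounce) works the same way.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number,
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Example: only trigger a re-render 250 ms after the user stops typing.
const onTextEdit = debounce((text: string) => {
  console.log("render with", text);
}, 250);
```
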
Ian Arawjo
4a45bd6b75 Fix bug with template vars when generating prompt templates w genAI 2024-01-02 11:12:46 -05:00
Ian Arawjo
965b96e451 Fix bug when creating dirs for custom providers 2024-01-02 10:36:43 -05:00
ianarawjo
0af7bdaedd
Bug fix to visibility on TF nodes (#201)
* Removed 'source-code-pro' from code CSS to fix cursor accuracy in code editor (#199)

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Refactor duplicate code (#198)

* Refactor common const from JoinNode.js, LLMResponseInspector.js, SplitNode.js and VisNode.js into utils.ts

* Un-factor the same constant that had different definitions; fixed syntax for multiple imports

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Bug fix to update visibility on TF fields

* rebuild react and update version

---------

Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
2023-12-29 11:18:21 -05:00
ianarawjo
69ff3a1452
Bug fix (#197) 2023-12-20 14:35:38 -05:00
ianarawjo
d6e850e724
Gemini model support and raise error when detecting duplicate var names (v0.2.8.1) (#195)
* Raise error after detecting duplicate variable names (#190)

* Raise error for duplicate variable name

* Created base error class

* Simplified error classes. Made just one `DuplicateVariableNameError` that takes in variable name to have a hard-coded error message

---------

Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>

* Adding support for Google's Gemini-Pro model.  (#194)

* Refined duplicate var error check code

* Tidy up duplicate var name alerts and error handling, and err message

* Rebuild react and update package version

---------

Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Priyan Vaithilingam <priyanmuthu@gmail.com>
2023-12-19 16:14:34 -05:00
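
The duplicate-variable-name commit above consolidates its error types into a single DuplicateVariableNameError that takes the offending variable name. A small sketch of that shape follows; the message wording and the checker are illustrative assumptions.

```typescript
// Sketch of one error class parameterized by the duplicated variable name.
// The message wording and the checker below are assumptions for illustration.
class DuplicateVariableNameError extends Error {
  constructor(public readonly variableName: string) {
    super(`Duplicate variable name "${variableName}": template variable names must be unique.`);
    this.name = "DuplicateVariableNameError";
  }
}

function checkForDuplicateVars(varNames: string[]): void {
  const seen = new Set<string>();
  for (const name of varNames) {
    if (seen.has(name)) throw new DuplicateVariableNameError(name);
    seen.add(name);
  }
}
```
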
ianarawjo
ce583a216c
AI for ChainForge BETA: TextFields, Items (#191)
* Implement autofill backend

* Add autofill to ui

* Add argument to getUID to force recalculation of UIDs on every call

* Add command fill

* Move popover to the right

* Merge autofill-ui into autofill

* Add minimum rows requirement for autofilling

* Rename local variable in autofill system

* Rename autofill.ts to ai.ts

* Implement generate and replace backend function

* Add purple AI button

* Add ai popover

* Add tabs to ai popover

* Cosmetic changes to AI popover

* Move command fill UI to purple button popover

* Add 'creative' toggle to generateAndReplace

* Generate and replace UI

* Call backend for generate and replace

* Change creative to unconventional in generate and replace system

* Fix generate and replace

* Add loading states

* Cosmetic changes

* Use sparkle icon

* Cosmetic changes

* Add a clarifying sentence to the prompt when the user asks for a prompt

* Change to markdown

* Add error handling to AI system

* Improve prompt prompt

* Remove 'suggestions loading' message

* Change 'pattern' to 'generate a list of' and fix a bug where I forgot to specify an unordered markdown list

* Limit output to n in decode()

* Fix bug in error handling

* TEMP: try to fix autofill

* TEMP: disable autofill

* Finally fix autofill's debouncing

* Improve autofill prompt to handle commands

* Fix typo with semicolon

* Refactor the AI Popover into a new component

* Refactor the autofill functionality into two backend files

* Minor refactoring and styling fixes

* Parse markdown using markdown library

* Add no_cache flag support in backend to ignore cache for AI popover

* trim quotation marks and escape braces in AI autofill

* Add AI Support Tab in Global Settings pane.

* Convert Jinja braces

* Fix typo in AiPopover import

* Handle template variables with Extend and Autocomplete + Check template variable correctness in outputs

* Escape the braces of generate and replace prompts

* Update prompts to strengthen AI support for multiple template variables

* Log the system message

* Reduce minimum rows required to 1 for autocomplete to begin generating

* Reduce min rows to extend to 1 and add warning below 2

* Create a defaultdict utility

* Consider null values as nonexistent in defaultdict

* Make placeholders stick to their assigned text field without using defaultdict

* Make placeholder logic more readable

* Cache rendering of text fields to avoid expensive computation

* Calculate whether to refresh suggestions based on expected suggestions instead of previous suggestions

* Fix bug where LLM was returning templates in generate and replace where none was requested

* Force re-render of text fields on Extend

* Add Sean Yang to README

* Add GenAI support to Items Node

* Pass front-end API keys to AI support features

* Escape braces on Items Node outputs

* Update package to 0.2.8

* Disable autosaving if it takes 1 second or longer to save to localStorage

* Skip autosave when browser tab is inactive

* Fetch environment API keys only once upon load

* Check for OpenAI API key in AIPopover. If not present, display Alert.

---------

Co-authored-by: Sean Yang <53060248+shawseanyang@users.noreply.github.com>
2023-12-13 11:58:07 -05:00
ianarawjo
ec8fbde392
Add Inspect Drawer (to Prompt and Eval Nodes) (#189)
* Show metavars in Table View

* Remove showing metavar col when var is col-plotted

* Add collapsible drawer to prompt node

* Add inspect drawer to eval nodes

* Rebuild app and package version

* Revise CSS so text in inspect drawer is selectable
2023-12-06 21:40:34 -05:00
ianarawjo
1f5e0207c9
Disable delete key, use relative paths in FLASK_BASE_URL, rename CSVNode to ItemsNode (#185)
* Disable delete key deleting a node in RF

* Change FLASK_BASE_URL to use relative path except in dev mode

* Rename CSVNode to ItemsNode and replace its icon.

* Update package version and rebuild react

* Add new Claude models
2023-12-04 18:53:57 -05:00
Ian Arawjo
821950d959 Rebuild react and update version 2023-11-27 20:18:32 -05:00
ianarawjo
7223735b7f
Add Code Processor nodes (#180)
* Add Code Processor nodes.

* Renamed EvaluatorNode to CodeEvaluatorNode.

* Changed the way evaluator nodes pull responses (from grabResponses to pullInputData).

* Fix SimpleEvalNode to be consistent with CodeEvaluatorNode

* Fix Vis Node where no eval resps are connected, but resps are connected.

* Rebuilt react and update package version
2023-11-27 13:57:27 -05:00
Ian Arawjo
a861695c87 Make response reordering more efficient 2023-11-26 21:48:36 -05:00
ianarawjo
a9e1ad691c
Fix numeric table imports and make LLM response order consistent w vars order (#178)
* Convert table values to strings upon export

* Make ordering of LLM responses consistent w vars dict ordering

* Rebuild react and package version

* Make sure ordering considers vars as objects
2023-11-26 21:23:21 -05:00
ianarawjo
1eae5edf89
Add "Continue w prior LLMs" toggle to the base Prompt Node (#168)
* Add support for "continue w prior LLM" toggle on base Prompt Node

* Fix anthropic chat bug

* Detect immediate prompt chaining, and show cont LLM toggle in that case

* Update react build and package
2023-11-19 18:49:52 -05:00
ianarawjo
f7be853554
Add a Split Node (#167)
* Add a split node

* Repair damage to model settings schema

* cleanup

* Fix bug w warning msg in Split node

* Bug fix removing LLM meta key

* Rebuild react and package

* Add markdown parser package dependency
2023-11-19 10:59:37 -05:00
Ian Arawjo
4ba0a9f998 Fix bug with new OpenAI models 2023-11-17 08:54:17 -05:00