* Port over and type MultiEvalNode code from the `multi-eval` branch
* Merge css changes from `multi-eval`
* Merge changes to inspector table view from `multi-eval`
* Criteria progress rings
* Debounce renders on text edits
* Add sandbox toggle to Python evals inside MultiEval
* Add uids to evals in MultiEval, so cache ids are correct and not dependent on eval names
* <Stack> scores
* Add debounce to editing code or prompts in eval UI
* Update package version
* Refactor: modularize response boxes into a separate component
* Type store.js. Change info to vars. NOTE: This may break backwards compat.
* Refactor addNodes in App.tsx to be simpler.
* Turn AlertModal into a Provider with useContext
* Remove fetch_from_backend.
* Add build/ to gitignore
* Add support for image models and add Dall-E models.
* Better rate limiting with Bottleneck
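The rate-limiting commit above delegates the real work to the Bottleneck library. As a rough illustration of the underlying idea only (the class, numbers, and injectable clock below are invented for this sketch, not ChainForge's actual settings), a token bucket looks like:

```typescript
// Minimal token-bucket rate limiter (illustrative only; the app itself
// uses the Bottleneck library). The clock `now` is injectable for testing.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,       // maximum burst size
    private refillPerMs: number,    // tokens added per millisecond
    private now: () => number = Date.now,
  ) {
    this.tokens = capacity;
    this.lastRefill = this.now();
  }

  // Returns true if a request may proceed, consuming one token.
  tryAcquire(): boolean {
    const t = this.now();
    // Refill proportionally to elapsed time, capped at capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (t - this.lastRefill) * this.refillPerMs,
    );
    this.lastRefill = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Bottleneck adds queueing, priorities, and per-provider reservoirs on top of this basic idea, which is why the app uses the library rather than hand-rolling it.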
* Fix new Chrome bug with file import readers not appearing as arrays; and fix bug with exportCache
* Add ability to add custom right-click context menu items per node
* Convert to/from TextFields and Items nodes
* Add lazyloader for images
* Add compression to images by default before storing in cache
* Add image compression toggle in Global Settings
* Move Alert Provider to top level of index.js
* Adding support for Amazon Bedrock models (#247)
* Create global setting for GenAI features provider, to support Bedrock (Anthropic) models as an alternative
* Reformats dropdown in PromptNode to use Mantine ContextMenu with a nested menu, to save space.
* Remove build folder from git
* Fix context menu to close on click-off. Refactor context menu array code.
* Ensure context menu is positioned below the Add+ button, like a proper dropdown.
* Toggle context menu off when clicking btn again.
---------
Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
* Add human ratings to inspectors
* Store human labels in cache, not resp objs
* Change rating UI to pull from Zustand store
* Lazy load inspectors
* Update version and rebuild app
Adds pyodide WebWorker to run Python scripts, thanks to an idea by Shreya.
* Add sandbox option to Python eval nodes.
* Add new Anthropic models
* Disable guards for Python evals on server
* Fix bug with detecting async func in runOverResponses
---------
Co-authored-by: Shreya Shankar <ss.shankar505@gmail.com>
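The "detecting async func" fix above hinges on telling async evaluator functions apart from synchronous ones at runtime. A common JavaScript technique for this (shown as a sketch, not necessarily how `runOverResponses` actually does it) is to inspect the constructor name:

```typescript
// One way to detect an async function at runtime (illustrative sketch).
// Note the caveat: this misses ordinary functions that merely *return*
// a Promise, which is one way such detection can go subtly wrong.
function isAsyncFunction(fn: (...args: unknown[]) => unknown): boolean {
  return fn.constructor.name === "AsyncFunction";
}
```

A more robust caller would also handle Promise-returning plain functions, e.g. by awaiting the result unconditionally.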
* Adds a purple GenAI button to Code Evaluator Nodes, to allow easier creation of evaluation functions. (NOTE: This, like the TextFields and Items Nodes GenAI features, is experimental and requires an OpenAI API key to access.)
* Adds a drop-down to LLM evaluators
* Ensures LLM evaluators load cached responses on load
* Fixes a bug where right-clicking in pop-up Inspectors would bring up the node context menu.
* Internally, refactors evaluator nodes to have inner components that take care of running evaluations, in preparation for multi-eval and running evals elsewhere
* Remove notification dots
* Add batch uids to response objects.
* Regroup responses by batch ids in inspectors. Add batch ids to resp objs. Update examples.
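Regrouping responses by batch id, as in the commit above, amounts to bucketing response objects by a shared uid. A minimal sketch (the `batch_id` field name and `RespObj` shape here are hypothetical, not the project's actual types):

```typescript
// Group response objects by their batch uid so inspectors can render
// one group per prompt batch. Field names are illustrative only.
interface RespObj {
  batch_id?: string;
  text: string;
}

function groupByBatch(responses: RespObj[]): Map<string, RespObj[]> {
  const groups = new Map<string, RespObj[]>();
  for (const r of responses) {
    // Responses predating batch uids fall into a shared fallback bucket.
    const key = r.batch_id ?? "__no_batch__";
    const arr = groups.get(key);
    if (arr) arr.push(r);
    else groups.set(key, [r]);
  }
  return groups;
}
```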
* Bug fix: clear RF state first before loading a flow
* Add random sample toggle to Tabular Data node
* Make sample UI location conditional on the number of columns, and fit it more nicely into whitespace
* Adds 'settings template vars' to parametrize on model settings.
* Typecast settings vars params
* Rebuild app and update version
* Add Stop button
* Replaced QueryTracker stop checks in _prompt_llm in query.ts. Modified _prompt_llm and *gen_responses to take in the node id for checking purposes. Added new CSS class for stopping status.
* Used callback function instead of passing id to the backend, renamed QueryStopper and some of its functions, made custom error
* Added semicolons and one more UserForcedPrematureExit check
* Revise canceler to never clear the id, and use a unique id (`Date.now`) instead
* Make cancel go into call_llm funcs
* Cleanup console logs
* Rebuild app and update package version
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Ian Arawjo <fatso784@gmail.com>
* Add basic Ollama support (#208)
* Remove trapFocus warning when no OpenAI key set
* Ensure Ollama is only visible in providers list if running locally.
* Remove Dalai.
* Fix ollama support to include chat models and pass chat history correctly
* Fix bug with debounce on progress bar updates in Prompt/Chat nodes
* Rebuild app and update package version
---------
Co-authored-by: Laurent Huberdeau <16990250+laurenthuberdeau@users.noreply.github.com>
* Add search bar to Response Inspector
* Added search text highlights using mark tags
* Add filter and case sensitive toggles
* Fixed inspector UI for wide and non-wide formats, to include Find bar
* Escape search string before RegExp. Fix longstanding refresh issue when template var is removed.
* Fix styling inconsistency with border width when displaying LLM responses on Firefox
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
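The "Escape search string before RegExp" fix above guards the inspector's Find bar against user input containing regex metacharacters. The widely used MDN-style escape, shown here as an illustration (the helper names are not the project's actual ones):

```typescript
// Escape regex metacharacters so user search text is matched literally.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Build a global (optionally case-insensitive) search regex from the
// escaped query, e.g. for wrapping matches in <mark> tags.
function makeSearchRegex(query: string, caseSensitive: boolean): RegExp {
  return new RegExp(escapeRegExp(query), caseSensitive ? "g" : "gi");
}
```

Without the escape, a search for something like `C++` or `(a)` would throw or silently match the wrong text.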
* Removed 'source-code-pro' from code CSS to fix cursor accuracy in the code editor (#199)
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
* Refactor duplicate code (#198)
* Refactor common const from JoinNode.js, LLMResponseInspector.js, SplitNode.js and VisNode.js into utils.ts
* Un-factor constants that share a name but have different definitions; fix syntax for multiple imports
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
* Bug fix to update visibility on TextFields fields
* Rebuild React and update version
---------
Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
* Raise error after detecting duplicate variable names (#190)
* Raise error for duplicate variable name
* Created base error class
* Simplified error classes. Made just one `DuplicateVariableNameError` that takes in the variable name and produces a hard-coded error message
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
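The single-error-class approach described above (one `DuplicateVariableNameError` carrying the offending name) might look like the following sketch; the message text and the `checkForDuplicates` helper are illustrative, not the project's actual code:

```typescript
// One error class that takes in the duplicate variable name and builds
// a hard-coded message from it (names and wording are illustrative).
class DuplicateVariableNameError extends Error {
  constructor(public readonly variableName: string) {
    super(
      `Duplicate variable name detected: "${variableName}". ` +
        `Template variable names must be unique.`,
    );
    this.name = "DuplicateVariableNameError";
  }
}

// Raise on the first duplicate found among collected variable names.
function checkForDuplicates(varNames: string[]): void {
  const seen = new Set<string>();
  for (const v of varNames) {
    if (seen.has(v)) throw new DuplicateVariableNameError(v);
    seen.add(v);
  }
}
```

Keeping the message inside the error class means every caller reports duplicates consistently, which is the simplification the commit describes.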
* Adding support for Google's Gemini-Pro model. (#194)
* Refined duplicate var error check code
* Tidy up duplicate var name alerts and error handling, and err message
* Rebuild react and update package version
---------
Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Priyan Vaithilingam <priyanmuthu@gmail.com>
* Implement autofill backend
* Add autofill to ui
* Add argument to getUID to force recalculation of UIDs on every call
* Add command fill
* Move popover to the right
* Merge autofill-ui into autofill
* Add minimum rows requirement for autofilling
* Rename local variable in autofill system
* Rename autofill.ts to ai.ts
* Implement generate and replace backend function
* Add purple AI button
* Add ai popover
* Add tabs to ai popover
* Cosmetic changes to AI popover
* Move command fill UI to purple button popover
* Add 'creative' toggle to generateAndReplace
* Generate and replace UI
* Call backend for generate and replace
* Change creative to unconventional in generate and replace system
* Fix generate and replace
* Add loading states
* Cosmetic changes
* Use sparkle icon
* Cosmetic changes
* Add a clarifying sentence to the prompt when the user asks for a prompt
* Change to markdown
* Add error handling to AI system
* Improve the prompt-writing prompt
* Remove 'suggestions loading' message
* Change 'pattern' to 'generate a list of' and fix a bug where the unordered Markdown list format was left unspecified
* Limit output to n in decode()
* Fix bug in error handling
* TEMP: try to fix autofill
* TEMP: disable autofill
* Finally fix autofill's debouncing
* Improve autofill prompt to handle commands
* Fix typo with semicolon
* Refactor the AI Popover into a new component
* Refactor the autofill functionality into two backend files
* Minor refactoring and styling fixes
* Parse markdown using markdown library
* Add no_cache flag support in backend to ignore cache for AI popover
* Trim quotation marks and escape braces in AI autofill
* Add AI Support Tab in Global Settings pane.
* Convert Jinja braces
* Fix typo in AiPopover import
* Handle template variables with Extend and Autocomplete + Check template variable correctness in outputs
* Escape the braces of generate and replace prompts
* Update prompts to strengthen AI support for multiple template variables
* Log the system message
* Reduce minimum rows required to 1 for autocomplete to begin generating
* Reduce min rows to extend to 1 and add warning below 2
* Create a defaultdict utility
* Consider null values as nonexistent in defaultdict
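The defaultdict utility mentioned in the two commits above, with its null-counts-as-missing behavior, can be sketched as follows (class and method names are illustrative, not the project's actual implementation):

```typescript
// A small defaultdict-style helper: missing keys, and keys whose stored
// value is null, both produce (and memoize) a fresh default value.
class DefaultDict<V> {
  private store: Record<string, V | null> = {};

  constructor(private makeDefault: () => V) {}

  get(key: string): V {
    const existing = this.store[key];
    // Treat null the same as a missing key, per the commit above.
    if (existing === undefined || existing === null) {
      const v = this.makeDefault();
      this.store[key] = v;
      return v;
    }
    return existing;
  }

  set(key: string, value: V | null): void {
    this.store[key] = value;
  }
}
```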
* Make placeholders stick to their assigned text field without using defaultdict
* Make placeholder logic more readable
* Cache rendering of text fields to avoid expensive computation
* Calculate whether to refresh suggestions based on expected suggestions instead of previous suggestions
* Fix bug where LLM was returning templates in generate and replace where none was requested
* Force re-render of text fields on Extend
* Add Sean Yang to README
* Add GenAI support to Items Node
* Pass front-end API keys to AI support features
* Escape braces on Items Node outputs
* Update package to 0.2.8
* Disable autosaving if it takes 1 second or longer to save to localStorage
* Skip autosave when browser tab is inactive
* Fetch environment API keys only once upon load
* Check for OpenAI API key in AIPopover. If not present, display Alert.
---------
Co-authored-by: Sean Yang <53060248+shawseanyang@users.noreply.github.com>
* Show metavars in Table View
* Remove showing metavar col when var is col-plotted
* Add collapsible drawer to prompt node
* Add inspect drawer to eval nodes
* Rebuild app and package version
* Revise CSS so text in inspect drawer is selectable
* Disable delete key deleting a node in RF
* Change FLASK_BASE_URL to use relative path except in dev mode
* Rename CSVNode to ItemsNode and replace its icon.
* Update package version and rebuild react
* Add new Claude models
* Add Code Processor nodes.
* Renamed EvaluatorNode to CodeEvaluatorNode.
* Changed the way evaluator nodes pull responses (from grabResponses to pullInputData).
* Fix SimpleEvalNode to be consistent with CodeEvaluatorNode
* Fix Vis Node where no eval resps are connected, but resps are connected.
* Rebuild React and update package version
* Convert table values to strings upon export
* Make ordering of LLM responses consistent with vars dict ordering
* Rebuild React and update package version
* Make sure ordering considers vars as objects