* Add human ratings to inspectors
* Store human labels in cache, not resp objs
* Change rating UI to pull from Zustand store
* Lazy load inspectors
* Update version and rebuild app
* Remove notification dots
* Add batch uids to response objects.
* Regroup responses by batch ids in inspectors. Add batch ids to resp objs. Update examples.
* Bug fix: clear RF state first before loading a flow
* Add random sample toggle to Tabular Data node
* Make sample UI location conditional on num cols so it fits more nicely into whitespace
* Adds 'settings template vars' to parametrize on model settings.
* Typecast settings vars params
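The two entries above introduce template variables inside model settings; since a filled template value is always a string, it must be cast to whatever type the setting expects. A minimal sketch of that typecasting step, with a hypothetical helper name (not the actual ChainForge function):

```typescript
// Hypothetical helper (not the actual ChainForge function): cast a settings
// template var, which arrives as a string once the template is filled, to the
// type its model-settings field expects.
type SettingType = "number" | "boolean" | "string";

function typecastSettingValue(raw: string, expected: SettingType): number | boolean | string {
  switch (expected) {
    case "number": {
      const n = parseFloat(raw);
      if (Number.isNaN(n)) throw new Error(`Cannot cast "${raw}" to a number.`);
      return n;
    }
    case "boolean":
      return raw.trim().toLowerCase() === "true";
    default:
      return raw; // plain strings pass through unchanged
  }
}

// e.g., a temperature settings var filled with the string "0.7":
const temperature = typecastSettingValue("0.7", "number"); // 0.7
```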
* Rebuild app and update version
* Add red dot in Inspect Responses footer to indicate something changed
* Abstract out inspect footer button to component
* Add tooltips to AddNode menu items.
* Simple eval wip
* Add menu sections to Add Node. Minor tweaks to simple eval.
* Save state of simple eval when editing fields
* Add 'only show scores' toggle to response inspector
* Change 2 example flows to use simple evals. Fix bg of toolbar buttons.
* Update version and rebuild react
* Add LLM scorer node (#107)
* Modularize the LLM list container, extracting it from prompt node
* Working LLM scorer node
* Bug and minor fixes
* Change modals to use percentage-based left positioning.
* Add inspect response footer to LLMEvalNode.
* Make Play buttons light green
* Fix React errors with keys in JSX arrays
* Add Chat Turn node and support for chat history (#108)
* Adds chat_history across backend's cache and querying mechanisms.
* Adds Chat Turn nodes, which allow for continuing a conversation.
* Adds automatic conversions of ChatHistory (in OpenAI format) to Anthropic and Google PaLM's chat formats. Converts chat history to appropriate format and passes it as context in the API call.
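As a rough illustration of the conversion described above: assuming OpenAI-style `{role, content}` messages and the Human/Assistant prompt string that Anthropic's pre-Messages completions API expected, a converter could look like the sketch below (the helper name is hypothetical):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical converter: flatten OpenAI-format chat history into the
// "\n\nHuman: ...\n\nAssistant: ..." prompt string that Anthropic's
// (pre-Messages) completions API expected. System messages are prepended.
function toAnthropicPrompt(history: ChatMessage[], newUserMsg: string): string {
  let prompt = "";
  for (const msg of history) {
    if (msg.role === "system") prompt += `${msg.content}\n`;
    else if (msg.role === "user") prompt += `\n\nHuman: ${msg.content}`;
    else prompt += `\n\nAssistant: ${msg.content}`;
  }
  prompt += `\n\nHuman: ${newUserMsg}\n\nAssistant:`;
  return prompt;
}
```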
* Bug fix and error popup when a past conversation is missing in Chat Turn
* Bug squashing to make progress on Chat Turn node
* More bug squashing
* Color false scores bright red in eval inspector
* Fix tooltip when a continued chat is present
* Rebuild react
* Bug fix in LLM eval node
* Add HF chat model support.
* Show multiple response objs in table inspector view
* Fix LLM item deletion bug
* Rebuild react and update package version
* Fix obscure bug when LLM outputs have no 'llm' property (due to prior CF version)
* Fix isLooselyEqual bug
* Update examples so that their cached 'fields' include llm nicknames
* Rebuild react
* Add Chelse to readme
* Add tooltip to prompt preview button
* Focus scroll wheel on Text Fields node textareas
* Replace escaped { and } with their bare versions
* Escape braces in tabular data by default. Ignore empty rows.
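A small sketch of the brace escaping above, assuming a backslash escape character (the exact escape convention is an assumption):

```typescript
// Assumes braces are escaped with a backslash, so "{name}" in a table cell is
// stored as "\{name\}" and is not picked up as a template variable.
const escapeBraces = (s: string): string => s.replace(/[{}]/g, (m) => `\\${m}`);
const unescapeBraces = (s: string): string => s.replace(/\\([{}])/g, "$1");

escapeBraces("use {x} literally");       // "use \{x\} literally"
unescapeBraces("use \\{x\\} literally"); // "use {x} literally"
```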
* Add ability to disable fields on textfields
* Make sure deleting a field deletes its fields_visibility
* Add withinPortal to Tooltips on side-buttons in text fields
* Add Anthropic model Claude-2.
* Beginning to convert Python backend to Typescript
* Change all fetch() calls to fetch_from_backend switcher
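A hedged sketch of that switcher idea: route each named backend call either to the in-browser TypeScript backend or to the local Flask server. Function names, the module path, and the port are illustrative, not the exact ChainForge API:

```typescript
// Illustrative only (names, module path, and port are assumptions): route each
// backend call either to the in-browser TypeScript backend or to the local
// Flask server, depending on how the app is running.
const USE_TS_BACKEND = true; // e.g., false when running the local Flask server

async function fetch_from_backend(route: string, params: Record<string, unknown>): Promise<unknown> {
  if (USE_TS_BACKEND) {
    const tsBackend = await import("./backend"); // hypothetical TS backend module
    return (tsBackend as Record<string, any>)[route](params);
  }
  const resp = await fetch(`http://localhost:8000/app/${route}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  return resp.json();
}
```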
* wip converting query.py to query.ts
* wip started utils.js conversion. Tested that OpenAI API call works
* more progress on converting utils.py to Typescript
* jest tests for query, utils, template.ts. Confirmed PromptPipeline works.
* wip converting queryLLM in flask_app to TS
* Tested queryLLM and StorageCache compressed saving/loading
* wip execute() in backend.ts
* Added execute() and tested with a concrete func. Need to test eval()
* Added craco for optional webpack config. Config'd for TypeScript with Node.js packages browserify'd
* Execute JS code on iframe sandbox
* Tested and working JS Evaluator execution.
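A minimal sketch of the iframe-based evaluation above: a hidden iframe gives user code its own global scope, separate from the app. The real evaluator passes response objects in and collects scores and errors, and may apply stricter sandboxing; the helper below is only illustrative:

```typescript
// Minimal sketch: evaluate user-provided JS inside a hidden iframe so it runs
// in its own global scope, separate from the app. Names here are illustrative.
function runInSandbox(code: string): unknown {
  const iframe = document.createElement("iframe");
  iframe.style.display = "none";
  document.body.appendChild(iframe);
  try {
    // about:blank iframes share our origin, so we can reach their eval()
    return (iframe.contentWindow as any).eval(code);
  } finally {
    document.body.removeChild(iframe);
  }
}

// e.g., runInSandbox(`(${userEvalFnSource})(${JSON.stringify(response)})`)
```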
* wip swapping backends
* Tested TypeScript backend! :) woot
* Added fetchEnvironAPIKeys to Flask server to fetch os.environ keys when running locally
* Route Anthropic calls through Flask when running locally
* Added info button to Eval nodes. Rebuilt react
* Edits to info modal on Eval node
* Remove/error out on Python eval nodes when not running locally.
* Check browser compat and display error if not supported
* Changed all example flows to use JS. Bug fix in query.ts
* Refactored to LLMProvider to streamline model additions
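A sketch of the LLMProvider idea: map each model name to its provider so adding a new model only requires a routing rule. The enum members and name prefixes below are assumptions, not the exact ChainForge definitions:

```typescript
// Sketch only: enum members and name prefixes are assumptions, not the exact
// ChainForge definitions.
enum LLMProvider {
  OpenAI = "openai",
  Anthropic = "anthropic",
  Google = "google",
  HuggingFace = "hf",
}

function getProvider(model: string): LLMProvider | undefined {
  if (model.startsWith("gpt-")) return LLMProvider.OpenAI;
  if (model.startsWith("claude")) return LLMProvider.Anthropic;
  if (model.includes("bison")) return LLMProvider.Google;
  if (model.startsWith("hf/")) return LLMProvider.HuggingFace;
  return undefined;
}

getProvider("claude-2");      // LLMProvider.Anthropic
getProvider("gpt-3.5-turbo"); // LLMProvider.OpenAI
```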
* Added HuggingFace models API
* Added back Dalai call support, routing through Flask
* Remove flask app calls and socketio server that are no longer used
* Added Comment Nodes. Rebuilt react.
* Fix PaLM temp=0 bug, update package version and rebuild react
* Added asMarkdownAST to ResponseInfo
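Illustrating `asMarkdownAST` with unified/remark-parse (the parser library ChainForge actually uses is an assumption here): parse a response's text into a Markdown AST so evaluator code can inspect its structure:

```typescript
import { unified } from "unified";
import remarkParse from "remark-parse";

// Illustrative only: the parser library ChainForge actually uses is an assumption.
function asMarkdownAST(text: string) {
  return unified().use(remarkParse).parse(text);
}

// e.g., count top-level headings in a response inside an evaluator:
const ast = asMarkdownAST("# Title\n\nSome text\n\n## Section");
const numHeadings = ast.children.filter((n) => n.type === "heading").length; // 2
```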
* Increased margin between LLM responses in response inspector
* Added Inspect Results footer to eval node after run
* Fixed bug when caching responses after deleting a model and adding a new one
* Fixed bug in catching error in PromptTemplate's `is_concrete()` method
* Improvements to response inspector UI
* Ensured eval results (scores) are sorted alongside their responses in the response inspector UI
* Removed response previews footer, replaced with Inspect responses button
* Prompt and eval nodes now load cached responses upon initialization
* Rebuilt React and raised package version
* Use a controlled Mantine Textarea for the Text Fields node, instead of a raw textarea
* Added Tabular data node
* TabularData context menus
* Make TabularData template hooks responsive to column name changes.
* Reduced spacing between template hooks
* Better table styling and rename column popup
* Add 'carry with' feature to Prompt Permutation recursive generation using `associate_id`
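A simplified illustration of the 'carry with' idea, not the actual PromptTemplate code: values sharing an `associate_id` travel together during permutation, so a question is only ever paired with its own expected answer:

```typescript
// Simplified illustration, not the actual PromptTemplate code: values sharing
// an associate_id are carried together, so question[i] never pairs with
// answer[j] from a different row.
type TemplateVarValue = { text: string; associate_id?: string };

function pairByAssociateId(
  questions: TemplateVarValue[],
  answers: TemplateVarValue[],
): Array<{ question: string; answer: string }> {
  return questions.flatMap((q) =>
    answers
      .filter((a) => a.associate_id === q.associate_id)
      .map((a) => ({ question: q.text, answer: a.text })),
  );
}

const qs = [{ text: "2+2?", associate_id: "r0" }, { text: "3*3?", associate_id: "r1" }];
const ans = [{ text: "4", associate_id: "r0" }, { text: "9", associate_id: "r1" }];
pairByAssociateId(qs, ans); // [{question: "2+2?", answer: "4"}, {question: "3*3?", answer: "9"}]
```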
* Much nicer var tags on inspect window
* Nicer styling for LLM group headers in inspect screen
* Pass metavars around in the backend to surface them to front-end
* Set min-height on inspect node to be larger
* Added in-line print in Eval nodes
* Append error message to print output
* Minor inspect node CSS tweaks and fixes
* Removed mix-blend-mode due to performance issues when scrolling large text
* Added ground truth eval example for math problems
* Updated React build and version number