* Invalidate eval node upon upstream changes.
* Chain update pings across nodes. Autoresize textfields when typing.
* Use wide output handles when the entire node is the output
* Update package version
* Add tooltip to prompt preview button
* Focus scroll wheel on text field textareas
* Replace escaped { and } with their bare versions
* Escape braces in tabular data by default; ignore empty rows (see the brace-escaping sketch after this list)
* Add ability to disable fields on text field nodes
* Make sure deleting a field also deletes its fields_visibility entry
* Add withinPortal to Tooltips on side-buttons in text fields
* Add Anthropic model Claude-2.
* Resizable text field in prompt nodes
* Show a confirmation popup when the user clicks X to delete a node
* Collapsible supergroups in response inspector
* Nicer hierarchical color scheme for response group headers in response inspectors
* Share button implementation and testing
* Bug fix for Azure OpenAI (missing `bind` call)
* Change `call_anthropic` to go through server proxy when not running locally
* Cleanup of unused imports
* Fixed bug in Azure OpenAI call
* Update README.md with Share, Play link, etc.
* Update package version to 0.2.0.2
* Begin converting the Python backend to TypeScript
* Change all fetch() calls to go through a fetch_from_backend switcher (see the sketch after this list)
* WIP: converting query.py to query.ts
* WIP: started utils.js conversion; tested that the OpenAI API call works
* More progress on converting utils.py to TypeScript
* Jest tests for query, utils, and template.ts. Confirmed PromptPipeline works.
* WIP: converting queryLLM in flask_app to TypeScript
* Tested queryLLM and StorageCache compressed saving/loading
* WIP: execute() in backend.ts
* Added execute() and tested with a concrete function; still need to test eval()
* Added CRACO for optional webpack config; configured for TypeScript with Node.js packages browserified
* Execute JS code in an iframe sandbox (see the sketch after this list)
* Tested JS Evaluator execution; working.
* WIP: swapping backends
* Tested TypeScript backend! :) woot
* Added fetchEnvironAPIKeys to Flask server to fetch os.environ keys when running locally
* Route Anthropic calls through Flask when running locally
* Added info button to Eval nodes. Rebuilt React
* Edits to info modal on Eval node
* Remove Python eval nodes, or error out, when not running locally.
* Check browser compat and display error if not supported
* Changed all example flows to use JS. Bug fix in query.ts
* Refactored to LLMProvider to streamline model additions
* Added HuggingFace models API
* Added back Dalai call support, routing through Flask
* Remove Flask app calls and SocketIO server that are no longer used
* Added Comment Nodes. Rebuilt React.
* Fix PaLM temp=0 bug, update package version, and rebuild React
* Flow autosaving every 60 seconds
* Set viewport upon resetFlow
* Added x-axis, y-axis, etc. headers to Vis node. Ensured left padding sizes to short names.
* When the number of generations per prompt is 1, now plots a single bar chart with solid LLM color
* Rebuilt React and updated package version
* Added asMarkdownAST to ResponseInfo
* Increased margin between LLM responses in response inspector
* Added Inspect Results footer to eval node after run
* Fixed bug when caching responses after deleting a model and adding a new one
* Fixed bug in catching an error in PromptTemplate's `is_concrete()` method
* Improvements to response inspector UI
* Ensured eval results (scores) are sorted alongside their responses in the response inspector UI
* Removed response previews footer, replaced with Inspect responses button
* Prompt and eval nodes now load cached responses upon initialization
* Rebuilt React and raised package version
* Add OpenAI Evals tab to Example Flows pane.
* Add OpenAI evals examples (preconverted).
* Set unique IDs for each oaievals cforge file
* Use contenteditable divs in tables to improve performance.
* Update eval code to use json.loads instead of eval()
* Fix bug with `$`s in templates
* Update package info and point oaievals to main branch
* Made column headers use contenteditable p tags
* Add `requests` to dependency list
* Rebuilt react and updated package version
* Also includes the start of categorical variable support in the Vis node
* Collapses identical model responses (within the `n` responses requested) and shows the number of times each appeared
* Adds basic support for OpenAI function calls.
* Adds an example flow illustrating OpenAI function calls
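
For context on the brace-escaping entries above, here is a minimal sketch of how escaping `{`/`}` in tabular cell values (and restoring them later) might work. The `escapeBraces`/`unescapeBraces` names and the backslash escape style are assumptions for illustration, not necessarily the identifiers used in the codebase.

```ts
// Sketch: escape braces so tabular cell values aren't parsed as template
// variables, and restore them before display. Names are illustrative.
function escapeBraces(s: string): string {
  return s.replace(/\{/g, "\\{").replace(/\}/g, "\\}");
}

function unescapeBraces(s: string): string {
  return s.replace(/\\\{/g, "{").replace(/\\\}/g, "}");
}

// Escape every cell by default and drop empty rows:
const rows: string[][] = [["{name}", "Alice"], [], ["plain", "Bob"]];
const cleaned = rows
  .filter((r) => r.some((cell) => cell.trim() !== ""))
  .map((r) => r.map(escapeBraces));
// cleaned[0][0] === "\\{name\\}"; the empty row is gone.
```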
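The fetch_from_backend switcher mentioned above can be pictured roughly as below: dispatch to the in-browser TypeScript backend when a native implementation exists, otherwise fall back to the local Flask server. The module path, route names, and Flask URL here are assumptions for illustration.

```ts
import { queryLLM, executejs } from "./backend"; // hypothetical TS backend module

// Routes that now have native TypeScript implementations:
const NATIVE_ROUTES: Record<string, (params: any) => Promise<any>> = {
  queryllm: queryLLM,
  executejs: executejs,
};

const FLASK_BASE_URL = "http://localhost:8000/app/"; // assumed local Flask URL

export async function fetch_from_backend(route: string, params: any): Promise<any> {
  const native = NATIVE_ROUTES[route];
  if (native) return native(params); // run entirely in the browser

  // Fall back to the Flask server (e.g., Python evals, Dalai) when running locally.
  const resp = await fetch(FLASK_BASE_URL + route, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(params),
  });
  return resp.json();
}
```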
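The iframe-sandbox execution for the JS Evaluator can be sketched as follows: the user's `evaluate()` code is injected into a hidden, sandboxed iframe and the result is posted back to the parent. The function name, the single-response signature, and the messaging details are simplified assumptions.

```ts
// Sketch: run user-supplied evaluator code in a sandboxed iframe so it cannot
// touch the main app's DOM or storage. Details are illustrative.
function runInSandbox(userCode: string, responseText: string): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const iframe = document.createElement("iframe");
    iframe.setAttribute("sandbox", "allow-scripts"); // scripts only, no same-origin access
    iframe.style.display = "none";

    // Wait for the iframe to post its result (or error) back to the parent.
    const onMessage = (event: MessageEvent) => {
      window.removeEventListener("message", onMessage);
      document.body.removeChild(iframe);
      if (event.data && event.data.error) reject(new Error(event.data.error));
      else resolve(event.data ? event.data.result : undefined);
    };
    window.addEventListener("message", onMessage);

    // Inject the user's evaluate() function and call it on the response text.
    iframe.srcdoc = `<script>
      try {
        ${userCode}
        const result = evaluate(${JSON.stringify(responseText)});
        parent.postMessage({ result }, "*");
      } catch (err) {
        parent.postMessage({ error: String(err) }, "*");
      }
    <\/script>`;

    document.body.appendChild(iframe);
  });
}
```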