* Beginning to convert Python backend to TypeScript
* Change all fetch() calls to fetch_from_backend switcher
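
  A minimal sketch of the switcher idea, assuming a mutable backend flag, a hypothetical table of in-browser handlers, and an assumed local port/route prefix; the real routing logic differs:

  ```typescript
  // Toggled by app settings; 'javascript' runs the in-browser TS backend.
  let backendType: 'flask' | 'javascript' = 'javascript';

  // Hypothetical in-browser handlers, keyed by route name:
  const tsHandlers: Record<string, (params: object) => Promise<unknown>> = {};

  async function fetch_from_backend(route: string, params: object): Promise<unknown> {
    if (backendType === 'flask') {
      // Forward the call to the local Python/Flask server
      const res = await fetch(`http://localhost:8000/app/${route}`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(params),
      });
      return res.json();
    }
    // Otherwise, dispatch to the in-browser TypeScript backend
    return tsHandlers[route](params);
  }
  ```
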
* wip converting query.py to query.ts
* wip started utils.js conversion. Tested that the OpenAI API call works
* more progress on converting utils.py to TypeScript
* Jest tests for query, utils, and template.ts. Confirmed PromptPipeline works.
* wip converting queryLLM in flask_app to TS
* Tested queryLLM and StorageCache compressed saving/loading
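
  A minimal sketch of the compressed save/load idea, assuming lz-string over localStorage; the real StorageCache may differ:

  ```typescript
  import LZString from 'lz-string';

  class StorageCache {
    static save(key: string, data: unknown): void {
      // Serialize then compress to a UTF-16 string safe for localStorage
      localStorage.setItem(key, LZString.compressToUTF16(JSON.stringify(data)));
    }

    static load<T>(key: string): T | undefined {
      const compressed = localStorage.getItem(key);
      if (compressed === null) return undefined;
      const json = LZString.decompressFromUTF16(compressed);
      return json ? (JSON.parse(json) as T) : undefined;
    }
  }
  ```
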
* wip execute() in backend.ts
* Added execute() and tested with a concrete function. Still need to test eval()
* Added craco for optional webpack config. Configured for TypeScript, with Node.js packages browserified
* Execute JS code on iframe sandbox
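
  A sketch of the sandboxing approach: run the code in a hidden iframe with `sandbox="allow-scripts"` (an opaque origin, so it cannot touch the parent page) and exchange results via postMessage. The plumbing details here are assumptions:

  ```typescript
  function executeInSandbox(code: string): Promise<unknown> {
    return new Promise((resolve) => {
      const iframe = document.createElement('iframe');
      iframe.style.display = 'none';
      iframe.setAttribute('sandbox', 'allow-scripts');
      // The framed page evals whatever code it receives and posts back
      // the result (which must be structured-cloneable).
      iframe.srcdoc = `<script>
        window.addEventListener('message', (e) => {
          let result;
          try { result = eval(e.data); } catch (err) { result = String(err); }
          e.source.postMessage(result, '*');
        });
      <\/script>`;
      window.addEventListener('message', function onMsg(e) {
        if (e.source !== iframe.contentWindow) return;
        window.removeEventListener('message', onMsg);
        iframe.remove();
        resolve(e.data);
      });
      iframe.onload = () => iframe.contentWindow?.postMessage(code, '*');
      document.body.appendChild(iframe);
    });
  }
  ```
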
* Tested and working JS Evaluator execution.
* wip swapping backends
* Tested TypeScript backend! :) woot
* Added fetchEnvironAPIKeys to Flask server to fetch os.environ keys when running locally
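
  Client-side, the lookup might be as simple as the sketch below; the port and route are assumptions:

  ```typescript
  // Returns e.g. { OPENAI_API_KEY: '...' } for any keys set in the local env
  async function fetchEnvironAPIKeys(): Promise<Record<string, string>> {
    const res = await fetch('http://localhost:8000/app/fetchEnvironAPIKeys');
    return res.json();
  }
  ```
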
* Route Anthropic calls through Flask when running locally
* Added info button to Eval nodes. Rebuilt React
* Edits to info modal on Eval node
* Remove/error out on Python eval nodes when not running locally.
* Check browser compat and display error if not supported
* Changed all example flows to use JS. Bug fix in query.ts
* Refactored to LLMProvider to streamline model additions
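
  The gist of the refactor, sketched with assumed names and model prefixes: a single model-to-provider mapping, so adding a model only needs a new entry rather than new call-site logic:

  ```typescript
  enum LLMProvider {
    OpenAI = 'openai',
    Anthropic = 'anthropic',
    Google = 'google',
    HuggingFace = 'hf',
    Dalai = 'dalai',
  }

  // Illustrative prefixes only; the real lookup is more thorough.
  function getProvider(model: string): LLMProvider {
    if (model.startsWith('gpt-')) return LLMProvider.OpenAI;
    if (model.startsWith('claude')) return LLMProvider.Anthropic;
    if (model.includes('bison')) return LLMProvider.Google;
    if (model.startsWith('alpaca') || model.startsWith('llama'))
      return LLMProvider.Dalai;
    return LLMProvider.HuggingFace;
  }
  ```
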
* Added HuggingFace models API
* Added back Dalai call support, routing through Flask
* Remove Flask app calls and socketio server that are no longer used
* Added Comment Nodes. Rebuilt React.
* Fix PaLM temp=0 bug, update package version, and rebuild React
* Flow autosaving every 60 seconds
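
  Roughly as sketched below, assuming react-flow's toObject() serializer and a localStorage key (both assumptions):

  ```typescript
  const AUTOSAVE_INTERVAL_MS = 60_000;

  function startAutosaving(rfInstance: { toObject: () => object }): void {
    setInterval(() => {
      try {
        localStorage.setItem('chainforge-flow', JSON.stringify(rfInstance.toObject()));
      } catch (err) {
        console.error('Autosave failed:', err); // e.g. storage quota exceeded
      }
    }, AUTOSAVE_INTERVAL_MS);
  }
  ```
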
* Set viewport upon resetFlow
* Added x-axis, y-axis, etc. headers to Vis node. Ensured left padding sizes to shortnames.
* When num generations per prompt = 1, now plots a single bar chart with solid LLM color
* Rebuilt React and updated package version
* Added asMarkdownAST to ResponseInfo
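
  A sketch of the idea, assuming a markdown-it token stream (the actual parser may differ): eval code can walk headings, lists, and code fences instead of regexing raw text.

  ```typescript
  import MarkdownIt from 'markdown-it';

  class ResponseInfo {
    constructor(public text: string) {}

    // Parse the LLM response into a markdown token stream
    asMarkdownAST() {
      return new MarkdownIt().parse(this.text, {});
    }
  }
  ```
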
* Increased margin between LLM responses in response inspector
* Added Inspect Results footer to eval node after run
* Fixed bug when caching responses after deleting a model and adding a new one
* Fixed bug in catching error in PromptTemplate's `is_concrete()` method
* Improvements to response inspector UI
* Ensured eval results (scores) are sorted alongside their responses in the response inspector UI
* Removed response previews footer, replaced with Inspect responses button
* Prompt and eval nodes now load cached responses upon initialization
* Rebuilt React and raised package version
* Add OpenAI Evals tab to Example Flows pane.
* Add OpenAI evals examples (preconverted).
* Set unique IDs for each oaievals cforge file
* Use contenteditable divs in tables to improve performance.
* Update eval code to use json.loads instead of eval()
* Fix bug with $s in templates
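
  A plausible reading of this class of bug: `$` is special in String.replace's replacement string (`$&`, `$1`, `$$`), so a substituted value like "$1.50" gets mangled; passing a replacer function sidesteps the special-casing entirely.

  ```typescript
  const fill = (tmpl: string, vars: Record<string, string>): string =>
    tmpl.replace(/\{(\w+)\}/g, (m, name) => (name in vars ? vars[name] : m));

  fill('Price: {price}', { price: '$1.50' });
  // => 'Price: $1.50'; a naive .replace(/\{(\w+)\}/g, '$1.50') would
  // yield 'Price: price.50', since $1 refers to the capture group.
  ```
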
* Update package info and point oaievals to main branch
* Made column headers use contenteditable p tags
* Add requests to dependency list
* Rebuilt React and updated package version
* Also includes the start of categorical variable support in the Vis node
* Collapses same model responses (within `n` responses requested) and provides number of times they appeared
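
  Conceptually, as sketched below with illustrative types:

  ```typescript
  // Collapse duplicate responses and count how often each appeared.
  function collapseResponses(responses: string[]): { text: string; count: number }[] {
    const counts = new Map<string, number>();
    for (const r of responses) counts.set(r, (counts.get(r) ?? 0) + 1);
    return Array.from(counts, ([text, count]) => ({ text, count }));
  }

  collapseResponses(['Yes.', 'Yes.', 'No.']);
  // => [{ text: 'Yes.', count: 2 }, { text: 'No.', count: 1 }]
  ```
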
* Adds basic support for OpenAI function calls.
* Adds example flow illustrating OpenAI func calls
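
  For reference, the 2023-era OpenAI function-calling request shape looks like the sketch below; ChainForge's own wrapper code differs:

  ```typescript
  const body = {
    model: 'gpt-3.5-turbo-0613',
    messages: [{ role: 'user', content: 'What is the weather in Boston?' }],
    functions: [{
      name: 'get_current_weather',
      description: 'Get the current weather for a city',
      parameters: {  // JSON Schema for the function's arguments
        type: 'object',
        properties: { location: { type: 'string' } },
        required: ['location'],
      },
    }],
    function_call: 'auto',
  };
  // Instead of message.content, the model may return
  // message.function_call = { name, arguments } with JSON-encoded arguments.
  ```
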
* Use a controlled Mantine Textarea for the TextFields node, instead of a raw textarea
* Added Tabular data node
* TabularData context menus
* Make TabularData template hooks responsive to column name changes.
* Reduced spacing between template hooks
* Better table styling and rename column popup
* Add 'carry with' feature to Prompt Permutation recursive generation using `associate_id`
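
  The gist of 'carry with', in an illustrative sketch (types and names assumed): values sharing an `associate_id` are selected together, so e.g. an expected answer always travels with its question across permutations.

  ```typescript
  interface TemplateVar { text: string; associate_id?: string }

  function carriedPairs(questions: TemplateVar[], answers: TemplateVar[]) {
    return questions.map((q) => ({
      question: q.text,
      // Pick the answer that was authored alongside this question
      answer: answers.find((a) => a.associate_id === q.associate_id)?.text,
    }));
  }
  ```
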
* Much nicer var tags on inspect window
* Nicer styling for LLM group headers in inspect screen
* Pass metavars around in the backend to surface them to front-end
* Set min-height on inspect node to be larger
* Added in-line print in Eval nodes
* Append error message to print output
* Minor inspect node CSS tweaks and fixes
* Removed mix-blend-mode due to performance issues scrolling large text
* Added ground truth eval example for math problems
* Updated React build and version number
* Lint Python code with ruff (#60)
* Show failure progress on Prompt Nodes
* Change PromptNode preview container color
* Ensure LLM colors are unique and the same across nodes
* Reset LLM colors upon flow load
* Add LLM colors to 3D scatterplot
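
  A minimal sketch of the shared color assigner behind the three entries above; the palette and keying are assumptions:

  ```typescript
  const PALETTE = ['#66c2a5', '#fc8d62', '#8da0cb', '#e78ac3', '#a6d854'];
  const assigned = new Map<string, string>();

  // Same model name => same color, in every node and plot.
  function colorForLLM(name: string): string {
    if (!assigned.has(name))
      assigned.set(name, PALETTE[assigned.size % PALETTE.length]);
    return assigned.get(name)!;
  }

  // Called when a flow is loaded, so colors are reassigned fresh.
  function resetLLMColors(): void {
    assigned.clear();
  }
  ```
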
* Extract inspector internals into separate component.
* Added inspect modal.
* Lower rate of failure for dummy LLM responses
* Fix useEffect bug in LLMResponseInspector
* Fix export to excel bug
* Remove dependence on browser support for regex negative lookbehind
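
  Safari (pre-16.4) throws a parse error on lookbehind patterns like `/(?<!\\)\{(\w+)\}/`. One lookbehind-free alternative, sketched with hypothetical names:

  ```typescript
  // Also match an optional leading backslash, then skip escaped braces.
  const braceVar = /\\?\{(\w+)\}/g;

  function fillUnescaped(tmpl: string, vars: Record<string, string>): string {
    return tmpl.replace(braceVar, (m, name) =>
      (m.startsWith('\\') || !(name in vars)) ? m : vars[name]);
  }

  fillUnescaped('Hi {name}! Literal: \\{name}', { name: 'Ada' });
  // => 'Hi Ada! Literal: \{name}'
  ```
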
* Use monospace font in textareas in Safari
* Fix settings modal bug in Firefox
* Change version
* Update README.md
* Model settings forms
* Editable nicknames and emojis
* Saving and loading model settings
* Temperature indicator on LLM items in PromptNodes
* Ensure LLM nicknames are unique
* Detect when PaLM blocks responses and output standard error msg in response instead
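
  A sketch of the detection, assuming the 2023 PaLM REST shape where a fully blocked request returns no candidates and lists reasons under `filters`:

  ```typescript
  interface PaLMResponse {
    candidates?: { output: string }[];
    filters?: { reason: string }[];
  }

  function extractPaLMText(resp: PaLMResponse): string {
    if (!resp.candidates || resp.candidates.length === 0)
      return `[[BLOCKED_REQUEST]] PaLM suppressed the response. Reason(s): ${
        (resp.filters ?? []).map((f) => f.reason).join(', ') || 'unknown'}`;
    return resp.candidates[0].output;
  }
  ```
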
* Fix examples/ to use new cache format
* Add helpful 'could not reach server' text on countQueries fail
* Add Dalai model settings
* Rebuild React and update package version
* The Harvard HCI website is terribly out of date (by multiple years) and my personal page on our lab website is not very informative, so I removed the Harvard HCI link and pointed to the glassmanlab main page, where all our publications are.