* Added `asMarkdownAST` to `ResponseInfo`
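  For illustration, a minimal sketch of how an evaluator might use this (hypothetical usage; the exact shape of the returned tree depends on the Markdown parser and is an assumption here):

  ```python
  # Hypothetical evaluator sketch: score a response by whether it contains
  # at least one fenced code block, using the parsed Markdown tree.
  def evaluate(response):
      ast = response.asMarkdownAST()  # assumed: returns a list of block tokens
      # 'block_code' is the token type in mistune-style ASTs (an assumption)
      return any(node.get("type") == "block_code" for node in ast)
  ```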
* Increased margin between LLM responses in response inspector
* Added Inspect Results footer to eval node after run
* Fixed bug when caching responses after deleting a model and adding a new one
* Fixed bug in error handling in PromptTemplate's `is_concrete()` method
* Improvements to response inspector UI
* Ensured eval results (scores) are sorted alongside their responses in the response inspector UI
* Removed response previews footer, replaced with Inspect responses button
* Prompt and eval nodes now load cached responses upon initialization
* Rebuilt React and raised package version
* Add OpenAI Evals tab to Example Flows pane
* Add OpenAI evals examples (preconverted)
* Set unique IDs for each oaievals cforge file
* Use contenteditable divs in tables to improve performance
* Update eval code to use `json.loads` instead of `eval()`
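  A small illustration (not the project's actual eval code) of why `json.loads` is the safer choice for parsing model output:

  ```python
  import json

  raw = '{"score": 0.9, "passed": true}'

  # json.loads only parses JSON data; it cannot execute code.
  data = json.loads(raw)  # -> {'score': 0.9, 'passed': True}

  # eval() would both fail here (NameError: 'true' is not defined) and,
  # worse, execute arbitrary Python if the string were malicious.
  ```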
* Fix bug with `$` characters in templates
* Update package info and point oaievals to main branch
* Made column headers use contenteditable `<p>` tags
* Add requests to dependency list
* Rebuilt react and updated package version
* Also includes start of categorical variables support in vis node
* Collapses identical responses from the same model (within the `n` responses requested) and shows the number of times each appeared
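  One simple way to implement such collapsing (an illustrative sketch, not necessarily the app's code):

  ```python
  from collections import Counter

  responses = ["Yes.", "Yes.", "No.", "Yes."]
  counts = Counter(responses)  # Counter({'Yes.': 3, 'No.': 1})

  # Show each unique response once, with a count when it repeats:
  for text, n in counts.items():
      print(f"{text}  (appeared {n} times)" if n > 1 else text)
  ```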
* Adds basic support for OpenAI function calls.
* Adds example flow illustrating OpenAI func calls
* Use a controlled Mantine Textarea for the TextFields node, instead of a raw textarea
* Added Tabular data node
* TabularData context menus
* Make TabularData template hooks responsive to column name changes
* Reduced spacing between template hooks
* Better table styling and rename column popup
* Add 'carry with' feature to Prompt Permutation recursive generation using `associate_id`
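  A toy illustration of the 'carry with' idea (hypothetical data; only the `associate_id` name comes from the change above): values sharing an `associate_id` travel together instead of being crossed independently:

  ```python
  # Values with matching associate_ids are paired, rather than taking the
  # full cross product of the two variables:
  questions = [("What is 2+2?", {"associate_id": "q1"}),
               ("Capital of France?", {"associate_id": "q2"})]
  answers   = [("4",     {"associate_id": "q1"}),
               ("Paris", {"associate_id": "q2"})]

  pairs = [(q, a) for q, qm in questions for a, am in answers
           if qm["associate_id"] == am["associate_id"]]
  print(pairs)  # [('What is 2+2?', '4'), ('Capital of France?', 'Paris')]
  ```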
* Much nicer var tags on inspect window
* Nicer styling for LLM group headers in inspect screen
* Pass metavars around in the backend to surface them to the front-end
* Set min-height on inspect node to be larger
* Added in-line display of print output in Eval nodes
* Append error message to print output
* Minor inspect node CSS tweaks and fixes
* Removed mix-blend-mode due to performance issues scrolling large text
* Added ground truth eval example for math problems
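  A hypothetical sketch of such an evaluator, assuming the expected answer is carried alongside each prompt as a metavariable named `Expected` (both the name and the `response.meta` access pattern are assumptions, not the example's actual code):

  ```python
  def evaluate(response):
      # Compare the LLM's answer against the ground truth carried as a metavar.
      expected = str(response.meta.get("Expected", "")).strip()
      return expected != "" and expected in response.text
  ```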
* Updated React build and version number
* Lint Python code with ruff (#60)
* Show failure progress on Prompt Nodes
* Change PromptNode preview container color
* Ensure LLM colors are unique and the same across nodes
* Reset LLM colors upon flow load
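  One common way to keep a model's color stable (a sketch of the general technique, not necessarily this app's implementation):

  ```python
  import hashlib

  PALETTE = ["#e6194b", "#3cb44b", "#4363d8", "#f58231", "#911eb4"]

  def color_for_llm(nickname: str) -> str:
      # Deterministic: the same nickname always maps to the same palette entry,
      # so a model keeps its color across nodes and across flow loads.
      h = int(hashlib.md5(nickname.encode("utf-8")).hexdigest(), 16)
      return PALETTE[h % len(PALETTE)]
  ```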
* Add LLM colors to 3D scatterplot
* Extract inspector internals into a separate component
* Added inspect modal
* Lower rate of failure for dummy LLM responses
* Fix useEffect bug in LLMResponseInspector
* Fix export to excel bug
* Remove dependence on browser support for regex negative lookbehind
* Use monospace font in textareas in Safari
* Fix settings modal bug in Firefox
* Change version
* Update README.md
* Model settings forms
* Editable nicknames and emojis
* Saving and loading model settings
* Temperature indicator on LLM items in PromptNodes
* Ensure LLM nicknames are unique
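  A minimal sketch of one way to de-duplicate nicknames (illustrative; the function name and suffix format are assumptions):

  ```python
  def ensure_unique_nickname(name: str, existing: set[str]) -> str:
      # Append an increasing suffix until the nickname is unused:
      # "GPT-4", "GPT-4 (2)", "GPT-4 (3)", ...
      if name not in existing:
          return name
      i = 2
      while f"{name} ({i})" in existing:
          i += 1
      return f"{name} ({i})"
  ```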
* Detect when PaLM blocks responses and output a standard error message in the response instead
* Fix examples/ to use the new cache format
* Add helpful 'could not reach server' text when `countQueries` fails
* Add Dalai model settings
* Rebuild react and update package version
* Removed the link to the Harvard HCI website, which is terribly out of date (by multiple years), and pointed to the glassmanlab main page instead, where all our publications are (my personal page on our lab website is not very informative)