* feat(bedrock_llama3): added support for Llama3 (#270)
- also added Claude 3 Opus to the list of models
- replaced hardcoded model ID strings with references to the NativeLLM enum
* chore: bump @mirai73/bedrock-fm library (#277)
- the new version adds source code to facilitate debugging
Co-authored-by: ianarawjo <fatso784@gmail.com>
* Add Together.ai support (#280)
---------
Co-authored-by: ianarawjo <fatso784@gmail.com>
* Add Together.ai and update Bedrock models
---------
Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
Co-authored-by: Can Bal <canbal@users.noreply.github.com>
* Port over and type MultiEvalNode code from the `multi-eval` branch
* Merge css changes from `multi-eval`
* Merge changes to inspector table view from `multi-eval`
* Criteria progress rings
* Debounce renders on text edits
* Add sandbox toggle to Python evals inside MultiEval
* Add uids to evals in MultiEval, so cache ids are correct and not dependent on the eval's name
* Lay out scores with <Stack>
* Add debounce to editing code or prompts in eval UI
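The debouncing these entries mention can be sketched generically as follows (an illustrative sketch, not the actual ChainForge implementation):

```typescript
// Generic trailing-edge debounce: delays invoking `fn` until `waitMs`
// milliseconds have elapsed since the most recent call. Each new call
// resets the timer, so a burst of text edits triggers only one render.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

Wrapping the expensive re-render (or code/prompt save handler) in `debounce(handler, 200)` keeps the UI responsive while typing.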
* Update package version
* Create a Dockerfile
* Edit README.md to describe running ChainForge inside a container
* Update Dockerfile to use Python 3.10 as the base image
---------
Co-authored-by: Rob-Powell <7034920+Rob-Powell@users.noreply.github.com>
* Refactor: modularize response boxes into a separate component
* Type store.js. Change info to vars. NOTE: This may break backward compatibility.
* Refactor addNodes in App.tsx to be simpler.
* Turn AlertModal into a Provider with useContext
* Remove fetch_from_backend.
* Add build/ to gitignore
* Add support for image models, and add DALL·E models.
* Better rate limiting with Bottleneck
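Bottleneck spaces out job launches (`minTime`) and caps concurrency (`maxConcurrent`); in the library itself this is roughly `new Bottleneck({ minTime: 200 }).schedule(fn)`. A stripped-down sketch of the `minTime` idea, to show what the limiter buys here (not Bottleneck's actual implementation):

```typescript
// Minimal illustration of minTime-style spacing: each scheduled job
// starts at least `minTimeMs` after the previous one was launched,
// which keeps bursts of LLM API calls under provider rate limits.
class MinTimeLimiter {
  private nextSlot = 0; // earliest timestamp the next job may start

  constructor(private minTimeMs: number) {}

  schedule<T>(job: () => Promise<T>): Promise<T> {
    const now = Date.now();
    const startAt = Math.max(now, this.nextSlot);
    this.nextSlot = startAt + this.minTimeMs;
    const delay = startAt - now;
    return new Promise<T>((resolve, reject) => {
      setTimeout(() => job().then(resolve, reject), delay);
    });
  }
}
```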
* Fix new Chrome bug where file-import readers did not appear as arrays; fix bug with exportCache
* Add ability to add custom right-click context menu items per node
* Convert to/from TF and Items nodes
* Add lazyloader for images
* Add compression to images by default before storing in cache
* Add image compression toggle in Global Settings
* Move Alert Provider to top level of index.js
* Adding support for Amazon Bedrock models (#247)
* Create global setting for GenAI features provider, to support Bedrock (Anthropic) models as an alternative
* Reformats dropdown in PromptNode to use Mantine ContextMenu with a nested menu, to save space.
* Remove build folder from git
* Fix context menu to close on click-off. Refactor context menu array code.
* Ensure context menu is positioned below the Add+ button, like a proper dropdown.
* Toggle context menu off when clicking btn again.
---------
Co-authored-by: Massimiliano Angelino <angmas@amazon.com>
* Add human ratings to inspectors
* Store human labels in cache, not response objects
* Change rating UI to pull from Zustand store
* Lazy load inspectors
* Update version and rebuild app
Adds a Pyodide WebWorker to run Python scripts, thanks to an idea by Shreya.
* Add sandbox option to Python eval nodes.
* Add new Anthropic models
* Disable guards for Python evals on server
* Fix bug with detecting async functions in runOverResponses
---------
Co-authored-by: Shreya Shankar <ss.shankar505@gmail.com>
* Adds a purple GenAI button to Code Evaluator Nodes, to allow easier creation of evaluation functions. (NOTE: This, like the TextFields and Items Nodes GenAI features, is experimental and requires an OpenAI API key to access.)
* Adds a drop-down to LLM evaluators
* Ensures LLM evaluators load cached responses on load
* Fixes a bug where right-clicking in pop-up Inspectors would bring up the node context menu.
* Internally, refactors evaluator nodes to have inner components that take care of running evaluations, in preparation for multi-eval and running evals elsewhere
* Remove notification dots
* Add batch uids to response objects.
* Regroup responses by batch ids in inspectors. Add batch ids to response objects. Update examples.
* Bug fix: clear RF state first before loading a flow
* Add random sample toggle to Tabular Data node
* Make sample UI location conditional on the number of columns, and fit it more nicely into the whitespace
* Adds 'settings template vars' to parametrize on model settings.
* Typecast settings vars params
* Rebuild app and update version
* Add Stop button
* Replaced QueryTracker stop checks in _prompt_llm in query.ts. Modified _prompt_llm and *gen_responses to take in the node id for checking purposes. Added a new CSS class for the stopping status.
* Used a callback function instead of passing the id to the backend; renamed QueryStopper and some of its functions; added a custom error
* Added semicolons and one more UserForcedPrematureExit check
* Revise canceler to never clear the id, and use a unique Date.now id instead
* Make cancel go into call_llm funcs
* Cleanup console logs
* Rebuild app and update package version
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
Co-authored-by: Ian Arawjo <fatso784@gmail.com>
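A minimal sketch of the cancellation pattern the Stop-button commits above describe; the names QueryStopper and UserForcedPrematureExit come from the commit messages, but the real implementation may differ:

```typescript
// Custom error so callers can tell a user-initiated stop apart
// from a genuine failure.
class UserForcedPrematureExit extends Error {
  constructor() {
    super("User forced premature exit");
    this.name = "UserForcedPrematureExit";
  }
}

// Tracks which runs have been stopped. Stopped ids are never cleared;
// instead each run gets a unique Date.now()-based id, so a stale id
// from an earlier run can never affect the current one.
class QueryStopper {
  private static stopped = new Set<string>();

  static newId(): string {
    return `${Date.now()}-${Math.random().toString(36).slice(2)}`;
  }

  static stop(id: string): void {
    this.stopped.add(id);
  }

  // Passed as a callback into the LLM-calling functions, which invoke
  // it between requests so a run can bail out early.
  static checkIfStopped(id: string): void {
    if (this.stopped.has(id)) throw new UserForcedPrematureExit();
  }
}
```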
* Add basic Ollama support (#208)
* Remove trapFocus warning when no OpenAI key set
* Ensure Ollama is only visible in providers list if running locally.
* Remove Dalai.
* Fix Ollama support to include chat models and pass chat history correctly
* Fix bug with debounce on progress bar updates in Prompt/Chat nodes
* Rebuild app and update package version
---------
Co-authored-by: Laurent Huberdeau <16990250+laurenthuberdeau@users.noreply.github.com>
* Add search bar to Response Inspector
* Added search-text highlights using <mark> tags
* Add filter and case sensitive toggles
* Fixed inspector UI for wide and non-wide formats, to include Find bar
* Escape the search string before constructing a RegExp. Fix longstanding refresh issue when a template var is removed.
* Fix styling inconsistency w border width when displaying LLM responses on Firefox
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
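The escaping step in the search-bar commits above can be sketched like this (a standard approach; the helper names are illustrative, not the actual code):

```typescript
// Escape all RegExp metacharacters so arbitrary user input from the
// search bar can be embedded in a pattern and matched literally.
function escapeRegExp(text: string): string {
  return text.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Build the search pattern; the case-sensitive toggle maps to the
// presence or absence of the `i` flag.
function buildSearchRegex(query: string, caseSensitive: boolean): RegExp {
  return new RegExp(escapeRegExp(query), caseSensitive ? "g" : "gi");
}
```

Without escaping, a query like `1+1` or `c++` would either throw at `new RegExp` or silently match the wrong text.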
* Removed 'source-code-pro' from code CSS to fix cursor accuracy in the code editor (#199)
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
* Refactor duplicate code (#198)
* Refactor common constants from JoinNode.js, LLMResponseInspector.js, SplitNode.js, and VisNode.js into utils.ts
* Un-factor a constant that shared a name but had different definitions; fixed syntax for multiple imports
---------
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>
* Bug fix to update visibility on TF fields
* Rebuild React app and update version
---------
Co-authored-by: Kayla Z <77540029+kamazet@users.noreply.github.com>
Co-authored-by: Kayla Zethelyn <kaylazethelyn@college.harvard.edu>