{"flow": {"nodes": [{"width": 312, "height": 311, "id": "prompt-reverse-string", "type": "prompt", "data": {"prompt": "{prompt}", "n": 1, "llms": [{"key": "aa3c0f03-22bd-416e-af4d-4bf5c4278c99", "settings": {"system_msg": "You are a helpful assistant.", "temperature": 1, "functions": [], "function_call": "", "top_p": 1, "stop": [], "presence_penalty": 0, "frequency_penalty": 0}, "name": "GPT3.5", "emoji": "\ud83d\ude42", "model": "gpt-3.5-turbo", "base_model": "gpt-3.5-turbo", "temp": 1, "formData": {"shortname": "GPT3.5", "model": "gpt-3.5-turbo", "system_msg": "You are a helpful assistant.", "temperature": 1, "functions": "", "function_call": "", "top_p": 1, "stop": "", "presence_penalty": 0, "frequency_penalty": 0}}]}, "position": {"x": 448, "y": 224}, "selected": false, "positionAbsolute": {"x": 448, "y": 224}, "dragging": false}, {"width": 333, "height": 182, "id": "eval-reverse-string", "type": "evaluator", "data": {"code": "function evaluate(response) {\n\tlet ideal = response.meta['Ideal'];\n\treturn response.text.startsWith(ideal);\n}", "language": "javascript"}, "position": {"x": 820, "y": 150}, "positionAbsolute": {"x": 820, "y": 150}}, {"width": 228, "height": 196, "id": "vis-reverse-string", "type": "vis", "data": {"input": "eval-reverse-string"}, "position": {"x": 1200, "y": 250}, "positionAbsolute": {"x": 1200, "y": 250}}, {"width": 302, "height": 260, "id": "inspect-reverse-string", "type": "inspect", "data": {"input": "prompt-reverse-string"}, "position": {"x": 820, "y": 400}, "positionAbsolute": {"x": 820, "y": 400}}, {"width": 423, "height": 417, "id": "table-reverse-string", "type": "table", "data": {"rows": [{"prompt": "Spell this sentence backwards, character by character: We\u2019ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.", "ideal": ".stseuqer etairporppani tcejer dna ,sesimerp tcerrocni egnellahc ,sekatsim sti timda ,snoitseuq puwollof rewsna ot TPGtahC rof elbissop ti sekam tamrof eugolaid ehT .yaw lanoitasrevnoc a ni stcaretni hcihw TPGtahC dellac ledom a deniart ev\u2019eW"}, {"prompt": "Spell this sentence backwards, character by character: We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides\u2014the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format. To create a reward model for reinforcement learning, we needed to collect comparison data, which consisted of two or more model responses ranked by quality. To collect this data, we took conversations that AI trainers had with the chatbot. We randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization. 
We performed several iterations of this process.", "ideal": ".ssecorp siht fo snoitareti lareves demrofrep eW .noitazimitpO yciloP lamixorP gnisu ledom eht enut-enif nac ew ,sledom drawer eseht gnisU .meht knar sreniart IA dah dna ,snoitelpmoc evitanretla lareves delpmas ,egassem nettirw-ledom a detceles ylmodnar eW .tobtahc eht htiw dah sreniart IA taht snoitasrevnoc koot ew ,atad siht tcelloc oT .ytilauq yb deknar sesnopser ledom erom ro owt fo detsisnoc hcihw ,atad nosirapmoc tcelloc ot dedeen ew ,gninrael tnemecrofnier rof ledom drawer a etaerc oT .tamrof eugolaid a otni demrofsnart ew hcihw ,tesatad TPGtcurtsnI eht htiw tesatad eugolaid wen siht dexim eW .sesnopser rieht esopmoc meht pleh ot snoitseggus nettirw-ledom ot ssecca sreniart eht evag eW .tnatsissa IA na dna resu eht\u2014sedis htob deyalp yeht hcihw ni snoitasrevnoc dedivorp sreniart IA namuh :gninut-enif desivrepus gnisu ledom laitini na deniart eW .putes noitcelloc atad eht ni secnereffid thgils htiw tub ,TPGtcurtsnI sa sdohtem emas eht gnisu ,)FHLR( kcabdeeF namuH morf gninraeL tnemecrofnieR gnisu ledom siht deniart eW"}, {"prompt": "Spell this sentence backwards, character by character: Latencies will vary over time so we recommend benchmarking prior to making deployment decisions", "ideal": "snoisiced tnemyolped gnikam ot roirp gnikramhcneb dnemmocer ew os emit revo yrav lliw seicnetaL"}, {"prompt": "Spell this sentence backwards, character by character: Our mission is to ensure that artificial general intelligence\u2014AI systems that are generally smarter than humans\u2014benefits all of humanity.", "ideal": ".ytinamuh fo lla stifeneb\u2014snamuh naht retrams yllareneg era taht smetsys IA\u2014ecnegilletni lareneg laicifitra taht erusne ot si noissim ruO"}, {"prompt": "Spell this sentence backwards, character by character: There are several things we think are important to do now to prepare for AGI.", "ideal": ".IGA rof eraperp ot won od ot tnatropmi era kniht ew sgniht lareves era erehT"}, {"prompt": "Spell this sentence backwards, character by character: Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT is an early example of this.", "ideal": ".siht fo elpmaxe ylrae na si TPGtahC dna TPGtcurtsnI ot 3-TPG fo noisrev tsrif eht ekil sledom morf tfihs ruO .sledom elbareets dna dengila ylgnisaercni gnitaerc sdrawot gnikrow era ew ,dnoceS"}, {"prompt": "Spell this sentence backwards, character by character: We have attempted to set up our structure in a way that aligns our incentives with a good outcome.", "ideal": ".emoctuo doog a htiw sevitnecni ruo sngila taht yaw a ni erutcurts ruo pu tes ot detpmetta evah eW"}, {"prompt": "Spell this sentence backwards, character by character: We think it\u2019s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. 
Finally, we think it\u2019s important that major world governments have insight about training runs above a certain scale.", "ideal": ".elacs niatrec a evoba snur gniniart tuoba thgisni evah stnemnrevog dlrow rojam taht tnatropmi s\u2019ti kniht ew ,yllaniF .tnatropmi era esu noitcudorp morf ledom a llup ro ,esaeler ot efas si ledom a ediced ,nur gniniart a pots dluohs troffe IGA na nehw tuoba sdradnats cilbup kniht eW .sledom wen gnitaerc rof desu etupmoc fo htworg fo etar eht timil ot eerga ot stroffe decnavda tsom eht rof dna ,smetsys erutuf niart ot gnitrats erofeb weiver tnednepedni teg ot tnatropmi eb yam ti ,tniop emos tA .raey siht retal liated erom ni siht tuoba klat lliw ew ;smetsys wen gnisaeler erofeb stidua tnednepedni ot timbus sruo ekil stroffe taht tnatropmi s\u2019ti kniht eW"}, {"prompt": "Spell this sentence backwards, character by character: We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet.", "ideal": ".tey ezilausiv ylluf ot su fo yna rof elbissopmi ylbaborp si taht eerged a ot sehsiruolf ytinamuh hcihw ni dlrow a enigami nac eW"}, {"prompt": "Spell this sentence backwards, character by character: OpenAI, brining AI to everyone.", "ideal": ".enoyreve ot IA gninirb ,IAnepO"}], "columns": [{"key": "prompt", "header": "Prompt"}, {"key": "ideal", "header": "Ideal"}]}, "position": {"x": -16, "y": 160}, "selected": false, "positionAbsolute": {"x": -16, "y": 160}, "dragging": false}], "edges": [{"source": "prompt-reverse-string", "sourceHandle": "prompt", "target": "eval-reverse-string", "targetHandle": "responseBatch", "interactionWidth": 100, "markerEnd": {"type": "arrow", "width": "22px", "height": "22px"}, "id": "reactflow__edge-prompt-1686756357355prompt-eval-1686756357355responseBatch"}, {"source": "prompt-reverse-string", "sourceHandle": "prompt", "target": "inspect-reverse-string", "targetHandle": "input", "interactionWidth": 100, "markerEnd": {"type": "arrow", "width": "22px", "height": "22px"}, "id": "reactflow__edge-prompt-1686756357355prompt-inspect-1686756357355input"}, {"source": "eval-reverse-string", "sourceHandle": "output", "target": "vis-reverse-string", "targetHandle": "input", "interactionWidth": 100, "markerEnd": {"type": "arrow", "width": "22px", "height": "22px"}, "id": "reactflow__edge-eval-1686756357355output-vis-1686756357355input"}, {"source": "table-reverse-string", "sourceHandle": "Prompt", "target": "prompt-reverse-string", "targetHandle": "prompt", "interactionWidth": 100, "markerEnd": {"type": "arrow", "width": "22px", "height": "22px"}, "id": "reactflow__edge-table-1686756385002Prompt-prompt-1686756357355prompt"}], "viewport": {"x": 144, "y": 37, "zoom": 1}}, "cache": {"eval-1686756357355.json": {}, "inspect-1686756357355.json": {}, "prompt-1686756357355.json": {}, "table-1686756385002.json": {}, "vis-1686756357355.json": {}}} |