Once upon a time, the truth was what our parents and friends told us. Then it was what our teachers told us. Then it was what we read in books and the press. Then what we heard on the radio and watched on TV. Then what we found on Wikipedia and searched for on Google.

Now it is what ChatGPT tells us.

At each of these stages, misinformation was part of the equation. It has always existed. I still remember loads of myths told by people around me that I took for granted, and looking back, I just laugh at my stupid self! But that's part of life in a community. Now the community is larger, and people who like spreading falsehoods simply have more room to talk.

But how did we defeat it (kinda)? Through a kind of weighted consensus: betting that the more an idea is accepted by people we judge competent, the more likely it is to be true. Of course, the problem is how we judge competence.

That wasn't without some hiccups, of course. Many ideas persisted as truth for a long time even though evidence contradicted them; we accepted them just because many people said so. Like spinach being a great source of iron (popularized by the animated series Popeye), which turned out to be completely false, the result of an error in recording the values. That one ruined part of my childhood, as I forced myself to eat it even though I hated it.

But we went on, and the scientific method shielded us from most of these. Not to deny that there is plenty of complete garbage in the scientific literature (I know a thing or two about that), but for the most part we established a solid method for building consensus about facts.

Now what? Models can be fine-tuned to produce garbage and poison the field. That's why it is important to trace the data used to train those models, and to raise awareness that they can be biased.

I see no alternative other than full open source. People should have access to the datasets and the model code, and be able to certify that a given dataset was used to train a given model, with a verifiable correspondence between the two.
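
As a rough illustration, here is a minimal Python sketch of what such a correspondence could look like (the `make_certificate` helper and the file layout are hypothetical, not part of lollms-webui): publishing the hash of the dataset together with the hash of the resulting weights binds the two, and anyone holding both files can recompute and verify the link.

```python
# Hypothetical sketch: bind a training dataset to the model weights it
# produced by publishing both SHA-256 digests in a single certificate.
import hashlib


def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def make_certificate(dataset_path: str, weights_path: str) -> dict:
    """Anyone with both files can recompute this and check the correspondence."""
    return {
        "dataset_sha256": file_digest(dataset_path),
        "model_sha256": file_digest(weights_path),
    }
```
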
I think we need a system like the one in Bitcoin, where miners validate transactions.

We can't forbid poisoned models from existing, but we can keep them from proliferating. One idea could be to use a blockchain to certify the link between a model and its data.
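
To make the analogy concrete, here is a toy sketch (entirely hypothetical, not a real protocol) of chaining such certificates the way blocks are chained in Bitcoin: each entry commits to the previous one, so a published model/data link cannot be rewritten without invalidating everything recorded after it.

```python
# Toy hash-chained ledger of model certificates (illustration only).
import hashlib
import json


def add_block(chain: list, certificate: dict) -> None:
    """Append a certificate block that commits to the previous block's hash."""
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev_hash, "cert": certificate}, sort_keys=True)
    chain.append({
        "prev": prev_hash,
        "cert": certificate,
        "block_hash": hashlib.sha256(payload.encode()).hexdigest(),
    })


def verify(chain: list) -> bool:
    """Recompute every link; tampering breaks the chain from that point on."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"prev": prev_hash, "cert": block["cert"]},
                             sort_keys=True)
        if (block["prev"] != prev_hash or
                block["block_hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = block["block_hash"]
    return True
```
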
The multitude of open-source models is a good thing. With enough luck, poisoned models will be a minority that can be detected and ruled out.

And for non-certified models, maybe we can build a pool of trustworthy judge models: we give them the same prompts as the model under test, look at the statistics of their answers, and flag the answers that fall outside the consensus. Just like we do in science.
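
A minimal sketch of that idea, assuming each model can be queried as a function from prompt to answer (the `judges` callables are placeholders for whatever inference API is used). Exact string matching keeps the sketch short; real answers would need normalization or an embedding-based similarity check.

```python
# Hypothetical consensus check: compare a model's answer against the
# majority answer of a pool of trusted judge models.
from collections import Counter
from typing import Callable, List


def out_of_consensus(prompt: str,
                     candidate_answer: str,
                     judges: List[Callable[[str], str]]) -> bool:
    """Return True if the candidate's answer differs from the judges' majority."""
    votes = Counter(judge(prompt) for judge in judges)
    majority_answer, _ = votes.most_common(1)[0]
    return candidate_answer != majority_answer
```
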
I hope we figure out a way. I'm sure this community is full of smart people who can pull it off.

By ParisNeo, 2023