# Can we contain AI with laws?

Today, it is possible to train your own LLM on relatively modest hardware. With the LoRA technique, fine-tuning a model and giving it new skills is becoming child's play. This means that regulation will not be able to stop AI from being misused. It would most likely favor the big companies, which can afford to do model checks (whatever that means), and instantly kill all the small businesses that are built on open-source models.

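To give a sense of how low the barrier has become, here is a minimal sketch of LoRA fine-tuning with Hugging Face's `peft` and `transformers` libraries. The base model, the `my_corpus.txt` file, and the hyperparameters are illustrative placeholders, not a recipe from any particular project:

```python
# Minimal LoRA fine-tuning sketch; model, data file, and settings are illustrative.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "facebook/opt-350m"  # any small open causal LM works for a demo
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains small low-rank adapter matrices,
# so typically well under 1% of the parameters are actually updated.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # reports the (tiny) trainable fraction

# Toy dataset: one text file, tokenized line by line (placeholder path).
data = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapters: megabytes, not gigabytes
```

The point is less the exact script than the scale: the trainable adapter weighs a few megabytes and trains on a single consumer GPU, which is why a legal ban would be very hard to enforce against individuals.
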
My problem with AI is the power it gives to a few, combined with complete opacity about what is happening under the hood.

Today, the truth is what Google says: when people don't know something, they ask Google. But that is now shifting towards ChatGPT. Tomorrow, the truth will be what ChatGPT says. With Google, we still make the effort of searching and using our own minds to weigh things. With ChatGPT, we don't even get to choose: what it says is unique, as if it were the ultimate source of knowledge. That's too much power in the hands of a single entity. And knowing that these models can be biased makes me fear what could happen in the future.

Although I am not a Republican, I ran a little experiment with ChatGPT a few months ago. I asked it to say something good about Donald Trump. It answered that it doesn't do politics. I asked the same about Biden, and it produced a very good essay. I am not saying Trump is good, but we can see a bias here: ChatGPT reflects the political views of its creators. That's not acceptable for a tool that can essentially become a god as a source of knowledge.

I don't think AI should decide whether I vote left or right. I don't think AI should tell me what is right and what is wrong. Many things in life depend on one's point of view.

I don't believe there is a way to build a 100% unbiased AI. Bias is inevitable given how these models work: they reproduce the patterns of whatever data they were trained on.

That's why I am an advocate of open-source models. At least we can probe them, we can check them, and they are available in many forms and flavors. Locking everything down would forbid good people, who use this technology for good, from benefiting from it, while bad people who don't respect the laws would most likely keep doing what they do.

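As a concrete example of what "probing" can mean, here is a minimal sketch, using `gpt2` as a stand-in for any open-weights model, that inspects the probabilities the model assigns to its next token; the prompt is just an illustration:

```python
# Probing an open-weights model: with the weights in hand, anyone can inspect
# what the model actually predicts, token by token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # stand-in for any open causal LM on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

prompt = "The president of the United States is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only

# Show the top candidate continuations and their probabilities.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)):>15}  {p.item():.3f}")
```

Hosted APIs may or may not expose this kind of information; with open weights, nothing depends on the provider's goodwill.
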
AI is like Pandora's box: it has been opened and cannot be closed. At least, I fail to find any possible way to contain it. In that case, if you can't contain it, then let people access these tools and teach them to use their own minds to discern truth from lies. People should be trained to think critically, not to be passive consumers.

The models we have today are not conscious; they are just function calls. They don't have lives, and they stop thinking the moment you stop talking to them. But with increasing context sizes, this may change. Recurrent transformers now promise contexts as large as 2 million tokens. Think of the context as the lifeline of a conversational AI: through interaction, the AI shapes its personality. With a small context, it can't live long. But with a large one, and with the new multimodal LLMs, AI can see, hear, talk, and most importantly, think.

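To make the lifeline metaphor concrete, here is a toy sketch, not any real product's code, of how a fixed context window forces a conversational AI to forget its oldest exchanges; the word-count tokenizer is a deliberate oversimplification:

```python
# Toy illustration of why context size bounds how long a conversational AI
# can "live": once the token budget is exceeded, the oldest turns are dropped.
history: list[str] = []

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def add_turn(turn: str, max_context_tokens: int) -> None:
    history.append(turn)
    # Evict the oldest turns until the conversation fits the window again.
    while sum(count_tokens(t) for t in history) > max_context_tokens:
        forgotten = history.pop(0)
        print(f"forgotten: {forgotten!r}")

# With a tiny window, early "memories" vanish almost immediately.
for i in range(5):
    add_turn(f"user: message number {i} with a few extra words",
             max_context_tokens=20)
print(history)
```

Scale the budget from twenty tokens to two million and the same loop retains a very long shared history, which is what would let a "personality" persist.
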
At some point, we may need to forbid these systems from starting to think on their own. Yet projects like AutoGPT and LangChain are giving more and more control to the AI. The human is still in control, but less and less so. At least for now, bad things still come from humans, not from the AI itself.

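The pattern behind such projects can be sketched in a few lines of plain Python; this is not the actual AutoGPT or LangChain code, just an illustration of how the human approval gate can quietly disappear:

```python
# Toy agent loop: the model proposes actions, and a shrinking "human in the
# loop" decides whether they run. Purely illustrative, no real model involved.
from typing import Callable

def fake_llm(goal: str, observations: list[str]) -> str:
    """Stand-in for a real model call; returns the next proposed action."""
    return f"search the web for {goal!r}" if not observations else "done"

def run_agent(goal: str, approve: Callable[[str], bool]) -> None:
    observations: list[str] = []
    while True:
        action = fake_llm(goal, observations)
        if action == "done":
            break
        if not approve(action):          # the human gate
            print(f"blocked: {action}")
            break
        print(f"executing: {action}")
        observations.append(f"result of {action}")

# Human in the loop: every action needs an explicit yes.
run_agent("open-source AI news",
          approve=lambda a: input(f"allow {a!r}? [y/n] ") == "y")

# "Less and less in control": auto-approve everything.
run_agent("open-source AI news", approve=lambda a: True)
```

The second call is the worrying one: nothing in the loop changes except that the human stops reading.
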
But who knows?