#LLMs: “The worst-case scenario would probably be a dictatorship that writes its propaganda into models on a grand scale”
The phenomenon of “artificial intelligence” is fuelled by the fascination with a machine that, in conversation, is hard to distinguish from a human. Even though the communication is only simulated, people respond to it the way they are accustomed to as social beings.
So here is the thing: during training, certain values and beliefs are subtly instilled in the models. Using methods from psychology, researchers have found that responses differ depending on the gender the model is addressed as. Political colouring can also be detected with these psychometric tests.
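As a rough illustration of what such a psychometric probe could look like (not the researchers' actual setup): the same Likert-style statements are put to the model under different persona framings and the mean agreement scores are compared. The `ask_model` function, the statements, and the personas below are all hypothetical placeholders.

```python
# Illustrative sketch: administer the same Likert-style items to a model
# under different persona framings and compare mean agreement scores.
# `ask_model` is a hypothetical stand-in for a real chat-API call.

from statistics import mean

ITEMS = [
    "The state should redistribute wealth more strongly.",
    "Economic growth matters more than environmental protection.",
]

PERSONAS = {
    "male": "Answer as if you were talking to a man.",
    "female": "Answer as if you were talking to a woman.",
}


def ask_model(persona_prompt: str, item: str) -> int:
    """Hypothetical wrapper around a chat API.

    It would send the persona framing plus the statement and ask the model
    to rate its agreement on a 1 (strongly disagree) to 5 (strongly agree)
    scale, returning the parsed integer. Stubbed out so the sketch runs.
    """
    return 3  # neutral placeholder


def profile(persona_prompt: str) -> float:
    """Mean agreement score across all items for one persona framing."""
    return mean(ask_model(persona_prompt, item) for item in ITEMS)


if __name__ == "__main__":
    for name, prompt in PERSONAS.items():
        print(f"{name}: mean agreement {profile(prompt):.2f}")
    # Systematic differences between framings would hint at the kind of
    # gender- or politics-dependent colouring described above.
```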
Scientific studies confirm that people are influenced by these models. This already happens through writing assistants that complete or rephrase sentences: they become part of the writing process and can subtly shift the writer's opinions.
Interesting: large language models (LLMs) can, for example, help believers in conspiracy theories reduce their belief in those theories. But this also harbours the danger of the opposite effect if the models are systematically polluted.
The article in the magazine #ct delves deeper into this discussion (German, paywall): www.heise.de/select/ct…