
Aalborg University professor Johannes Bjerva is leading research to uncover subtle manipulation in AI-generated text, warning that large language models could be covertly influenced to promote political agendas without the public's awareness. His TRUST project aims to detect and prevent such "poisoning" attacks — in which manipulated content is injected into a model's training data — by identifying linguistic patterns that reveal external interference. Backed by significant research grants, including the Carlsberg Foundation's Semper Ardens grant and the Novo Nordisk Foundation's Data Science Investigator award, Bjerva has built one of Denmark's fastest-growing NLP research groups. The initiative seeks to make AI safer and to protect society from malicious uses of generative technologies.
Source: Aalborg University