New work explaining the inner workings of artificial intelligence could provide a way around the threat of AI "model collapse" ...
Traditional attacks try to break into systems; model poisoning instead alters how a system behaves after it has already been trusted.