
Poison ML Model (822cb1e2-f35f-4b35-a650-59b7770d4abc)

Adversaries may introduce a backdoor by training the model on poisoned data, or by interfering with its training process. The model learns to associate an adversary-defined trigger with the adversary's desired output.
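As an illustration of the data-poisoning path described above, the sketch below stamps a small trigger patch onto a fraction of training images and relabels them to an attacker-chosen target class, so that a model trained on the result learns to associate the trigger with that output. This is a minimal, hypothetical example using NumPy only; the function name, patch placement, and parameters are illustrative assumptions, not part of the ATLAS technique definition.

```python
import numpy as np

def poison_dataset(images, labels, target_class,
                   trigger_value=1.0, poison_fraction=0.1, seed=0):
    """Illustrative poisoning sketch: stamp a trigger patch onto a random
    fraction of images and relabel them to the attacker's target class.
    Returns the poisoned copies and the indices that were modified."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_fraction * n), replace=False)
    # Trigger: a 3x3 patch in the bottom-right corner of each poisoned image.
    images[idx, -3:, -3:] = trigger_value
    # Label flip: the model is pushed to map trigger -> target_class.
    labels[idx] = target_class
    return images, labels, idx

# Usage: 100 random 28x28 "images" with labels 0-9 stand in for real data.
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
X_poisoned, y_poisoned, idx = poison_dataset(X, y, target_class=7)
```

A model trained on `X_poisoned`/`y_poisoned` behaves normally on clean inputs but is biased to emit class 7 whenever the patch appears, which is the trigger/output association the description refers to.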

Cluster A | Galaxy A | Cluster B | Galaxy B | Level
Backdoor ML Model (ccf956b4-329e-4de8-8ba2-e784d152e0cb) | MITRE ATLAS Attack Pattern | Poison ML Model (822cb1e2-f35f-4b35-a650-59b7770d4abc) | MITRE ATLAS Attack Pattern | 1