We have heard of DNS poisoning, search engine poisoning, ARP poisoning, and so on. With the rise of AI, data poisoning is the latest evolution.

There are two types of poisoning:

  • A malicious user crafts inputs to bypass the AI's protection scheme, making it output prohibited content for abuse.
  • An attacker poisons the training data or model so that it generates incorrect results for users. [The analogy in a typical web application: a malicious user plants bad data that gets stored in the backend database, becoming a persistent threat that attacks other users due to poor coding.]
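To make the second type concrete, here is a minimal sketch of training-data poisoning against a toy 1-nearest-neighbour classifier. All names and data points are illustrative, not taken from any real system: injecting a single mislabeled record near a legitimate input is enough to flip the prediction for other users.

```python
def predict(data, x):
    """Classify x by the label of its nearest training point (1-NN)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(data, key=lambda pair: dist(pair[0], x))[1]

# Clean training set: two well-separated clusters.
clean = [((1.0, 1.0), "ham"), ((1.2, 0.9), "ham"),
         ((5.0, 5.0), "spam"), ((5.2, 4.8), "spam")]

print(predict(clean, (1.1, 1.0)))  # -> ham

# Attacker plants one mislabeled point inside the "ham" region,
# which then persists in the training data and misleads later queries.
poisoned = clean + [((1.1, 1.0), "spam")]

print(predict(poisoned, (1.1, 1.0)))  # -> spam
```

This mirrors the stored-bad-data analogy above: the poisoned record sits quietly in the data until an unrelated user's query lands near it.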

On top of the regulatory and ethical issues, the key to dealing with this is to enable secure use of AI: formulate guidance, apply final human judgment, and treat AI output as a reference for insights and research only.
