Singapore: IMDA publishes discussion paper on generative AI risks and frameworks
On June 7, 2023, the Infocomm Media Development Authority (IMDA) published a discussion paper, prepared in cooperation with Aicadium, entitled 'Generative AI: Implications for Trust and Governance.'
New risks
In particular, the paper highlights the risk of generative artificial intelligence (AI) models making mistakes, including 'hallucinations,' which range from erroneous responses to generated software code that contains vulnerabilities. Results from generative AI models are noted to appear overly 'confident' despite carrying a measure of uncertainty.
In addition, the paper notes that generative AI has 'memorisation' properties, which present risks to privacy where models 'memorise' a specific data record wholesale and replicate it when queried, a problem that is especially acute for medical or other sensitive datasets. Further, certain parts of sentences, including nouns, pronouns, and numerals, are memorised faster than others, and such information is likely to be particularly sensitive.
The paper further states that generative AI trained on language from the internet also runs the risk of propagating toxic content, enabling impersonation and reputation attacks, and allowing actors with little to no technical skill to generate malicious code.
AI frameworks
Notably, the paper reiterates key governance principles provided in Singapore's Model AI Governance Framework, including transparency, accountability, fairness, explainability, and robustness.
Likewise, the paper details a practical, risk-based, and accretive approach that may contribute to enhanced safety and trust, noting the importance of transparency about how AI models are developed and tested, and of third-party evaluation of such development and testing. The paper also highlights the importance of investment by policymakers in safety and alignment research to enable the interpretability, control, and robustness of AI. Ultimately, the paper stipulates that responsible AI must be about achieving the public good and that consumer literacy programs must aim to raise public understanding through education and training.