Hello all. This week I'll be really busy, so let me share more about AI from an HBR article, 'AI and Machine Learning: 4 Types of Gen AI Risk and How to Mitigate Them' by Öykü Isik, Amit Joshi, and Lazaros Goutas (May 31, 2024), along with several other sources na ka.
According to the article, here is what we should stress:
1. Governments' role: The article points out that 'governments are scrambling to come up with reasonable frameworks and laws to manage this technology and its downsides.' It proposes a high-level framework that gives executives a way of classifying the potential challenges within the gen AI landscape and then mitigating them.
This leads me to think of what Khun Tonson Santitarn Sathirathai said in several programs, suggesting that our authorities avoid overly rigid laws, which could hinder our efforts to learn and excel in the use of AI ka.
2. Generative AI risks can be classified along two dimensions, intent (deliberate or inadvertent) and usage (whether content is created or consumed), which yield four types:
2.1 Misuse: unethical or illegal exploitation of gen AI capabilities for harmful purposes.
2.2 Misapplication: gen AI prioritizes plausibility over accuracy and can create inaccurate outputs (hallucinations).
2.3 Misrepresentation: output created by a third party is purposefully used and disseminated despite questions about its credibility or authenticity.
2.4 Misadventure: inauthentic content is accidentally consumed and shared by users who are unaware of its inauthenticity.
3. The authors call for leaders in both public and private enterprises to become proactive and to mitigate these risks as follows:
3.1 Mitigating content-creation risks, to avoid misuse and misapplication, by developing the capabilities to detect, identify, and prevent the spread of potentially misleading content.
Actions include aligning organizational values with AI principles; mandating that all entities creating gen AI content watermark their output for transparency, traceability, and trust, while empowering users to confidently judge the authenticity of the content they come across; and creating a controlled gen AI environment within the organization (curating training datasets, ensuring they are de-biased, and putting privacy measures in place).
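As a loose illustration of the watermarking-for-traceability idea (not the article's own method na ka), here is a minimal Python sketch that attaches a verifiable provenance tag to gen AI output using an HMAC; the key, function names, and record format are all hypothetical assumptions for the example:

```python
import hashlib
import hmac

# Hypothetical organization-wide signing key (would be kept secret in practice).
SECRET_KEY = b"replace-with-org-secret"

def watermark_output(text: str, model: str) -> dict:
    """Attach a provenance tag so consumers can trace gen AI content."""
    tag = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"content": text, "model": model, "provenance_tag": tag}

def verify_watermark(record: dict) -> bool:
    """Recompute the tag to confirm the content has not been altered."""
    expected = hmac.new(
        SECRET_KEY, record["content"].encode("utf-8"), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, record["provenance_tag"])

record = watermark_output("AI-generated summary of Q3 results.", model="demo-model")
print(verify_watermark(record))  # True for untampered content
```

A real deployment would likely use signed metadata standards (e.g., content-provenance specifications) rather than this toy tag, but the sketch shows the core idea: the creator signs the output, and any consumer can check whether it is authentic and unmodified.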
3.2 Mitigating content-consumption risks, to avoid misrepresentation and misadventure.
Actions include offering gen AI demystification and awareness training, validating AI output through labeling and warning mechanisms, and setting up damage-mitigation plans for situations that are not contained.
In the AI world today, the private sector seems to have stepped up much faster than governments, and it is similar in our country, though we seriously need more ka: instill a sense of urgency and understanding in our key leaders and authorities for proper policies and support, upskill our leaders and people for experiments and action, invest in our infrastructure, and speed up in all dimensions.