Written by Isaac Chew
On 20 September 2024, Malaysia’s Ministry of Science, Technology and Innovation (‘MOSTI’) adopted the National Guidelines on AI Governance and Ethics (‘AIGE’). The launch of the AIGE is a key step in supporting the Malaysian National Artificial Intelligence Roadmap 2021-2025 (‘AI-RMAP’).
The AIGE reinforces Malaysia’s commitment to global AI ethics, drawing on guidance from the United Nations Educational, Scientific and Cultural Organization (‘UNESCO’), the Organisation for Economic Co-operation and Development (‘OECD’), and the European Union (‘EU’) to ensure trusted and responsible AI development. To that end, the AIGE sets out seven core principles: fairness, safety, privacy, inclusiveness, transparency, accountability, and the pursuit of human benefit.
To strengthen implementation, Malaysia established the National Blockchain and Artificial Intelligence Committee (‘NBAIC’) under MOSTI, serving as a cross-sector platform to align AI and blockchain developments with AIGE principles. Later, in December 2024, Malaysia officially launched the National Artificial Intelligence Office (‘NAIO’) under the Ministry of Digital through MyDIGITAL Corporation. As the country’s central AI authority, NAIO is responsible for shaping and executing the national AI strategy.
Before new laws are established, these guidelines are intended to serve as a useful reference for industries and AI developers. They are not legally binding; however, stakeholders are encouraged to treat them as a standard to follow, promoting the adoption of responsible AI practices. This approach can help build a culture of ethical discipline that supports long-term sustainability and responsible innovation.
The MOSTI Minister recently stated that existing laws need to be updated to keep pace with the rapid growth of technology and to help prevent the misuse of artificial intelligence, particularly in areas such as cybercrime. He pointed out that the AIGE was only introduced last year and that it will take time before the guidelines can evolve into a dedicated AI law. He also noted that the Communications and Multimedia Act 1998 and the Penal Code will need amendments to keep up with AI’s pace.
Emerging AI Problems
AI is advancing rapidly, and voluntary guidelines like the AIGE are increasingly under pressure, especially with the rise of generative AI, such as models that create text, images, or audio. While these technologies bring enormous benefits in areas like education, healthcare, and productivity, they also raise serious ethical and policy concerns.
Experts, including the OECD, have warned that generative AI can reinforce biases, spread misinformation, and distort public discourse, posing real risks to society. Similarly, AI systems used in sensitive sectors such as healthcare, autonomous vehicles, finance, hiring, and law enforcement are now widely seen as ‘high-risk’, where mistakes or misuse could have serious consequences.
In response, the EU’s AI Act prohibits certain harmful uses, like social scoring and manipulative AI, and places strict controls on high-risk applications such as robotic surgery or biometric identification. Even lower-risk systems, like chatbots or deepfakes, are required to follow transparency rules so users know they’re interacting with AI.
The AIGE in Comparison with the EU AI Act
The ethical misuse of AI has been increasing, largely because there are few clear regulations guiding how the technology is developed and used. Establishing strong ethical principles can help prevent such misuse and promote responsible innovation. In response to growing concerns, jurisdictions such as the European Union and the United Kingdom have begun implementing formal regulatory frameworks for AI. Notably, the EU AI Act, which shapes how AI is governed not only within Europe but globally, sets a strong example for other countries to follow. The EU has also introduced the Data Governance Act, the Digital Services Act, and the Digital Markets Act.
The EU’s AI Act classifies AI systems based on risk levels. AI considered ‘unacceptable’, such as social scoring, is completely banned. High-risk systems used in areas like healthcare, transportation, employment, and education must pass strict checks before they can be deployed. Limited-risk AI, such as chatbots or deepfakes, must inform users they’re interacting with AI, while minimal-risk tools like spam filters or video games are largely left unregulated. High-risk AI systems, in particular, must meet a range of legal requirements, including comprehensive risk assessments, the use of high-quality training data, clear documentation, secure logging of activity, and built-in human oversight. They also need to meet standards for cybersecurity and accuracy.
Beyond risk classification, the EU AI Act imposes strict transparency rules. Users must be informed when they are interacting with an AI system, and content from generative models, such as deepfakes or AI-generated news, must be clearly labelled as such. These are not mere suggestions; they are legally binding obligations. Serious violations can attract fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher. To ensure compliance, EU member states and the newly established EU AI Office have been given enforcement powers.
Malaysia, on the other hand, has yet to introduce specific laws dedicated to AI governance. However, any use of AI must still be carried out ethically, ensuring accountability, transparency, data privacy, and security. By doing so, Malaysia can manage potential risks and strengthen public trust while staying in line with its existing laws. The contrast is stark: the EU AI Act is a binding law that categorises AI systems by risk level, with strict requirements and penalties for those developing or deploying high-risk AI, whereas Malaysia’s AIGE is a set of voluntary guidelines that encourages responsible AI use without imposing legal obligations.
In practical terms, an AI developer in Malaysia has the flexibility to decide how to apply the AIGE principles, with no penalties for non-compliance. The same developer operating in the EU, however, would be legally obligated to meet the AI Act’s strict requirements. The EU’s approach goes a step further by turning these rules into law, complete with risk-based enforcement mechanisms, something Malaysia’s current guidelines have yet to include.
Conclusion
The EU’s AI Act shows that it is possible to regulate AI in a way that balances innovation with the protection of public interest. By taking a similar approach, Malaysia can build a forward-looking governance model that keeps pace with global standards while supporting responsible growth. In short, as Malaysia transitions from principles to practice, the AIGE should gradually develop into a more structured and enforceable framework.
Published on 28 July 2025
