Written by Richard Wee and Fatimah Az-Zahra

Introduction – What is AI?

Artificial Intelligence (“AI”), at its core, is a form of technology that digests large amounts of data and analyses it to produce patterns or predictions as output. That output is then “remembered” by the system and drawn upon for future reference. This ongoing and complex cycle of analysis and output is akin to the system “learning” and improving with each piece of information it is fed.
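To make that loop concrete, here is a minimal sketch (our own illustration; the tiny spam-filter task and all of its data are invented for demonstration) in which a program digests labelled examples, “remembers” the word patterns it finds, and reuses them to label new input:

```python
from collections import Counter, defaultdict

# Training data the system is "fed": short messages with known labels.
examples = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to 3pm", "not spam"),
    ("see you at lunch", "not spam"),
]

# "Learning": digest the data by counting how often each word appears
# under each label; these counts are what the system "remembers".
word_counts = defaultdict(Counter)
for text, label in examples:
    for word in text.split():
        word_counts[label][word] += 1

def classify(text):
    # Reuse the remembered patterns to label new, unseen text.
    scores = {label: sum(counts[word] for word in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("free prize inside"))    # -> spam
print(classify("lunch meeting today"))  # -> not spam
```

Real AI systems replace these simple word counts with millions of learned parameters, but the cycle is the same: data in, patterns stored, stored patterns applied to the next input.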

The Proposed Act

With the information and knowledge we currently have on AI, it is safe to say that the technology is a gateway to history-altering advancement, with many possible avenues opening in the near future. However, because it is relatively new and progressing swiftly, AI remains largely ungoverned; beyond a country’s fundamental cyber laws, it is the Wild West. To address this, in April 2021 the European Commission published a proposal to regulate AI, titled the Artificial Intelligence Act (“the AIA”). The main bulk of the draft introduces a ‘risk-based approach’ which sorts AI systems into four broad risk categories:

  1. unacceptable;
  2. high;
  3. limited; and
  4. minimal risk.

This is not the first time the Commission has drafted legislation of this calibre: the General Data Protection Regulation (“the GDPR”), which came into force in 2018, is the obvious precedent. As with the GDPR, we can expect many countries, even outside the EU, to adopt or mirror the AIA once it is fully realised. This article seeks to shed light on the proposed Act, its approach, and the major concerns that have been raised.

Unacceptable Risk

As the name suggests, AI practices in this category would be outright prohibited in the EU. Although the prohibition has met some pushback for limiting technological advancement, it is largely uncontentious. The practices deemed to fall under this category include:

(a) AI systems that deploy harmful manipulative “subliminal techniques” (e.g. manipulative targeted advertising);

(b) AI systems that exploit specific vulnerable groups (e.g. those with physical or mental disabilities);

(c) AI systems used by public authorities, or on their behalf, for social scoring purposes; and

(d) ‘Real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.

Generally, it can be agreed that these practices, if given free rein, would greatly infringe upon the rights and liberties of the average person.

High Risk

The high risk category is the most controversial, as it covers some of the most advanced AI systems while raising multiple questions as to whether the classification resolves pre-existing fundamental issues with AI or further exacerbates them. The category concerns eight specific areas which the proposed Act has laid out and identified:

(a) biometric identification and categorisation of natural persons;

(b) management and operation of critical infrastructure;

(c) education and vocational training;

(d) employment, worker management and access to self-employment;

(e) access to and enjoyment of essential private services and public services and benefits;

(f) law enforcement;

(g) migration, asylum and border control management; and

(h) administration of justice and democratic processes.

Other than these categories, the proposed Act makes special mention of facial recognition technologies (“FRTs”), which are widely used for verification, identification and categorisation purposes. Here, the Act seeks to introduce new rules specific to FRTs and to differentiate them under the risk-based approach. It further states that real-time FRTs in open public spaces would be prohibited, unless ‘Member States choose to authorise them for important public security reasons’.

 

Limited Risk

The AI systems laid out in this category would be subject to a limited set of transparency obligations. Although these obligations are yet to be fully articulated, we can reasonably assume they will closely follow those of the GDPR, which addresses transparency in Article 5: Article 5(1)(a) requires that personal data be processed lawfully, fairly and in a transparent manner, which in practice obliges those handling the data to give users sufficient and concise information about how their personal data is used. The proposed Act further lays out certain examples of limited risk AI, which include:

(a) systems that interact with humans (e.g. chatbots);

(b) emotion recognition systems;

(c) biometric categorisation systems; and

(d) AI systems that generate or manipulate image, audio or video content (e.g. deepfakes).

Minimal Risk

This category acts as a catch-all encompassing all other AI systems, which present only a small risk in operation. These systems need not undergo any assessments or meet additional obligations. However, the proposed Act suggests creating a code of conduct which would incentivise providers of even low risk AI to voluntarily submit to assessments akin to those of the high risk category.

Does the AIA solve all pre-existing issues?

The short answer is no, as countless issues have arisen concerning AI, especially regarding infringement of the rights and liberties of the user. A relevant example is the use of FRTs in the high risk category mentioned earlier. FRTs have posed a long-standing concern for many reasons, including problematic methods of data collection (training images are often sourced from mugshots of incarcerated individuals because of their accessibility) as well as susceptibility to error.

In countries such as the USA, the rate of incarceration of African-Americans is significantly higher, which translates into a disproportionate number of African-American faces being fed into FRT systems. This uneven data creates a potentially racially biased system whose output reproduces the same bias, as the toy simulation below illustrates. Such a system can further be utilised in “anomaly detection” face validation, commonly used in law enforcement settings such as airport security, where faces passing through large public spaces are scanned for potential deviance or anomalies warranting further investigation. Multiple issues can arise from this, such as false arrests or misidentification.
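As a minimal sketch of that mechanism (our own illustration, not drawn from the proposed Act: the groups, numbers, threshold and “feature vector” faces are all invented), the simulation below enrols nine times as many watchlist entries from one group as from the other, then measures how often innocent, unenrolled members of each group are falsely matched:

```python
import random

random.seed(42)
DIM, THRESHOLD = 16, 3.5

def face(group_mean):
    # A "face" is a noisy feature vector centred on its group's mean.
    return [random.gauss(group_mean, 1) for _ in range(DIM)]

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# A watchlist skewed 9:1 towards group B (mean 1.0) over group A (mean 0.0),
# mirroring a database built disproportionately from one group's mugshots.
watchlist = [face(0.0) for _ in range(10)] + [face(1.0) for _ in range(90)]

def false_match_rate(group_mean, trials=2000):
    # How often an innocent, unenrolled probe "matches" someone enrolled.
    hits = 0
    for _ in range(trials):
        probe = face(group_mean)
        if any(dist(probe, w) < THRESHOLD for w in watchlist):
            hits += 1
    return hits / trials

print("innocent group A false-match rate:", false_match_rate(0.0))
print("innocent group B false-match rate:", false_match_rate(1.0))
```

Under these made-up numbers, innocent members of the over-represented group are flagged several times more often, simply because more of the stored faces resemble theirs; no intent is needed for a skewed input to become a skewed output.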

Activists have long voiced these concerns, and they have received substantial backing. This may change with the adoption of the Act, however, as it could become an authoritative piece of legislation for states to justify utilising these types of AI, given the provisions authorising use for ‘important public security reasons’.

Who is in charge?

The proposed Act lays out several articles requiring registration or assessment before AI systems may be put on the market. It also introduces a European Artificial Intelligence Board, composed of representatives of the Member States, which would be responsible for implementing the new rules as well as maximising cooperation between national supervisory authorities.

These national supervisory authorities are to carry out assessments and any corrective measures, which include prohibiting, restricting, withdrawing or recalling, at the state level, any AI systems which do not measure up to the guidelines laid out.

In general, the Commission seeks to provide a comprehensive structure of executive and legislative officers at many levels. The inherent issue is that, because the AI Board would be composed of Member State representatives and therefore most likely exclusively EU states, the question of diversity and inclusivity looms in the background. Given their similar technological backgrounds and conditions, the resulting regulations are likely to favour states at a comparable stage of AI progress and advancement.

Will extensive regulations halt advancements?

Like any other technological system, AI fundamentally requires rigorous trial and testing to perfect its functions and achieve its desired output. For our current state of progress, we have the earlier absence of rules to thank: it fostered and encouraged countless inspired minds to innovate and create for the betterment of society without the risk of being penalised for breaching regulations. It is therefore no surprise that the new Act has raised concerns that its rules will limit experimental projects and further improvement of these systems, especially within the unacceptable and high risk categories, as they cover some of the most advanced yet questionable types of AI.

To tackle this, states are encouraged to create safe testing environments, otherwise known as ‘sandboxes’, in order to foster innovation. Within a sandbox, AI systems may use personal data in a controlled environment with consenting, informed parties. This addition to the Act is crucial: after all, we cannot expect a high risk system to improve and become secure without experimental work and controlled operation.

AI Regulation in Malaysia

Unlike our European counterparts, Malaysia’s cyber laws are relatively few and far between. There are regulations such as the Computer Crimes Act 1997 (“CCA”), which governs the misuse of computers, and the Personal Data Protection Act 2010 (“PDPA”), which protects personal data especially in commercial transactions; beyond these fundamental laws, however, Malaysia currently has no specific act governing AI.

This is where the future enactment of the AIA may come into play, as we hope that Malaysia will become a signatory to the Act and thereby adopt its regulations in order to better monitor our AI advancements. Concerns remain, however, that Malaysia’s advancement and innovation could be curtailed by regulations meant to govern EU Member States. As our progress does not yet match that of AI initiatives in Europe (such as appliedAI in Germany or AI Sweden), we might be relatively more constrained in terms of experimentation and progress. This is especially true if we adopt the Act word for word and become a signatory, which would subject us to the assessments of the European Artificial Intelligence Board.

On the other hand, should we codify the Act as our own, there is also the concern of our adaptation becoming harsh in terms of penalisation. This happened before with the drafting of the CCA, which was modelled after the UK Computer Misuse Act 1990 and the Singapore Computer Misuse Act 1993 (“CMA”), yet carries penalties up to twelve times more severe than the UK’s and twice as severe as Singapore’s. For example, s.3(3) of the CCA imposes a fine of up to RM50,000 and imprisonment of up to five years for unauthorised access to a computer, while s.6 of the UK CMA provides only for a fine of up to £2,000 and up to twelve months’ imprisonment. Given our history of rigid implementation and harsh punishment, a future AI Act in Malaysia could spell a grim fate for our AI development if no drastic measures are taken.

Learning from our past, certain actions can be taken to secure a healthy balance between regulation and freedom. The drafting of the CCA failed to take into account discussion below the parliamentary level: there was little to no discourse amongst users regarding the passing of the bill, and only international multimedia companies were consulted, creating a gap between the drafting of the act and its actual enforcement. To avoid the same happening to a future Malaysian AI Act, Parliament must actively take into account the opinions of the end-users, who are the most affected. Local companies and startups must also be brought into the discussion, rather than only the opinions of foreign experts being considered.

Conclusion

All in all, the advancement of AI is something that has long been awaited by many for the limitless possibilities and breakthroughs it may bring. The law, of course, must strive to progress in tandem with it in order to create a safer world with AI. The AIA, however, may take longer than we hope: the Commission’s past policies, such as the GDPR, took around four years to go from proposal to adoption. Within this proposal period, it is therefore crucial that adequate debate, analysis and conversation take place regarding both the good and the bad of the AIA.

 

Published on 16 March 2022

Photo by Michael Dziedzic on Unsplash
