ChatGPT’s disruption of industries worldwide is evident; however, its legal implications remain largely undefined. Although the term “artificial intelligence” was first coined in the 1950s, ChatGPT, a comparatively new AI chatbot, was only released in November 2022. Since then, numerous questions have arisen with respect to the legal issues it raises. Judges, lawyers, and users alike are eager to deepen their understanding of new AI systems in order to apply existing legislation to unprecedented issues.

Numerous users rely on the chatbot as a source of information, yet the data gathered and provided by ChatGPT may be inaccurate. OpenAI acknowledges this explicitly on its website: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers”. Consequently, the need for user awareness is great, and the need for regulations governing AI is greater still.

(To better understand the implications of AI systems such as ChatGPT for intellectual property and data protection rights, kindly refer to the previously published Zu’bi & Partners article linked here.)

In light of these developments, Zu’bi & Partners Attorneys and Legal Consultants, in coordination with the Bahrain Institute for Banking and Finance (BIBF), attended a three-day tailored course entitled ‘Transforming the Legal Landscape’, in which AI experts Ameen Altajer and Nishanth Kumar examined the technical and legal aspects of AI systems.

Proactive efforts to enact the first set of regionally harmonized rules are apparent in the EU’s Artificial Intelligence Act (AI Act). The EU is currently negotiating the passage of this comprehensive legislative framework, which may follow in the footsteps of the EU General Data Protection Regulation (GDPR), a regime that paved the way for many regulators worldwide.

The question remains: how can an ever-changing intelligence be regulated? The significant provisions of the AI Act and its legal implications are highlighted below.

The AI Act adopts a risk-based approach to regulating all types of AI (including machine learning, traditional, and hybrid AI). Under a risk-based approach, AI systems are categorized according to the risk they pose of infringing fundamental rights and are then regulated accordingly. The AI Act sets out four risk categories:

  1. Unacceptable Risk

If an AI system scores as an ‘unacceptable risk’, it is strictly prohibited. This is the highest risk classification, reserved for AI systems that may threaten people’s safety through direct or indirect manipulation, exploitation, or social scoring (the categorization of individuals based on their behaviours and characteristics). This category is intended to ensure that AI systems do not undermine the protection and security of individuals.

  2. High Risk

Where an AI system is identified as ‘high risk’, it may be permitted subject to certain restrictions. Compliance requirements and an ex-ante conformity assessment (i.e., an assessment carried out before the system is placed on the market) are obligatory. This risk category generally covers AI systems that pose a risk to the health and safety of individuals, such as those used in employment and promotion decision-making or in medical devices.

  3. AI with Specific Transparency Obligations

AI systems under this risk category face fewer restrictions and obligations than high-risk AI systems. These activities are permitted; however, they are subject to certain disclosure and transparency obligations. This applies to AI systems that interact with individuals, detect emotions, or generate content, such as ‘impersonation’ bots.

  4. Minimal or No Risk

The abovementioned classifications are either prohibited or subject to mandatory requirements, whereas AI systems that score ‘minimal or no risk’ are free from restrictions. AI systems that support innovation and are resilient to possible disruptions are permitted and encouraged, for example through AI regulatory sandboxes, which test technologies in a controlled environment, similar to the fintech regulatory sandboxes adopted internationally (for a brief overview of the Bahrain fintech regulatory sandbox, please refer to our article here).

Legislation regulating the unprecedented rise of AI systems will differ from traditional legislative structures, as is reflected in the AI Act. The provisions of AI laws must incorporate adaptive characteristics so that they can adjust effectively to an ever-changing market, and the need for such regulation has never been greater.
