In Limbo: The EU AI Act’s Destiny Mirrored in OpenAI’s Trials

How the EU’s proposed regulation on artificial intelligence and OpenAI’s experiments with GPT models reflect the challenges and opportunities of AI governance.

Introduction

I am a blog writer with a keen interest in artificial intelligence and its implications for society. I have followed the developments and debates around AI governance for several years, and I have watched the topic grow more relevant and urgent with each wave of technological and social change.

AI governance is the process of establishing and enforcing rules, norms, and standards for the development and use of AI systems. It aims to ensure that AI is aligned with human values, respects human rights, and promotes human well-being. It also seeks to address the potential risks and harms that AI may pose to individuals, groups, and society at large.

AI governance is not a simple or straightforward task. It involves multiple stakeholders, such as governments, businesses, researchers, civil society, and users, who may have different interests, perspectives, and expectations. It also faces various challenges, such as the complexity, uncertainty, and unpredictability of AI systems, the diversity and dynamism of AI applications, and the trade-offs and tensions between competing values and goals.

In this article, I will explore two recent and prominent examples of AI governance in action: the EU AI Act and OpenAI’s trials. I will explain what they are, how they work, and what they mean for the future of AI and society. I will also compare them in terms of their goals, approaches, and impacts, and weigh the strengths, weaknesses, opportunities, and challenges of each.

The EU AI Act

The EU AI Act is a proposed regulation on artificial intelligence that was published by the European Commission in April 2021. It is the first comprehensive and horizontal legal framework for AI in the world, and it aims to create a single market for trustworthy and human-centric AI in the EU.

The EU AI Act classifies AI systems by risk, and imposes different requirements and obligations for AI providers and users depending on the level of risk. It defines four categories of AI systems:

  • Prohibited AI systems: These are AI systems considered to violate fundamental rights or values, such as those that manipulate human behavior, exploit the vulnerabilities of specific groups, or enable general-purpose social scoring by public authorities. These AI systems are banned in the EU.
  • High-risk AI systems: These are AI systems that are used in critical sectors or contexts, such as health, education, justice, or security, and that may pose significant risks to the safety, rights, or freedoms of people or the environment. These AI systems are subject to strict rules and oversight, such as conformity assessment, transparency, human oversight, and data quality.
  • Limited-risk AI systems: These are AI systems that are not high-risk but can still mislead or affect users, such as chatbots, emotion recognition systems, biometric categorisation systems, and deepfakes. These AI systems are subject to specific transparency obligations, such as informing users that they are interacting with an AI system, that emotion recognition or biometric categorisation is being applied to them, or that content has been artificially generated or manipulated.
  • Minimal-risk AI systems: These are AI systems that are not likely to cause any harm or detriment to users or society, such as those that are used for entertainment, education, or personalization. These AI systems are not subject to any specific rules or obligations, but are encouraged to follow voluntary codes of conduct and best practices.

Some examples of AI systems that would fall under each category are:

  • Prohibited AI systems: An AI system that assigns social scores to citizens based on their online behavior and affects their access to public services or benefits.
  • High-risk AI systems: An AI system that diagnoses diseases based on medical images or tests, or that determines the eligibility of students for scholarships or loans based on their academic performance or personal data.
  • Limited-risk AI systems: A customer-service chatbot that must disclose it is not a human, or an AI system that detects and analyzes the emotions of customers or employees using voice or facial analysis and must inform them that it is doing so. (Remote biometric identification in public spaces, by contrast, is treated as high-risk or, for real-time law-enforcement use, prohibited.)
  • Minimal-risk AI systems: An AI system that recommends movies or music based on user preferences, or that generates captions or summaries for images or texts. (A code sketch of this four-tier scheme follows the list.)
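
To make the four-tier structure concrete, here is a minimal, illustrative Python sketch that maps hypothetical use cases to risk tiers and their headline obligations. The tier names and mappings paraphrase the summaries above, not the Act’s legal text, and the use-case labels are invented for illustration.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers from the proposed EU AI Act (illustrative only)."""
    PROHIBITED = "prohibited"  # banned outright, e.g. general-purpose social scoring
    HIGH = "high"              # strict obligations: conformity assessment, oversight
    LIMITED = "limited"        # transparency obligations, e.g. disclose AI interaction
    MINIMAL = "minimal"        # no specific obligations; voluntary codes encouraged

# Hypothetical mapping from use cases to tiers, loosely following the
# examples in the text above; not an authoritative classification.
EXAMPLE_CLASSIFICATION = {
    "social_scoring_of_citizens": RiskTier.PROHIBITED,
    "medical_image_diagnosis": RiskTier.HIGH,
    "student_loan_eligibility": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "movie_recommendation": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.PROHIBITED: "Banned in the EU.",
    RiskTier.HIGH: "Conformity assessment, transparency, human oversight, data quality.",
    RiskTier.LIMITED: "Users must be informed they are dealing with an AI system.",
    RiskTier.MINIMAL: "No specific rules; voluntary codes of conduct encouraged.",
}

def obligations_for(use_case: str) -> str:
    """Summarize the obligations attached to a (hypothetical) use case."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.value} risk. {OBLIGATIONS[tier]}"

print(obligations_for("medical_image_diagnosis"))
```

The point of the sketch is only that the obligations attach to the tier, not to the individual system; classifying a given system correctly is the hard part that the Act leaves to providers and regulators.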

The EU AI Act also establishes a governance structure and a cooperation mechanism for the implementation and enforcement of the regulation. It creates a European Artificial Intelligence Board, composed of representatives from the member states and the European Commission, to provide guidance, advice, and support on AI matters. It also sets up a network of national competent authorities, designated by the member states, to monitor and supervise the compliance of AI systems with the regulation.

The EU AI Act is a landmark initiative that reflects the EU’s vision and values for AI. It aims to foster innovation and competitiveness, while ensuring trust and respect for human dignity, autonomy, and diversity. It also seeks to balance the benefits and risks of AI, and to promote the development and use of AI for good and for all.

OpenAI’s Trials

OpenAI is a research organization that develops large language models, most prominently the GPT family, and offers them as services. GPT models are deep neural networks that generate natural language text from a given input, or prompt. Through OpenAI’s API, these and related models support a range of capabilities, such as chat, image generation (handled by its DALL-E models rather than the GPT text models), fine-tuning, and embeddings.

OpenAI’s first GPT model, GPT-1, was released in 2018 with 117 million parameters. A parameter is a numerical value, learned during training, that determines how the network transforms its input. More parameters generally give a model greater capacity to learn patterns from data and handle more complex tasks, at the cost of more data and compute.

OpenAI’s second GPT model, GPT-2, was released in 2019, and it had 1.5 billion parameters. It was able to generate coherent and diverse texts on various topics and styles, such as news articles, essays, stories, and poems. However, it also raised some concerns about the potential misuse and abuse of the model, such as generating fake or misleading information, spam, or propaganda. Therefore, OpenAI decided to release GPT-2 in a staged manner, starting with a smaller version of 124 million parameters, and gradually releasing larger versions of 355 million, 774 million, and 1.5 billion parameters, along with some tools and guidelines to mitigate the risks and encourage the responsible use of the model.

OpenAI’s third GPT model, GPT-3, was released in 2020, and it had a staggering 175 billion parameters. It was able to generate even more impressive and diverse texts across domains and tasks, such as answering questions, writing summaries, creating content, and generating code. However, it also had limitations and challenges, such as the quality, reliability, and bias of the generated texts, the cost and complexity of training and running the model, and its ethical and social implications. Therefore, OpenAI decided to offer GPT-3 as a commercial service through an API (application programming interface), which lets users access the model over the internet; a code-focused descendant of GPT-3, called Codex, was later offered the same way. OpenAI also implemented a pricing and access policy, which charges users per token processed and by model, and which gates access based on the purpose and impact of the use.
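
To make the API model concrete, here is a minimal sketch of a completion request using a pre-1.0 version of the `openai` Python package, the client style of the GPT-3 era. The model name, parameters, and placeholder key are illustrative assumptions, not a statement of OpenAI’s current API.

```python
# Minimal GPT-3-style completion request via the OpenAI API, using a
# pre-1.0 version of the `openai` package (e.g. pip install "openai<1").
# The model name and parameters are illustrative; a real API key is required.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model name, as an example
    prompt="Summarize the EU AI Act in one sentence:",
    max_tokens=60,      # usage is billed per token, which is how pricing is metered
    temperature=0.7,    # higher values produce more varied text
)

print(response["choices"][0]["text"].strip())
```

Charging per token and per model is what makes the pricing and access policy enforceable in practice: every request passes through the API, where it can be metered, rate-limited, and reviewed.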

Some examples of how these models can be used through the API:

  • Chat: A user can have a conversation with a GPT model on any topic or style, such as sports, politics, or humor. The GPT model can generate responses based on the user’s input and the context of the conversation.
  • Image generation: A user can provide a text description of an image, such as “a cat wearing a hat”, and an image-generation model such as DALL-E, available through the same API, can produce an image that matches the description.
  • Fine-tuning: A user can train a GPT model on a specific dataset or task, such as writing reviews, summarizing articles, or translating languages. The GPT model can then generate texts that are tailored to the dataset or task.
  • Embeddings: A user can extract numerical representations of words, sentences, or documents from a model, and use them for downstream tasks such as classification, clustering, or similarity analysis (a sketch of this use follows the list).
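
As an illustration of the embeddings use case, the following sketch (same pre-1.0 client; the embedding model name is an assumption) extracts vectors for two sentences and compares them with cosine similarity, the kind of similarity analysis mentioned above.

```python
# Sketch of the embeddings use case: embed two sentences and compare them.
# Uses a pre-1.0 `openai` client; the embedding model name is an assumption.
import math
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: set your own key

def embed(text: str) -> list[float]:
    """Return the embedding vector for a piece of text."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

v1 = embed("The EU is regulating artificial intelligence.")
v2 = embed("Europe has proposed rules for AI systems.")
print(f"similarity: {cosine_similarity(v1, v2):.3f}")  # closer to 1.0 = more similar
```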

OpenAI’s trials are a remarkable example of AI innovation and experimentation. They showcase the power and potential of AI for natural language processing, and the challenges and opportunities of AI for society. They also raise some questions and dilemmas about the governance and regulation of AI, such as the ownership, access, and control of AI models, the accountability, transparency, and ethics of AI models, and the impact, benefit, and harm of AI models.

A Comparison of the EU AI Act and OpenAI’s Trials

The EU AI Act and OpenAI’s trials are two different and contrasting examples of AI governance in action. They have different goals, approaches, and impacts on AI and society. Here is a table that summarizes some of the key aspects of each example:

| Aspect | EU AI Act | OpenAI’s Trials |
| --- | --- | --- |
| Goal | Create a single market for trustworthy and human-centric AI in the EU | Develop and offer GPT models for natural language processing |
| Approach | Classify AI systems by risk and impose corresponding rules and obligations on providers and users | Release models in a staged manner and offer GPT-3 as a commercial service through an API |
| Impact | Foster innovation and competitiveness while ensuring trust and respect for human dignity, autonomy, and diversity | Showcase the power and potential of AI for language, along with its challenges and opportunities for society |

The EU AI Act and OpenAI’s trials have some similarities and differences, some strengths and weaknesses, and some opportunities and challenges. Here are some of the main points of comparison:

  • Similarities: Both are examples of AI governance in action: establishing and enforcing rules, norms, and standards for the development and use of AI systems. Both aim to weigh the benefits and risks of AI and to promote its development and use for good and for all. Both involve multiple stakeholders, from governments and businesses to researchers, civil society, and users, whose interests and expectations differ. And both must contend with the complexity, uncertainty, and unpredictability of AI systems, the diversity and dynamism of AI applications, and the trade-offs between competing values and goals.
  • Differences: The two differ in scope, level, and method. The EU AI Act is a comprehensive, horizontal legal framework that applies to all AI systems and actors in the EU, regardless of origin or purpose: a top-down, prescriptive approach that sets clear, binding rules and establishes a governance structure for implementation and enforcement. OpenAI’s trials are specific, vertical experiments focused on natural language processing and the GPT models: a bottom-up, practice-driven approach built on staged releases, a commercial API, a pricing and access policy, and tools and guidelines to mitigate risks and encourage responsible use.
  • Strengths: The EU AI Act is a landmark initiative that reflects the EU’s vision and values for AI. It aims to foster innovation and competitiveness while ensuring trust and respect for human dignity, autonomy, and diversity, and to create a single market for trustworthy, human-centric AI. OpenAI’s trials showcase the power and potential of AI for natural language processing, surface concrete governance dilemmas around ownership, access, and accountability, and stimulate public awareness and debate on AI matters.
  • Weaknesses: The EU AI Act may face practical and political difficulties in adoption and application. Depending on the stakeholder, it may be seen as too rigid or too vague, too restrictive or too permissive, too ambitious or too modest, and it may meet resistance from providers or users who perceive it as a burden or a threat to their interests or autonomy. OpenAI’s trials face technical and ethical challenges of their own: the quality, reliability, and bias of generated text, and risks of harm such as fake or misleading information, spam, propaganda, or infringements of privacy, security, or rights.
  • Opportunities: The EU AI Act may set a precedent and a standard for other regions or countries to follow, give the EU a leadership role in the global AI landscape, and foster cooperation among the member states, the European Commission, and international partners. OpenAI’s trials may enable the use of AI across many domains and tasks, give OpenAI a competitive edge in the AI industry, and foster innovation and experimentation among researchers, developers, and users.
  • Challenges: The EU AI Act will need to adapt its rules to keep pace with rapid change in AI technology and applications, and to harmonize with other legal frameworks such as the GDPR (General Data Protection Regulation) and the Digital Services Act, at the EU and global level. OpenAI’s trials will need to improve the quality, reliability, and fairness of generated text, reduce the cost and complexity of training and running the models, and align their policies with ethical frameworks and initiatives such as the Partnership on AI or the Montreal Declaration.

Conclusion

In this article, I have examined two recent and prominent examples of AI governance in action: the EU AI Act and OpenAI’s trials. The Act shows what a comprehensive, top-down legal framework for AI can look like; OpenAI’s trials show how a developer can govern its own models from the bottom up, through staged releases, API access, and usage policies. Each has distinct goals, approaches, and impacts, and each carries its own strengths, weaknesses, opportunities, and challenges.

As both examples make clear, AI governance is neither simple nor straightforward. It must reconcile the interests of many stakeholders and contend with the complexity, uncertainty, and dynamism of AI systems and their applications.

AI governance therefore requires collaboration, innovation, and responsibility from all the actors involved in the development and use of AI systems. It also requires a holistic and adaptive approach: one that considers the technical, ethical, social, and legal aspects of AI, balances its benefits and risks, and promotes its development and use for good and for all.
