CARLISLE, Pa. — Approaches to regulating artificial intelligence (AI), from creation to deployment and use in practice, vary internationally. Daryl Lim, Penn State Dickinson Law associate dean for research and innovation, H. Laddie Montague Jr. Chair in Law and Penn State Institute for Computational and Data Sciences (ICDS) co-hire, has proposed an “equity by design” framework to better govern the technology and protect marginalized communities from potential harm. The framework appears in an article published Jan. 27 in the Duke Technology Law Review.
According to Lim, responsibly governing AI is crucial to maximizing the benefits and minimizing the potential harms of these systems, harms that disproportionately impact underrepresented individuals. Governance frameworks help align AI development with societal values and ethical standards within specific regions while also assisting with regulatory compliance and promoting standardization across the industry.
Lim, who is also a consultative member of the United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence, addressed this need and how socially responsible AI governance may impact marginalized communities in the article.
Lim spoke about AI governance and his proposed framework in the following Q&A.
Q: What does socially responsible AI mean? Why is it important?
Lim: Being socially responsible with AI means developing, deploying and using AI technologies in ethical, transparent and beneficial ways. This ensures that AI systems respect human rights, uphold fairness and do not perpetuate biases or discrimination. This responsibility extends to accountability, privacy protection, inclusivity and environmental considerations. It’s important because AI has a significant impact on individuals and communities. By prioritizing social responsibility, we can mitigate risks such as discrimination, biases and privacy invasions, build public trust and ensure that AI technologies can contribute positively to the world. By incorporating social responsibility into AI governance, we can foster innovation while protecting the rights and interests of all stakeholders.
Q: How would you explain the “equity by design” approach to AI governance?
Lim: Equity by design means embedding equity principles throughout the AI lifecycle, with particular attention to justice and to how AI affects marginalized communities. AI has the potential to improve access to justice, particularly for marginalized groups. If someone who does not speak English is looking for assistance and has access to a smartphone with a chatbot, they can ask questions in their native language and get the generalized information they need to get started.
There are also risks such as perpetuating biases and increasing inequality, which I call the algorithmic divide. In this case, the algorithmic divide refers to the disparities in access to AI technologies and education about these tools. This includes differences between individuals, organizations or countries in their ability to develop, implement and benefit from AI advancements. We also need to be aware of biases that can be introduced, even unintentionally, by the data that these systems are trained with or by the people training the systems.
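To make the data side of this concrete, here is a minimal sketch, not drawn from Lim’s article, of one way a development team might screen training data for representation gaps before building a model. The records, the “group” field and the 30% threshold are hypothetical illustrations, and passing such a check would not by itself make a system unbiased.

```python
from collections import Counter

def representation_report(records, group_key, threshold=0.30):
    """Flag groups whose share of the training data falls below a
    chosen threshold. Records and group labels are hypothetical."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    # Map each group to (share of data, whether it is underrepresented).
    return {g: (n / total, n / total < threshold) for g, n in counts.items()}

# Hypothetical training records with a self-reported demographic field.
training_data = [
    {"text": "...", "group": "urban"},
    {"text": "...", "group": "urban"},
    {"text": "...", "group": "urban"},
    {"text": "...", "group": "rural"},
]

for group, (share, flagged) in representation_report(training_data, "group").items():
    status = "UNDERREPRESENTED" if flagged else "ok"
    print(f"{group}: {share:.0%} ({status})")
```

A check like this only surfaces imbalance in whatever labels the data happens to carry; it cannot detect bias introduced by the people training the system, which is why Lim pairs data review with human oversight.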
Q: What is the goal of this approach to AI governance?
Lim: The overarching goal of this work is to shift the focus from reactive to proactive governance by proposing an equity-centered approach that includes transparency and tailored regulation. The article seeks to address the structural biases in AI systems and the limitations of existing frameworks, advocating for a comprehensive strategy that balances innovation with robust safeguards. The research explores how AI can both improve access to justice and entrench biases. This approach aims to provide a roadmap for policymakers and legal scholars to navigate the complexities of AI while ensuring that technological advancements align with broader societal values of equity and the rule of law.
Q: What are some solutions to suggest to further reach an equitable approach to AI?
Lim: The solution, in part, lies in equity audits. How do we, by design, make sure that before an algorithm is released there are checks and balances on the people who are creating the system? The people who pick the data may be biased, and that may entrench inequalities, whether the bias manifests itself as racial bias, gender bias or geographical bias. Solutions could include hiring a diverse group of people who are aware of different biases and can call out unconscious bias, or having third parties examine how systems are implemented and provide feedback to improve outcomes.
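As one illustration of what such an audit might compute, here is a minimal sketch, not taken from Lim’s article, that screens a system’s decisions for a demographic parity gap before release. The decision data, group names and 0.20 tolerance are illustrative assumptions, and demographic parity is only one of several fairness measures an audit could apply.

```python
def selection_rates(outcomes):
    """Compute the positive-outcome rate for each group.
    `outcomes` maps group name -> list of 0/1 decisions."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def demographic_parity_gap(outcomes):
    """Gap between the highest and lowest group selection rates;
    a common screen, though never sufficient on its own."""
    rates = selection_rates(outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical pre-release audit data: approval decisions by group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

gap = demographic_parity_gap(decisions)
print(f"Selection rates: {selection_rates(decisions)}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.20:  # illustrative tolerance, not a legal standard
    print("Flag for third-party review before release.")
```

A statistical screen like this is the easy half of an audit; the harder half is the human review Lim describes, in which diverse teams and independent third parties question how the system is built and used.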
The article also looks at AI’s normative impact on the rule of law, which in this case involves assessing whether our current legal frameworks adequately address these challenges or whether reforms are necessary to uphold the rule of law in the age of AI. Emerging technologies like AI can influence the fundamental principles and values that underpin our legal system, including fairness, justice, transparency and accountability. AI technologies can challenge existing legal norms by introducing new complexities into decision-making processes, potentially affecting how laws are interpreted and applied.
Q: What observations further demonstrate the importance of an equity-centered approach to AI governance?
Lim: In September, the “Framework Convention on Artificial Intelligence” was signed by the United States and the European Union (EU). This AI treaty was a major milestone in establishing a global framework to ensure that AI systems respect human rights, democracy and the rule of law. The treaty specifies a risk-based approach, requiring more oversight of high-risk AI applications in sensitive sectors such as health care and criminal justice. Different jurisdictions — specifically the U.S., the EU, China and Singapore — take different approaches to AI governance: the U.S. is more market-based; the EU is rights-based; China follows a command economy model; and Singapore follows a soft law model, which serves as a framework rather than enforceable regulatory obligations. The treaty emphasizes the importance of global collaboration to address challenges across these AI governance approaches. My proposed framework embeds the principles of justice, equity and inclusivity throughout AI’s lifecycle, which aligns with the overarching goals of the treaty. While the equity by design framework does not focus on post-implementation protections, it emphasizes that AI should advance human rights for marginalized communities and that there should be more transparent and protective audits.