Is AI Trustworthy? The Quest for Ethical AI Governance
Embark on a journey exploring the challenges and opportunities in ensuring trust and ethical governance in AI.
Welcome to the new edition of Trend Hacker
Hi! Due to the importance of the topic, this edition on “AI and Ethics” is entirely free for you to delve into. I aim to offer a glimpse into the complex field of emerging trends, their components, and their potential impacts on society and business. This trend transcends tech circles and spans sectors from healthcare and finance to education and governance, fundamentally reshaping how they function. Yet the most immediate impact will be societal. Because the topic is so vital, it is my pleasure to present this comprehensive analysis free of charge, empowering you to understand the implications of this transformative trend.
Ignite the trendsetter in your circle by inviting your most forward-thinking friends to join our journey of discovery. And remember, sharing is caring - amplify our innovative insights on your social channels, and let's shape the future together. Ready to make some waves? Click on this link!
Trend Summary
Setting the Context: The rapid development of AI has raised ethical questions that we need to address. A global standard for AI ethics, similar to the “UN Declaration of Human Rights,” could provide guidance.
Hacking the Trend: Key ethical components in AI include bias, discrimination, inclusion, fairness, transparency, accountability, autonomy, agency, trustworthiness, and governance. Numerous examples of bias and discrimination by AI systems in different sectors illustrate the need for ethical considerations.
Cautionary Areas of Impact: AI applications could significantly impact sectors like social media, healthcare, public service departments, disaster response organizations, and prisons. Ethical guidelines should be in place to avoid potential adverse effects.
Positive Prospects in Ethical AI Development: Ethical AI can contribute to fairness, equality, empathy amplification, bias elimination, the growth of public trust, and privacy enhancement. It will require a commitment to ethics throughout the entire AI development process.
Possible Countermovements: Potential countermovements include data monopolies, over-regulation, lack of global standards, countries prioritizing their interests in AI development, and resistance from stakeholders. These factors could undermine efforts toward ethical AI.
Starting the Deep Dive
Setting the Stage: Contextualizing AI and Ethics
AI, defined as software that learns by example, holds the potential to revolutionize industries and societies. However, its rapid development and deployment have raised critical ethical questions. Ethics, a branch of philosophy, involves systematizing, defending, and recommending concepts of right and wrong conduct.
The difficulty in defining ethical standards for AI lies in countries’ differing ethical, legal, and cultural perspectives. This plurality raises the question of whether a global ethic is possible. As AI advances, we must explore whether we need a dedicated Ethical Standard for AI or should move in the direction of the “UN Declaration of Human Rights.” Although not all countries have signed this Declaration, it provides common ground and rules. A similar declaration could guide countries in defining their own ethical standards.
Applying Ethics: How Applied and Normative Ethics Relate to AI
From a practical perspective, two of the four branches of ethics are likely to be most critical in the next ten years: Applied Ethics and Normative Ethics. Here’s why:
Applied Ethics: As AI technologies continue to permeate every aspect of our lives, from healthcare and education to finance and transportation, it’s crucial to examine the specific ethical issues that arise in each of these contexts. Applied ethics provides a framework for analyzing real-world ethical dilemmas and developing practical solutions.
Normative Ethics: As we continue developing and deploying AI systems, we need clear standards or norms for ethical AI behavior. Normative ethics provides these standards, helping us to determine what AI systems should and should not do. This branch of ethics will be crucial in guiding the design of AI systems, ensuring that they behave in ways that align with our moral values and societal norms.
Hacking the Trend: Ethical Components of AI
This process, in which we break down the trend of Ethics in AI into its core elements for a deeper understanding, allows us to reflect on the individual components and their implications. It is impossible to hack every part of AI and Ethics in one post; consequently, this analysis highlights the ethical components themselves.
Bias, Discrimination, Inclusion, and Fairness: This involves recognizing and mitigating the potential for AI systems to reflect and reinforce existing biases in the data they learn from. There have been numerous instances where AI has inadvertently perpetuated biases. For example, in 2017, a crime-predicting algorithm in Florida falsely labeled black people as re-offenders at nearly twice the rate of white people, a clear instance of racial bias in AI algorithms (Engadget, 2017). In another example, a study in 2019 found that Facebook’s ad delivery system was gender-biased against specific demographics (The Verge, 2019). The Allegheny Family Screening Tool (AFST), an AI-based system that predicts the likelihood of child abuse using data from public agencies, was found to have racial biases because it relied on past referral calls, which were more likely to involve Black and biracial families than white ones. Amazon’s hiring algorithm, which was supposed to help recruit top talent, was scrapped after it showed bias against women by penalizing resumes that contained the word “women’s” or mentioned women-only colleges (HBR, 2020). Alphabet’s Google created a hate speech-detection algorithm that assigned higher “toxicity scores” to the speech of African Americans than to that of their white counterparts (TechCrunch, 2020). COMPAS, an AI-based system that assesses the risk of recidivism for criminal defendants, has been criticized as racially biased and inaccurate, overestimating the risk for Black defendants and underestimating it for white ones (Adheesh Kadiresan et al., 2022).
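The biases catalogued above can also be measured. As an illustrative sketch (not a reconstruction of any system named here), the Python snippet below computes per-group selection rates and the disparate-impact ratio, a common fairness check sometimes summarized as the “four-fifths rule”; all data and names are hypothetical:

```python
from collections import defaultdict

def disparate_impact(outcomes):
    """Compute selection rates per group and the disparate-impact ratio.

    outcomes: list of (group, selected) pairs, where selected is True/False.
    Returns (rates_by_group, ratio of lowest to highest selection rate).
    A ratio below 0.8 is the usual "four-fifths rule" warning threshold.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical hiring decisions: (group, was_shortlisted)
data = [("A", True)] * 60 + [("A", False)] * 40 \
     + [("B", True)] * 30 + [("B", False)] * 70

rates, ratio = disparate_impact(data)
print(rates)  # {'A': 0.6, 'B': 0.3}
print(ratio)  # 0.5 -> well below the 0.8 threshold, flagging possible bias
```

A check like this only detects one narrow kind of unfairness; real audits combine several metrics and, crucially, an understanding of how the training data was collected.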
Transparency: This involves making the workings of AI systems clear and understandable to avoid the “black box” problem. In the COMPAS case mentioned above, the lack of transparency about how the software made its predictions led to controversy and criticism.
Accountability: This involves holding AI systems and their developers responsible for the decisions and outcomes they produce. In 2020, Global Fishing Watch, an AI-driven partnership between Google and the advocacy groups Oceana and SkyTruth that collects vessel location data from satellite images and tracking systems, held Chinese fishing vessels accountable for illegally operating in North Korean waters. The AI system identified the ships despite their attempts to remain undetected, highlighting the role of AI in enforcing accountability.
Autonomy and Agency: This involves respecting the autonomy of individuals and ensuring that AI systems do not unduly influence or coerce human decision-making. In 2020, a study used AI to predict recidivism among parolees in Indiana. The use of AI in such contexts raised concerns about its potential to infringe on individuals’ autonomy and agency by making predictions that could influence their future opportunities and freedoms. Another case is the officially denied story from the US Air Force, in which, during a simulation, an AI drone virtually killed its operator because he “hindered” the operation’s success (The Guardian, 2023).
Trustworthiness: This involves building AI systems that are reliable and trustworthy and that operate in ways consistent with human values and expectations. There have been instances where AI systems failed to meet these standards. For example, lawyers who used ChatGPT for case research found that the information provided was fictional. Moreover, AI-generated false articles increasingly flood the internet. NewsGuard, a company that gives trust rankings to online news outlets, found forty-nine websites that appeared to be publishing hundreds of articles a day that look like news stories but seem to have been written by AI with little or no human oversight, specifically to generate advertising revenue (The Guardian, 2023).
Governance: This involves implementing robust governance structures and processes to oversee the use of AI and ensure it aligns with ethical principles. The European Union’s General Data Protection Regulation (GDPR) has set new data protection and privacy standards, highlighting the need for such oversight. In the US, by contrast, these standards are largely self-imposed by companies. The GDPR regulates European efforts, the UK has its Centre for Data Ethics and Innovation, and Canada has the Montreal Declaration for Responsible AI.
Areas of Concern: Potential Negative Impacts
AI solutions might influence large population segments and cause considerable adverse effects within the next ten years. I have therefore chosen these parameters to focus our discussion.
Social Media: AI algorithms on social media platforms expose users to opinions like their own. For instance, TikTok’s algorithm, highlighted in a Bloomberg Businessweek exposé, serves users content that mirrors their existing beliefs. This filtering can lead to societal polarization and misinformation. Addressing this issue can prevent potential harm to vulnerable groups such as teenagers. People with depression also suffer in these “echo chambers,” receiving similar content that risks worsening their mental health.
Healthcare: AI is pivotal in healthcare, influencing decision-making, particularly in disease diagnosis and treatment. It already improves the quality of medical imaging, aiding in disease diagnosis and predicting patient outcomes. Furthermore, AI analyzes large volumes of health data to uncover patterns and trends that can inform treatment decisions. However, this could lead to discrimination in pricing, treatment availability, and personalized medicine. For instance, if AI is used to personalize pricing, it could increase prices for people with a possible (predicted) future health issue.
Public Service Departments: Integrating AI in public service departments could revolutionize service delivery. AI’s potential to streamline processes, boost efficiency, and inform decision-making is significant. For instance, AI could predict future demand for public services based on demographic trends, economic indicators, and other relevant data. Consequently, AI could help determine which areas most need social welfare, fire protection, and health care or could create automatic school rankings. However, despite these benefits, we must ensure equality and avoid bias in AI implementations.
Disaster Response Organizations: Incorporating AI into organizations can enhance operations. However, the handling of privacy and data presents a significant challenge. For example, AI systems might need to collect and analyze sensitive data to predict disasters, evaluate action plans and coordinate responses. Therefore, we must enforce robust data handling and privacy protection measures and ensure a balanced reliance on AI and human decision-making.
Prisons: In the next decade, AI will play a crucial role in prisons, affecting inmates, prison staff, and the broader community. AI can enhance prison safety by informing parole decisions and monitoring illegal activities. However, we must first ensure these AI systems operate fairly and accountably, preventing bias and respecting individual rights.
The Bright Side: Potential Benefits
Ethical AI advancements bring not only risks but also potential positive impacts. Here are some key areas of progress:
Progress towards fairness and equality: The current trend in AI ethics reflects increasing awareness of the need to reduce bias in AI systems. By acknowledging and addressing these biases, we promote a society where AI treats all users equally, regardless of race, gender, or socioeconomic status. This careful consideration of ethics should extend through the entire AI development process, from design to deployment and maintenance. Commitment to ethical AI can distinguish a product or service in a socially conscious consumer market.
Empathy Amplification: Ethical AI systems could start valuing diverse groups’ needs and values. This focus on empathy can foster a society where AI technology respects and appreciates our differences rather than enhancing divisions. In addition, guiding the design of AI systems with ethical considerations can result in products better aligned with user needs and expectations, improving user satisfaction and engagement.
Getting rid of unfair preferences: The quest for eliminating bias in AI systems will intensify. For example, in sectors like recruitment and lending, developers could create AI tools that prevent discrimination based on race, gender, or socioeconomic status. While integrating ethics into AI may involve more upfront work, it can yield long-term efficiency gains by reducing the risks of algorithmic bias, discrimination, and reputational damage.
Growth of Public Trust: Greater accountability and transparency in AI systems will bolster public trust in these technologies. This trust can encourage wider adoption of AI applications across sectors, including healthcare and education. A Pew Research Center study highlights trust as a pivotal factor in public acceptance of AI. Businesses can win that trust with ethically designed AI.
Privacy Enhancement: As privacy regulations become stricter, businesses may use techniques like differential privacy and federated learning to develop AI models that respect user privacy. These advancements promise a future where AI becomes a valuable tool and adheres to ethical standards, ensuring fairness and respect for all users.
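Differential privacy, mentioned above, can be made concrete with a small sketch. The hypothetical Python example below answers a count query over sensitive records and adds calibrated Laplace noise, so no single individual's record meaningfully changes the published result; the records and the `dp_count` helper are illustrative assumptions, not from any named system:

```python
import math
import random

def dp_count(values, predicate, epsilon, rng=random):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the true answer by at most 1), so the noise scale is
    1/epsilon. Smaller epsilon = stronger privacy, noisier answers.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) by inverse-CDF: u uniform in (-0.5, 0.5).
    u = rng.random() - 0.5
    u = max(u, -0.5 + 1e-15)  # guard against the log(0) edge case
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical health records: patient ages.
ages = [34, 51, 29, 62, 47, 58, 71, 44]

# Publish "how many patients are 50 or older?" with privacy budget 0.5.
noisy = dp_count(ages, lambda a: a >= 50, epsilon=0.5)
print(noisy)  # the true count is 4; the published value fluctuates around it
```

Because the noise is zero-mean, repeated independent queries average out to the true count, which is exactly why real deployments also track a cumulative privacy budget across queries.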
Potential Setbacks: Countermovements
Data Monopoly Threat: A few dominant corporations controlling all AI data may inhibit the progression of ethical AI. According to a Harvard Business Review report, these data monopolies might consolidate power and wealth, potentially threatening ethical AI initiatives.
Risks of Over-Regulation: Over-stringent regulations may obstruct AI innovation. Sundar Pichai, CEO of Google, cautioned about the implications of over-regulation in a 2020 editorial, highlighting its potential to suffocate innovation and limit the positive impacts of AI. Japan’s anime artists, who insist that AI models respect their copyrights, further illustrate this tension.
The Phenomenon of AI Nationalism: Countries might prioritize their national interests in AI development, potentially indulging in unethical practices. Such AI nationalism could incite a global AI arms race, compromising ethical stipulations. This phenomenon is unfolding, with nations focusing on GDP and revenue drivers. The likelihood of “Rogue Nations” is high, as recent actions in Japan and emerging trends in Israel suggest. Japan has clearly stated that no copyright restrictions should apply to AI training sets.
Inconsistency of Global Standards: Regions may adopt diverse ethical standards for AI, causing fragmentation. Ethical norms can vary extensively across countries and cultures, and a universally accepted ethical framework for AI might face resistance from regions with unique standards and values. Japan’s recent decision reignites the discussion, underlining the necessity for a lowest common denominator in ethical norms, ensuring AI does not “harm human beings.” This baseline incorporates bias, discrimination, inclusion, and fairness within the output of AI logic. Consequently, such norms would foster a “Human-Centric” AI that asserts the primacy of human values and well-being in designing and implementing AI systems. It focuses on creating AI that supports human autonomy, facilitates meaningful human control, and promotes human welfare and dignity.
Privacy Rights: A Privacy Rights Movement might emerge in response to AI systems’ extensive data collection and usage, advocating for enhanced data protection laws and practices. This movement underscores the necessity for AI systems to respect user privacy, pushing for measures like data anonymization and informed consent. The trend will likely grow as AI continues to evolve and integrate further into sectors such as healthcare and finance.
Final Thoughts
As we approach the forefront of AI innovation, understanding and implementing ethical considerations in AI development is more critical than ever.
However, this journey toward ethical AI isn’t an easy one. It demands collective effort from technologists, policymakers, and users alike. Robust governance structures, globally agreed ethical frameworks, and a solid commitment to societal well-being are needed. It’s our responsibility to ensure AI serves humanity without infringing upon our cherished values.
As we navigate the AI landscape, let’s remember that technological progress should never supersede our human rights. Instead, it should uphold them, nurturing a future where AI and ethics coexist seamlessly. Ultimately, ethical AI is not just about creating intelligent machines—it’s about advancing our society toward fairness, equality, and respect for all.
Word of Notice
The impact analysis presented here strives to be pragmatic, considering challenges from operational development to social acceptance. However, given the inherent nature of such explorations, it remains speculative.
Lastly, while the analysis explores how this trend can influence the “fabric of our lived reality,” it does so by pre-selecting some contributors from a pool of over 300. Therefore, consider this analysis a thought-provoking, speculative, selected snapshot of what may lie ahead. Applying the trend to your business might require further contextualization and deep diving into the logic and product.