Tech Ethics & AI Regulation

May 5, 2025
AI Ethics Framework

AI ethics are the set of guiding principles that stakeholders (from engineers to government officials) use to ensure artificial intelligence technology is developed and used responsibly. This means taking a safe, secure, humane, and environmentally friendly approach to AI.

A strong AI code of ethics can include avoiding bias, ensuring privacy of users and their data, and mitigating environmental risks. Codes of ethics in companies and government regulation frameworks are two main ways that AI ethics can be implemented. By covering global and national ethical AI issues, and laying the policy groundwork for ethical AI in companies, both approaches help regulate AI technology.

More broadly, the discussion around AI ethics has progressed beyond the academic research and non-profit organizations where it began. Today, big tech companies like IBM, Google, and Meta have assembled teams to tackle ethical issues that arise from collecting massive amounts of data. At the same time, government and intergovernmental entities have begun to devise regulations and ethics policy based on academic research.

Stakeholders in AI ethics

Designing ethical principles for responsible AI use and development requires collaboration between industry actors, business leaders, and government representatives. Stakeholders must examine how social, economic, and political issues intersect with AI and determine how machines and humans can coexist harmoniously by limiting potential risks or unintended consequences.

Each of these actors plays an important role in reducing bias and risk in AI technologies:

Academics: 

Researchers and professors are responsible for developing the research, data, and theory that can support governments, corporations, and non-profit organizations.

Government: 

Agencies and committees within a government can help facilitate AI ethics in a nation. A good example of this is the Preparing for the Future of Artificial Intelligence report, which was developed by the National Science and Technology Council (NSTC) in 2016. It outlines AI and its relationship to public outreach, regulation, governance, economy, and security.

Intergovernmental entities: 

Entities like the United Nations and the World Bank are responsible for raising awareness and drafting agreements for AI ethics globally. For example, UNESCO’s 193 member states adopted the first ever global agreement on the Ethics of AI in November 2021 to promote human rights and dignity.

Non-profit organizations: 

Non-profit organizations like Black in AI and Queer in AI help diverse groups gain representation within AI technology. The Future of Life Institute created 23 guidelines that are now the Asilomar AI Principles, which outline specific risks, challenges, and outcomes for AI technologies.

Private companies: 

Executives at Google, Meta, and other tech companies, as well as banking, consulting, health care, and other private sector industries that use AI technology, are responsible for creating ethics teams and codes of conduct. This often sets a standard for companies to follow.


Why are AI ethics important?

AI ethics are important because AI technology is meant to augment or replace human intelligence—but when technology is designed to replicate human life, the same issues that can cloud human judgment can seep into the technology.

AI projects built on biased or inaccurate data can have harmful consequences, particularly for underrepresented or marginalized groups and individuals. Further, if AI algorithms and machine learning models are built too hastily, correcting learned biases later can become unmanageable for engineers and product managers. It’s easier to incorporate a code of ethics during the development process to mitigate future risks.

AI ethics in film and TV

Science fiction—in books, film, and television—has long toyed with the notion of ethics in artificial intelligence. In Spike Jonze’s 2013 film Her, a computer user falls in love with his operating system because of her seductive voice. It’s entertaining to imagine the ways in which machines could influence human lives and push the boundaries of “love,” but it also highlights the need for thoughtfulness around these developing systems.

It may be easiest to illustrate the ethics of artificial intelligence with real-life examples. In December 2022, the app Lensa AI used artificial intelligence to generate cool, cartoon-looking profile photos from people’s regular images. From an ethical standpoint, some people criticized the app for not giving credit or enough money to the artists who created the original digital art on which the AI was trained. According to The Washington Post, Lensa was trained on billions of photographs sourced from the internet without consent.