Information Technology and Ethics/Algorithmic Bias and Fairness
What is Algorithmic Bias
Algorithmic bias occurs when a computer system makes systematic, repeatable errors that create unfair outcomes or discriminate against people based on attributes such as race, gender, or socioeconomic standing. Bias can emerge in several places, including the design of the algorithm, uses of the algorithm that differ from its intended use, and the data used to train it. Such bias can have a profound effect on the people it is applied to and can perpetuate societal inequalities.
History
Algorithmic bias was first described by Joseph Weizenbaum in his 1976 book Computer Power and Human Reason. Weizenbaum suggested that bias could arise both from the data given to a program and from the way the program itself is written[1]. Because a program can only process data and reach decisions using the set of rules it was given, there is a concern that the program will carry the same biases and expectations as its author. Weizenbaum also noted that any data fed into a program reflects "human decision-making processes" in how it is selected. He further warned against blind trust in a program by its author: a writer who cannot understand the program's decision-making is, he argued, like a tourist making his way through a hotel by turning left or right on the flip of a coin[1]. Even if the solution ends up being correct, the process is irreproducible and inconsistent.
An example of algorithmic bias from this era was the applicant screening program used by St. George's Hospital Medical School. If an applicant was female or had a "foreign-sounding" name, the program docked points from the application, giving white males a much higher chance of admission. The program was written by Dr. Geoffrey Franglen, who intended it to reduce discriminatory decisions and make the initial application round easier[2]. Franglen thought that ceding the responsibility to a program would make the process both easier and fairer. Instead, he had coded the bias directly into the program, and it perpetuated the same racial and gender bias that the human assessors had shown.
Modern algorithms are more carefully built and often avoid much of the bias a writer might accidentally introduce through rules or data, but algorithmic bias still appears, often in unpredictable ways that are hard to account for. Algorithms are also increasingly treated less as tools for achieving some end and more as authorities that generate that end while presenting themselves as merely the virtual means. Instead of using algorithms to study human behavior, they can become a way for human behavior to be defined[3]. In response, researchers and practitioners formed the working group Fairness, Accountability, and Transparency in Machine Learning (FAT)[4]. Its members set out to scrutinize the outcomes of algorithms and to vote on whether particular algorithms have harmful effects and should be controlled or restricted. Many, however, doubt that FAT can act effectively because many of its members are funded by large corporations.
Existing Frameworks and Regulations
Frameworks
When it comes to frameworks for artificial intelligence, there is an ever-growing number to choose from. Each is at least somewhat different from the others because each is designed with a specific purpose in mind; for example, according to Intel, JAX is a framework "designed for complex numerical computations on high performance devices like GPUs and TPUs"[5]. There are frameworks available for nearly any project imaginable, and more are being created all the time. So far, ethics receives little consideration within artificial intelligence frameworks, since the technology is still new and evolving, but there are many potential areas of concern. Imagine, for example, an AI chatbot trained on a framework that took no account of ethics. Through its interactions with people, that chatbot could say things it should not say or expose information it was not supposed to spread. That could land a company in serious trouble, both from the incident itself and from the reputational damage it could bring.
Regulations
With regard to the regulation of artificial intelligence, many states have passed legislation, much of it aimed at safeguarding data privacy and ensuring the accountability and transparency of AI. For example, according to Rachel Wright, "Texas HB 2060 (2023) is one such example. This bill established an AI advisory council consisting of public and elected officials, academics and technology experts. The council was tasked with studying and monitoring AI systems developed or deployed by states agencies as well as issuing policy recommendations regarding data privacy and preventing algorithmic discrimination."[6] There is also a Blueprint for an AI Bill of Rights, produced by the Office of Science and Technology Policy, which lays out the rights people should have when AI is in use. The rights it covers are protection from unsafe systems, protection from algorithmic discrimination, data privacy measures, the right to know when AI is being used, and the right to opt out of its use[7].
Case Studies
Case Study 1: Predictive Policing: Chicago Police Department and the Strategic Subjects List[8]
Predictive policing algorithms use data analysis and machine learning to predict where crime is likely to occur and to distribute law enforcement resources accordingly. Advocates believe these systems can reduce crime and improve public safety, but critics fear they may be biased and infringe on civil liberties.
A study by Rudin et al. in 2020 examined the utilization of predictive policing algorithms in Chicago. The research discovered that these algorithms focused mainly on Black and Hispanic areas, resulting in unequal monitoring and policing of minority communities. Furthermore, the algorithms depended on past crime data, which could mirror biases in policing methods and uphold systemic disparities.
Ethical Implications
The employment of predictive policing algorithms gives rise to ethical concerns regarding fairness, transparency, and accountability. Critics claim that these systems may worsen current inequalities in policing and erode trust between law enforcement and marginalized groups.
Public Debate and Reform Efforts
Groups advocating for civil rights, community organizations, and advocates are pushing for more transparency, community involvement, and accountability in the creation and use of predictive policing algorithms such as the SSL.
Legislation has been implemented in certain areas to oversee the use of predictive policing algorithms, with a focus on transparency, accountability, and preventing bias and discrimination.
The case of the Chicago Police Department's Strategic Subjects List (SSL) shows how predictive policing algorithms have intricate ethical and social consequences. Although these algorithms offer the potential to reduce crime and improve public safety, they also bring up important issues regarding transparency, accountability, fairness, and the risk of bias and discrimination. Dealing with these obstacles involves thoughtful reflection on the ethical principles and values that should steer the creation and utilization of predictive policing technologies to guarantee they advance justice, fairness, and the safeguarding of civil liberties.
Case Study 2: Amazon Hiring System
To assist in hiring top talent, Amazon created an AI-driven recruitment tool in 2014. The model was trained on resumes submitted to Amazon over a ten-year period. Because men predominate in the technology industry, the majority of those resumes came from men. The system consequently learned to penalize resumes that contained the phrase "women's" and to devalue graduates of all-women's colleges. Unable to fully correct the biased system, Amazon eventually abandoned the project. The example demonstrated how, if not properly examined and corrected, AI algorithms can inherit and magnify social biases contained in training data. It served as a warning about the ethical perils of using AI recruiting tools without sufficient bias-mitigation measures and testing, and it highlighted the need for a diverse workforce in AI and for thorough bias evaluations during development.
Challenges
Data Bias
Algorithms trained on skewed data sets that reflect past biases or social disparities will pick up on and reinforce those biases. ProPublica's analysis uncovered racial bias in the COMPAS software used in the American criminal justice system: the algorithm was more likely to misclassify black defendants as high-risk compared to white defendants. This highlights a common issue, unbalanced data sets; if certain demographics are underrepresented in the training data, the algorithm may perform poorly for those groups.[9] Addressing this is challenging because bias is often woven into the fabric of society, and even accurately collected data can reflect long-standing inequalities. Historical bias is difficult to remove because it is not solved by fixing a data problem; the problem lies with society itself. Bias can also be hard to spot. If a group is underrepresented in the training data, the algorithm will underperform for that group, and this will not always be obvious unless results are carefully audited. It is also not always easy to define fairness: what counts as a "fair outcome" differs with context, differing goals lead to different fixes, and there is not always a "right" answer. Many systems operate as "black boxes" with no transparency into their algorithms, which makes it difficult to trace how a model arrived at its results and to identify where bias enters. Addressing data bias is therefore not purely technical; it is a process that requires hard decisions and constant vigilance.
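The kind of per-group audit described above can be sketched in a few lines. The arrays below are hypothetical stand-ins for a real model's outputs, and the group labels and metrics are illustrative assumptions rather than any actual system's results.

```python
# Minimal sketch of a per-group audit: compare error rates across
# demographic groups. In practice these arrays would come from a
# held-out evaluation set, not be hard-coded.
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0, 0, 1])   # actual outcomes
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])   # model predictions
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    acc = np.mean(y_true[mask] == y_pred[mask])
    # false positive rate: predicted positive among actual negatives
    negatives = mask & (y_true == 0)
    fpr = np.mean(y_pred[negatives] == 1) if negatives.any() else float("nan")
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}, FPR={fpr:.2f}")
```

A gap in accuracy or false positive rate between groups is exactly the kind of disparity that stays invisible unless results are broken out and audited this way.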
Model Bias
Weighting and Feature Selection
Algorithms may prioritize certain variables over others, unintentionally reflecting societal bias and producing skewed outcomes. In the COMPAS tool used in American court systems, black defendants were assigned higher risk scores than white defendants with similar criminal histories. This bias arises because the historical risk data used to build the model reflects bias in past decisions. [10]
Proxy Variables
Designers might use indirect measures that stand in for protected attributes, such as zip codes serving as proxies for race. One study found that mortgage algorithms charged minority borrowers higher interest rates despite similar creditworthiness, because the systems used neighborhood demographics to influence borrowing rates. [11]
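A rough way to screen for a proxy variable is to ask how well the candidate feature predicts the protected attribute on its own. The sketch below uses made-up zip codes and group labels purely for illustration.

```python
# Sketch of a simple proxy check: how well does a supposedly neutral
# feature (zip code) predict a protected attribute? Data is invented.
from collections import defaultdict

zip_code = ["60601", "60601", "60827", "60827", "60827", "60601"]
race     = ["white", "white", "black", "black", "black", "white"]

# Count how often each race appears within each zip code.
counts = defaultdict(lambda: defaultdict(int))
for z, r in zip(zip_code, race):
    counts[z][r] += 1

# If guessing the majority race per zip code is far better than chance,
# the zip code effectively encodes the protected attribute.
correct = sum(max(by_race.values()) for by_race in counts.values())
print(f"best-guess accuracy of race from zip code: {correct / len(race):.2f}")
```

When a feature predicts the protected attribute this strongly, including it in a model can reproduce discrimination even if the protected attribute itself is excluded.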
Algorithmic Design Choices
The way algorithms are designed can introduce bias: choices about which factors to include and how to weight them can lead to discriminatory outcomes, and even with unbiased data, models can exhibit bias due to the algorithm's design or assumptions. In 2019, the U.S. Department of Housing and Urban Development (HUD) lodged a complaint against Facebook, accusing the company's advertising platform of enabling discrimination based on race, gender, and other protected characteristics. The platform allowed advertisers to exclude users by race, gender, or religion and offered "lookalike audience" targeting, which amplified historical patterns of exclusion. The case highlights the legal and ethical consequences of biased algorithms in advertising and housing.[12]
Feedback Loops
Algorithms that rely on user-generated data risk creating self-reinforcing cycles of bias.
Reinforcement of Historical Inequalities
Predictive policing tools, like those used in Bogotá, Colombia, have been known to disproportionately flag Black-majority neighborhoods as high-crime areas. The bias arises from a feedback loop: over-policing of those areas is already reflected in the training data, which leads to more patrols and arrests there, which further skews the data and reinforces the pattern. "In a district where few crimes were reported, the tool predicted around 20% of the actual hot spots—locations with a high rate of crime. On the other hand, in a district with a high number of reports, the tool predicted 20% more hot spots than there really were." [13]
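The dynamic can be illustrated with a toy simulation; every number below is invented. Two districts have identical true crime rates, but patrols are allocated according to previously recorded crime, so the district that starts out over-policed keeps generating disproportionately many records.

```python
# Toy feedback-loop simulation (illustrative numbers, not real crime data).
true_rate = {"district_A": 0.5, "district_B": 0.5}   # identical true crime rates
recorded  = {"district_A": 20,  "district_B": 10}    # district A starts over-policed
patrols_per_year = 100

for year in range(1, 6):
    total = sum(recorded.values())
    for d in recorded:
        patrols = patrols_per_year * recorded[d] / total   # allocate by past records
        recorded[d] += patrols * true_rate[d]              # observations scale with patrols
    print(f"year {year}: " + ", ".join(f"{d}={v:.0f}" for d, v in recorded.items()))
```

Even though the underlying crime rates are equal, the recorded data never converges: the initial imbalance in patrols is locked in because the system keeps measuring where it keeps looking.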
Perpetuating Bias through Feedback Loops
Algorithms that depend on user data have the potential to magnify existing biases as time goes on. If an algorithm favors a specific group, it could produce outcomes that perpetuate that bias in user engagements. Algorithms that produce biased results can reinforce societal biases when incorporated into decision-making processes, establishing a cycle of perpetuation. YouTube's recommendation algorithm has been criticized for promoting extremist content. This bias arises from the algorithm's tendency to recommend content similar to what a user has previously watched, potentially leading users down paths of radicalization.[14]
Diagnosis Feedback Failures
A 2024 study highlights diagnostic algorithms trained on biased datasets that underdiagnose diseases in minority groups, perpetuating their underrepresentation. For instance, if milder cases in marginalized populations are missed, future models trained on that output will further underestimate health disparities in those groups. These feedback loops demonstrate how algorithmic outputs in the real world can shape the very data that is later collected from it. [15]
Lack of Transparency
Many algorithms are complex "black boxes," meaning answers are given without any explanation of how the decisions are made or how the outcomes are produced. When a physician consults an expert, for example, the physician expects a clear explanation based on strong medical knowledge; similarly, AI systems should be able to explain and justify their outcomes. [16] Because these systems are hard to understand, it is difficult to question how they arrive at decisions, which makes it challenging to identify and address bias. In a legal case involving the US Department of Homeland Security, a US citizen was selected for extra screening at the border by an algorithm used by Customs and Border Protection (CBP). The court granted the plaintiff the right to contest the algorithm's decision because of its lack of transparency.[17] This highlights how many individuals do not fully understand how algorithmic systems work or how they could affect their lives. The lack of transparency makes it hard to identify and fix bias, creating problems in areas such as healthcare, finance, and criminal justice. Making these systems more transparent would also provide a way to hold companies and government agencies responsible for harmful algorithmic outcomes.
Fairness Metrics and Trade-offs
As AI takes on more and more decision-making, it has become difficult to balance different concepts of fairness. An algorithmic system can produce accurate output and still be seen as unfair to certain groups, because prioritizing one fairness measure can mean sacrificing another. Some solutions sound promising for minimizing bias but come with trade-offs. For example, in 2016, Google researchers discovered that an image recognition algorithm had a greater error rate for individuals with darker skin than for those with lighter skin. When they tried to counteract this bias by fine-tuning the model to lower overall error rates, error rates for people with lighter skin rose, showcasing the compromises involved in tackling algorithmic bias.[18] Attempts to reduce bias and increase fairness in AI systems can make them less accurate and reduce their performance. There is no clear guidance on how to determine whether an AI system is fair, and no agreement on what the "right" metrics should be. [19] Developers must understand the importance of fairness and approach these problems not only technically but also ethically. Although this process may require regular updates and feedback, it is important to keep improving the fairness of AI algorithms over time.
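The tension between fairness definitions can be made concrete with a small example. The sketch below uses made-up labels and predictions (not data from the study above) and computes two common metrics, demographic parity and equal opportunity, showing that a model can satisfy one while violating the other.

```python
# Two fairness metrics on hypothetical predictions, illustrating that
# they can disagree for the same model.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array(["light", "light", "light", "light",
                   "dark", "dark", "dark", "dark"])

def selection_rate(g):
    return y_pred[group == g].mean()          # P(pred = 1 | group)

def true_positive_rate(g):
    m = (group == g) & (y_true == 1)
    return y_pred[m].mean()                   # P(pred = 1 | y = 1, group)

# Demographic parity: selection rates should match across groups.
print("demographic parity gap:",
      abs(selection_rate("light") - selection_rate("dark")))
# Equal opportunity: true positive rates should match across groups.
print("equal opportunity gap:",
      abs(true_positive_rate("light") - true_positive_rate("dark")))
```

Here both groups are selected at the same rate (parity gap of 0), yet qualified members of one group are approved far less often (opportunity gap of 0.5), so declaring the system "fair" depends entirely on which metric is chosen.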
Legal Issues
Discrimination Laws
Some companies have faced criticism for biases embedded within their algorithms, leading to formal investigations. In 2019, for example, Apple was publicly accused of gender bias after reports emerged that women using the Apple Card received significantly lower credit limits than men, despite having comparable credit scores.[20] Even when biases are evident in certain algorithms, conducting investigations can be challenging under current legal frameworks. Although laws prohibiting discrimination exist, traditional frameworks often struggle to address the complexities introduced by AI systems.
One example is the Civil Rights Act of 1964 in the United States, which prohibits discrimination based on protected traits such as race, sex, and religion. However, applying such laws to algorithmic bias can be difficult. Many discrimination laws are built around the concept of intentionality—that is, proving that a person or entity deliberately intended to discriminate. For instance, the Equal Protection Clause of the Fourteenth Amendment (EPC) has been interpreted to prohibit only intentional discrimination.[21] Identifying intent is often challenging, and the development of AI systems, which may involve numerous individuals and processes, complicates this further.
Some legal frameworks take a different approach by focusing on outcomes rather than intent. Under EU non-discrimination law, for example, differential treatment based on a protected characteristic is considered sufficient to establish discrimination, regardless of intent. However, this standard is rarely applied to AI systems. One reason is that developers generally avoid encoding explicit forms of discrimination to maintain system accuracy and avoid unpredictable behavior.[22] As a result, direct discrimination is often absent, making outcome-based claims harder to establish.
Recognizing these challenges, policymakers and organizations have begun to consider reforms and new approaches. Existing laws, such as the U.S. Equal Credit Opportunity Act, already regulate fairness in lending decisions, including those made by AI algorithms. In addition, organizations like the Partnership on AI and IEEE have developed ethical guidelines that emphasize transparency, accountability, equality, and respect for human rights in AI development. [23]
Efforts to investigate AI bias could benefit from the involvement of technical and ethical experts who can evaluate systems for hidden forms of discrimination. While progress has been made, significant work remains before AI biases can be consistently identified and addressed within legal and regulatory systems.
Privacy and Data Protection
In the digital era, personal data is generated at an unprecedented rate as businesses, governments, and third parties collect and process enormous quantities of information, threatening individual privacy rights and data security. People's web browsing habits, biometric data, GPS locations, and financial records are routinely collected, often without their awareness or meaningful consent. New data protection laws, such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in the United States, govern how data may be collected, used, and shared. Even so, legal systems struggle to keep pace with rapid technological progress, especially as artificial intelligence (AI), facial recognition, and predictive analytics enter the market.
The GDPR, which took effect in May 2018, established itself as one of the most stringent data protection laws globally. It applies not only to EU-based organizations but also to any company that offers goods or services to EU residents, regardless of physical location. The GDPR grants EU residents fundamental rights to view their personal information, correct mistakes, demand the removal of data, and be protected from automated decision systems that profile them. The CCPA, which took effect in January 2020, gave California residents new rights: to know what personal data is collected about them and shared with third parties, and to opt out of such sales. Businesses must disclose their data collection practices through accessible channels. The CCPA marks important progress in U.S. data privacy legislation and sets an example for other states developing their own privacy statutes. Both regulations promote the transparency, individual control over data, and accountability that modern society demands from personal data protection legislation.
The transparency of algorithms is also emerging as an important matter before legal authorities worldwide. As machine learning models increasingly shape consequential decisions in business and government, people want to understand the reasoning behind them: they need to know when automated systems make decisions about them and should receive adequate explanations of how those decisions are reached. Organizations such as healthcare institutions need to pay particular attention to Article 22 of the GDPR, which governs automated individual decision-making and gives individuals the right to request human intervention before significant decisions are made about them by solely automated means. This requirement has important consequences for organizations using AI, particularly when their systems function as opaque algorithms that even their developers find hard to interpret.[24]
Regulatory Challenges
Defining the terms and scope boundary of an emerging technology is a herculean task. Legislators are struggling to understand the technology of Artificial Intelligence (AI), its applications, and impacts before they can even start defining regulations.
- What is AI and what does it encompass?
- How much should be within the scope boundary of any legislation without hindering innovation and business?
- What are the impacts both positive and negative that need to be accounted for?
- Which industries need to have protections against negative uses of AI?
- Who should oversee the regulation and upstream and downstream impacts?
- Who will bear the cost of the changes?
- Is it legally correct to regulate certain applications?
- Are all citizen rights protected or are some groups marginalized?
These are only some of the questions to consider when drafting legislation. Once a draft has been prepared, it needs support from additional legislators and sponsors to enter the system. The legislation itself must be legally sound and clear enough to be enforced and upheld in a court of law, and it must gather enough support from legislators to pass into law. It also needs to address a pressing issue in order to navigate the federal legislative system and become law quickly.
Once legislation is defined, legally sound, and enforceable, the affected parties must be given time to comply.
- How long should that be?
- Which industries are affected and how much time should they be given?
- Who should be required to implement the changes most urgently?
- Should the regulation be mandatory or voluntary?
There are conflicting ideological views on both sides of the aisle regarding how to regulate AI bias and discrimination while leaving space for innovation[25]. Legislators have to balance the need for the US to maintain its place as a leader in innovation with protecting its citizens from the implications of the technology[25]. The challenge lies in defining appropriate legislation that takes into account the interests of all parties, as well as agreeing on the legislation itself and its applicability within the democratic system.
The European Union has recently passed the EU AI Act, which attempts to capture this challenge by defining risk levels and a course of action for each level, effective in 2026 [26]. While the US federal government has yet to follow suit, states are taking matters into their own hands by passing their own legislation on AI bias and discrimination.
Ethical Considerations
Data Bias
AI algorithms are trained on data supplied by their developers, and an algorithm can unavoidably amplify or even exaggerate biases in that data. Careful selection and preparation of a varied, well-rounded, balanced dataset helps to avoid this and ensures that underlying prejudices can be found and handled, producing more equitable artificial intelligence systems. Biased artificial intelligence can have severe effects, including sustaining prejudice and harming underprivileged populations, and such harms can make society lose faith in technology. Developers and everyone else engaged in building AI algorithms have a moral obligation to make sure the systems reflect values of justice, responsibility, and fairness. A lack of transparency about data sources or biases violates ethical standards, and unchecked biases can prioritize efficiency or profit over human welfare, leading to unintended harm. Developers bear a responsibility to prioritize diverse dataset curation, inclusive team perspectives, and continuous monitoring. It is also important for those developing AI algorithms to remain transparent throughout creation and maintenance, and to conduct regular audits to ensure accountability. By embedding ethical principles into AI, we can minimize harm and do our best to uphold justice and equality for the foreseeable future.
Algorithmic Bias
Algorithmic bias arises not only from skewed training data but also from design choices, such as the assumptions built into an algorithm and the objectives it is optimized for. These factors can disadvantage specific subgroups and perpetuate inequalities.
Design Choices
Algorithms are shaped by their underlying architecture and design rules.
Inductive Bias
Models like decision trees prioritize splits that maximize information gain, which might unintentionally separate groups along sensitive lines, such as zip codes correlating with race.
Model Complexity
Simpler models are transparent but may lack the nuance required to capture patterns in certain datasets without oversimplifying them, while more complex models risk embedding additional layers of bias.
Feedback Loops
Deployed algorithms can reinforce biases over time; for example, a hiring tool favoring graduates from elite schools might exclude applicants from minority backgrounds, further narrowing diversity.
Data Privacy
Most artificial intelligence systems require substantial amounts of personal information to operate, and this practice creates significant privacy issues. Because AI systems need extensive datasets to operate efficiently, their inputs often include personal identification details, financial records, medical histories, and biometric data. Left unmonitored, the use of this information can lead to privacy violations, data breaches, and the loss of personal autonomy. Substantial data protection techniques are needed to safeguard individuals' privacy rights. Anonymization, which removes personal information from datasets, helps prevent data exposure, although complete anonymization remains difficult: researchers have shown that adversaries can link anonymized data with other information to re-identify individuals. Encryption is another crucial preventive measure for protecting sensitive information, keeping data unreadable to unauthorized parties both in transmission and in storage.
Access control is another fundamental safeguard. Organizations need strict rules about who can view and manipulate personal data, so that only essential personnel with appropriate clearance can access it. Audit and monitoring functions should be used to analyze data usage patterns and detect unusual uses of personal information. Technical measures for ethical data handling must also be supported by clear disclosure to users and their free agreement regarding data processing. Every person should be told how AI systems collect and process personal data and what it will be used for, and consent terms should be understandable rather than buried in lengthy, complex terms and conditions. AI developers should also evaluate whether the data collected is genuinely necessary for the system to function, minimizing data collection to reduce potential risks.
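As a rough illustration of two of these safeguards, pseudonymization and encryption at rest, the sketch below uses Python's standard library and the third-party cryptography package. The record fields and key handling are simplified assumptions, not a production design.

```python
# Minimal sketch: pseudonymize a direct identifier and encrypt the record.
# Requires `pip install cryptography`; the data and keys are illustrative.
import hmac, hashlib, json
from cryptography.fernet import Fernet

record = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "asthma"}

# Pseudonymization: replace the name with a keyed hash so records can be
# linked internally without storing the raw identifier.
pseudonym_key = b"replace-with-a-secret-key"
record["patient_id"] = hmac.new(pseudonym_key,
                                record.pop("name").encode(),
                                hashlib.sha256).hexdigest()[:16]

# Encryption at rest: the stored record is unreadable without the key.
fernet_key = Fernet.generate_key()
ciphertext = Fernet(fernet_key).encrypt(json.dumps(record).encode())

print(record["patient_id"])
print(Fernet(fernet_key).decrypt(ciphertext).decode())
```

In a real system the keys would live in a key management service with strict access controls, which is where the third safeguard, access control, comes in.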
Ethical artificial intelligence development treats privacy both as a legal obligation and as a basic human right. Strong data privacy practices build trust between the public and technology systems and allow the benefits of AI to be realized freely and securely.
Promote Transparency
Organizations can create independent oversight tools to monitor the behavior of AI systems and use algorithmic auditing procedures to encourage accountability and transparency in AI. Algorithmic impact evaluations are one type of transparency tool that can improve credibility and accountability.
Diverse Perspective
There are several strategies that can help reduce biases in the development and use of AI systems. One important approach is to ensure that AI teams and systems are inclusive and reflect a diversity of viewpoints and life experiences. Diverse perspectives can assist in identifying and addressing potential biases and ethical concerns early in the development process.[27]
It is particularly important for professionals to prioritize the inclusion of underrepresented populations when collecting and selecting training data. These groups should not simply be grouped into broad or vague categories, as doing so can obscure meaningful differences and reinforce bias. Similarly, data sets that are drawn from a limited geographic scope can introduce biases and reduce the generalizability of AI systems across different populations.[27]
Providing transparent documentation and clearly describing the methodologies used in data collection and model development can also support critical analysis and help limit biases. In addition, AI systems should be tested rigorously for potential biases to ensure that issues are identified and addressed before deployment. Thorough testing also improves the reproducibility of AI models, which is a key factor in detecting and mitigating bias.[27]
In summary, reducing bias in AI systems requires the use of representative data sets, detailed documentation, rigorous testing and validation processes, and the incorporation of diverse perspectives throughout the development lifecycle.
Human Oversight
Even if AI algorithms are capable of automating decision-making procedures, human supervision and responsibility must always be maintained. Particularly in delicate or high-stakes circumstances, humans ought to examine and confirm the algorithm's conclusions.
In the US, states such as Colorado, Texas, and Connecticut have already begun passing legislation to create human oversight mechanisms for AI bias and discrimination, based on risk categories and impact on the public[28]. For example, the Colorado Consumer Protections for Artificial Intelligence Act of 2024 [28] makes developers and deployers responsible for checking for bias and discrimination in high-risk AI systems. In 2023, Texas and Connecticut adopted statutes establishing state-level working groups to assess AI systems for bias and discrimination [28].
As scrutiny of the use of AI grows, explainable AI is entering the landscape. Industry professionals are increasingly turning to explainable AI to add transparency to AI system algorithms, with the intent of identifying bias or discrimination early in the process [29] [30]. Until now, AI systems have largely been black boxes, and bias and discrimination were not easy to detect even by humans; when such systems are trained on historical data, the bias and discrimination are further perpetuated, which is why close oversight is required. If humans cannot identify bias and discrimination themselves, the ability to do so cannot be programmed into a system either; like ethics, it cannot be reduced to a unified set of rules that a computer could simply follow.
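One common explainability technique is permutation importance, which measures how much a model's predictions depend on each input feature. The sketch below builds a deliberately biased toy model on synthetic data (the feature names, coefficients, and dataset are all assumptions for illustration) and shows how the technique can reveal that a sensitive attribute is driving decisions.

```python
# Sketch of an explainability check with scikit-learn's permutation
# importance on a synthetic, deliberately gender-biased hiring model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
gender = rng.integers(0, 2, n)            # sensitive attribute (0/1)
experience = rng.normal(5, 2, n)          # legitimate feature
# Hypothetical biased historical labels: outcome depends partly on gender.
hired = ((0.5 * experience + 2 * gender + rng.normal(0, 1, n)) > 3.5).astype(int)

X = np.column_stack([gender, experience])
model = LogisticRegression().fit(X, hired)

result = permutation_importance(model, X, hired, n_repeats=20, random_state=0)
for name, imp in zip(["gender", "experience"], result.importances_mean):
    print(f"{name}: importance {imp:.3f}")
```

A human reviewer seeing a large importance score on the sensitive attribute has a concrete, early signal that the system needs intervention before deployment.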
In 2022, the Biden administration introduced the Blueprint for an AI Bill of Rights, which laid out five principles to protect the rights of the American public: (1) safe and effective systems, (2) protection against algorithmic discrimination, (3) data privacy, (4) transparency in usage, and (5) an opt-out option [31]. This was followed in 2023 by a joint statement from the CFPB, the Justice Department, the Equal Employment Opportunity Commission, and the FTC declaring that these agencies will protect individuals against discrimination and bias in automated systems [30]. The Biden administration also signed an executive order in 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which empowered agencies to ensure AI is used safely, securely, and equitably and respects privacy, among other directives [32]. Under the Biden administration, the FTC cracked down on biased and fraudulent uses of AI, such as Rite Aid's biased theft-monitoring system, fraudulent claims about robot lawyers, and fake review generation [33].
Continuous Monitoring and Updating
Biases may appear or change over time in AI systems since they work in dynamic contexts. To preserve fairness and reduce newly developing biases, the algorithm's performance must be regularly monitored and updated as appropriate.
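In practice, continuous monitoring can be as simple as recomputing a disparity metric on each new batch of decisions and flagging drift. The sketch below simulates monthly batches with invented numbers; the metric, threshold, and drift pattern are illustrative assumptions.

```python
# Sketch of ongoing fairness monitoring: recompute a disparity metric on
# each new batch of decisions and alert when it drifts past a threshold.
import numpy as np

THRESHOLD = 0.10  # maximum acceptable gap in approval rates (assumed)

def approval_gap(decisions, groups):
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values())

rng = np.random.default_rng(1)
for month in range(1, 7):
    groups = rng.choice(["A", "B"], size=200)
    # Group B's approval probability slowly drifts downward over time.
    p = np.where(groups == "A", 0.60, 0.60 - 0.03 * month)
    decisions = (rng.random(200) < p).astype(int)

    gap = approval_gap(decisions, groups)
    status = "ALERT: review or retrain" if gap > THRESHOLD else "ok"
    print(f"month {month}: approval gap {gap:.2f} -> {status}")
```

The point is not the specific metric but the habit: fairness checks run once at launch will miss biases that only emerge as the system's environment changes.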
Future Directions and Innovations
Advanced Fairness Metrics and Tools
Fairness Metrics
Establishing future metrics in the field of AI ethics, especially regarding algorithmic bias and accuracy, is important to ensure fair and accurate AI systems. Bias in AI comes from systematic errors that lead to negative results. This often results from assumptions made in development phases such as data collection, algorithm design, and model training. For example, a scoring algorithm trained on biased data can favor candidates from certain populations, preserving existing biases in AI systems.
Fairness metrics help ensure that AI models treat everyone fairly, regardless of factors such as age, gender, race, and socioeconomic status, and technology managers must define and use such metrics to develop ethical AI systems. Although the US government does not yet have dedicated legislation of this kind, the legal environment surrounding AI and equity is changing: existing laws, such as the Fair Credit Reporting Act and the Equal Credit Opportunity Act, already affect AI fairness. Around the world, countries are moving forward with AI legislation, with the EU and Canada leading the way in promoting transparency, accountability, and fairness in AI systems.[34]
Fairness Tools
Tools like IBM's AI Fairness 360 (AIF360)[35] provide a framework for detecting and mitigating bias in machine learning models, and a foundation for real-time monitoring solutions. The toolkit was designed as part of IBM's broader effort to bring disciplined processes to the delivery of AI and provides a comprehensive set of algorithms, metrics, and datasets focused on fairness. AIF360 includes over 70 fairness metrics and more than 10 bias mitigation algorithms.
AI Fairness 360 can be used in a variety of industries and fields, such as finance, human resources, healthcare, and criminal justice, where AI decision-making systems can have a significant impact on people's lives. By adopting this set of tools, organizations can make their AI systems fairer and more accountable, reducing the risk of biases that lead to discrimination.
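A minimal sketch of how the toolkit might be used is shown below, assuming the Python API as publicly documented (installed via pip install aif360, plus pandas). The toy data, column names, and group definitions are made-up examples, not a recommended configuration.

```python
# Sketch: measure disparate impact with AIF360, then apply one of its
# pre-processing mitigation algorithms (Reweighing). Data is invented.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0],          # 1 = privileged, 0 = unprivileged
    "score": [0.9, 0.8, 0.4, 0.7, 0.3, 0.2],
    "hired": [1, 1, 0, 1, 0, 0],           # binary label
})
data = BinaryLabelDataset(df=df, label_names=["hired"],
                          protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so favorable-label rates are
# balanced across groups before a model is trained.
reweighted = Reweighing(unprivileged_groups=unpriv,
                        privileged_groups=priv).fit_transform(data)
metric_after = BinaryLabelDatasetMetric(reweighted,
                                        unprivileged_groups=unpriv,
                                        privileged_groups=priv)
print("disparate impact after:", metric_after.disparate_impact())
```

A disparate impact ratio near 1.0 indicates comparable favorable-outcome rates across groups; values well below 1.0 are the kind of disparity the toolkit is designed to surface and mitigate.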
Stakeholder Collaboration and Public Engagement
Stakeholder collaboration and public participation are important steps in addressing algorithmic bias and equity in AI ethics. These efforts demonstrate the importance of collaboration and the power of public action in ensuring that AI systems are developed and deployed ethically and fairly.
Interdisciplinary Collaborations
Collaborative efforts across multiple disciplines are critical to developing accurate and sustainable AI systems. These collaborations involve stakeholders from fields such as technology, social science, ethics, and law, with the objective of developing AI in a comprehensive manner that incorporates ethical considerations at every stage. This interdisciplinary approach helps in understanding the complex nature of AI bias and in developing comprehensive mitigation strategies.[36][37]
Public Participation in AI Ethics
Public participation is essential in governing AI technology. It includes community engagement through communications, open forums, and participatory design processes, which help ensure that the development of AI technologies is consistent with public values and social norms and that AI systems are better understood and more accountable. Public engagement can be promoted through methods such as advisory voting, which brings together diverse groups of people to discuss and provide input on AI policy.[36]
Educational Initiatives and Awareness
To effectively address algorithmic bias and fairness through educational initiatives and public awareness campaigns, several approaches have been explored and implemented across various organizations and educational bodies.
Educational Initiative
The knowledge and attitudes of technologists and policymakers can be shaped by training curricula that focus on the ethics and legitimacy of AI. For example, projects such as the AI + Ethics curriculum developed by the MIT Media Lab[38] for high school students aim to raise awareness of AI technology and its impact on society, including the issue of algorithmic bias.
Public Awareness Campaigns
Organizations like AI for Humanity are using popular culture to raise awareness of the implications of AI for social justice, focusing on the impact of these technologies on Black communities. This includes legislative efforts to ensure transparency and accountability in AI applications.[39]
Collaborative Research and Public Discussions
Initiatives such as the Carnegie Council's Artificial Intelligence & Equality Initiative[40] bring academics together to discuss and address biases in AI, such as gender bias and the inequalities perpetuated by algorithms. These debates not only raise awareness but also encourage research that informs policy and practice.
References
- 1 2 Weizenbaum, Joseph (1976). Computer power and human reason: from judgment to calculation. San Francisco: Freeman. ISBN 978-0-7167-0464-5.
- ↑ "Untold History of AI: Algorithmic Bias Was Born in the 1980s - IEEE Spectrum". spectrum.ieee.org. Retrieved 2024-04-21.
- ↑ Lash, Scott (May 2007). "Power after Hegemony: Cultural Studies in Mutation?". Theory, Culture & Society. 24 (3): 55–78. doi:10.1177/0263276407075956. ISSN 0263-2764.
- ↑ Garcia, Megan (2016-12-01). "Racist in the Machine". World Policy Journal. 33 (4): 111–117. doi:10.1215/07402775-3813015. ISSN 0740-2775.
- ↑ "AI Frameworks". Intel. Retrieved 2024-04-23.
- ↑ "Artificial Intelligence in the States: Emerging Legislation - The Council of State Governments". 2023-12-06. Retrieved 2024-04-23.
- ↑ "Blueprint for an AI Bill of Rights | OSTP". The White House. Retrieved 2024-04-23.
- ↑ "Predictions Put Into Practice: a Quasi-experimental Evaluation of Chicago's Predictive Policing Pilot | National Institute of Justice". nij.ojp.gov. Retrieved 2024-04-22.
- ↑ Larson, Jeff; Angwin, Julia; Kirchner, Lauren; Mattu, Surya. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2024-04-22.
- ↑ Larson, Jeff; Angwin, Julia; Kirchner, Lauren; Mattu, Surya. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2025-04-25.
- ↑ Counts, Laura (2018-11-13). "Minority homebuyers face widespread statistical lending discrimination, study finds | Berkeley Haas". Haas News | Berkeley Haas. Retrieved 2025-04-25.
- ↑ "Facebook Settles Civil Rights Cases by Making Sweeping Changes to Its Online Ad Platform | ACLU". American Civil Liberties Union. 2019-03-19. Retrieved 2024-04-22.
- ↑ "Predictive policing is still racist—whatever data it uses". MIT Technology Review. Retrieved 2025-04-25.
- ↑ Haroon, Muhammad; Wojcieszak, Magdalena; Chhabra, Anshuman; Liu, Xin; Mohapatra, Prasant; Shafiq, Zubair (2023-12-12). "Auditing YouTube's recommendation system for ideologically congenial, extreme, and problematic recommendations". Proceedings of the National Academy of Sciences. 120 (50). doi:10.1073/pnas.2213020120. ISSN 0027-8424. PMC 10723127. PMID 38051772.
- ↑ Cross, James L.; Choma, Michael A.; Onofrey, John A. (November 2024). "Bias in medical AI: Implications for clinical decision-making". PLOS Digital Health. 3 (11): e0000651. doi:10.1371/journal.pdig.0000651. ISSN 2767-3170. PMC 11542778. PMID 39509461.
- ↑ London, A. J. (2019). "Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability". Hastings Center Report. 49 (1): 15–21.
- ↑ Olsson, Alex (2024-02-09). "Racial bias in AI: unpacking the consequences in criminal justice systems". IRIS Sustainable Dev. Retrieved 2024-04-22.
- ↑ Ferrara, Emilio (2023-12-26). "Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies". Sci. 6 (1): 3. doi:10.3390/sci6010003. ISSN 2413-4155.
- ↑ Nathim, K.W.; Hameed, N.A.; Salih, S.A.; Taher, N.A.; Salman, H.M.; Chronomordenko, D. (2024). "Ethical AI with Balancing Bias Mitigation and Fairness in Machine Learning Models". Proceedings of the XXth Conference of Open Innovations Association FRUCT. pp. 797–807. https://doi.org/10.23919/FRUCT64283.2024.10749873.
- ↑ Osoba, Osonde A. (March 25, 2019). "Did No One Audit the Apple Card Algorithm?". RAND: pp. 2. https://www.rand.org/pubs/commentary/2019/11/did-no-one-audit-the-apple-card-algorithm.html.
- ↑ hlr (2025-04-10). "Resetting Antidiscrimination Law in the Age of AI". Harvard Law Review. Retrieved 2025-04-29.
- ↑ Xenidis, Raphaële; Senden, Linda (2020). EU non-discrimination law in the era of artificial intelligence : mapping the challenges of algorithmic discrimination. Kluwer Law International. ISBN 978-94-035-1165-8.
- ↑ Min, Alfonso (2023-10-05). "Artificial Intelligence and Bias: Challenges, Implications, and Remedies". Journal of Social Research. 2 (11): 3808–3817. doi:10.55324/josr.v2i11.1477. ISSN 2828-335X.
- ↑ "Clearview AI gets third €20 million fine for illegal data collection". BleepingComputer. Retrieved 2024-04-22.
- 1 2 Mulligan, S. J. (2024). There are more than 120 AI bills in Congress right now. MIT Technology Review, 2024(9). https://www.govtech.com/policy/more-than-120-ai-bills-currently-processing-in-congress
- ↑ High-level summary of the AI Act | EU Artificial Intelligence Act. (n.d.). Retrieved March 3, 2025, from https://artificialintelligenceact.eu/high-level-summary/
- 1 2 3 Gichoya, Judy Wawira; Thomas, Kaesha; Celi, Leo Anthony; Safdar, Nabile; Banerjee, Imon; Banja, John D; Seyyed-Kalantari, Laleh; Trivedi, Hari; Purkayastha, Saptarshi (2023-10-01). "AI pitfalls and what not to do: mitigating bias in AI". British Journal of Radiology. 96 (1150): 20230023. doi:10.1259/bjr.20230023. ISSN 0007-1285.
- 1 2 3 Anderson, H., Comstock, E., & Hanson, E. (2025, March 31). AI Watch: Global regulatory tracker - United States. White & Case LLP. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
- ↑ Gade, K., Geyik, S. C., Kenthapadi, K., Mithal, V., & Taly, A. (2019). Explainable AI in Industry. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 3203–3204. https://doi.org/10.1145/3292500.3332281
- 1 2 Khan, L. M., Chopra, R., Kristen Clarke, & Charlotte A. Burrows. (2023, April 25). Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. Federal Trade Commission. https://www.ftc.gov/legal-library/browse/cases-proceedings/public-statements/joint-statement-enforcement-efforts-against-discrimination-bias-automated-systems
- ↑ Blueprint for an AI Bill of Rights | OSTP. (n.d.). The White House. Retrieved March 3, 2025, from https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/
- ↑ Penn State. (2023, October 31). Biden Administration Executive Order on AI. Official Site of the Penn State AI Hub. https://ai.psu.edu/penn-state-professor-shyam-sundar-on-the-biden-administration-executive-order-on-ai/
- ↑ Federal Trade Commission. (2024, September 25). FTC Announces Crackdown on Deceptive AI Claims and Schemes. Federal Trade Commission. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
- ↑ Councils, Forbes. "AI & Fairness Metrics: Understanding & Eliminating Bias". councils.forbes.com. Retrieved 2024-04-22.
- ↑ Trusted-AI/AIF360, Trusted-AI, 2024-04-21, retrieved 2024-04-22
- 1 2 "Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms". Brookings. Retrieved 2024-04-22.
- ↑ "Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir"". ar5iv. Retrieved 2024-04-22.
- ↑ "Project Overview ‹ AI Audit: AI Ethics Literacy". MIT Media Lab. Retrieved 2024-04-22.
- ↑ Gupta, Damini; Krishnan, T. S. (2020-11-17). "Algorithmic Bias: Why Bother?". California Management Review Insights.
- ↑ "Artificial Intelligence & Equality Initiative | AI Ethics | Carnegie Council". www.carnegiecouncil.org. Retrieved 2024-04-22.