What must we do to prevent algorithmic bias?



The availability of large data sets has made it simple to gain new insights using computers. As a result, algorithms have become more sophisticated and pervasive tools for automated decision-making. An algorithm is a set of step-by-step instructions that computers follow to perform a task.

The benefits of artificial intelligence appear obvious: human decision-makers are far from perfect, and algorithms hold great promise for improving decision quality. Algorithms harness massive amounts of macro- and micro-data to influence human decisions in a variety of tasks, ranging from making film recommendations to helping banks determine an individual’s credit rating.

Machine learning algorithms depend on training data: data sets that specify the correct outputs for certain people or objects. An algorithm then learns a model from that training data and applies it to other people or objects to predict what the correct outputs should be.
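As a minimal sketch of that loop, the toy example below (using scikit-learn and invented numbers, both assumptions for illustration) fits a model on labeled training examples and then predicts an output for an unseen case.

```python
# Minimal supervised-learning sketch with toy, made-up data.
from sklearn.linear_model import LogisticRegression

# Training data: each row describes a person with two hypothetical
# features (e.g., years of experience, count of relevant skills),
# and each label records what the "correct" output was for them.
X_train = [[1, 2], [2, 1], [6, 5], [7, 6]]
y_train = [0, 0, 1, 1]  # 0 = rejected, 1 = hired

# The algorithm learns a model from the training data...
model = LogisticRegression().fit(X_train, y_train)

# ...and applies it to a new person to predict the "correct" output.
print(model.predict([[5, 5]]))
```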

Many studies, however, have revealed troubling examples of algorithmic decision-making that falls short of our expectations, because machines can treat equally qualified people and objects differently. Empirical studies of algorithmic bias show that some algorithms risk replicating and even amplifying human biases, particularly those affecting vulnerable groups.

Algorithmic bias can result from unfair or inadequate training data, or from relying on inaccurate information that reflects historical inequalities. Biased algorithms, if left unchecked, can result in decisions that have a collective, disparate impact on certain groups of people, even if the programmer has no intention of discriminating.

It is up to legislators to ensure that the algorithms that assist in making complex decisions do so in a just and equitable manner. Regulatory authorities must define bias practically, in terms of its real-world consequences. They must also provide guidelines to industry and set goals for objective, hard-hitting investigations into biased algorithms. To prevent bias, regulators must impose strict internal accountability structures, documentation protocols, and other preventative measures.

How are algorithmic biases generated?

Machine learning algorithms, in a nutshell, are computer programs that can learn from data. They gather information from the data presented to them and apply it to improve their performance on a given task.

The algorithm considers all of the variables it was exposed to during training and determines the best combination of these variables to solve a problem. The machine ‘learns’ this unique combination of variables through trial and error. Machine learning can be classified into several types based on the type of training it receives.
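The “trial and error” here is usually an optimization loop. Below is a toy, pure-Python sketch (assuming a simple linear model and invented numbers) in which the weights on two variables are repeatedly nudged to shrink the prediction error.

```python
# Toy illustration: "learning" a combination of two variables by
# trial and error, nudging weights to reduce the prediction error.
data = [((1.0, 2.0), 0.0), ((2.0, 1.0), 0.0),
        ((6.0, 5.0), 1.0), ((7.0, 6.0), 1.0)]
w1, w2, lr = 0.0, 0.0, 0.01

for _ in range(1000):                 # repeated trials
    for (x1, x2), target in data:
        guess = w1 * x1 + w2 * x2     # current combination of variables
        error = guess - target        # how wrong this trial was
        w1 -= lr * error * x1         # adjust the combination slightly
        w2 -= lr * error * x2

print(w1, w2)  # the learned weighting of the two variables
```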

The following examples demonstrate a variety of causes and effects that either inadvertently treat groups differently or deliberately generate a disparate impact on them. One well-known example of algorithmic bias resulting from skewed training data is Amazon’s recruiting algorithm.

Amazon, the world’s largest online retailer, decided to discontinue the use of a recruiting algorithm after discovering gender bias. Engineers derived the data used to create the algorithm from resumes submitted to Amazon over a 10-year period, which were mostly from white males.

The algorithm was trained to recognize word patterns in resumes rather than relevant skill sets, and this data was compared to the company’s predominantly male engineering department to determine an applicant’s fit. As a result, the AI software penalized any resume that included the word “women’s” in the text and downgraded resumes from women who attended women’s colleges, resulting in gender bias.
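To make the mechanism concrete, here is a hypothetical toy sketch (invented resumes and labels, scikit-learn assumed; not Amazon’s actual system) showing how a text model trained on historically biased hiring outcomes can learn a negative weight for a proxy token such as “women’s”.

```python
# Hypothetical toy example (not Amazon's actual system): a text model
# trained on biased hiring labels learns to penalize a proxy token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "software engineer java python",       # hired in the training data
    "backend engineer python aws",         # hired
    "captain women's chess club python",   # rejected in the biased data
    "women's college software python",     # rejected
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weight for the proxy token. CountVectorizer's
# default tokenizer splits "women's" into the token "women".
idx = vec.vocabulary_["women"]
print(model.coef_[0][idx])  # negative: the token is penalized
```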

Another example is COMPAS, a risk assessment algorithm used by judges in many U.S. states to assist in sentencing and detention decisions. It was challenged for assigning Black defendants higher risk scores for violent re-offending than white defendants.

If African-Americans are more likely to be arrested and incarcerated in the United States due to historical racism, disparities in policing practices, or other inequalities within the criminal justice system, those realities will be reflected in COMPAS’s training data and carried into its recommendations about whether a defendant should be detained. If the model absorbs those historical biases, it will make the same mistakes that people do.

In 2016, Microsoft’s one-day experiment with a Twitter chatbot that learned from other users’ tweets backfired when the bot sent out 95,000 tweets in 16 hours, the majority of them filled with misogyny, racism, and anti-Semitism.

The machine learning algorithm absorbed existing racial and gender biases in its training data while learning word associations. If such learned associations were used as part of a search engine ranking algorithm, or to generate word suggestions in an auto-complete tool, the cumulative effect could reinforce racial and gender biases.
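A simple way to see how learned associations carry bias is to compare similarities between word vectors. The sketch below uses invented three-dimensional vectors purely for illustration; real systems would use embeddings trained on large text corpora.

```python
# Illustrative sketch: probing learned word associations with cosine
# similarity, using made-up vectors in place of real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings that absorbed a gendered association.
vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "he":       [0.8, 0.2, 0.1],
    "she":      [0.1, 0.9, 0.2],
}

# If "engineer" sits closer to "he" than to "she", any ranking or
# auto-complete built on these vectors inherits that skew.
print(cosine(vectors["engineer"], vectors["he"]))   # higher
print(cosine(vectors["engineer"], vectors["she"]))  # lower
```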

These examples show how disastrous outcomes can occur, even when the algorithm’s creators or operators have no malicious intent. Recognizing the possibility and causes of bias is the first step in any mitigation strategy.

Best practices and policies to eliminate algorithmic bias

To hold AI fully accountable, regulators should first define what information an ideal algorithm would provide. Algorithms supply critical information to decision-makers; regulation must therefore ensure that this information is accurate, both overall and for specific groups.

Regulators must be prepared to enforce the standards and ensure that the industry adheres to the goals that have been established. There are a few key tools that regulators can use to investigate suspected bias cases.
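One such tool is a disparate-impact audit. The sketch below uses hypothetical decision records, and the 0.8 threshold echoes the informal “four-fifths rule” from U.S. employment contexts; it simply compares selection rates across groups.

```python
# One concrete audit tool (a common check, illustrated with invented
# data): the disparate-impact ratio -- the selection rate of one group
# divided by the selection rate of the most-favored group.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def selection_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

ratio = selection_rate("B") / selection_rate("A")
print(f"disparate-impact ratio: {ratio:.2f}")  # < 0.8 often flags review
```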

Policymakers must also provide guidance on how to implement preventive structures and practices within an organization. For example, it is critical that responsibility for algorithmic bias rests with a single, clearly identified person. This person, who has broad authority over strategic decisions, can provide executive support to the team while owning bias mitigation.

Furthermore, organizations need clear documentation in addition to a clear accountability structure. Algorithms operate throughout an organization and inform decisions that affect thousands, if not millions, of people.

It is no longer acceptable for a company to be unaware of algorithmic bias as a potential issue. Failing to do the necessary work to identify, mitigate, and prevent it amounts to negligence.

Weeding out biases requires a near-complete understanding of the preconceptions that may have contaminated the data. However, depending on the goal of a machine learning model, identifying the preconceptions that cause bias can range from easy to very difficult.

Social media platforms are designed to capitalize on humans’ confirmation bias. 

Although most people continue to describe themselves as having balanced opinions, we still naturally gravitate toward certain content online. Over time, algorithms polarize the environment, allowing only the loudest voices and most extreme opinions on either side to break through the noise.

The natural human tendency to seek, interpret, and remember new information in accordance with preexisting beliefs is known as confirmation bias. Humans discover all kinds of information simply by living—through focused research, general experience, and wild hunches—and it feels especially good to our brains when what we learn matches what we already expected.

Confirmation bias is an innate, universal trait that occurs across cultures. It is a part of all of us, but once we recognize its presence, we can take steps to reduce its influence on our thinking. Humans invented the scientific method, legal system, and judicial process to avoid our tendency to jump to conclusions.

Google and Facebook, for example, use algorithms to determine what information to deliver to you. The term “filter bubble,” coined by internet activist Eli Pariser, captures the idea that while automated personalization can be beneficial in some ways, it can also isolate you from other information. The filter bubble created by your online activity, also known as an “echo chamber,” can limit your exposure to different points of view and weaken your ability to spot fake news and bias.

Though the algorithms we use have the potential to become tools for addressing even deeper-seated societal biases, they have not delivered on this promise; in fact, the opposite has been true.

As an onslaught of news and research reports has demonstrated, algorithms are still prone to bias and frequently exacerbate it. While some algorithmic biases are the result of human error, such as insufficient data or flawed statistical techniques, many reflect societal biases at large.

How to Overcome Confirmation Bias

While snap judgments and strong first impressions can undermine your decision-making, there are several simple ways to overcome confirmation bias.

It’s good intellectual practice to keep an open mind and not take your opinions too seriously, even when they’re supported by a lot of data. The best decision-makers are those who can take in new information and admit when their earlier view was incorrect.

Allow yourself to be incorrect. To get closer to objective truths, you must be willing to admit when you are wrong, especially in the face of new data. If you can never concede a point, you will struggle to make new discoveries. You can also limit bias by staying conscious of your belief systems, whether they are religious beliefs, political ideologies, cultural worldviews, or something else. Allow yourself to be wrong, and stay open to evidence that disconfirms what you expect.

We are usually more aware of our assumptions than of our biases, but assumptions, like biases, frequently prevent us from thinking clearly. It is risky to assume your assumptions are correct, so always test your hypotheses. Do this by actively seeking evidence that contradicts your theories, not only new evidence that supports the position you already hold.

Political and religious tenets are frequently repeated for emphasis, intensity, and effect. This tactic is a type of brainwashing in which you begin to believe something is true simply because you’ve heard it so many times; it exploits one of several flaws in human cognition. Pay attention to repetition, and be skeptical of what powerful people tell you over and over.

A practical way to resist YouTube’s recommendation algorithm (and avoid being nudged toward extremist conspiracy content) is simply not to interact with the platform beyond watching the video you came to watch.

Avoiding Echo Chambers

Much has been written about how social media can create filter bubbles and echo chambers, which can lead to extremism.

Unfortunately, because social media algorithms are designed to maximize engagement, extremism is baked into the system: the more clicks and likes content generates, the more money the platform makes. All mainstream media, whether general news, business, or sports, face the same pressures as they compete for the same dwindling share of advertising revenue. Unsurprisingly, this keeps us trapped in distorted bubbles, resulting in societal polarization.

Echo chambers can occur anywhere information is exchanged, but the Internet has multiplied them and made them easier to fall into. To avoid echo chambers, make a habit of checking multiple news sources to ensure you’re getting complete, objective information, and seek out people whose viewpoints oppose your own. Reading books about experiences completely different from your own can also help you escape the echo chamber in which you are trapped.

Getting information from a variety of sources and perspectives can help us grow. It allows for greater objectivity, and we become more innovative as a result.
