US policymakers are taking steps to address the potential harm caused by algorithmic decision-making on online platforms. In an effort to promote transparency and prevent discrimination, Sen. Edward Markey and Rep. Doris Matsui have reintroduced the Algorithmic Justice and Online Platform Transparency Act. The bill would ban discriminatory or “harmful” automated decision-making, establish safety standards, require platforms to disclose information about their algorithms and publish annual reports on their content moderation practices, and create a governmental task force to investigate discriminatory algorithmic processes.
The bill applies to a wide range of online platforms, including social media sites, content aggregation services, and media and file-sharing sites. This means that platforms that provide a community forum for user-generated content will be subject to these regulations. By targeting various types of online platforms, policymakers hope to address the potential for discrimination and harmful decision-making across the digital landscape.
This is not the first time Markey and Matsui have introduced this bill. In 2021, a previous version of the bill made its way to the Subcommittee on Consumer Protection and Commerce but ultimately died in committee. However, with renewed attention on algorithmic transparency and accountability, there is hope that this new iteration of the bill will gain more traction.
One of the challenges in regulating algorithms is the lack of transparency surrounding their development and decision-making processes. Algorithmic systems, including social media recommendation algorithms and machine learning systems, often operate as “black boxes,” making it difficult for users and regulators to understand how decisions are being made and whether they are fair or biased. This opacity is often justified by concerns around intellectual property or the complexity of the system.
However, policymakers and regulators argue that this lack of transparency could enable biased decision-making with far-reaching consequences. For example, insurance companies already use algorithms to determine coverage for patients, and there have been instances where these algorithms have been found to be discriminatory. In 2021, the FTC signaled its intention to take legal action against biased algorithms, highlighting the need for greater oversight and regulation.
Sen. Markey emphasizes the importance of holding Big Tech accountable for the discriminatory impact of its algorithms: “Congress must hold Big Tech accountable for its black-box algorithms that perpetuate discrimination, inequality, and racism in our society – all to make a quick buck.” This sentiment reflects the concern that, if not properly regulated, algorithms can entrench existing inequalities and discrimination.
The issue of algorithmic transparency and accountability is not unique to the United States. The European Union is also grappling with these challenges and has proposed its own AI Act, which emphasizes the importance of transparency and accountability in algorithmic systems. The EU’s proposed regulations are currently in the final stages of negotiation and could serve as a model for other countries seeking to address the issues surrounding algorithmic decision-making.
In conclusion, US policymakers are taking steps to address the potential harm caused by algorithms on online platforms. The Algorithmic Justice and Online Platform Transparency Act aims to promote transparency, prevent discrimination, and establish safety standards. By requiring platforms to disclose information about their algorithms and publish annual reports on their content moderation practices, policymakers hope to shed light on how these systems make decisions and to hold online platforms accountable for any discriminatory practices. With algorithmic systems playing an increasingly influential role in our lives, it is crucial to ensure that they are fair, transparent, and unbiased.