By Andromeda, Content Writer
Headline Diplomat eMagazine, LUDCI.eu
Background:
In an effort to address growing concerns about misinformation and controversial content in the podcasting industry, several companies are introducing AI-powered censorship tools. Sounder, NewsGuard, and Barometer are among the companies that have launched brand safety analysis tools, which use artificial intelligence algorithms to identify and flag potentially high-risk speech within podcasts. According to the companies behind them, the main objective of these tools is to ensure brand safety by determining the suitability of podcasts for advertising.
The rise of AI-powered censorship tools for podcasts
In a concerning turn of events, the podcasting industry is witnessing the rapid rise of AI-powered censorship tools, posing a significant threat to the freedom of speech and expression.
Companies such as Sounder, NewsGuard, and Barometer have unleashed brand safety analysis tools that employ advanced artificial intelligence algorithms to detect and flag what they deem as “high-risk speech” within podcasts. The primary objective behind these tools is to determine the suitability of podcasts for advertising, effectively controlling the narrative and suppressing dissenting voices.
One notable tool is the “Podcast Credibility Rating” introduced by NewsGuard, a company known for countering misinformation. This tool assigns a trust score from 0 to 10 to podcasts, enabling advertisers to avoid advertising on podcasts that regularly convey false information or exhibit political bias.
“Podcast streaming platforms can also use these ratings to moderate content on their platforms and promote highly trustworthy news and information podcasts in user searches and curated sections,” NewsGuard explained.
“NewsGuard’s global team of misinformation experts will have rated the top 200 news podcasts on the largest streaming platforms by January 2024,” the company announced.
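To make the mechanics concrete, the sketch below shows how an advertiser might programmatically screen podcasts against such a 0-to-10 trust score. It is a hypothetical illustration only: the shows, scores, and cutoff are invented and do not reflect NewsGuard's actual data, criteria, or any API.

```python
# Hypothetical illustration of score-based ad screening.
# Podcast names and scores are invented; the 0-10 scale mirrors
# NewsGuard's published description of its rating range.
podcast_scores = {
    "Daily News Roundup": 8.5,
    "Fringe Frequency": 2.0,
    "Tech Talk Weekly": 9.0,
    "Rumor Mill Radio": 3.5,
}

MIN_TRUST_SCORE = 7.0  # arbitrary advertiser-chosen cutoff

eligible = [name for name, score in podcast_scores.items()
            if score >= MIN_TRUST_SCORE]
print(eligible)  # ['Daily News Roundup', 'Tech Talk Weekly']
```

Even in this toy form, the critics' worry is visible: any show below the cutoff simply disappears from the ad market, whatever the reason for its score.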
However, the emergence of AI-powered tools has sparked discussions about potential censorship and limitations on free speech in the podcasting landscape. Some worry that podcasters discussing controversial topics may find it increasingly difficult to monetize their content. This echoes the oppressive environment already experienced by content creators on platforms like YouTube, where arbitrary policies and algorithmic biases stifle diverse perspectives.
Tackling misinformation: The need for accuracy
Addressing the proliferation of misinformation on podcasts has become an urgent necessity in recent years. Several prominent cases have highlighted the potential for false information to be spread through podcasting platforms, reaching millions of listeners.
One of the most well-known cases of misinformation on a podcast occurred in 2018, when Joe Rogan interviewed Alex Jones, a conspiracy theorist who has made a number of false claims about a variety of topics, including the Sandy Hook Elementary School shooting. Jones’s claims were widely debunked, but they were given a platform on Rogan’s podcast, which has millions of listeners.
In January 2022, Dr. Robert Malone, a controversial figure, made false claims about COVID-19 vaccines on an episode of “The Joe Rogan Experience.” Among them: that the vaccines are experimental and ineffective; that “natural immunity is superior to the vaccine-induced immunity”; that the vaccine-induced spike protein is dangerous and causes serious side effects; and that “Omicron is a mild variant. It is absolutely able to escape prior vaccination.”
“There is a huge market for misinformation,” said Jay Van Bavel, an assistant professor of psychology and neural science who has studied conspiracy theories and misinformation, in response to Malone’s comments in the Washington Post. “The way he’s framed in the conspiracy-theory world is that he’s a courageous whistleblower rather than someone who is spreading misinformation — and it’s only enhancing his profile.”
“I don’t feel what he’s doing and saying is in the right context or necessarily very helpful,” a former colleague of Malone’s also said. “Everyone is entitled to their opinion, but there’s a risk we’re all facing when he’s not accurately representing the information.”
These cases, among others, underscore the pressing need to combat misinformation within the podcasting realm. While it is essential to recognize that many podcasts uphold high journalistic standards and provide reliable information, the potential for misinformation dissemination requires heightened scrutiny.
Concerns surrounding AI-powered censorship
AI-powered content moderation tools have been hailed as a necessary step in combating hate speech, misinformation, and other forms of harmful content across various online platforms, including social media and video-sharing sites. These algorithms use machine learning to analyze and identify potentially objectionable material, leading to swift removal or flagging of offending content.
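As a rough illustration of how such flagging works under the hood, here is a minimal machine-learning text classifier in Python. This is a toy sketch, not any vendor's system: the training snippets, labels, and risk threshold are all invented, and real brand safety tools operate on full transcripts at far larger scale.

```python
# Minimal sketch of an ML-based "brand safety" flagger (illustrative only).
# The training snippets and labels are invented toy data, not any vendor's
# real training set or policy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "the vaccine is a hoax designed to control you",   # flagged in toy data
    "miracle cure the government is hiding from you",  # flagged in toy data
    "today we review the latest smartphone releases",  # safe in toy data
    "interview with a chef about regional cooking",    # safe in toy data
]
train_labels = [1, 1, 0, 0]  # 1 = "high-risk", 0 = "brand safe"

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new transcript segment; flag it if the predicted risk
# probability crosses an (arbitrary) threshold.
segment = "secret cure the government is hiding"
risk = model.predict_proba([segment])[0][1]
if risk > 0.5:
    print(f"flagged for review (risk={risk:.2f})")
else:
    print(f"cleared (risk={risk:.2f})")
```

Note that the model's decision is only as good as its training data and threshold, which is precisely where the objectivity concerns below come in.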
However, as AI-driven moderation becomes more prevalent, concerns have been raised about its potential impact on freedom of speech, particularly its objectivity.
“For podcasters, the proliferation of these brand safety tools is likely to create a YouTube-like environment where those who talk about topics that are deemed unsafe will find it increasingly difficult to monetize their podcasts, and because podcasting is no longer as open as it used to be, these scores are also likely to be used for more direct censorship in the future,” a video report from Reclaim The Net states, as quoted by Life Site.
NewsGuard has also faced criticism for labeling mainstream outlets as trustworthy despite instances of inaccuracies, while alternative media outlets challenging mainstream narratives have received warning labels.
Other critics argue that relying solely on algorithms to determine what constitutes acceptable content may result in biased or inaccurate decisions, leading to censorship of legitimate viewpoints.
“When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory,” writes Catherine Stinson of the Philosophy Department and School of Computing at Queen’s University in Kingston, Ontario, Canada, in a 2022 paper titled “Algorithms are not neutral: Bias in collaborative filtering.” “Uncorrected statistical bias has negative effects on the performance of algorithms, which is bad for users, as well as media producers and advertisers who stand to gain from accurate recommendations. The negative effects are worse for some users than others, and the implications go well beyond occasionally having to scroll past unwanted recommendations.”
A primary concern centers around the lack of transparency and accountability in AI-powered content moderation. The intricate workings of these algorithms are often closely guarded secrets, making it difficult for users and content creators to understand the criteria by which their content is flagged or removed. Without clear guidelines or an appeals process, there is a risk of inadvertently stifling legitimate discourse and dissenting opinions.
Another worry is the potential for AI algorithms to target marginalized voices disproportionately. Studies have highlighted the biases ingrained within AI systems, which can lead to the unfair targeting and suppression of content from certain communities. This raises concerns about the potential silencing of marginalized groups, as their voices may be disproportionately censored or overlooked.
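One way researchers make such bias claims measurable is to compare error rates across creator groups, for instance the rate at which non-violating content is wrongly flagged. Here is a minimal sketch in Python; the groups and all numbers are invented for illustration.

```python
# Toy bias audit: compare false-positive flag rates across two hypothetical
# creator groups. All records are invented.
# Each record: (group, was_flagged, actually_violates_policy)
moderation_log = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", True,  True),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
]

def false_positive_rate(records):
    """Share of non-violating items that were nonetheless flagged."""
    benign = [r for r in records if not r[2]]
    flagged_benign = [r for r in benign if r[1]]
    return len(flagged_benign) / len(benign)

for group in ("group_a", "group_b"):
    rows = [r for r in moderation_log if r[0] == group]
    print(group, f"FPR = {false_positive_rate(rows):.2f}")
# Unequal false-positive rates across groups (here 0.33 vs 0.67) is one
# signal of the disparate impact researchers warn about.
```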
“There is ample evidence of the discriminatory harm that AI tools can cause to already marginalized groups. After all, AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination — from the criminal legal system, to housing, to the workplace, to our financial systems,” writes Olga Akselrod, a senior staff attorney in the Racial Justice Program at the American Civil Liberties Union (ACLU), in a commentary on the issue.
According to her: “Bias is often baked into the outcomes the AI is asked to predict. Likewise, bias is in the data used to train the AI — data that is often discriminatory or unrepresentative for people of color, women, or other marginalized groups — and can rear its head throughout the AI’s design, development, implementation, and use.”
Moreover, the fast-paced nature of AI moderation can also result in errors and false positives. Content that is not actually harmful may be mistakenly flagged and removed, creating a chilling effect on free expression. This raises questions about whether the benefits of AI-driven content moderation outweigh the potential drawbacks and unintended consequences.
Ensuring freedom of speech in podcasting
As concerns mount over the stifling of freedom of speech, there are calls for increased transparency and accountability in implementing AI-powered content moderation.
Also, critics are discussing the need for robust content moderation in the podcasting ecosystem to address the spread of hate speech, misinformation, and harmful content. For instance, Chris Meserole and Valerie Wirtschafter of the Brookings Institution’s Artificial Intelligence and Emerging Technologies Initiative highlight in a Brookings report that while platforms like Facebook and Twitter have developed content moderation policies, podcast apps have lagged behind in implementing similar measures. The report offers several policy recommendations for ensuring freedom of speech while also mitigating societal harms:
- Balancing moderation with censorship: The challenge lies in determining how to handle hate speech, misinformation, and related content that is legal but can have harmful effects. Excessive restrictions risk limiting freedom of expression, while allowing their mass distribution can lead to societal harm.
- Clear guidelines and policies for podcast apps: Podcasting apps should develop more nuanced and transparent policies for the content users can download and play. Guidelines should go beyond blocking illegal content and include managing hate speech, misinformation, and content related to elections and COVID-19.
- User reporting mechanisms: Podcast apps should establish clear and easy-to-use mechanisms for reporting inappropriate content. This can help identify harmful content since podcast apps often rely on user reporting rather than sophisticated algorithms for content moderation.
- Voting and commenting systems: Some podcasting apps may consider implementing voting and commenting systems to leverage user feedback for content moderation at scale. This approach can help ensure quality content is prominently featured while guarding against attempts to manipulate the system.
- Regulation and transparency: Regulators and lawmakers can play a role in shaping policies in the podcast ecosystem. They should push for greater transparency from podcasting apps regarding content guidelines and procedures, moderation practices and appeals processes, recommendation algorithms, and financial disclosures.
The report emphasizes that a mature content moderation framework is necessary as podcasts become a mass medium. It acknowledges the evolving business models and architectures in the podcasting space and calls for a flexible approach that balances responsible content moderation with freedom of speech.
Striking a balance: Promoting responsible podcasting
Ensuring freedom of speech in podcasting is a crucial aspect of maintaining a healthy and vibrant podcast ecosystem. While it is essential to address the spread of hate speech, misinformation, and harmful content, it is equally vital to uphold the principles of free expression and democratic discourse. Striking the right balance requires thoughtful policies and measures that promote responsible content moderation without unduly restricting freedom of speech.
One of the challenges in content moderation for podcasts lies in differentiating between lawful but harmful content and blatantly illegal content. While major podcasting apps have established procedures to address illegal content, such as terrorist recruitment podcasts, the handling of hate speech, misinformation, and related content that is legal but can have societal harm is less clear. The large-scale distribution of such content through popular podcasting apps has been associated with negative consequences, including the dissemination of the “Big Lie” leading up to the January 6th Capitol assault and the spread of COVID-19 vaccine misinformation.
To ensure freedom of speech while mitigating societal harms, podcasting apps need to develop more nuanced and transparent content moderation policies. These policies should go beyond simply blocking illegal content and address the challenges posed by hate speech, misinformation, and content related to elections and public health. Clear guidelines should be established to help app users understand the boundaries of acceptable content.
Additionally, podcast apps should enhance user reporting mechanisms. While major social media platforms rely on algorithms and user reporting to identify harmful content, podcast apps often lack sophisticated systems for content moderation. Implementing user-friendly reporting features can empower listeners to report inappropriate content, contributing to a collective effort to identify and address harmful material.
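A reporting mechanism need not be elaborate. The sketch below outlines one hypothetical design in Python: reports accumulate per episode and reason, and crossing a threshold routes the episode to human review rather than automatic removal. The fields and threshold are illustrative assumptions, not any app's actual system.

```python
# Minimal sketch of a user-report pipeline for a podcast app. The fields,
# threshold, and escalation rule are hypothetical design choices.
from dataclasses import dataclass, field
from collections import Counter

REVIEW_THRESHOLD = 5  # arbitrary: escalate after this many reports

@dataclass
class ReportQueue:
    counts: Counter = field(default_factory=Counter)

    def report(self, episode_id: str, reason: str) -> bool:
        """Record a listener report; return True once the episode
        should be escalated to a human moderator."""
        self.counts[(episode_id, reason)] += 1
        return self.counts[(episode_id, reason)] >= REVIEW_THRESHOLD

queue = ReportQueue()
for _ in range(5):
    escalate = queue.report("ep-123", "health misinformation")
print(escalate)  # True -> route to human review, not automatic removal
```

Escalating to a human reviewer, rather than auto-removing on report volume, is one way such a design can avoid handing censorship power to coordinated reporting campaigns.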
Moreover, voting and commenting systems could be explored as a means to leverage the wisdom of the crowd. Such systems, already employed in platforms like Reddit and Stack Overflow, allow users to upvote/downvote content and leave comments. By incorporating user feedback, podcasting apps can better moderate content at scale, ensuring that quality content is prominently featured while minimizing the influence of malicious actors.
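One standard defense against vote manipulation, popularized by Reddit's "best" comment sort, is to rank items by the lower bound of a confidence interval on the upvote fraction rather than by raw vote counts, so a small burst of coordinated votes carries little weight. A short sketch follows; the vote counts are invented.

```python
# Rank episodes by the Wilson score lower bound on the upvote fraction,
# so a few coordinated votes cannot catapult an item to the top.
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the 95% confidence interval for the true upvote rate."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    return (p + z*z/(2*n) - z*math.sqrt((p*(1-p) + z*z/(4*n))/n)) / (1 + z*z/n)

episodes = {"ep-1": (600, 200), "ep-2": (6, 0)}  # (upvotes, downvotes)
for ep, (up, down) in sorted(episodes.items(),
                             key=lambda kv: -wilson_lower_bound(*kv[1])):
    print(ep, round(wilson_lower_bound(up, down), 3))
# ep-1 ranks above ep-2: 6 unanimous votes are weaker evidence of quality
# than 600 upvotes against 200 downvotes.
```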
Regulation and transparency play crucial roles in shaping the podcasting ecosystem. Regulators should require podcasting apps to clearly disclose their content moderation policies, moderation practices, and appeals processes. Transparency should extend to recommendation algorithms, as users often discover new podcasts through these algorithms. While protecting user privacy, basic information about the factors considered by recommendation algorithms should be made available to the public. Additionally, financial disclosures should be mandated to bring transparency to sponsorship and funding practices, preventing foreign governments or obscure funders from exerting undue influence.
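As a toy example of the kind of disclosure being asked for, imagine a podcast app publishing the factors and weights behind its recommendations. The factors and weights below are invented for illustration; no platform is known to use this exact scheme.

```python
# Hypothetical "disclosed" recommendation scoring: the factors and weights
# are published so listeners can see roughly why a show surfaces.
DISCLOSED_WEIGHTS = {
    "topical_match": 0.5,    # similarity to the user's query or history
    "completion_rate": 0.3,  # share of each episode listeners finish
    "recency": 0.2,          # newer episodes score higher
}

def recommendation_score(signals: dict) -> float:
    """Weighted sum over the published factors; each signal is in [0, 1]."""
    return sum(DISCLOSED_WEIGHTS[k] * signals.get(k, 0.0)
               for k in DISCLOSED_WEIGHTS)

print(recommendation_score(
    {"topical_match": 0.9, "completion_rate": 0.6, "recency": 0.4}))  # 0.71
```

Disclosure at this level of granularity would not expose trade secrets or user data, yet it would let outsiders check whether, say, credibility scores quietly enter the ranking.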
Conclusion
As podcasting continues to gain popularity and influence, it is crucial to ensure the preservation of freedom of speech while combating the spread of misinformation. In this age of AI-powered censorship, a collaborative effort between various stakeholders is necessary to strike a balance between upholding free expression and addressing the challenges posed by harmful content.
To address concerns surrounding AI-powered censorship, tech companies, policymakers, and civil society organizations must engage in open dialogue and ongoing research. This collaborative approach will help to establish ethical guidelines and understand the implications of AI moderation in podcasting and other media platforms.
The debate surrounding AI-powered censorship will undoubtedly persist as technology advances and society grapples with content moderation challenges. Finding the right balance between effective moderation and safeguarding freedom of speech remains a complex and evolving task that requires continuous scrutiny and collaboration.
To navigate this path forward successfully, users and content creators need to remain vigilant and advocate for transparent and accountable content moderation practices. Transparent guidelines, robust user reporting mechanisms, and leveraging user feedback through voting and commenting systems are critical steps in combating misinformation and ensuring responsible content dissemination.
Ultimately, ensuring freedom of speech in podcasting requires a multifaceted approach. By implementing robust content moderation policies, embracing transparency, and leveraging user engagement, podcasting platforms can play a vital role in stopping the spread of misinformation while safeguarding the principles of free expression. Through collaboration and responsible practices, podcasting apps, users, and regulators can collectively contribute to a trustworthy and reliable podcasting ecosystem.
Featured Photo: Анастасія Білик, Pexels