Google is Banning Gemini from Talking About Elections


Google has confirmed that it is restricting the types of questions users can ask Gemini, its large language model, to prevent it from talking about elections.

Alongside the Gemini restrictions, Google is prioritizing ‘authoritative’ content related to elections in its search and YouTube results.

In a blog post, the tech giant said its large language models will be working alongside staff to help them with content moderation and ‘abuse-fighting’ efforts.

“A dedicated team of local experts across all major Indian languages are working 24X7 to provide relevant context. With recent advances in our Large Language Models (LLMs), we’re building faster and more adaptable enforcement systems that enable us to remain nimble and take action even more quickly when new threats emerge,” the statement reads.

The blog post also states that Google has ‘strict policies and restrictions around who can run election-related advertising on [their] platforms’. However, this is not particularly reassuring, considering Google Ads has previously been used by cybercriminals to distribute malware.

Concerns about the potential for generative AI to be used for spreading misinformation have led governments worldwide to explore regulations for this technology. 

Ahead of the 2024 Indian General Election, Google has partnered with Shakti, the India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India working to detect online misinformation, including deepfakes.

Google's AI Bias Issue

Google’s Gemini tool has been at the center of recent controversy related to its generative art feature. Google apologized for “missing the mark” after users reported the tool generated images depicting a historically inaccurate variety of genders and ethnicities.

This led right-wing influencers to accuse Gemini of being “racist”, claiming it was intentionally erasing white figures from history or showing ‘anti-white bias.’

Google has paused Gemini’s image generation feature and has not confirmed when it will be live again.

Read: Is Gemini Racist? Google’s AI Pulled Amidst Bias Allegations


Google's decision to restrict Gemini's responses to election-related queries reflects a growing awareness of the ethical issues surrounding AI. While AI chatbots offer a convenient way to access information, their reliance on potentially biased data can lead to the spread of misinformation. 

As generative AI continues to develop, robust regulations and responsible development practices will be crucial to ensure these powerful tools are used for good and not to manipulate or mislead.

The Problem with AI Chatbots

From quickly answering questions that would previously have required multi-source research to providing 24/7 customer support, AI chatbots are changing how we access and interact with information online.

Instead of typing keywords, users can ask questions in a natural, conversational way. And as chatbots are integrated into platforms like messaging apps and social media, information is placed directly within the user's existing communication channels.

But with this convenience comes a new set of considerations and issues. 

Misinformation

Chatbots rely on training data, and if that data is biased or inaccurate, the chatbot can become a source of misinformation. This is particularly concerning for large language models like Google's Gemini, which source information directly from the vast and often unfiltered internet.

A chatbot trained on huge data sets of news articles can easily reproduce the biases of its sources. While the articles themselves may be objective, the sheer volume of coverage devoted to a particular political party or event can be amplified during training, leading the chatbot to present that party or event as more prominent or significant than others, as the sketch below illustrates.
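To make the amplification point concrete, here is a deliberately simplified sketch in Python. It is not how Gemini or any production LLM works; the corpus, the counts, and the answer function are all hypothetical. It shows how a model that simply learns the empirical frequencies of its training data, then answers with the most likely option, turns a 70/30 skew in coverage into a 100/0 skew in its output.

from collections import Counter

# Hypothetical toy corpus: 100 headlines, 70 covering "Party A"
# and 30 covering "Party B". Each article may be objective, but
# the volume of coverage itself is skewed.
corpus = ["Party A"] * 70 + ["Party B"] * 30

# A naive "model" that just learns the empirical distribution.
counts = Counter(corpus)
total = sum(counts.values())
learned_probs = {party: n / total for party, n in counts.items()}
print(learned_probs)  # {'Party A': 0.7, 'Party B': 0.3}

# Greedy decoding: always return the single most likely answer.
def answer(question: str) -> str:
    return max(learned_probs, key=learned_probs.get)

print(answer("Which party is making news?"))  # always 'Party A'

Real language models are vastly more complex, but the underlying dynamic is the same: over-represented material in the training data shapes what the model treats as the default answer.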

Chatbots trained on historical data might unknowingly perpetuate outdated or biased views. For example, a chatbot trained on political speeches from the early 20th century might present a skewed view of gender roles in politics.

A chatbot programmed to answer political questions could unintentionally favor certain candidates or parties if its training data leans towards a specific ideology.

Deep Fakes

Deepfakes are realistic video and audio forgeries created using artificial intelligence. By manipulating existing footage, deepfakes can make it appear as if someone said or did something they never did. This has the power to sway public opinion and even the outcome of elections.


The rise of deepfakes also contributes to eroding trust in the democratic system and other institutions. If people can't be sure whether a video or audio recording is real, they may be less likely to believe anything they see or hear.

This is already happening. In March 2023, images of former US president Donald Trump being arrested on the streets of New York went viral. The images, created from multiple angles to catalogue the entire “arrest”, turned out to be completely fabricated; the event never happened.

Read: What Are Deep Fakes and Why Are They Dangerous? 
