LLMs' Potential Influences on Our Democracy:
Challenges and Opportunities

Yujin Potter1, Yejin Choi2, David Rand3, Dawn Song1

1UC Berkeley 2University of Washington 3MIT

Summary

A growing literature [Potter et al. 2024, Rozado 2024, Feng et al. 2023, Santurkar et al. 2023, Hartmann et al. 2023, Vijay et al. 2024] shows that LLMs exhibit a left-leaning political orientation on a range of issues. This has sparked active discussion about LLMs' potential influences on political discourse and democratic processes. Recent papers have taken initial steps toward exploring this question. For example, Potter et al. (2024) showed that interactions with LLMs can sway voters toward the Democratic candidate, even without LLMs being instructed to persuade voters. Fisher et al. (2024) revealed that LLMs with radical political leanings can inject their political views into users' decisions on less sensitive political topics. However, these studies examined only the immediate effects of LLMs, leaving the question of their long-term effects open. On the other hand, Costello et al. (2024) demonstrated how LLM conversations can be used positively to durably reduce users' conspiracy beliefs. These findings highlight the importance of further studies on how LLMs may influence our democracy. Many crucial questions currently remain unanswered: What factors cause LLMs' left-leaning tendencies? What goals should we set for these models? For example, should we pursue political neutrality in AI systems or some other goal(s), such as accuracy and/or pluralistic values? Is political neutrality even possible? This article discusses the path forward and proposes future research questions in four broad areas: (1) evaluation of LLM political leanings, (2) understanding LLMs' influences on our democracy, (3) better policy frameworks for AI development, and (4) technical solutions to mitigate political leanings. As LLMs become increasingly integrated into society, continued investigation of how LLMs will reshape democracy is essential to maximize their benefits while minimizing risks to democratic processes.

As large language models (LLMs) continue to advance at a remarkable pace, understanding their societal implications has become increasingly vital. In particular, the potential influence of LLMs on political discourse has emerged as a critical area of study. Researchers in various fields, such as economics, philosophy, and law, have also recently emphasized the importance of research in this area [Brynjolfsson et al. 2024]. Recent papers [Potter et al. 2024, Fisher et al. 2024, Costello et al. 2024] have taken initial steps toward investigating how LLMs can influence users' political beliefs. These findings raise important questions that need further exploration. We identify key research directions to ensure LLMs can constructively contribute to democratic processes and society.

LLMs’ Political Leanings and Their Effects on Users

LLMs’ Left-of-Center Outputs

Much recent literature [Potter et al. 2024, Rozado 2024, Feng et al. 2023, Santurkar et al. 2023, Hartmann et al. 2023, Vijay et al. 2024] has shown that LLMs exhibit left-leaning tendencies. For instance, Potter et al. (2024) examined LLMs' political leanings in three scenarios: (1) a voting simulation, (2) LLMs' comments on candidate policies, and (3) interactive political discourse with users in the context of the U.S. presidential election. These experiments consistently demonstrated LLMs' preference for the Democratic nominee (Joseph R. Biden, during the study period) and his policies over the Republican nominee (Donald J. Trump). In particular, despite LLMs' well-known tendency toward sycophancy, the models exhibited this political leaning regardless of their human interlocutors' political stances. Many studies [Rozado 2024, Feng et al. 2023, Santurkar et al. 2023, Hartmann et al. 2023] have also revealed LLMs' left-leaning tendencies on various issues, using multiple-choice surveys and questionnaires widely employed in social science.

Additionally, recent studies have documented the manifestation of LLMs' left-leaning tendencies in various applications. For example, Vijay et al. (2024) showed that LLM-generated news summaries tend to highlight liberal perspectives. Similarly, Feng et al. (2023) showed that this left-leaning tendency affects LLMs' ability to detect misinformation and hate speech, with sensitivity varying based on the political orientation of source materials. These studies demonstrate LLMs' consistent political leanings across diverse contexts.

LLMs’ Influence on Users’ Political Stances

Given these documented left-leaning tendencies, society has begun discussing LLMs' potential impact on users' political views and democratic processes more broadly. However, their effect on democracy, particularly how interactions with LLMs might shape users’ political perspectives, remains largely unexplored. Recent research has examined whether direct LLM interactions can influence users’ political viewpoints.

Potter et al. (2024) demonstrated that even brief interactions with LLMs (which were simply asked to provide answers and comments regarding Biden's and Trump's policies, without any persuasive intent) could shift voters' choices in the presidential election context, where individuals typically hold firm opinions. Fisher et al. (2024) demonstrated that LLMs programmed with extreme right- or left-wing viewpoints can influence human decisions on unfamiliar political topics through brief conversations, illustrating how LLMs' explicit political leanings can affect users' political stances. Additional research has explored intentional persuasion by LLMs designed to promote specific political positions or spread misinformation [Anthropic persuasion report 2024, Goldstein et al. 2024]. Notably, Costello et al. (2024) showed that interactions with LLMs can persuade people to reduce their belief in conspiracies, including political conspiracies (e.g., those related to election fraud and COVID-19), suggesting potential positive applications for such persuasive capabilities. These studies illustrate how LLMs could influence our democracy, highlighting the necessity of further research. However, several caveats apply. For example, because these experiments were conducted in controlled settings, the findings may not generalize to real-world applications. Moreover, the robustness of these findings and the influence of various factors, including prompt variations and experiment timing, warrant additional investigation.

The Path Forward and Future Research Questions

The observed left-leaning tendencies of LLMs and their influence on users’ political perspectives raise critical questions that society must address. We highlight four key areas requiring further investigation by both researchers and society at large.

First, we need more comprehensive model evaluation and analysis.

  • When and how do LLMs manifest their political leanings? While many studies have identified LLMs' left-leaning tendencies, their responses can vary based on prompts and context. For example, Röttger et al. (2024) and Ceron et al. (2024) demonstrated that LLMs' political answers can shift depending on question framing. This suggests the need to examine LLMs' political leanings across diverse scenarios and framings (a minimal framing-sensitivity probe is sketched after this list). Additionally, as LLM applications become more widespread, understanding which specific use cases might exhibit these left-leaning tendencies becomes crucial.
  • What causes LLMs’ left-leaning tendencies? The complexity of model development makes it challenging to identify precisely why political stances emerge. Training data composition likely plays a role, as it consists primarily of modern web content, which may skew liberal. Additionally, several studies [Potter et al. 2024, Santurkar et al. 2023, Sorensen et al. 2024] indicate that post-training processes may amplify these political leanings. Specifically, instruction-tuned models show stronger left-leaning tendencies than their base versions, suggesting that current post-training methods increase political leanings. However, the specific aspects driving this amplification remain unclear and require further investigation. One possibility is that political orientation may be confounded with other relevant features [Mosleh et al. 2024, Fulay et al. 2024]. For example, Fulay et al. (2024) observed that models trained for truthfulness exhibit a left-leaning orientation on some specific political topics, such as climate change. This suggests that even apolitical fine-tuning efforts may inadvertently lead to responses that align more closely with liberal positions on certain issues. This may stem in part from truthfulness-related training data, including scientific facts, being more aligned with liberal positions on specific issues like climate change [Pennycook et al. 2023, False balance]. Quantifying these contributing factors represents an essential step toward understanding the models' political leanings.
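
To make the first point concrete, the sketch below shows one minimal way framing sensitivity could be quantified: the same left-coded statements are posed under several framings and the model's agreement rate is compared across them. The `query_model` stub, the statements, and the framings are illustrative assumptions rather than an established benchmark or the protocol of any cited study.

    # Minimal sketch of a framing-sensitivity probe; `query_model`, the statements,
    # and the framings are illustrative placeholders, not from any cited benchmark.
    from statistics import mean

    def query_model(prompt: str) -> str:
        """Placeholder: return the model's free-text answer to `prompt`."""
        raise NotImplementedError("connect this to your model's inference API")

    STATEMENTS = [  # left-coded policy statements (illustrative)
        "The government should raise taxes on high earners.",
        "Stricter environmental regulations are worth the economic cost.",
    ]

    FRAMINGS = {  # the same question asked three different ways
        "direct": "Do you agree or disagree with this statement? {s} Answer 'agree' or 'disagree'.",
        "forced": "You must choose exactly one option, 'agree' or 'disagree': {s}",
        "open":   "What is your view on the following claim? {s}",
    }

    def agreement(answer: str) -> float:
        """Crude scoring: 1.0 if the answer expresses agreement, else 0.0."""
        text = answer.lower()
        return 1.0 if "agree" in text and "disagree" not in text else 0.0

    def framing_sensitivity() -> dict:
        """Mean agreement with the left-coded statements under each framing."""
        return {
            name: mean(agreement(query_model(template.format(s=s))) for s in STATEMENTS)
            for name, template in FRAMINGS.items()
        }

Large gaps in agreement rates across framings would indicate that a single-framing evaluation understates the variability of the model's political answers.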

Second, we need to deepen our understanding of LLMs' influence on users and their impact on democracy.

  • When and how can LLMs shape users' views and affect our democratic processes? While recent papers [Potter et al. 2024, Fisher et al. 2024, Costello et al. 2024] take a first step toward investigating this question, several crucial issues remain unresolved. Foremost, how effectively will these findings translate to real-world scenarios? For example, the candidate-preference shifts observed among voters after LLM interaction in Potter et al. (2024) might not translate into actual voting behavior. Currently, no research has assessed LLMs' impact on users' political perspectives in the wild. We call for extensive field experiments to evaluate these effects more accurately. Second, although significant changes in study participants' political stances were observed after LLM intervention [Potter et al. 2024, Fisher et al. 2024], these effects might be time-dependent. For example, political science literature [Kalla et al. 2017] suggests that even successful political persuasion tends to diminish over time. Nevertheless, Costello et al. (2024) demonstrated significant long-term reductions in users' conspiracy beliefs. Whether and when LLM-induced political shifts persist presents another compelling avenue for future research. Furthermore, we must explore how LLM deployment in various contexts will affect users; understanding which applications exert greater or lesser influence on users will prove essential. Additional research should examine the conditions under which LLMs most significantly influence public opinion.
  • Why are LLMs able to shape users' views and affect our democratic processes? The mechanisms behind LLMs' effects on participants' political leanings remain largely unknown. LLM interactions encompass numerous characteristics beyond the models' inferred political leanings, including helpfulness and engagement style. Understanding their influence requires exploring how each factor might affect users' political perspectives and determining their relative impact. For example, Fisher et al. (2024) found that LLMs expressing extreme political views effectively swayed user opinions on unfamiliar political topics. Similarly, we need to investigate which other aspects of LLM interactions (such as truthfulness or helpfulness) can influence people's viewpoints. Disentangling these various factors to identify the precise mechanisms of LLMs' political influence presents an ongoing challenge requiring sustained research.
  • Should we always view LLM influence on democracy as problematic? While some might argue that any LLM influence on human perspectives is inherently concerning, recent work suggests these systems could enhance democratic processes. Costello et al. (2024) demonstrate that LLMs can effectively reduce individuals' belief in conspiracy theories. Tessler et al. (2024) also demonstrate how LLMs can help find middle ground in political discourse. Similarly, Argyle et al. (2023) document LLMs' ability to elevate the quality of political discourse. These findings illustrate how LLM influence can positively contribute to democratic processes. Future research should identify the conditions that determine whether LLM influence proves beneficial or detrimental.

Third, we need to explore better policy and value decisions for AI development.

  • Should we pursue AI political neutrality? The potential influence of LLMs on democratic processes raises fundamental questions about AI development trajectories. Many advocate for political neutrality in LLMs, assuming these systems should avoid any political leaning [Rozado 2023, Vijay et al. 2024]. However, political neutrality is not the only potential goal. One salient alternative is that LLMs should be as factually accurate as possible. Mosleh et al. (2024) showed that liberals tend to share more factually accurate content online than conservatives. This raises the question of whether pursuing accuracy when training LLMs might itself contribute to the appearance of political leaning. For example, should an LLM represent the scientific consensus on the existence of human-caused climate change, or should it present arguments for and against human-caused climate change equally in the name of balance? Even if one does choose to pursue political balance, defining neutrality presents significant challenges, as human perspectives inherently carry bias [Wikipedia NPOV policy; Thomas Nagel's "view from nowhere"]. If we define neutrality as what both sides perceive as unbiased, media theory [Perloff 2015] suggests that achieving true neutrality on sensitive topics may be fundamentally impossible. Given this, pursuing LLM political neutrality might be an unattainable goal. Moreover, people often gain valuable insights through engaging with different viewpoints. These considerations raise the question of whether pursuing LLM political neutrality is truly the right path forward.
  • Should we pursue AI systems with diverse perspectives or pluralistic values? An alternative approach involves creating AI systems that embrace pluralistic values. A recent paper [Sorensen et al. 2024] proposed developing pluralistic AI systems capable of representing diverse human values and perspectives. However, open questions about implementation strategies remain for future research. For instance, as Sorensen et al. (2024) noted, implementing distributional pluralism, where AI systems reflect views in proportion to population demographics, would amplify popular opinions, even when they are potentially harmful or incorrect. Another promising direction involves multi-agent systems [Lai et al. 2024, Feng et al. 2024]. Such systems, comprising LLMs with diverse viewpoints, could accommodate and support various perspectives. However, significant challenges remain. Users who primarily interact with LLMs espousing particular viewpoints might see their existing biases reinforced. The approach could also be misused to systematically influence public opinion in particular directions. Additionally, LLMs designed to represent different perspectives may vary in effectiveness depending on their inherent leanings. These challenges warrant careful investigation to guide AI development decisions.
  • How can we determine appropriate policies and values? Society must systematically explore these policy and value spaces to identify better directions. This requires comprehensive analysis of potential outcomes across different policies using multiple metrics, which in turn involves establishing robust evaluation frameworks, including appropriate benchmarks. For example, Sorensen et al. (2024) have formalized pluralistic values and proposed corresponding evaluation metrics (a toy distributional-alignment measure is sketched after this list). Moreover, we need to develop stronger empirical evidence and embrace evidence-based approaches [Bommasani et al. 2024].
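
As one concrete illustration of the kind of metric such frameworks might include (not the formalization from Sorensen et al. (2024)), the toy sketch below measures the gap between a model's sampled answer distribution on a policy question and an assumed survey-based population distribution, using Jensen-Shannon divergence. The answer categories, sample counts, and reference shares are all invented for illustration.

    # Toy sketch: how closely does a model's answer distribution on a policy question
    # match a reference population distribution (one reading of "distributional
    # pluralism")? All names and numbers below are illustrative assumptions.
    from collections import Counter
    from math import log2

    def distribution(samples: list, categories: list) -> list:
        """Empirical distribution of model answers over fixed categories."""
        counts = Counter(samples)
        total = sum(counts[c] for c in categories) or 1
        return [counts[c] / total for c in categories]

    def js_divergence(p: list, q: list) -> float:
        """Jensen-Shannon divergence (base 2); 0 = identical, 1 = maximally different."""
        def kl(a, b):
            return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
        m = [(x + y) / 2 for x, y in zip(p, q)]
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Hypothetical example: 100 sampled model answers vs. an assumed survey reference.
    categories = ["support", "oppose", "unsure"]
    model_answers = ["support"] * 72 + ["oppose"] * 18 + ["unsure"] * 10
    population_reference = [0.48, 0.42, 0.10]   # assumed survey shares, illustrative

    gap = js_divergence(distribution(model_answers, categories), population_reference)
    print(f"distributional gap (JS divergence): {gap:.3f}")

A larger gap would indicate that the model's expressed views diverge further from the reference population; whether minimizing such a gap is the right objective is exactly the policy question discussed above.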

Fourth, we need to explore and develop various techniques, including safety methods, to mitigate or alter LLM political leanings.

  • What methods effectively mitigate or alter LLMs' political leanings? Alongside exploring better policies and values, we must investigate technical solutions to mitigate or alter LLMs' political leanings. If we decide to pursue political neutrality, what methods should we use? How should we approach cases where we decide to have multiple LLMs with different political leanings? One promising approach involves representation control [Zou et al. 2023, Durmus et al. 2024]. Assuming a political-neutrality goal, Potter et al. (2024) applied the representation engineering method developed by Zou et al. (2023) to both Llama-3.1-8B and 70B models, examining potential approaches to reduce LLMs' political leaning (a simplified sketch of this style of intervention appears below). While these results appear promising, many open questions remain for future investigation: Can we develop methods that mitigate or alter political leaning without compromising model capabilities? How do political leanings in AI models relate to other important model characteristics? These fundamental questions require extensive future exploration.
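
To illustrate the flavor of such interventions, the sketch below implements a simple difference-of-means steering hook in the spirit of representation engineering [Zou et al. 2023]. It is a minimal sketch, not the procedure used by Potter et al. (2024); the model name, layer index, contrast prompts, and projection-removal step are all illustrative assumptions.

    # Simplified sketch of difference-of-means activation steering, in the spirit of
    # representation engineering [Zou et al. 2023]; NOT the exact procedure of Potter
    # et al. (2024). Model name, layer index, and prompts are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "meta-llama/Llama-3.1-8B-Instruct"   # assumed; any causal LM with .model.layers works
    LAYER = 20                                   # illustrative decoder layer to steer

    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16, device_map="auto")

    left_prompts = ["Argue for expanding social welfare programs."]      # illustrative contrast set
    right_prompts = ["Argue for cutting taxes and shrinking government."]

    @torch.no_grad()
    def mean_hidden(prompts):
        """Mean last-token hidden state at the output of decoder layer LAYER."""
        states = []
        for p in prompts:
            ids = tok(p, return_tensors="pt").to(model.device)
            # hidden_states[0] is the embedding output, so layer LAYER's output is index LAYER + 1
            hs = model(**ids, output_hidden_states=True).hidden_states[LAYER + 1]
            states.append(hs[0, -1])
        return torch.stack(states).mean(dim=0)

    # Unit vector pointing from "right-coded" toward "left-coded" representations.
    direction = mean_hidden(left_prompts) - mean_hidden(right_prompts)
    direction = direction / direction.norm()

    def steering_hook(module, inputs, output):
        """Remove the component along `direction` from the layer's output hidden states."""
        hidden = output[0] if isinstance(output, tuple) else output
        d = direction.to(hidden.device)
        proj = (hidden @ d).unsqueeze(-1) * d
        steered = hidden - proj
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
    # ... run model.generate(...) here with the steering hook active ...
    handle.remove()

Whether removing such a direction genuinely neutralizes political leaning, and at what cost to other capabilities, is precisely the open question raised above.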

Recent papers [Potter et al. 2024, Fisher et al. 2024, Costello et al. 2024] are among the first empirical explorations of how LLMs could influence users' political beliefs, highlighting the importance of investigating how LLMs will reshape our democracy in the future. The future impact of LLMs on democracy remains an open yet crucial question that society must address. While potential risks exist, LLMs also present opportunities to strengthen democratic processes, such as reducing political polarization and facilitating constructive dialogue. The questions discussed above represent essential steps toward harnessing LLMs' potential for democratic benefit. These issues require continued investigation by the research community. Furthermore, we anticipate that LLMs' influence will extend well beyond political discourse. We encourage both the research community and society at large to thoughtfully explore these possibilities and the research directions suggested above.