Building Public Trust in AI: Insights from the Ipsos Report and Implications for Public Affairs

As AI systems continue to expand their role in various industries, understanding public perception of artificial intelligence is critical. The Ipsos Public Trust in AI report, released in September 2024, sheds light on public sentiment toward AI, with a focus on governance, ethics, and trust. While AI adoption is increasing, significant scepticism remains — especially around data privacy, transparency, and ethical governance.

Key Findings from the Ipsos Report
1. Lack of Public Confidence in AI Governance

A large portion of the public does not trust that governments or organisations are sufficiently regulating AI systems. People are wary of how AI technologies are being deployed and managed — particularly when it comes to transparency, fairness, and accountability.

2. Ethical Concerns Around Data Privacy and Fairness

AI’s reliance on large datasets is raising ethical red flags, particularly regarding data privacy and potential biases in decision-making. Without robust ethical guidelines, AI systems could exacerbate existing inequalities or harm vulnerable populations.

3. Demand for Greater Transparency and Accountability

The public is calling for more transparency in how AI systems are built, trained, and deployed. There is a strong desire for clear governance frameworks that protect individuals’ rights and ensure AI is used for the public good.

4. Growing Desire for AI Education

People want to better understand how AI systems work, what risks they pose, and how they can be harnessed for positive outcomes. Public education on AI is seen as critical to improving trust and acceptance.

Impact on the Public Affairs Industry

The findings from the Ipsos report carry significant implications for the public affairs industry. As AI plays a more prominent role in policy monitoring, campaign strategies, and political communications, professionals in this sector must be attuned to the public’s concerns.

AI in Public Affairs: Bridging the Trust Gap

Public affairs professionals rely on AI tools to gather data, analyse public sentiment, and craft informed strategies. However, with the public expressing low trust in AI governance and ethical use, the industry faces a trust gap that must be addressed. Public affairs professionals are tasked with not only using AI responsibly but also advocating for transparent AI policies that align with public values.

Building trust in AI means establishing clear ethical guidelines, promoting responsible data use, and ensuring that AI technologies are seen as tools for democratic engagement rather than manipulation. Firms that emphasise transparency, accountability, and public education on AI’s role in shaping policy will be better positioned to gain public confidence and deliver value to their clients.

Using AI to Increase Transparency in Public Policy

AI has the potential to enhance transparency in government and public policy by enabling real-time data collection, analysis, and dissemination. AI systems can track legislative developments, monitor political sentiment across different demographics, and even predict policy outcomes based on historical data.
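To make the sentiment-monitoring idea above concrete, here is a minimal, purely illustrative sketch in Python. A production system would use a trained language model rather than a hand-written word list; the `POSITIVE` and `NEGATIVE` lexicons, the `sentiment_score` helper, and the demographic groupings below are all hypothetical, chosen only to show the shape of the computation.

```python
# Illustrative sketch: lexicon-based sentiment monitoring across groups.
# A real deployment would replace the word lists with a trained model.
POSITIVE = {"support", "welcome", "benefit", "progress", "agree"}
NEGATIVE = {"oppose", "concern", "risk", "harm", "reject"}

def sentiment_score(text: str) -> float:
    """Score in [-1, 1]: share of positive minus negative lexicon words."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def monitor(statements: dict[str, list[str]]) -> dict[str, float]:
    """Average sentiment per demographic group."""
    return {
        group: sum(sentiment_score(s) for s in texts) / len(texts)
        for group, texts in statements.items()
    }
```

Even a toy like this makes the transparency point tangible: because the lexicons are visible, anyone can inspect exactly why a statement was scored as positive or negative, which is precisely the kind of openness the report finds the public demanding of far more complex systems.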

For public affairs professionals, this translates into more informed advocacy strategies, real-time monitoring of policy impacts, and enhanced engagement with stakeholders. However, this only works if these AI systems are governed by transparent, ethical frameworks. Without trust, the public is unlikely to accept AI-driven insights in political discourse.

The public is open to AI’s potential but remains cautious of its current limitations. This presents both a challenge and an opportunity — how can we make AI more trustworthy, while reinforcing its role as a tool that enhances rather than replaces human expertise?

— Ipsos Public Trust in AI Report, September 2024

The AI Paradox in Policy Monitoring

There is a notable paradox at the heart of AI adoption: while many users are willing to adopt AI for efficiency and data-driven insights, they often remain sceptical that AI can outperform human judgement. Yet optimised AI systems can produce high-quality analysis of even very large policy documents within minutes — a capability demonstrated repeatedly through AI flash analysis of landmark reports such as the Draghi Report on European competitiveness.

Many professionals still believe that AI tools, no matter how advanced, cannot fully replicate the nuance, context, and critical thinking that human experts bring to the table. This scepticism isn’t unwarranted. AI systems are only as good as the data they’re trained on. They lack the ability to grasp the subtleties of political landscapes, cultural contexts, or the strategic insights that seasoned public affairs professionals provide.

This blend of reliance on AI tools for their processing power and scepticism of their ultimate decision-making capabilities speaks to the broader trend highlighted in the Ipsos report: the public is open to AI’s potential but remains cautious of its current limitations.

Building Trust in AI: Practical Steps for Public Affairs

Given the Ipsos findings and the continued scepticism toward AI’s capabilities, public affairs professionals must adopt strategies that emphasise trust, transparency, and ethical responsibility. Here are four practical steps:

1. Advocate for Ethical AI Regulations

Public affairs professionals should work closely with policymakers to advocate for clear, ethical regulations governing AI. This includes promoting accountability in how AI systems are trained, used, and monitored.

2. Enhance Public Understanding

Building AI literacy is critical. Public affairs professionals should engage in efforts to educate the public on how AI systems work, what they’re used for, and how they can be leveraged for positive social outcomes. Bridging the knowledge gap will be key to fostering trust.

3. Commit to Transparent Data Usage

Ensure that the AI systems you use or advocate for are transparent about how they collect, store, and process data. This includes being open about the sources of the data, the algorithms used, and the potential biases within those systems.

4. Collaborate with Human Experts

AI should be seen as a tool that enhances human capabilities, not replaces them. The most successful public affairs strategies will combine AI-driven insights with the strategic thinking and nuanced understanding that only human professionals can offer.

The Ipsos Public Trust in AI report offers valuable insights into the challenges and opportunities surrounding AI adoption. For public affairs professionals, these findings underscore the need to advocate for transparent, ethical, and responsible AI practices. While AI has the potential to revolutionise policy monitoring, political strategy, and public engagement, its success will depend on how well we can bridge the gap between public scepticism and technological advancement.

The most trustworthy AI tools are those built with accountability, openness, and a genuine commitment to augmenting — not replacing — the human expertise that public affairs demands.

Report Source
Ipsos — Public Trust in AI (September 2024)

Ipsos is one of the world’s largest market research and polling organisations. Their September 2024 Public Trust in AI report surveyed respondents across multiple countries, examining attitudes toward AI governance, data privacy, transparency, and the role of AI in public life. The full report is available on the Ipsos website.
