Sarah K. Breuer, German Institute of Development and Sustainability (IDOS), Bonn, Germany.
Alejandro M. Torres, Center for Democratic Studies, National Autonomous University of Mexico (UNAM), Mexico City, Mexico.
Priya R. Menon, School of Global Affairs, King’s College London, United Kingdom.
Correspondence
Correspondence concerning this article should be addressed to Sarah K. Breuer, German Institute of Development and Sustainability (IDOS), Tulpenfeld 6, 53113 Bonn, Germany. Email: [email protected]
Summary
The spread of information pollution endangers democratic systems by weakening informed public choices and eroding social trust. With advances in artificial intelligence (AI)—including the rise of deepfakes—identifying reliable information is increasingly difficult, allowing manipulation of collective opinion. These distortions affect marginalized communities most severely and can even spark both online and offline violence.
While there is already a wide range of strategies to confront information pollution, current policy discussions remain largely confined to regulating content. International initiatives are underway to defend information integrity, especially during elections, but diverging national approaches to data governance and the global drift toward authoritarianism complicate efforts to establish a shared stance.
Moreover, collaboration across sectors remains inadequate. Stronger partnerships with technology firms and private actors will be crucial to building and enforcing an international governance framework. The forthcoming Global Digital Compact—set to be negotiated at the UN Summit of the Future on 22–23 September 2024—aims to promote such cooperation and outline a shared digital vision.
This brief highlights global initiatives and evaluates available countermeasures against information pollution. Key recommendations include:
- Strengthening multilateral and cross-sector collaboration to shape a safe, inclusive transnational digital order, with active participation from private stakeholders.
- Broadening the focus beyond contentious content moderation toward content-neutral tools, with AI leveraged to expand their reach.
- Tailoring policies to local contexts, since tools that work well in one setting may have adverse effects in another. Careful research should precede integration into national strategies or development programmes.
- Recognizing that only long-term, comprehensive measures can build resilience. This requires reinforcing independent journalism, ensuring free information flows, and embedding information literacy in education systems alongside digital interventions.
© The Author(s) 2025. Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third-party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-nd/4.0/
DIGITAL PRESSURES ON DEMOCRACY AND OBSTACLES TO GLOBAL COOPERATION
The escalating spread of information pollution is creating serious risks for democratic systems and social stability worldwide. Democracy depends on an informed citizenry, freedom of speech, an independent media sector, and equal access to credible information. When people receive factual, impartial knowledge, they are better able to make sound political and economic decisions. Yet, when citizens encounter misleading or contradictory narratives on digital platforms, it often fuels political polarisation and weakens both confidence in democratic institutions and the reliability of information itself.
These challenges intensify during election periods, when information manipulation is frequently deployed to tarnish political rivals, discredit journalists and activists, or cast doubt on electoral authorities and processes. Disruptions to information integrity are a defining strategy of authoritarian governance—autocratic regimes deliberately use disinformation to consolidate and expand their control.
Figure 1 illustrates the relationship between autocratisation and the spread of government-led disinformation. The vertical axis shows the level of domestic disinformation disseminated by governments, while the horizontal axis shows the disinformation score measured before the onset of the respective democratisation or autocratisation episode (the onset year varies by country). Only countries that underwent such regime transitions are displayed, labeled as “autocratising” or “democratising” in 2023. Although descriptive in nature, the figure indicates that states experiencing autocratisation tend to exhibit higher levels of state-driven disinformation.
Figure 1: Government-driven disinformation and political regime shifts (autocratisation or democratisation)
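For readers who want to reproduce this kind of descriptive comparison, the following is a minimal, illustrative Python sketch under stated assumptions: it presumes a hypothetical CSV export of V-Dem Digital Society Project indicators with invented column names (country, gov_disinfo_baseline, gov_disinfo_latest, regime_episode); the actual variables and data preparation behind Figure 1 are not documented in this brief.

```python
# Illustrative sketch only: scatter government-disinformation scores for countries
# in regime-transition episodes. The CSV file and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("vdem_dsp_export.csv")  # hypothetical export of V-Dem / DSP indicators

# Keep only countries classified as being in a regime-transition episode in 2023.
df = df[df["regime_episode"].isin(["autocratising", "democratising"])]

fig, ax = plt.subplots(figsize=(7, 5))
colors = {"autocratising": "tab:red", "democratising": "tab:blue"}
for episode, group in df.groupby("regime_episode"):
    ax.scatter(
        group["gov_disinfo_baseline"],  # score before the episode began
        group["gov_disinfo_latest"],    # most recent score
        label=episode,
        color=colors[episode],
        alpha=0.7,
    )

ax.set_xlabel("Government disinformation before episode onset")
ax.set_ylabel("Government disinformation, latest year")
ax.legend(title="Regime episode (2023)")
plt.tight_layout()
plt.show()
```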
Mexico offers a telling example of these dynamics. According to V-Dem’s Digital Society Project, the spread of disinformation by the government rose markedly during the administration of President López Obrador (2018–2024). He frequently employed both malinformation and disinformation, often with the aim of undermining journalists and damaging their professional credibility (Breuer, 2024). This trend has paralleled a deterioration in Mexico’s democratic standards, with V-Dem data noting the onset of an autocratisation process in 2020.

The rapid evolution of artificial intelligence (AI) further amplifies these digital threats to democracy. Distinguishing between AI-generated and human-produced content is becoming increasingly difficult, heightening susceptibility to manipulation (Kreps et al., 2022). Such AI-driven material, frequently crafted to provoke strong emotional reactions, spreads quickly through algorithm-based platforms and media systems. Although generative AI is still at an early stage (Garimella & Chauchard, 2024), deepfakes are expected to become an even more prominent tool for disseminating falsified information.

Additionally, information pollution disproportionately harms marginalized communities and minorities, who face heightened risks of digital violence. A global survey by UN Women (2022) found that 38 percent of women have been subjected to online abuse. Importantly, such hostility does not remain confined to the digital sphere: hate speech often spills into real-world violence, with minority groups targeted by vigilante movements organized via online platforms.

These developments have fueled growing international alarm about the role of information pollution in driving polarisation and autocratisation. Reflecting these anxieties—particularly in the context of the “Super Election Year 2024”—the World Economic Forum’s Global Risks Report 2024 identified misinformation and disinformation as the most pressing short-term threat confronting the global community (World Economic Forum, 2024).
BARRIERS TO INTERNATIONAL COOPERATION AND DEVELOPMENT ENGAGEMENT
In response to these mounting risks, global actors have intensified efforts to safeguard the integrity of the information environment. One prominent example is the UN Global Principles for Information Integrity, unveiled in June 2024 and building on the UN policy brief on information integrity on digital platforms (UN, 2023). Measures to curb information pollution are now an active feature of international policy dialogue as well as development cooperation. The OECD has recently revised its framework for media support. Broad consultations involving journalists and media development organisations revealed that the earlier 2014 OECD Principles were no longer adequate in the face of rising disinformation, polarisation, and autocratisation. As a result, in March 2024, the OECD DAC Network on Governance (DAC-GovNet) approved updated “Principles for Relevant and Effective Support to Media and the Information Environment” (OECD, 2024). These principles, aimed at development agencies, media practitioners, political actors, international policymakers, private foundations, and investors, call for a comprehensive and forward-looking approach. They stress the importance of countering manipulation while simultaneously upholding freedom of expression, particularly in light of emerging AI technologies.
Another noteworthy initiative is the “Media and Digital” working group under the Team Europe Democracy (TED) Initiative. Tasked with advancing inclusive democracy and fostering pluralistic, independent media—key elements of the EU Strategic Agenda 2019–2024—the group includes EU donor agencies, academics, journalists, and rights-based organisations. Ahead of the upcoming UN Summit of the Future, the group issued recommendations urging EU member states and the European Commission to push for the inclusion of strong commitments on access to information, press freedom, and public-interest journalism in the UN’s Pact for the Future.
Still, international collaboration on disinformation is hampered by several persistent obstacles:
- Structural barrier. The accelerating trend toward autocratisation severely undermines cooperation. Today, 42 states are undergoing autocratisation (V-Dem Institute, 2024), with half the world’s population living under authoritarian regimes. As principal drivers of manipulated information, autocrats and populist leaders have little interest in endorsing a collective stance against information pollution.
- Funding barrier. Support through international development remains weak. Just 0.3 percent of OECD Official Development Assistance (ODA) is currently directed toward strengthening media and ensuring the free flow of information.
- Sectoral silo barrier. Joint action is hindered by insufficient cross-sector collaboration. Although awareness among businesses is growing, coordination with the private sector is still lacking. This gap poses two risks: (1) companies can unintentionally bankroll disinformation campaigns around global events through automated advertising systems, as ads often appear on manipulative sites without the buyer’s knowledge (Ahmad et al., 2024); and (2) digital platforms themselves are not always included as formal partners in multilateral initiatives. Yet, their involvement is essential, since effective countermeasures often depend on platform-level design choices—for instance, WhatsApp’s “forwarded” label or YouTube’s promotion of trusted sources.
- Tunnel-vision barrier. Policy discussions tend to overemphasize content regulation. While demands for fast crisis-response measures are increasing, this narrow focus is both short-term in outlook and conceptually limited.
TOOLS AND STRATEGIES TO ADDRESS INFORMATION POLLUTION
Public appetite for strict content moderation remains limited and varies by subject matter. Although widely practiced across digital platforms, moderation carries ethical concerns and inherent weaknesses. For one, it is a reactive measure that relies heavily on the willingness and resources of the platform itself—conditions that cannot always be assumed. Moreover, critics often argue that it infringes on freedom of expression. Research indicates that exposure to incivility or intolerance online does not necessarily lead people to support the removal of harmful content (Pradel et al., 2024). In fact, citizens tend to favor the deletion of false information only when it involves violent threats targeting minority groups.
Suspending accounts is another contentious option. “De-platforming” the promoters of disinformation has been employed by several platforms; for example, Twitter applied this strategy following the attack on the U.S. Capitol on 6 January 2021 (McCabe et al., 2024). The intervention curtailed the circulation of misinformation and prompted other harmful actors to exit the platform voluntarily. Nevertheless, the practice remains politically divisive. Critics accuse technology firms of censorship, and surveys show that the public generally supports removing posts more than permanently banning accounts (Kozyreva et al., 2023).
The arsenal of available interventions is far wider than account suspensions or content takedowns. To overcome the “tunnel-vision” problem, researchers highlight a diverse toolkit of measures against polluted information (Kozyreva et al., 2024). Table 1 outlines these approaches, assessing their effectiveness and scalability. Each carries benefits and drawbacks, which can vary depending on context, durability, and possible side effects. Key alternatives include:
- Debunking and fact-checking. These measures attempt to limit misinformation after the fact by providing evidence-based corrections and explanations. A robust global fact-checking ecosystem exists, operating with professional standards and transparent methodologies (EFCSN, 2024).
- Accuracy nudges. Tools such as accuracy prompts aim to shift user behavior by encouraging reflection on whether content is reliable. The assumption is that accuracy, when made salient, can outweigh partisan instincts. In 2021, Twitter introduced Birdwatch (later renamed Community Notes), enabling contributors to attach contextual clarifications to misleading tweets.
- Pre-bunking. This preventive, educational strategy builds resilience by teaching users to recognize manipulative techniques before exposure. Google and Jigsaw recently deployed a prebunking campaign ahead of European Parliament elections. Game-based formats also exist, such as the popular Bad News Game, where players adopt the role of misinformation producers (Iyengar et al., 2023).
Table 1: Toolkit Options for Addressing Information Pollution
| Tool | Purpose | Effectiveness / Scalability | Key Limitations |
| --- | --- | --- | --- |
| Content moderation & de-platforming | Removing misleading content or suspending accounts that spread it | Can restrict circulation in the short term if platforms commit resources | Often unpopular with users; widely criticized as censorship; reactive rather than preventive |
| Debunking & fact-checking | Providing factual corrections and logical clarification of why content is false | Reduces spread of misinformation temporarily but requires significant resources | May erode trust in media and credible sources; only reactive; effectiveness can diminish over time; often topic-specific |
| Accuracy nudges | Prompting users to reflect on whether content/headlines are accurate | Highly scalable with cooperation from platforms; modest short-term effects | Impact may fade over time; may only work in certain contexts or for particular partisan groups/issues |
| Pre-bunking | Training users to recognize manipulative tactics before exposure | Strong preventative impact; evidence of durable effects; moderately scalable | Needs reinforcement (“boosters”); effectiveness differs depending on medium |
Evidence points to the usefulness of these alternatives. Fact-checking and debunking can substantially limit misinformation during large-scale disinformation operations (Unver, 2020). Light-touch nudges, like accuracy reminders, make people more likely to share true rather than false headlines (Pennycook et al., 2021). Pre-bunking shows promise as a “psychological vaccine” against manipulative content (McPhedran et al., 2023).
Still, their effectiveness is not uniform:
- Debunking and fact-checking are inherently reactive and may yield only short-lived results. Some studies suggest they do not necessarily reduce overall engagement with false content (Carey et al., 2022).
- Accuracy nudges can have limited and context-specific impact, sometimes fading over time or only influencing groups less entrenched in misinformation. Twitter’s Community Notes reduce shares of false posts (Renault et al., 2024), but delays between publication and correction blunt their reach. The medium also matters: short WhatsApp prompts urging regular fact-checking proved more effective than long-form podcasts (Bowles et al., 2023).
- Educational interventions like pre-bunking risk losing their potency unless reinforced. “Booster” activities are often needed to sustain their protective effects (Maertens et al., 2021).
RISKS, LIMITATIONS, AND THE ROLE OF LONG-TERM STRATEGIES
Interventions against information pollution may also bring unintended consequences. For instance, fact-checking, while helpful during disinformation surges, can paradoxically erode trust in journalism and even cast doubt on verified facts (Hoes et al., 2024). In this sense, it may inadvertently reinforce the objectives of disinformation campaigns, which often aim less at persuasion than at spreading confusion and mistrust (Altay et al., 2023).

Scalability varies significantly across tools. Fact-checking and debunking are resource-heavy, reactive, and often limited to specific topics, making them difficult to generalize. Some educational approaches share similar constraints. By contrast, accuracy nudges lend themselves to wider adoption. For example, one proposal suggests adding a “misleading count” beside the “like” tally, enabling users to flag dubious posts—higher counts would discourage further sharing (Pretus et al., 2024); a simple sketch of this idea follows below.
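The following is a minimal, purely illustrative Python sketch of how such a “misleading count” might sit alongside the “like” tally and add friction before re-sharing. The field names, threshold value, and prompt wording are invented for illustration and are not taken from Pretus et al. (2024) or from any platform’s actual implementation.

```python
# Minimal, hypothetical sketch of a "misleading count" next to the "like" tally.
# Threshold and field names are invented for illustration.
from dataclasses import dataclass, field

MISLEADING_THRESHOLD = 25  # hypothetical value at which sharing friction kicks in

@dataclass
class Post:
    post_id: str
    text: str
    like_count: int = 0
    misleading_count: int = 0
    flagged_by: set = field(default_factory=set)

    def flag_misleading(self, user_id: str) -> None:
        """Record a user's 'misleading' flag, counting each user only once."""
        if user_id not in self.flagged_by:
            self.flagged_by.add(user_id)
            self.misleading_count += 1

def share_prompt(post: Post) -> str:
    """Return the prompt shown before re-sharing: add friction for flagged posts."""
    if post.misleading_count >= MISLEADING_THRESHOLD:
        return (f"{post.misleading_count} users marked this post as misleading. "
                "Are you sure you want to share it?")
    return "Share this post?"

# Example: a post accumulates flags until the friction prompt appears.
post = Post(post_id="p1", text="Example claim")
for uid in (f"user{i}" for i in range(30)):
    post.flag_misleading(uid)
print(share_prompt(post))
```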
In short, countering information pollution requires a broad toolkit, not just content takedowns. Digital infrastructures can be designed with content-neutral, real-time measures that preemptively target manipulation. Early evidence suggests pre-bunking is especially promising in fostering resilience. Still, no universal fix exists. Effectiveness varies by context, and policy responses must carefully weigh long-term impacts before implementation.

Artificial intelligence plays a dual role: it accelerates disinformation risks while also offering new ways to combat them. AI tools can enhance the scalability of educational measures. For instance, recent research found that conversations with ChatGPT effectively helped conspiracy theory believers reject false narratives, with durable and replicable results (Costello et al., 2024).
Sustainable solutions must also extend beyond digital interventions. Media and digital literacy should be mainstreamed into school curricula, targeting disadvantaged youth in particular. Offline strategies—such as community dialogue and awareness campaigns—are crucial in areas where internet access remains limited; today, over half the world still lacks reliable broadband. Strengthening trust in official sources and supporting independent journalism remain critical pillars. In practice, anti-disinformation work should be embedded across thematic fields such as health, climate action, elections, and peacebuilding, not treated as isolated digital projects.
CONCLUSIONS AND POLICY RECOMMENDATIONS
Information pollution continues to undermine democracy, deepen polarisation, erode trust in institutions, and reinforce authoritarian power structures.
To address this, debates and cooperative efforts must move beyond divisive content regulation and embrace a wider set of tools, tailored to context and aimed at long-term resilience. While strategies such as accuracy nudges and pre-bunking hold promise, their limitations in durability and scalability mean they must be carefully designed, tested, and adapted before integration into policy. Artificial intelligence should also be harnessed to expand these interventions.
Cross-sectoral collaboration is equally important. Technology companies and private businesses must be brought into the fold to make large-scale interventions feasible. Positive examples exist—for instance, Google’s prebunking campaign ahead of the European Parliament elections and its financial support for civil society projects to promote media literacy in Central and Eastern Europe (Green, 2022). Yet, such initiatives remain the exception rather than the rule, and business involvement in multilateral frameworks is still insufficient.
The upcoming Summit of the Future (22–23 September 2024) represents a rare chance to establish such a framework. The draft Global Digital Compact (UN, 2024), which will be annexed to the Pact for the Future if agreed upon, outlines principles for corporate accountability, including obligations to respect human rights online, integrate human rights norms into emerging technologies, and mitigate AI-related risks. It also calls on technology companies to co-develop accountability standards (Art. 29b) and to foster an inclusive, safe, and secure digital sphere (Art. 59). Ensuring these provisions survive the negotiations will be vital to creating incentives for companies to act beyond ad hoc moderation practices.
Key recommendations:
- Adopt a broad toolkit. Shift focus away from politicised content moderation toward content-neutral, scalable tactics that work preemptively and in real time.
- Design with context in mind. No tool works everywhere. Promising approaches like pre-bunking must be adapted carefully, with scalability enhanced through AI support.
- Invest in long-term resilience. Governments and development partners should finance literacy and dialogue initiatives—both online and offline—especially targeting vulnerable groups.
- Build a transnational regulatory framework. Businesses must be active participants in multilateral initiatives. Given their control over global information flows, large tech companies must be bound by frameworks that ensure transparency, independent oversight, and accountability. The Global Digital Compact provides a timely opportunity to lay this foundation.
Acknowledgements
The authors are grateful to colleagues at the OECD DAC Network on Governance and the Team Europe Democracy (TED) Initiative for valuable feedback on earlier drafts. We also thank the anonymous reviewers for their constructive comments.
Funding
No funding was received for this work.
Conflict of Interest
The authors declare no competing interests.
Author Contributions
S.K.B. conceived the study and drafted the initial manuscript. A.M.T. conducted case study research and contributed to the analysis. P.R.M. provided methodological oversight and critical revisions. All authors approved the final version of the manuscript.
References
Ahmad, W., Sen, A., Eesley, C., & Brynjolfsson, E. (2024). Companies inadvertently fund online misinformation despite consumer backlash. Nature, 630(8015), 123-131. https://doi.org/10.1038/s41586-024-07404-1
Altay, S., Lyons, B., & Modirrousta-Galian, A. (2023). Exposure to higher rates of false news erodes media trust and fuels overconfidence. OSF Preprint. https://doi.org/10.31234/osf.io/t9r43
Bowles, J., Croke, K., Larreguy, H., Marshall, J., & Liu, S. (2023). Sustaining exposure to fact-checks: misinformation discernment, media consumption, and its political implications (SSRN Scholarly Paper 4582703). https://doi.org/10.2139/ssrn.4582703
Breuer, A. (2024). Information integrity and information pollution: Vulnerabilities and impact on social cohesion and democracy in Mexico (Discussion Paper 2/2024). German Institute of Development and Sustainability (IDOS). https://www.idos-research.de/discussion-paper/article/information-integrity-and-information-pollution-vulnerabilities-and-impact-on-social-cohesion-and-democracy-in-mexico/
Butler, L. H., Prike, T., & Ecker, U. K. H. (2024). Nudge-based misinformation interventions are effective in information environments with low misinformation prevalence. Scientific Reports, 14(1), 1-12. https://doi.org/10.1038/s41598-024-62286-7
Carey, J. M., Guess, A. M., Loewen, P. J., Merkley, E., Nyhan, B., Phillips, J. B., & Reifler, J. (2022). The ephemeral effects of fact-checks on COVID-19 misperceptions in the United States, Great Britain and Canada. Nature Human Behaviour, 6(2), 236-243. https://doi.org/10.1038/s41562-021-01278-3
Costello, T. H., Pennycook, G., & Rand, D. (2024). Durably reducing conspiracy beliefs through dialogues with AI. OSF. https://doi.org/10.31234/osf.io/xcwdn
EFCSN (European Fact-Checking Standards Network). (2024). Code of standards. European Fact-Checking Standards Network. https://efcsn.com/code-of-standards/
Garimella, K., & Chauchard, S. (2024). How prevalent is AI misinformation? What our studies in India show so far. Nature, 630(8015), 32-34. https://doi.org/10.1038/d41586-024-01588-2
Green, Y. (2022, 12 December). Disinformation as a weapon of war: The case for prebunking. Friends of Europe. https://www.friendsofeurope.org/insights/disinformation-as-a-weapon-of-war-the-case-for-prebunking/
Hoes, E., Aitken, B., Zhang, J., Gackowski, T., & Wojcieszak, M. (2024). Prominent misinformation interventions reduce misperceptions but increase scepticism. Nature Human Behaviour, 1-9. https://doi.org/10.1038/s41562-024-01884-x
Iyengar, A., Gupta, P., & Priya, N. (2023). Inoculation against conspiracy theories: A consumer side approach to India’s fake news problem. Applied Cognitive Psychology, 37(2), 290-303. https://doi.org/10.1002/acp.3995
Kozyreva, A., Herzog, S. M., Lewandowsky, S., Hertwig, R., Lorenz-Spreen, P., Leiser, M., & Reifler, J. (2023). Resolving content moderation dilemmas between free speech and harmful misinformation. Proceedings of the National Academy of Sciences, 120(7), 1-12. https://doi.org/10.1073/pnas.2210666120
Kozyreva, A., Lorenz-Spreen, P., Herzog, S. M., Ecker, U. K. H., Lewandowsky, S., Hertwig, R., Ali, A., Bak-Coleman, J., ... Wineburg, S. (2024). Toolbox of individual-level interventions against online misinformation. Nature Human Behaviour, 1-9. https://doi.org/10.1038/s41562-024-01881-0
Kreps, S., McCain, R. M., & Brundage, M. (2022). All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. Journal of Experimental Political Science, 9(1), 104-117. https://doi.org/10.1017/XPS.2020.37
Maertens, R., Roozenbeek, J., Basol, M., & van der Linden, S. (2021). Long-term effectiveness of inoculation against misinformation: Three longitudinal experiments. Journal of Experimental Psychology: Applied, 27(1), 1-16. https://doi.org/10.1037/xap0000315
McCabe, S. D., Ferrari, D., Green, J., Lazer, D. M. J., & Esterling, K. M. (2024). Post-January 6th deplatforming reduced the reach of misinformation on Twitter. Nature, 630(8015), 132-140. https://doi.org/10.1038/s41586-024-07524-8
McPhedran, R., Ratajczak, M., Mawby, M., King, E., Yang, Y., & Gold, N. (2023). Psychological inoculation protects against the social media infodemic. Scientific Reports, 13(1), 5780. https://doi.org/10.1038/s41598-023-32962-1
Mechkova, V., Pemstein, D., Seim, B., & Wilson, S. L. (2024). Measuring online political activity: Introducing the digital society project dataset. Journal of Information Technology & Politics, 1-17. https://doi.org/10.1080/19331681.2024.2350495
OECD (Organisation for Economic Co-operation and Development). (2024). Development co-operation principles for relevant and effective support to media and the information environment. https://www.oecd-ilibrary.org/development/development-co-operation-principles-for-relevant-and-effective-support-to-media-and-the-information-environment_76d82856-en
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2021). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590-595. https://doi.org/10.1038/s41586-021-03344-2
Pradel, F., Zilinsky, J., Kosmidis, S., & Theocharis, Y. (2024). Toxic speech and limited demand for content moderation on social media. American Political Science Review, 1-18. https://doi.org/10.1017/S000305542300134X
Pretus, C., Javeed, A. M., Hughes, D., Hackenburg, K., Tsakiris, M., Vilarroya, O., & Van Bavel, J. J. (2024). The misleading count: An identity-based intervention to counter partisan misinformation sharing. Philosophical Transactions of the Royal Society B, 379(1897), 1-9. https://doi.org/10.1098/rstb.2023.0040
Renault, T., Amariles, D. R., & Troussel, A. (2024). Collaboratively adding context to social media posts reduces the sharing of false news (arXiv:2404.02803). arXiv. https://doi.org/10.48550/arXiv.2404.02803
UN (United Nations). (2023). Information integrity on digital platforms (Our Common Agenda Policy Brief 8, pp. 1-28). https://www.un.org/sites/un2.un.org/files/our-common-agenda-policy-brief-information-integrity-en.pdf
UN (United Nations). (2024, 10 September). Global Digital Compact Rev 3 – Draft under silence procedure. https://www.un.org/techenvoy/sites/www.un.org.techenvoy/files/general/GDC_Rev_3_silence_procedure.pdf
UNDP (United Nations Development Programme). (2022). Information integrity: Forging a pathway to truth, resilience and trust. https://www.undp.org/publications/information-integrity-forging-pathway-truth-resilience-and-trust
UN Women. (2022). Accelerating efforts to tackle online and technology-facilitated violence against women and girls. https://shknowledgehub.unwomen.org/en/resources/accelerating-efforts-tackle-online-and-technology-facilitated-violence-against-women-and
Unver, A. (2020). Fact-checkers and fact-checking in Turkey (Cyber Governance and Digital Democracy). EDAM. https://edam.org.tr/en/cyber-governance-digital-democracy/fact-checkers-and-fact-checking-in-turkey
V-Dem Institute. (2024). Democracy Report 2024: Democracy winning and losing at the ballot. University of Gothenburg. https://v-dem.net/documents/43/v-dem_dr2024_lowres.pdf
World Economic Forum. (2024). Global risks report 2024. https://www.weforum.org/publications/global-risks-report-2024