Summary
The AI-generated image of U.S. President Donald Trump posing as the Pope emerged as a notable example of the intersection between generative artificial intelligence (AI) and political discourse. Created with generative AI image-synthesis tools, the image depicts Trump dressed in traditional papal vestments. Trump shared it on his social media platform, Truth Social, in May 2025, shortly before the papal conclave convened to elect Pope Francis’s successor. The image gained widespread attention due to its timing, realistic appearance, and provocative symbolism amid the mourning period following Pope Francis’s death.
This incident highlights the increasing use of AI-generated synthetic media in political contexts, where such images can serve as tools of propaganda, satire, or misinformation. The viral spread of the Trump-as-the-Pope image underscores concerns about the erosion of trust in visual media and the challenges faced by the public and institutions in distinguishing genuine content from AI fabrications. The image sparked controversy, particularly among Catholic groups and Italian observers, who criticized it as disrespectful and inappropriate during a solemn religious transition. At the same time, political actors and commentators debated its implications for freedom of expression and political communication.
The case exemplifies broader ethical, legal, and societal issues surrounding AI-generated deepfakes, including the potential for manipulation of public opinion and the weaponization of synthetic media during politically sensitive moments. Various U.S. states have enacted legislation targeting deceptive AI content, and discussions at the federal level continue regarding regulatory frameworks to address the spread of AI-driven disinformation. As generative AI technologies become more accessible and sophisticated, the Trump-as-Pope episode serves as a cautionary illustration of both the creative possibilities and the disruptive risks posed by synthetic imagery in modern politics.
Background
The rise of generative artificial intelligence (AI) has transformed the creation of digital images, enabling the production of highly realistic visuals through sophisticated algorithms. Generative AI, a subset of artificial intelligence focused on content creation, is trained on vast datasets comprising millions of images paired with descriptive text captions. These models learn to generate new images by progressively refining random visual noise into coherent pictures that align with given textual prompts. One prominent technique in this domain is diffusion modeling, which operates by starting with pure noise and gradually “denoising” the image over multiple iterations to reveal the desired content, akin to perceiving shapes in clouds and clarifying them over time.
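The iterative denoising idea described above can be illustrated with a deliberately minimal sketch. This is a hypothetical toy, not a real diffusion model: here the "target" image is known in advance, whereas a trained model predicts the noise to remove at each step using a neural network conditioned on the text prompt.

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy illustration of iterative denoising: start from pure noise
    and remove a fraction of the remaining 'noise' each iteration,
    gradually revealing the target image."""
    rng = random.Random(seed)
    # Start from pure Gaussian noise (the model's "blank canvas").
    x = [rng.gauss(0.0, 1.0) for _ in target]
    for t in range(steps):
        remaining = steps - t
        # Step each pixel a little closer to the final image.
        x = [xi + (ti - xi) / remaining for xi, ti in zip(x, target)]
    return x

# Usage: "denoise" toward a tiny 8x8 gradient image, flattened to a list.
image = [i / 63 for i in range(64)]
restored = toy_denoise(image)
```

After the final iteration the noise has been fully resolved into the target; real diffusion models perform the same kind of progressive refinement, but learn the per-step denoising direction from training data rather than being told the answer.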
Earlier approaches to AI image generation primarily relied on Generative Adversarial Networks (GANs), introduced by Ian Goodfellow in 2014. GANs involve two neural networks — a generator and a discriminator — that compete to produce increasingly realistic outputs. While GANs laid the groundwork for synthetic image creation, they had limitations in diversity and quality compared to newer diffusion-based methods. These advancements have enabled AI to create images that are often indistinguishable from photographs, with applications ranging from art to misinformation.
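The generator–discriminator feedback loop at the heart of a GAN can be caricatured with two single numbers in place of neural networks. This is a loose numeric analogy, not a real GAN: the "discriminator" here merely tracks what real data looks like, and the "generator" adjusts its output to match.

```python
import random

REAL_MEAN = 4.0  # the "real data" the generator must learn to imitate

def toy_adversarial_loop(steps=2000, lr=0.05, seed=0):
    rng = random.Random(seed)
    g = 0.0  # generator "parameter": the value it currently produces
    d = 0.0  # discriminator "parameter": its running estimate of real data
    for _ in range(steps):
        real_sample = REAL_MEAN + rng.gauss(0.0, 0.1)
        d += lr * (real_sample - d)  # discriminator refines its sense of "real"
        g += lr * (d - g)            # generator nudges its output to fool it
    return g, d

g, d = toy_adversarial_loop()
```

In an actual GAN, both players are deep networks trained on opposing losses, and the competition drives the generator's outputs toward the statistics of real images; the sketch above only captures the mutual-adjustment dynamic.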
The political sphere has witnessed a growing use of AI-generated images, especially amid heightened tensions during election cycles. Synthetic images have been disseminated widely on social media following politically charged events, frequently used as tools of propaganda and disinformation. For example, AI-generated visuals have included doctored depictions of political figures and fabricated scenarios aimed at influencing public opinion. This proliferation of synthetic media raises concerns about the erosion of trust in factual information and the challenges of distinguishing reality from AI-generated fabrications.
One notable viral example involved an AI-generated image of Pope Francis wearing a Balenciaga puffer jacket, which circulated widely and drew significant attention. While some found the image humorous or intriguing, Pope Francis himself expressed concern about the implications of deepfakes and AI for misinformation. Following the Pope’s death in April 2025, the dissemination of AI-generated images related to him, including those that intersected with political figures such as former President Donald Trump, underscored the complex and often controversial role of AI in shaping public narratives around solemn events.
Description of the Image
The image in question depicts President Donald Trump dressed in traditional papal clothing, presenting himself as the Pope. The AI-generated image was shared by Trump on his social media platform, Truth Social, on May 2, 2025, just days before the papal conclave to elect a new leader of the Catholic Church was set to begin. The visual portrays Trump in the solemn, iconic attire associated with the papacy, sending a provocative symbolic message during the mourning period following Pope Francis’s death.
The image was produced with AI image-generation technology, which first encodes the textual description into a numerical representation (an embedding). That embedding then conditions the image generator as it constructs the picture, steering style and composition so that the written concept is translated into a visual output. This technique can produce highly detailed, realistic images that mimic human artistic expression, although the output is generated by AI models following engineered instructions rather than direct human craftsmanship.
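The "text into numbers" step can be sketched in a drastically simplified, hypothetical form. Production systems use learned transformer encoders (such as CLIP) whose vectors place related concepts near each other; hashing words, as below, only illustrates the mechanics of turning a prompt into a fixed-length numeric vector.

```python
import hashlib

def toy_prompt_embedding(prompt, dim=8):
    """Toy stand-in for a text encoder: map each word to a deterministic
    pseudo-random vector via hashing, then average over the prompt.
    Real encoders learn these vectors from data."""
    vec = [0.0] * dim
    words = prompt.lower().split()
    for w in words:
        digest = hashlib.sha256(w.encode("utf-8")).digest()
        for i in range(dim):
            # Scale each hash byte into roughly [-0.5, 0.5].
            vec[i] += digest[i] / 255.0 - 0.5
    n = max(len(words), 1)
    return [v / n for v in vec]

embedding = toy_prompt_embedding("portrait in papal vestments")
```

In a real text-to-image pipeline, a vector like this would condition each generation step so that the emerging image matches the prompt.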
Trump’s AI-generated portrayal as the Pope sparked significant controversy and criticism, particularly from Catholic groups and Italian observers, who viewed the image as disrespectful given the timing and context surrounding the papal transition.
Distribution and Media Circulation
AI-generated images, such as the one depicting U.S. President Donald Trump in papal attire, have circulated widely across social media platforms, particularly in politically charged contexts. This specific image first appeared on Trump’s Truth Social account shortly before the Catholic cardinals convened for a conclave to elect the next pope. Its dissemination exemplifies a broader trend in which synthetic imagery is used to shape public perception and spread partisan narratives, with factual accuracy often secondary to the intended political message.
The rise of Industry 4.0 technologies, combined with the widespread adoption of social media, has created fertile ground for the rapid spread of such AI-generated multimedia content. Researchers and industry leaders have noted the challenges in detecting and preventing the distribution of these fake images, which exploit sophisticated generative models like Stable Diffusion and Latent Diffusion Models (LDM) to create convincing visual fabrications. These images are strategically timed and crafted to appear around significant political events to maximize impact and confusion among audiences.
The circulation of AI-generated political disinformation is not confined to domestic platforms but is increasingly fueled by international actors seeking to amplify misinformation ahead of critical moments such as elections. Platforms like Twitter and Facebook have taken steps to remove state-backed accounts that contribute to the spread of such content, yet the proliferation continues, with fake videos and images going viral rapidly. The viral nature of these images underscores the urgent need for awareness and technological solutions to mitigate the disruptive effects of AI-generated fakes on public discourse.
Public and Political Reaction
The AI-generated image of Donald Trump posing as the Pope sparked significant public and political reactions, highlighting growing concerns over synthetic media in political discourse. Critics, including former Italian Prime Minister Matteo Renzi, expressed offense, interpreting the image as an insult to religious institutions and believers and as mockery emanating from the leader of the political right. The White House dismissed these accusations, emphasizing that President Trump had traveled to Italy to pay respects to Pope Francis and attend his funeral, and pointing to his long-standing support for Catholics and religious freedom.
This incident fits into a broader context where AI-generated synthetic images have become prevalent during election cycles, often circulating on social media following politically charged events. Experts warn that such images can be used as a form of political propaganda, where the factual accuracy becomes secondary to the partisan narratives they promote. The proliferation of these images risks misleading voters, as some fail to recognize the tell-tale signs of AI manipulation, such as anatomical anomalies like extra limbs.
Political scientists also warn about the dangers of AI-enabled impersonation, which allows for highly convincing yet deceptive portrayals of public figures, potentially spreading falsehoods directly to their supporters. These developments have raised alarms about the future of political communication and the challenges in distinguishing genuine content from fabricated media in an increasingly digital and AI-enhanced political landscape.
Ethical, Legal, and Societal Considerations
The use of AI-generated images, such as the fabricated image of Trump dressed as the Pope, raises significant ethical, legal, and societal issues. Ethically, the deployment of such technology to mislead the public or manipulate political opinions challenges principles of honesty and transparency. Experts emphasize the need for clear labeling and disclosure of AI-generated political content to ensure viewers are aware of its artificial nature, which could mitigate the spread of misinformation and promote informed decision-making.
Legally, various state legislatures in the United States have begun enacting laws to regulate the creation and dissemination of deceptive deepfake content, particularly when intended to harm electoral processes or defame candidates. For example, Texas SB 751 criminalizes producing deceptive videos with the intent to damage a candidate’s reputation or influence election outcomes. Additionally, several states have passed legislation specifically banning the creation of explicit deepfakes involving minors. At the federal level, regulatory efforts are under consideration, though comprehensive standards remain lacking, highlighting a gap in governance for AI-generated disinformation.
From a societal perspective, the proliferation of AI-generated images on social media has become a potent tool for spreading partisan narratives rather than factual information, often intensifying political polarization and mistrust. The ease with which these synthetic images can convincingly impersonate public figures poses new dangers by enabling falsehoods to circulate within supporters’ communities, complicating efforts to discern truth from fabrication. Commentators have noted the absence of robust corporate standards or government oversight governing the use of AI in creating such misleading content, suggesting that current technological advances outpace existing regulatory frameworks and societal readiness to address these challenges.
Similar Cases and Contextual Examples
The use of AI-generated images for political and social influence has become increasingly prevalent, especially in recent election cycles. These synthetic images often serve to spread partisan narratives rather than factual information, capitalizing on the speed and reach of social media platforms. For example, AI-generated images surfaced frequently after politically charged events, allowing political campaigns to engage with or amplify the content, thereby appearing involved in the ongoing conversation or even the humor surrounding the images.
One notable instance involved fake images of prominent figures created using AI programs such as Midjourney, DALL·E 2, and Stable Diffusion. These tools generate images based on textual descriptions, which has resulted in misleading portrayals of well-known individuals. Among these were viral deepfakes showing former President Donald Trump being arrested and the Pope dressed in unusual attire, such as a white puffer coat and a bejeweled crucifix, which fooled many online viewers. In a particularly high-profile example, President Trump himself posted an AI-generated image depicting him as the pope on the social media platform Truth Social in May 2025.
Beyond high-profile political figures, there is growing concern about the misuse of AI-generated deepfake images to distort political processes, manipulate public narratives around conflicts like the war in Ukraine, or target everyday individuals. Such images can be weaponized to fabricate incriminating scenarios, enable extortion or humiliation, and cause personal harm to those lacking the resources to combat their spread. This has led many state legislatures to enact bans on the use of deepfakes intended to mislead voters, and the federal government is actively considering further regulations. Experts emphasize the need for transparency measures, such as mandatory labeling of AI-generated political content, as well as broader consideration of content designed to harass candidates or discredit elections.
The proliferation of AI-generated images highlights the rapid technological advances in deepfake creation and the challenges they pose in maintaining truthful political discourse and protecting individuals from digital manipulation.
Impact and Consequences
The use of AI-generated images, such as the one depicting Trump posing as the Pope, highlights the growing influence and potential dangers of deepfake technology in political contexts. These examples illustrate how far AI has advanced, enabling the creation of highly convincing yet fabricated content that can be used for satire, misinformation, or propaganda purposes.
One significant consequence is the potential chilling effect on political participation, especially among vulnerable groups. For instance, in patriarchal societies like India, AI-generated images have been used to produce objectionable and sexually explicit content targeting women, which may discourage them from expressing their political views openly. This kind of automated manipulation can rapidly influence public opinion and intimidate individuals, thereby altering the democratic discourse.
Moreover, AI technology allows for the impersonation of politicians, using their likeness or voice to disseminate false information directly to their supporters, exacerbating the spread of disinformation during election campaigns. Political campaigns might unintentionally amplify such content by sharing or endorsing it, blurring the lines between genuine and manipulated media and normalizing AI-generated political messaging as a form of propaganda or fandom-like support.
In response, many state legislatures have enacted prohibitions on the use of deepfakes intended to mislead voters, and federal authorities are considering broader regulations to address not only deceptive content but also harassment and false discrediting of candidates or elections. Experts emphasize the necessity of transparency measures, including clear labeling of all AI-generated political material, to help mitigate the risks posed by these technologies.
