Public perception is an important frontier in any conflict, with each side seeking to garner both domestic and international support for their military campaign. The proliferation and growing sophistication of artificial intelligence (AI) means that such technologies are playing an ever-growing role in the perception domain, as has been demonstrated throughout the ongoing conflict between Israel and Hamas.

One prominent example is the use of Generative AI (GAI) to create photorealistic imagery without the need for specialized software or training. Alongside the opportunities this brings, there are clear risks in the perception domain: actors on all sides can easily produce content that supports their narrative, regardless of whether the events depicted reflect reality, further fueling and accelerating the spread of misinformation surrounding the war.
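To appreciate how low the barrier to entry has become, consider that photorealistic output no longer requires even a paid service. The minimal sketch below uses the open-source diffusers library; the model checkpoint and prompt are illustrative assumptions, and in practice most such imagery is produced through consumer web tools rather than code.

```python
# Minimal text-to-image sketch with the open-source diffusers library.
# The checkpoint and prompt are illustrative; consumer web tools offer
# the same capability with no code at all.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",  # a publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("photorealistic street scene after a rainstorm, dusk").images[0]
image.save("generated.png")
```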

For those looking to win the perception war, and for those of us interested in knowing the truth, cutting through AI-generated disinformation is a complex and challenging task. The sheer volume of content shared online makes it difficult to identify and verify all of it, and AI-generated content can be very hard to detect. Malicious actors are also constantly evolving their techniques, making it difficult for traditional fact-checking methods to keep up.

One such use of AI has been the production of emotive imagery intended to build sympathy for a given side in the conflict. For example, pro-Palestinian influencers have often shared images of civilians, particularly children, against the backdrop of buildings destroyed by the IDF. Such imagery is likely intended to appeal to the target audience’s empathy and strengthen support for the Palestinian cause, while also perpetuating the allegation that the IDF indiscriminately bombs civilians.

The image below is a typical example. Depicting four injured but smiling young children holding a Palestinian flag while standing amongst the rubble of civilian buildings, it aims to show Palestinian resilience in the face of a brutal reality. On closer inspection, however, the image shows several signs that it was created by AI.

The image contains several errors that are common in AI-generated imagery. In Figure 1, the girl’s left hand appears unnatural, with distorted and misplaced fingers, and seems out of scale with her right hand. This is typical of current AI models which, despite their strengths, still fall short on such details.

Similarly, Figures 2 and 3 show limbs that may look natural at first glance but, on closer inspection, do not belong in the image. Figure 3, for example, shows two hands appearing between two of the children, clasped with fingers intertwined. It is not clear whose hands these are: they sit far lower than would be natural for any of the children in the photo and appear to emerge from nowhere. Likewise, Figure 2 shows what is clearly meant to be the hand of the boy on the left of the group, holding the Palestinian flag. However, the arm is clearly out of proportion with the boy’s body, and his legs are not visible, despite him standing at the edge of the group.

Below we can see further examples:

The image above claims to show a Palestinian child holding his cat among the rubble in Gaza. A closer look, however, reveals that the cat has five legs, the child is missing a finger on one hand, and his other arm shows the distortion typical of AI imagery.

Likewise, we can see that the child in the following image has an extra finger.
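Such visual checks can be complemented with simple programmatic heuristics. The sketch below is a minimal illustration in Python, assuming the Pillow library, with hypothetical file names; it was not part of the original analysis. It checks for the absence of camera EXIF metadata, which genuine photographs usually carry and AI-generated images typically lack, and performs basic error level analysis, re-saving the image as a JPEG and mapping where recompression error is inconsistent.

```python
import io

from PIL import Image, ImageChops, ImageEnhance
from PIL.ExifTags import TAGS


def exif_summary(path):
    """Return the image's EXIF tags. Genuine camera photos usually carry
    camera model, exposure, and similar fields; AI-generated images
    typically carry none (though absence alone proves nothing, since
    metadata is easily stripped when images are shared)."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG at a known quality and return an
    amplified difference map. Regions that were pasted in or regenerated
    often recompress with visibly different error levels from the rest
    of the image."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    diff = ImageChops.difference(original, resaved)
    # Scale brightness so the strongest differences are clearly visible.
    max_diff = max(high for _, high in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder for the image under scrutiny.
    print(exif_summary("suspect.jpg"))
    error_level_analysis("suspect.jpg").save("ela_map.png")
```

Neither test is conclusive on its own; these heuristics flag candidates for human review rather than deliver verdicts, and the anatomical inconsistencies described above remain among the most reliable tells.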

The ubiquity of false information has a secondary, arguably more damaging effect. As more and more misleading content is produced, shared, and ultimately disputed, social media audiences’ trust is eroded to the point where they can no longer tell truth from fiction, or honest actors from disingenuous ones. This undermines trust across the board, regardless of the credibility of the information being shared or the intentions of those sharing it. In turn, nefarious actors increasingly dismiss credible information as AI-generated, while audiences grow likely to mistrust anything that counters their existing biases. In this context, while social media may once have allowed opposing narratives to spread and be debated, and perceptions to shift, the AI-enabled acceleration of disinformation threatens to make this increasingly difficult, with opposing sides ever less likely to move from their entrenched beliefs.

One such example is the denial of a photo depicting the body of a baby killed on October 7th, shared by a prominent pro-Israel influencer. A community note was added to the X post, claiming the influencer had shared an AI-generated image. This claim was widely repeated by pro-Hamas social media influencers, users, and official media channels. Israeli government officials and reputable media outlets confirmed that the image was real, and the community note was eventually removed. However, the claim that an AI-generated image had been used to support Israeli accusations against Hamas persisted, feeding a narrative that Israel invented or exaggerated Hamas’ crimes against civilians on October 7th. One X user even uploaded the image to a generative AI program and replaced the baby in the photo with a puppy, leading others to claim that this was the original photo from which the “Israeli propaganda fake” was created.
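The manipulation described here, swapping one region of a photograph for generated content, is known as inpainting, and it is now trivially accessible. The sketch below is a minimal illustration using the open-source diffusers library; the checkpoint, file names, and prompt are assumptions, not the actual tool the X user employed. A mask image marks the region to be replaced, and the model fills it to match the prompt.

```python
# Minimal inpainting sketch with the open-source diffusers library.
# The checkpoint, file names, and prompt are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # a public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

original = Image.open("original.jpg").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region the model will regenerate.
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(prompt="a sleeping puppy", image=original, mask_image=mask).images[0]
result.save("altered.jpg")
```

That a convincing “original” can be fabricated in seconds is precisely what makes the resulting claims so difficult to rebut.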

This example demonstrates the erosion of trust and the growing difficulty of operating in the perception sphere, accelerated by AI-generated imagery that, in a context of already abundant disinformation, plays into and reinforces audiences’ biases. As AI grows rapidly in sophistication and the believability of its output, awareness and the ability to cut through the ever-growing volume of disinformation online will be crucial in the perception war.