In a significant move to combat misinformation, social media platform X has unveiled new regulations targeting creators who share artificial intelligence-generated videos depicting armed conflicts without proper disclosure. The platform has announced that it will suspend such creators from its lucrative revenue-sharing programme if they do not clearly label AI-generated content, particularly videos related to war and military action. This initiative is part of X’s broader effort to enhance transparency and safeguard users from misleading visual media.
The company said it will implement more rigorous content labelling requirements so that users can distinguish genuine footage from AI-created videos. This is especially important for war-related content, where false or manipulated imagery can inflame tensions or distort public perception of ongoing conflicts. By enforcing these rules, X aims to curb the circulation of deceptive media on such sensitive topics.
Under the updated policy, any creator who shares AI-generated videos showing scenes of armed conflict without an explicit disclosure tag will face penalties. These sanctions include temporary suspension from the platform’s monetisation and revenue-sharing programmes, which are vital income sources for many content producers. To support enforcement, X is upgrading its automated detection tools to identify and flag unlabelled AI-generated content more efficiently, thereby promoting responsible content creation and consumption.
The policy change comes amid rising global concern about the misuse of deepfakes and generative AI to manipulate public opinion, particularly in regions experiencing active conflicts such as the Middle East. Other major technology platforms have issued similar warnings and guidelines, emphasising that clear labelling is essential to maintaining user trust and the integrity of digital information ecosystems.
The rules apply specifically to AI-generated videos portraying war scenes, military operations, or armed confrontations. Creators must include a clear disclosure whenever their content is fully or partially produced using artificial intelligence. Non-compliance will result not only in removal from monetisation programmes but potentially in further penalties on the platform, underscoring the seriousness with which X is treating the issue.
The announcement has sparked a range of reactions among users and industry experts. Advocates argue that the policy is a necessary step to prevent the spread of misleading visuals that could exacerbate conflicts or misinform the public about real-world events. Critics, however, point to the practical challenges of enforcement, warning that the difficulty of reliably distinguishing AI-generated content from authentic uploads may lead to over-censorship, or to confusion among creators trying to navigate the new rules.
X’s parent company has not provided further comment since the announcement. Nonetheless, the move highlights the growing responsibility social media platforms face in addressing the ethical implications of AI technology and its impact on information integrity.
The significance of this development lies in the increasing sophistication of AI tools capable of producing highly realistic video that can easily be mistaken for genuine footage. In conflict zones, where verifying the authenticity of images is already difficult, unlabelled AI-generated videos pose a serious risk of fuelling propaganda or distorting public understanding of ongoing crises.
By tying disclosure to eligibility for monetisation, X is using financial incentives to encourage responsible posting practices. Because many content producers rely on the revenue-sharing programme as a primary source of income, suspension from it carries a substantial economic cost, giving creators a strong motivation to comply.
This policy update aligns with a broader global conversation about the role of AI in media and information dissemination. Governments and regulators worldwide have voiced concerns over the potential for AI to be exploited in creating fake news, deepfakes, and manipulated videos that could influence elections, public sentiment, and international relations. X’s decision reflects an industry-wide trend toward demanding transparency in AI-generated content, a stance also supported by competing platforms and emerging regulatory frameworks in Europe and North America.
