YouTube has announced mandatory disclosure requirements for creators uploading AI-altered or synthetic content.
This new policy aims to maintain transparency, especially for videos concerning sensitive topics such as elections and public health crises.
Creators who fail to comply risk stringent action from YouTube, including removal of content, suspension from the YouTube Partner Programme, or other penalties deemed necessary to uphold the platform's integrity and user trust.
YouTube is set to implement two new features to inform viewers of AI-manipulated content.
A label will be added to the video's description panel, and for highly sensitive content, a more prominent notification will appear on the video player itself.
Users will soon have the option to request the removal of AI-generated content that imitates identifiable individuals.
Before acting on such requests, YouTube will weigh factors such as context, whether the content is satire, whether it involves a public figure, and how clearly the person requesting removal can be identified in the video.
This policy revision follows growing concerns over misinformation through deepfakes, highlighted by the incident involving a viral video of Indian actress Rashmika Mandanna.
The video, later identified as a deepfake, prompted police to investigate the origins, underscoring the potential for misuse of synthetic media.