Yesterday — 11 March 2026

YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists

11 March 2026 at 12:00
YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to spread misinformation and manipulate people's perception of reality, leveraging the deepfaked personas of notable figures -- like politicians or other government officials -- to say and do things in AI-generated videos that they never did in real life. With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness. To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal.
YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.

Read more of this story at Slashdot.

Before yesterday

Court Rules That Ripping YouTube Clips Can Violate the DMCA

5 February 2026 at 18:30
A federal court in California has ruled that YouTube creators who use stream-ripping tools to download clips for reaction and commentary videos may face liability under the DMCA's anti-circumvention provisions -- a decision that could reshape how one of the platform's most popular content genres operates. U.S. Magistrate Judge Virginia K. DeMarchi of the Northern District of California denied a motion to dismiss in Cordova v. Huneault, a creator-versus-creator dispute, finding that YouTube's "rolling cipher" technology qualifies as an access control measure under section 1201(a) even though the underlying videos are freely viewable by the public. The distinction matters because it separates the act of watching a video from the act of downloading it. The defense had argued that no ripping tools were actually used and that screen recording could account for the copied footage. Judge DeMarchi allowed the claim to proceed to discovery regardless, noting that the plaintiff had adequately pled the circumvention allegation. The ruling opens a legal avenue beyond standard copyright infringement for creators who want to go after rivals. Reaction channels have long leaned on fair use as a blanket defense, but plaintiff's attorney Randall S. Newman told TorrentFreak that circumventing copy protections under section 1201 is a separate violation unaffected by any fair use finding.

Read more of this story at Slashdot.

YouTube Kills Background Playback on Third-Party Mobile Browsers

3 February 2026 at 12:00
YouTube has confirmed that it is blocking background playback -- the ability to keep a video's audio running after minimizing the browser or locking the screen -- for non-Premium users across third-party mobile browsers including Samsung Internet, Brave, Vivaldi and Microsoft Edge. Users began reporting the issue last week, noting that audio would cut out the moment they left the browser, sometimes after a brief "MediaOngoingActivity" notification flashed before media controls disappeared. A Google spokesperson told Android Authority that the platform "updated the experience to ensure consistency," calling background play a Premium-exclusive feature.

Read more of this story at Slashdot.
