YouTube tests AI that turns someone else’s Short into a brand new video


YouTube this week began testing two new AI-powered capabilities inside its Shorts Remix menu, allowing a small group of creators to generate entirely new videos from existing content posted by others on the platform. The announcement, dated February 24, 2026, was posted by JJ from TeamYouTube in the platform’s community forum for feature experiments and represents one of the more significant shifts in how the platform handles derivative content creation.

The test builds on a long-running expansion of YouTube’s remix toolset – but this time, the underlying mechanics are generative AI rather than simple editing. What creators are being given is not just the ability to cut or collage; it is the ability to produce new video sequences using someone else’s Short as a starting point.

According to the announcement, two distinct capabilities are being rolled out to the test group. The first is called “Add object,” which allows creators to insert items into a scene from the original Short using either suggested or custom text prompts. Object insertion is limited to scenes of up to 8 seconds from the source video.

The second capability is called “Reimagine.” This option goes further. According to the YouTube community post, it allows a creator to “transform a single frame from the original Short into an entirely new video via suggested or custom prompts, with the option to upload up to two reference photos.” The ability to introduce reference photos into the generation process gives creators a degree of visual control – for example, matching a specific aesthetic or style – that goes beyond what a text prompt alone would provide.

Both tools remain accessible only to creators with English-language settings. The geographic scope excludes the European Union and the United Kingdom, consistent with the restrictions applied to several other AI-powered YouTube features, including the earlier Extend with AI tool launched in September 2025.

Attribution and opt-out mechanics

A key operational detail: every Short produced using Add object or Reimagine will include a link back to the original creator’s video from the Shorts player. This attribution logic mirrors how existing remix features – Cut, Green Screen, Collab, and Sound – already function on the platform. Remixed video is credited in the Shorts player with a direct link to the source video, and remixed audio is credited on the Sound Library page.

Creators who do not wish their Shorts to be used in these AI-powered recreations can opt out. However, opting out of the new AI remix capabilities also removes eligibility for traditional remix options. According to YouTube’s Help Center documentation, creators can manage this through YouTube Studio: from the left menu, click Content, select a video, scroll to “Shorts remixing,” and choose whether to allow remixing before saving. Partners with access to YouTube Studio Content Manager can manage opt-outs for both audio and video remixing in bulk.

This design – linking opt-out from AI remixing to opt-out from all video remixing – has implications for creators who might accept traditional cut-and-paste remixes but object to generative recreation. The two are treated as a single permission category.

Where this fits in YouTube’s AI feature arc

This test does not emerge from nowhere. YouTube has been systematically layering AI generation tools into its Shorts creation workflow since at least late 2024, with the pace of announcements accelerating through 2025. The platform extended Shorts to three minutes in October 2024, and shortly after began pushing AI creation features into that expanded canvas.

The Extend with AI feature, announced September 26, 2025, introduced the first AI-powered remix tool in Shorts, allowing creators to select a clip up to 5 seconds long and generate a new segment continuing from it. The generated segments typically ran 8 seconds or less and could be trimmed within the Shorts creation tools. That feature was English-only and excluded the EU and UK – the same restrictions applied to the current test.

In September 2025, at the Made on YouTube event, the platform introduced Veo 3 Fast – a custom version of Google DeepMind’s video generation model operating at 480p resolution – directly into the Shorts creation workflow. That tool generates video with synchronized audio from text prompts, and was described by Dina Berrada, Director of Product for Generative AI Creation, as offering “unlimited free video content with audio capabilities.” It was the first time YouTube allowed creators to generate video with sound from text prompts directly inside the platform.

Then in November 2025, YouTube launched Edit with AI, a tool that takes raw camera roll footage and converts it into an edited Short automatically – available initially in 15 markets including the United States, Canada, Australia, and several Asia-Pacific countries.

The February 24 test for Add object and Reimagine is therefore the latest in a consistent sequence. Each feature has followed a similar pattern: limited English-only release, EU and UK excluded, a small creator group, and an expansion announcement deferred to a later date.

Earlier AI features operated on the creator’s own content, or on content they were generating from scratch. Dream Track generates audio. Dream Screen generates video backgrounds. Extend with AI generates continuations. Edit with AI assembles footage the creator already shot.

Reimagine is different in one important respect: it uses a single frame from another creator’s Short as the seed for a generative sequence. The output is a new video, not a copy, not a clip – it is something generated by an AI model that was initialized from someone else’s work. The original Short serves as a visual reference point, not as footage included in the final product.

Add object is similarly distinct: rather than the creator inserting their own visual elements, AI places objects into a scene from another creator’s content. The scene changes, but the original video’s visual context remains the environment in which that change occurs.

These distinctions matter because they push against the conventional understanding of derivative content. Traditional remixes – cutting a 5-second clip, sampling audio, using a green screen – involve clearly identifiable fragments of the original work. AI-generated output from a single frame is a different category of derivation, and the attribution link back to the original Short may not resolve all the questions creators have about how their work is being used.

Existing remix tools remain unchanged

The platform’s non-AI remix toolset continues to operate as before. According to YouTube’s Help Center documentation, creators can remix audio from long-form videos by tapping Remix and selecting “Use this sound,” or sample up to 90 seconds of audio from a music video – though partner agreements with YouTube may limit some music videos to 30 seconds. The Cut feature allows a 1-to-5 second video segment from a Short or long-form video. Green Screen uses another video as the background, supporting visual-only or audio-and-video sampling. Collab creates a side-by-side layout with another Short or long-form video.

Music video content on Official Artist Channels can be remixed using Green Screen, Cut, or Sound. YouTube notes that eligibility for this depends on partner agreements. Creators using audio from music videos receive attribution through the Sound Library page, where the source video is linked alongside other Shorts that used the same audio.

Notification experiment running in parallel

The February 24 test is not the only experiment YouTube is running this month. On February 11, 2026, the platform began testing changes to push notification delivery for creators, reducing notifications sent to viewers who have not recently engaged with a channel despite receiving prior push notifications. According to Hank from TeamYouTube, these viewers will still receive notifications through the in-app inbox and can still see content in the Subscriptions feed – only the push notification is withheld. Channels that upload infrequently are exempt from this experiment.

The rationale given is that viewers who feel overwhelmed often turn off all notifications from YouTube entirely, which means even engaged viewers of other channels stop receiving alerts. The experiment aims to reduce that behavior by reducing notification volume for already-disengaged viewers.

On February 18, YouTube also began testing its conversational AI tool – an interactive question-and-answer feature already available on mobile – on smart TVs, gaming consoles, and streaming devices. Eligible users can press an “Ask” button while watching a video, or use a TV remote microphone to ask questions aloud. The system supports prompts like recipe ingredient questions or requests for song lyric context.

The attribution question for marketers

For the marketing community, the Reimagine and Add object tests raise a practical question that goes beyond creator relations. If AI tools can transform a single frame of any eligible Short into a new video, the definition of “original content” on the platform becomes more fluid. Brands that produce Shorts as part of their organic or paid strategy should understand that their content – unless they opt out – may serve as source material for AI-generated derivatives that link back to their video.

YouTube Shorts leads short-form video consumption at 56% among U.S. consumers surveyed by Media.net in November 2025, ahead of TikTok and Facebook at 50% each. With that scale, the platform’s decisions about what creators can do with each other’s content carry weight. Attribution links may drive discovery – the Help Center notes that remixing is “a great opportunity for new viewers to discover your content” – but that framing reflects a promotional perspective rather than a rights-based one.

The mandatory AI content disclosure policy, effective May 21, 2025, requires creators who use generative AI to label content that appears realistic but does not reflect actual events. Shorts produced using Add object or Reimagine would fall under this framework, and consistent with YouTube’s existing pattern for tools like Dream Track and Dream Screen, the label is applied automatically when platform-native AI tools are used.

Scale and current status

The test remains small. According to the YouTube community post, the Reimagine and Add object options appear only for “a small group of creators (English only)” after tapping Remix on an eligible Short. Not all Shorts are eligible – the same eligibility restrictions that govern existing remix tools apply here. YouTube has not provided a timeline for wider release, and the post ends with: “We’ll keep you posted on our expansion plans.”

The test follows a pattern documented extensively across YouTube’s AI feature rollouts in 2025: restricted launch, English-first, EU and UK excluded, expansion plans unspecified. Whether this test leads to a broader rollout depends in part on the feedback YouTube collects from the initial group, and on how the creator community responds to the underlying permission structure.

Summary

Who: YouTube, operating under Google, is testing new AI-powered remix features with a small group of English-language creators. The test was announced by JJ from TeamYouTube in the platform’s official feature experiments community thread on February 24, 2026.

What: Two new capabilities – “Add object” and “Reimagine” – are being tested inside the Shorts Remix menu. Add object allows creators to insert AI-generated items into scenes up to 8 seconds long from another creator’s Short. Reimagine allows a creator to take a single frame from any eligible Short and transform it into an entirely new video through suggested or custom prompts, with the option to add up to two reference photos. Every Short produced with these tools links back to the original creator’s video.

When: The test was announced on February 24, 2026, as part of YouTube’s ongoing experiments and feature testing program, which was originally established in October 2019 and is regularly updated by TeamYouTube community managers.

Where: The test is available only to a small group of creators using English-language settings. It is not available in the European Union or the United Kingdom, consistent with geographic restrictions applied to other recent AI features including Extend with AI and Ingredients to Video.

Why: YouTube has been progressively adding AI generation tools to its Shorts creation workflow since late 2024, responding to competition from TikTok and Instagram Reels. The Reimagine and Add object tools extend this strategy into derivative content – allowing creators to use others’ Shorts as generative seeds rather than just as clips or audio sources. The attribution model keeps links to the original content in place, with opt-out available but tied to opting out of all video remix features.

