
Photoshop’s new AI assistant makes it easier than ever to edit images

10.03.2026

Unlike rival tools like Nano Banana or GPT-Image, the AI assistant in Photoshop web and mobile gives you ultra-precise editing capabilities.

Today Adobe is launching the public beta of its new AI assistant for Photoshop Web and Photoshop Mobile. The company’s impressive new assistant technology lets anyone produce seemingly flawless photo edits, Nano Banana style, just by prompting the apps. It then goes further by giving you easy and precise ways to interact with the software, whether by voice or by using your finger to navigate the interface.

Photoshop Mobile and Web have included AI features for a while. The web version already had Adobe Firefly generative AI features like generative fill and generative expand. And the previous mobile version of Photoshop became truly usable because it smartly integrated AI to make accurate object selections possible even with a fat finger.

This new AI assistant integration removes any lingering difficulty from image editing, putting Photoshop in competition with popular AI image generators like Google’s Nano Banana, OpenAI’s GPT-Image, and ByteDance’s Seedream. Unlike those models, however, the new Adobe assistant combined with Photoshop Mobile and Web gives users far more editing precision through its new tools.

Plus, it adds the possibility of “upstreaming” results beyond posting an edited image on social media. Users will be able to move AI-edited files into Adobe’s full creative app workflows: opening them in desktop Photoshop, integrating them into a Premiere project, or publishing a book in Acrobat.

How the new Photoshop Web and Mobile work

When you click on the assistant icon, the model first analyzes the raw pixels on your screen. The assistant essentially scans the image to identify both the overall context and the specific objects within the frame—recognizing the difference between a human subject in the foreground, all the different objects present, and a chaotic crowd in the background. Once it maps out the “reality” in the image, the app provides you with proactive recommendations.

The assistant suggests edits, which can be any number of things depending on the nature of the image, from removing “scattered objects to tighten the composition” to refining the lighting or adjusting the color palette, or anything in between. If you prefer to be hands-off, you can tell the machine to do it for you, or you can bypass the automation and do your own thing.

Taking the manual route means you can use your voice or text prompts to manipulate the image while retaining granular control over the assistant’s actions. In the mobile app, for instance, you can issue a vocal command to alter a specific object—like removing the cropped head of a dude in the background—and the assistant will automatically isolate that element and place the changes on a dedicated layer.

© Fast Company