Adobe has added several Firefly-based features to Photoshop that use generative AI to extend images beyond their borders, add objects to images, and remove objects with more precision than the previously available content-aware fill. For now, these features are available only in the beta version of Photoshop; Firefly beta users on the web will also get access to some of them.

Photoshop users describe the image or object they want Firefly to create using natural language text prompts, and Adobe returns three variations for every prompt. Photoshop sends only parts of a given image to Firefly, not the entire image, and places the results on a new layer. Firefly was trained on the photos available in Adobe Stock and does especially well with landscapes. Adobe has implemented additional safeguards to ensure that the model returns safe results, and it will automatically apply its Content Credentials to any images made with these AI-based features.

Adobe also plans to bring Firefly to its photo management tool, Lightroom, but has not committed to a timeline.
