Google · Google Research
Google Research unveiled a new approach for editing images, now live in the Auto frame capability in Google Photos
Have you ever looked back at your camera roll and wished you had captured a scene slightly differently?
Key facts
- Authored by Marcos Seefelder, Staff Software Engineer, Platforms & Devices, and Pedro Velez, Senior Research Engineer, Google DeepMind
- In contrast to other generative image editing solutions, their method consists of two stages: (1) 3D scene and camera estimation and (2) generative inpainting and retouching
- Key contributors include: Thiemo Alldieck, Marcos Seefelder, Hannah Woods, Pedro Velez, Michael Milne, Bert Le, Navin Sarma, Jasmin Repenning, and Selena Shang
- Their method, now available as part of the Auto frame feature in Google Photos, uses machine learning (ML) models to understand the scene and its spatial layout and uses generative AI to imagine the scene from a new perspective
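The two-stage pipeline described above can be sketched in heavily simplified form. Everything below is a hypothetical illustration, not Google's implementation: the function names, the placeholder depth map, and the `"inpainted"` marker are all stand-ins for real monocular depth, pose-estimation, and generative inpainting models.

```python
from dataclasses import dataclass

@dataclass
class SceneEstimate:
    """Hypothetical container for stage 1 output: coarse 3D structure and camera pose."""
    depth_map: list       # per-pixel depth (stand-in for a real depth model's output)
    camera_pose: tuple    # estimated camera position/orientation

def estimate_scene(image):
    """Stage 1 (sketch): estimate the 3D scene and camera from a single 2D photo.
    A real system would run learned depth and camera-estimation models here."""
    depth = [[1.0 for _ in row] for row in image]   # placeholder: uniform depth
    return SceneEstimate(depth_map=depth, camera_pose=(0.0, 0.0, 0.0))

def reframe(image, scene, new_pose):
    """Stage 2 (sketch): re-project the photo to a new camera pose, then fill in
    regions the original photo never captured. A real system would call a
    generative inpainting/retouching model for those disoccluded pixels."""
    reprojected = [row[:] for row in image]         # stand-in for 3D re-projection
    holes = [(0, 0)]                                # pixels revealed by the new pose
    for r, c in holes:
        reprojected[r][c] = "inpainted"             # generative fill (placeholder)
    return reprojected

image = [["px"] * 4 for _ in range(3)]
scene = estimate_scene(image)
result = reframe(image, scene, new_pose=(0.1, 0.0, 0.0))
print(result[0][0])
```

The key design point is the separation of concerns: stage 1 produces only geometry (depth and camera), so stage 2 can re-render from any nearby viewpoint and restrict generative synthesis to the pixels that have no source data.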
Summary
The post is by Marcos Seefelder, Staff Software Engineer, Platforms & Devices, and Pedro Velez, Senior Research Engineer, Google DeepMind. The team introduced a new approach for editing images, now live in the Auto frame feature in Google Photos, that lets users re-imagine photos from a new perspective after they have been taken. While cropping and zooming may help, classic image editing tools won’t fix the underlying problem: the image still shows the scene from a fixed, imperfect perspective. The new Auto frame feature instead interprets a standard 2D photo as a 3D scene.