AI-Generated Humor Spreads on Gemini Nano Banana: Light-hearted Modifications Spark Concerns over User Privacy
In September 2025, a new AI-powered photo editing tool named Nano Banana, part of Google's Gemini 2.5 Flash Image model, gained significant popularity on social media. The tool has been used to generate over 500 million images and has driven 10 million new downloads within weeks, fueling a slew of viral trends, including the Saree Trend, 3D Figurines, Celebrity Selfies, Polaroids, and Pet Transformations.
While Nano Banana delights users with its ability to blend user-uploaded photos with imaginative prompts, such as transforming selfies into Bollywood-style portraits or pets into plush toys, it has also raised concerns about privacy violations and the looming threat of deepfakes.
Child safety advocates have labeled kid-friendly versions of the tool "high risk" for data leaks. The deeper worry is deepfakes: AI-generated media that convincingly alters reality. Malicious actors could use Nano Banana to create deceptive content, such as fraudulent images or videos.
One study noted that realistic deepfakes can evade current detection tools 70% of the time, amplifying the threat. Privacy advocates stress that users, especially minors, are vulnerable if their images are misused. A user could, for instance, generate a convincing but false image with Nano Banana, such as one placing themselves in a fake news broadcast.
Google has implemented visible watermarks and invisible SynthID marks as safeguards for Nano Banana, but these measures are not foolproof. Experts urge users to avoid sharing sensitive photos, scrutinize outputs for anomalies, and demand stronger safeguards from tech giants like Google.
The Saree Trend transforms selfies into vintage Bollywood portraits with intricate sarees and cinematic lighting. In India, where the trend thrives, experts highlight risks of identity fraud, noting that AI edits can sometimes be reverse-engineered to reconstruct the original photos.
3D Figurines turn portraits or pet photos into collectible-style miniatures, as if displayed on a virtual shelf. Polaroids convert photos into faded polaroid-style shots for retro vibes. Pet Transformations turn pets into plush toys or fantasy creatures.
Celebrity Selfies blend users into star-studded scenes or morph them into celebrity lookalikes. The saree and celebrity selfie trends illustrate the risk: a photo morphed into a convincing but fabricated scene could be weaponized.
The lack of robust, universal deepfake detection tools exacerbates the problem, leaving individuals and platforms struggling to keep up. The watchdog organization AlgorithmWatch has highlighted the legal issues and security risks associated with Nano Banana.
Advances in deepfake technology pose a growing threat, and experts warn that tools like Nano Banana could fuel scams or propaganda if not tightly regulated. As use of Nano Banana continues to grow, it is crucial for users and tech companies alike to prioritize privacy and security to mitigate the risks associated with deepfakes.