Google Nano Banana

Google's advanced image editing and generation model from Gemini 2.5.

Model Overview

Gemini 2.5 Flash Image is a state-of-the-art multimodal model from Google, designed for creative workflows that involve image generation and editing. It natively understands and generates images, enabling a unified process for creating and refining visuals.

Best At

  • Seamlessly combining multiple images into a new visual (Multi-image Fusion).
  • Maintaining character, object, or style consistency across different prompts and images.
  • Performing precise, targeted edits using natural language descriptions (Conversational Editing).
  • Complex visual reasoning tasks that require understanding beyond simple photorealism.

Limitations / Not Good At

  • Specific weaknesses are not called out in Google's documentation; treat complex scene generation and highly detailed fine-tuning as untested edge cases rather than known capabilities.

Ideal Use Cases

  • Integrating products into new scenes.
  • Restyling rooms by merging furniture and decor images.
  • Generating a series of cohesive visual assets for storytelling or branding.
  • Creating targeted edits like blurring backgrounds, removing objects, or altering poses (see the sketch after this list).
  • Interpreting diagrams or following multi-step visual instructions.
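
Several of these use cases reduce to a single multimodal request: a text instruction plus one or more reference images. The sketch below is a minimal illustration using the google-genai Python SDK; the model id, file names, and prompt are assumptions for illustration, not part of this node's documentation (inside Nodespell the same capability is exposed through the Prompt and Image Input fields described below).

```python
# Minimal sketch: product placement plus a targeted edit (illustrative only).
from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

product = Image.open("product.png")      # assumed: product shot to place in a scene
scene = Image.open("living_room.jpg")    # assumed: reference background

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed model id for Nano Banana
    contents=[
        "Place the product from the first image on the coffee table in the "
        "second image, then blur the background slightly.",
        product,
        scene,
    ],
)

# The response can mix text and image parts; save the first returned image.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```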

Input & Output Format

Input: Text prompt, optional input images (array), optional output format (string).
Output: Image (URI).
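
As a concrete illustration of that shape, the Python dict below mirrors the documented fields; the key names and the result URI are hypothetical placeholders, not Nodespell's actual request or response format.

```python
# Hypothetical request/response shape for this node (illustrative only).
request = {
    "prompt": "A ceramic mug on a rustic wooden table, soft morning light",
    "image_input": [                      # optional array of reference images
        "https://example.com/mug.png",
    ],
    "output_format": "jpg",               # optional; defaults to jpg
}

response = {
    "output": "https://example.com/generated/abc123.jpg",  # Image (URI)
}
```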

Performance Notes

  • Designed for fast, conversational, and multi-turn creative workflows.
  • Supports efficient handling of image data through the Gemini API's File API for larger files and repeated use (see the sketch after these notes).
  • All generated or edited images are embedded with an invisible SynthID watermark for transparency.
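
A minimal sketch of the File API reuse mentioned above, again with the google-genai Python SDK; the upload call, model id, and file path are assumptions rather than details taken from this page.

```python
# Upload a large reference image once, then reuse the handle across edits.
from google import genai

client = genai.Client()

reference = client.files.upload(file="large_reference.png")  # assumed local path

for instruction in ("make the lighting warmer", "blur the background"):
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # assumed model id
        contents=[instruction, reference],
    )
    # ...extract image parts from response as in the earlier sketch...
```
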
Inputs (2)

Prompt (String, multi-input, min 0 / max 100): A text description of the image you want to generate.
Image Input (String, multi-input, min 0 / max 100): Input images to transform or use as reference (supports multiple images).

Parameters (3)

Prompt (String, default: empty): A text description of the image you want to generate.
Aspect Ratio (String, default: match_input_image): Aspect ratio of the generated image.
Output Format (String, default: jpg): Format of the output image.
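
The match_input_image default matches the output to the aspect ratio of the input image rather than forcing a fixed ratio. The helper below is a hypothetical illustration of that behaviour using Pillow; it is not part of the node.

```python
# Illustrative only: resolve "match_input_image" into an explicit W:H ratio.
from math import gcd
from PIL import Image

def resolve_aspect_ratio(setting: str, reference_path: str | None) -> str:
    """Hypothetical helper: turn the aspect-ratio setting into a concrete ratio."""
    if setting != "match_input_image" or reference_path is None:
        return setting  # explicit values such as "1:1" or "16:9" pass through
    width, height = Image.open(reference_path).size
    divisor = gcd(width, height)
    return f"{width // divisor}:{height // divisor}"

print(resolve_aspect_ratio("match_input_image", "moodboard.png"))  # e.g. "4:3"
```
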
Outputs (1)

Output (Inferred)

Used in Snippets (3)

Fashion Print Design (Snippet)

## Overview
This Fashion Print Design workflow turns your fabric motifs and reference photos into production-ready dress visuals and motion previews. It combines multiple image models to apply prints to cotton garments with realistic studio lighting and consistent color.

## What You'll Build
- High‑resolution **1:1 garment mockups** with your print applied across the full dress.
- 2K **fabric texture tiles** suitable for textile sampling or e‑commerce.
- Short **5‑second fashion clips** that showcase the dress and print in motion.
- Iterative concept boards that stay aligned to your fashion print moodboard.

## How It Works
1. A moodboard image input (e.g., `fashion_print_moodboard`) anchors the overall style, palette, and motif direction.
2. Multiple **Seedream 4** nodes (25 total, key ones like `seedream9`, `seedream11`, `seedream13`) generate 2048×2048, 2K print swatches and fabric renders at a **1:1 aspect ratio**, optimized for color consistency where Nano Banana struggles with flower tones.
3. **googleNanoBanana** nodes (7 total, JPG output, 1:1) support fast ideation passes, while sticky notes guide prompts such as changing the dress material to match the reference and ensuring the print pattern wraps cleanly across the garment.
4. A dedicated instruction note drives photorealism: applying the print texture to **cotton fabric** under clear, realistic studio lighting.
5. **qwenImageEditPlus** and **qwenImage** refine fit, fabric details, and print placement, while **reveCreate** and `hailuo23Fast` assist with stylistic variations and composition.
6. **kling25ImageToVideo** nodes transform key frames into **5‑second videos** (CFG scale 0.5, negative prompt to avoid blur, distortion, and low quality), giving you animated fashion previews.

## Best For
- Fashion and textile designers developing new print collections.
- Apparel brands needing fast dress and fabric mockups from reference art.
- Surface pattern designers pitching prints to clothing labels.
- E‑commerce teams creating on‑model visuals and motion previews without a full photoshoot.
- Creative studios prototyping AI‑assisted fashion print design workflows.

Try this Fashion Print Design snippet in Nodespell to turn flat print references into polished, motion‑ready fashion visuals in a few guided steps.
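
A simplified, hypothetical view of the core render-to-video path in this graph, written as plain Python data; the node names and parameter values are quoted from the description above, but the structure and key names are illustrative, not Nodespell's actual workflow format.

```python
# Illustrative pipeline sketch (not Nodespell's real serialization format).
fashion_print_pipeline = [
    {"node": "googleNanoBanana", "role": "fast ideation pass",
     "params": {"aspect_ratio": "1:1", "output_format": "jpg"}},
    {"node": "seedream9", "role": "2K print swatch / fabric render",
     "params": {"size": "2K", "aspect_ratio": "1:1"}},
    {"node": "kling25ImageToVideo", "role": "5-second fashion preview",
     "params": {"cfg_scale": 0.5,
                "negative_prompt": "blur, distortion, low quality"}},
]
```
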
Nodespell Team

AI Sunglasses Product Mockup Generator (Snippet)

## Overview
This Nodespell snippet is an **AI sunglasses mockup workflow** that turns multiple reference photos and style notes into polished, production-ready eyewear visuals. It generates new sunglasses designs that match frame shape, lens color, and viewing angle instructions.

## What You'll Build
- High-resolution (2K) sunglasses product renders based on Etsy-style reference images.
- Front and side-view variations, including a right-hand side profile of the same frame.
- Customised frame silhouettes with more curved, rounded outer edges and refined corners.
- Consistent lens colour and finish driven by a dedicated lens colour reference image.

## How It Works
1. **Reference intake via stickyImage nodes** – Seven `stickyImage` nodes load base inspiration images from Etsy URLs plus a dedicated `lens_colour_ref` node to lock in lens tint and finish.
2. **Design intent with stickyNote prompts** – Fourteen `stickyNote` nodes (e.g. `sticky_note4`, `sticky_note5`, `sticky_note18`) specify shape changes, side-view requirements, and which reference to follow for frames vs. lenses.
3. **Primary 2K generation with seedream4** – Four `seedream4` nodes generate core sunglasses renders at 2048×2048 resolution (`size: 2K`, `aspect_ratio: match_input_image`, `max_images: 1`) based on the combined textual and visual guidance.
4. **Variant and layout handling with googleNanoBanana** – Nine `googleNanoBanana` nodes create 16:9 PNG outputs for marketing-ready images, web product cards, and banner layouts (`aspect_ratio: 16:9`, `output_format: png`).
5. **Detail edits with qwenImageEditPlus** – A single `qwenImageEditPlus` node performs targeted refinements like softening frame edges, rounding corners, and aligning the side view to the front-view design.
6. **Complex graph orchestration** – The 35 nodes and 38 connections coordinate 21 inputs and 12 outputs, ensuring reference images, notes, and model calls stay in sync through the full design cycle.

## Best For
- Eyewear brands and independent makers prototyping new sunglasses lines.
- Etsy and DTC sellers needing fast, on-brand sunglasses mockups.
- Product designers exploring frame variations without manual 3D work.
- Marketers creating consistent hero images and ad creatives from a few reference photos.

Try this snippet in Nodespell to rapidly turn your reference shots into polished AI-generated sunglasses visuals ready for product pages and campaigns.
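
As a quick reference, the two render passes above differ mainly in their output settings; the summary below is plain Python data with values quoted from the description (the dict layout itself is illustrative).

```python
# Settings comparison for the two render passes (values from the description).
sunglasses_render_passes = {
    "seedream4 (hero renders)": {
        "nodes": 4, "size": "2K (2048x2048)",
        "aspect_ratio": "match_input_image", "max_images": 1,
    },
    "googleNanoBanana (marketing variants)": {
        "nodes": 9, "aspect_ratio": "16:9", "output_format": "png",
    },
}
```
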
Nodespell Team

Multi-Dish Food Image Prompt & Generation Workflow (Snippet)

## Overview
This Nodespell snippet is a multi-dish **AI food image prompt and generation workflow**. It turns simple dish ideas into detailed visual prompts, then renders high‑resolution 4:3 food images using Google Nano Banana and Seedream models.

## What You'll Build
- A reusable pipeline that expands dish concepts into rich, camera-ready image prompts.
- 2K, 4:3 food photos for menus, blogs, or social media, exported as JPGs.
- Parallel image variants for multiple dishes in a single run.
- Optional text extras (like jokes or captions) powered by Gemini 2.5 Flash.

## How It Works
1. You describe dishes or ingredients through the 11 input nodes, guided by 10 **stickyNote** instructions (for example, “Generate a detailed prompt for image generation of the dish #1/#6, include all visible ingredients”).
2. Eight **geminiText** nodes call the **gemini-2.5-flash** model to expand each dish into a scene-level prompt: plating, lighting, background, camera style, and visible ingredients.
3. These enriched prompts fan out into eight **googleNanoBanana** nodes configured to a **4:3 aspect ratio** and **JPG** output, generating fast concept images and low-cost visual drafts.
4. Once you're satisfied with the prompts, they feed into six **seedream** image nodes (seedream4/5/6/7/8) set to **2K resolution (2048×2048), 4:3, max_images: 1, sequential_image_generation: disabled** for sharp, production-ready renders.
5. A **stickyImage** node can serve as an optional visual reference, helping align AI output to a brand or photography style.
6. Twelve output nodes collect final images and text so you can review, compare dishes, and export for menus, posts, or recipe apps.

## Best For
- Food bloggers and creators needing consistent, high-quality dish imagery.
- Restaurant owners and menu designers prototyping layouts and specials.
- Recipe platforms and cooking apps generating scalable visual libraries.
- AI artists and prompt engineers exploring food photography styles.
- Marketing teams producing rapid A/B-tested visuals for campaigns.

Try this snippet in Nodespell to rapidly turn raw dish ideas into polished, high-resolution food visuals.
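
The prompt-expansion fan-out at the heart of this workflow can be approximated outside Nodespell with two chained Gemini calls. The sketch below uses the google-genai Python SDK; the image-model id and output file name are assumptions, and the real snippet runs this fan-out across eight dishes in parallel.

```python
# Sketch: expand one dish idea into a detailed prompt, then render it.
from google import genai

client = genai.Client()

dish = "grilled salmon with asparagus and lemon butter"

# Step 1: a text model expands the dish into a camera-ready image prompt.
prompt_response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=(
        f"Write a detailed food-photography prompt for: {dish}. "
        "Include plating, lighting, background, camera style, and all visible ingredients."
    ),
)
image_prompt = prompt_response.text

# Step 2: the image model renders the expanded prompt.
image_response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # assumed id for the image model
    contents=image_prompt,
)
for part in image_response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("dish.jpg", "wb") as f:
            f.write(part.inline_data.data)
        break
```
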
Nodespell Team

Nodespell
📍 London
Building the future. Join us!

Type: Node
Status: Official
Package: Nodespell AI
Category: AI / Image / Google
Input: Text, Image
Output: Image
Tags: Image Edit, Image Gen
Keywords: Fast, Image Editing, Image Generation, Image Edit