🗞️ Sora Just Went Unlimited – Say Goodbye to Token Caps 💥🚀

AI for eCommerce Newsletter - 37

OpenAI recently unlocked unlimited Sora usage for ChatGPT Plus and Pro users—removing the previous credit-based system that capped video generation.

This is a game-changer, especially considering how computationally expensive text-to-video models are. I’ve personally burned through a lot of $$$ testing various tools, and none have offered this level of access without racking up costs.

What sets Sora apart is how effortlessly it turns simple prompts into cinematic, visually rich scenes—and then lets you break them into modular, editable chunks, like a storyboard built with AI. It’s intuitive, fast, and surprisingly fun to play with.

You start with a simple prompt (in this case, a corgi running toward the camera), and Sora expands it into a detailed, cinematic scene. Here's what it generated for the first shot:

📍 Scene 1

A cute corgi with its fluffy tail wagging runs energetically in slow motion towards the camera. The corgi’s ears flop with each joyful bound as it moves across a lush green lawn, which is part of a quaint house’s yard. A white picket fence frames the background, enhancing the charming suburban setting. The sunlight casts a warm glow over the scene, highlighting the corgi’s golden fur, and the atmosphere is filled with a sense of playfulness and joy.

Then, you can seamlessly move to the next moment in the story:

📍 Scene 2

The corgi settles down on the ground looking up at me.

Each moment lives on its own “card,” which you can rearrange, tweak, or replace—just like editing slides in a storyboard. This kind of control makes it super easy to shape a video scene-by-scene, whether you’re crafting product demos, storytelling reels, or even branded shorts.

This is the result - pretty darn good!

What’s different about video prompting is that you’re not just describing a static image—you’re guiding motion, pacing, mood, transitions, and even camera angles over time. It's more like directing than writing. With tools like Sora, you're able to shape not just what the viewer sees, but how they see it—frame by frame. The AI understands cues like "slow motion," "close-up," or "sunlight casting a warm glow," and turns them into dynamic, evolving visuals. That’s a major leap from image generation—it’s storytelling in motion.

It took me just one weekend to go through all of Sora’s tutorials—and after that, I was off to the races. The learning curve is surprisingly smooth, especially if you’re already comfortable writing prompts. Once you get a feel for how Sora interprets pacing, motion, and scene transitions, you’ll find yourself thinking more like a director than a writer.

And this is the final product video. How cool is that?

Do you want AI Ads built for your Sponsored Brand ads? Contact us at [email protected]

Flora - Your AI Supercanvas for Creativity 💥🚀

If you’ve ever felt like jumping between ChatGPT, Midjourney, Runway, and other AI tools kills your creative flow — meet Flora.

Flora is a beautifully designed infinite canvas that combines text, image, and video generation in one space. It’s like Figma, but for AI workflows — letting you ideate, prompt, visualize, iterate, and remix without context-switching a dozen times.

What I love about Flora:

✅ You can use multiple AI models (Claude, GPT-4, Stable Diffusion, Runway, etc.) side-by-side
✅ Everything lives on a visual canvas, so your creative thought process actually stays visible
✅ You can plug into community-built workflows for inspiration or rapid prototyping
✅ Great for both solo brainstorming and collaborative client work

This tool is perfect if you're:

  • Building creative assets for e-commerce

  • Mapping out video ads or storyboard ideas

  • Experimenting with product positioning or brand voice

  • Just curious about chaining multiple AI tools together

And a bonus: it’s designed by creatives for creatives — no coding required.

Everything happens in one drag-and-drop infinite canvas built to keep your creative flow uninterrupted — no tab-switching, no context loss.

Here’s what I did:

  1. Typed a simple text prompt describing a fruit basket 🍓🍇

  2. Flora used AI to generate a gorgeous, photorealistic image

  3. Then I dragged that image into a video generation node, where I picked Minimax from among 10+ top models (like Runway, Pika, Luma, and more)

  4. Voilà — a full creative pipeline, from text ➡️ image ➡️ video, in minutes

This kind of multimodal chaining makes Flora an incredible tool for:

  • Prototyping Amazon product images

  • Pitching visual storyboards to clients

  • Experimenting with brand messaging

  • Running AI-powered content brainstorms

🧪 It’s like having Midjourney, Runway, and GPT-4 stitched together into a single visual workspace. No code. Just creativity.

Google's Gemini Advanced — A Quiet $2 Upgrade with Serious Upside

If you’ve noticed the little ✨“Help me write” button in Gmail lately, you’re already seeing Gemini at work.

This feature is part of Google’s silent rollout of Gemini into Workspace—and it's more powerful than it looks. Whether you're replying to emails, summarizing long threads, or crafting outreach, Gemini can now handle the heavy lifting directly inside your inbox.

Even better: this goes beyond email. In Docs, it can summarize meeting notes. In Sheets, it can explain formulas or create charts. In Meet, it helps generate live summaries and follow-ups.

For teams like ours that live inside Workspace all day, this kind of embedded AI is a serious time-saver. No switching tabs. No exporting data. Just fast, contextual support right where the work happens.

This update also unlocks access to Google's evolving Gemini 2.0 models, which support advanced multimodal understanding, long-context processing, and tool use.

What’s included in the Gemini Advanced plan:

  • 2.0 Flash – for everyday tasks with added capabilities

  • 2.0 Flash Thinking (experimental) – supports advanced reasoning

  • Deep Research – generates in-depth research reports

  • 2.0 Pro (experimental) – optimized for complex tasks

The inclusion of Deep Research is notable. OpenAI offers a similar capability but limits usage to 10 free searches/month, or 100 searches/month on their $200 Pro plan.

Gemini also offers extremely high token limits:

  • Input: 1,048,576 tokens (over 1M)

  • Output: 8,192 tokens

By contrast, Claude.ai has a context window of ~200k tokens. While I still prefer Claude for programming and reasoning, I frequently hit the max-length cap, which interrupts workflows.
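If you're curious whether a long document will hit these caps before you paste it in, a quick back-of-the-envelope check is easy to script. This sketch uses the rough ~4-characters-per-token heuristic for English text (actual tokenizers for Gemini, Claude, and GPT models all differ, so treat the numbers as ballpark estimates), with the window sizes quoted above:

```python
# Rough sketch: will this text fit a model's context window?
# Uses the common ~4 characters-per-token heuristic for English text;
# real tokenizers will give different counts, so this is an estimate only.

CONTEXT_WINDOWS = {
    "gemini-advanced": 1_048_576,  # input limit cited above (over 1M tokens)
    "claude": 200_000,             # ~200k-token context window
}

def estimate_tokens(text: str) -> int:
    """Approximate token count via the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

def fits(model: str, text: str) -> bool:
    """Check whether the estimated token count fits the model's window."""
    return estimate_tokens(text) <= CONTEXT_WINDOWS[model]

# A 1M-character document (~250k estimated tokens) would overflow Claude's
# window but fit comfortably within Gemini Advanced's.
doc = "x" * 1_000_000
print(fits("claude", doc))           # False
print(fits("gemini-advanced", doc))  # True
```

It's crude, but it explains exactly the max-length interruptions mentioned above: a long pasted thread that blows past ~200k estimated tokens will stall in Claude while still fitting in Gemini with room to spare.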

The competition between LLMs is heating up fast. With Google silently embedding Gemini into Workspace, it's clear that the race isn’t just about model performance anymore—it’s about deep integration, utility, and ecosystem lock-in. As OpenAI, Anthropic, and Google push to outpace one another, the winners won’t just be the models—they’ll be the users who figure out how to leverage these tools the fastest.

This week, I’m excited to feature Jo Lamadjieva, a sharp, creative mind at the intersection of performance marketing and AI. Jo’s been quietly building tools that solve real problems for media buyers, and her latest one is a gem. She’s just launched a Meta Ads AI auditor that’s fully autonomous, insanely easy to use, and—best of all—free for the first 200 people. If you’re running Meta ads, this is something you’ll want to check out here.

Upcoming AI-Related Events

Unleash AI For Business LIVE Summit (FREE)

30+ Top Entrepreneurs Reveal AI Secrets To Grow Your Business

Click on image to register for FREE:

Prosper Show, March 25-27, Las Vegas

Register here and get $100 off with code SANAR1100OFFSPEAK

We hope you liked this edition of the AI for E-Commerce Newsletter! Hit reply and let us know what you think! Thank you for being a subscriber! Know anyone who might be interested in receiving this newsletter? Ask them to subscribe here: www.ppc-ninja.com/subscribe. They will thank you for it 💥💪!!

~Ritu
