If you're new, welcome! If you're a regular, thanks for being here as we navigate this AI shift in eCommerce. I share what I'm testing and what's actually moving the needle. Where's the vault, you ask? Every previous edition is saved here.
In this edition (shortcuts):
The Email I Didn't Send
Tell Claude You're Going to Sleep 🥱
GPT-5.5 vs Opus 4.7 (The April Showdown) 🥊
News Worth Reading
Upcoming Events
The Email I Didn't Send
I sent an email to my team this morning. Eight people, with me Cc'd.
I didn't write it. I didn't paste it into Gmail. I wasn't even at my keyboard for the actual send.
I asked Claude to do it. And Claude did it.
This is not another "Claude drafted my email" story. We've all done that. Claude has been writing my drafts for two years now. This is something different. I told Claude what I wanted, Claude wrote it, showed me a preview, and then actually sent it from my Gmail to eight of my teammates. Subject line, formatting, recipients, Cc, signature. The whole thing.
That sounds small. It's not.
What Actually Happened
I'm rolling out a new tool to my PPC Ninja team. Nothing dramatic, just a one-time setup they all need to run on their laptops. Five minutes, one paste of a command, and they're done.
Normally I would write that email myself. Sit down, draft it, format the Mac and Windows instructions separately, double-check the recipient list, send, then field the follow-ups when someone got stuck on step three.
This time I just told my AI assistant Claude:
"Send an email to my eight teammates with separate instructions for Mac and Windows, as non-technical as possible. Cc me. Say that you are my assistant Claude."
That was the whole instruction.
Claude drafted the email. Showed me a preview in my browser. I asked for two changes (give the Mac and Windows sections colored banners, and account for the fact that some teammates have the folder named slightly differently). Claude updated the preview. I said "send."
The email landed in eight inboxes within a second. With me Cc'd, so I have a clean record on my side.
[IMAGE: screenshot of the rendered email preview with a blue MAC banner and an orange WINDOWS banner, recipient block visible at top]
So What's Actually New Here
You've been able to ask AI to write emails for years. Old news.
What's new is the send part. The "AI did the thing in the world" part.
Until this week, every AI email workflow I'd used had a human-shaped gap right at the end. The AI writes the draft. You copy it into Gmail. You add the recipients. You fix the subject line. You hit send. The AI is the writer, you are the operator.
That gap closed for me today. Claude is now the writer and the operator. I'm just the editor and the "yes, ship it" button.
The thing that made it work isn't a fancy new model. It's a small, free, boring CLI tool from Google called gws (Google Workspace CLI). It runs on my Mac. Claude knows how to use it. When I say "send," Claude calls the gws send command with the right arguments, gws calls the Gmail API, and Gmail does what Gmail does.
No Zapier. No make.com. No monthly subscription stack. Just my AI, my CLI, and my Google account.
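For the curious, here's roughly what happens under the hood when a tool like gws hands a message to the Gmail API: the whole email is assembled as an RFC 2822 message, base64url-encoded, and posted to the `users.messages.send` endpoint. A sketch using Python's standard email library, with placeholder addresses; the commented-out send line assumes an authenticated google-api-python-client service object, which isn't shown here.

```python
import base64
from email import message_from_bytes, policy
from email.message import EmailMessage

def build_gmail_payload(sender, to, cc, subject, html_body):
    """Build the JSON body Gmail's users.messages.send endpoint expects:
    a complete RFC 2822 message, base64url-encoded, under the 'raw' key."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = ", ".join(to)
    msg["Cc"] = cc
    msg["Subject"] = subject
    msg.set_content("Your mail client does not render HTML.")  # plain-text fallback
    msg.add_alternative(html_body, subtype="html")              # the formatted version
    return {"raw": base64.urlsafe_b64encode(msg.as_bytes()).decode()}

payload = build_gmail_payload(
    sender="me@example.com",              # placeholder addresses throughout
    to=["teammate@example.com"],
    cc="me@example.com",                  # Cc yourself for a clean record
    subject="One-time setup (5 minutes)",
    html_body="<p>Hi team, paste the command below into your terminal.</p>",
)

# Round-trip to confirm the encoding is what Gmail will parse back out.
decoded = message_from_bytes(
    base64.urlsafe_b64decode(payload["raw"]), policy=policy.default
)

# The actual send needs OAuth credentials and a service object, e.g.:
# service.users().messages().send(userId="me", body=payload).execute()
```

The point isn't that you'd write this yourself; it's that "send an email" is one well-documented API call away once something on your machine is allowed to make it.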
But Wait, Don't I Want a Human in the Loop?
Yes. And I had one. Me.
The thing that makes this safe (and not "AI gone rogue spamming my contacts") is the workflow. Every write action goes through three checkpoints before it actually happens:
🔶 The dry run. Before sending anything, Claude calls the API in "dry run" mode, which validates the request without firing it. If the recipients are wrong or the formatting breaks, this is where it fails, harmlessly.
🔶 The preview. Claude renders the actual email in a browser tab so I can see it the way my recipients will see it. Subject, banners, code blocks, spacing. I'm not reading raw markdown, I'm looking at the finished thing.
🔶 The "ship it" gate. Nothing sends until I literally type "send." That's the only word that matters. Until I say it, the draft just sits in a temp file.
That third one is the most important. The AI is fast and confident. I still own the decision to put something in someone else's inbox. The skill that runs Claude's workspace tools has a hard rule baked in: no writes, no sends, no calendar invites without explicit confirmation. Claude refuses to skip that step, even when I'm in a hurry.
[IMAGE: a flow diagram showing the three checkpoints, draft β preview β send with a person icon at the send gate]
That's what "fast with guardrails" looks like in practice. The AI moves at AI speed. The human stays in charge of consequences.
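The gate itself doesn't have to be clever. Here's a minimal sketch of how checkpoint three could work (my illustration of the pattern, not the actual skill's code): the send function only ever runs behind an explicit confirmation check, and any other reply leaves the draft untouched.

```python
def is_send_confirmed(reply: str) -> bool:
    """Checkpoint three: only the literal word 'send' ships the draft.
    'yes', 'ok', and 'looks good' all leave it sitting in the temp file.
    (Accepting any casing/whitespace is a design choice, not a requirement.)"""
    return reply.strip().lower() == "send"

def ship(draft: dict, reply: str, send_fn) -> str:
    """Run send_fn only after explicit confirmation; otherwise do nothing."""
    if not is_send_confirmed(reply):
        return "draft kept, nothing sent"
    send_fn(draft)
    return "sent"

sent_log = []
outcome_a = ship({"to": "team@example.com"}, "looks good", sent_log.append)
outcome_b = ship({"to": "team@example.com"}, " SEND ", sent_log.append)
```

A dozen lines, and the irreversible action is structurally impossible without a human saying the magic word. That's the whole guardrail.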
Why This Matters For Sellers
Forget the email for a second. Picture the next twelve months of your business.
🔶 "Email the agency, ask them about Q3 budget reallocation"
🔶 "Tell the warehouse the cutoff date moved to Friday"
🔶 "Send our top 20 ASIN list to the new hire and Cc Mike"
🔶 "Reply to the photographer and ask if she can reshoot the lifestyle set this week"
🔶 "Reply to that buyer who asked about MOQ. Polite but firm. Cc legal."
Every single one of those is something you do today by stopping whatever you were doing, opening Gmail, fishing out the right thread, writing the message, and sending. Five to ten minutes each. Twenty of them in a day, and you've spent two to three hours being a Gmail operator.
Now imagine just saying the request out loud (or typing it into a chat) to your AI, and the email is sent. Not drafted. Sent. With you Cc'd or Bcc'd if you want a record. With your tone, your judgment, and your audit trail attached.
That's where this is going. And it's not a 2027 thing. It works today.
The Bigger Shift
There's a phrase I keep coming back to: AI is moving from a thing you talk to into a thing that acts on your behalf.
For two years we've been chatting with AI in browser tabs. Asking it questions, getting answers, copying and pasting back into our actual work. The AI was clever, but trapped behind a chat box.
What's happening now is the chat box is leaking. AI is starting to reach into your real tools. Your Gmail. Your Drive. Your calendar. Your bulk file uploader. Your ad account.
Email is just the simplest, oldest, most universal version of this. Every business runs on email. Every team coordinates through email. Every customer reaches you through email. If your AI can send email as you, with your judgment behind it and your audit trail attached, you've quietly upgraded from "I have an AI assistant" to "I have an AI employee."
The model didn't get smarter this month. The wrapper around the model did.
What You Can Do This Week
You don't need a Claude Code setup or a CLI tool to start moving toward this. Three things you can do today:
🔶 Pick one repetitive email task. The kind you do five times a week without thinking. Status updates, supplier check-ins, agency follow-ups, "did you see this" forwards. That's your candidate.
🔶 Write the request like a delegation, not a prompt. "Email Anna and ask if the August invoice is being processed, polite, Cc me, sign off as Ritu" reads differently than "Write me an email about an invoice." The first one is how you'd talk to a real assistant. Get used to that voice.
🔶 Decide your "ship it" gate. Even when you have AI that can send, you still want a moment of human judgment before each send. Decide what that looks like for you. Mine is a one-line preview I can scan in three seconds. Yours might be different. The point is to pick it on purpose, not by accident.
The rest is plumbing. The plumbing exists today, and it will be in front of you, dressed up nicely inside the next round of AI products that ship over the next quarter.
The sellers who are practiced at delegating to AI by then will use those tools fluently. The ones who aren't will still be copy-pasting drafts into Gmail.
Cool, right?
Want to try it? Hit reply.
If you want the actual install commands and the small Claude skill that makes this work on your own laptop (Mac or Windows), hit reply with "try it" and I'll send the setup over. Takes about five minutes from start to finish, no developer experience needed.
Have you handed off a real action to your AI yet? Not a draft. An actual send, or save, or post. Hit reply and tell me what it was. I'm collecting examples.
Know a seller who's still in the "AI writes my drafts" stage? Forward this to them.
~Ritu
PPC Ninja helps brands future-proof their listings for AI, building stunning, RUFUS-ready images and videos. Hit reply to chat with us and explore how we can scale your content production across social media, Amazon ads, and Amazon Posts, efficiently and affordably.
NERD BYTES
Tell Claude You're Going to Sleep 🥱
Has Claude ever stopped you mid-flow with seven follow-up questions before doing the actual work?
"Should the heading be H2 or H3?" "Comma or semicolon here?" "Should I send the email now or wait until you confirm?"
That's because Claude defaults to a collaborative mode. It checks in. It asks permission. Which is great when you're sitting at the keyboard answering in real time. Less great when you handed it a 30-minute task and went to grab coffee.
Here's the trick I've started using.
The Magic Words
When you're about to step away (or just don't want to be pinged), tell Claude this:
"I'm stepping away for the next few hours. Don't wait on me. If you have questions or subtasks, spin up subagents and let them handle it. Use your best judgment. I'll review when I'm back."
Or, more dramatic:
"I'm going to sleep. Don't wake me. Spin up subagents as needed and orchestrate the work."
Then walk away.
What Actually Happens
Claude shifts modes. It stops being a polite assistant asking permission. It becomes an orchestrator, breaking the work into chunks and dispatching subagents to handle them in parallel. The subagents do their pieces, hand back results, and Claude assembles the final output.
The results are honestly a little wild. I came back from a coffee break to find:
🔶 Three open questions answered with reasonable assumptions, each clearly labeled
🔶 Five subtasks dispatched and completed in parallel
🔶 A summary of every decision made, with the reasoning
🔶 A short list of "wasn't sure, please confirm" items at the bottom for me to scan
Way better than coming back to find Claude waited four hours on question number one.
Pro Tip
This works best for tasks with inherently independent sub-pieces. Audits across multiple ASINs. Bulk file generation across many campaigns. Research across a list of competitors. Anything you'd describe as "do the same thing, just for 10 of them."
It works less well when each step depends on the exact answer from the step before. For tightly sequential work, you really do need to stay in the loop.
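If you've ever written parallel code, this split will feel familiar: it's plain fan-out/fan-in. A sketch in Python with a stand-in for the per-item work (the ASINs and the `audit_asin` function are placeholders, not real data or a real Claude API):

```python
from concurrent.futures import ThreadPoolExecutor

def audit_asin(asin: str) -> dict:
    # Stand-in for the real per-ASIN work a subagent would do
    # (pull metrics, check listing content, flag issues).
    return {"asin": asin, "issues_found": 0}

asins = [f"B0EXAMPLE{i:02d}" for i in range(10)]  # "do it for 10 of them"

# Independent sub-pieces run in parallel; pool.map returns results in order.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(audit_asin, asins))

summary = {r["asin"]: r["issues_found"] for r in results}
```

The reason sequential work doesn't parallelize is visible right in the shape of the code: `pool.map` only works because no `audit_asin` call needs another call's answer first. When step two depends on step one's exact output, there's nothing to fan out.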
This is the "delegate, don't supervise" pattern. The tools have been there for a while. Most people just don't tell Claude it's allowed to use them.
COOL TOOLS
GPT-5.5 vs Opus 4.7 (The April Showdown) 🥊
Two Heavyweight Releases In Eight Days
April was wild. Anthropic shipped Claude Opus 4.7 on the 16th. OpenAI fired back with GPT-5.5 (the model behind the new Codex) on the 23rd. Eight days, two of the strongest coding models the industry has ever seen, both raved about by developers I trust.
The "which one is better" question is genuinely interesting this time. Because the answer is: it depends on what you're doing.
Here's how I'm thinking about it, and how to access both inside the IDE you already use.
The Scoreboard
I went through the actual benchmarks instead of the hot takes. Here's the honest comparison.
| | Claude Opus 4.7 | GPT-5.5 (Codex) |
|---|---|---|
| Released | April 16, 2026 | April 23, 2026 |
| SWE-bench Verified | 87.6% | competitive |
| SWE-bench Pro (real GitHub work) | 64.3% (industry highest) | ~58% |
| Terminal-Bench 2.0 (drive a real terminal) | 69.4% | 82.7% (state of the art) |
| Output token efficiency | baseline | 72% fewer tokens for the same answer |
| Context window | 1M (via API) | 400K (IDE), 1M (API) |
| Pricing per 1M tokens | $5 in / $25 out | $5 in / $30 out |
| Vision | 3.75MP (jumped from 1.15MP) | strong, no recent jump |
Neither model "wins." They optimize for different things, and that's actually useful.
Where Each One Pulls Ahead
After a week of poking at both, here's my read.
🔶 Opus 4.7 is better when the task is "understand this whole thing first, then change it carefully." Big codebases. Refactors. "Why does this break under that condition." Anything where the AI has to hold a lot in its head before touching anything. The 1M context window plus the highest SWE-bench Pro score in the industry is not a coincidence.
🔶 GPT-5.5 is better when the task is "go drive the terminal and finish this for me." Set up a project. Run the test suite, fix what fails, run it again. Generate a bunch of files in parallel. Anything agentic and "go do the thing." The Terminal-Bench score (82.7%) tells you the truth here. It's the best terminal driver currently shipping.
🔶 GPT-5.5 is also significantly cheaper in practice, despite a slightly higher per-token output price. The 72% reduction in output tokens means an average task costs less, even before you count the speed gain.
🔶 Opus 4.7 has a real lead on visual reasoning. The vision resolution jumped from 1.15MP to 3.75MP this release. If your work involves screenshots, charts, or mockups (looking at you, Amazon Creative Studio reviewers), this matters more than benchmark deltas.
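To make the cheaper-in-practice claim concrete, here's the back-of-envelope math using the pricing rows above (the one-million-token task size is an arbitrary reference point for comparison, not a measured workload):

```python
# Output pricing from the comparison table, $ per 1M output tokens.
OPUS_OUT_PRICE = 25.0
GPT_OUT_PRICE = 30.0
GPT_TOKEN_REDUCTION = 0.72  # GPT-5.5 emits ~72% fewer output tokens per task

reference_tokens = 1_000_000  # arbitrary task size for the comparison

opus_cost = reference_tokens / 1e6 * OPUS_OUT_PRICE
gpt_cost = reference_tokens * (1 - GPT_TOKEN_REDUCTION) / 1e6 * GPT_OUT_PRICE
# Despite the higher sticker price, GPT-5.5's effective output cost lands
# around $8.40 against Opus's $25.00 for the same amount of work.
```

The sticker price is 20% higher; the effective cost is roughly a third. That's why per-token pricing alone is a misleading way to compare these two.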
How To Use Both In Cursor
The good news: you don't have to pick one and stick with it. Cursor supports both, and switching takes one click.
For Opus 4.7:
1. Open the chat panel in Cursor
2. Click the model dropdown at the bottom
3. Select Claude Opus 4.7 (it's at the top of the list right now)
4. That's it. Your existing Cursor Pro plan covers usage, or you can bring your own Anthropic API key
For GPT-5.5 (via the Codex extension):
1. Open Cursor's Extensions panel
2. Search for "Codex (OpenAI)" and install
3. Sign in with your ChatGPT account (Plus, Pro, Business, Enterprise, or Go all work) or paste an OpenAI API key
4. Drag the Codex icon to your right sidebar so it sits next to your Cursor chat
5. Pick a mode: Chat, Agent, or Agent (Full Access)
6. Pick a reasoning effort: low, medium, or high
[IMAGE: Cursor sidebar showing both the native chat with Opus 4.7 selected, and the Codex extension panel side by side]
If you already pay for ChatGPT, your existing plan covers Codex. No second subscription needed. Same goes for Cursor Pro and Opus.
Pro Tip: Bind both to keyboard shortcuts so you can flip between them without breaking flow. I run Cmd+L for Cursor's native chat (Opus 4.7) and Cmd+Shift+K for Codex (GPT-5.5). Different jobs, different tools, one keypress apart.
My Workflow This Week
I've started using them in tag team. Plan and architect with Opus. Execute and orchestrate with GPT-5.5.
Concretely: when I'm starting a new internal tool, I ask Opus 4.7 to read the existing codebase, understand the patterns, and write the implementation plan. Then I hand the plan to Codex in Agent mode and let it actually build the thing while I'm in another meeting (see this issue's Nerd Byte for the "go do this while I'm asleep" pattern).
Opus is the architect. Codex is the foreman. Both feel less anxious than the previous generation. Both ask fewer permission questions before they get to the answer.
Should You Switch?
If you're already vibe coding and you're paying for both ChatGPT and Cursor (which I am), you don't have to choose. Use Opus 4.7 for thinking, GPT-5.5 for doing.
If you can only afford one, here's the honest split:
🔶 Lots of small, well-scoped tasks? GPT-5.5 wins on speed and cost.
🔶 Bigger refactors and "understand my whole codebase" work? Opus 4.7 still has the edge.
If you're not vibe coding yet at all, this isn't the news that should pull you in. Start with either one inside Cursor's free tier. The frontier models are now so close that the choice barely matters until you're shipping real volume.
My take: This is the first time in two years the gap between OpenAI and Anthropic at the top of the leaderboard genuinely feels like a rotation, not a winner. Both are excellent. Both are getting cheaper. Both are getting more agentic. Crazy times we live in.
I'll report back with real numbers after a full month of running them in parallel.
Are you Team Opus or Team Codex right now? Or running both like me? Hit reply and tell me what's working in your stack.
NEWS WORTH FOLLOWING
UPCOMING EVENTS
Time to tune in to the Go with the Flow Podcast where Danny McMillan and I dive into AI topics. We will be talking about Claude Code, specifically some of the newer roll-outs and ways to improve Claude Skills. Listen here!

Join this free webinar by Helium10 today!
https://streamyard.com/watch/qUmtMXiMKvKt

We hope you liked this edition of the AI for E-Commerce Newsletter! Hit reply and let us know what you think. Thank you for being a subscriber! Know anyone who might be interested in receiving this newsletter? Share it with them and they'll thank you for it! ~Ritu

