👋 Tomorrow’s Tech, Delivered Today
Hi! Welcome to the 12th edition of the TomorrowToday newsletter.
We’re here to decode the AI chaos so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the chaos, translate the jargon, and spotlight new AI tools that matter for founders, builders, and curious minds.
Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡
If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.
~5 mins read
🗞️ News Flash
🍌 The mysterious "Nano-Banana" was Google’s new Gemini model all along
/Image /Benchmark
Remember last week when we told you about those Nano-Banana rumours floating around? Well, the cat's out of the bag - it was Google's Gemini 2.5 Flash Image model all along, and the internet is absolutely losing its mind over how good this thing is.
Here's why everyone's freaking out: you can upload any image, tell Nano-Banana to change your outfit, remove objects, or completely transform the scene, and it maintains perfect character consistency. This has been the holy grail problem in AI image generation - keeping faces, bodies, and objects looking the same across edits.
The model doesn't just do basic edits either. It understands complex instructions like "make me wear a vintage leather jacket instead of this hoodie" or "remove that person walking in the background" while keeping everything else pixel-perfect. The results are so good that people are calling it a Photoshop killer. And the best part. Using the API an single image costs $0.0034. Not only is nano-banana incredibly impressive, it is also cheap and fast generating an image in less than 10 seconds.
Bottom line: Google just solved one of AI's biggest image problems. Character consistency in image editing is no longer a pipe dream - it's reality, and it's available right now.
If you are interested to see examples of how good nano banana is, look at this post on X or read Google’s blog post.
Real-life use case: Edit photos, add elements to existing photos and replace you Adobe Photoshop license
🌐 Claude comes to Chrome
/Browser
While everyone expected Anthropic to follow OpenAI and Perplexity by building their own AI browser, they pulled a complete 180. Instead of asking you to switch browsers, Claude is coming to the browser you already use - Chrome.
The Claude for Chrome extension lets Claude read, click, fill forms, and navigate websites on your behalf. It can manage your calendar, draft email responses, handle expense reports, and basically become your digital assistant without you having to learn a new interface or change your workflow.
This strategy is brilliantly different. Rather than competing with Chrome's 68% market share, Anthropic is riding it. Why force users to adopt a new browser when you can just plug into their existing one?
There's a catch though - only 1,000 Max plan users get access initially, and Anthropic is being very careful about security. Their testing revealed that malicious websites could trick Claude into doing harmful things (like deleting emails), so they're rolling out safety measures gradually.
Bottom line: This could be the future of AI productivity - not new apps or browsers, but AI that seamlessly integrates into your existing workflow. If the security issues get sorted, this approach might just kill the standalone AI browser category entirely.
Want to no more, go and read the article for more information of what is possible.
Real-life use case: Have Claude manage your calendar, book appointments, fill out forms, and handle routine tasks directly in your browser without switching apps.
🎙️ OpenAI's voice agents just levelled up
/Productivity /Voice
OpenAI just moved its Realtime API out of beta, and the upgrades are impressive. The new gpt-realtime model can detect nonverbal cues, switch languages mid-conversation, and maintain naturally flowing dialogue - basically, it's getting eerily close to talking to an actual human.
The numbers tell the story: 82.8% accuracy on audio reasoning benchmarks, up from 65.6% in the previous version. That's a massive leap in just one update.
But here's what makes this release special - it's not just about better conversation. OpenAI added Model Context Protocol support, meaning these voice agents can now connect to external data sources and tools without custom integrations. Plus, they can process images alongside voice, so you can show the AI a photo while explaining what you need.
Bottom line: Voice agents are moving from novelty to necessity. With these improvements in natural conversation and system integrations, we're looking at AI assistants that could genuinely replace customer support calls and handle complex voice-based workflows.
If you want to learn more about the release, you can read more at OpenAI’s blog post.
Real-life use case: Replace customer support calls, conduct voice-based interviews, or have natural conversations with AI that can see and understand images you share.
💡 Curiosity Corner
In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.
This week’s challenge: Fix your vacation photos with Nano-Banana 🍌
The rumours are true - Google's Nano-Banana is absolutely incredible for image editing, and you can try it right now.
Here's how to get started:
Go to Google AI Studio
Sign in with your Google account (it's free)
Click on “Try nano-banana” or select Gemini 2.5 Flash Image (the actual name of the model)
Upload any photo you want to edit
Tell it exactly what you want to change or generate
Watch the magic happen
Real example: JT recently got back from vacation in Turkey, and in one of our best photos, someone was walking through the background with their phone in hand, completely ruining the photo. So he uploaded it to Nano-Banana and simply said, "Remove the person walking in the background with the phone."
The result? A perfect vacation photo without any trace that someone was ever there. The lighting, shadows, and background all adjusted naturally - you'd never know the image was edited.

Before asking nano-banana to remove the guy in the blue shirt.

The image that nano-banana returned in <10 seconds on the first try.
Try it with your own photos - change outfits, remove unwanted objects, or even completely transform scenes while keeping everything else consistent. This is the future of photo editing, and it's available right now.
📜 AI Dictionary
AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).
Prompt Injection - noun
⚡ Weird & Wonderful
In this section, we aim to spotlight something weird & wonderful in the world of AI.
This week: Your AI questions have a hidden environmental cost
Every time you ask Gemini a question, you're triggering a complex chain of environmental impact that most of us never think about. Google just released the first detailed breakdown of what each AI query actually costs our planet.
The numbers they shared seem small: each text prompt uses energy equivalent to watching TV for 9 seconds, emits 0.03 grams of CO2, and consumes about 5 drops of water. Google also claims Gemini became 33x more energy efficient over the past year.
But here's the reality check: those millions of daily AI interactions add up fast. And critics argue Google's methodology misses the bigger picture - like the massive water consumption by power plants generating electricity for data centres, which could dwarf the direct usage numbers.
As AI becomes as common as Google search, we're looking at a sustainability challenge that could reshape how we think about digital convenience. The question isn't whether AI will have an environmental impact - it's whether that impact will be manageable as billions of people start asking AI dozens of questions daily.
The transparency is a step forward, but the real test will be whether the tech industry can make AI sustainable at the scale we're heading toward. Because right now, every "Hey Claude" or "OK Google" is leaving a footprint we're only beginning to measure.
We’d like to ask a favour 🤝
If this email lands up in your Promotional or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!
Thanks for reading TomorrowToday! We’d love to hear from you:
➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?
We’re building this with (and for) you. 🚀
See you next Tuesday 👋


