👋 Tomorrow’s Tech, Delivered Today
Hi! Welcome to the 24th edition of the TomorrowToday newsletter.
We’re here to decode the AI chaos so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the noise, translate the jargon, and spotlight the new AI tools that matter for founders, builders, and curious minds.
Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡
If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.
~7 min read
🗞️ News Flash
🚀 Gemini 3: Google Just Took the Crown (And It's Not Even Close)
/Gemini /Benchmark
Google dropped Gemini 3 this week, and honestly? It's the kind of release that makes you sit back and think, "Okay, that's the bar now."
This model is performing at a level that frankly feels like cheating. On LMarena (the leaderboard everyone watches), it scored 1501 Elo - far enough ahead of the competition that we had to check if the numbers were right. On tests that measure actual reasoning (not just pattern matching), it's obliterating everything. Tests where the previous best was 0.5%? Gemini 3 hit 23.4%. Tests that literally measure novel problem-solving? It jumped from 4.9% to 31.1%. These aren't incremental improvements; these are quantum leaps.
What's wild is what you can actually do with it. Feed it a handwritten recipe, a YouTube lecture, a research paper, and a sketch on a napkin, and it synthesises them all into something coherent. You can ask it to build an interactive visualisation of a complex concept, write code to solve a problem, or brainstorm creative ideas. The model understands context so well that you spend far less time explaining yourself. It just... gets it.
Real-life use case: Product teams can dump their entire codebase (the model has a 1M-token context window), feature requests, and design docs into Gemini 3 and ask it to build the next feature, complete with tests. Researchers can feed it entire papers and ask it to find contradictions or suggest new experiments. Designers can describe a vision and get back interactive prototypes. It's the first model where the AI actually feels like it's thinking, not just predicting the next word.
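Want to try the "dump everything in" workflow yourself? Here's a minimal sketch using the google-genai Python SDK. The file names are hypothetical and the model ID is an assumption, so check Google's docs for the current Gemini 3 identifier before running it.

```python
# Minimal sketch: pack several project documents into one Gemini request.
# Assumptions: google-genai SDK installed (pip install google-genai),
# GEMINI_API_KEY set in the environment, model ID matches the current release.
from google import genai

client = genai.Client()  # picks up the API key from the environment

docs = []
for path in ["design_doc.md", "feature_requests.md", "payments_service.py"]:  # hypothetical files
    with open(path, "r", encoding="utf-8") as f:
        docs.append(f"--- {path} ---\n{f.read()}")

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumption: confirm the exact model name in the docs
    contents="\n\n".join(docs)
    + "\n\nPlan and implement the top feature request, including unit tests.",
)
print(response.text)
```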
🎨 Nano Banana Pro: AI Image Generation Just Grew Up
/Image /Editing /Integration
Google's image model (Nano Banana) was already good. The Pro version? It produces work you'd swear a human made.
The original was fun for quick mockups. This one is actually usable for real work. Crystal-clear text rendering (previously the weak point of all AI image generators). Studio-level control over lighting, camera angles, and aspect ratios. 2K resolution outputs. The ability to blend multiple reference images whilst keeping consistency. When you ask it to create an infographic or a historically accurate scene, it doesn't just guess - it actually knows the details.
Within hours of launch, designers and creatives were flooding social media with outputs that honestly look professional. Photorealistic product mockups. Intricate typographic designs. Complex infographics that rival actual graphic designers' work. The internet's collective reaction was basically: "Wait, the AI did that?"
📱 Check out this post for a few super cool creations.
Real-life use case: E-commerce teams can now photograph products in literally any setting without a photoshoot. Design agencies can rapid-prototype client concepts in minutes. Marketing teams can generate dozens of campaign variations and A/B test them. One agency we heard about just saved £15,000 on a photoshoot by using Nano Banana Pro instead. That's the kind of impact we're talking about.
🏗️ Google Antigravity: Development Just Got an AI Superpower
/AgentFirst /Development
This one's genuinely different. Google didn't just add another AI chatbot to an IDE; they reimagined what development could look like when you have actual intelligent agents running alongside you.
Here's the picture: You've got your familiar code editor for when you want hands-on control. But you also have what Google calls the "Manager Surface" - basically a command centre where you can spawn multiple AI agents that work independently whilst you work on something else. These agents don't just write code. They plan the architecture, write the code, run it, test it in a browser, find bugs, fix them, and then show you everything they did with screenshots and video walkthroughs.
Tell an agent: "Build me a flight-tracking dashboard with real-time updates." The agent breaks that down into steps (set up the backend, create the UI, integrate the API, handle edge cases), executes them all, tests everything, and reports back. Whilst it's doing that, you're free to do actual creative work instead of wrestling with syntax.
Real-life use case: Developers can delegate entire features to agents and come back when they're done. Startups can build prototypes in days instead of weeks. Junior developers have a senior-level partner handling the boilerplate whilst they focus on business logic. Agencies can parallelise work: multiple agents working on different features simultaneously, all coordinated and verified automatically.
💡 Curiosity Corner
In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.
This week’s challenge: Create something stunning with Nano Banana Pro 🍌
Nano Banana Pro is the kind of tool where the best way to learn is by playing. We've picked three prompts from Google's official showcase that show what's actually possible.
Here's how to do it:
Go to gemini.google.com
Click on the image generation button (Make sure you have “Thinking” mode activated)
Paste one of the prompts below
Let the magic happen
Try variations - change the colours, style, or composition and see how the model adapts
Share your creations with us (seriously, we'd love to see what you build)
Prompt 1 - Professional Product Photography:
Transform this product photo into a luxury lifestyle scene. Show the product being used in an elegant home setting with warm, natural window lighting. Include tasteful interior design elements that complement the product. Professional product photography style, 4:3
Prompt 2 - Build an infographic:
Create an infographic that shows how solar energy works, arranged on a clean light gray background. The visual story flows from left to right in clear steps with simple, clean black arrows guiding the eye. Keep it simple, clean, modern, and easy to understand. Format 16:9
Prompt 3 - See how well the model understands the world: it can work out what a specific place looked like at specific coordinates and a specific moment in time:
Create an image at 31.7785° N, 35.2296° E, April 3, 33 CE, 15:00 hours
The secret to Nano Banana Pro is specificity. The more detail you give it about lighting, composition, and style, the better the results. It's almost like it's actually listening.
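Prefer to script it rather than click through the Gemini app? The same prompts work through the Gemini API. Here's a rough sketch using the google-genai Python SDK; the model ID is an assumption (check the docs for the current Nano Banana Pro identifier), and the output handling follows the SDK's standard inline-image pattern.

```python
# Rough sketch: generate an image from one of the prompts above via the Gemini API.
# Assumptions: google-genai SDK installed, GEMINI_API_KEY set, and the model ID
# below corresponds to the current Nano Banana Pro release (verify in the docs).
from google import genai

client = genai.Client()

prompt = (
    "Create an infographic that shows how solar energy works, arranged on a clean "
    "light gray background. The visual story flows from left to right in clear steps "
    "with simple, clean black arrows guiding the eye. Keep it simple, clean, modern, "
    "and easy to understand. Format 16:9"
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumption: swap in the Pro model name if different
    contents=prompt,
)

# Image bytes come back as inline data parts; save the first one we find.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("solar_infographic.png", "wb") as f:
            f.write(part.inline_data.data)
        break
```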
🏢 AI in Enterprise
You spoke, we listened. “AI in Enterprise” is here to stay. In this section, we're spotlighting real businesses using AI to solve actual problems.
💼 The Monopoly Just Cracked
For years, Nvidia owned the AI hardware market completely. Want to build a frontier AI model? You needed their chips. Want to build anything serious? Same answer. That just changed forever.
Here's what's quietly revolutionary: Gemini 3 wasn't built on Nvidia chips. Not one. Google trained it entirely on their custom-designed TPUs (Tensor Processing Units), and it reached the top of every single benchmark. This breaks the narrative that's dominated the industry for the past five years.
For context: Nvidia controls roughly 80–95% of the AI accelerator market. They've had a stranglehold on any company that wants to do serious AI work. Need to train a model? Wait in queue, pay premium prices, and hope supplies don't run out. Google's just proved this is no longer necessary.
TPUs are custom-built chips designed specifically for AI mathematics. The new generation is designed for roughly a 2.8x performance increase over the previous one, and for large models TPUs have shown superior performance and efficiency, in some cases two to three times better performance per watt than comparable Nvidia GPUs. More importantly, they're Google's, which means Google controls their supply, price, and optimisation without answering to anyone else.
What's happening now is fascinating. Anthropic (OpenAI's closest competitor) just committed billions to TPU infrastructure. Midjourney is building on TPUs. Even companies with deep Nvidia relationships are quietly testing alternatives. Why? Because Google just proved you can reach the frontier without Nvidia's supply chain.
Here's the real kicker: Google owns everything. Custom silicon. Data centres. Software frameworks. Networking. Direct access to billions of users. OpenAI has ChatGPT and Microsoft. Anthropic has good research. But Google controls the entire stack from the chip you run the model on to the search box where billions of people use it daily.
This is vertical integration winning in real time. The company that controls every layer—hardware, software, distribution—will dominate the race. Nvidia's comfortable monopoly is officially under threat, and the AI landscape just became a lot more competitive.
Bottom line? Google didn't just release the best model this week. They proved that owning your infrastructure gives you an almost unbeatable advantage. Well done to Sundar Pichai and Demis Hassabis for making the future genuinely interesting.
📜 AI Dictionary
AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).
Tensor Processing Unit (TPU) - noun
A custom chip Google designed specifically for the heavy matrix maths behind AI models. Unlike general-purpose GPUs, TPUs are built purely for training and running neural networks, which is how Google trained Gemini 3 without touching an Nvidia chip.
We’d like to ask a favour 🤝
If this email ends up in your Promotional or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content every week, and your support is truly appreciated. Thanks!
Thanks for reading TomorrowToday! We’d love to hear from you:
➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?
We’re building this with (and for) you. 🚀
See you next Tuesday 👋


