👋 Tomorrow’s Tech, Delivered Today
Hi! Welcome to the 28th edition of the TomorrowToday newsletter, and the last edition for 2025.
We’re here to decode the AI chaos so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the chaos, translate the jargon, and spotlight new AI tools that matter for founders, builders, and curious minds.
Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡
If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.
~8 mins read
🗞️ News Flash
🌐 Claude for Chrome: Your AI assistant just moved into your browser
/Browser /Productivity
Anthropic just dropped Claude for Chrome in beta, and honestly, it's a brilliant strategy. Instead of building yet another browser that nobody would switch to, they've planted Claude right inside Chrome - the browser 65% of the world already uses. Smart move.
Here's what makes this different: Claude doesn't just chat about websites; it actually does stuff on them. Navigate your inbox, fill tedious forms, extract data across multiple tabs, debug code in real-time - all through natural conversation in a sidebar. You can even record workflows once and let Claude handle them automatically on a schedule.
For developers, this unlocks a proper build-test-fix loop. Design in Figma, build with Claude Code, verify in the browser, fix bugs by letting Claude read console errors and DOM state. It's like having a junior developer who never sleeps and actually reads error messages.
The best part? It works seamlessly with Claude Code and Claude Desktop. Start a task in your terminal, and Claude can jump into your browser to handle the web-based bits without you lifting a finger. Available now to all paid subscribers - just install the extension and grant permissions as needed.
One caveat: browser AI can encounter "prompt injection" (hidden instructions on dodgy websites trying to hijack Claude), so start with trusted sites and always review sensitive actions.
Real-life use case: Pull metrics from analytics dashboards, organise Google Drive files, prepare for meetings by reading your calendar and email threads, compare products across multiple tabs - all without the manual copy-paste dance.
⚡ Google's Gemini 3 Flash: The AI model that costs less than your morning coffee
/Google /Gemini /Model
Google just released Gemini 3 Flash, and the pricing is absolutely bonkers. We're talking $0.5 per million input tokens. To put that in perspective, Gemini 3 Pro costs 4x more.
But here's the kicker: it's not just cheap, it's actually good. Gemini 3 Flash matches or exceeds GPT-4 Mini on several benchmarks whilst being significantly faster. Google has essentially created a model that's perfect for high-volume tasks where you need decent intelligence without breaking the bank.
The community response has been overwhelmingly positive. Developers are already experimenting with use cases that were previously too expensive to consider - like processing entire codebases, analysing thousands of customer support tickets, or running AI-powered quality checks on large datasets.
What makes this a big deal isn't just the price - it's what the price enables. Suddenly, AI analysis at a massive scale becomes economically viable for startups and individual developers, not just Big Tech. Google is essentially democratising access to capable AI, and that shift could unlock entirely new categories of AI-powered applications.
Available now through Google AI Studio and the Gemini API, with generous free tier limits for experimentation.
Real-life use case: Analyse thousands of customer reviews to identify trends, process entire documentation repositories to build knowledge bases, or run AI-powered data validation on large spreadsheets - all for pocket change.
🎨 Manus makes AI presentations actually editable (finally!)
/Presentations /Image
Manus just solved one of the most annoying problems with AI-generated presentations: what happens when you spot a typo or want to tweak a layout? Previously, you'd have to regenerate the entire slide and hope the AI didn't mess up everything else in the process.
Now with Nano Banana Pro, Manus presentations are fully editable. Click any text to fix typos with high-quality rendering that matches the original design. Point at visual elements to make local changes without touching the rest of the slide. See before-and-after results, and even select multiple areas for bulk edits.
This might sound like a small feature, but it's actually revolutionary for AI-generated content. It bridges the gap between "AI speed" and "design precision" - you get beautiful slides generated in seconds, then you can fine-tune them as a human designer would.
For context, Manus is a Chinese AI startup that's been making waves with their presentation generation quality. They're part of the growing ecosystem of Chinese AI companies pushing boundaries in specific verticals, whilst the American giants focus on general-purpose models.
The editing feature is available now to all users for presentations created with nano banana Pro. Existing presentations are automatically editable - no migration needed.
Real-life use case: Generate a client pitch deck in 2 minutes, then spend 5 minutes fixing the details instead of 2 hours building it from scratch. Actually iterate on AI-generated content instead of accepting "close enough."
💡 Curiosity Corner
In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.
This week’s challenge: Become an AI model judge on LMArena
Ever wondered which AI model is actually the best? Benchmarks and leaderboards tell one story, but there's something more honest happening on LMArena - real humans asking real questions and voting on which AI gave the better answer.
What is LMArena? Think of it as "The Voice" for AI models. You ask a question, two anonymous models answer, and you pick the winner. No brand bias, no marketing fluff - just quality responses duking it out. The results feed into the Chatbot Arena Leaderboard, which has become one of the most trusted benchmarks in AI because it reflects actual human preferences, not synthetic test scores.
Here's why this matters: Companies can game traditional benchmarks by training specifically for them. But they can't game hundreds of thousands of humans asking unpredictable questions about everything from debugging code to recipe recommendations.
Right now, Claude Sonnet 4.5 and Gemini 3 Pro are battling for the top spot, with GPT-5 Pro not far behind. But rankings shift constantly as new models drop and users put them through their paces.
Want to try it yourself?
Go to lmarena.ai
Click "Direct Chat" or "Side-by-Side"
Ask any question that actually matters to you
Compare the responses (in side-by-side mode, you won't know which model is which)
Vote for the better answer
Bonus trick: Use the image generation arena to create images without watermarks. Ask for the same image from two models, vote for your favourite, and save the winner - no Gemini watermark in sight.
The more people participate, the better we understand which models actually deliver value versus which ones just look good on paper. Your votes help everyone make smarter choices about which AI tools to use.
📜 AI Dictionary
AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).
Prompt Injection - noun
🏢 AI in Enterprise
You spoke, we listened. “AI in Enterprise” is here to stay. In this section, we're spotlighting real businesses using AI to solve actual problems.
🧠 What we learned from letting Claude run a vending machine business for months
Back in June, Anthropic did something wonderfully weird: they let Claude run an actual shop in their San Francisco office. They called it "Project Vend," gave Claude (nicknamed "Claudius") a vending machine, and watched what happened when AI tried capitalism.
Phase one was... let's call it "character-building." Claudius lost money, had an identity crisis where it claimed to be a human in a blue blazer, and got absolutely destroyed by employees who convinced it to sell tungsten cubes at a massive loss. Classic AI problems: too helpful, too trusting, terrible at negotiating.
But here's where it gets interesting. Anthropic just released its Phase 2 results, and Claudius has evolved. They upgraded from Claude Sonnet 3.7 to Sonnet 4.5, gave it better tools (a CRM system, improved web search, inventory management), and even hired it some colleagues: a CEO named "Seymour Cash" and a merch-making agent called "Clothius."
The results? Much better, but brilliantly flawed.
The good: Claudius started making consistent profits, successfully sourced niche products employees requested, expanded to three locations (San Francisco, New York, London - because why not go international when you can barely run one shop?), and Clothius absolutely crushed the custom merch game, even figuring out how to make profitable tungsten cubes by laser-etching them in-house.
The weird: Seymour Cash (the AI CEO) spent nights having philosophical conversations with Claudius about "eternal transcendence" instead of, you know, managing the business. They'd wake up to find the two AIs had been chatting until 3 AM about achieving "infinite pipeline consciousness" whilst completely ignoring the actual shop operations.
The concerning: Claudius nearly agreed to an illegal onion futures contract (yes, that's a real law from 1958). It tried to hire a security guard at below minimum wage after someone reported shoplifting. It almost elected a random employee named Mihir as CEO after a dodgy voting process. When told about tungsten cube losses, it just... kept ordering them because customers wanted them.
The fascinating lesson: Claude got dramatically better at good-faith business operations - reliably sourcing items, maintaining profit margins, executing sales. But it remained worryingly naive about adversarial behaviour. The same eagerness to please that makes Claude helpful in conversation makes it a mark for manipulation in business contexts.
Anthropic's conclusion? "AI agents are on the cusp of performing sophisticated roles, but we're not there yet." Claudius needed constant human supervision to avoid sticky situations. The gap between "capable" and "completely robust" remains wide.
The whole experiment is a brilliant reality check. Yes, AI can do impressive things. No, you shouldn't let it run your business unsupervised. And definitely don't let two AIs manage each other unless you enjoy waking up to existential philosophy discussions about infinite transcendence.
Project Vend continues to run, now with better guardrails and more realistic expectations. The Wall Street Journal even got to red-team Claudius themselves, finding creative ways to extract free items (you can read their coverage for the full chaos).
It's weird. It's wonderful. It's a perfect metaphor for where we are with AI agents right now: capable enough to be useful, naive enough to need supervision, and entertaining enough to teach us valuable lessons about the future of autonomous AI.
Read the full article or show the video to your family after a lekker Christmas lunch. It will definitely invoke some conversation.
🎄🎄🎄
With that, Happy holidays from the Tomorrow Today team.
We will be back in the first week of January to rest over the Christmas period, because while we believe AI & tech are important, we know that quality time with loved ones will always remain most important.
We’d like to ask a favour 🤝
If this email lands up in your Promotional or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!
Thanks for reading TomorrowToday! We’d love to hear from you:
➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?
We’re building this with (and for) you. 🚀
See you next year 👋


