👋 Tomorrow’s Tech, Delivered Today
Hi! Welcome to the 25th edition of the TomorrowToday newsletter.
We’re here to decode the AI chaos so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the noise, translate the jargon, and spotlight the new AI tools that matter for founders, builders, and curious minds.
Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡
If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.
~7 mins read
🗞️ News Flash
😲 Claude Opus 4.5: Anthropic Said "Hold My Beer"
/Claude /Anthropic /Coding /Benchmarks
Just a week after Google launched Gemini 3 Pro, Anthropic released Claude Opus 4.5 and basically said, "We're not done yet." On SWE-bench Verified (which measures real-world software engineering ability), Opus 4.5 scored 80.9%, outperforming OpenAI's GPT-5.1-Codex-Max at 77.9% and Google's Gemini 3 Pro at 76.2%.
But here's the plot twist: Anthropic just slashed pricing to $5 per million input tokens and $25 per million output tokens - a jaw-dropping 67% reduction from the previous Opus generation. Google's Gemini 3 Pro comes in at $2–4 input / $12–18 output for standard contexts, which looks cheaper on paper. But on actual problem-solving for developers? Claude's token efficiency means you often need fewer tokens to get the right answer, making the real cost-per-solved-problem competitive or better.
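To make the cost-per-solved-problem point concrete, here's a rough back-of-envelope calculation using the published per-token prices above. The token counts are hypothetical placeholders for illustration, not benchmark figures:

```typescript
// Rough cost-per-task comparison using the published per-token prices.
// Token counts below are hypothetical placeholders, not benchmark data.

interface ModelPricing {
  name: string;
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

function taskCostUSD(p: ModelPricing, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * p.inputPerMTok +
         (outputTokens / 1_000_000) * p.outputPerMTok;
}

const opus45: ModelPricing = { name: "Claude Opus 4.5", inputPerMTok: 5, outputPerMTok: 25 };
// Using the top of Gemini 3 Pro's quoted $2–4 / $12–18 range:
const gemini3: ModelPricing = { name: "Gemini 3 Pro", inputPerMTok: 4, outputPerMTok: 18 };

// Suppose (hypothetically) the more token-efficient model solves a task
// in 40k output tokens where another model needs 70k:
console.log(taskCostUSD(opus45, 100_000, 40_000).toFixed(2));  // "1.50"
console.log(taskCostUSD(gemini3, 100_000, 70_000).toFixed(2)); // "1.66"
```

Cheaper list prices don't automatically mean cheaper answers - once token efficiency differs, the per-task cost can flip.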
Anthropic also released effort controls on the API - you can now tune between speed and reasoning depth. At medium effort, Opus 4.5 matches Sonnet 4.5's coding performance while using 76% fewer output tokens. Translation: Frontier AI just got a lot more accessible.
Real-life use case: A South African fintech startup building with Opus 4.5 can now afford to run complex multi-step agent workflows (customer service bots, financial analysis, code generation) at scale without burning through budgets. Smaller teams that previously needed Sonnet as a compromise now have true frontier capability within reach.
👓 Alibaba Launches Quark Glasses While Meta Watches
/Alibaba /Wearables /China /AI-Race
Alibaba released its new Quark AI Glasses series in China this week: the S1 flagship with dual micro-OLED displays at ¥3,799 (~R7,200), and the lifestyle-focused G1 at ¥1,899 (~R3,600). Both run on Alibaba's Qwen AI and integrate deeply with the entire Alibaba ecosystem - Alipay payments, Taobao shopping, and music streaming platforms.
Meanwhile, Meta's Ray-Ban Display glasses launched at $799 (~R14,800) with a monocular AR display controlled by a neural wristband. Meta's standard Gen 2 glasses sit at $379 (~R7,000).
The price gap is telling, but the real story is geopolitical. This mirrors the 1960s space race between the US and USSR — except instead of the moon, the prize is who controls the future through AI-integrated wearables. Alibaba is embedding AI into every Chinese consumer touchpoint; Meta is betting on immersive AR. The country that nails this technology controls surveillance, commerce, and information flow for billions of people. This race is heating up fast, and there's no neutral ground.
Real-life use case: An e-commerce merchant in South Africa could eventually use Quark Glasses to manage inventory, process payments, and assist customers in real-time through a heads-up display. Meta's approach offers richer AR overlays. Both represent the next battleground in consumer tech.
🛍️ ChatGPT's New Shopping Assistant: The Future of Product Discovery
/OpenAI /ChatGPT /Ecommerce /ConsumerAI
OpenAI quietly launched Shopping Research in ChatGPT this week. Instead of tab-hopping through 47 product review sites, you just tell ChatGPT what you want - "Find the quietest cordless stick vacuum for a small apartment" - and it builds a personalised buyer's guide in minutes.
The magic: Shopping Research is powered by GPT-5 mini, specifically trained on retail tasks with reinforcement learning. It researches product pages, cross-references prices and reviews, learns your preferences from past conversations (if memory is on), and delivers a curated guide with tradeoffs and alternatives.
It's rolling out to all users (Free, Go, Plus, Pro) with nearly unlimited usage through the holidays. The feature also appears in ChatGPT Pulse, which proactively suggests guides based on your conversation history.
Real-life use case: A small business owner in South Africa shopping for office furniture can now get AI-powered recommendations tailored to local pricing, delivery times, and warranty options, without relying on paid sponsored ads from retailers.
💡 Curiosity Corner
In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.
This week's challenge: Build a Flight Tracker App in 4 Hours (Yes, Really) 🛫
Last week, Google launched Antigravity, a game-changing AI-powered development platform that lets you build full applications by orchestrating AI agents instead of writing code line by line. We're talking scaffolding, API integration, design, testing - all coordinated by AI agents running in parallel.
Here's the deal: Instead of becoming a full-stack engineer, you become an orchestrator - you give high-level instructions, review AI-generated plans, approve them, and watch agents handle the heavy lifting. One agent builds the app, another researches APIs, a third designs the UI, and a fourth runs tests - all at the same time.
Google shared a complete walkthrough on YouTube showing developers building a flight tracker app with live Aviation Stack API data, Google Calendar integration, and a custom logo in one session.
Here's how to get started:
Step 1: Download Antigravity
Head to https://antigravity.google/, download for macOS, Windows, or Linux, and sign in with your Google account.
Step 2: Create a Workspace
Open the app, click "Open Folder," create an empty folder called flight-tracker, and let Antigravity load it.
Step 3: Give Your First Instruction
In the Agent Manager, write: "Build me a flight lookup Next.js web app where users enter a flight number and get departure time, arrival time, timezones, start location, and end location. For now, use mock data."
Watch the agent scaffold a full Next.js project automatically.
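Under the hood, the mock-data version of this step boils down to a typed flight record and a lookup helper like the sketch below. The field names and sample flight are assumptions for illustration - the agent's generated code will differ:

```typescript
// Illustrative sketch of the mock flight data the Step 3 prompt asks for.
// Field names and the sample flight are hypothetical.

interface FlightInfo {
  flightNumber: string;
  departureTime: string;    // ISO 8601, with offset
  arrivalTime: string;      // ISO 8601, with offset
  departureTimezone: string;
  arrivalTimezone: string;
  origin: string;
  destination: string;
}

const MOCK_FLIGHTS: Record<string, FlightInfo> = {
  SA203: {
    flightNumber: "SA203",
    departureTime: "2025-12-02T19:30:00+02:00",
    arrivalTime: "2025-12-03T06:15:00+00:00",
    departureTimezone: "Africa/Johannesburg",
    arrivalTimezone: "Europe/London",
    origin: "Johannesburg (JNB)",
    destination: "London Heathrow (LHR)",
  },
};

// Lookup helper the UI can call while the app still runs on mock data:
function lookupFlight(flightNumber: string): FlightInfo | undefined {
  return MOCK_FLIGHTS[flightNumber.toUpperCase()];
}
```

Starting on mock data like this lets the agent nail the UI and data flow first, then swap in the live API in Step 5.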
Step 4: Review the Implementation Plan
The agent generates a plan document. Review logic, architecture, and dependencies. Add comments. The agent learns and improves.
Step 5: Research & Integrate Live APIs
Spin up a second agent: "Look up the Aviation Stack API. I have an API key. Create a utility function in /utils/aviationStack.ts." The agent researches, tests, and writes the code without touching your main app.
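For reference, a minimal sketch of what such a utility might look like. The endpoint and parameter names follow Aviation Stack's public docs at the time of writing, but treat them as assumptions and verify against the agent's generated code:

```typescript
// Hypothetical sketch of /utils/aviationStack.ts.
// Endpoint and query parameter names are assumptions; check the
// Aviation Stack docs and the agent's output before relying on them.

const BASE_URL = "https://api.aviationstack.com/v1/flights";

// Pure URL builder, kept separate so it's easy to unit-test.
export function buildFlightsUrl(apiKey: string, flightIata: string): string {
  const params = new URLSearchParams({
    access_key: apiKey,
    flight_iata: flightIata,
  });
  return `${BASE_URL}?${params.toString()}`;
}

// Thin fetch wrapper (requires a runtime with global fetch, e.g. Node 18+).
export async function fetchFlight(apiKey: string, flightIata: string): Promise<unknown> {
  const res = await fetch(buildFlightsUrl(apiKey, flightIata));
  if (!res.ok) throw new Error(`Aviation Stack request failed: ${res.status}`);
  const body = await res.json();
  // The API nests results under `data`; return the first match if present.
  return Array.isArray(body.data) ? body.data[0] : body;
}
```

Keeping the URL construction pure is also what lets the browser agent's tests cover it without hitting the live API.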
Step 6: Refactor in the Editor
Press Command + E to open the editor. Delete the mock data, and AI autocomplete suggests the correct API calls. Done in seconds.
Step 7: Add Features in Parallel
Spin up a third agent for design: "Create logo mockups — one minimalist, one classic aviation theme. Make it a favicon."
While design happens, your main app finishes. Everything runs at the same time.
Pro Tips:
Use Agent-Assisted Development mode for the best balance of automation and control
Parallel agents mean design, code, and testing happen simultaneously
The browser agent auto-tests by clicking and filling forms
Each agent generates artefacts, so you always know what's happening
Key Takeaway:
By the time you'd finish project scaffolding in a traditional IDE, Antigravity users have a fully functional, integrated, tested app with custom branding. Drag your engineering buddies into the chat and tell them you just built a flight tracker in 4 hours with Google Antigravity. Their jaws will hit the floor.
🏢 AI in Enterprise
You spoke, we listened. “AI in Enterprise” is here to stay. In this section, we're spotlighting real businesses using AI to solve actual problems.
This week: OpenAI's Monetisation Crisis (And Why It Matters for Everyone) 💰
OpenAI has a math problem. A leaked internal test shows they're preparing ads for ChatGPT - and based on financial realities, they really need them.
Here's the situation: OpenAI reportedly has $30 billion in spending commitments (infrastructure, salaries, partnerships) against roughly $10 billion in annual revenue. They're underwater. Meanwhile, fewer than 5% of ChatGPT's 800+ million weekly users are paid subscribers. The unit economics don't work.
Enter ads. OpenAI is testing ad placements in ChatGPT's search feature - think Google's model, but supercharged. The difference is critical: OpenAI knows more about user intent than Google. When you ask ChatGPT to "find a laptop under R15,000 for programming," the system understands context, your skill level, past preferences, and professional background. Ads served at that moment could be far more valuable to merchants than generic search ads.
The leaked test shows ads initially appearing in search results only. Expect expansion. This isn't just about closing OpenAI's funding gap - it's a fundamental shift in how AI-powered commerce works. Merchants will compete for relevance in AI-generated recommendations, not just ad placement. If you sell anything, your future depends on being AI-discoverable.
The lesson: Monetisation through ads is inevitable for consumer AI. But the real story is how this reshapes the entire e-commerce landscape. The winners will be products that align with genuine user needs, not just the highest bidders.
📜 AI Dictionary
AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).
Prompt Injection - noun
A trick where attackers hide malicious instructions inside content an AI reads - a webpage, email, or document - so the model ignores its original instructions and follows the attacker's instead. Think of it as the AI-era cousin of SQL injection, and the reason to be careful about which data sources and tools your AI agents can touch.
We’d like to ask a favour 🤝
If this email lands in your Promotions or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!
Thanks for reading TomorrowToday! We’d love to hear from you:
➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?
We’re building this with (and for) you. 🚀
See you next Tuesday 👋