👋 Tomorrow’s Tech, Delivered Today
Hi! Welcome to the 31st edition of the TomorrowToday newsletter.
We’re here to decode the AI noise so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the chaos, translate the jargon, and spotlight the new AI tools that matter for founders, builders, and curious minds.
Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡
If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.
~10 mins read
🗞️ News Flash
🤝 Cowork: Claude's answer to "but I'm not a developer"
/Productivity /Claude /Automation
When Anthropic launched Claude Code for developers, something unexpected happened. Developers used it for coding, sure - but then they started using it for everything else. Reorganising messy downloads. Converting screenshots into spreadsheets. Drafting reports from scattered notes. Turns out, the agentic coding tool was brilliant for non-coding work too.
So Anthropic built Cowork - essentially Claude Code for the rest of us. It launched today as a research preview for Claude Max subscribers on macOS, and it's a glimpse into how we'll work with AI assistants in the very near future.
Here's how it works: You give Claude access to a folder on your computer. That's it. Claude can then read, edit, or create files in that folder autonomously. Ask it to "reorganise my downloads by sorting and renaming each file" or "create a spreadsheet with expenses from these receipt screenshots" and Claude makes a plan, executes it, and keeps you updated along the way.
The difference from a regular Claude conversation? Agency. Once you set a task, Claude works through it independently - much like leaving instructions for a coworker rather than micromanaging every step. You can queue up multiple tasks and let Claude handle them in parallel. No more copying and pasting outputs, no more manually converting formats, no more back-and-forth.
When you've mastered the basics, Cowork gets even more powerful. Claude can use your existing connectors (like Notion, Google Drive, Slack), plus a new set of skills for creating documents, presentations, and other files. Pair it with Claude in Chrome, and it can complete tasks that require browser access, too.
The obvious question: Is this safe? Anthropic's been thoughtful here. You choose which folders and connectors Claude can access - it can't touch anything without explicit permission. Claude also asks before taking significant actions, so you can steer it if needed.
That said, there are risks. By default, Claude can take potentially destructive actions like deleting local files if instructed (though misinterpretation is possible, so be clear with instructions). There's also the risk of "prompt injections" - attempts by attackers to alter Claude's behaviour through content it encounters online. Anthropic's built defences, but agent safety is still an active area of development.
This is a research preview, which means Anthropic is releasing it early to learn what people actually use it for. They're planning rapid improvements, including cross-device sync and bringing it to Windows. If you're a Claude Max subscriber, you can try it now by downloading the macOS app and clicking "Cowork" in the sidebar. Others can join the waitlist.
The bigger picture? This is what "AI doing work for you" actually looks like in practice. Not chatbots that answer questions, but autonomous agents that complete entire workflows whilst you focus on higher-level thinking.
Source: Cowork
Real-life use case: Give Claude access to your Downloads folder and ask it to "organise all receipts from the last month, extract the amounts and categories, and create a spreadsheet tracking my expenses" - then watch as it sorts through dozens of files, extracts the data, and delivers a formatted spreadsheet ready for your tax return.
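To make that workflow concrete, here's a minimal sketch of the kind of script an agent like Claude might generate for the receipts task. The filename pattern, categories, and function names are hypothetical examples, not anything Cowork actually produces:

```python
# Illustrative sketch: collect receipt files, parse amount and category
# from the filename, and write a simple expense-tracking CSV.
# Assumes a hypothetical naming scheme like "receipt_groceries_249.99.txt".
import csv
import re
from pathlib import Path

def organise_receipts(downloads: Path, out_csv: Path) -> list[dict]:
    """Scan a folder for receipt files and write an expenses CSV."""
    rows = []
    pattern = re.compile(r"receipt_(?P<category>\w+)_(?P<amount>[\d.]+)")
    for f in sorted(downloads.glob("receipt_*")):
        m = pattern.match(f.stem)
        if m:
            rows.append({
                "file": f.name,
                "category": m["category"],
                "amount": float(m["amount"]),
            })
    with out_csv.open("w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["file", "category", "amount"])
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

The point isn't the code itself - it's that with Cowork, describing the outcome in plain English replaces writing (or commissioning) scripts like this one.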
🍎 Apple finally admits defeat, hands Siri over to Google
/Partnerships /AIModels /Siri
In a move that surprises absolutely no one who's ever tried to use Siri for anything remotely complex, Apple just announced a multi-year deal with Google to power the next generation of Siri using Gemini models.
Let's be honest - Siri has been disappointing users for years. Whilst ChatGPT, Claude, and Gemini have been getting smarter, Siri's been stuck setting timers and occasionally understanding what song is playing. Apple's late arrival to the AI race has been painful to watch, with delayed Siri upgrades, executive shake-ups, and a lukewarm reception to their initial AI features.
Now Apple's turned to Google - the same company that's been paying them tens of billions annually to be the default search engine on Apple devices. Except this time, the tables have turned. Apple evaluated the options and concluded that "Google's AI technology provides the most capable foundation for Apple Foundation Models."
This is massive for Google. Their Gemini models already power much of Samsung's "Galaxy AI", but the Siri deal unlocks Apple's installed base of over two billion active devices. It's a major vote of confidence for Alphabet, whose market valuation hit $4 trillion on the news (the stock jumped 65% last year on AI momentum).
For Apple users? This might actually be the hope you've been waiting for. A Siri that can handle complex queries, understand context, and actually be useful beyond "Hey Siri, set a timer for 10 minutes." The revamped Siri is coming later this year, with Google's models also powering other future Apple Intelligence features.
The obvious question: what about OpenAI? Apple rolled out ChatGPT integration into devices in late 2024, allowing Siri to tap into the chatbot's expertise for complicated questions. That partnership isn't going away (for now), but Google's deal positions Gemini as the default intelligence layer whilst ChatGPT remains for complex, opt-in queries. It's a supporting role, not the starring one.
Apple is emphasising that Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, maintaining their "industry-leading privacy standards." The question of whether Gemini will run fully on-device or require cloud processing isn't entirely clear yet, though Apple's focus on on-device processing and Private Cloud Compute suggests a hybrid approach - simple queries on-device, complex reasoning in the cloud with privacy protections.
The deal deepens a relationship that's already worth billions. Google pays Apple to be the default search engine. Now Google's models will be the default AI brain. For Google, it's a defensive move against OpenAI's early lead. For Apple, it's an admission that building frontier AI models from scratch isn't its strength.
The financial details weren't disclosed, but you can bet Google's paying handsomely for this privilege. When the revamped Siri launches later this year, we'll see if it was worth it.
Real-life use case: Later this year, ask Siri complex questions like "Compare the features of the top three noise-cancelling headphones under R5,000 and tell me which would be best for my daily commute" and actually get a thoughtful, accurate response powered by Gemini's reasoning capabilities - instead of Siri suggesting you search the web.
🛒 Google just rewrote the rules of online shopping
/Commerce /Protocol /AI-Agents
If you thought e-commerce was just about clicking "Add to Cart", Google's about to blow your mind. They just launched the Universal Commerce Protocol (UCP) - think of it as the HTTP of shopping, except instead of humans clicking through websites, AI agents handle everything autonomously.
Here's why this matters: For 20 years, online shopping followed the same boring flow - search, scroll through ads, click product pages, checkout. UCP breaks that entirely. Now it's just intent → agent reasoning → purchase. No clicks. No SEO games. No funnels.
UCP is an open standard that creates a shared language for AI agents, merchants, and payment systems to transact end-to-end. Discovery, price negotiation, checkout, even returns - all handled by software. And this isn't some vaporware announcement. Google launched with heavyweight partners: Shopify, Walmart, Target, Etsy, Wayfair, Visa, Stripe, and Adyen. Over 20 partners on day one.
What makes this properly impressive is that it solves e-commerce's nightmare integration problem. Normally, every new platform needs custom integrations with every merchant - an N×N problem that's crippled innovation for years. UCP kills that. One integration, and any AI agent can transact with any merchant.
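Here's a toy illustration of why a shared protocol collapses the integration problem. None of these message shapes or function names come from the actual UCP spec - they just show the pattern:

```python
# Illustrative only. Without a standard, every agent needs a bespoke
# adapter per merchant: agents x merchants integrations to build and
# maintain. With a shared protocol, each side implements one translation
# to a common message shape, and any agent can talk to any merchant.

def to_protocol_message(intent: str, budget: float, currency: str) -> dict:
    """Build a common purchase-intent message (hypothetical shape)."""
    return {
        "type": "purchase_intent",
        "intent": intent,
        "budget": {"amount": budget, "currency": currency},
    }

def merchant_handle(message: dict) -> dict:
    """Any merchant implementing the protocol can process the message,
    no matter which agent produced it."""
    if message["type"] != "purchase_intent":
        raise ValueError("unsupported message type")
    return {"status": "quoted", "intent": message["intent"]}
```

One translation per participant instead of one per pair - that's the N×N problem reduced to N.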
But here's the kicker: UCP doesn't replace existing infrastructure. It connects it. It works with Agent2Agent (A2A) for agent communication, Model Context Protocol (MCP) for shared context, and Agent Payments Protocol (AP2) for secure payments. Google's building rails, not a walled garden.
And Google's uniquely positioned to win this. They already own global search intent, the world's largest product graph, Gemini AI embedded everywhere, and distribution through Search, Android, and YouTube. UCP turns all that into infrastructure that merchants and agents can build on.
The implications are wild. Brands won't compete for attention anymore - they'll compete to be chosen by machines. Websites become optional. Checkout buttons become legacy UI. We're entering an era of quiet, autonomous, everywhere commerce where AI agents handle purchasing whilst you sleep.
Shopify's already all-in, co-developing UCP with Google. In the coming months, Google's rolling out checkout directly in AI Mode in Search and Gemini, Business Agent (think virtual sales associates for brands), and Direct Offers that let retailers present exclusive discounts in AI search results.
Real-life use case: Imagine telling your AI assistant "I need running shoes for trail running, under R2,000, available for delivery this week" and having it autonomously search across all retailers, negotiate the best price, apply loyalty rewards, complete checkout, and confirm delivery - all without you opening a single browser tab.
💡 Curiosity Corner
In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.
This week's challenge: Try on that designer bag you've been eyeing 👜
We've all been there. Scrolling Instagram, seeing an influencer rock a Longchamp bag that looks incredible on them. But will it suit you? Enter Doppl - Google's experimental AI app that lets you virtually try on outfits using your own photos.
This isn't Google Shopping's basic try-on feature. Doppl uses generative AI to create static images or animated videos of you wearing clothes from shopping sites or your own photos. Think of it as your personal fashion playground.
Here's how to give it a go:
Step 1: Download and set up
Download Doppl from the Google Play Store or App Store (currently US-only, 18+)
Sign in with your Google account
Upload a high-resolution photo of yourself (512x1024+ pixels, front-facing, plain background works best)
Step 2: Try on outfits
Browse the personalised outfit feed or search for specific styles
Tap "Try On Me" to generate your image in that outfit
The AI will render you wearing the clothes
Step 3: Get creative
Adjust poses (options are limited but improving)
Create video animations of walks or turns
Bookmark your favourite looks or share them on socials (tag @GoogleLabs)
Pro Tips:
Upload clear, full-length body photos with good lighting for the most accurate results
You've got monthly generation limits that refresh automatically
If you're privacy-focused, toggle "Product improvement" off in settings
You can delete all your data anytime (it's held for 30 days)
The fun angle? Spot luxury fits on yourself without spending a cent. See how that R15,000 handbag or designer jacket looks on your body before even considering the purchase. It's AI disruption meeting practical fashion decisions.
Fair warning: Doppl is experimental, US-only for now, and results can vary (the AI sometimes mismatches body proportions). But it's a fascinating glimpse into how AI is changing online shopping. Global rollout is expected soon, so South African fashionistas - watch this space.
🏢 AI in Enterprise
In this section, we're spotlighting real businesses using AI to solve actual problems. But this week, we're switching it up with a use case that's equal parts impressive and hilarious.
This week: Shopify's CEO builds an MRI viewer in under an hour 🧠
Tobi Lütke, CEO of Shopify, recently shared something brilliant on X (formerly Twitter) that perfectly captures where we are with AI capabilities.
The problem? Lütke received his annual MRI scan results on a USB stick, but accessing the data required expensive, restrictive commercial Windows software. He was on his Mac, didn't want to mess with Windows, and thought, "Why not just ask AI to handle this?"
So he fed the MRI data directly into Claude and asked it to build a custom web app. One prompt. That's it. In under an hour, he had a fully functional HTML-based viewer that visualised his scans interactively - allowing him to zoom, slice through 3D volumes, and measure features. Far beyond what the commercial software offered.
Then he gave Claude one more prompt, and it annotated everything with medical findings.
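The heart of any scan viewer is slicing a 3D volume of intensities into 2D images. Here's a toy illustration of that core operation - not Lütke's actual code, and real MRI data arrives as DICOM files rather than plain lists:

```python
# Toy sketch of the core operation in any scan viewer: cutting 2D slices
# out of a 3D volume of intensity values (here, a plain nested list).
def axial_slice(volume, z):
    """Return the 2D slice at depth z."""
    return volume[z]

def sagittal_slice(volume, x):
    """Return the 2D slice at column x, cutting across all depths."""
    return [[row[x] for row in plane] for plane in volume]

# A tiny 2x2x2 "volume" of intensity values.
volume = [
    [[1, 2],
     [3, 4]],
    [[5, 6],
     [7, 8]],
]
```

An interactive viewer is essentially this operation wired to a slider, plus zoom and measurement on top - exactly the kind of well-trodden pattern an AI model can assemble quickly.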
The lesson? AI is democratising access to your own data. No more vendor lock-in. No more paying for expensive software licenses. No more being forced onto specific operating systems.
But there's a deeper insight here. Lütke emphasised "reflexively" reaching for AI. A year ago, this would've been impossible. He would've needed to either install the Windows software, hire a developer, or just accept the limitation. Today? It's a solved problem in 60 minutes.
That's the shift. We're training our brains to reach for AI first. When you tinker with these tools enough, you develop an intuition for what's possible. And what's possible is expanding faster than most people realise.
📜 AI Dictionary
AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).
Agent-to-Agent (A2A) - noun
An open protocol, originally developed by Google, that gives AI agents a common language to discover one another, exchange messages, and collaborate on tasks - regardless of which company built them or which model powers them. It's one of the standards underpinning agentic commerce efforts like UCP (covered above).
We’d like to ask a favour 🤝
If this email ends up in your Promotional or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!
Thanks for reading TomorrowToday! We’d love to hear from you:
➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?
We’re building this with (and for) you. 🚀
See you next Tuesday 👋