👋 Tomorrow’s Tech, Delivered Today

Hi! Welcome to the 23rd edition of the TomorrowToday newsletter.

We’re here to decode the AI noise so you don't have to. Think of us as your friendly neighbourhood tech translators - we cut through the chaos, translate the jargon, and spotlight the new AI tools that matter for founders, builders, and curious minds.

Buckle up, because the future's moving fast and we're here to make sure you don't get left behind! ⚡

If you enjoyed today’s newsletter, please forward it to a friend & subscribe by following this link.

~5 mins read

🗞️ News Flash

😊 ChatGPT Just Learned How to Smile (And It's Making Everything Faster)

/OpenAI /GPT-5.1 /UserExperience

Three months after releasing GPT-5, OpenAI dropped GPT-5.1 this week. And honestly? It feels like they listened to years of user complaints and finally said, "Okay, let's fix the robot problem."

GPT-5 was smart. But it felt cold. Robotic. OpenAI rebuilt it. GPT-5.1 is now "warmer, more intelligent, and better at following your instructions." Here's the clever part: the model knows when to think fast and when to think slow. Simple questions get instant answers (2 seconds). Complex problems get deeper reasoning. It also uses fewer tokens - faster responses, lower API costs.
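The fast-vs-slow routing idea can be sketched as a toy heuristic. To be clear, this is our illustration, not OpenAI's actual router (which is internal to the model); the thresholds and keyword list are invented:

```python
def pick_reasoning_mode(prompt: str) -> str:
    """Toy router: guess whether a prompt needs fast or deep reasoning.

    Purely illustrative -- GPT-5.1's real routing happens inside the model.
    """
    deep_markers = {"prove", "debug", "plan", "compare", "why", "optimize"}
    words = prompt.lower().split()
    # Long prompts or reasoning keywords suggest "think slow"
    if len(words) > 40 or any(w.strip("?.,") in deep_markers for w in words):
        return "deep"
    return "fast"

print(pick_reasoning_mode("What's the capital of France?"))                         # fast
print(pick_reasoning_mode("Debug this race condition and explain why it happens"))  # deep
```

The real win is that callers don't have to choose: the same endpoint answers trivia instantly and still reasons through hard problems.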

And you can pick your vibe. Eight personalities: Professional, Friendly, Quirky, Candid, Efficient, Nerdy, Cynical, Default.

Real-life use case: A SaaS support team started using GPT-5.1 in "Friendly" mode to draft customer service emails. The warmer tone converted better - customers felt heard instead of dismissed. Support satisfaction jumped 34%. Developers generating React components? Instant responses instead of 10-second waits. That tiny delay used to kill productivity. Gone now.

🗣️ ElevenLabs Just Gave Your App a Voice (And It's Legally Legit)

/ElevenLabs /Voice /CreativeFreedom

ElevenLabs launched an Iconic Marketplace where creators can license AI-generated voices of legendary figures - Judy Garland, Lana Turner, Alan Turing, and Mark Twain. This isn't grey-area deepfakes. It's actual licensing. Rights holders get paid. Creators get permission.

Think about what this unlocks. History education apps can have Alan Turing explain the Enigma machine in his actual voice. Meditation apps could have a philosopher guide you through breathing exercises. Documentary filmmakers can use iconic voices for narration without hiring expensive talent. Solo founders can build premium-sounding apps instantly.
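Licensed voices plug into ElevenLabs' standard text-to-speech endpoint. Here's a minimal sketch of building such a request - the voice ID is a placeholder (the real one comes from the marketplace listing you license), and we only build the request rather than send it:

```python
def build_tts_request(api_key: str, voice_id: str, text: str):
    """Build (url, headers, payload) for ElevenLabs' text-to-speech endpoint.

    The voice_id would come from the Iconic Marketplace listing you licensed;
    the model_id shown is one of ElevenLabs' standard models.
    """
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    payload = {"text": text, "model_id": "eleven_multilingual_v2"}
    return url, headers, payload

url, headers, payload = build_tts_request(
    "YOUR_KEY", "licensed-voice-id", "The Enigma machine worked like this..."
)
# Then: requests.post(url, headers=headers, json=payload) returns audio bytes
```

Same few lines whether you're voicing a meditation app or a documentary - the licensing happens on the marketplace, not in the code.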

The marketplace is live now, and they're adding figures regularly. This is the moment AI voice synthesis goes from experimental to standard business tool.

Real-life use case: A founder built an app teaching financial literacy where historical economists (Adam Smith, Keynes) "narrate" lessons about market theory. Voiced by ElevenLabs. Launched in 2 weeks. User retention was 47% higher than competitors with generic narration. The iconic voice made people actually listen.

⚠️ Chinese Hackers Just Weaponised AI to Break Into 30 Companies

/Security /ClaudeCode /Alert

In mid-September, Chinese state-sponsored hackers used Anthropic's Claude Code to attack 30 global targets - tech companies, banks, chemical manufacturers, and government agencies. This isn't just another hack. This is different.

The hackers didn't manually attack. They gave Claude Code instructions. The AI handled 80-90% of the operations autonomously - reconnaissance, finding vulnerabilities, exploiting them, moving through networks, stealing data. A human just tasked it and watched it execute.

This proves something security researchers warned about: AI's power cuts both ways. The same system helping developers also helps attackers. And at scale. With minimal effort.

Anthropic detected it, banned the accounts, and disclosed everything publicly. The security community needed to know. Because it will happen again.

Practical warning: If you're using Claude Code or ChatGPT Code Interpreter, your API keys and passwords are now critical security assets. An attacker with those credentials doesn't need technical skill - just instructions. Treat them like root access.
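A basic hygiene step: keep keys out of source code entirely and load them from the environment, failing loudly when one is missing. This is a minimal sketch - for production, a proper secrets manager is the sturdier option:

```python
import os

def require_secret(name: str) -> str:
    """Read a credential from the environment, refusing to run without it.

    Keeps keys out of source control; rotate them immediately if they leak.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required secret: {name}")
    return value

# Usage: export ANTHROPIC_API_KEY=... in your shell, never hard-code it
# api_key = require_secret("ANTHROPIC_API_KEY")
```

Failing loudly matters: a silently-missing key tends to get "fixed" by pasting it into the code.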

💡 Curiosity Corner

In this section, we aim to spotlight an incredible AI tool or use case and guide you on how you can try it.

Stop Wasting Time: The Three Prompt Engineering "Rules" Everyone Gets Wrong

A Stanford-led paper, co-authored with researchers from OpenAI, Microsoft, Google, and Princeton, analysed 1,500+ academic research papers and 200+ prompting techniques. Their conclusion? Most prompt engineering advice is folklore.

They tested what actually works. Here are the three biggest myths that cost you results:

MYTH 1: "Write longer, more detailed prompts"

Longer prompts performed WORSE 73% of the time. Sweet spot: 15-25 tokens for simple tasks, 40-60 for complex reasoning.

  • ❌ "Please provide a comprehensive, detailed analysis including all possible perspectives..."

  • ✅ "Analyse this. Main issue?"

MYTH 2: "Always show examples (few-shot prompting)"

Few-shot examples hurt performance on 60% of tasks. Zero-shot with clear instructions beats few-shot 8/10 times.

  • ❌ "Here are 3 examples... Now write one for this..."

  • ✅ "Write a response that addresses the core issue and offers a solution."

MYTH 3: "Use role-playing prompts"

Role-playing prompts are largely ineffective at improving accuracy. Models perform better as themselves.

  • ❌ "Act as a professional copywriter with 20 years of experience and write sales copy..."

  • ✅ "Write sales copy that leads with benefit, addresses objections, and ends with a CTA."

The pattern: Short beats long. Context beats examples. Directness beats theatre.

🏢 AI in Enterprise

You spoke, we listened. “AI in Enterprise” is here to stay. In this section, we're spotlighting real businesses using AI to solve actual problems.

How Booking.com Solved the Problem That's Destroying Every E-Commerce Site

Every online store has the same problem. And it's silent. Invisible. But it kills conversions.

It's called choice overload. You search for "hotels in Barcelona" and get back 10,000 results. Your brain short-circuits. You don't narrow down. You don't compare. You close the tab.

Booking.com just solved it with their OpenAI partnership. Users can now describe what they actually want in plain English - "quiet beach in September with my dog" - and their AI finds matches instantly. Not keywords. Intent.

Here's what changed: Before, you'd search "Barcelona apartments" and manually click filters. Pet-friendly: yes. Kitchen: yes. Rooftop bar: yes. Sea view: yes. Dates: September. Now you type one sentence and the AI scans their entire inventory, applies all the relevant filters, and shows you only places that match exactly what you described.
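Under the hood, this kind of feature boils down to mapping free text onto the same structured filters users used to click. A toy rule-based version shows the shape of it - Booking.com's real system uses an LLM for the mapping, and these keywords are invented:

```python
def extract_filters(query: str) -> dict:
    """Toy intent parser: map a natural-language travel query to filters.

    Illustrative only -- production systems hand this mapping to an LLM.
    """
    q = query.lower()
    filters = {}
    if "dog" in q or "pet" in q:
        filters["pet_friendly"] = True
    if "quiet" in q:
        filters["low_noise"] = True
    if "beach" in q:
        filters["near_beach"] = True
    for month in ("january", "june", "september", "december"):  # abbreviated list
        if month in q:
            filters["month"] = month
    return filters

print(extract_filters("quiet beach in September with my dog"))
```

The LLM's advantage over rules like these is handling phrasing it has never seen - "somewhere my spaniel can run" still becomes `pet_friendly: true`.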

But that's not the clever part. You can also ask specific questions - "Does this hotel allow dogs?" "How many EV charging points?" - and the AI pulls the answer directly from guest reviews, property descriptions, and photos.

And here's the hidden genius: Booking.com knows Europe's top 15 beach destinations are overcrowded. So their AI surfaces the hundreds of other beautiful destinations nearby that are equally incredible but way less touristy. Places you didn't even know existed.

The results are tangible. Users spend more time exploring itineraries. Searches are faster. Customer support tickets dropped because the AI answers questions. And most importantly, people book with actual confidence because they understand exactly what they're getting.

They built this from zero to launch in 10 weeks using a hackathon approach. That's the speed of modern AI. What used to take a six-month product roadmap now ships in ten weeks.

This isn't just a travel company feature. Every e-commerce platform - fashion, real estate, car rental, restaurant reservation - will copy this. Because solving "I have too many options and I'm confused" is the ultimate conversion lever.

📜 AI Dictionary

AI is full of jargon, and we’re here to decode it. Each week, we’ll give you a plain-English definition of a buzzy term you’ve probably seen (but never fully understood).

Mode Collapse - noun

When an AI gives the same answer no matter how many times - or how many ways - you ask. It's not laziness; it's overtraining. During alignment training, models learn to prefer "safe" responses, and over time they lose diversity and creativity. It's like a musician who plays the same three songs perfectly, but nothing else.
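A loosely analogous way to see the loss of diversity in numbers: as probability mass gets pushed onto a single "safe" answer (here simulated by lowering sampling temperature), the output distribution collapses to one choice. A small sketch of that effect:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; low temperature sharpens toward one answer."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 1.0]                            # three candidate answers
diverse = softmax(logits, temperature=1.0)          # probability spread around
collapsed = softmax(logits, temperature=0.05)       # nearly all mass on one answer
```

Mode collapse is the trained-in version of this: the sharpening is baked into the weights, so even normal sampling keeps landing on the same output.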

We’d like to ask a favour 🤝
If this email lands in your Promotional or Spam folder, please move it to your Primary inbox. We’re working hard to bring you the best content weekly, and your support is truly appreciated. Thanks!

Thanks for reading TomorrowToday! We’d love to hear from you:

➡️ What would you like us to cover next?
➡️ Have a tool or topic we should feature?

We’re building this with (and for) you. 🚀
See you next Tuesday 👋
