1,000 words per second 🤯

They say Meta used pirated books to train its AI models. Let's dig into it.


The AI race is on. Big tech is throwing billions into AI. Startups are moving at light speed. The next 5 years? Pure chaos.

AI will replace jobs. That’s a fact.

AI will create new opportunities. That’s also a fact.

Some will adapt and thrive. Others will refuse to evolve and get left behind.

Your best bet? Learn how to use AI before it uses you.

How do you see AI impacting your job?

In today’s post:

  • Mistral’s AI assistant and its insane power

  • Meta is accused of using pirated books to train AI

  • Google is adding watermarks to AI photos

  • Trending AI tools + More AI news

What’s Trending Today

MISTRAL

Mistral’s AI Assistant Is Now on Your Phone, But Will You Use It?

Mistral just rolled out its AI assistant, Le Chat, on iOS and Android.

Here’s what’s interesting about it:

  • Mistral is Europe’s big AI bet, often seen as a rival to OpenAI and Anthropic.

  • Le Chat now has a mobile app, making it easier to use on the go—just like ChatGPT or Google Gemini.

  • It’s fast. Mistral claims it can generate up to 1,000 words per second, which is pretty wild.

  • Image generation? Yep, they’ve got that too, powered by Flux Ultra.

  • There’s a Pro tier ($14.99/month) that gives you access to its best model and the ability to opt out of data sharing.

  • But there’s no voice mode yet, so if you prefer talking to AI assistants, this might be a dealbreaker.

Here’s what I think:

Mistral is trying to carve out space in an AI world dominated by U.S. giants. The speed and image generation are cool, but will people switch from ChatGPT or Gemini? That’s the real test.

Would you give Le Chat a try?

SPONSORED BY GAMMA

The future of presentations, powered by AI

Gamma’s AI creates beautiful presentations, websites, and more. No design or coding skills required. Try it free today.

META

Meta’s AI Trained on Pirated Books, and They Knew It

Another day, another AI controversy. This time, Meta is in hot water for training its AI models on pirated books. And it’s not just speculation: newly unsealed emails show that Meta’s own employees were raising red flags about it.

Here’s what’s going on:

  • Authors like Sarah Silverman are suing Meta, claiming their books were used without permission.

  • Internal emails reveal that Meta engineers knowingly downloaded pirated datasets, including from LibGen and Z-Library.

  • In one email, a researcher even admitted that "torrenting from a corporate laptop doesn’t feel right" but they did it anyway.

  • Meta allegedly tried to hide its tracks by using external servers, calling it “stealth mode.”

  • The lawsuit could get messy: Meta has argued this falls under “fair use,” but with emails proving they knew it was sketchy, that defense might fall apart.

Here’s what I think:

This isn’t just a legal headache for Meta; it’s an ethical disaster. If they knowingly built AI models on stolen content, what does that say about the future of AI-generated work? It’s one thing to debate fair use, but when your own team is worried they’re breaking the law… that’s a different story.

What do you think: does this change how you see AI companies?

GOOGLE

Google’s New AI Watermark: A Step Forward or Just a Patch?

AI-generated images are everywhere, and Google thinks it has a solution: invisible watermarks. The company just introduced SynthID, a digital marker embedded in AI-created or AI-edited photos. You can’t see it, but Google’s tools can detect it.

Here’s what’s interesting:

  • The watermark is built into the pixels, so even if you crop, resize, or filter an image, it should still be detectable.

  • Google is rolling this out on its AI tools, like the Magic Editor on Pixel devices and its text-to-image model, Imagen.

  • The goal? More transparency—so people know when an image has been altered by AI.

  • But there’s a catch: Not all AI tools use watermarks, so bad actors can just sidestep them.

  • Experts say SynthID is a good step, but without a universal standard across platforms, it won’t solve AI misinformation on its own.
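Google hasn’t published SynthID’s actual algorithm, but the core idea in the bullets above, a marker hidden in pixel values that the eye can’t see but software can detect, can be illustrated with a deliberately simple least-significant-bit (LSB) scheme. This is a toy sketch, not SynthID: the function names and the bit pattern are invented for illustration, and unlike SynthID’s claims, an LSB mark would not survive cropping, resizing, or filtering.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide a binary mark in the least significant bit of each pixel.

    Each pixel value changes by at most 1, which is invisible to the eye.
    """
    tiled = np.resize(mark, pixels.shape)  # repeat the mark across the image
    return (pixels & 0xFE) | tiled         # clear the LSB, then write the mark bit

def detect_watermark(pixels: np.ndarray, mark: np.ndarray) -> float:
    """Return the fraction of pixel LSBs that match the expected mark."""
    tiled = np.resize(mark, pixels.shape)
    return float(np.mean((pixels & 1) == tiled))

# Toy 8x8 grayscale "image" with a made-up 8-bit mark
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed_watermark(img, mark)
print(detect_watermark(marked, mark))  # 1.0: every LSB matches on a marked image
```

An unmarked image scores around 0.5 (LSBs match the mark by chance), so a detector would threshold the score well above that. The fragility of this toy scheme is exactly why real systems like SynthID embed the signal more deeply into image structure, and why the cross-platform adoption problem in the last bullet still applies either way.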

Here’s what I think:

It’s smart for Google to build this in, but let’s be real: AI-generated content isn’t slowing down. If only some companies watermark their images and others don’t, it’s easy to game the system. Transparency is great, but without industry-wide adoption (or tougher regulations), this might just be another band-aid.

Would a watermark change how much you trust an image?

AI TRENDS + TOOLS

AI TOOLS

  • PowerDrill: AI assistant for research and data analysis.

  • Trinka: AI-powered grammar and language enhancement tool.

  • Faceswapper.ai: AI tool for swapping faces in images.

  • 1min.AI: All-in-one AI toolkit for various tasks.

  • Ssemble: AI-powered video editing and collaboration tool.

THE AI DIGEST

🚀 Founders & AI Builders, Listen up!

If you’ve built an AI tool, here’s an opportunity to gain serious visibility.

Nextool AI is a leading tools aggregator that offers:

  • 500k+ page views and a rapidly growing audience.

  • Exposure to developers, entrepreneurs, and tech enthusiasts actively searching for innovative tools.

  • A spot in a curated list of cutting-edge AI tools, trusted by the community.

  • Increased traffic, users, and brand recognition for your tool.

Take the next step to grow your tool’s reach and impact.

That's a wrap:

Please let us know how this newsletter was:


Reach 100,000+ READERS:

Expand your reach and boost your brand’s visibility!

Partner with Nextool AI to showcase your product or service to 100,000+ engaged subscribers, including entrepreneurs, tech enthusiasts, developers, and industry leaders.

Ready to make an impact? Visit our sponsorship website to explore sponsorship opportunities and learn more!