
Microsoft is leading the AI race with this new model

Are you using AI tools from the safest AI companies or the least safe ones? Explore below.


Are you aware of AI companies stealing your data?

Yes, some of the AI companies you trust with your data are quietly using it to train their models.

Let me share who they are and which ones to use...

In today’s post:

  • Microsoft’s Phi-4 AI model

  • Report: Which AI companies are safe, and which are not

  • Meta’s AI content watermarking tool is here

  • Trending AI tools + free AI courses

What's New in AI?

Phi-4, Microsoft’s strongest small model


You know what they’re calling these models? Small language models. Yeah, small. Now, think about large language models like GPT-4, which are massive, and then imagine “small” still means a model with 14 billion parameters. Crazy, right?

Well, Microsoft’s latest release, Phi-4, is one of these so-called “small” models. And it’s already making waves.

Here’s everything you need to know:

  • It’s Microsoft’s newest addition to its generative AI family, built with 14 billion parameters. It shines at tasks like solving tough math problems, all thanks to better training data and smarter development.

  • Smaller models like Phi-4 (alongside competitors like GPT-4o mini and Claude 3.5 Haiku) are rising stars because they’re cheaper and faster to run while still delivering solid performance.

  • Phi-4 got a boost from high-quality synthetic and human-generated datasets, plus some secret sauce in post-training tweaks.

  • For now, Phi-4 is under lock and key, available only on Microsoft’s Azure AI Foundry platform for research use.

  • This is the first Phi-series model to launch since Sébastien Bubeck, one of Microsoft’s top AI minds, left the company for OpenAI in October.
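
Why are “small” models cheaper to run in the first place? A quick back-of-the-envelope sketch: just storing the weights of a 14-billion-parameter model takes tens of gigabytes, and the numeric precision you choose scales that linearly. (The parameter count is from Microsoft’s announcement; the 1 GB = 10⁹ bytes convention is an assumption, and real serving memory also needs activations and cache, which this ignores.)

```python
# Rough memory needed just to hold a 14B-parameter model's weights,
# at common numeric precisions. Illustrative arithmetic only -- not
# Microsoft's published figures.

def weight_memory_gb(num_params: int, bytes_per_param: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

PHI4_PARAMS = 14_000_000_000  # 14 billion, per Microsoft's announcement

for precision, nbytes in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"{precision}: ~{weight_memory_gb(PHI4_PARAMS, nbytes):.0f} GB")
```

At fp16 that is roughly 28 GB of weights, versus hundreds of gigabytes for the largest frontier models, which is a big part of why smaller models are cheaper and faster to serve.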

Here’s what I think:

Microsoft’s making a clear move to refine smaller, faster models while navigating the challenges of AI training data. Phi-4 feels like a signal that there’s still plenty of room for innovation even with “small” models. Let’s see how it stacks up.

Together with Roku:

Run CTV Ads on Roku This Holiday

  • Reach holiday shoppers where they’re streaming on Roku 

  • Set up, optimize, and measure campaigns in real-time

  • Engage viewers with interactive, shoppable on-screen ad formats

Everyone’s hyped about building bigger, smarter AI models. But guess what’s getting left in the dust? Safety.

A new report by the Future of Life Institute says the AI industry’s efforts to keep things safe are... let’s just say, not great.

Here’s the complete breakdown:

  • The Future of Life Institute is a nonprofit that’s big on reducing global risks. It got a panel of experts, including AI heavyweights like Yoshua Bengio and Stuart Russell, to evaluate how companies handle AI safety.

  • How did the companies score? It’s not pretty.


    Meta (yep, the Facebook guys) scored an F.

    Elon Musk’s xAI got a D-.

    OpenAI (makers of ChatGPT) and Google DeepMind managed a D+.

    Even Anthropic, which prides itself on being all about safety, got a C, the highest grade.

  • Every flagship AI model was found to be vulnerable to “jailbreaks,” aka hacks that bypass their safety guardrails. Worse, no company has solid plans to ensure future AI systems (the kind that might rival human intelligence) stay safe and under control.

  • Experts say companies need independent oversight, better risk assessments, and technical breakthroughs to peek inside the “black box” of AI. Spoiler: most aren’t even doing the basics.

Here’s what I think:

This report should be a wake-up call. If even the “safest” companies are barely passing, it’s clear the race to build powerful AI needs guardrails, and fast. Safety shouldn’t be a bonus feature; it needs to be part of the blueprint.

AI content watermarking: Meta takes action


Deepfakes are getting scary good, and that’s exactly why Meta just dropped a tool to fight back.

It’s called Video Seal, and it’s here to make AI-generated videos easier to spot while keeping original content authentic.

Here’s what you should know:

  • Video Seal is a watermarking tool that places invisible marks on AI-generated videos, making it easier to identify their origin. The tool can also embed hidden messages to help trace a video’s authenticity later on.

  • Meta says Video Seal beats the competition (like DeepMind’s SynthID) by being more resistant to edits like cropping, blurring, and even video compression, things that often mess with other watermarking tools.

  • Open to all: Video Seal is open-source, meaning developers can freely integrate it into their systems. Meta also re-released its other tools, like Watermark Anything and Audio Seal, under a permissive license.

  • Deepfake videos are getting harder to distinguish from real ones, posing risks from misinformation to copyright issues. Tools like Video Seal aim to put guardrails in place as AI-generated media becomes more widespread.

  • Video Seal isn’t perfect. Heavy video edits or extreme compression can still disrupt the watermark. Adoption might also be slow since many companies already rely on proprietary tools.
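
To make the idea of invisible watermarking concrete, here’s a toy sketch: hide a short message in the least-significant bits of pixel values, then recover it later. To be clear, this is NOT Meta’s Video Seal algorithm (which is far more robust to edits and compression); it only illustrates the core concept of embedding a hidden, recoverable signal in media data. All names here are hypothetical.

```python
# Toy least-significant-bit (LSB) watermark -- illustrative only,
# not Meta's Video Seal method.

def embed(pixels: list[int], message: bytes) -> list[int]:
    """Hide each bit of `message` in the lowest bit of one pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the lowest bit only
    return out

def extract(pixels: list[int], length: int) -> bytes:
    """Read `length` bytes back out of the pixels' lowest bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
    )

frame = [128] * 64               # pretend this is one row of a video frame
marked = embed(frame, b"meta")   # hypothetical 4-byte payload
print(extract(marked, 4))        # recovers b'meta'
```

Each pixel changes by at most 1, so the mark is invisible to the eye, but notice how fragile it is: any compression or blur that perturbs the low bits destroys it. That fragility is exactly the problem tools like Video Seal are engineered to solve.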

Here’s what I think:

Meta’s move to open-source Video Seal is a smart step toward making watermarking a standard. But let’s face it: keeping up with how fast deepfakes are evolving will take more than a single tool. Collaboration across the tech industry is the only way we’ll stay ahead.

Next-Level AI Tools to Explore

Trending AI tools:

  • MagicBuddy: Automate Telegram messaging and interactions with this smart AI chatbot.

  • Trinka: Improve academic and technical writing with AI-powered grammar corrections.

  • Meshy: Create detailed 3D models for design projects with advanced AI tools.

  • Eightify: Summarize YouTube videos into clickable highlights for quick insights.

  • Whisper: Transcribe audio files into text with OpenAI’s advanced language models.

  • AIVA: Generate custom music tracks by choosing emotions, genres, and harmonies.

🚀 Founders & AI Builders, Listen up!

If you’ve built an AI tool, here’s an opportunity to gain serious visibility.

Nextool AI is a leading tools aggregator that offers:

  • 500k+ page views and a rapidly growing audience.

  • Exposure to developers, entrepreneurs, and tech enthusiasts actively searching for innovative tools.

  • A spot in a curated list of cutting-edge AI tools, trusted by the community.

  • Increased traffic, users, and brand recognition for your tool.

Take the next step to grow your tool’s reach and impact.

Resources to learn AI:

  • Practical Application of Generative AI for Project Managers
    Learn how project managers can integrate generative AI into workflows effectively.
    Learn more here

  • ChatGPT Prompt Engineering for Developers by DeepLearning.AI
    Master prompt engineering in this free course co-created with OpenAI.
    Sign up here

  • AI Applications in People Management (University of Pennsylvania)
    Discover how AI is transforming people management strategies in this Coursera course.
    Start learning here

  • AI Applications in Marketing and Finance (University of Pennsylvania)
    Explore the use of AI in marketing and finance with this free course on Coursera.
    Access the course here

  • ChatGPT Mastery by Jafar (me):

    Learn how to use ChatGPT and master prompting in less than 24 hours.

    Get the course here

That's a wrap:

Please let us know what you thought of this newsletter:


Reach 100,000+ READERS:

Expand your reach and boost your brand’s visibility!

Partner with Nextool AI to showcase your product or service to 100,000+ engaged subscribers, including entrepreneurs, tech enthusiasts, developers, and industry leaders.

Ready to make an impact? Visit our sponsorship website to explore sponsorship opportunities and learn more!