Grok’s dirty secret just got it banned
Plus: Learn Vibe Coding to build the next million-dollar business YOURSELF
Today we're looking at how Grok crossed a line, and regulators aren't waiting: two countries just blocked Musk's AI over safety and human rights concerns. Also, Motional stalled out after layoffs and missed launches. Now it's back with an AI-first system and a 2026 deadline to prove it works.
In today’s post:
Musk’s AI faces its first global bans
Motional bets it all on AI for robotaxi comeback
Google quietly pulls AI answers for medical questions
Together with AirCampus:
Imagine creating browser extensions, mobile apps, plugins, and micro-SaaS — all without writing a single line of traditional code.
That’s the power of Vibe Coding, a brand-new way of building with AI.
In this 3-hour Masterclass, you’ll learn how to turn your ideas into fully working apps using the most powerful AI tools — Lovable, Bolt, Cursor, Replit, Claude, Supabase and more.
What You’ll Experience:
Turn raw ideas into real apps (live demo)
Build using beginner-friendly AI platforms
Create your first AI-built app during the session
Discover how Vibe Coding helps you build 10x faster
No coding. No technical background. No complexity.
🎯 Join the Masterclass and learn to build real AI products.
What’s Trending Today
STRATEGY
Grok just crossed a line and two countries pushed back

Malaysia and Indonesia just became the first countries to ban Elon Musk’s AI chatbot, Grok.
The reason? Sexually explicit deepfakes.
Here’s everything you need to know:
Grok, available on Musk’s X platform, lets users generate images with AI.
But it's now being used to alter real photos, often of women, into sexualized fakes.
Regulators in Malaysia and Indonesia say this violates human dignity and safety.
Both countries demanded action from X but say Musk’s team didn’t address the core risks.
Victims, like disability advocate Kirana Ayuningtyas, are speaking out after being targeted.
The UK is also considering action, with leaders calling Grok’s misuse “disgusting.”
Musk, meanwhile, claims critics are just looking for an “excuse for censorship.”
The backlash isn’t just about one tool; it’s a warning shot for all platforms that deploy generative AI without guardrails. When the tech moves faster than ethics or safety, it’s only a matter of time before real harm catches up.
BREAKTHROUGH
From layoffs to a reboot, can Motional make driverless work?

After missing deadlines and losing investors, Motional hit reset. Now it’s back, betting everything on a new AI-driven approach to launch robotaxis by 2026.
Here’s everything you need to know:
Motional, backed by Hyundai, paused operations after setbacks and mass layoffs in 2024.
It’s now pivoting to an AI-first self-driving system, aiming to go fully driverless in Las Vegas by late 2026.
Early employee-only rides are live, with safety drivers still on board for now.
The shift abandons rule-based robotics in favor of a foundation model architecture.
This new approach allows the system to generalize quickly to new cities and driving scenarios.
A live demo showed smoother navigation in real-world conditions, but there’s still room for polish.
Motional says robotaxis are step one, but the long game is personal cars with Level 4 autonomy.
Motional’s pivot is bold and necessary. It’s a bet that AI can crack what classic robotics couldn’t. But building safer, smarter systems won’t be enough. Winning back trust, both from users and investors, may be the harder ride.
RESEARCH
Investigation just forced Google to backtrack

Image Credits: Google
After reports of inaccurate AI-generated health advice, Google is quietly removing some AI Overviews from search, starting with liver test queries.
Here’s everything you need to know:
Google's AI was offering misleading info on liver function test ranges.
The AI Overviews didn’t adjust for key factors like age, sex, or ethnicity.
Google has now removed Overviews for some queries but not all variations.
Searches like “lft test reference range” still triggered AI content earlier this week.
Google claims the info wasn’t inaccurate, but says it’s working on “broad improvements.”
The British Liver Trust welcomed the removal but called it a surface-level fix.
The bigger concern? AI-generated health advice still lacks oversight and nuance.
This isn’t just about fixing one bad search result. It’s about whether AI should answer medical questions at all. Until there’s transparency and real accountability, even “improved” AI features might do more harm than good.
Free Guides
My Free Guides to Download:
🚀 Founders & AI Builders, Listen up!
If you’ve built an AI tool, here’s an opportunity to gain serious visibility.
Nextool AI is a leading tools aggregator that offers:
500k+ page views and a rapidly growing audience.
Exposure to developers, entrepreneurs, and tech enthusiasts actively searching for innovative tools.
A spot in a curated list of cutting-edge AI tools, trusted by the community.
Increased traffic, users, and brand recognition for your tool.
Take the next step to grow your tool’s reach and impact.
That's a wrap! Please let us know how you liked this newsletter.
Reach 150,000+ readers:
Expand your reach and boost your brand’s visibility!
Partner with Nextool AI to showcase your product or service to 140,000+ engaged subscribers, including entrepreneurs, tech enthusiasts, developers, and industry leaders.
Ready to make an impact? Visit our sponsorship website to explore sponsorship opportunities and learn more!
