• Nextool AI
  • Posts
  • Sam Altman is worried about his browser getting hacked

Sam Altman is worried about his browser getting hacked

Also talking about companies spending billions in AI development + AI hallucinations that are going out of hand

In partnership with

Today we’re looking at three signals that reveal the same pressure point in AI’s rise. OpenAI admits AI browsers may never be fully secure. Big Tech is borrowing heavily to fund the AI arms race. And new research shows hallucinations may come from models failing to move beyond the question itself.

In today’s post:

  • AI browsers can be hacked says Sam Altman

  • We built the most powerful tech using borrowed money

  • The AI hallucinations and Math that answers

  • Trending AI tools + Free AI guides

Nextool Partner:

The Future of Shopping? AI + Actual Humans.

AI has changed how consumers shop by speeding up research. But one thing hasn’t changed: shoppers still trust people more than AI.

Levanta’s new Affiliate 3.0 Consumer Report reveals a major shift in how shoppers blend AI tools with human influence. Consumers use AI to explore options, but when it comes time to buy, they still turn to creators, communities, and real experiences to validate their decisions.

The data shows:

  • Only 10% of shoppers buy through AI-recommended links

  • 87% discover products through creators, blogs, or communities they trust

  • Human sources like reviews and creators rank higher in trust than AI recommendations

The most effective brands are combining AI discovery with authentic human influence to drive measurable conversions.

Affiliate marketing isn’t being replaced by AI, it’s being amplified by it.

What’s Trending Today

Shocking insight

The more autonomy we give AI, the more attack surface we create.

OpenAI just admitted something most security teams already suspected.
Prompt injection attacks aren’t going away even in AI browsers built to be secure.

OpenAI says this plainly: prompt injection is like phishing. Persistent, evolving, and never fully solved.

Here’s everything you need to know:

  • Prompt injection works by hiding malicious instructions inside normal content, like emails or documents.

  • AI browsers amplify this risk because they don’t just read content; they act on it.

  • OpenAI concedes that “agent mode” expands the security threat surface by design.

  • Security researchers quickly showed how simple text could hijack browser behavior.

  • Governments now warn that prompt injection may never be fully mitigated.

  • OpenAI’s response is speed, scale, and simulation, not absolute prevention.

  • The company trained an AI attacker to continuously find new attack strategies before humans do.

Here’s what I think:

This isn’t a bug problem. It’s a power problem. When AI systems combine autonomy with access, security becomes probabilistic, not guaranteed. Faster defenses help, but the core trade-off remains.

AI browsers will get safer. But they’ll never be safe by default. The real question isn’t whether attacks stop it’s whether the value becomes worth the risk.

AI’s Development

The AI arms race is quietly reshaping Big Tech balance sheets.

Global tech companies are issuing debt at record levels. The driver isn’t survival it’s the escalating cost of AI ambition.

The details in plain English that you and I understand:

  • Tech firms issued $428.3 billion in bonds in 2025, the highest on record, per Dealogic.

  • U.S. companies dominated issuance, reflecting where AI infrastructure spending is most aggressive.

  • Even cash-rich firms are borrowing because AI hardware ages fast and reinvestment never stops.

  • Debt-to-EBITDA ratios have nearly doubled since 2020, rising faster than earnings growth.

  • Operating cash flow coverage hit a five-year low before only partially recovering.

  • Credit markets are noticing, with rising CDS spreads on Oracle and Microsoft.

  • Analysts warn the “go big or go home” AI narrative may be inflating risk tolerance.

Here’s what I think:

This isn’t reckless borrowing. It’s defensive borrowing. AI has turned capital discipline into a timing problem. Spend now, or fall behind forever. But debt always assumes future certainty and AI returns are still unproven.

Math & AI

Sometimes the math understands model failure before we do.

For months, researchers tried to learn hallucination detectors. Then one of them stopped learning and started measuring angles.

Here’s everything you need to know:

  • Hallucinations in RAG systems aren’t random; they’re often a failure to engage with retrieved sources.

  • Instead of training another model, this work reframed grounding as geometry on an embedding sphere.

  • Questions, contexts, and responses form triangles whose angles reveal semantic movement.

  • Grounded answers move toward their sources; hallucinations stay near the question.

  • A simple ratio of angular distances, the Semantic Grounding Index, captures this “semantic laziness.”

  • Across five embedding models, SGI consistently separates grounded answers from hallucinations.

  • Even more striking, geometry predicted when SGI should work better and the data confirmed it.

Here’s what I think:

This is a reminder that progress doesn’t always come from bigger models. Sometimes it comes from better questions. Training failed because the problem wasn’t statistical. It was structural. By stepping back, the author found a signal hiding in plain sight not in parameters, but in space.

The deeper lesson isn’t just about hallucinations. It’s about humility. When complex systems fail, the simplest abstraction might be the one that finally tells the truth.

AI TOOLS + Free Guides

AI TOOLS

  • Gumloop: Builds no-code automation workflows across your apps that run on autopilot even while you sleep

  • Shortwave: Smart email client that uses AI to group messages, write replies, and cut your inbox time by 70%

  • Mubert: Generates royalty-free background music for videos in seconds by picking mood and genre

  • MyHeritage Deep Nostalgia: Animates old photos by adding realistic facial movements to bring ancestors to life

  • Humata: Chat with your PDFs like they're people - ask questions and get instant answers from any document

  • Elicit: Research assistant that reads academic papers and pulls out key findings so you don't have to

Find 10,000+ AI tools that can help you automate all your tasks here at Nextool AI

My Free Guides to Download:

🚀 Founders & AI Builders, Listen up!

If you’ve built an AI tool, here’s an opportunity to gain serious visibility.

Nextool AI is a leading tools aggregator that offers:

  • 500k+ page views and a rapidly growing audience.

  • Exposure to developers, entrepreneurs, and tech enthusiasts actively searching for innovative tools.

  • A spot in a curated list of cutting-edge AI tools, trusted by the community.

  • Increased traffic, users, and brand recognition for your tool.

Take the next step to grow your tool’s reach and impact.

That's a wrap:

Please let us know how was this newsletter:

Login or Subscribe to participate in polls.

Reach 150,000+ READERS:

Expand your reach and boost your brand’s visibility!

Partner with Nextool AI to showcase your product or service to 140,000+ engaged subscribers, including entrepreneurs, tech enthusiasts, developers, and industry leaders.

Ready to make an impact? Visit our sponsorship website to explore sponsorship opportunities and learn more!