
Nvidia just showed what AI infrastructure really needs

Plus: Medical deepfakes are forcing a new AI debate


Nvidia is betting on optical fiber to power the next phase of AI infrastructure, while the US government and the AMA are moving to put stronger guardrails around how AI models are tested and used. Together, these stories show the same shift from different angles: AI is no longer just a software race. It is becoming a fight over chips, cables, factories, safety, trust, and who gets to shape the systems people depend on.

In today’s post:

  • Nvidia just moved closer to glass

  • The US wants to test AI first

  • Medical AI has a trust problem

SPONSORED BY

How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads

The DTC beauty category is crowded. To break through, Jennifer Aniston’s brand LolaVie worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign helped drive a significant lift in sales and customer growth.

What’s Trending Today

RESEARCH

Nvidia’s next AI advantage may not come from chips

Image Credits: NVIDIA

Nvidia and Corning just announced a major optical fiber deal. The goal is simple. Move AI data faster, with less power, inside massive data centers.

Here’s everything you need to know:

  • Corning will build three new U.S. factories for Nvidia’s optical technologies.

  • These factories could create at least 3,000 advanced manufacturing jobs.

  • Nvidia may invest up to $3.2 billion in Corning through warrants.

  • This deal points toward one big shift: replacing copper with glass.

  • Copper works, but AI systems are pushing it to its limits.

  • Fiber can move data faster, while using far less energy.

  • That matters because AI data centers are becoming power-hungry machines.

Most people still see Nvidia as a chip company. But the bigger story is infrastructure. AI is no longer just about better models or faster GPUs. It is becoming a race to redesign the physical world around computing. Power, cables, factories, cooling, and supply chains now matter more. That is why this deal is interesting. The future of AI may depend on things that look boring today. Glass. Fiber. Manufacturing. Distance between chips. That is usually where the next advantage hides.

AI SAFETY

Big AI models may face more safety checks before launch

Image Credits: BBC

Google, Microsoft, and xAI have agreed to let the US Department of Commerce test new AI tools before public release. The testing will happen through CAISI, the government’s AI standards and safety center.

Here’s everything you need to know:

  • The agreements are voluntary, not formal regulation.

  • CAISI will evaluate models for safety, security, and capabilities.

  • This expands earlier testing deals involving OpenAI and Anthropic.

  • Microsoft said national security testing needs government collaboration.

  • Google declined to comment, while xAI did not respond.

  • The move is notable because Trump has favored lighter AI regulation.

  • Rising military use of AI appears to be changing the government’s posture.

This is not a full AI crackdown. It is something quieter, and maybe more important. The government seems to be accepting that frontier AI cannot be treated like normal software. Once these models are released, control becomes harder. So the testing layer moves earlier. That does not solve every problem. It may even create new fights between companies and regulators. But it shows where AI is heading. The next battle is not just who builds the best model. It is who gets trusted to release it.

MEDICAL AI

The AMA wants lawmakers to stop AI from becoming a health misinformation machine

Image Credits: American Medical Association

The American Medical Association is pushing for legislation around AI in health care. Its concern is simple. Medical deepfakes, fake research, and chatbot errors can harm real people.

Here’s everything you need to know:

  • The AMA warned that AI can spread false medical claims quickly.

  • Deepfakes can impersonate doctors and mislead vulnerable patients.

  • Fraudulent ads have already used fake medical authority.

  • One experiment showed fake disease research spreading into AI systems.

  • The fictional disease was called “bixonimania.”

  • The AMA says patients should not have to detect deepfakes themselves.

  • Health AI companies may face stricter rules around accuracy and sourcing.

Medical AI is different from normal AI. A wrong answer about a movie is annoying. A wrong answer about treatment can be dangerous. That is why this debate matters. The real issue is not whether AI belongs in health care. It probably does. The issue is where trust comes from. Patients need to know when they are talking to software. They need clear sources. They need systems that admit uncertainty. Without that, AI does not improve health care. It just makes misinformation look professional.

Free Guides

My Free Guides to Download:

🚀 Founders & AI Builders, Listen up!

If you’ve built an AI tool, here’s an opportunity to gain serious visibility.

Nextool AI is a leading tools aggregator that offers:

  • 500k+ page views and a rapidly growing audience.

  • Exposure to developers, entrepreneurs, and tech enthusiasts actively searching for innovative tools.

  • A spot in a curated list of cutting-edge AI tools, trusted by the community.

  • Increased traffic, users, and brand recognition for your tool.

Take the next step to grow your tool’s reach and impact.

That's a wrap:

Please let us know what you thought of this newsletter:


Reach 150,000+ READERS:

Expand your reach and boost your brand’s visibility!

Partner with Nextool AI to showcase your product or service to 140,000+ engaged subscribers, including entrepreneurs, tech enthusiasts, developers, and industry leaders.

Ready to make an impact? Visit our sponsorship website to explore sponsorship opportunities and learn more!