Can You Cure Cancer With Code?

Feb 24, 2025

Jul 8, 2025 | 4 min read

Good morning AI enthusiasts & entrepreneurs,

"Solving all diseases" might sound like classic Silicon Valley exaggeration, until you realize the company making the claim holds $600M in funding and a Nobel Prize-winning breakthrough. Isomorphic Labs, the Google DeepMind offshoot, is moving its AlphaFold-powered drug candidates out of digital models and into real-world human testing. Their vision? On-demand, AI-generated cures.

In today's AI news:

• Isomorphic Labs' AI-designed drugs head toward clinical trials
• Chinese tech giants clash over AI model cloning
• Managers increasingly rely on AI for workforce decisions
• xAI's Grok 4 launches July 9
• Top tools & quick news

Subscribe to stay ahead in the AI race!

Isomorphic Labs' AI-designed drugs ready for human testing

Image: TechCrunch

The News: Isomorphic Labs, an Alphabet-backed DeepMind spinoff, is preparing for its first human clinical trials of cancer therapies developed with AlphaFold 3, a cutting-edge AI for predicting protein structures and molecular interactions.

The Details:

• The drugs are the result of four years of R&D leveraging AlphaFold 3's advanced biomolecular modeling capabilities.
• In April 2025, the company raised $600M in funding, led by Thrive Capital, to expand drug development and infrastructure.
• Its stated mission is to build an AI-driven drug generation engine capable of delivering personalized, on-demand treatments.
• Initial human trials will focus on oncology; promising compounds could later be licensed to pharma partners such as Novartis and Eli Lilly, depending on early results.

Why it Matters: This development could mark a paradigm shift from traditional pharma's slow, trial-based model toward AI-led drug discovery, compressing timelines, reducing costs, and improving success rates. The long-shot goal of "solving all diseases" is edging closer to reality, now backed by Nobel Prize-winning technology.
Chinese AI drama heats up with model-copying claims

Image: HonestAGI

The News: Huawei's research arm, Noah's Ark Lab, is under scrutiny after accusations emerged that its new Pangu Pro MoE model was copied from Alibaba's Qwen 2.5-14B. The claims stem from a technical report published by GitHub group HonestAGI, alleging an "extraordinary correlation" between the two models.

The Details:

• HonestAGI cited a correlation coefficient of 0.927 between Pangu Pro MoE and Qwen 2.5-14B, implying model reuse. It also accused Huawei of fabricating technical documentation and overstating internal R&D.
• Huawei denied the allegations, claiming Pangu Pro MoE was independently developed, built on its Ascend AI chips, and compliant with all licensing requirements.
• A self-identified whistleblower claiming to work at Huawei suggested the company was under internal pressure to close the gap with rivals, pushing teams to clone third-party models.

Why it Matters: While China's AI sector has often appeared collaborative, this incident reveals deepening rivalries and growing tension between openness and innovation integrity. With model provenance becoming critical, ethical conduct in AI development matters more than ever.

AI now drives key managerial decisions

Image: Resume Builder

The News: A recent survey reveals that 60% of U.S. managers now use AI tools for critical people decisions, from promotions and raises to terminations, often without formal training or oversight.

The Details:

• Of 1,342 managers surveyed, 78% use AI for pay decisions, 77% for promotions, 64% for terminations, and 66% for layoffs.
• Leading tools include ChatGPT (53%), Microsoft Copilot (29%), and Google Gemini (16%).
• 20% of managers allow AI to make final decisions without human intervention, while 43% have replaced employees based on AI recommendations.

Why it Matters: As AI becomes a fixture in people management, governance lags behind.
Most managers lack formal training in ethical AI use for HR. This shift not only affects employees but raises a pointed question: could managers themselves be the next layer to get automated?

xAI's Grok 4 Launches July 9

Image Source: NextBigFuture

The News: Elon Musk has confirmed that Grok 4, xAI's next-generation language model, will officially launch on July 9, 2025, at 8 PM PT via a livestream on @xAI's account on X.

The Details:

• Grok 4 is expected to outperform its predecessors in logical reasoning and developer tasks, with early benchmarks showing progress in STEM and code generation.
• While Grok 4 will launch text-only, future updates are set to add image generation and multimodal understanding, akin to GPT-4o and Gemini.
• Several Grok 4 variants are expected, including "Grok 4 Code" and "grok-4-prod-mimic" for business and domain-specific applications.
• Musk has said the model is designed to be skeptical of legacy media, avoiding censorship and allowing "controversial viewpoints".

Why it Matters: Grok 4 is xAI's most polarizing model yet: positioned as a challenger to GPT-4o, Gemini Ultra, and Claude, but arguably more of a culture-war avatar than a serious AI breakthrough. While the technical claims are loud, the track record of previous Grok models is shaky, and the decision to double down on anti-establishment narratives risks making Grok 4 more dangerous than disruptive.
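A footnote on the Huawei story above: HonestAGI's 0.927 figure refers to an unusually high statistical correlation between the two models' parameters. Its exact methodology isn't described here, but the general idea, comparing per-layer summary statistics of two models' weights, can be sketched in a toy Python example. Everything below (layer counts, scales, the stand-in models) is invented purely for illustration:

```python
import numpy as np

def layer_stats(weights):
    """Per-layer fingerprint: the standard deviation of each weight matrix."""
    return np.array([w.std() for w in weights])

rng = np.random.default_rng(0)

# Hypothetical "original" model: 12 layers with growing weight scales.
original = [rng.normal(0, 0.02 * (i + 1), (64, 64)) for i in range(12)]
# A near-copy: the same weights plus a tiny perturbation (light fine-tuning).
near_copy = [w + rng.normal(0, 1e-4, w.shape) for w in original]
# An unrelated model: same shapes, but a different scale pattern.
unrelated = [rng.normal(0, 0.02 * (12 - i), (64, 64)) for i in range(12)]

s_orig = layer_stats(original)
r_copy = np.corrcoef(s_orig, layer_stats(near_copy))[0, 1]
r_indep = np.corrcoef(s_orig, layer_stats(unrelated))[0, 1]

print(f"near-copy r = {r_copy:.3f}")   # close to 1.0
print(f"unrelated r = {r_indep:.3f}")  # far from 1.0
```

The intuition: a copied model inherits the original's layer-by-layer statistical fingerprint, so the correlation stays near 1 even after light modification, while independently trained weights generally don't line up this way. Whether 0.927 proves copying in the Pangu case is exactly what's in dispute.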

Subscribe To Our Newsletter