This Month in Tech: February 2026

TLDR of the TLDR: February 2026 in Tech

  1. OpenAI Closes Record-Breaking $110 Billion Funding Round
    OpenAI finalized the largest private funding round in history at a $730 billion pre-money valuation. Amazon invested $50 billion (including 2 GW of AWS Trainium compute), Nvidia put in $30 billion, and SoftBank contributed $30 billion. The round later expanded to $122 billion as additional investors joined.

  2. Anthropic Raises $30 Billion Series G at $380 Billion Valuation
    Anthropic closed a $30 billion Series G, the second-largest private financing round on record, more than doubling its September 2025 valuation. Claude Code alone is generating $2.5 billion in annualized revenue. The round was led by Coatue and GIC, with participation from Microsoft, Nvidia, and Founders Fund.

  3. GPT-5.3-Codex: OpenAI’s Most Capable Coding Model
    Released February 5, GPT-5.3-Codex merges the coding prowess of GPT-5.2-Codex with GPT-5.2’s deep reasoning into a single model that runs 25% faster. It set new industry highs on SWE-Bench Pro and Terminal-Bench. Notably, it is the first model that was instrumental in creating itself, with the Codex team using early versions to debug its own training and deployment.

  4. Anthropic Releases Claude Opus 4.6 and Sonnet 4.6
    Opus 4.6 dropped on February 5, followed by Sonnet 4.6 on February 17. Opus 4.6 achieved METR’s longest measured task-completion time horizon (14 hours 30 minutes at a 50% success rate), both models support 1M-token context windows, and Opus 4.6 offers a 128k maximum output. In a striking demonstration, 16 Claude Opus 4.6 agents wrote a working C compiler in Rust from scratch, capable of compiling the Linux kernel.

  5. Google Announces Gemini 3.1 Pro
    Gemini 3.1 Pro demonstrated more than double the reasoning performance of Gemini 3 Pro, scoring 77.1% on ARC-AGI-2. This marks Google’s first “.1” increment (past generations used .5), signaling an accelerated release cadence. The model launched in preview across the Gemini API, Vertex AI, and NotebookLM.

  6. Big Tech Commits $650 Billion to AI Infrastructure in 2026
    Alphabet ($185B), Amazon ($200B), Meta ($115-135B), and Microsoft (~$145B) are collectively on track to pour roughly $650 billion into AI infrastructure this year, a 60% increase over 2025’s $410 billion and a scale of capital deployment unmatched this century.
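
The headline figures are internally consistent; a quick sanity check (taking the midpoint of Meta’s range):

```python
# 2026 AI-infrastructure capex commitments, in billions of dollars
# (midpoint used for Meta's $115-135B range)
capex_2026 = {"Alphabet": 185, "Amazon": 200, "Meta": (115 + 135) / 2, "Microsoft": 145}
total_2026 = sum(capex_2026.values())   # 655.0
growth = total_2026 / 410 - 1           # vs. 2025's $410B total
print(f"~${total_2026:.0f}B committed, up {growth:.0%} year over year")
# ~$655B committed, up 60% year over year
```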

  7. NVIDIA Reports Record $68.1 Billion Quarterly Revenue
    NVIDIA reported Q4 FY2026 revenue of $68.1 billion (up 73% YoY) and full-year revenue of $215.9 billion. Data center revenue alone hit $62.3 billion. Forward guidance of $78 billion for Q1 FY2027 quashed lingering AI bubble concerns and confirmed sustained hypergrowth in compute demand.

  8. Mercury 2: The Diffusion Language Model That Generates 1,000 Tokens/Second
    Released by Inception, Mercury 2 is the first reasoning diffusion LLM (dLLM). Instead of sequential token decoding, it generates responses through parallel refinement, achieving approximately 1,000 tokens/second output throughput compared to ~89 for Claude 4.5 Haiku and ~71 for GPT-5 Mini. Priced at just $0.25/$0.75 per million input/output tokens, this could be a paradigm shift in inference architecture.
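
The speedup comes from committing many positions per step instead of one. A toy sketch of mask-based parallel refinement follows; the predictor and token choices are illustrative stand-ins, not Inception’s actual architecture:

```python
import random

MASK = "_"
TARGET = list("deadbeef")  # stand-in for what a trained denoiser would predict

def toy_predict(seq):
    # Stand-in for the denoiser network: one (token, confidence) guess per
    # position, produced for ALL positions at once rather than left to right.
    return [(TARGET[i], random.random()) for i in range(len(seq))]

def diffusion_decode(length=8, steps=4, seed=0):
    random.seed(seed)
    seq = [MASK] * length            # start from a fully masked sequence
    for step in range(steps):
        preds = toy_predict(seq)
        masked = [i for i, tok in enumerate(seq) if tok == MASK]
        # Commit the most confident fraction of remaining positions in parallel
        k = max(1, len(masked) // (steps - step))
        for i in sorted(masked, key=lambda i: preds[i][1], reverse=True)[:k]:
            seq[i] = preds[i][0]
    return "".join(seq)

print(diffusion_decode())  # every position filled after `steps` parallel rounds
```

The key contrast with autoregressive decoding: each round touches many positions at once, so the number of model invocations scales with refinement steps, not sequence length.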

  9. Open-Source AI Reaches Parity with Proprietary Models
    February saw at least 10 new models arrive that would have been considered frontier-class a year ago, completely reshuffling the open-source leaderboard. GLM-5 debuted as the top-ranked open-source model, followed by Kimi K2.5 and MiniMax M2.5. Open-source models have reached parity with the best proprietary systems across coding, math, multilingual, and multimodal tasks.

  10. GLM-5: Frontier Model Trained Entirely on Huawei Chips
    Zhipu AI’s GLM-5 is a 744B-parameter MoE model (44B active per token) trained entirely on Huawei Ascend chips with zero NVIDIA GPUs, a milestone for chip independence. Using a novel RL technique called “Slime,” it cut hallucination rates from 90% to 34%, surpassing the previous best mark set by Claude Sonnet 4.5. It is released under the MIT license and free to use.
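
The economics hinge on sparse activation: a router selects a few experts per token, so most of the 744B parameters sit idle on any given forward pass. A minimal toy sketch of top-k MoE routing; the router, experts, and dimensions are invented for illustration and are not GLM-5’s actual design:

```python
import math, random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_w, top_k=2):
    # The router scores every expert, but only the top-k actually run:
    # with 2 of 8 experts active, 75% of expert parameters stay idle per token.
    scores = [sum(w * v for w, v in zip(row, x)) for row in router_w]
    gates = softmax(scores)
    chosen = sorted(range(len(experts)), key=gates.__getitem__, reverse=True)[:top_k]
    out = [0.0] * len(x)
    for i in chosen:                       # only the selected experts compute
        y = experts[i](x)
        out = [o + gates[i] * yi for o, yi in zip(out, y)]
    return out, chosen

random.seed(0)
experts = [lambda v, s=s: [s * xi for xi in v] for s in range(1, 9)]  # 8 tiny "experts"
router_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
out, chosen = moe_forward([0.5, -1.0, 0.25, 2.0], experts, router_w)
print("active experts:", chosen)
```

GLM-5’s 44B-of-744B ratio means roughly 6% of parameters are active per token, which is what keeps inference cost closer to a mid-size dense model.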

  11. Alibaba Unifies AI Under Qwen Brand, Releases Qwen 3.5
    Alibaba consolidated all its AI brands under “Qwen” and released the Qwen 3.5 family: 397B total parameters with only 17B active via MoE, delivering 60% lower cost and 8x higher throughput than its predecessor. It supports 201 languages, native multimodal capabilities, and built-in agentic features, all under Apache 2.0.

  12. Waymo Raises $16 Billion at $126 Billion Valuation
    Waymo closed a $16 billion round led by Sequoia Capital and DST Global, nearly tripling its $45 billion valuation from October 2024. After serving 15 million trips in 2025, Waymo plans to expand to over a dozen new cities including London (its first international market) and Tokyo in 2026.

  13. Meta Signs Multi-Billion Dollar Deal to Rent Google’s TPU Chips
    Meta agreed to a multi-year deal to rent Google Cloud’s custom TPUs for training and running next-generation LLMs, with the possibility of purchasing TPUs outright for its own data centers by 2027. This is a significant strategic shift away from single-supplier dependency on Nvidia and positions Google’s TPU business as a serious competitor in the AI chip market.

  14. xAI Launches Grok 4.20 with Multi-Agent Architecture
    Grok 4.20 is the most architecturally distinct Grok release to date, introducing a “rapid learning” architecture that improves weekly based on real-world use. It features a four-agent parallel collaboration system in which specialized agents deliberate before generating a response, and it adds medical document analysis and improved engineering reasoning.

  15. Global Venture Funding Hits $189 Billion Record in February
    February shattered records with $189 billion in global venture investment, the largest single month ever. AI-related startups raised $171 billion, accounting for 90% of all global VC. However, 83% of the total went to just three companies (OpenAI, Anthropic, Waymo), raising questions about concentration.
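
The concentration figure checks out against the rounds covered in this issue (using OpenAI’s initial $110 billion close):

```python
# February 2026 mega-rounds from this issue, in billions of dollars
rounds = {"OpenAI": 110, "Anthropic": 30, "Waymo": 16}
top3 = sum(rounds.values())          # 156
share = top3 / 189                   # of the month's $189B global total
print(f"top-3 share: {share:.0%}")   # top-3 share: 83%
```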

  16. Moonshot AI’s Kimi K2.5: Trillion-Parameter Agent Swarm
    Kimi K2.5 features 1 trillion total parameters (32B active via MoE) and a radical “Agent Swarm” architecture that can dynamically spawn up to 100 autonomous sub-agents specialized in web research, code execution, and fact-checking. It can execute up to 1,500 coordinated tool calls in a single session, claiming the #2 spot on open-source rankings.
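
The coordinator pattern behind an agent swarm can be sketched in a few lines: a supervisor fans a task out to specialized workers and collects their results. All role names and behaviors below are hypothetical illustrations, not Moonshot’s API:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical specialist roles; the names and outputs are illustrative only.
SPECIALISTS = {
    "web_research": lambda task: f"[research] sources gathered on {task}",
    "code_exec":    lambda task: f"[code] script executed for {task}",
    "fact_check":   lambda task: f"[check] claims verified for {task}",
}

def spawn_swarm(task, roles, max_agents=100):
    # The coordinator fans the task out to specialist sub-agents in parallel,
    # capped at the swarm limit (100 in Kimi K2.5's case).
    roles = roles[:max_agents]
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = {role: pool.submit(SPECIALISTS[role], task) for role in roles}
        return {role: f.result() for role, f in futures.items()}

results = spawn_swarm("solid-state battery patents",
                      ["web_research", "code_exec", "fact_check"])
for role, output in results.items():
    print(f"{role}: {output}")
```

In a real swarm each worker would be a model instance issuing tool calls; the fan-out/collect structure is the same.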

  17. Samsung Unveils Galaxy S26 Series at Galaxy Unpacked
    Samsung revealed the Galaxy S26, S26+, and S26 Ultra at Galaxy Unpacked in San Francisco. The S26 Ultra introduces the world’s first Privacy Display (blocking side-angle viewing), the lineup leans heavily on Google’s Gemini for AI features, and all models now start at 256 GB storage.

  18. White House Announces Ratepayer Protection Pledge for AI Data Centers
    Under the Ratepayer Protection Pledge, tech companies expanding AI compute commit to shouldering the full cost of the new energy generation and power-delivery infrastructure their data centers require. Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI signed on. This marks the first major US federal policy response to AI’s massive energy footprint.

  19. February’s Model Avalanche: 12 Significant Releases in One Month
    Beyond the individual releases listed above, February saw an unprecedented 12 significant model releases, including Gemini 3.1 Pro, Claude Opus 4.6, Claude Sonnet 4.6, GPT-5.3-Codex, Grok 4.20, Qwen 3.5, Mercury 2, Seed 2.0, MiniMax M2.5, and GLM-5. AI model development has reached industrial scale, with near-continuous releases from every major player.

  20. BrainChip Launches AKD2500 Neuromorphic Silicon Project
    BrainChip announced a $2.5 million silicon development project to integrate its next-generation Akida 2.0 neuromorphic architecture into silicon using TSMC’s 12nm process. Neuromorphic chips process information more like the human brain, offering dramatic power efficiency gains that could matter as AI infrastructure energy costs become a central industry concern.