Daily AI News Roundup: 09 Jan 2026
Nous Research’s NousCoder-14B is an open-source coding model landing right in the Claude Code moment

Nous Research, backed by crypto-venture firm Paradigm, unveiled the open-source coding model NousCoder-14B, trained in just four days on 48 Nvidia B200 GPUs. It reaches 67.87% accuracy on the LiveCodeBench v6 benchmark, about 7 percentage points higher than its base model, Alibaba’s Qwen3-14B.

The release includes not only the model weights but also the full Atropos reinforcement-learning environment, benchmark suite, and training harness, allowing anyone with sufficient compute to reproduce or extend the work. Training leverages “verifiable rewards” (binary pass/fail on executed code), dynamic-sampling policies, and progressive context-window expansion up to roughly 80k tokens, while pipelining inference and verification to maximize GPU utilization.

Researchers note that the 24,000 competitive-programming problems used for training exhaust most high-quality public data in the domain, prompting calls for synthetic problem generation and self-play to overcome future data scarcity. With $65 million in funding, Nous Research positions its open-source approach as a direct competitor to proprietary tools such as Anthropic’s Claude Code, emphasizing transparency, reproducibility, and next-generation research directions in multi-turn RL and autonomous problem creation. ...
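The “verifiable rewards” idea mentioned above can be sketched as a binary pass/fail check on executed code: run the candidate program against known input/output pairs and emit reward 1.0 only if every case passes. This is a minimal illustration of the concept, not Nous Research’s actual harness; the function name, candidate program, and test cases are all hypothetical.

```python
import subprocess
import sys
import tempfile

def verifiable_reward(code: str, test_cases: list[tuple[str, str]],
                      timeout: float = 5.0) -> float:
    """Binary pass/fail reward: 1.0 only if the candidate program
    prints the expected stdout for every test case, else 0.0."""
    # Write the candidate solution to a temporary script.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    for stdin_text, expected in test_cases:
        try:
            result = subprocess.run(
                [sys.executable, path],
                input=stdin_text,
                capture_output=True,
                text=True,
                timeout=timeout,  # a hung program counts as a failure
            )
        except subprocess.TimeoutExpired:
            return 0.0
        if result.returncode != 0 or result.stdout.strip() != expected.strip():
            return 0.0
    return 1.0

# Hypothetical candidate solution for "print the sum of two integers".
candidate = "a, b = map(int, input().split())\nprint(a + b)"
print(verifiable_reward(candidate, [("2 3", "5"), ("10 -4", "6")]))  # → 1.0
```

Because the reward is computed by actually executing the code, it needs no learned judge; this is what makes the signal “verifiable,” and it is also why pipelining generation with execution matters for GPU utilization.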