arXiv implements 1-year ban on papers containing incontrovertible evidence of unchecked LLM-generated errors, such as hallucinated references or results. [N]
Would a 2000-2021 ML paper even get accepted today? [D]
Follow the Mean: Reference-Guided Flow Matching [R]
Human-level performance via ML was *not* proven impossible with complexity theory [D]
Continual Harness: Online Adaptation for Self-Improving Foundation Agents [R]