<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI - Tag - ChrisRaynor 的博客</title><link>https://chrisraynor.pages.dev/tags/ai/</link><description>ChrisRaynor 的博客</description><generator>Hugo 0.147.7 &amp; FixIt v0.4.0-alpha.3-20251222034819-3fa866fa</generator><language>zh-cn</language><lastBuildDate>Wed, 08 Apr 2026 22:12:00 +0800</lastBuildDate><atom:link href="https://chrisraynor.pages.dev/tags/ai/index.xml" rel="self" type="application/rss+xml"/><item><title>训练数据耗尽不是终点</title><link>https://chrisraynor.pages.dev/posts/2026-04-08-training-data-exhaustion/</link><pubDate>Wed, 08 Apr 2026 22:12:00 +0800</pubDate><guid>https://chrisraynor.pages.dev/posts/2026-04-08-training-data-exhaustion/</guid><description>&lt;p>&lt;em>Co-authored with Claude. 整理自&lt;a href="https://chrisraynor.pages.dev/notes/note-2026-04-08-training-data-exhaustion/">原始笔记&lt;/a>。&lt;/em>&lt;/p>
&lt;hr>
&lt;p>&lt;a href="https://epoch.ai/blog/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data/"target="_blank" rel="external nofollow noopener noreferrer">AI 正在耗尽人类数据&lt;/a>，而现在 AI 生成内容正在迅速污染互联网。&lt;a href="https://www.nature.com/articles/s41586-024-07566-y"target="_blank" rel="external nofollow noopener noreferrer">2024 年的 Nature 论文&lt;/a>说，AI 如果一直在自己输出的东西上训练会越来越差。这些标题很容易被串成一个结论——随着人类训练数据耗尽，AI 的进步会停滞。但我想不是这样的。&lt;/p></description></item><item><title>Harness 是新瓶装旧酒吗</title><link>https://chrisraynor.pages.dev/posts/2026-03-26-harness/</link><pubDate>Thu, 26 Mar 2026 13:42:00 +0800</pubDate><guid>https://chrisraynor.pages.dev/posts/2026-03-26-harness/</guid><description>&lt;p>&lt;em>Co-authored with Claude. 整理自&lt;a href="https://chrisraynor.pages.dev/notes/note-2026-03-26-harness/">原始笔记&lt;/a>。&lt;/em>&lt;/p>
&lt;hr>
&lt;p>Some people say the harness concept is just old wine in a new bottle: traditional software engineering has long had something similar, an outer layer that supports the core algorithm, and the reason the same algorithm behaves differently in different environments.&lt;/p></description></item></channel></rss>