<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <id>https://shemol.tech/</id>
    <title>Shemol's Blog</title>
    <updated>2026-02-27T02:54:56.326Z</updated>
    <generator>https://github.com/jpmonette/feed</generator>
    <author>
        <name>Shemol</name>
        <email>shemol106@gmail.com</email>
        <uri>https://shemol.tech</uri>
    </author>
    <link rel="alternate" href="https://shemol.tech/"/>
    <link rel="self" href="https://shemol.tech/feed.xml"/>
    <subtitle>Sneak through holes and climb over fences.</subtitle>
    <icon>https://shemol.tech/favicon.svg</icon>
    <rights>All rights reserved 2026, Shemol</rights>
    <entry>
        <title type="html"><![CDATA[About Chen Hao: Chen Hao's 3.25]]></title>
        <id>https://shemol.tech/about-chen-hao-325</id>
        <link href="https://shemol.tech/about-chen-hao-325"/>
        <updated>2026-02-17T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[A record of something about Chen Hao that I came across on Zhihu.]]></summary>
        <content type="html"><![CDATA[<h1>Chen Hao's 3.25</h1>
<p>I came across this while organizing my notes and wanted to put it on the blog.</p>
<p>Author: anonymous user</p>
<p>Link: https://www.zhihu.com/question/29614511/answer/45025842</p>
<p>Source: Zhihu</p>
<p>Copyright belongs to the author. For commercial reposts, please contact the author for authorization; for non-commercial reposts, please credit the source.</p>
<p>I have to stay anonymous for this. I'm an interested party: I'm close to people on Haozi's team and know some of what happened.</p>
<p>The situation was roughly this: Alibaba Cloud ECS was doing a project called VPC, which had been going for about a year without an official release. The project was wrong from the start. I heard from my friend that Haozi argued early on with the Alibaba Cloud ECS people that the technical approach they were about to implement was wrong, but the project was too big, reportedly 30-40 people across many teams, and Haozi had no way to control it. In my friend's words, Alibaba Cloud has too many "gods."</p>
<p>The project ran on inhumane overtime from the very beginning: Monday through Sunday, working until 2-3 a.m. every day, for 3-4 months straight. Can you believe that kind of overtime?</p>
<p>My friend was on this project too, and we complained every day about the many technical problems that came up; some were laughable, the kind of mistakes only a technical layman would make.</p>
<p>Since Haozi couldn't control the project, he refused to let his own people work overtime. He believed those low-level mistakes were caused by overtime, and everyone knew he was against it. I heard from my friend that he half-jokingly told his team that anyone working past 8 p.m. would get a 3.25 performance rating and a C for values. (I think he may have been mocking the people recklessly chasing KPIs.)</p>
<p>The project failed in the end, and reportedly it is still being massively reworked. After three months of work it was nothing but bugs and couldn't go live; word is it alarmed the higher-ups, who came down looking for someone to blame. The project lead then told his boss that part of the reason was that Haozi's team wasn't pulling its weight because they didn't work overtime. So the next day that boss tried to transfer my friend and the rest of Haozi's team to the team that worked until the small hours every day. They talked for a whole day, and not a single person wanted to go.</p>
<p>And the facts? The two people from Haozi's team on that project not only finished their modules on time without overtime, but also accounted for only a small share of the bugs.</p>
<p>But in the end the boss forced a decision anyway: people didn't have to move, but their work would be assigned by the other side. Haozi was completely sidelined, and his team effectively ceased to exist.</p>
<p>In the last couple of days, after Haozi criticized 寒冬's error-riddled PR piece on Weibo, the company's PR department got involved, and his new boss transferred his team away outright; my friend, the whole team, and Haozi himself were kept completely in the dark. From this you can see how brutal Alibaba's management is.</p>
<p>This should be the example of persecution over "values" differences that Haozi posted about on Weibo.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2026.1.31]]></title>
        <id>https://shemol.tech/2026-1-31</id>
        <link href="https://shemol.tech/2026-1-31"/>
        <updated>2026-01-31T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Some thoughts and happenings from the past two weeks]]></summary>
        <content type="html"><![CDATA[<h1>2026.1.31</h1>
<p>I had originally planned to start writing once the lab dinner on Wednesday was over and the break officially began, but I kept putting it off, then figured I'd just wait until the paper was submitted. Today, having finally tidied up my dorm, I have time to write.</p>
<p>Honestly I don't know what to write; I look busy yet can't say what I've been busy with. Let me just treat this as a summary of what I've been learning and a plan for what comes next.</p>
<p>First, the overall tone: since AI is a capability amplifier, we should spend more time learning and accumulating more ideas. So the main thing going forward is still study, more detailed and deeper study. As for internships and jobs, I'm anxious about them too, but I vaguely feel they may not matter that much. They are external yardsticks: interning at a company, and later working at one, means building someone else's product and implementing someone else's ideas. So why not try building my own product? Anything goes, good or bad. I think taking something I built myself to internship and job applications would help too.</p>
<h1>Full stack (the plan for now is front end)</h1>
<p>I've been learning front end lately: reading up on TS and React and building some hands-on projects. I have a few project ideas I haven't started yet, and that will take time, including revisiting fundamentals along the way and memorizing interview material for internships; it will probably be the focus of next semester. The final goal is of course full stack, one step at a time: start with web pages, then apps, then gradually get into back ends and databases.</p>
<h1>Agent</h1>
<p>I feel my agent knowledge is still too rough and I want to go deeper: rewatch boj's courses and livestreams, look at frameworks like LangChain, look at the SDKs and the APIs, and also hooks and skills, keeping up with new techniques as they appear.</p>
<p>That includes the frontend skills and similar packages that various seniors have released.</p>
<h2>Memory</h2>
<p>Memory deserves its own mention; I think it's quite important. I used to understand it as just a file system or RAG, which is too shallow. Because of the internship I read Bai Ting's survey paper on memory and tried many memory products, and realized it's not that simple; there are things I still need to dig into.</p>
<h1>Papers</h1>
<p>January 29 was the ICML deadline, and in the days before it my senior led me through a writing sprint. I made a lot of figures and tables and was taught a great deal: how to draw figures, how to lay out tables, how to typeset and write a paper. We even pulled an all-nighter together, which was unforgettable. I still need to study carefully how he structures his writing. Next semester I'll probably need another paper for graduation, but that can wait; it's not urgent right now, so those items can be moved from memory to disk.</p>
<h1>Internship</h1>
<p>After the 8 p.m. paper deadline I went to print my internship materials, and the next day (the 30th, yesterday) I went to a memory startup for an internship. I met a few guys there who were all very nice, learned a lot in that one day, and looked at many memory products plus the tools they were using. But after thinking it over, I don't plan to go back next week; I want to focus on my own things. As for summer or part-time internships, I'd rather set them aside and build my own product first.</p>
<p>I think internships are external yardsticks too. Everyone tells you how important a part-time internship is, how important a summer internship with a return offer is, but the heart of the matter is still whether you yourself can build things and ship them. The same goes for contributing to other open-source projects; I'll pause that for now and try to build my own product first. There's no time left, start creating. I want to apply for jobs and internships with my own portfolio. I think this also fits the trend of AI making junior engineers scarcer. Of course, the fundamentals still have to be solid.</p>
<h1>Be Open</h1>
<p>I've been trying to become more open and to interact with more people. Working with my senior on the paper recently, plus that brief one-day internship, both reinforced the idea: engage with more people, set no limits.</p>
<p>I'm also experimenting with different ways of engaging with people, including wanting to join a Kubernetes release team shadow, which means starting to prepare now.</p>
<p>I also want to train my spoken expression, including using voice-input tools like typeless and autotyper more; I wonder whether that might also help me get over my stutter.</p>
<h1>Exercise</h1>
<p>This morning I did some squats. Gradually doing whatever exercise I can manage also matters, I think. My girlfriend keeps telling me that once a man passes 25 he's basically 65... My senior also stresses the importance of exercise; I have to take it seriously.</p>
<h1>In the end</h1>
<p>Not much else. Writing a summary every once in a while is nice; I typed this very fast and it's mostly rambling.</p>
<p>Lately I've been using Things for todos and light project management, which feels good; I hope it becomes a habit. Telegram is where I save links to articles I've read.</p>
<p>Later I may put together an overview of agent memory products.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2026.1.4 - Agent]]></title>
        <id>https://shemol.tech/2026-1-4-agent</id>
        <link href="https://shemol.tech/2026-1-4-agent"/>
        <updated>2026-01-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[A review of agents]]></summary>
        <content type="html"><![CDATA[<h1>2026.1.4 - Agent</h1>
<p>I tried to replicate the Cursor website, felt defeated, and turned back to looking at agents.</p>
<a href="https://01.me/2025/12/silicon-valley-ai-insights-2025/">https://01.me/2025/12/silicon-valley-ai-insights-2025/</a>
<p>The part on AI coding effectiveness in day-to-day big-company development is pretty funny: only about 15% of the time goes to actually writing code. I really don't like that.</p>
<p>Research code: fine for writing agents, scripts, and the like.</p>
<p>Infrastructure code, including the Linux kernel and consensus protocols, is still not a good fit.</p>
<p>The best practice for vibe coding is decomposition: generate as little code as possible at a time.</p>
<p>Another one is TDD, test-driven development; honestly I find it more reliable than the Ralph approach...</p>
<p>For large refactors the spec matters a lot. I've even seen a paper on writing a Linux kernel file system from a spec... no idea how well it works.</p>
<p>A rigorous evaluation system is also a way of accumulating code data. Everyone knows data matters now; every company builds its own datasets.</p>
<p>The details about the Silicon Valley giants were really eye-opening.</p>
<p>The startup takeaways are also helpful: a startup needs to find its niche in the overall ecosystem. It can't do general-purpose things, because the big companies will all do those; it has to find a very specific vertical.</p>
<p>I believe you must not detach from engineering practice. Only by trying things with your own hands can you get the most genuine feel for a piece of work, whether it's vibe coding or training models; don't go by hearsay, try it yourself.</p>
<h1>Technical practice</h1>
<h2>Context Engineering framework</h2>
<li>System Prompt</li>
<li>Tools</li>
<li>Data Retrieval</li>
<li>Long Horizon Optimizations</li>
<p>Data retrieval: a paradigm shift</p>
<p>The new approach is just-in-time loading (a minimal sketch follows the list below)</p>
<li>Strategy 1: lightweight identifiers</li>
<li>Progressive disclosure</li>
<li>Autonomous exploration</li>
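<p>A minimal sketch of the just-in-time idea as I understand it (the function names and the artifact registry below are hypothetical, not from the talk): the context holds only lightweight identifiers plus one-line descriptions, and the heavy content is loaded only when the agent explicitly asks for it.</p>
<pre><code># Hypothetical sketch: keep lightweight identifiers in context,
# load the heavy content just in time when the agent asks for it.
from pathlib import Path

# What lives in the agent's context: identifiers and short descriptions only.
artifact_index = {
    "search_results_001": "web search results for 'context engineering' (42 KB)",
    "report_draft": "current draft of the research report",
}

# Where the full payloads actually live.
artifact_paths = {
    "search_results_001": Path("artifacts/search_results_001.txt"),
    "report_draft": Path("artifacts/draft.md"),
}

def list_artifacts() -> str:
    """Cheap tool: returns only the identifiers (progressive disclosure)."""
    return "\n".join(f"{name}: {desc}" for name, desc in artifact_index.items())

def read_artifact(name: str) -> str:
    """Expensive tool: loads full content only when explicitly requested."""
    return artifact_paths[name].read_text(encoding="utf-8")
</code></pre>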
<p>All models show performance degradation on long contexts.</p>
<p>Solutions for when the context window is exceeded:</p>
<li>Context compression</li>
<li>The agent maintains explicit memory artifacts that store "working notes": decisions, learnings, state. Retrieved on demand rather than kept in the context.</li>
<li>Sub-agents. Decompose a complex task into specialized agents; each sub-agent gets a focused, clear, narrow context, and the main agent orchestrates and synthesizes the results.</li>
<p>How the Skills mechanism works</p>
<p>Claude can discover and load skills dynamically</p>
<pre><code>pdf/SKILL.md (main file)
├── YAML Frontmatter (name, description)
├── Overview
└── References: "For advanced features, see /reference.md"

pdf/reference.md (detailed reference)
└── Advanced PDF processing features...

pdf/forms.md (specialized feature)
└── PDF form filling instructions...
</code></pre>
<li>Memory</li>
<li>Sub Agents & Collaboration</li>
<li>Dynamic Tool Calls</li>
<li>Code Generation & Execution</li>
<li>Web Search</li>
<li>Agentic Search</li>
<li>Long Running Tasks</li>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Year-End Review 2025]]></title>
        <id>https://shemol.tech/year-review-2025</id>
        <link href="https://shemol.tech/year-review-2025"/>
        <updated>2026-01-01T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[My year-end review for 2025]]></summary>
        <content type="html"><![CDATA[<h1>Year-End Review 2025</h1>
<p>Another year is over. I had planned to write this review on December 31, but yesterday I had one interview in the morning and another in the afternoon, then went to my girlfriend's place, had dinner with her family, and afterwards met up with friends to see in the new year, so there was simply no time. Now that I can finally sit down today, I'm getting straight to it.</p>
<p>I found my posts from the start of 2025 on fedi and pulled the timeline back to January 2025. At the end of 2024 I watched a lot of anime and manga: Cowboy Bebop, an EVA rewatch, Fire Punch, a Chainsaw Man rewatch, a Look Back rewatch. At the start of 2025 I kept watching films and anime.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867598746.png" alt="" />
<p>2025 was my zodiac year. My family wanted to hold a ritual to ward off Tai Sui for me and gave me a talisman, which I wore for less than half a month before leaving it in my dorm for good: partly because I don't believe in it, and partly because I didn't like that fortune-teller. Looking back on 2025, I don't think there were any fate-driven surprises, happy or sad. More often, for everything that happened, if you observe carefully you can find the cause that led to it. So I should build the habit of careful observation and keep connecting the dots.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867599669.png" alt="" />
<p>I tried doing more things hands-on myself, with a few failures along the way, haha.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867600558.png" alt="" />
<p>Some dreams.</p>
<p>In January I tried cosplay for the first time, together with my roommate. I went as Shinji Ikari and he went as Kaworu Nagisa. We wandered around the whole day; exhausting but fun. Then an Asuka cosplayer collected a photo with me.</p>
<p>After we added each other as contacts, I learned this cosplayer goes by 安飒.</p>
<p>I also went to the convention on the 17th, partly because I had bought the ticket long before and it would have been a shame to waste it, and partly because I wanted to see 安飒 again (x</p>
<p>After that we kept chatting on QQ, and once I was back at school we kept meeting up.</p>
<p>She had told me that a friend of hers had her fortune told and was certain she would start a relationship in February. So I went along with the prophecy: on the last day of February I confessed to her, and we got together.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867601527.png" alt="" />
<p>As The Tatami Galaxy puts it: "Nothing is less worth talking about than lovers finally getting together."</p>
<p>My life changed a lot this year: from almost no social life or sense of occasion to planning trips for two, thinking about holidays, and looking at problems from more angles. I went through many lessons: how to trust someone, how to think about growth, how to handle an intimate relationship, and so on... From rapidly losing 5 kg to ending up 5 kg heavier than before, I've come to feel that although an intimate relationship involves two people, you are always, in the end, facing yourself.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867602537.png" alt="" />
<p>So this has been a year of growth on many fronts.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867603510.png" alt="" />
<p>My English improved this year, I suppose, though not enough... Japanese I only studied a little at the start of the year and then dropped...</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867604495.png" alt="" />
<p>In Beijing I met shrik3, the site admin, and we chatted over dinner for two or three hours. A great admin! Thank you for the gift!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867606081.png" alt="" />
<p>In April my thinking was still along these lines, but in the second half of the year I dove head-first into agents and consensus protocols...</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867607192.png" alt="" />
<p>So what should I do?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867608179.png" alt="" />
<p>I didn't watch much anime this year; GQuuuuuuX was one that both my girlfriend and I liked. I was too buried in research over the summer to do any cosplay, and after my girlfriend changed jobs (no weekends off), we rarely went to conventions anymore.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867609122.png" alt="" />
<p>The market swings around April gave me a small taste of gains. From then to the end of the year the market mostly climbed steadily. As a loyal follower of value investing, I'll keep holding.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867610110.png" alt="" />
<p>Whenever I get anxious I apply for internships to test the waters, and at the end of this year it was the same: I got interviews at three small companies. I did two on the last day of December and immediately saw my problems: still algorithms and the standard question banks, and my project experience needs to fit the role. I've found that however nervous I am before an interview, once I'm facing the interviewer the nerves disappear; it's as if I enter an interview flow state, just answering questions and thinking of nothing else. The only downside is that I'm exhausted afterwards.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867611159.png" alt="" />
<p>I mostly met the plan above. The paper work is largely done; only polishing the experiments and the writing remain. With the workload lighter, I can prepare for internships in parallel.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867612269.png" alt="" />
<p>Hahaha... I forgot to mention that after getting together, my girlfriend and I cosplayed together too. Once she went as Rei and I went as 利贺田. The other time was the EVA pop-up art exhibition at Chaoyang Joy City, where I went as Shinji and she went as Asuka.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867613303.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867614401.png" alt="" />
<p>Do I have that kind of talent?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867615302.png" alt="" />
<p>Where is my dream?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867616306.png" alt="" />
<p>I still want to ask this question: would Aaron Swartz be happy about today's LLMs? On one hand, knowledge is easier to obtain; on the other, as Rob Pike said, open-source code gets used for training and in turn produces technical monopolies. That can't be what Aaron wanted to see.</p>
<p>Around late May or early June, my advisor arranged a dinner for me and two senior students. One of them is doing a PhD at NUS. He is extremely outgoing; over dinner he talked about everything from his travel plans to academic gossip... His energy is remarkable: he said that at his physical peak he could pull an all-nighter and still run 1000 m in under three and a half minutes... His memory is excellent too, holding on to all the details of the gossip; when I mentioned that I sometimes go to the cinema near campus, he immediately named two theaters. Those are the two deepest impressions, and also the two biggest gaps I feel between us: my energy isn't that abundant, and my memory isn't great; I often remember only the rough shape of things and forget the specifics.</p>
<p>He has also been through a painful period. Whatever he was willing to share, I listened to; but I also know there must be places that no one besides him can reach.</p>
<p>Not long after that dinner I started doing research on agents and consensus protocols with him, and I stayed busy with it until just recently, when things finally started to take shape. I won't rehash the work in this year-end review; I just hope it turns into a good research result this year.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867617196.png" alt="" />
<p>A fun little episode from when my girlfriend and I first got to know each other, hahaha.</p>
<p>I still want to say something about vibe coding, or AI-assisted programming. Back in 2024 I finished my Open Source Promotion Plan task with the help of Cursor's autocompletion, but honestly I didn't pay much attention to AI-assisted programming after that. Before May, I still believed code should be written by hand, since letting AI assist would miss many of the details of programming.</p>
<p>But while doing a contract project for my advisor, I experimentally let Trae generate the project code, and it wrote tens of thousands of lines in one go. I was stunned. Regardless of whether the code was right or wrong (there were certainly bugs), the fact that it could produce ten thousand lines in one sitting is, I think, itself worth taking seriously.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867618360.png" alt="" />
<p>Everyone knows what happened next. By the end of the year, in general-purpose software (for the Linux kernel and consensus protocols AI's usefulness is still limited), nobody claims to write code purely by hand anymore.</p>
<p>Still, I believe what separates a programmer from a vibe coder is deep command of programming. Good tools can make a programmer ten times more productive, but first you need a programmer's fundamentals.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867619325.png" alt="" />
<p>Hama Sushi was probably the restaurant of the year; we went many times. We tried Sushiro once, spent over three hundred yuan, and didn't dare eat there again.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867620261.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867621202.png" alt="" />
<p>My childhood friend interned in Beijing in July, so we had a brief month together before he went back to school in Australia. I probably can't truly feel the loneliness of being far from home that he describes, and what I can do is limited; I can't be in two places at once, so I just try to respond to him as best I can.</p>
<p>Where the talent lies:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867622183.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867623336.png" alt="" />
<p>Still anxious... but I'm already moving in the right direction, so keep going.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867624285.png" alt="" />
<p>I read Hawstein's article, and something has been sprouting ever since.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867625158.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867626033.png" alt="" />
<p>In August I went home for only a bit over a week, and spent a few days in Weihai, looking at the sea and eating the local food.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867626973.png" alt="" />
<p>Anyway, whichever path you had chosen, you would have ended up the way you are now.</p>
<p>In September my niece started kindergarten. Also, when I went home at Qingming for my cousin's wedding, I learned that my sister was pregnant again; she later told me it was probably a boy, and at the end of December he was born safely. Congratulations to her and my brother-in-law.</p>
<p>In September and October I was grinding algorithm problems, but then research got in the way and it lapsed again, to the point that I couldn't solve the algorithm question in yesterday's interview. Now that things have quieted down, I've started grinding again.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867627990.png" alt="" />
<p>It's also worth looking at the other side of vibe coding.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867629256.png" alt="" />
<p>My phone had become unbearably laggy, so I meant to get my mom a new one; in the end I bought her a 17 Pro Max and took over her OPPO. The OPPO ships with Google services, which I find delightful.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867630264.png" alt="" />
<p>At the end of October I finally finished the Hot 100...</p>
<p>In December I had my thesis proposal defense, and in the end I passed without too much drama. The next milestone is the mid-term defense in December 2026; as long as I publish a paper before then, the reviewers won't give me a hard time.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93-2025_1770867631378.png" alt="" />
<p>Some thoughts.</p>
<p>At the end of December my senior came to Beijing and we had dinner together; the conversation was great, haha, and I got to know him better, and glimpsed a bit of how they handle people and situations.</p>
<p>A big change this year is that much of my time is spent with my girlfriend: Christmas together, New Year's Eve with friends together, even facials together, and so on. So let's keep walking together next year!</p>
<p>I also hope that in 2026 I can publish the paper and land a good internship; let's get past that hurdle before anything else, and keep the foundations solid.</p>
<p>In the new year, keep observing carefully with my eyes and listening carefully with my ears, not letting a single small detail slip by, and keep thinking.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025.12.28]]></title>
        <id>https://shemol.tech/2025-12-28-en</id>
        <link href="https://shemol.tech/2025-12-28-en"/>
        <updated>2025-12-28T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Weekly report for the last week of '25.]]></summary>
        <content type="html"><![CDATA[<h1>2025.12.28</h1>
<p>On Monday I was searching for papers related to the work I'm currently doing. The more I searched, the more papers I found, and I began to wonder whether I could also publish a survey paper along the way. I asked my senior about it, and he said it could be submitted to IJCAI. After organizing the papers, I sent them to him on Tuesday.</p>
<p>On Monday evening I had dinner with three seniors at Tingyuan Jiangnan Cai in Zhongguancun and listened to them talk about many academic matters. My senior is not only academically accomplished but also outgoing, with high emotional intelligence, which makes him very adept at social interactions. We ended up chatting until after ten o'clock and then took the subway home together.</p>
<p>On Wednesday, I picked up front-end development again and submitted pull requests (PRs) for Dify and Cherry Studio. My plan is to start small by writing tests and fixing bugs before gradually delving deeper into these projects.</p>
<p>Thursday was Christmas Day. In the morning my girlfriend and I did a non-horror escape room in Sanlitun (I don't dare try even the mildly scary ones anymore; they're such a trap!). Then we went shopping around Guomao, took some photos together, and had dessert and coffee before heading over to the Wangfujing Central area, which had festive decorations up. We had Japanese food for dinner, met up with my visiting high school friends, and browsed an anime merchandise store. After the mall closed, everyone went to karaoke for two hours and dispersed around 1 a.m., though honestly my energy was pretty low by then during the KTV session; typical low-energy person here!</p>
<p>On Saturday at noon I had a video call with one of those seniors, who advised me to focus solely on coding rather than worry about the writing, since he would likely handle both the survey paper and the INFOCOM poster drafts himself. That evening I met up with university classmates; we caught up on each other's news and chatted happily.</p>
<p>Friday and Saturday were otherwise mostly spent submitting PRs to Dify and Cherry Studio, as mentioned above.</p>
<p>Today, Sunday, I spent sprucing up my blog's design; quite satisfied, haha!</p>
<p>Looking back, this week feels incredibly long; the dinner was only last Monday, yet somehow it seems ages ago...</p>
<a href="https://shemol.tech/">https://shemol.tech/</a>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025.12.21]]></title>
        <id>https://shemol.tech/2025-12-21</id>
        <link href="https://shemol.tech/2025-12-21"/>
        <updated>2025-12-21T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[This week's weekly report]]></summary>
        <content type="html"><![CDATA[<h1>2025.12.21</h1>
<p>Feeling the need to improve how I express myself, I've decided to pick up the habit of weekly journaling again...</p>
<p>Also, I tend to read articles once and move on without anything sinking in; maybe writing things down will help the ideas settle.</p>
<p>Last weekend I went to the AI Maker Summit. It started when I saw Li Bojie post on fedi that he would be giving a talk there. I checked the summit site, saw the ticket cost over three hundred yuan, hesitated, and signed up anyway. When I arrived I found his talk wasn't on the program, and only the next day did I see on fedi that he had apparently gone to the US. On Friday he published a blog post about what he saw of AI in Silicon Valley; I found it fascinating and read it two or three times.</p>
<a href="https://01.me/2025/12/silicon-valley-ai-insights-2025/">https://01.me/2025/12/silicon-valley-ai-insights-2025/</a>
<p>The part I found most personally useful is the best practices for AI coding: learn to split work up, keeping each AI-written task within about 500 lines of code, and use different models to review the code.</p>
<p>Back to the AI Maker Summit. Having sat through the whole thing, I found 岛姐's talk, a talk on post-training, and an investor's talk the most interesting. My favorite line was the investor's: "we have looked at a great many projects..."; it gave a grounded feeling.</p>
<p>During her talk, 岛姐 mentioned an agent memory system that would be presented in the hall next door later on, and said that if large models ever fully solve the memory problem, a project like that simply wouldn't exist.</p>
<p>That was quite inspiring. Combined with Bojie's blog post: an indie developer or a small company needs to know its niche in the ecosystem, making sure the product it ships won't be swallowed as large models keep getting stronger, and also won't simply be built by the big companies. So I think even engineers need to read AI papers and the labs' tech reports to keep track of where large models are.</p>
<p>A day or two after the summit I was pulled into its group chat, and the next day one of the founders posted a Product Hunt announcement.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867667048.png" alt="" />
<p>Since I also believe that reading itself is something AI can't replace, I took a look on Product Hunt. The pitch is reading a book together with famous figures. I tried the app and found it genuinely fun!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867668061.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867669345.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867670502.png" alt="" />
<p>Seeing what Jobs and Munger say in the margins reminded me of books I'd read before, like Poor Charlie's Almanack and the Steve Jobs biography. I realized it doubles as a review of earlier reading and makes it easier to connect the ideas, so after grabbing a 50% discount on Discord I subscribed to the Annual Plan without hesitation; the more you buy, the more you save, after all.</p>
<p>Then I kept reading Tony Dinh's "My indie book" in Readever; toward the end I was getting tired of it and eager to finish. I'll probably finish it tonight or tomorrow.</p>
<p>Just now I saw another interesting example,</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867671927.png" alt="" />
<p>Jason Young put a great deal of effort and money into converting OpenRouter's chat format into a Claude Code-compatible mode, only for OpenRouter to turn around and release a compatible endpoint.</p>
<p>Tony Dinh describes making a similar choice in his book.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/2025.12.21_1770867673193.png" alt="" />
<p>The first three days of this week went to preparing my thesis proposal defense. I thought I was well prepared, yet a reviewer still said it felt like a product manager pitching a product. From classmates who defended after me, I heard one professor gave me a score in the seventies; I don't know yet whether I'll have to defend again at the school level. The results come out next week...</p>
<p>I've decided to write this year's year-end review on the 31st, because every remaining day could still bring a surprise; only the last day counts as the end of the year.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[ByteDance Front-End Engineering Camp Coding Test]]></title>
        <id>https://shemol.tech/bytedance-frontend-eg-camp</id>
        <link href="https://shemol.tech/bytedance-frontend-eg-camp"/>
        <updated>2025-11-10T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[ByteDance Front-End Engineering Camp coding test]]></summary>
        <content type="html"><![CDATA[<h1>ByteDance Front-End Engineering Camp Coding Test</h1>
<p>It was split into multiple-choice questions and programming problems; here is a rough record of the topics.</p>
<h1>Multiple-choice questions</h1>
<p>Mainly data structures and algorithms, computer networking, and HTML/CSS/JS basics.</p>
<h2>Data structures and algorithms</h2>
<p>Flashback to the data structures and algorithms exam from freshman or sophomore year...</p>
<p>Sorting questions: bubble sort, quicksort, and so on. Two or three of them.</p>
<p>One very easy question on algorithmic complexity.</p>
<p>Two or three on binary tree traversal, I think.</p>
<p>And one question about stacks.</p>
<h2>Computer networking</h2>
<blockquote>Which transport-layer protocol provides unreliable transmission?</blockquote>
<p>UDP.</p>
<blockquote>There was also a question about DNS messages</blockquote>
<p>I only remember that option A said QR = 0 means a query and QR = 1 means a response.</p>
<h2>HTML, CSS, JS</h2>
<blockquote>How do you normalize margin and padding across different browsers?</blockquote>
<p>CSS Reset</p>
<pre><code>* {
  margin: 0;
  padding: 0;
}
</code></pre>
<p>There is also Normalize.css, apparently; I need to study that further.</p>
<blockquote>There was also a CSS float question asking which usage is invalid</blockquote>
<p>The options were</p>
<p>A. float: **</p>
<p>B. float: none</p>
<p>C. float: left</p>
<p>D. float: right</p>
<p>In short, the answer is A.</p>
<blockquote>How do you fix a parent element whose height collapses (clearing floats)?</blockquote>
<p>I forget what the options were; I haven't looked into this yet.</p>
<p>1. The <code>::after</code> pseudo-element clearfix.</p>
<p>2. The modern approach: <code>display: flow-root</code>.</p>
<p>3. Trigger a BFC with the <code>overflow</code> property. Drawback: the real job of <code>overflow: hidden</code> is to clip overflowing content, so dropdown menus, shadows, tooltips, or anything else that needs to overflow the parent will be cut off.</p>
<blockquote>Which of these practices counts as an application of BFC?</blockquote>
<p>BFC stands for Block Formatting Context.</p>
<p>Applications of BFC:</p>
<p>1. Clearing internal floats (the most common use): floated children (<code>float: left/right</code>) cause the parent's height to collapse; set <code>overflow: hidden;</code> or <code>display: flow-root;</code> on the <strong>parent</strong>.</p>
<p>2. Preventing vertical margin collapse: in normal flow, the vertical margins (<code>margin-top</code> and <code>margin-bottom</code>) of two <strong>adjacent</strong> sibling block elements "collapse" into the larger of the two. Wrap one of them (or each) in a new parent element and trigger a BFC on that parent (e.g. <code>overflow: hidden;</code>).</p>
<p>3. Adaptive two-column/three-column layouts: one side fixed-width, the other filling the remaining space (e.g. a left menu with <code>float: left</code> and a main content area that fills the rest of the width). Give the left element <code>float: left;</code> (fixed width) and trigger a BFC on the main content area (e.g. <code>overflow: hidden;</code> or <code>display: flow-root;</code>).</p>
<p>The interview question may also be phrased as: "Which of the following properties <strong>can</strong> trigger a BFC?"</p>
<p>Here are the common ways (practices) to trigger one:</p>
<li><code>overflow: hidden;</code> / <code>auto;</code> / <code>scroll;</code> (the classic hack)</li>
<li><code>display: flow-root;</code> (the most modern, semantically correct "BFC trigger")</li>
<li><code>float: left;</code> / <code>right;</code> (a floated element creates a BFC of its own)</li>
<li><code>position: absolute;</code> / <code>fixed;</code> (absolutely positioned elements create a BFC)</li>
<li><code>display: inline-block;</code></li>
<li><code>display: table-cell;</code></li>
<li>Children of flex/grid layouts (<code>flex item</code> / <code>grid item</code>)</li>
<p>Next time an interview question mentions practices like <code>overflow: hidden;</code> or <code>display: flow-root;</code>, if they are there to solve height collapse, margin collapse, or a two-column layout, that is an application of BFC.</p>
<blockquote>requestAnimationFrame in JS</blockquote>
<p>I read the section in the JavaScript red book but didn't fully get it; I'll come back to this later.</p>
<blockquote>The output order of <code>setTimeout</code> and <code>Promise.then()</code> in JS</blockquote>
<p>When executing, JS divides work into three kinds of tasks:</p>
<li>Synchronous code: runs immediately on the call stack.</li>
<li>Microtasks: run immediately after the current synchronous task finishes. Callbacks in <code>Promise.then()</code> and <code>.catch()</code> are the most common microtasks.</li>
<li>Macrotasks: taken from the queue one at a time, only after the synchronous code and all microtasks have finished. Callbacks in <code>setTimeout()</code> and <code>setInterval()</code> are macrotasks.</li>
<pre><code>console.log('1. sync: start');

// schedule a macrotask
setTimeout(() => {
  console.log('2. macrotask: setTimeout 1');
}, 0);

// the Promise executor runs synchronously
new Promise((resolve, reject) => {
  console.log('3. sync: Promise executor');

  // schedule another macrotask from inside the Promise
  setTimeout(() => {
    console.log('4. macrotask: setTimeout 2 (inside the Promise)');
    resolve(); // the Promise becomes fulfilled during this macrotask
  }, 0);
}).then(() => {
  // this .then() is queued as a microtask only once resolve() has been called
  console.log('5. microtask: Promise.then 1');
});

// schedule an immediately-resolved Promise
Promise.resolve().then(() => {
  console.log('6. microtask: Promise.then 2');
});

console.log('7. sync: end');
</code></pre>
<p>1. Run <code>console.log('1. sync: start')</code>.</p>
<p>2. Hit <code>setTimeout 1</code>; push its callback onto the macrotask queue.</p>
<p>3. Hit <code>new Promise</code> and <strong>synchronously run</strong> its <code>executor</code> right away: run <code>console.log('3. sync: Promise executor')</code>.</p>
<p>4. Hit <code>setTimeout 2</code>; push its callback onto the macrotask queue.</p>
<p>5. Hit <code>Promise.resolve().then()</code>; this Promise is already <code>resolved</code>, so its <code>.then</code> callback goes onto the microtask queue.</p>
<p>6. Run <code>console.log('7. sync: end')</code>.</p>
<p>7. Drain the microtask queue.</p>
<p>8. Take the first macrotask and run it.</p>
<p>9. Check the microtask queue again.</p>
<p>10. Run the next macrotask. Inside that same task, <code>resolve()</code> is called; <code>resolve()</code> triggers the associated <code>.then</code> and pushes its callback onto the microtask queue.</p>
<p>11. Drain the microtask queue.</p>
<h1>Programming problems</h1>
<p>I'm still not quite used to ACM-style I/O; I should practice more on Nowcoder.</p>
<blockquote>Given a system of equations and the values of A, B, C, find how many real solutions the system has</blockquote>
<blockquote>The system:</blockquote>
<blockquote>X² + A²Y² + C = 0</blockquote>
<blockquote>Y² + Z² + B = 0</blockquote>
<blockquote>Z² + A = 0</blockquote>
<p>It feels like a math problem... just solve each one out and do the casework; see the sketch below.</p>
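<p>A sketch of that casework in code (my own reconstruction; the input format is assumed, not taken from the problem statement): solve for Z² from the last equation, substitute upward, and multiply the number of real roots available at each step.</p>
<pre><code># My own reconstruction of the casework, not the official solution.
# Count real solutions (X, Y, Z) of:
#   X^2 + A^2*Y^2 + C = 0
#   Y^2 + Z^2 + B = 0
#   Z^2 + A = 0

def count_roots(square: float) -> int:
    """Number of real t with t^2 == square."""
    if square > 0:
        return 2
    if square == 0:
        return 1
    return 0

def count_solutions(A: float, B: float, C: float) -> int:
    z_sq = -A                      # from Z^2 + A = 0
    if z_sq < 0:
        return 0
    y_sq = -B - z_sq               # from Y^2 + Z^2 + B = 0
    if y_sq < 0:
        return 0
    x_sq = -C - A * A * y_sq       # from X^2 + A^2*Y^2 + C = 0
    return count_roots(x_sq) * count_roots(y_sq) * count_roots(z_sq)

if __name__ == "__main__":
    A, B, C = map(float, input().split())   # assumed input format
    print(count_solutions(A, B, C))
</code></pre>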
<blockquote>Among k-digit integers, how many have digits that sum to m?</blockquote>
<blockquote>E.g. for k=2, m=3, the three such numbers are 12, 21 and 30.</blockquote>
<pre><code>import functools

def solve_digit_sum(k: int, m: int) -> int:
    @functools.lru_cache(None)
    def count_sequences(digits: int, target_sum: int) -> int:
        if target_sum < 0:
            return 0
        if target_sum > 9 * digits:
            return 0
        if digits == 0:
            return 1 if target_sum == 0 else 0

        total_ways = 0
        for d in range(10):
            total_ways += count_sequences(digits - 1, target_sum - d)
        return total_ways

    if k <= 0:
        return 0

    final_count = 0
    for d1 in range(1, 10):
        final_count += count_sequences(k - 1, m - d1)
    return final_count
</code></pre>
<p>There was one more problem, roughly:</p>
<p>Define the "cost" of a simple path as the maximum edge weight along it. Given a connected, undirected, weighted simple graph, count how many node pairs (u, v) have a minimum cost between them equal to k.</p>
<p>But it felt too hard, so I'm setting it aside for now.</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Context Engineering for AI Agents with LangChain and Manus]]></title>
        <id>https://shemol.tech/Context-Engineering-for-AI-Agents-with-LangChain-and-Manus</id>
        <link href="https://shemol.tech/Context-Engineering-for-AI-Agents-with-LangChain-and-Manus"/>
        <updated>2025-10-20T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Learning notes on context engineering]]></summary>
        <content type="html"><![CDATA[<h1>Context Engineering for AI Agents with LangChain and Manus</h1>
<p>A few months ago Manus posted a blog about context engineering.</p>
<a href="https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus">https://manus.im/blog/Context-Engineering-for-AI-Agents-Lessons-from-Building-Manus</a>
<p>You don't need all context to live in your agent's message history, so we need context offloading.</p>
<h1>LangChain experience</h1>
<h2>offload context to a file system</h2>
<p>So one of the most popular ideas here is just using a <strong>file system</strong>.</p>
<p>Take the output of a tool message as an example: dump it to the file system and send back to your agent only the minimal piece of information necessary, so it can reference the full context if it needs to, while that full payload (for example, a token-heavy web search result) isn't spammed into your context window in perpetuity.</p>
<p>Offloading context means taking some piece of information, like a token-heavy tool message, not sending it all back to your message list, but dumping it to a file system where it can be retrieved only as needed.</p>
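<p>A minimal sketch of this offloading pattern in Python (the function and file layout are my own illustration, not LangChain's API): the heavy tool result is written to disk and only a short reference goes back into the message list.</p>
<pre><code># Illustrative sketch of context offloading: dump a token-heavy tool result
# to the file system and return only a small reference to the agent.
import json
import uuid
from pathlib import Path

OFFLOAD_DIR = Path("tool_outputs")
OFFLOAD_DIR.mkdir(exist_ok=True)

def offload_tool_result(tool_name: str, result: dict, preview_chars: int = 300) -> dict:
    """Persist the full result; return a compact message for the context window."""
    path = OFFLOAD_DIR / f"{tool_name}_{uuid.uuid4().hex[:8]}.json"
    path.write_text(json.dumps(result, ensure_ascii=False, indent=2), encoding="utf-8")
    preview = json.dumps(result, ensure_ascii=False)[:preview_chars]
    return {
        "role": "tool",
        "content": f"Full result saved to {path}. Preview: {preview}...",
    }

# The agent later re-reads the file (e.g. via a read-file tool) only if it needs to.
</code></pre>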
<h2>Reduce context</h2>
<p>Summarize or compress information to reduce context. Summarizing tool call outputs is one intuitive way to do this. This idea of pruning old tool calls and tool outputs (tool messages) is something Claude has now more or less built into their SDK.</p>
<p>Cognition (an agent company) also talks about the idea of summarizing at agent-to-agent handoffs.</p>
<h2>Retrieve Context</h2>
<p>Claude Code, for example, only uses the file system and simple search tools, notably glob and grep. So there are different ways to retrieve context on demand for your agent.</p>
<p>Indexing with something like semantic search, or a file system with simple file-search tools: both can be highly effective.</p>
<h2>Context isolation</h2>
<p>Context isolation is major, in particular splitting context across multiple agents.</p>
<p>Each sub-agent has its own context window and sub-agents allow for separation of concerns.</p>
<h2>Caching Context</h2>
<p>LangChain open deep research:</p>
<a href="https://github.com/langchain-ai/open_deep_research">https://github.com/langchain-ai/open_deep_research</a>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Context+Engineering+for+AI+Age_1770869899254.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Context+Engineering+for+AI+Age_1770869900279.png" alt="" />
<p>It has three phases: scoping the research, the research phase itself using basically a multi-agent architecture, and then a final one-shot writing phase. We use offloading, so we basically create a brief to scope our research plan.</p>
<p>We offload that so we don't just keep it in the context window, because that context window is going to get peppered with other things.</p>
<p>We offload it so it's saved independently; it can be accessed, in our case, from the LangGraph state, but it could also be from a file system, it's the same idea.</p>
<p>So you create a research plan, you offload it, and it's always accessible. You go do a bunch of work, then pull it back in on demand, putting it at the end of your message list so it's readily available to your agent to perform, for example, the writing phase.</p>
<p>We use offloading, as you can see, to help steer the research and writing phases. We use reduction to summarize observations from token-heavy search tool calls; that's done inside research itself.</p>
<p>And we use context isolation across sub-agents within research itself. This is a summary of these various ideas across a bunch of different projects.</p>
<h1>Manus experience</h1>
<p>Instead of building specialized models too early, startups really should lean on general models and context engineering for as long as possible.</p>
<h2>Context Reduction: Compaction vs. Summarization</h2>
<p>For compaction: in Manus, every tool call and tool result actually has two different formats, a full format and a compact one.</p>
<p>The compact version strips out any information that can be reconstructed from the file system or external state. For example, say you have a tool that writes to a file; it probably has two fields, a path and a content field.</p>
<p>But once the tool returns, you can be sure the file already exists in the environment. So in the compact format we can safely drop the super long content field and keep just the path.</p>
<p>If your agent is smart enough, whenever it needs to read that file again it can simply retrieve it via the path. So no information is truly lost; it's just externalized.</p>
<p>We think this kind of reversibility is crucial, because agents chain predictions based on previous actions and observations, and you never know which past action will suddenly become super important 10 steps later.</p>
<p>You cannot predict it. So compaction gives you a reversible reduction.</p>
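<p>A rough sketch of what such a reversible compaction could look like (the field names are my own guesses, not Manus internals): the compact form drops anything that can be reconstructed from the environment and keeps only the identifier.</p>
<pre><code># Illustrative sketch: full vs. compact format for a file-writing tool call.
# The compact form keeps only what cannot be reconstructed from the file system.
full_record = {
    "tool": "write_file",
    "args": {
        "path": "/workspace/report.md",
        "content": "# Quarterly report ...",   # potentially thousands of tokens
    },
    "result": "ok",
}

def compact(record: dict) -> dict:
    """Drop the heavy 'content' field; the path is enough to recover it later."""
    return {
        "tool": record["tool"],
        "args": {"path": record["args"]["path"]},
        "result": record["result"],
        "note": "content omitted; read the file at 'path' if it is needed again",
    }
</code></pre>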
<p>Of course, compaction only takes you so far. Eventually your context will still grow and hit the ceiling, and that's when we combine compaction with more traditional summarization, but we do it very carefully.</p>
<p>For example, before summarizing, we might offload key parts of the context into files. Sometimes we do it even more aggressively: we can dump the entire pre-summary context as a text file or simply a log file into the file system, so we can always recover it later.</p>
<p>And as Lance just mentioned, some people just use glob and grep; glob also works for log files. So if the model is smart enough, it even knows how to retrieve that pre-summarized context.</p>
<p>The difference here is that <strong>compaction is reversible, but summarization isn't</strong>. Both reduce context length, but they behave very differently.</p>
<p>To make both methods coexist, we have to track some context-length thresholds. At the top you have your model's hard context limit, say 1 million tokens, pretty common today.</p>
<p>But in reality most models start degrading much earlier, typically around 200K, and you'll begin to see what we call context rot: repetitions, slower inference, degraded quality.</p>
<p>So, by doing a lot of evaluation, it's very important to identify that pre-rot threshold, typically 128K to 200K, and use it as the trigger for context reduction.</p>
<p>Whenever your context size approaches it, you have to trigger context reduction, but starting from compaction, not summarization.</p>
<p>And compaction doesn't mean compressing the entire history. We might compact the oldest 50% of tool calls while keeping the newer ones in full detail, so the model still has fresh few-shot examples of how to use tools properly.</p>
<p>Otherwise, in the worst case, the model will imitate the behavior and output the compact format with missing fields, and that's totally wrong.</p>
<p>After compaction, we check how much free context we actually gained from the compaction operation. Sometimes, as in this graph, after multiple rounds of compaction the gain is tiny, because even the compact form still uses context.</p>
<p>That's when we go for summarization, but keep in mind that when summarizing we always use the full version of the data, not the compact one.</p>
<p>And we still keep the last few tool calls and tool results in full detail, not summarized, because that lets the model know where it left off and continue more smoothly.</p>
<p>Otherwise, you'll sometimes see the model change its style and tone after summarization, and we found that keeping a few tool call / tool result examples really helps.</p>
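<p>A sketch of the trigger logic described above (the threshold numbers come from the talk; the helper functions are placeholders I made up): compact the oldest half of tool calls when the pre-rot threshold is crossed, and fall back to summarization only if compaction doesn't free enough space.</p>
<pre><code># Illustrative policy sketch: compaction first, summarization as a last resort.
PRE_ROT_THRESHOLD = 160_000   # the talk suggests somewhere around 128K-200K tokens
MIN_GAIN = 20_000             # made-up number: how much space compaction must free

def reduce_context(messages, count_tokens, compact_oldest_half, summarize_full_history):
    """Apply reduction only when the context approaches the pre-rot threshold."""
    before = count_tokens(messages)
    if before < PRE_ROT_THRESHOLD:
        return messages                     # nothing to do yet

    # Step 1: reversible compaction of the oldest ~50% of tool calls,
    # keeping the most recent ones in full detail as few-shot examples.
    messages = compact_oldest_half(messages)
    if before - count_tokens(messages) >= MIN_GAIN:
        return messages

    # Step 2: irreversible summarization, run over the full (pre-compaction)
    # records, while keeping the last few tool calls/results verbatim.
    return summarize_full_history(messages)
</code></pre>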
<h2>Context Isolation: Communicating vs. Sharing Memory</h2>
<p>Cognition's blog warns against multi-agent setups, because when you have multiple agents, syncing information between them becomes a nightmare.</p>
<strong>Multi-process and multi-thread coordination</strong> was a classic challenge in the early days of computer programming, and I think we can borrow some wisdom from it.
<p>In the Go programming language community there's a famous quote from the gophers: "Do not communicate by sharing memory; instead, share memory by communicating."</p>
<a href="https://chatgpt.com/share/68f4f8c3-baac-8004-9cf7-421375260909">https://chatgpt.com/share/68f4f8c3-baac-8004-9cf7-421375260909</a>
<p>Of course, this isn't directly about agents, and it's sometimes even wrong for agents, but the important thing is that it highlights two distinct patterns: by communicating, or by sharing memory.</p>
<p>If we translate the term memory here into context, the parallel is pretty clear. "By communicating" is the easier one to understand, because it is the classic sub-agent setup.</p>
<p>For example, the main agent writes a prompt, the prompt is sent to a sub-agent, and the sub-agent's entire context consists only of that instruction.</p>
<p>We think that if a task has a short, clear instruction and only the final output matters, say searching a codebase for a specific snippet, then just use the communication pattern and keep it simple.</p>
<p>Because the main agent doesn't care how the sub-agent finds the code, it only needs the result.</p>
<p>And this is what Claude Code does, typically using its task tool to delegate a separate, clear task to a sub-agent.</p>
<p>For more complex scenarios, in contrast, "by sharing memory" means that the sub-agent can see the entire previous context, meaning all the tool usage history, but the sub-agent has its own system prompt and its own action space.</p>
<p>For example, imagine a deep research scenario: the final report depends on a lot of intermediate searches and notes. In that case you should consider using the shared-memory pattern, or in our language "by sharing context," because even though you could save all those notes and searches into files and make the sub-agent read everything again, you would just be wasting latency and context.</p>
<p>And if you count the tokens, you might be using even more tokens to do that. So we think that for scenarios which require the full history, just use the shared-memory pattern.</p>
<p>But be aware that sharing context is kind of expensive, because each sub-agent has a larger input to prefill, so you spend more on input tokens, and since the system prompt and the action space differ, you cannot reuse the KV cache, so you pay the full price.</p>
<h2>Context Offloading: Layered Action Space</h2>
<p>When people say offload, they usually mean moving parts of the working context into external files.</p>
<p>But as the system grows, especially if you decide to integrate MCP one day, you realize that the tools themselves can also take up a lot of context, and having too many tools in context leads to confusion.</p>
<p>We call it context confusion: the model might call the wrong tools or even non-existent ones.</p>
<p>So we have to find a way to also offload the tools. A common approach right now is doing dynamic RAG on tool descriptions, for example loading tools on demand based on the current task or the current state.</p>
<p>But that causes two issues. First, since tool definitions sit at the front of the context, your KV cache resets every time.</p>
<p>And most importantly, the model's past calls to removed tools are still in the context, so it might fool the model into calling invalid tools or using invalid parameters.</p>
<p>To address this, Manus is experimenting with a new layered action space. Essentially, we let Manus choose from three different levels of abstraction: number one, function calling; number two, sandbox utilities; and number three, packages and APIs.</p>
<p>Let's go deeper into these three layers of the layered action space, starting from level one, function calling. This is the classic, everyone knows it. It is schema-safe thanks to constrained decoding, but we all know the downsides.</p>
<p>For example, as we mentioned, it breaks the cache, and too many tools may cause confusion.</p>
<p>So Manus uses a fixed number of atomic functions, for example reading and writing files, executing shell commands, searching files and the internet, and some browser operations.</p>
<p>These atomic functions have super clear boundaries, and they can work together to compose much more complex workflows.</p>
<p>Then we offload everything else to the next layer, the sandbox utilities. As you know, each Manus session runs inside a full virtual machine sandbox running our own customized Linux system, which means Manus can use shell commands to run pre-installed utilities that we developed for Manus.</p>
<p>For example, we have format converters, speech recognition utilities, and even a special one we call the MCP CLI, which is how we call MCP.</p>
<p>We do not inject MCP tools into the function-calling space. Instead, we do everything inside the sandbox through the command-line interface.</p>
<p>Utilities are great because you can add new capabilities without touching the model's calling space; they're just commands pre-installed on your computer.</p>
<p>And if you're familiar with Linux, you always know how to find those new commands, and you can even run --help to figure out how to use a new tool.</p>
<p>Another good thing is that for larger outputs, they can just write to files or return the result in pages.</p>
<p>And you can use all those Linux tools like grep, cat, less, more to process the results on the fly. The trade-off is that it's great for large outputs, but not so good for low-latency back-and-forth interactions with the front end.</p>
<p>Because you always have to visualize the interactions of your agent and show them to the user.</p>
<p>Then we have the final layer, which we call packages and APIs. Here Manus can write Python scripts to call pre-authorized APIs or custom packages.</p>
<p>For example, Manus might use a 3D design library for modeling or call a financial API to fetch market data. We've actually purchased all these APIs on behalf of the user and pay for them.</p>
<p>It's included in the subscription. So we basically have a lot of API keys pre-installed in Manus, and Manus can access these APIs using the keys.</p>
<p>I think these are perfect for tasks that require lots of in-memory computation but don't need to push all that data into the model context.</p>
<p>For example, if you're analyzing a stock's entire year of price data, you don't feed the model all the numbers. Instead, you let the script compute it and only put the summary back into the context.</p>
<p>And since code and APIs are super composable, you can actually chain a lot of things in one step.</p>
<p>For example, with a typical API, you can get the city name, get the city ID, and get the weather, all in one Python script.</p>
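<p>A toy version of that chaining idea (the weather API, its endpoints, and the key handling are entirely made up for illustration): several dependent lookups happen inside one script, and only the small final line of output would go back into the model's context.</p>
<pre><code># Illustrative sketch of chaining API calls in one script so that only the
# final, small result re-enters the model's context. The endpoints are hypothetical.
import os
import requests

BASE = "https://api.example-weather.test"
KEY = os.environ["WEATHER_API_KEY"]   # a pre-authorized key, as described in the talk

def weather_for(city_name: str) -> str:
    cities = requests.get(f"{BASE}/cities", params={"q": city_name, "key": KEY}).json()
    city_id = cities[0]["id"]                                   # step 1: name -> id
    report = requests.get(f"{BASE}/weather/{city_id}", params={"key": KEY}).json()
    return f"{city_name}: {report['condition']}, {report['temp_c']} C"   # step 2: id -> weather

if __name__ == "__main__":
    print(weather_for("Beijing"))   # only this one line needs to go back to the agent
</code></pre>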
<p>There's also a paper from one of my friends called CodeAct; a lot of people were discussing it. I think it's the same idea, because code is composable and can do a lot of things in one step.</p>
<p>But it's not schema-safe; it's very hard to do constrained decoding on code.</p>
<p>So you should find the right scenario for each of these. For us, as mentioned, everything that can be handled inside a compiler or interpreter runtime, we do using code.</p>
<p>Otherwise, we use sandbox utilities or function calls.</p>
<p>And the good thing is that, from the model's point of view, all three levels still go through standard function calls, so the interface stays simple, cache-friendly, and orthogonal across functions.</p>
<p>Because, as we mentioned for sandbox utilities, you're still accessing those tools using the shell function.</p>
<p>And if you're using APIs or third-party packages, you're just using the file function to write or read a file and then executing it with the shell function.</p>
<p>So it does not add overhead for the model; it's all things the models were trained on and are already familiar with.</p>
<h2>Connecting the Five Dimensions and Avoiding Over-engineering</h2>
<p>Let's zoom out and connect the five dimensions: offload, reduce, retrieve, isolate, and cache. You can see that they are not independent.</p>
<strong>Offload and retrieve enable more efficient reduction, and stable retrieval makes isolation safe; but isolation also slows context growth and reduces the frequency of reduction.</strong>
<p>However, more isolation and reduction also affect cache efficiency and the quality of the output. So at the end of the day, I think context engineering is a science and an art that requires a perfect balance between multiple potentially conflicting objectives.</p>
<p>I want to leave you with maybe one final thought, and it's kind of the opposite of everything I just said, which is: <strong>please avoid context over-engineering</strong>.</p>
<p>Looking back at the past six or seven months since Manus launched, the biggest leaps we've seen didn't come from adding more fancy context-management layers or clever retrieval hacks.</p>
<p>They all came from simplifying, from removing unnecessary tricks and trusting the model a little more.</p>
<p>Every time we simplified the architecture, the system got faster, more stable, and smarter, because the goal of context engineering is to make the model's job simpler, not harder.</p>
<p>So if you like take one thing from today, I think it should be <strong>build less and understand more</strong>.</p>
<h1>Q&A</h1>
<h2>Q&A - Shell Tools and Sandboxing</h2>
<p>Q: How does the LLM call the various shell tools? How does it know which tools exist and how to invoke them?</p>
<p>Maybe you can explain a little bit about the multi-tier sandboxing setup that you use with Manus.</p>
<p>A: First of all, we have a hint in the system prompt telling Manus that there are a lot of pre-installed command-line utilities located in a specific folder.</p>
<p>And for the most frequently used ones, we already inject them into the system prompt, but it's super compact. We don't tell the agent how to use the tools.</p>
<p>We only list them, and we tell the agent that it can use the <code>--help</code> flag safely, because all the utilities are developed by our team and they share the same format.</p>
<h2>Q&A - Indexing vs. File System for Context Retrieval</h2>
<p>Q: I know you talked a lot about using the file system. What's your take on indexing, and do you spin up vector stores on the fly if the context you're working with gets sufficiently large?</p>
<p>A: There's no right and wrong in this space, as you mentioned, but at Manus we do not use index databases, because right now every sandbox in a Manus session is a new one and users want to interact with things fast, so we don't have time to build an index on the fly.</p>
<p>So we're more like Claude Code: we rely on grep and glob. But if you're considering building something like longer-term memory, or you want to integrate an enterprise knowledge base, you still have to rely on an external vector index, because it's really about the amount of information you need to access.</p>
<p>Manus operates in a sandbox, and a coding agent operates in a codebase, so it depends on the scale.</p>
<p>Q: So let's say I'm a user, I have my Manus account, and I interact with Manus across many sessions. Do you have a notion of memory?</p>
<p>Claude Code has CLAUDE.md files that persist across all its different sessions. How about you? How do you handle long-term memory?</p>
<p>A: In Manus we have a concept called knowledge, which is kind of like explicit memory.</p>
<p>For example, you can tell Manus: hey, remember that every time I ask for something, the deliverable should be in Excel. It's not automatically inserted into some memory.</p>
<p>It pops up a dialogue and says: here's what I learned from our previous conversation, would you like to accept it or reject it? So this is the explicit kind; it requires user confirmation.</p>
<p>But we're also exploring ways to do it more automatically. A pretty interesting thing about agents is that, compared to chatbots, users correct the agent much more often.</p>
<p>For example, a common mistake Manus makes is in data visualization: if you're using Chinese, Japanese, or Korean, a lot of the time there will be font issues and errors in the rendered visualizations.</p>
<p>So the user will often say: hey, you should use a proper CJK font. For these kinds of things, different users will make the same correction, and we want to find a way to leverage that kind of collective feedback.</p>
<p>That's what we call a self-improving agent with online learning, but in a parameter-free way.</p>
<h2>Q&A - Adapting to Evolving Models</h2>
<p>Q: You mentioned towards the end of your talk that you gained a lot from removing things, and a lot of that is probably because the models are getting better.</p>
<p>So model capabilities are increasing, and you can remove scaffolding over time. How do you think about this?</p>
<p>This is one of the biggest challenges I've faced: over time the model gets better and I can remove certain parts of my scaffolding, so you're building on top of a foundation where the water keeps rising.</p>
<p>Do you revisit your architecture every few months with new releases and just delete things as the models get better? How do you approach that problem?</p>
<p>A: This is a super good question, because we have actually already refactored Manus five times; we launched Manus in March and it's only October now. Five times.</p>
<p>We think you cannot stop, because models are not only improving, they are changing: their behaviors are changing over time.</p>
<p>One way is to work closely with the model providers, but we also have an internal theory for how we evaluate and design our agent architecture.</p>
<p>I covered it a little on Twitter before. Basically, we don't care about performance on a static benchmark.</p>
<p>Instead, we fix the agent architecture and switch between models.</p>
<p>If your architecture gains a lot from switching from a weaker model to a stronger model, then your architecture is more future-proof, because tomorrow's weaker model might be as good as today's stronger model.</p>
<p>So we think switching between weaker and stronger models can give you early signals of what will happen next year and give you time to prepare your architecture.</p>
<p>For Manus, we do this kind of review every one or two months, and we do research internally using open-source models, and sometimes early access to proprietary models, to prepare the next release even before the next model launches.</p>
<h2>Q&A - Data Storage Formats</h2>
<p>Q: What about best practices or considerations for the format you store data in?</p>
<p>Like markdown files, plain text, logs; anything you prefer in particular?</p>
<p>How do you think about file formats for that?</p>
<p>A: I think it's not really about plain text versus markdown; we always prioritize line-based formats, because they allow the models to use grep or read from a range of lines.</p>
<p>Also, markdown can sometimes cause trouble. Models are trained to use markdown really well, and some models (I don't want to name names) often output too many bullet points if you use markdown too much.</p>
<p>So actually we want to use more plain text.</p>
<h2>Q&A - Prompting for Summarization</h2>
<p>Q: How about the topic of compaction versus summarization?</p>
<p>Let's hit summarization first. This is an interesting one I've been asked about a lot: how do you prompt to produce good summaries?</p>
<p>So for example, summarization, as you said, is irreversible, so if you don't prompt it properly, you can actually lose information.</p>
<p>The best answer I came up with is just tuning your prompt <strong>for high recall</strong>, but how do you approach this?</p>
<p>How do you think about prompting for summarization?</p>
<p>A: We tried a lot of prompt optimization for summarization, but it turns out a simple approach works really well: don't use a free-form prompt and let the AI generate everything.</p>
<p>Instead, define a kind of schema. It's just a form with a lot of fields, and you let the AI fill them in.</p>
<p>For example: here are the files I've modified, here's the goal of the user, here's where I left off.</p>
<p>If you use this kind of more structured schema, at least the output is stable and you can iterate on it; just don't use free-form summarization.</p>
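<p>A small sketch of what such a fill-in-the-form summary schema could look like (the field names are my own guesses in the spirit of the examples given):</p>
<pre><code># Illustrative sketch: a fixed schema for context summarization instead of a
# free-form prompt. The model is asked to fill in every field.
SUMMARY_SCHEMA = {
    "user_goal": "What is the user ultimately trying to achieve?",
    "modified_files": "Files created or changed so far, with paths.",
    "key_decisions": "Important decisions made and why.",
    "open_questions": "Anything unresolved or blocked.",
    "next_step": "Exactly where the work left off and what to do next.",
}

def build_summary_prompt(history_text: str) -> str:
    fields = "\n".join(f"- {name}: {desc}" for name, desc in SUMMARY_SCHEMA.items())
    return (
        "Summarize the conversation below by filling in every field of this form. "
        "Do not add extra sections.\n\n"
        f"{fields}\n\n--- conversation ---\n{history_text}"
    )
</code></pre>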
<h2>Q&A - Compaction of Search Results</h2>
<p>Q: How about compaction, then?</p>
<p>Actually, I want to make sure I understood that. With compaction, say it's a search tool: you have the raw search tool output as your raw message, and the compaction would just be a file name or something, is that right?</p>
<p>A: Yeah, it is. It's not only the tool call; it also applies to the result of the tool.</p>
<p>Interestingly, we found that almost every action in Manus is reversible if you can offload it to the file system or an external state.</p>
<p>For most of these tasks, you already have a unique identifier. For file operations, of course, you have the file path.</p>
<p>For browser operations you have the URL, and even for search actions you have the query.</p>
<p>So it's naturally already there.</p>
<p>Lance: I just want to hit that again, because I've had this problem a lot. Say I'm an agent that uses search: I perform a search and it returns a token-heavy tool result.</p>
<p>I don't want to return that whole tool message to the agent.</p>
<p>I've done things like summarization or compaction and sent the summary back, but how do you approach that? You might want all that information to be accessible for the agent's next decision, but you don't want that huge context block to live inside your message history.</p>
<p>So how do you approach that? You could send the whole message back but then remove it later; that's what Claude does now.</p>
<p>You could do a summarization first and send the summary over. Or you could send everything and then do compaction, so that later on you don't have the whole context in your message history.</p>
<p>You only have a link to the file. How do you think about that specifically, if you see what I'm saying?</p>
<p>A: It depends on the scenario. By complex search, I mean it's not just one query.</p>
<p>For example, you have multiple queries and you want to gather some important things and drop everything else.</p>
<p>In this case, I think we should use sub-agents, or what we internally call agent-as-tool. From the model's perspective, it's still a kind of function, maybe called advanced search.</p>
<p>It's a function call for advanced search, but what it triggers is actually another sub-agent; that sub-agent is more like an agentic workflow with a fixed output schema, and that is the result returned to the agent.</p>
<p>But for other, simpler kinds of search, for example just searching Google, we use the full-detail format, append it into the context, and rely on compaction.</p>
<p>We also always instruct the model to write down intermediate insights or key findings into files, in case compaction happens earlier than the model expected.</p>
<p>If you do this well, you actually don't lose much information through compaction, because those old tool calls are often irrelevant after a while.</p>
<h2>Q&A - Agent-to-Agent Communication & MapReduce</h2>
<p>Q: I like the idea of agent-as-tool. We do that quite a bit and it is highly effective, but it brings up another interesting point that you referenced a little bit: agent-to-agent communication.</p>
<p>How do you address that?</p>
<p>Walden Yan from Cognition had a very nice blog post describing this as a major problem they have with Devin.</p>
<p>So, communication between agents: how do you think about that problem, ensuring sufficient information is transferred without, like you said, overloading the prefill of the sub-agent with too much context?</p>
<p>A: We launched a feature called Wide Research about a month ago. Internally we call it agentic MapReduce, because we took inspiration from the design of MapReduce.</p>
<p>It's kind of special to Manus because there's a full virtual machine behind the session, so one way we pass information or context from the main agent to a sub-agent is by sharing the same sandbox.</p>
<p>The file system is already there, so you only need to pass the relevant paths.</p>
<p>Sending information to a sub-agent is not that hard. The more complex part is getting correct output back from the different agents.</p>
<p>The trick we use is that every time the main agent wants to spawn a new sub-agent, or maybe ten sub-agents, it has to define the output schema.</p>
<p>From the sub-agent's perspective there is a special tool called <code>submit_result</code>, and we use constrained decoding to ensure that what the sub-agent submits back to the main agent follows the schema defined by the main agent.</p>
<p>So you can imagine that this kind of MapReduce operation generates a kind of spreadsheet, and the spreadsheet is constrained by the schema.</p>
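<p>A sketch of that contract, under the assumption that the schema travels as JSON Schema and that each sub-agent's only exit path is a <code>submit_result</code> call constrained to it (all names below are illustrative):</p>
<pre><code>package main

import "fmt"

// SubAgentTask is what the main agent hands to each sub-agent it spawns.
type SubAgentTask struct {
    Instruction  string         // what this sub-agent should do
    OutputSchema map[string]any // JSON Schema defined by the main agent
}

// SubmitResult is the payload of the sub-agent's submit_result tool; constrained
// decoding would force it to validate against the task's OutputSchema.
type SubmitResult struct {
    Result map[string]any
}

// runSubAgent is a stub standing in for a full agentic workflow that runs in
// the shared sandbox and decodes its final answer against t.OutputSchema.
func runSubAgent(t SubAgentTask) SubmitResult {
    return SubmitResult{Result: map[string]any{"task": t.Instruction, "status": "done"}}
}

// spawn fans the tasks out and collects one schema-shaped "row" per sub-agent,
// which is what makes the combined output look like a spreadsheet.
func spawn(tasks []SubAgentTask) []SubmitResult {
    rows := make([]SubmitResult, 0, len(tasks))
    for _, t := range tasks {
        rows = append(rows, runSubAgent(t))
    }
    return rows
}

func main() {
    schema := map[string]any{"type": "object", "required": []string{"task", "status"}}
    tasks := []SubAgentTask{
        {Instruction: "profile company A", OutputSchema: schema},
        {Instruction: "profile company B", OutputSchema: schema},
    }
    fmt.Printf("%+v\n", spawn(tasks))
}
</code></pre>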
<p>Lance: That's an interesting theme that seems to come up a lot in how you design Manus: you use schemas and structured outputs both for summarization and for this agent-to-agent communication.</p>
<p>So it's like using schemas as contracts, between the main agent and a sub-agent or between a tool and your agent, to ensure that sufficient information is passed in a structured, complete way; when you're doing summarization, you use a schema as well.</p>
<h2>Q&A - Model Choice and Open Models</h2>
<p>I'm poking around some other interesting questions here. Any thoughts on models? I think you use Anthropic, but do you work with open models?</p>
<p>Do you do fine-tuning? You talked a lot about working with the KV cache, so maybe open models for that.</p>
<p>How do you think about model choice?</p>
<p>A: Actually we don't use any open-source models right now, and interestingly it's not about quality, it's about cost.</p>
<p>People often think that open-source models can lower the cost, but if you're at the scale of Manus and you're building a real agent, where the input is way longer than the output, then the KV cache is super important.</p>
<p>And a distributed KV cache is very hard to implement with open-source solutions.</p>
<p>The frontier LLM providers have much more solid infrastructure for distributed caching globally.</p>
<p>So if you do the math, at least for Manus, we find that using these flagship models can sometimes be even cheaper than using open-source models.</p>
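<p>As a rough illustration (the numbers are assumed for the sake of the example, not Manus's actual figures): suppose a turn has 100,000 input tokens and 1,000 output tokens. With a provider whose cache reads are billed at about one tenth of the normal input price and whose distributed cache reliably hits, the turn costs roughly the equivalent of 100,000 × 0.1 + 1,000 ≈ 11,000 full-price tokens. A self-hosted open model without a working distributed KV cache pays the full prefill cost for all 100,000 input tokens on every turn, plus the engineering cost of building that cache layer, which is how the flagship model can end up cheaper.</p>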
<p>And right now we're not only using Anthropic, of course.</p>
<p>Anthropic's models are the best choice for agentic tasks, but we're also watching the progress of Gemini and of OpenAI's models.</p>
<p>I think right now these frontier labs are not converging in direction. For example, if you're doing coding, of course you should use Claude.</p>
<p>If you want to do more multimodal things, you should use Gemini.</p>
<p>And OpenAI's models are super good at complex math and reasoning. So I think for application companies like us, one of our advantages is that we do not have to build on top of only one model.</p>
<p>You can do task-level routing, or maybe even subtask- or step-level routing, if you can account for the KV cache invalidation that switching models causes.</p>
<p>So I think it's an advantage for us, and we do a lot of evaluation internally to know which model to use for which subtask.</p>
<p>Lance: With the KV cache, what specific features from the providers are you using for cache management?</p>
<p>I know Anthropic has prompt caching, for example.</p>
<h2>Q&A - Tool Selection and Layered Action Space (Revisited)</h2>
<p>Q: Tool selection is a good one. So you were talking about this: you don't use indexing of tool descriptions and fetching tools on the fly based on semantic similarity.</p>
<p>How do you handle that? What's the threshold for too many tools?</p>
<p>Tool choice is a classic. How do you think about it?</p>
<p>A: First of all, it depends on the model. Different models have different capacity for tools, but I think a rule of thumb is to try not to include more than about 30 tools.</p>
<p>That's just a rough number in my mind, but if you're building what we call a general AI agent like Manus, you want to make sure those native functions are super atomic.</p>
<p>So actually there are not that many atomic functions that we need to put inside the action space.</p>
<p>For Manus, right now we only have around 10 or 20 atomic functions, and everything else lives in the sandbox.</p>
<p>We don't have to pull things in dynamically.</p>
<p>Lance: Let's explain that a little bit more. So you have, let's say, 10 tools that can be called directly by the agent, but then, like you said, the agent can also choose to write a script and then execute it.</p>
<p>That expands its action space hugely without giving it an independent tool for each possible script, which would of course be insane.</p>
<p>So a very general tool to write a script and then run it does a lot.</p>
<p>A: Why are we super confident calling Manus a general agent?</p>
<p>Because it runs on a computer, and computers are Turing complete. The computer is one of the best inventions of humankind.</p>
<p>Theoretically, an agent can do anything that a junior intern can do using a computer.</p>
<p>So with the shell tool and the text editor, we think it's already complete, and you can offload a lot of things to the sandbox.</p>
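<p>A sketch of what that layered action space might look like; the tool names below are illustrative, not Manus's actual tool list:</p>
<pre><code>package main

import "fmt"

// Tool is one of the few atomic functions exposed to the model directly.
type Tool struct {
    Name        string
    Description string
}

// A small, fixed set of atomic tools; everything else is reached through the
// sandbox by writing a script and running it with shell_exec.
var atomicTools = []Tool{
    {"shell_exec", "run a command in the sandbox (scripts, CLIs, package managers)"},
    {"file_read", "read a file from the sandbox file system"},
    {"file_write", "create or edit a file in the sandbox"},
    {"browser_open", "open a URL in the sandboxed browser"},
    {"web_search", "run a web search and return result snippets"},
}

func main() {
    for _, t := range atomicTools {
        fmt.Printf("%-12s %s\n", t.Name, t.Description)
    }
}
</code></pre>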
<p>Lance: You mentioned code agents. My understanding is that in those, the model will always produce a script, which is then run inside a code sandbox, so every tool call is effectively a script that's generated and run.</p>
<p>It sounds like you do a hybrid, where sometimes Manus can just call tools directly, but other times it can choose to do something in the sandbox. Is that right?</p>
<p>A: I think this is super important, because we actually tried to use a CodeAct-style approach entirely for Manus, but the problem is that if you're using code, you cannot leverage constrained decoding, and things can go wrong.</p>
<p>Code actions do have some special use cases, as I mentioned earlier in the slides, for example processing a large amount of data.</p>
<p>You don't have to put everything into the tool result.</p>
<p>You keep it inside, say, the runtime memory of Python, and you only return the result back to the model.</p>
<p>So we think you should do it in a hybrid way.</p>
<h2>Q&A - Planning and To-Do Lists</h2>
<p>Q: Tell me about planning. I know Manus has this to-do tool, or it generates a to-do list at the start of tasks.</p>
<p>A: At the beginning Manus used that <code>todo.md</code> paradigm.</p>
<p>I don't want to use the word stupid, but it actually wastes a lot of turns.</p>
<p>Back in maybe March or April, if you checked the logs of some Manus tasks, maybe one-third of the actions were about updating the to-do list.</p>
<p>It wastes a lot of tokens.</p>
<p>So right now we use more structured planning. For example, if you use Manus, there's a planner at the bottom of the system.</p>
<p>Internally it's also a kind of tool; we implemented it using the agent-as-tool paradigm, so there's a separate agent that manages the plan.</p>
<p>So in the latest version of Manus we are no longer using that <code>todo.md</code> thing.</p>
<p><code>todo.md</code> still works and can generate good results, but if you want to save tokens, you can find another way.</p>
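<p>For illustration only, a structured plan object of the kind a separate planner agent could maintain, in place of repeatedly rewriting a <code>todo.md</code> file (the field names are assumptions):</p>
<pre><code>package planner

// PlanStep and Plan are illustrative; a planner agent updates Status fields
// instead of spending a full turn rewriting a markdown checklist.
type PlanStep struct {
    ID     int
    Title  string
    Status string // "pending", "in_progress", "done"
}

type Plan struct {
    Goal  string
    Steps []PlanStep
}
</code></pre>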
<h2>Q&A - Multi-Agent Design and Roles</h2>
<p>Q: So you might have a planning agent with its own context window that makes a plan and produces some kind of plan object, maybe a file, or maybe it just calls sub-agents directly.</p>
<p>How do you think about that, and how many different sub-agents do you typically recommend using?</p>
<p>A: I think this also depends on your design, but Manus is actually not a typical multi-agent system.</p>
<p>We've seen a lot of agent designs that divide agents by role.</p>
<p>For example, you have a designer agent, a programming agent, a manager agent. We don't do that, because the reason humans organize that way is that it's how a human company works, and that comes from the limitations of human context.</p>
<p>So Manus is a multi-agent system, but we do not divide by role.</p>
<p>We only have very few agents: a big general executor agent, a planner agent, a knowledge management agent, and maybe a data API registration agent.</p>
<p>We are very cautious about adding more sub-agents, for the reason we mentioned before: communication is very hard.</p>
<p>And we implement most other kinds of sub-agents as agent-as-tools, as mentioned before.</p>
<p>Lance: I see this mistake a lot, or I don't know if it's a mistake, but you see a lot of anthropomorphizing of agents, like "my designer agent", and I think it's a forced analogy to map a human org chart onto your sub-agents.</p>
<p>Here it's a planner and a knowledge manager. What would the task of the knowledge manager be?</p>
<p>A: We mentioned that we have a knowledge system in Manus.</p>
<p>What the knowledge agent does is review the conversation between the user and the agent and figure out what should be saved in long-term memory.</p>
<h2>Q&A - Safety and Guardrailing in Sandboxed Environments</h2>
<p>How about guardrailing? Someone asked a question about safety and guardrailing.</p>
<p>A: If you have a sandbox that's connected to the internet, everything is dangerous, so we have put a lot of effort into guardrailing. At the very least we do not let information get out of the sandbox.</p>
<p>For example, if you get prompt-injected, we have checks on outgoing traffic.</p>
<p>We ensure that no tokens or credentials leave the sandbox.</p>
<p>And if the user wants to send something out of the sandbox, we have redaction in place to ensure that sensitive information doesn't go out.</p>
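<p>A minimal sketch of such an egress check, assuming a simple pattern-based redaction pass over anything leaving the sandbox (the patterns are illustrative, not Manus's actual rules):</p>
<pre><code>package main

import (
    "fmt"
    "regexp"
)

// secretPatterns are illustrative examples of credential-shaped strings to strip
// from outgoing traffic before anything leaves the sandbox.
var secretPatterns = []*regexp.Regexp{
    regexp.MustCompile(`sk-[A-Za-z0-9]{20,}`),               // API-key-like strings
    regexp.MustCompile(`(?i)bearer\s+[A-Za-z0-9._\-]{20,}`), // bearer tokens
}

// redactOutgoing replaces anything that matches a secret pattern.
func redactOutgoing(s string) string {
    for _, p := range secretPatterns {
        s = p.ReplaceAllString(s, "[REDACTED]")
    }
    return s
}

func main() {
    fmt.Println(redactOutgoing("curl -H 'Authorization: Bearer abcdefghijklmnopqrstuvwxyz123456'"))
}
</code></pre>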
<p>But another thing is that we have a browser inside Manus, and the browser is very complicated.</p>
<p>For example, if you log into your websites, you can choose to let Manus persist your login state, and this turns out to be very tricky, because sometimes the content of a web page can itself be malicious, maybe doing prompt injection.</p>
<p>And this, I think, is somewhat out of scope for an application company, so we're working very closely with the computer-use model providers.</p>
<p>For example Anthropic and Google. They're adding a lot of guardrails here.</p>
<p>So right now in Manus, every time you do a sensitive operation, whether inside the browser or in the sandbox, Manus requires a manual confirmation, and you must accept it or otherwise take over and finish it yourself.</p>
<p>I think it's pretty hard for us to design a perfectly clean solution, so it's a progressive approach.</p>
<p>Right now we let the user take over more frequently, but as the guardrails inside the models get better, we can do less of that.</p>
<h2>Q&A - Evaluation Strategies</h2>
<p>How about the topic of evals? This has been discussed quite a bit online, as you've probably seen. The Claude Code team talked a lot about doing fewer formal evals, at least for code, because code evals are more or less saturated, and doing lots of internal dogfooding instead.</p>
<p>How do you think about evals? Are they useful? Which evals are actually useful?</p>
<p>What's your approach?</p>
<p>A: At the launch of Manus we were using public academic benchmarks like GAIA, but after launching to the public we found they were badly misaligned.</p>
<p>Models that get high scores on GAIA, the users don't like.</p>
<p>So right now we have three different kinds of evaluation.</p>
<p>First, and most importantly, for every completed session in Manus we ask the user for feedback, a one-to-five-star rating.</p>
<p>This is the gold standard. We always care about the average user rating. That's number one.</p>
<p>Number two, we still use internal automated tests with verifiable results.</p>
<p>For example, we've created our own datasets with clear answers. We still use a lot of public academic benchmarks as well, but we also created datasets that focus more on execution, because most benchmarks out there are about read-only tasks.</p>
<p>So we designed execution-style, transactional tasks; because we have the sandbox, we can reset the test environment frequently.</p>
<p>Those are the automated parts. And number three, we have a lot of interns. You need real humans to evaluate things like website generation or data visualization, because it's very hard to design a reward model that knows whether an output is visually appealing.</p>
<strong>It's about taste.</strong>
<h2>Q&A - RL with Verifiable Rewards vs. Tool Calling Agents</h2>
<p>Q: I want to ask you about this emerging trend of reinforcement learning with verifiable rewards versus just building tool-calling agents.</p>
<p>Claude Code is extremely good, and they have the advantage that they built the harness, so they can do RL on their harness and get really, really good with the tools they provide in it.</p>
<p>Do you do RL, or how do you think about that?</p>
<p>Because of course, in that case, you would have to be using open models.</p>
<p>I've been playing with this quite a bit lately. How do you think about using tool calling out of the box with model providers versus doing RL yourself inside your environment, with your harness?</p>
<p>A: I've been doing pre-training, post-training, and RL for many years, but I have to say that right now, if you have sufficient resources, you can try.</p>
<p>But as I mentioned earlier, MCP is a big game changer here, because if you want to support MCP, you're not working with a fixed action space.</p>
<p>And if it's not a fixed action space, it's very hard to design a good reward, you cannot generate enough rollouts, and the feedback will be unbalanced.</p>
<p>So if you want to build a model that supports MCP, you are literally building a foundation model by yourself.</p>
<p>And I think everyone in the community, all the model companies, are doing the same thing.</p>
<p>They're doing it for you. So right now I don't think we should spend that much time on RL. But as I mentioned earlier, we are exploring new ways to do what you might call personalization, or some sort of online learning, in a parameter-free way.</p>
<p>For example, collective feedback.</p>
<p>Lance: One little one along those lines: it's presumably the case that, for example, Anthropic has done reinforcement learning with verifiable rewards on some set of tools used by Claude Code.</p>
<p>Have you found that you can shape your harness to use similar tool names to kind of unlock the same capability, if that makes sense?</p>
<p>For example, I believe Claude Code uses Glob, uses Grep, uses some other set of tools for manipulating the file system.</p>
<p>Can you effectively reproduce that same functionality by having the exact same tools, with the same tool names and descriptions, in your harness? How do you think about unlocking that, if you see what I'm saying?</p>
<p>A: I don't know the clear answer here, but we actually try not to use the same names, because if you design your own function you may have different requirements for it, and the parameters, the input arguments, might be different.</p>
<p>So you don't want to confuse the model: if the model was trained on a lot of post-training data that includes some internal tools, you don't want the model to get confused.</p>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Two dark clouds over Agent_ real-time interaction with the environment and learning from experience]]></title>
        <id>https://shemol.tech/two-dark-clouds=over-agent</id>
        <link href="https://shemol.tech/two-dark-clouds=over-agent"/>
        <updated>2025-10-19T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[对 boj老师agent 上两朵乌云的学习]]></summary>
        <content type="html"><![CDATA[<h1>Two dark clouds over Agent: real-time interaction with the environment and learning from experience</h1>
<a href="https://01.me/files/agent-learn-from-experience/dist/1">https://01.me/files/agent-learn-from-experience/dist/1</a>
<p>Co-Founder & Chief Scientist, Pine AI</p>
<p>The challenge of real-time interaction</p>
<li>High latency in voice interaction (tens of seconds)</li>
<li>GUI operation is 3-5 times slower than human actions</li>
<li>The serial bottleneck of the traditional ReAct loop</li>
<p>Technical breakthrough</p>
<li>SEAL architecture (Streaming, Event-driven Agent Loop)</li>
<p>  - Perception layer: streaming processing of speech signals</p>
<p>  - Thinking layer: interactive ReAct with asynchronous observation, thinking, and action</p>
<p>  - Execution layer: feedback-loop VLA/TTS</p>
<p>The challenge of learning from experience</p>
<p>Core challenges</p>
<li>Every task starts from scratch</li>
<li>Unable to accumulate domain knowledge</li>
<li>Lack of proficiency improvement</li>
<p>Three major paradigms of agent learning from experience</p>
<p>1. Post-training: RL parameter update</p>
<p>2. In-context learning: attention soft update</p>
<p>3. Externalized learning:</p>
<p>  - RAG: persistent experience storage</p>
<p>  - Tool generation: agent self-evolution</p>
<p>Scientist Shunyu Yao pointed out the first issue: the lack of interaction with real people during an agent’s task execution, and the second issue: the absence of a mechanism for learning from experience.</p>
<p>(So I went to read that blog)</p>
<h2><strong>The Second Half - Shunyu Yao</strong></h2>
<a href="https://ysymyth.github.io/The-Second-Half/">https://ysymyth.github.io/The-Second-Half/</a>
<p>In the first half, we continuously developed new training methods and models, posting steady gains on benchmarks, then created more challenging benchmarks and scored high on those too, cycling through this process repeatedly. Ultimately, we found an effective method capable of generalization: reinforcement learning.</p>
<p>This recipe has largely been standardized and requires little new thinking: as long as the cycle above keeps being followed, performance keeps improving. Precisely because the recipe now works so reliably, continuing to grind out new benchmarks and beat them creates little additional value, which is why a fundamental rethinking of evaluation is necessary.</p>
<p>The issue is that despite using AI to defeat world champions in chess and Go, surpass most humans on the SAT and bar exams, and achieve gold-medal levels in competitions, the world hasn't changed much, at least from an economic or GDP perspective.</p>
<p>The author calls this the utility problem.</p>
<p>Previous evaluation settings differ from the real-world setup in many ways. Two examples:</p>
<li>Evaluations are assumed to run automatically: an agent receives a task, acts autonomously, and then earns a task reward. In reality, though, an agent has to interact with humans throughout the task; you can't just send an extremely long message to customer support, wait ten minutes, and expect a detailed reply that solves all your problems.</li>
<li>Evaluations are assumed to follow the independent and identically distributed (i.i.d.) principle: if the test set contains 500 tasks, each is run independently, and the overall result is an aggregate of per-task metrics. In reality, tasks tend to be solved sequentially rather than in parallel. A Google engineer gets better at handling google3 issues as they grow familiar with the codebase, while a software-engineering agent, even when addressing many problems within the same codebase, makes no such incremental progress. We clearly need long-term memory mechanisms (<a href="https://yitaoliu17.com/assets/pdf/ICLR_2025_CER.pdf">existing methods</a> already enable this), but academia lacks both suitable benchmarks to validate their necessity and the courage to question a foundational assumption of machine learning: the i.i.d. hypothesis.</li>
<p>In the first half of AI, these assumptions were harmless, because when AI capability was low, raising intelligence generally raised utility as well. Now that the general recipe is guaranteed to work under these assumptions, the way to play the new game of the second half is to:</p>
<li>Develop novel evaluation settings or tasks for real-world utility.</li>
<li>Solve them with the recipe, or augment the recipe with novel components. Repeat the cycle.</li>
<p>While the first half was full of incremental methods and models, the second half filters them out to some extent: unless you can establish new assumptions that break the existing setup, the general recipe will simply crush incremental improvements, and only by breaking those assumptions do you get the chance to do truly game-changing research.</p>
<p>I also came across an expression that struck me as incredibly clever; I absolutely adore the following passage:</p>
<blockquote>Thinking, or reasoning, is a <strong>strange</strong> kind of action - it does not directly affect the external world, yet the space of reasoning is open-ended and combinatorially infinite - you can think about a word, a sentence, a whole passage, or 10000 random English words, but the world around you doesn't immediately change. In the classical RL theory, it is a terrible deal and makes decision-making impossible. Imagine you need to choose one out of two boxes, and there's only one box with $1M and the other one empty. You're expected to earn $500k. Now imagine I add infinite empty boxes. You're expected to earn nothing. But by adding reasoning into the action space of any RL environment, we make use of the language pre-training priors to generalize, and we afford to have flexible test-time compute for different decisions. It is a really <strong>magical</strong> thing and I apologize for not fully making sense of it here, I might need to write another blog post just for it. You're welcome to read <a href="https://arxiv.org/abs/2210.03629">ReAct</a> for the original story of reasoning for agents and read my vibes at the time. For now, my intuitive explanation is: even though you add infinite empty boxes, you have seen them throughout your life in all kinds of games, and choosing these boxes prepares you to better choose the box with money for any given game. My abstract explanation would be: <strong>language generalizes through reasoning in agents</strong>.</blockquote>
<h1>Section 1: Agent interaction with environment in real-time</h1>
<h2>Real-time interaction challenges of voice agents</h2>
<h3>Fundamental contradiction: Serial processing vs. real-time requirements</h3>
<li>Must wait: first listen, then think; only after thinking can one speak.</li>
<li>Blocking wait: every stage becomes a bottleneck</li>
<p>  - user finishes speaking (VAD) → speech recognition (ASR) → complete sentence</p>
<p>  - complete sentence → LLM with thinking → complete output after thinking</p>
<p>  - complete thinking → split into sentences → speech synthesis (TTS) → voice response</p>
<li>Cumulative delay: the total delay far exceeds human tolerance</li>
<h3>The dilemma of fast versus slow responses</h3>
<p>A fast response makes mistakes easily, while a slow response burns the user's patience.</p>
<p>Unable to anticipate and deliberate while listening.</p>
<h3>Technology bottlenecks</h3>
<p>Perception phase</p>
<li>Voice: waiting for the entire sentence to end before converting to text results in high latency; feeding fragmented speech into the speech recognition model leads to low recognition accuracy.</li>
<li>Vision: high prefill latency for 2K-token screenshots</li>
<p>Thinking phase</p>
<li>Complete input is required before thinking can start.</li>
<li>Unable to predict user intent.</li>
<li>Test-time scaling exacerbates the delay.</li>
<p>Execution phase</p>
<li>Can only act after thinking ends.</li>
<li>Every step of a GUI operation requires taking a new screenshot for consideration.</li>
<h1>Architecture innovation: SEAL (Streaming, Event-driven Agent Loop)</h1>
<p>Core idea: abstract all interactions into asynchronous event streams to achieve low-latency, interruptible real-time interaction.</p>
<p>1. Perception layer</p>
<p>Converts continuous real-world signals (speech, GUI video) into discrete event streams.</p>
<p>2. Thinking layer</p>
<p>Asynchronous event processing: think while listening, speak while thinking, and generate interleaved sequences of thought and action.</p>
<p>3. Execution layer</p>
<p>Converts discrete action commands back into continuous real-world signals (TTS voice, mouse movements).</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869824890.png" alt="" />
<h2>Layer 1: Perception Layer</h2>
<p>Input: continuous signals (voice stream, GUI video stream)</p>
<p>Output: speech_start, interrupt, laugh, speech_fragment, ui_change, etc.</p>
<p>Streaming speech perception model replacing VAD + ASR</p>
<p>Streaming speech-aware models based on open-source autoregressive LLMs</p>
<li>Unlike traditional ASR models such as Whisper, which wait for complete utterances, this approach reduces speech recognition latency.</li>
<p>  - Streaming processing of input speech tokens</p>
<p>  - Streaming output of text and acoustic events</p>
<li>Post-trained from an open-source LLM</li>
<p>  - Retaining dialogue context and supporting in-context learning significantly improves recognition accuracy for user personal information and domain-specific terminology.</p>
<p>  - With world knowledge and common sense, the recognition rate for brand names, amounts, etc. improves significantly.</p>
<p>The output is information-rich, encompassing not only text but also acoustic events.</p>
<p>Real-time transcription text segments</p>
<p>Special tokens (acoustic events):</p>
<li><code>&lt;speak_start&gt;</code></li>
<li><code>&lt;speak_end&gt;</code></li>
<li><code>&lt;interrupt&gt;</code></li>
<li><code>&lt;emotion:happy&gt;</code></li>
<li><code>&lt;laugh&gt;</code> <code>&lt;sigh&gt;</code></li>
<li><code>&lt;music&gt;</code></li>
<h2>Layer 2: Thinking Layer</h2>
<p>Based on an event-driven loop, it enables interruptible, asynchronous listening-while-thinking and speaking-while-thinking.</p>
<p>Input</p>
<p>Discrete event stream (from the event queue)</p>
<p>Output</p>
<p>Interleaved thoughts and action commands</p>
<h2>Core innovation: Interactive ReAct</h2>
<p>Traditional ReAct:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869826813.png" alt="" />
<p>Interactive ReAct:</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869827574.png" alt="" />
<h2>Interactive ReAct: Think while Listening</h2>
<p>Traditional ReAct: once interrupted, all previous thoughts are invalidated and must be started over from the beginning.</p>
<p>Interactive ReAct: preserve the interrupted thought process and, after adding the new user input, let the model continue thinking based on the previous context.</p>
<h2>Interactive ReAct: Speak while Thinking</h2>
<p>Use "preludes" to buy time for deeper thinking about events and to reduce first-token latency.</p>
<h2>Layer 3: Execution Layer</h2>
<p>Converts discrete action commands into continuous real-world signals.</p>
<p>Input</p>
<p>speak(…), click(…)</p>
<p>Output</p>
<p>Continuous signals (voice waveform, mouse trajectory)</p>
<h2>The last mile of GUI operation</h2>
<p>The agent struggles to output coordinates. Solution: draw inspiration from VLA models in robotics and post-train the model with RL so that it can directly output actions.</p>
<li>Option 1: the main model directly outputs mouse click coordinates.</li>
<li>Option 2: train a standalone VLA model to mimic human mouse movement patterns, adopting a closed-loop feedback model of "move, fine-tune, click".</li>
<p>More human-like speech synthesis: generate labeled text, then produce speech with TTS.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/Two+dark+clouds+over+Agent_+re_1770869828406.png" alt="" />
<h1>Agent learning from experience</h1>
<p>Paradigm 1: Post-Training</p>
<p>Method: parameter update (post-training)</p>
<li>Update weights through gradient descent</li>
<li>Requires a large amount of annotated data</li>
<li>The model is fixed after training.</li>
<li>The learning process is slow and expensive.</li>
<p>Paradigm 2: In-Context Learning</p>
<p>Method: in-context learning</p>
<li>Implicit learning through the attention mechanism</li>
<li>Using long context as temporary memory</li>
<li>Learning effects are limited to the current conversation and are not permanent.</li>
<p>Paradigm 3: Externalized Learning</p>
<p>Method: externalizing knowledge and processes</p>
<li>RAG: efficient, reliable, hallucination-free knowledge</li>
<li>Tool generation: codify processes to achieve self-evolution.</li>
<li>Transcending the limitations of parametric knowledge</li>
<p>Best practice: Contextual Embeddings + Contextual BM25 + Reranking + Top-20 chunks</p>
<p>Fine-tuning vs. RAG: An Empirical Comparison of Knowledge Injection Methods</p>
<p>Based on the paper: Fine-Tuning or Retrieval? Comparing Knowledge Injection in LLMs</p>
<a href="https://aclanthology.org/2024.emnlp-main.15.pdf">https://aclanthology.org/2024.emnlp-main.15.pdf</a>
<p>Core insight of the paper: RAG is not only more effective but also avoids the issues of knowledge forgetting and hallucinations that may arise from fine-tuning.</p>
<p>Tool Generation - Enabling Agent Self-Evolution</p>
<a href="https://arxiv.org/abs/2505.20286">https://arxiv.org/abs/2505.20286</a>
<p>Minimum Predefined Principle</p>
<li>Minimalist architecture: equipped with only a single core component (a web agent)</li>
<li>Avoid over-engineering: Do not presuppose complex tools and workflows.</li>
<li>Generality first: Reduce domain-specific hardcoding</li>
<p>Maximum Self-Evolution Mechanism</p>
<p>core ability</p>
<p>1. Self-create tools: Generate new tools based on task requirements.</p>
<p>2. Capability Enhancement: Iteratively improve the performance of existing tools</p>
<p>3. Experience Reuse: Solidifying successful patterns into reusable components.</p>
<p>MCP-Zero Active Tool Discovery</p>
<p>Traditional methods dilemma:</p>
<li>Full injection: The complete toolset occupies a large number of tokens → Context explosion.</li>
<li>Static retrieval: Based on initial query selection, unable to predict task evolution. Debugging files requires file system + code analysis + command execution.</li>
<p>MCP-Zero: From Passive to Active</p>
<p>Core Concept: Enabling Agents to Proactively Identify Capability Gaps and Request Tools On-Demand</p>
<p>1. Active Tool Request: Agent generates structured requirements</p>
<p>2. Hierarchical Semantic Routing: First Filter Servers, Then Match Tools</p>
<p>3. Iterative Capability Expansion: Dynamically Discovering and Building Toolchains During Execution</p>
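<p>For illustration, an "active tool request" could be as small as a structured record like the one below (field names are assumptions based on the description above, not the paper's actual format):</p>
<pre><code>package mcpzero

// ToolRequest is what the agent emits when it notices a capability gap;
// a router matches it first to a server, then to a concrete tool.
type ToolRequest struct {
    Server      string // coarse capability domain, e.g. "filesystem"
    Tool        string // desired operation, e.g. "read_file"
    Description string // natural-language statement of the gap, used for matching
}
</code></pre>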
<p>Externalizing learning to transcend the limitations of attention is an inevitable trend.</p>
<p>The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. (Rich Sutton, "The Bitter Lesson")</p>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[2025巴菲特股东大会笔记]]></title>
        <id>https://shemol.tech/2025-buffet</id>
        <link href="https://shemol.tech/2025-buffet"/>
        <updated>2025-05-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[2025巴菲特股东大会笔记]]></summary>
        <content type="html"><![CDATA[<h1>2025巴菲特股东大会笔记</h1>
<strong>我来自新泽西。您今天选中我在这里发问，我觉得是非常好的机会，你常告诉我们投资的一些原则，常提醒我们要有耐心，您可不可以再给我们提示一下。</strong>
<blockquote>有时候机会稍纵即逝，必须当机立断。1966年我接到一个电话——细节不便多说——一位女士表示愿意以600万美元的价格出售她丈夫的公司，其中包括200万美元资产和900多项业务，预计年税前利润可达200万美元。这个价格听起来相当诱人。</blockquote>
<blockquote>我和查理立即讨论了这个机会。虽然查理不认识这位女士，但他知道她的合伙人Ben Rosser。我们猜测卖方可能是位富有的寡妇，或是她的丈夫出于某种原因急于脱手。直到12月31日，我们仍在研究账目，试图理解她的出售动机。</blockquote>
<blockquote>第二天早上，Will Phillips来电提醒我们：东海岸人士对中西部人可能存在偏见。如果这位女士来自爱荷华州，她的行事方式可能与东部人不同。面对年收益率高达33%的交易，确实很难保持耐心。</blockquote>
<blockquote>这件事让我明白：真正的良机来临时，不需要再等待。当有利可图且合理的机遇出现时，就要果断行动。耐心固然重要，但更重要的是识别机会的敏锐和行动的决心。当然，市场机会从不会永远等待任何人。</blockquote>
<blockquote>耐心确实是重要的品质，但更重要的是在机会来临时能够当机立断。当机遇突然出现在你面前时——可能只是一个5秒钟的电话——你必须立即判断是否值得把握。在商业决策上，最忌讳的就是自我怀疑。很多时候，正是因为犹豫不决而错失良机。这就是为什么我认为经商是如此令人着迷的事情，也是我最大的乐趣所在。</blockquote>
<blockquote>虽然我已经90多岁了，财富也远超常人，但我依然每天清晨都满怀期待地来到办公室。这不仅仅是一份工作，而是能够帮助他人、创造价值的快乐源泉。这种热情是可以传承的，我希望我的孩子们也能体会到这种快乐。</blockquote>
<blockquote>就像我和查理60多年来建立的合作伙伴关系一样，我们始终与志同道合的人共事。这种合作模式从未让我们失望，也一直是我们寻找新伙伴的标准。正因如此，今天在座的每一位董事和团队成员才能如此默契配合。当你确信某个机会确实有利时，就不该再犹豫——必须立即行动。</blockquote>
<strong>我来自加州，感谢您帮我们准备这样的大会。你说过除了乔布斯之外，没有人有办法创造苹果这样的公司，但是蒂姆·库克却做得很好。沃伦你是创造伯克希尔的人，你认为格雷格·阿贝尔是这当中意外的人才，但是他看起来是非常正常的人，抱歉，这是非常好的方式来讲一个人是正常的人，你告诉我一下，为什么你认为格雷格·阿贝尔在未来几十年会是非常棒的接棒人？</strong>
<blockquote>你提出了一个非常关键的问题。在我们这个行业，要组建一支优秀的投资团队并非易事。在美国这样广阔的市场中，要培养出适合资本运作的环境需要长期积累，特别是在资本配置方面更是如此。这需要时间沉淀，更需要找到一群志同道合、能够互相信任的伙伴。多年来，我始终以审慎的态度对待每项投资决策，仔细评估其中的风险。</blockquote>
<blockquote>昨天我参观了公司的展览现场，那些充满热情的员工给我留下了深刻印象。他们不图回报，只是单纯热爱自己的工作。这种态度令人敬佩。我认为，选择一份自己热爱的事业至关重要。我职业生涯中遇到过五位老板，每位都让我受益匪浅。但最终我选择了创业，因为做自己喜欢的事才是最好的工作状态。</blockquote>
<blockquote>不是每个人都能像我这样幸运，在七八岁时就找到终生热爱的事业。就像著名指挥家格伦·米勒的故事，他的乐团最初默默无闻，直到1941年才因独特风格一鸣惊人。如果你足够幸运，在年轻时就能找到真正热爱的事业，就不要太在意起薪高低。但要注意选择正确的公司和老板，有些工作确实不值得去做。</blockquote>
<blockquote>我们生活在一个伟大的国家，处在最好的时代。这就是为什么我决定将接力棒交给格雷格·阿贝尔。但要打造像伯克希尔这样的企业，绝非一朝一夕之功。在财务方面，有句话说得好：'致富一次就够了，不必冒不必要的风险。'市场上总有人通过借贷或金融杠杆牟利，指望最后能找到接盘者。但请记住，这种投机行为终将付出代价。</blockquote>
<blockquote>虽然我无法重来人生，但如果可以，我依然会选择做自己热爱的事业。迄今为止，这对我来说都是无比精彩的人生旅程。</blockquote>
<blockquote>关于刚才那位先生的问题，我想说的是：即便暂时没有遇到合适的机会，也不必过分焦虑。人生中总会遇到恰当的时机，也会遇见真正适合的人。就像寻找终身伴侣一样，也许你会一见钟情，但即便错过了一个人，也不代表就再也遇不到合适的对象。有些值得等待的人和事，往往会在最恰当的时机出现。</blockquote>
<strong>我来自马州，感谢您今天有这个时间跟我们讲话，我是一个年轻人，我想要进行投资，想听听您的看法。你在早期的时候你学习到哪些教训？我这样的年轻人希望能够发展自己的投资哲理，你有什么样的建议呢？</strong>
<blockquote>这是个非常好的问题。真希望我年轻时有人能给我这样的建议。这其实关乎你选择与什么样的人为伍——不要期待自己每次都能做出完美决定。如果你的生命有特定的发展方向，就要寻找值得尊敬的人成为伙伴。就像我过去几年合作的几位朋友，虽然他们的规模远不及伯克希尔·哈撒韦，但选择与志同道合者同行才是明智之举。可惜这些道理往往要到人生后半程才能真正领悟。</blockquote>
<blockquote>与其盲目追随富豪模仿他们的成功模式，不如寻找你真心欣赏的智者。我自己就是这样做的：向优秀的人学习，在实践中成长。如果你已经找到有意义的事业，而且没有迫切的财务压力，不妨像查理·芒格那样，花时间与智者相处。我说的这些人都在超越本职地创造价值，找到这样的伙伴并分享成功是莫大的幸运。即便暂时找不到，也不要放弃追求，持续努力终会遇到志同道合之人。</blockquote>
<blockquote>记得当年我去GEICO求职时，面对紧闭的大门，完全不知道门后是谁。但十分钟后我就遇到了改变我一生的人。永远不要忘记那些帮助过你的人，要用实际行动回报他们。当然，有时事情也会不尽如人意。如果你有幸身处良好环境，一定要懂得珍惜。比如生在美国就已经比世界上大多数人幸运——全球80亿人口中只有3亿多美国人，这本身就是一种优势。但要记住：任何时候都不要违背自己的原则去迎合他人。</blockquote>
<blockquote>投资对我而言充满乐趣。很多人赚到钱就离开这个行业，但真正应该寻找的是能让你终身热爱的事业。像汤姆·墨菲这样独具慧眼的人实在难得——他直到98岁仍保持着发现他人潜力的敏锐。要成为更好的自己，就要寻找这样的导师。伯克希尔的成功也得益于此：从1963年就开始合作的Sandy Gottesman，共事30多年的Walter Scott，到如今已合作25年的格雷格·阿贝尔...与这样的人同行永远不会错。</blockquote>
<blockquote>有趣的是，这样做似乎还能延年益寿。我和这些伙伴们都出奇地长寿——也许是因为常喝可乐（笑），但更可能是因为我们都做着热爱的事。幸福的人往往更长寿，这是我的切身感受。</blockquote>
<strong>巴菲特先生，阿吉特·贾恩、格雷格·阿贝尔，我叫作皮特·陈，我来自上海，这是第一次参加伯克希尔的股东大会。我今天的问题是，就是关于生命上升的问题，在您生命是不是也有最低点？在最低点的时候如何突破低点而冲破难关的？</strong>
<blockquote>每个人的人生都会有高潮和低谷，这很正常。感谢你的问题，虽然对我来说这些可能微不足道。就拿查理来说，他也经历过许多艰难时刻，但这就是人生的一部分——没有谁能够永远一帆风顺。</blockquote>
<blockquote>我并不是要给你最好的建议，但我想说的是：低潮是每个人一生中都会反复遇到的。也许对你来说，某些低谷会显得特别沉重，但请相信，遭遇挫折并不意味着世界末日。我可以向你保证，即使经历低谷，你也不会因此倒下。有些人面对困境时，可能会被轻视或嘲笑，但真正伟大的人即使暂时运气不佳，也会坚信转机即将到来。所以，不要认为运气只是运气。</blockquote>
<blockquote>如果你正在经历低谷，比如健康问题，那确实难以言喻。但请记住，我们生活在一个美好的时代。想想看，如果你出生在一百年前、五百年前，甚至更早的动荡年代，命运可能会截然不同。相比之下，我们这一代人已经非常幸运了。经过二十几代人的努力，人类文明已经发展到了前所未有的高度。二十年前，许多事情可能超出个人的掌控范围，但今天，我们可以更有智慧地应对挑战。</blockquote>
<blockquote>我会建议你把注意力放在生命中的美好事物上。坏事总会发生，这是不可避免的，但即使在困难时期，美好的人生依然可以把握。这就是我的观点。</blockquote>
<blockquote>就我个人而言，94年来，我从未遇到过真正糟糕的事情，我的许多朋友也是如此。我想喝可乐就喝，想做什么就做，至少到目前为止，一切都还不错。</blockquote>
<blockquote>再举个例子：职业橄榄球运动员的巅峰期可能只有到30岁或40岁，但他们早已习惯了这种生命周期。同样，如果你选择某个行业，从一开始就要明白它的规律。棒球运动员也是如此，每个位置都有其特定的挑战。</blockquote>
<blockquote>查理和我经常讨论一个问题：人的身体并不需要过度的运动。我们很注重保持健康，但不会过度消耗自己。我用运动员做比喻，是想让你明白——关注积极的一面更重要。如果你想延长生命，并且足够幸运（比如像你这样，远道而来依然精力充沛，还能与这么多聪明、有趣的人交流学习），那么你已经比过去几百年、几千年的大多数人都幸运了。这就是我想分享给你的。</blockquote>
<strong>尊敬的巴菲特先生，我是来自波兰、现居芝加哥的阿里萨。您74年前那个寒冷一月天的故事深深激励了我——1951年的某个周六，您坐了8小时火车从纽约到华盛顿，只为学习保险知识，即便到达后发现科特办公室大门紧闭。这份执着一直指引着我。2011年，15岁的我怀着同样的决心给您写信请求见面。您回信说您的时间所剩不多，大约只有3000天。如今5000多天过去了，从1951年至今，您的热情始终鼓舞着我。今天我再次恳请您：能否给我四分之一的时间？哪怕只是在您办公室停留一小时？我知道您行程繁忙，但作为一个在波兰历经磨难的幸存者，我交友谨慎却真诚。请不要拒绝我——此刻有四万人站在您身后支持这个请求，我们都是光明正大地表达敬意。最后，请允许我再次恭敬地请求：您是否愿意与我分享生命中的任意一小时？感谢您宝贵的时间。</strong>
<blockquote>这真是太棒了！请稍等一下——其实不必详细介绍我的生平，我很清楚自己的故事。感谢你在这个四万人的场合提出如此有趣的问题。让我分享一个年轻时的经验：</blockquote>
<blockquote>在我创业初期，常常独自驾车跨越各州拜访不同公司。那时我还很年轻，企业也没有专门的投资人关系部门，通常都是CEO直接接待。我总是担心会被拒之门外，后来摸索出一个方法：当请求见面时，我会明确表示'只需要10分钟'——除非对方主动要求延长。这个时间限制的主动权要掌握在自己手里。</blockquote>
<blockquote>这让我想起70年前煤矿业的经典提问：'如果被困荒岛十年，你会选择持有哪家竞争者的股票？'管理者们总热衷于谈论对手，就像孩子们喜欢比较玩具。但我学会了引导对话重点——确保他们不只谈论竞争者，更要阐述自己的核心竞争力。</blockquote>
<blockquote>如今企业结构越来越复杂，各部门就像独立拼图。投资人关系团队总会强调购买股票的好处，这个职能正变得越来越庞大。但关键在于用你自己的方式理解企业。伯克希尔就有独特的管理哲学，虽然我们提供充足资料供研究，但实在无法逐一面试四万多人的请求。</blockquote>
<blockquote>我由衷欣赏你的执着，但不得不坦白相告：这就是我们能提供的全部了。你的努力令人钦佩，但规则必须公平适用于所有人。</blockquote>
<p>推荐观看纪录片 - 成为凯瑟琳格雷厄姆</p>
<strong>在2017年伯克希尔年度股东大会上，我们讨论过大型科技公司的投资价值。如今这些企业——微软、苹果、亚马逊等——已经发展到不需要外部融资的阶段，它们拥有充沛的自有资金，并将大量资源投入人工智能领域的发展。我的问题是：与过去相比，您对这些科技巨头在资产负债表结构和资产配置策略方面的看法是否发生了变化？特别是考虑到它们当前雄厚的现金储备和向AI领域的大规模投资转向。</strong>
<blockquote>确实如此，这些企业之所以能获得丰厚利润，正是因为它们投入了大量资本。任何生意都需要资本投入，这是毋庸置疑的。以可口可乐为例，其装瓶业务需要大量前期投资购置设备，但进入运营阶段后，所需的追加资本就相对较少，却能产生可观的回报率。而销售环节的资本需求则更为有限，这种商业模式非常出色且经久不衰。</blockquote>
<blockquote>从资本运作的角度来看，保险业是个特殊案例。财产险和意外险业务需要充足的资金作为担保，但可以利用保费进行投资运作。这类资本密集型业务如果管理得当，会带来极佳的回报。苹果公司就是另一个典型案例——它几乎不需要额外融资，还能持续回购股票。虽然股价会有波动，但其商业模式非常稳健。</blockquote>
<blockquote>在投资领域，很多人通过资本管理获得了巨额财富。他们成功的秘诀在于善用他人资金，并收取管理费。即便业绩不佳，这些管理者仍能获得可观收入；而表现优异者自然会吸引更多资金。这就是资本市场的运作机制，我们不必苛责。</blockquote>
<blockquote>查理和我经过多年思考，最终选择采用这种模式：用投资者的资金来创造收益，同时让他们分担风险。这确实是最理想的商业模式之一。当然，这种模式也存在被滥用的可能，我们在美国和加拿大都见过类似案例。</blockquote>
<strong>我13岁，来自于佛罗里达，我的哥哥15岁，我今天也跟我的父亲一起来，所以谢谢你们今天主持了股东大会，这是我第一次参加您的股东大会。我的问题是，在上高中的课程里面，怎样才能够影响到以后做伟大投资的课程？你可不可以扩展地说一下？</strong>
<blockquote>人生中遇到的老师往往能给你带来最深刻的影响。我很幸运，不仅在学校遇到了优秀的老师，还从雇主和前辈那里学到了很多。我的父亲就是我的第一位投资导师——因为他在投资行业工作，每个周六我都能观察他是如何做生意的。我还阅读了大量投资类书籍，这些是其他孩子很少接触的知识。</blockquote>
<blockquote>记得在奥马哈公共图书馆，我偶然发现了一本19世纪的投资著作，后来在纽约又找到了更多珍贵的书籍。虽然我热爱阅读，但远不及查理·芒格那样博览群书。有人曾问我最想和谁共进午餐，我的答案永远是查理。他就像一座行走的图书馆，总能从书中发现真知灼见。保持好奇心，找到志同道合的老师至关重要。</blockquote>
<blockquote>我先后在三所学校就读，最后进入华盛顿大学。每所学校都能遇到两三位让我受益匪浅的老师。他们不仅传授知识，更给予我特别的关注和指导。比如本杰明·格雷厄姆教授，他就像父亲一样悉心教导我。还有《伟大的大桥飞人》这本书，让我获得了重要的人生启示。</blockquote>
<blockquote>父亲常说每个人都是独特的。也许你现在觉得迷茫，但终会找到适合自己的道路。在学校里，你会遇到那些特别投缘的老师——无论是交谈方式还是教学方法都让你如沐春风。我在哥伦比亚大学时，格雷厄姆教授就给予了我父亲般的关怀。</blockquote>
<blockquote>回顾过去，至少有十位导师对我的人生产生了深远影响。他们都有一个共同点：愿意花额外时间帮助年轻人。我认为优质的教育体验更多来自这种个人化的师生关系，而非学校本身。这些感悟已经超出了我原本想要表达的范围。</blockquote>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[张维迎：如果天黑就出发，越走天越亮，谁都会有信心（转载）]]></title>
        <id>https://shemol.tech/zwy</id>
        <link href="https://shemol.tech/zwy"/>
        <updated>2025-04-13T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[如果天黑就出发，越走天越亮，谁都会有信心。]]></summary>
        <content type="html"><![CDATA[<h1>张维迎：如果天黑就出发，越走天越亮，谁都会有信心（转载）</h1>
<p>翻自己的微信收藏的时候看到这篇文章，点开发现已经被公众号删除了，因此搜到原文转载到自己的博客上，仅供自己学习使用。</p>
<p>原发于公众号：WSJ中文</p>
<p>过去很长的一段时间里，张维迎觉得有点孤独了。时代的思潮剧烈变化，张维迎坚持了几十年的观念，鲜少得到回应，他没动摇过，只是觉得遗憾。但在过去几年，张维迎细微地发现，年轻人所持的立场开始向他靠近，“我感到很开心” 。</p>
<p>张维迎从北大光华管理学院院长的位置退下来，已经有十三年了，身边的争议少了很多。这反而让他有更多的时间来整理自己的想法，他也对自己此前的观点做了一些修订和进一步的推进。</p>
<p>“企业家精神”和“市场经济”是理解张维迎观点的关键词，在现在的他看来，市场经济不太能让人大富大贵，只是能让普通人有活得不错的可能性，但这也弥足珍贵。</p>
<p>张维迎越来越觉得市场经济真正的意义，就是让最具创造力的、最雄心勃勃的人，“只能给我们人类干好事儿，不能干坏事”。张维迎站在了经济学的立学之本——理性人假说的对立面，“我对人性是失望的，”市场经济在张维迎看来提供了一种约束人性的机制，“我们管不了自己的，那就交给市场经济来管。”</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869795643.jpg" alt="" />
<p>在市场经济的逻辑中，理应是让合适的人在最合适的位置上，而企业家在这样的过程中扮演着最重要的角色。“我们目前取得了不错的成绩，有目共睹都来自改革开放，而企业家在其中扮演了重要的角色，”张维迎觉得从新数字化的行业、互联网、电子商务到制造业，私营企业功不可没，“我们能够出口那么多的低成本产品到国际市场，那其实全是企业家努力的结果。”</p>
<p>张维迎觉得市场经济像是空气，正常情况下没人察觉，“它老在那儿”，所有人都习以为常，觉得没什么了不起的。但一旦没了，大家才觉得它重要，甚至是非它不可，必须要有它才能让生命体存活下去。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869796404.jpg" alt="" />
<p>张维迎出生在西北的山沟沟里——陕西吴堡县的辛庄村。他从未觉得这让他有什么尴尬，更没觉得自卑，他甚至觉得这是种财富。他是真正的农民的儿子， 举起例子来不是《平凡的世界》，就是家庭联产承包责任制。“《平凡的世界》就是我们那边的事情，故事的发生地离我们老家很近。”</p>
<p>张维迎举这个例子是因为他说起其中的一个情节，双水村的书记田福堂有个工作——每天早上打铃喊村里的人去上工。有一天，他和往常一样去打铃，但发现没什么动静，只有他的铃声在田野间荡来荡去。结果再一看，原来大家早就出去上工了。田福堂有点不明白，怎么现在也不用人来叫了，原本这可不是一件容易的事。故事发生在家庭联产承包责任制实行后，因为农民可以从劳动中获得自己的一份收入，不再是吃大锅饭，多劳多得，村里人的积极性明显提高了。</p>
<p>张维迎用这个例子来说明，体制、机制的创新在经济的运行中是多么重要。“如果没有家庭联产承包责任制，单纯靠田福堂摇铃铛，是没多少人有真正的积极性的。”张维迎觉得这就是经济运行的道理。在过去很多人的理解中，需求是可以用货币创造出来的，信心是可以塑造的，经济是可以刺激的。而在张维迎看来，经济发展本来就是一个自然而然的过程，“如果天黑就出发，越走天越亮，谁都会有信心。”</p>
<p>张维迎在经济学家中显得与众不同，他热衷于对公众发言，也没耽搁学术研究，两件事对他来说同样重要，他不想放弃其中的任何一个。张维迎最近出了两本新书——《重新理解企业家精神》和《回望》 。在前者中，他把这几年关于企业家精神的最新思考汇集成册，而在后者中，他好像换了个人，用流畅的文笔写自己成长历程中重要的人，从父母到老师，再到他的发小，情感从笔尖顺畅地流露出来。</p>
<p>和张维迎的文字一样，跟他聊起来也不会觉得他是中国著名的经济学家，没什么架子，总是脸带笑容，说话也很轻。视频的另一边，张维迎带着笑准点出现，穿着个羽绒马甲。即便早已经在京安家工作多年，但张维迎未改乡音，一听就能分辨出他是西北人。</p>
<p>张维迎上一次意外地站在公众面前是因为一首《信天游》。改变了他命运的恩师何炼成仙逝，疫情下城际流动成了难题，张维迎没有办法送他最后一程，他写了一篇文章叫《何老师，再听我一曲信天游》，里面附上了一首歌词。</p>
<p>歌词里说：“第一次见面你轻轻摸我的头，最后一次见面你微笑不开口。你曾为我欣喜也曾为我愁，你还曾夸过我唱的信天游。”</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869797232.jpg" alt="" />
<p>1977 年高考恢复前，高中毕业后的张维迎回到村里，当了团支部书记和民兵连副指导员，甚至不知道有文科和理科一说。张维迎最终被录取到了西北大学新成立的政治经济学专业，何炼成是这个专业的负责人，这一年，这个专业刚刚成立，招生 50 人，张维迎这才得以用新增的扩招名额迈过了大学的门槛，进而改变了自己的命运。离家上大学的那天，全村人给张维迎送行，家人还请大家吃了米糕烩菜。</p>
<p>“如果没有何老师力主扩招，我可能大学也没得上了。”张维迎时刻记得何炼成对他的帮助，即便这种扩招不是直接针对他个人的。当时西北大学在何炼成的主导下全新创立了一个经济学专业，这意味着有 50 名考生能因此获得改变命运的机会。</p>
<p>即便是后来成了北京大学的老师，张维迎也一直和何炼成保持着联系，一有机会就回去看看他。“我的经济学基础是何老师给我打下的。”从 1951 年毕业后就到西北大学的何炼成，因为诸多变故，没能正经带过一个学生，直到高考恢复后，张维迎和他的同学算是何炼成带的第一波学生，这批学生在何炼成眼里就跟自己的孩子无异。</p>
<p>张维迎任北大光华管理学院院长后，试行了一系列改革，遇到不少阻力，何炼成听说后给当时的北大校长写信，其实他也不认识当时的北大校长，只不过因为是湖南老乡，何炼成想试一试帮自己的学生一次。</p>
<p>何老师仙逝，张维迎写下了一段歌词，找来了自己的老乡、已经小有名气的信天游歌手丁文军，请他谱曲和演唱。唱完了，丁文军给张维迎发过来，说要是满意就找个正经录音棚给录出来，又找了几张图片，做了个视频，发在了师弟的公众号上。没想到就这么火了，大家以为是张维迎自己唱的，他连忙否认， 说视频的结尾处都写了，演唱者不是他，只是大家不注意看。</p>
<p>张维迎说，《回望》不算是创作，写的都是对他有恩情或者记忆深刻的人，最多算是一种流露。“不是说我要写一个东西，而是有一些东西必须写出来，这些情感在我的脑子里已经装不下了。”</p>
<p>张维迎很执拗，甚至有点特立独行，和大多数的学者不同，他不喜欢拉帮结派，没有依靠谁，说的话都是自己认同的。甚至，即便何炼成对他影响很大，但依旧没有和他成立一个学派，他不喜欢这样做。张维迎现在也不太在意外界的看法了，唯独在意的就是自己对自己的所作所为是不是满意。</p>
<p>张维迎已经过了 60 岁生日，到了 50 岁，他突然觉得自己“知天命”了，想清楚了自己，几十年在做什么，未来应该做什么，算是给了自己一个清楚的交待。在张维迎看来，他这些年在做的事情就是试图改变公众的观念，事实上，他已经做了大量的工作，“只是之前没有自觉的意识”。现在，在他看来，自己至少要做到不要装，对于早已经功成身就的他来说，已经没有什么必须要摆出来的姿态了。</p>
<p>这么看来，张维迎越来越自由了。</p>
<p>以下是我们与张维迎的对话：</p>
<strong>《WSJ.》：</strong>“理性人”是经典经济学理论的前提，你怎么看所谓的“理性人假说”？
<strong>张维迎 ：</strong>理性有好多种选择，不是只有一种选择。另外就是看得多远，小偷也是理性的，做企业家也是理性的，这完全是不一样的。我从来不批评个人，做什么都是自己的选择，我所关心的是每个人为什么这么做，背后是什么原因。
<p>本来像北大清华这些优秀的毕业生，大家都往商界走，这就是国家的幸运。如果他们都直接削尖了脑袋往体制内钻，这就是国家的不幸。</p>
<p>经济生机勃勃的时候，越来越多的人想创造、创业的时候，机会就很多。接下来的几年，我也为毕业生发愁。我记得在做北大光华管理学院院长的时候，我就很关注学生的教育、去向和机会，如果一个人拿到几个offer，几个工作，然后挑，我就很开心。</p>
<p>现在我听说，两个人都拿不到一个offer，我很担心这样的情况。只有让有企业家精神的人自由地创业的时候，才能有更多的就业机会。市场让人们有更多的选择，这样心情就舒畅，因素太多了以后闹得人心情不舒畅，人心情不舒畅，创造力就很难发挥出来。创造力在心情舒畅的时候，才是最高的。</p>
<strong>《WSJ.》：</strong>中国是不是已经进入到分蛋糕的阶段了？我们的蛋糕已经足够大了吗？
<strong>张维迎 ：</strong>从经济学来看，没有哪个国家是真正到了一个分蛋糕的阶段。在我来看，不存在所谓的分蛋糕阶段。如果一个社会总是在让蛋糕变大，做蛋糕本身就是个分蛋糕的过程。蛋糕本身分得不合理的话，蛋糕就不可能变大。蛋糕之所以变大，是因为分得相对比较合理。
<p>可以说，我们都在改革开放的过程中受益了，我们真正要解决的就是不合理的、不公平的因素，怎么取消这些因素。比如说，在生意当中，需要我们真正靠自己的能力、努力，而不是靠关系搞到合同。只要靠关系搞合同，蛋糕分配就不合理。所以我们真正要解决的是这些问题，不是看到谁手里的蛋糕大，就给他拿走，这样下去咱们以后的蛋糕就没了。你可以拿现在的，拿不了未来的。</p>
<p>另外有一点，我们需要注意到，财富是会变化的，有的时候我们在大城市看到一些很高的楼，觉得这是一个很大的资产。也许三年以后，这些楼一分钱都卖不出去。你去底特律看一下，大量的房产没人要，白给都没人要。财富不是物质的，看的不是面积多大、重量多少，财富是市场能利用这个资产创造多少价值。一旦这个资产没有创造价值，它就没有意义。</p>
<p>比如说现在有一架波音 747 飞机白送你，但不让你起飞，也不让你在上面开餐馆，对你来说，这个资产是正的还是负的？答案很明显，它是负的，不打理它很快就破旧了，前期的维修费也要投入进去很多。所以说只有在运动的过程中，财富能够用来创造价值时才算是财富。如果不能创造价值，财富就不是财富。</p>
<p>很多人看到大楼，就觉得为什么不给我分点？就算是可以给你分点，分给你后你就知道这不是财富，因为它在你手中没有什么价值。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869798040.jpg" alt="" />
<strong>《WSJ.》：</strong>对疫情后的中国经济有什么样的预期？是否已经注定走入一个下行阶段？
<strong>张维迎 ：</strong>经过几十年的高增长，速度一定会降下来的。按照我们公布的数据，就目前的体量来看，能稳定地维持在 3% 的话，已经很了不起了。之前的发展模式，是基于经过市场检验过的技术，不用出研发费，投产就可以卖钱，那当然发展快了。越靠近前沿创新，速度自然就慢下来了，这没有什么丢人的。不是因为做得不好才慢下来，做得好也会慢下来。
<p>但问题是，下来的这个速度能不能维持，这才是挑战。我觉得 3% 能维持，当然这有很多挑战，但如果我们不努力，完全是负的也是可能的。这样的例子很多，之前的阿根廷是发达国家、富有国家，巴西、委内瑞拉也是。</p>
<strong>《WSJ.》：</strong>我们不断地在提刺激经济，这似乎成了应对经济停滞的金玉良药，这种思路存在怎样的问题？
<strong>张维迎 ：</strong>经济怎么刺激？经济要靠内在的冲动去发展，货币政策可以降低利率，可以给大家补贴。但这些并不解决根本问题，根本问题还是冲动。经济发展要靠企业家，不能靠发货币。
<p>在我们讨论经济问题的时候，有的理论会限制我们的思维，这种理论的形式一般看起来都很完美，其实给我们带来的伤害就更大，所以我感觉特别遗憾。有好多人认为经济是可以掌控的，这里试一下，那里试一下。好像发展经济，就像现在操作一个键盘，但实际上经济是一个内在自发的人的冲动。</p>
<strong>《WSJ.》：</strong>你在如何理解市场经济，它对一个良性社会而言应该扮演怎样的角色？在传统的理解中，它有资源分配的重要作用，这种观念是否需要更新？
<p>张维迎：我对人性本身没有那么乐观，所以我才特别希望有一种体制能够弥补人性不好的一面，可以迫使人必须纠错。我现在对市场经济的看法，可能跟很多人不一样，跟过去的我也不一样。过去我们提到市场经济，总是在说资源配置，但这是错的。我觉得市场经济真正的意义，就是让最具创造力的、最雄心勃勃的人，只能给我们人类干好事，不能干坏事。</p>
<p>举一个例子，在市场经济中，马斯克只能给我们干好事，他干不了坏事，为什么干不了?如果一旦他干坏事，那客户就不接受，投资人不接受，他就完蛋了。他说要送人到火星上去，假如上去后人都死了，下面就没人报名了。我们知道我们约束不了自己， 只能依靠体制，这个体制就是市场经济。</p>
<strong>《WSJ.》：</strong>你似乎没有在传统经济学者的轨道上运行，你愿意用更多的时间和精力面对公众，为什么会做出这样的选择？
<strong>张维迎 ：</strong>任何一个学科，发达以后都会越来越分化，越来越专业化、技术化。每一个学科都有好多人在做不同的方面，很多可能不是大家感兴趣的。我觉得每个人的偏好、性格不一样，接受的教育不一样，这都是正常的。
<p>但只要是真心地、认真地做，都值得我们欣赏。我的好多同事做的研究是非常量化的，我觉得也挺好的。我们也不应该要求每个学者所做的事情，都一定要对社会立刻产生什么样的影响。</p>
<p>但我们仍然要是真诚的，说的是自己相信的，说的是自己认为的，而不是说的是别人喜欢听的，讨好任何人都是不负责任的。我认为，真正负责的人就是，我认为是这样，我就这样说，我认为不是这样，我就不这样说。</p>
<p>我们知识人有一种倾向，因为觉得自己正确，别人不听自己的，就喜欢以权力运作的方式迫使人服从。我不赞成一种观点的时候，可以用另一种观点来表达， 但是不能用权力去推行。一旦用了强力，这就和自由主义经济学的初衷背离了。我们只能用说理的方式，别人听不进去没有办法，但是不能诉诸强力。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869798799.jpg" alt="" />
<strong>《WSJ.》</strong>：目前来看，似乎对于资本的批判有所退潮。在全球经济下行的压力下，996 真的成了一些人口中的“福报”，你怎么看待这样的转向？
<strong>张维迎</strong>：我不断地跟大家说，老板比员工还要辛苦，这是一个基本现象，他的工作时间比员工都要长。
<p>之前我在一个采访里注意到，有一个私人老板，做了一个工厂，后来把这个工厂卖给了另外一个老板，但是新的老板还让他继续管这个工厂。记者就问他：你卖了自己创办的工厂以后最大的感受是什么?他说：“我原来一到月底就发愁，要找钱给人家发工资。现在呢，我越到月底就越开心，我去领钱。”</p>
<p>雇主和雇员在一个企业中承担的责任是完全不对称的，但是我们很多人认识不到这一点，之前也有不少人觉得老板是在剥削我们。这几年似乎舆论有了一些变化，不少企业都倒闭了，员工没工作了。所以现在大家都希望老板别辞职，老板一辞职，员工就没饭吃了。</p>
<strong>《WSJ.》：</strong>有一种说法认为，信心比黄金更重要。目前在全球范围内，似乎经济都走到了一个下行区间，你觉得信心或者预期，在当下的经济发展中意味着什么？我们怎么才能恢复信心？
<strong>张维迎：</strong>人类有时候不会特别担心眼前的障碍，更担心长远的期待是什么样的。我举个例子，一个人要走 100 里路，下午 5 点出发，他会越走越害怕。为什么呢？因为越走天越黑。
<p>但如果是早晨 5 点出发，虽然天还是黑的，但是他不会害怕，为什么?因为越走天越亮。下午 5 点开始走，越走越暗，大家一定是越来越没信心。大家看的是长远的路，而不是眼前有没有障碍。</p>
<p>回到微观层面，要让大家有信心，最重要的是什么?我觉得是有自主性，我的命运我自己掌握。我可能选择了一个方向，我有选择的权利，选择的后果全都是由我的行为决定的，那我就有信心。</p>
<p>比如说参加高考，我们可能没考到一个好学校，但是我们不怪谁，这就类似于我的命运我掌握。假如考完以后，能不能上大学、要去哪个大学，不是由分数决定的，而是由另外一个人根据偏好或者完全通过抓阄决定的，那我们就不会有这个信心了，也就不会去努力了。</p>
<p>真正的信心来自一种感觉，那就是我的命运我自己能不能掌握。要真正让大家有信心，从概率上讲至少要让大家觉得我可以改变自己。如果在概率上讲我都没办法改变自己，那我努力有什么用？怎么能有信心呢？</p>
<p>回到企业家，也是这样。企业家都是有冒险精神的，没有一个企业家认为做生意一定能赚钱，他只是对能赚钱充满信心。但是，如果能不能赚钱或者能不能成功，在很大程度与自己的努力有关，他就会拼命去努力。</p>
<p>我能掌握自己的命运，我就会有信心。不是说不会失败，而是说即使失败，我也失败得心服口服。失败以后还会再来，总结经验继续来。如果最后的成功与失败，和我的努力没多大关系，是被别人操纵的，失败了就不会再来。</p>
<strong>《WSJ.》：</strong>身在北大，曾经任光华管理学院院长，这是一个精英的摇篮。但是从你身上，我看到的却是浓重的“底层关怀”，这是很多学者不具备的，你是如何同时实现两者的？
<strong>张维迎：</strong>人可能都是不一样的，大家有不同的性格和选择，但是我也没有故意要有你说的“底层关怀”。我可能天性就是比较本分吧，我是什么样就是什么样。因为一个人在生长过程中的每一个时段遇到的人物其实对他都是有影响的，但是影响最大的是父母。
<p>我没有因为我来自农村而自卑，其实我更多的是感动——经历了这么多的事情。我写下的东西都是我的真情实感，你的历史和出身本身就是你的一部分，要珍惜。</p>
<p>我觉得人还是要本真一点，不要装，你是什么就是什么，你装别人也能看出来，不要以为别人都是傻子。你说的人文关怀之类，我没有故意那么想，我本来就是这样。</p>
<strong>《WSJ.》：</strong>目前很多年轻人面临很大的生活压力，无论是就业还是一线城市的生活压力，阶层跃升也成为一件很困难的事情。就北大这样的学校来说，似乎成为贫困人家的孩子无法企及的地方，“寒门难出贵子”似乎已经成了一个我们不愿意看到但已经存在的现实。对此，你有怎样的观察？
<strong>张维迎 ：</strong>首先，我们要认识到，改革开放这几十年，我们的阶层跨越还是非常大的。
<p>另一方面，确实感觉到近年来社会出现了固化问题，如果这个问题真出现的话，那是需要我们关注的。但到目前为止，我个人还不那么悲观。</p>
<p>你问到这里，我想起 2021 年上通选课的时候，班上有三百来人，其中也有清华、人大的学生，我做过一个调查：90% 多的学生来自城市，不到 10% 来自农村。</p>
<p>听到这里，可能很多人觉得这个情况很严重，但当问他们父母的出身时，我发现 80% 以上学生的父母是从农村来的。所以当时我写了一篇文章叫《北大学生哪里来?农门二级跳》，第一级跳是从农村流动到城市，第二级跳是下一代进入北大。过去几十年，中国的流动还是很大的 ，但农村人要直接考进北大当然还是很难，因为农村的教育水平还是比不上城市。但一旦进城后，出身农村的家长对下一代的学习也抓得很紧，所以他们的下一代就有较高的可能性进入北大清华等一流大学的机会。我的调查是有一定代表性的，但是我们仍然不能忽视这个问题。</p>
<p>高考的问题很多，但是高考目前在我们中国仍然是最为公平的事。我没机会上北大，高考时我就知道我肯定上不了北大，但是最后我来北大当老师了，这也挺好。</p>
<p>回到企业家，我们了解的那些有名的企业家或者富豪榜上的人很多都是出身贫寒的。我还专门去查过，马化腾、马云都是非常一般的出身，很多富豪本身就是农民，没机会上大学。</p>
<p>这就是我自己为什么拥护市场经济。真正的市场经济能带来高度的垂直流动，靠人的创造力和企业家精神到市场当中去拼搏。我特别喜欢熊彼特说的一段话：市场经济下的富人俱乐部就像一个高级酒店，总是住满了客人，但住客的名字是不断变化的，有人退出，有人进来，这种流动就是检验一个社会是否健康的很重要的指标。</p>
<p>我目前观察的情况还是不那么悲观，为什么我特别珍惜市场化的改革？因为只有在市场经济下，普通人才有希望出人头地，如果不在市场经济下，普通人是没有希望的。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E5%BC%A0%E7%BB%B4%E8%BF%8E%EF%BC%9A%E5%A6%82%E6%9E%9C%E5%A4%A9%E9%BB%91%E5%B0%B1%E5%87%BA%E5%8F%91%EF%BC%8C%E8%B6%8A%E8%B5%B0%E5%A4%A9%E8%B6%8A%E4%BA%AE%EF%BC%8C%E8%B0%81%E9%83%BD%E4%BC%9A%E6%9C%89%E4%BF%A1%E5%BF%83%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869799547.jpg" alt="" />
<strong>《WSJ.》：</strong>企业家精神在一定意义上是舶来品吗？对于中国来说，有自发性的企业家精神基因吗？
<strong>张维迎 ：</strong>我们中有一种人总是不安分的， 他们想做事、想做别人不愿意做或者做不了的事，他们有一种冲动，敢冒险，也愿意面对失败。这种人从古到今就有，我们人类最早是从非洲出来的，但是哪些人会离开非洲?就是有企业家精神的那些人。
<p>但我们一般讲的企业家精神，更多地还是商业方面的。因为对人来说，商业活动有挑战、对人的素质要求也高。但是在中国的传统中，这一类人慢慢都被科举制度“修理”了。科举制度把所有的诱惑都放在了官场上，古代有企业家精神的人都去官场了。所以我们古代的政府就聚集了一批很优秀的人，他们都是有能力的人。</p>
<p>但对于社会来说，这就是一个损失，因为政府是分配财富的机制，不是创造财富的地方。真正具有企业家精神的人，去做企业才能发挥真正的价值，这就是古代中国跟近现代西方不一样的地方。</p>
<p>举一个英国的例子，一些很有才能的人，他们不认可英国国教，包括一些清教徒。但是他们仍然有一种商业冲动，要去做企业家，当这些人从事工商业活动，他们就更具有创造力。</p>
<p>中国 2000 年历史的一个巨大变化，就是在改革开放后，优秀的人开始做企业了。但是我们这种文化仍然很脆弱，还没有完全改变过来。</p>
<p>我从 20 世纪 80 年代开始想做的一个事，就是改变人的观念，包括公众对商业的观念、对企业家的观念，包括我讲的观念现代化——“十大观念转变”。</p>
<p>我还在北大光华管理学院做院长的时候，考公务员的学生很少。现在的学生都争着抢着去考公务员。优秀的人都去分配财富，不如优秀的人都去创造财富。</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Eino学习笔记-1-ChatModel]]></title>
        <id>https://shemol.tech/Eino-learning-notes-1-ChatModel</id>
        <link href="https://shemol.tech/Eino-learning-notes-1-ChatModel"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Eino学习笔记，持续更新，也可能不更新。]]></summary>
        <content type="html"><![CDATA[<h1>Eino学习笔记-1-ChatModel</h1>
<p>ChatModel 是 Eino 框架中对对话大模型的抽象，它提供了统一的接口来与不同的大模型服务（如 OpenAI、Ollama 等）进行交互。</p>
<p>这个组件在以下场景中发挥重要作用：</p>
<li>自然语言对话</li>
<li>文本生成和补全</li>
<li>工具调用的参数生成</li>
<li>多模态交互（文本、图片、音频等）</li>
<h1>组件定义</h1>
<h2>接口定义</h2>
<blockquote>代码位置：eino/components/model/interface.go</blockquote>
<pre><code>type ChatModel interface {
    Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.StreamReader[*schema.Message], error)
    BindTools(tools []*schema.ToolInfo) error
}
</code></pre>
<p>Generate方法</p>
<li>功能：生成完整的模型响应</li>
<li>参数：</li>
<p>  - ctx：上下文对象，用于传递请求级别的信息，同时也用于传递Callback Manager</p>
<p>  - input：输入消息列表</p>
<p>  - opts：可选参数，用于配置模型行为</p>
<li>返回值：</li>
<p>  - <code>*schema.Message</code>：模型生成的响应消息</p>
<p>  - error：生成过程中的错误信息</p>
<p>Stream方法</p>
<li>功能：以流式方式生成模型响应</li>
<li>参数：与Generate方法相同</li>
<li>返回值：</li>
<p>  - <code>*schema.StreamReader[*schema.Message]</code>：模型响应的流式读取器</p>
<p>  - error：生成过程中的错误信息</p>
<p>BindTools方法</p>
<li>功能：为模型绑定可用的工具</li>
<li>参数：</li>
<p>  - tools：工具信息列表</p>
<li>返回值：</li>
<p>  - error：绑定过程中的错误信息</p>
<p>核心定位 该接口是对话模型的核心抽象层，支持两种调用模式：</p>
<li>Generate ：同步生成完整响应（适合常规对话场景）</li>
<li>Stream ：流式响应处理（适合长文本生成/实时交互）</li>
<p>架构特性</p>
<pre><code>type ChatModel interface {
    // 同步生成（典型AI对话模式）
    Generate(ctx context.Context, input []*schema.Message, opts ...Option) (*schema.Message, error)
    // 流式处理（适合逐段输出场景）
    Stream(ctx context.Context, input []*schema.Message, opts ...Option) (
        *schema.StreamReader[*schema.Message], error)
    // 工具绑定机制（支持功能扩展）
    BindTools(tools []*schema.ToolInfo) error
}
</code></pre>
<p>关键设计点 ：</p>
<li>多模型支持 ：通过接口抽象实现不同AI引擎（OpenAI/MAAS）的兼容</li>
<li>上下文感知 ：使用 context.Context 支持超时控制、链路追踪等</li>
<li>可扩展参数 ： ...Option 可变参数为不同实现提供配置扩展能力</li>
<li>工具热绑定 ： BindTools 实现运行时功能增强（推测支持Function Calling等特性）</li>
<p>工程实践 ：</p>
<p>   通过 <code>//go:generate</code> 指令自动生成 <code>ChatModelMock</code> 模拟实现，说明：</p>
<li>接口优先设计原则</li>
<li>完善的单元测试支持</li>
<li>依赖注入能力（方便不同环境下的测试）</li>
<p>注意事项 ：</p>
<li>并发安全 ：注释明确提示 <code>BindTools</code> 与 <code>Generate</code> 存在非原子性操作，暗示需要同步控制</li>
<li>消息协议 ：依赖 <code>schema.Message</code> 定义的消息格式（需结合具体协议分析）</li>
<li>流式生命周期 ：<code>StreamReader</code> 需要配合Close操作确保资源释放</li>
<h2>Message结构体</h2>
<blockquote>代码位置：eino/schema/message.go</blockquote>
<pre><code>type Message struct {
    // Role 表示消息的角色（system/user/assistant/tool）
    Role RoleType
    // Content 是消息的文本内容
    Content string
    // MultiContent 是多模态内容，支持文本、图片、音频等
    MultiContent []ChatMessagePart
    // Name 是消息的发送者名称
    Name string
    // ToolCalls 是 assistant 消息中的工具调用信息
    ToolCalls []ToolCall
    // ToolCallID 是 tool 消息的工具调用 ID
    ToolCallID string
    // ResponseMeta 包含响应的元信息
    ResponseMeta *ResponseMeta
    // Extra 用于存储额外信息
    Extra map[string]any
}
</code></pre>
<p>Message结构体是模型交互的基本结构，支持：</p>
<li>多种角色：system（系统）、user（用户）、assistant（ai）、tool（工具）</li>
<li>多模态内容：文本、图片、音频、视频、文件</li>
<li>工具调用：支持模型调用外部工具和函数</li>
<li>元信息：包含响应原因、token使用统计等</li>
<h2>公共Option</h2>
<p>Model组件提供了一组公共Option用于配置模型行为：</p>
<blockquote>代码位置：eino/components/model/option.go</blockquote>
<pre><code>type Options struct {
    // Temperature 控制输出的随机性
    Temperature *float32
    // MaxTokens 控制生成的最大 token 数量
    MaxTokens *int
    // Model 指定使用的模型名称
    Model *string
    // TopP 控制输出的多样性
    TopP *float32
    // Stop 指定停止生成的条件
    Stop []string
}
</code></pre>
<p>可以通过以下方式设置Option：</p>
<pre><code>// 设置温度
WithTemperature(temperature float32) Option
// 设置最大 token 数
WithMaxTokens(maxTokens int) Option
// 设置模型名称
WithModel(name string) Option
// 设置 top_p 值
WithTopP(topP float32) Option
// 设置停止词
WithStop(stop []string) Option
</code></pre>
<h1>使用方式</h1>
<h2>单独使用</h2>
<pre><code>import (
    "context"
    "fmt"
    "io"

    "github.com/cloudwego/eino-ext/components/model/openai"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/schema"
)

// 初始化模型 (以openai为例)
cm, err := openai.NewChatModel(ctx, &openai.ChatModelConfig{
    // 配置参数
})

// 准备输入消息
messages := []*schema.Message{
    {
       Role:    schema.System,
       Content: "你是一个有帮助的助手。",
    },
    {
       Role:    schema.User,
       Content: "你好！",
    },
}

// 生成响应
response, err := cm.Generate(ctx, messages, model.WithTemperature(0.8))
// 响应处理
fmt.Print(response.Content)

// 流式生成
streamResult, err := cm.Stream(ctx, messages)
defer streamResult.Close()

for {
    chunk, err := streamResult.Recv()
    if err == io.EOF {
       break
    }
    if err != nil {
       // 错误处理
    }
    // 响应片段处理
    fmt.Print(chunk.Content)
}
</code></pre>
<h2>在编排中使用</h2>
<pre><code>import (
    "github.com/cloudwego/eino/schema"
    "github.com/cloudwego/eino/compose"
)

/*
 * 初始化ChatModel
 * cm, err := xxx
 */

// 在 Chain 中使用
c := compose.NewChain[[]*schema.Message, *schema.Message]()
c.AppendChatModel(cm)

// 在 Graph 中使用
g := compose.NewGraph[[]*schema.Message, *schema.Message]()
g.AddChatModelNode("model_node", cm)
</code></pre>
<h1>Option和Callback使用</h1>
<h2>Option使用示例</h2>
<pre><code>import "github.com/cloudwego/eino/components/model"

// 使用 Option
response, err := cm.Generate(ctx, messages,
    model.WithTemperature(0.7),
    model.WithMaxTokens(2000),
    model.WithModel("gpt-4"),
)
</code></pre>
<h2>Callback使用示例</h2>
<pre><code>import (
    "context"
    "fmt"

    "github.com/cloudwego/eino/callbacks"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/compose"
    "github.com/cloudwego/eino/schema"
    callbacksHelper "github.com/cloudwego/eino/utils/callbacks"
)

// 创建 callback handler
handler := &callbacksHelper.ModelCallbackHandler{
    OnStart: func(ctx context.Context, info *callbacks.RunInfo, input *model.CallbackInput) context.Context {
       fmt.Printf("开始生成，输入消息数量: %d\n", len(input.Messages))
       return ctx
    },
    OnEnd: func(ctx context.Context, info *callbacks.RunInfo, output *model.CallbackOutput) context.Context {
       fmt.Printf("生成完成，Token 使用情况: %+v\n", output.TokenUsage)
       return ctx
    },
    OnEndWithStreamOutput: func(ctx context.Context, info *callbacks.RunInfo, output *schema.StreamReader[*model.CallbackOutput]) context.Context {
       fmt.Println("开始接收流式输出")
       defer output.Close()
       return ctx
    },
}

// 使用 callback handler
helper := callbacksHelper.NewHandlerHelper().
    ChatModel(handler).
    Handler()

/*
 * compose a chain
 * chain := NewChain
 * chain.appendxxx().
 *       appendxxx().
 *       ...
 */

// 在运行时使用
runnable, err := chain.Compile()
if err != nil {
    return err
}

result, err := runnable.Invoke(ctx, messages, compose.WithCallbacks(helper))
</code></pre>
<h1>已有实现</h1>
<p>1. OpenAI ChatModel: 使用 OpenAI 的 GPT 系列模型 <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_openai">ChatModel - OpenAI</a></p>
<p>2. Ollama ChatModel: 使用 Ollama 本地模型 <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_ollama">ChatModel - Ollama</a></p>
<p>3. ARK ChatModel: 使用 ARK 平台的模型服务 <a href="https://www.cloudwego.io/zh/docs/eino/ecosystem_integration/chat_model/chat_model_ark">ChatModel - ARK</a></p>
<h1>自行实现参考</h1>
<p>实现自定义的 ChatModel 组件时，需要注意以下几点：</p>
<p>1. 注意要实现公共的 option</p>
<p>2. 注意实现 callback 机制</p>
<p>3. 在流式输出时记得完成输出后要 close writer</p>
<h2>Option机制</h2>
<p>自定义 ChatModel 如果需要公共 Option 以外的 Option，可以利用组件抽象的工具函数实现自定义的 Option，例如：</p>
<pre><code>import (
    "time"

    "github.com/cloudwego/eino/components/model"
)

// 定义 Option 结构体
type MyChatModelOptions struct {
    Options    *model.Options
    RetryCount int
    Timeout    time.Duration
}

// 定义 Option 函数
func WithRetryCount(count int) model.Option {
    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {
       o.RetryCount = count
    })
}

func WithTimeout(timeout time.Duration) model.Option {
    return model.WrapImplSpecificOptFn(func(o *MyChatModelOptions) {
       o.Timeout = timeout
    })
}
</code></pre>
<h2>Callback处理</h2>
<p>ChatModel 实现需要在适当的时机触发回调，以下结构由 ChatModel 组件定义：</p>
<pre><code>import (
    "github.com/cloudwego/eino/schema"
)

// 定义回调输入输出
type CallbackInput struct {
    Messages    []*schema.Message
    Model       string
    Temperature *float32
    MaxTokens   *int
    Extra       map[string]any
}

type CallbackOutput struct {
    Message    *schema.Message
    TokenUsage *schema.TokenUsage
    Extra      map[string]any
}
</code></pre>
<h1>完整实现示例</h1>
<pre><code>import (
    "context"
    "errors"
    "net/http"
    "time"

    "github.com/cloudwego/eino/callbacks"
    "github.com/cloudwego/eino/components/model"
    "github.com/cloudwego/eino/schema"
)

type MyChatModel struct {
    client     *http.Client
    apiKey     string
    baseURL    string
    model      string
    timeout    time.Duration
    retryCount int
}

type MyChatModelConfig struct {
    APIKey string
}

func NewMyChatModel(config *MyChatModelConfig) (*MyChatModel, error) {
    if config.APIKey == "" {
       return nil, errors.New("api key is required")
    }
    return &MyChatModel{
       client: &http.Client{},
       apiKey: config.APIKey,
    }, nil
}

func (m *MyChatModel) Generate(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.Message, error) {
    // 1. 处理选项
    options := &MyChatModelOptions{
       Options: &model.Options{
          Model: &m.model,
       },
       RetryCount: m.retryCount,
       Timeout:    m.timeout,
    }
    options.Options = model.GetCommonOptions(options.Options, opts...)
    options = model.GetImplSpecificOptions(options, opts...)

    // 2. 开始生成前的回调
    ctx = callbacks.OnStart(ctx, &model.CallbackInput{
       Messages: messages,
       Config: &model.Config{
          Model: *options.Options.Model,
       },
    })

    // 3. 执行生成逻辑
    response, err := m.doGenerate(ctx, messages, options)

    // 4. 处理错误和完成回调
    if err != nil {
       ctx = callbacks.OnError(ctx, err)
       return nil, err
    }
    ctx = callbacks.OnEnd(ctx, &model.CallbackOutput{
       Message: response,
    })
    return response, nil
}

func (m *MyChatModel) Stream(ctx context.Context, messages []*schema.Message, opts ...model.Option) (*schema.StreamReader[*schema.Message], error) {
    // 1. 处理选项
    options := &MyChatModelOptions{
       Options: &model.Options{
          Model: &m.model,
       },
       RetryCount: m.retryCount,
       Timeout:    m.timeout,
    }
    options.Options = model.GetCommonOptions(options.Options, opts...)
    options = model.GetImplSpecificOptions(options, opts...)

    // 2. 开始流式生成前的回调
    ctx = callbacks.OnStart(ctx, &model.CallbackInput{
       Messages: messages,
       Config: &model.Config{
          Model: *options.Options.Model,
       },
    })

    // 3. 创建流式响应
    // Pipe产生一个StreamReader和一个StreamWriter，向StreamWriter中写入的内容可以从StreamReader中读到，二者并发安全。
    // 实现中异步向StreamWriter中写入生成内容，返回StreamReader作为返回值。
    // StreamReader是一个数据流，仅可读一次；组件自行实现Callback时，既需要向callback传递数据流，也需要返回一个数据流，因此需要对数据流进行一次拷贝。
    // 考虑到此种情形总是需要拷贝数据流，OnEndWithStreamOutput函数会在内部拷贝并返回一个未被读取的流。
    // 以下代码演示了一种流处理方式，处理方式不唯一
    sr, sw := schema.Pipe[*model.CallbackOutput](1)

    // 4. 启动异步生成
    go func() {
       defer sw.Close()
       // 流式写入
       m.doStream(ctx, messages, options, sw)
    }()

    // 5. 完成回调
    _, nsr := callbacks.OnEndWithStreamOutput(ctx, sr)
    return schema.StreamReaderWithConvert(nsr, func(t *model.CallbackOutput) (*schema.Message, error) {
       return t.Message, nil
    }), nil
}

func (m *MyChatModel) BindTools(tools []*schema.ToolInfo) error {
    // 实现工具绑定逻辑
    return nil
}

func (m *MyChatModel) doGenerate(ctx context.Context, messages []*schema.Message, opts *MyChatModelOptions) (*schema.Message, error) {
    // 实现生成逻辑
    return nil, nil
}

func (m *MyChatModel) doStream(ctx context.Context, messages []*schema.Message, opts *MyChatModelOptions, sr *schema.StreamWriter[*model.CallbackOutput]) {
    // 流式生成文本写入sr中
    return
}
</code></pre>
<h1>参考资料</h1>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[Eino学习笔记-2]]></title>
        <id>https://shemol.tech/eino-learning-notes-2</id>
        <link href="https://shemol.tech/eino-learning-notes-2"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[Eino学习笔记，继续学习，想参加黑客马拉松。]]></summary>
        <content type="html"><![CDATA[<h1>Eino学习笔记-2</h1>
<h1>Components</h1>
<p>大模型应用开发的三种应用模式：</p>
<p>1. 直接对话模式：处理用户输入并生成相应回答</p>
<p>2. 知识处理模式：对文本文档进行语义化处理、存储和检索</p>
<p>3. 工具调用模式：基于上下文做出决策并调用相应工具</p>
<p>Eino将常用能力抽象为可复用的组件（Components）</p>
<p>组件抽象和几种模式对应关系如下：</p>
<p>对话处理类组件：</p>
<p>1. 模块化处理和大模型交互参数的组件抽象：<code>ChatTemplate</code> </p>
<p>2. 直接和大模型交互的组件抽象：<code>ChatModel</code> </p>
<p>文本语义处理类组件：</p>
<p>1. 获取和处理文本文档的组件抽象：<code>Document.Loader</code> 、<code>Document.Transformer</code> </p>
<p>2. 文本文档语义化处理的组件抽象：<code>Embedding</code> </p>
<p>3. Embedding之后将数据索引进行存储的组件抽象：<code>Indexer</code> </p>
<p>4. 将语义相关文本文档进行索引和召回的组件抽象：<code>Retriever</code> </p>
<p>决策执行类组件：</p>
<p>大模型能够做决策并调用工具的组件抽象：<code>ToolsNode</code></p>
<p>自定义组件：</p>
<p>用户自定义代码逻辑的组件抽象：<code>Lambda</code> </p>
<p>Eino的组件抽象秉持着以下设计原则：</p>
<p>1. 模块化和标准化：将一系列功能相同的能力抽象成统一的模块，组件间职能明确、边界清晰，支持灵活的组合。</p>
<p>2. 可扩展性：接口的设计保持尽可能小的模块能力约束，让组件的开发者能方便地实现自定义组件的开发。</p>
<p>3. 可复用性：把最常用的能力和实现进行封装，提供给开发者开箱即用的工具。</p>
<h1>Chain & Graph 编排功能</h1>
<p>编排：对Components原子能力进行组合、串联。</p>
<li>不能让业务逻辑融入到编排中。</li>
<li>大模型应用的核心是 “对提供原子能力的组件” 进行组合串联，组件是编排的 “第一公民”。</li>
<li>抽象视角看编排：编排是在构建一张网络，数据则在这个网络中流动，网络的每个节点都对流动的数据有格式/内容的要求，一个能顺畅流动的数据网络，关键就是 “<strong>上下游节点间的数据格式是否对齐</strong>？”。</li>
<li>业务场景的复杂度会反映在编排产物的复杂性上，只有<strong>横向的治理能力</strong>才能让复杂场景不失控。</li>
<li>大模型是会持续保持高速发展的，大模型应用也是，只有<strong>具备扩展能力的应用才拥有生命力</strong>。</li>
<p>Eino提供了基于Graph模型（edge+node）的，<strong>以组件为原子节点</strong>的，<strong>以上下游类型对齐为基础</strong>的编排解决方案。</p>
<li>以组件为核心，规范了业务功能的封装方式。</li>
<li>业务逻辑复杂度封装到组件内部，编排层拥有更全局的视角，让逻辑层次变得清晰。</li>
<li>提供了切面能力，callback机制支持了基于节点的统一治理能力（什么是切面能力）</li>
<li>提供了call option的机制，扩展性是快速迭代中的系统最基本的诉求</li>
<li>提供了“类型对齐”的开发方式的强化，降低开发者心智负担，把golang的类型安全特性发挥出来</li>
<li>提供了“流的自动转换”能力，让“流”在编排系统的复杂性来源榜中除名（<strong>Eino流式编程</strong>）</li>
<hr />
<p>Graph缺点：基于 “点” “边” 模型的 Graph 在使用时，要求开发者要使用 <code>graph.AddXXXNode()</code> 和 <code>graph.AddEdge()</code> 两个接口来创建一个数据通道，强大但是略显复杂。</p>
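<p>为了直观感受这个差别，下面放一段最小的对比示意（省略了错误处理，AddEdge、START/END 等写法请以 Eino 实际 API 和文档为准，这里只是帮助自己理解）：</p>
<pre><code>// Graph：需要显式添加节点和边，能表达分支、并行甚至环
g := compose.NewGraph[[]*schema.Message, *schema.Message]()
_ = g.AddChatModelNode("model_node", cm)
_ = g.AddEdge(compose.START, "model_node")
_ = g.AddEdge("model_node", compose.END)

// Chain：链式追加节点，内部仍然是 Graph，但不需要手动连边
c := compose.NewChain[[]*schema.Message, *schema.Message]()
c.AppendChatModel(cm)
runnable, _ := c.Compile()
out, _ := runnable.Invoke(ctx, messages)
</code></pre>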
<p>Eino 封装了接口更易于使用的 <code>Chain</code>。Chain 是对 Graph 的封装，除了 “环” 之外，Chain 暴露了几乎所有 Graph 的能力。</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[关于学习方式-倪爽（转载）]]></title>
        <id>https://shemol.tech/learning-ways-from-nishuang</id>
        <link href="https://shemol.tech/learning-ways-from-nishuang"/>
        <updated>2025-04-11T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[设计师倪爽老师的学习方式。]]></summary>
        <content type="html"><![CDATA[<h1>关于学习方式-倪爽（转载）</h1>
<p>原推链接：<a href="https://x.com/nishuang/status/1787939646129008771">https://x.com/nishuang/status/1787939646129008771</a></p>
<p>我也来分享一个我的学习方法，可以简称为“把小孩扔进水里只要开始淹不死后面小孩不但能全自动学会游泳而且还会反问你游泳也需要学的吗…实用学习法”</p>
<a href="https://x.com/hashtag/%E6%B4%BB%E5%88%B0%E6%AD%BB%E5%AD%A6%E5%88%B0%E6%AD%BB?src=hashtag_click">#活到死学到死</a>
<p>我学设计过程中干的事情和大家都一样，比如模仿、练习、研究、学习原理、学习方法和技巧…等等，不同有三点：</p>
<li>我特别不喜欢做学校那种练习（跟背单词一样属于虚构的假事情），我都是拿自己、公司、客户的真实项目做练习</li>
<p>这种真实案例，能让我聚焦于设计，避免很多设计师自我感动、自己骗自己的真自恋、假自信</p>
<li>先接下设计任务，再学习设计思路和方法</li>
<p>看似压力很大，其实难度可控，而且可以帮我慢慢积累出真正的自信。开会发言时信心满满、说几句就自我暗示说一句“对！”，那还是简单模仿，而不是真正的自信</p>
<li>先设计、再学习、再研究、再模仿，在实战中积累经验，再回去安心进修、有针对性地深入学习，最后就可以从策略、经验、方法论等等各个层面去模仿高人，而不是仅仅模仿皮毛</li>
<p>传统教育的顺序，便于把你培养成美工、码农之类的技术工人，对设计师这种创造性工作而言，教育的效率比不上学习的效率</p>
<p>听起来是不是很怪？</p>
<p>其实很多人用类似方法学习和成长</p>
<p>这个“把小孩扔进水里”学习方法的好处是它完全基于强烈的正向激励，像我这种好奇心多于耐心、用智力模拟毅力的性格，适合这种看起来很难、其实自驱动的学习方法</p>
<p>直到今天，我还在天天学设计，天天把自己扔进水里。</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[如何看待关税和股市大跌？股神巴菲特这样说（转载）]]></title>
        <id>https://shemol.tech/buffett-on-tariffs-stock-market</id>
        <link href="https://shemol.tech/buffett-on-tariffs-stock-market"/>
        <updated>2025-04-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[如何看待关税和股市大跌？]]></summary>
        <content type="html"><![CDATA[<h1>如何看待关税和股市大跌？股神巴菲特这样说（转载）</h1>
<p>美国对中国征收104%关税，看到一位前辈转了一篇这个文章。转载到博客上好好学习一下，仅供自己学习使用。</p>
<p>原文链接：<a href="https://ec.ltn.com.tw/article/breakingnews/5005890">如何看待關稅和股市大跌？股神巴菲特這樣說</a></p>
<p>〔財經頻道／綜合報導〕美國總統川普徵收對等關稅，引發全球市場震盪。身為史上最著名的投資人之一，「股神」巴菲特（Warren Buffett）的觀念一直受到外界關注，有外媒就整理出巴菲特過往言論，發現巴菲特多年來都有對關稅和股市下跌這2個主題發表看法過，報導認為，如果能夠了解巴菲特如何看待這些事情，或許有助於投資人掌握目前動盪市場的脈絡。</p>
<p>《CNBC》報導，巴菲特最近一次公開談論關稅，是在3月初接受美國媒體《CBS》新聞主播奧唐納（Norah O’Donnell）的專訪，巴菲特表示，關稅通常會導致物價上漲，並說：「隨著時間過去，關稅在最終將轉變成商品稅。」他甚至打趣說：「牙仙不會為此付錢！」</p>
<p>報導指出，巴菲特很可能已經預見接下來會發生什麼事，首先是「通膨」，當2018年被問到川普首批較溫和的關稅時，巴菲特表示，包括鋁和鋼鐵在內的關稅，已經推高了旗下部分子公司的成本。雖然川普開始課徵外國商品關稅前，美國就已出現通膨跡象，但巴菲特說：「關稅情勢將會讓通膨問題更加惡化。」</p>
<p>巴菲特所擔心的另一個潛在影響是「貿易戰」，也就是美國與貿易夥伴之間互相提高關稅、來回報復，這可能會拖累全球經濟成長。在3月那場專訪中，巴菲特甚至表示，關稅在某種程度上就是一種戰爭行為。</p>
<p>2019年，在美中貿易緊張升高的情況下，巴菲特的說法更為直白。他在 《CNBC》的專訪中說：「如果我們真的發動貿易戰，那對全世界都不好，因為全世界經濟是互相連動的。」</p>
<p>在川普宣布最新一輪關稅後，標普500指數已出現下跌，不過尚未正式進入「熊市」（即從近期高點下跌 20% 以上），分析師指出，如果真的進入熊市，很可能是因為投資人擔憂貿易戰可能引發全球經濟衰退。</p>
<p>而這不是巴菲特第一次面對全球性經濟衰退。2008年，當全球金融危機引發熊市時，巴菲特曾在《紐約時報》發表專欄文章，並在文章中說：「全球金融體系正陷入混亂，無論是在美國還是其他地區。更糟的是，這些問題正逐漸滲透到實體經濟，現在已經像決堤般失控」、「短期內，失業率將上升、商業活動將停滯，新聞標題也只會越來越駭人聽聞。」</p>
<p>巴菲特接著說：「所以……我開始買進美國股票了。」</p>
<p>巴菲特坦承，他無法預測市場下一步會怎麼走。事實上，在他2008年10月發表這篇文章後，標普500指數又跌了5個月才觸底反彈。</p>
<p>但正如巴菲特一直強調的，企業整體總是能夠持續創新，並長期提升獲利能力，進一步帶動股市長期上漲。而在2008年時，巴菲特指出，許多投資者都不願將自己的資金置於風險之中。</p>
<p>不過巴菲特認為，對於這些穩健企業長期繁榮的擔憂，那是沒有意義的，他寫說：「這些企業的確會偶爾遇到獲利起伏，就像過去一樣。但在5年、10年、20年後，大多數大型公司仍會創下獲利新高。」</p>
<p>巴菲特偏好在股票價格相對便宜時買進，這樣長期報酬才會更高。巴菲特在 2008 年寫說：「簡單來說，壞消息是投資人最好的朋友，它讓你可以用打折的價格，買下一部分美國的未來。」</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[RPC学习笔记]]></title>
        <id>https://shemol.tech/RPC-learning-notes</id>
        <link href="https://shemol.tech/RPC-learning-notes"/>
        <updated>2025-04-08T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[RPC学习笔记]]></summary>
        <content type="html"><![CDATA[<h1>RPC学习笔记</h1>
<p>RPC - Remote Procedure Call - 远程过程调用</p>
<p>用来解决分布式系统的通信问题，核心特点是可以像调用本地方法一样发起远程调用。RPC其实不只是微服务、云原生的专用名词，只要涉及到网络通信，就可能用到RPC。</p>
<p>举两个例子：</p>
<li>大型分布式应用系统可能会依赖消息队列、分布式缓存、分布式数据库以及统一配置中心等，应用程序与依赖的这些中间件之间都可以通过RPC进行通信。比如etcd，它作为一个统一的配置服务，客户端就是通过gRPC框架与服务端进行通信的。</li>
<li>Kubernetes本身就是分布式的，Kubernetes的kube-apiserver与整个分布式集群中的每个组件间的通讯，都是通过gRPC框架进行的。</li>
<p>RPC涉及：</p>
<li>序列化： 将对象转换为可传输的字节流（序列化）及逆向还原（反序列化），解决跨网络和跨语言的数据交换问题。</li>
<li>压缩算法： 减少网络传输的数据量，降低带宽消耗和延迟。</li>
<li>协议： 定义客户端与服务端通信的规则，包括传输格式和交互模式：HTTP/2、TCP、UDP。</li>
<li>动态代理： 屏蔽远程调用的复杂性，使开发者像调用本地方法一样使用远程服务：JDK动态代理、字节码增强。</li>
<li>服务注册与发现： 动态管理服务实例的可用性，支持负载均衡和故障转移。注册中心如ZooKeeper、Consul、ETCD，记录服务地址和元数据。</li>
<li>加密：保障数据传输的机密性和完整性，防止中间人攻击和数据篡改。</li>
<li>网络通信：网络IO模型，实现高效、稳定的网络通信，处理连接管理、数据收发等底层细节。网络通信说起来简单，但实际上是一个非常复杂的过程，这个过程主要包括：对端节点的查找、网络连接的建立、传输数据的编码解码以及网络连接的管理等等。RPC对网络通信的整个过程做了完整包装，在搭建分布式系统时，它会使网络通信逻辑的开发变得简单，同时也会让网络通信变得更加安全可靠。</li>
<p>RPC集群涉及：</p>
<li>监控</li>
<li>熔断限流</li>
<li>优雅启停</li>
<li>多协议</li>
<li>分布式链路跟踪</li>
<p>RPC真正强大的地方：</p>
<li>连接管理</li>
<li>健康检测</li>
<li>负载均衡</li>
<li>优雅启停机</li>
<li>异常重试</li>
<li>业务分组</li>
<li>熔断限流</li>
<p>如果没有RPC框架，那要怎么调用另外一台服务器的接口呢？</p>
<p>RPC是帮助我们屏蔽网络编程细节，实现调用远程方法就跟调用本地（同一个项目中的方法）一样的体验，我们不需要因为这个方法是远程调用就需要编写很多与业务无关的代码。</p>
<p>RPC的作用主要体现在两方面：</p>
<li>屏蔽远程调用和本地调用的区别，让我们觉得这就是调用项目内的方法；</li>
<li>隐藏底层网络通信的复杂性，让我们更专注于业务逻辑。</li>
<h1>序列化</h1>
<p>网络传输的数据必须是二进制数据，但调用方请求的出入参数都是对象。需要提前把它转换成可传输的二进制数据，而且要求转换算法是可逆的。</p>
<p>传输的数据一般分为数据头和消息体：数据头用于身份识别，包括协议标识、数据大小、请求类型、序列化类型等信息；消息体主要是请求的业务参数信息和扩展属性等。</p>
<h1>反序列化</h1>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869763125.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869764196.png" alt="" />
<p>RPC不仅可以解决服务间的通信问题，访问MQ、分布式缓存、数据库等中间件时同样可以基于RPC。</p>
<p>RPC和HTTP都属于应用层协议。</p>
<p>RPC请求在发送到网络中之前，它需要把方法调用的请求参数转成二进制；转成二进制后，写入本地Socket中，然后被网卡发送到网络设备中。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869765391.png" alt="" />
<p>设计可扩展的、向后兼容的协议，关键点就是利用好Header中的扩展字段以及Payload中的扩展字段，通过扩展字段向后兼容。</p>
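<p>为了帮助自己理解“定长头 + 扩展字段 + 消息体”的布局，我写了一段Go的示意代码（字段划分完全是我自己假设的，不是任何真实RPC协议）：</p>
<pre><code>import (
    "bytes"
    "encoding/binary"
)

// 示意用的协议头：前面是定长部分，后面跟可变长度的扩展字段
type Header struct {
    Magic       byte   // 协议标识
    Version     byte   // 协议版本
    MsgType     byte   // 请求类型：请求/响应/心跳
    SerializeID byte   // 序列化方式：JSON/Hessian/Protobuf...
    HeaderLen   uint16 // 整个协议头的长度（含扩展字段），旧版本按它跳过不认识的字段
    BodyLen     uint32 // 消息体长度
}

func Encode(h Header, ext, body []byte) []byte {
    h.HeaderLen = uint16(10 + len(ext)) // 定长部分10字节 + 扩展字段
    h.BodyLen = uint32(len(body))

    buf := new(bytes.Buffer)
    binary.Write(buf, binary.BigEndian, h) // 写定长头
    buf.Write(ext)                         // 写扩展字段：新版本在这里加内容即可向后兼容
    buf.Write(body)                        // 写消息体（序列化后的请求参数）
    return buf.Bytes()
}
</code></pre>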
<p>不同场景下合理选择序列化方法。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869766673.png" alt="" />
<p>常用的序列化方法：</p>
<li>JDK原生序列化</li>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869768004.png" alt="" />
<p>实际上任何一种序列化框架，核心思想就是设计一种序列化协议。</p>
<li>JSON：典型的Key-Value方式，没有数据类型，是一种文本型序列化框架。</li>
<p>  但是JSON序列化有两个问题：</p>
<p>  - 额外空间开销比较大，对于大数据量服务意味着巨大的内存和磁盘开销；</p>
<p>  - JSON没有类型信息，而像Java这种强类型语言需要通过反射来还原类型，所以性能不好。</p>
<p>  所以如果RPC框架选用JSON序列化，服务提供者与服务调用者之间传输的数据量要相对较小，否则将严重影响性能。</p>
<li>Hessian：动态类型、二进制、紧凑的，并且可跨语言移植的一种序列化框架。Hessian协议要比JDK、JSON更紧凑，性能上要比JDK、JSON序列化高效很多，并且生成的字节数也更少。</li>
<p>  - 但是Hessian本身有问题，官方版本对Java里面一些常见对象的类型不支持。</p>
<p>    - Linked系列，LinkedHashMap、LinkedHashSet等，但可以通过扩展CollectionDeserializer类修复；</p>
<p>    - Locale类，可以通过扩展ContextSerializerFactory类修复；</p>
<p>    - Byte/Short反序列化的时候变成Integer</p>
<li>Protobuf：Google公司内部的混合语言数据标准，结构化数据存储格式，可以用于结构化数据序列化，支持Java、Python、C++、Go等语言。Protobuf使用的时候需要定义IDL（Interface description language），然后使用不同语言的IDL编译器，生成序列化工具类，优点是</li>
<p>  - 序列化后体积相比JSON、Hessian小很多；</p>
<p>  - IDL能清晰的描述语义，所以足以帮助并保证应用程序之间的类型不会丢失，无需类似XML解析器；</p>
<p>  - 序列化反序列化速度很快，不需要通过反射获取类型；</p>
<p>  - 消息格式升级和兼容性不错，可以做到向后兼容。</p>
<p>  Protostuff不需要依赖IDL文件，可以直接对Java领域对象进行序列化/反序列化操作，在效率上跟Protobuf差不多，生成的二进制格式和Protobuf是完全相同的，可以说是一个Java版本的Protobuf序列化框架。但在使用过程中，也遇到过一些不支持的情况。</p>
<p>  序列化协议还有Message Pack、kryo等。</p>
<p>  影响选择序列化工具的因素：</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869769098.png" alt="" />
<p>首选序列化协议还是Hessian与Protobuf，因为他们在性能、时间开销、空间开销、通用性、兼容性和安全性上，都满足了我们的要求。其中Hessian在使用上更加方便，在对象的兼容性上更好；Protobuf则更加高效，通用性上更有优势。</p>
<p>RPC框架在使用时需要注意哪些问题？</p>
<li>对象构造的过于复杂。属性很多，并且存在多层嵌套。</li>
<li>对象过于庞大。</li>
<li>使用序列化框架不支持的类作为入参类。</li>
<li>对象有复杂的继承关系。</li>
<p>RPC框架在网络通信上更倾向于哪种网络IO模型？</p>
<p>常见的网络IO模型</p>
<li>同步阻塞IO（BIO）</li>
<li>同步非阻塞IO（NIO）</li>
<li>IO多路复用</li>
<li>异步非阻塞IO（AIO）</li>
<p>只有AIO为异步IO，其他都是同步IO。</p>
<p>阻塞IO（blocking IO）是最简单、最常见的IO模型。在Linux中，默认情况下所有的socket都是blocking的，先看下操作流程。</p>
<p>首先，应用进程发起IO系统调用后，应用进程被阻塞，转到内核空间处理。然后，内核开始等待数据，等待到数据之后，再将内核中的数据拷贝到用户内存中，整个IO处理完毕后返回进程。最后应用的进程解除阻塞状态，运行业务逻辑。</p>
<p>系统内核处理IO操作分为两个阶段—等待数据和拷贝数据。而在这两个阶段中，应用进程中IO操作的线程会一直都处于阻塞状态，如果是基于Java多线程开发，那么每一个IO操作都要占用线程，直至IO操作结束。</p>
<p>IO多路复用</p>
<p>多路复用IO是在高并发场景中使用最为广泛的一种IO模型。如Java的NIO、Redis、Nginx的底层实现就是此类IO模型的应用，经典的Reactor模式也是基于此类IO模型。</p>
<p>多个网络连接的IO可以注册到一个复用器（select）上，当用户进程调用了select，那么整个进程会被阻塞。同时，内核会“监视”所有select负责的socket，当任何一个socket中的数据准备好了，select就会返回。这个时候用户进程再调用read操作，将数据从内核中拷贝到用户进程。</p>
<p>这里我们可以看到，当用户进程发起了select调用，进程会被阻塞，当发现该select负责的socket有准备好的数据时才返回，之后才发起一次read，整个流程要比阻塞IO要复杂，似乎也更浪费性能。但它最大的优势在于，用户可以在一个线程内同时处理多个socket的IO请求。<strong>用户可以注册多个socket，然后不断的调用select读取被激活的socket，即可达到在同一个线程内同时处理多个IO请求的目的。</strong>而在同步阻塞模型中，必须通过多线程的方式才能达到这个目的。</p>
<p>为什么说阻塞IO和IO多路复用最常见？</p>
<p>实际在网络IO的应用上，需要的是系统内核的支持以及编程语言的支持。</p>
<p>在系统内核的支持上，现在大多数系统都会支持阻塞IO、非阻塞IO和IO多路复用，但像信号驱动IO、异步IO，只有高版本的Linux系统才会支持。</p>
<p>在编程语言上，无论是C++还是Java，在高性能的网络编程框架的编写上，大多数都是基于Reactor模式，其中最为典型的便是Java的Netty框架，而Reactor模式是基于IO多路复用的。当然，在非高并发场景下，同步阻塞IO是最为常见的。</p>
<p>RPC框架在网络通信上倾向于选择哪种网络IO模型？</p>
<p>RPC调用在大多数情况下，是一个高并发调用的场景，考虑到系统内核的支持、编程语言的支持以及IO模型本身的特点，在RPC框架的实现中，在网络通信的处理上，我们会选择IO多路复用的方式。开发语言的网络通信框架选型上，最优的选择是基于Reactor模式实现的框架，如Java语言，首选框架便是Netty框架（Java还有很多其他NIO框架，但目前Netty应用的最为广泛），并且在Linux环境下，也要开启epoll来提升系统性能（Windows环境下是无法开启epoll的，因为系统内核不支持）。</p>
<blockquote>什么是基于Reactor模式的网络IO模型？</blockquote>
<blockquote>基于Reactor模式的网络IO模型是一种<strong>事件驱动的高性能网络编程模型</strong>，通过将I/O事件的监听、分发与业务逻辑处理解耦，实现对高并发连接的统一管理和高效响应。其核心是通过<strong>多路复用技术</strong>（如Select、epoll、kqueue）监控多个连接事件，并基于事件类型分发给对应的处理器，避免了传统阻塞式IO的线程资源浪费。</blockquote>
<blockquote>核心组件：</blockquote>
<blockquote>- Reactor（反应器）：负责监听所有I/O事件，并通过事件循环（Event Loop）将就绪事件分发给对应的处理器。它是整个模型的中枢，通常在一个独立线程中运行。使用多路复用器（如Selector）轮询注册Channel，检测连接、读、写等事件。</blockquote>
<blockquote>- Acceptor（连接处理器）：专门处理连接建立事件，接收客户端连接请求，并将新建立的SocketChannel注册到Reactor中，后续监听其读/写事件。</blockquote>
<blockquote>- Handler（事件处理器）：处理具体的业务逻辑（如数据读取、处理、写回），通常为：</blockquote>
<blockquote>读处理器：处理读就绪事件，从Channel读取数据并解码。写处理器：处理写就绪事件，将处理结果编码后写回客户端。业务处理器：执行计算、数据库操作等耗时任务，可能由线程池异步处理。</blockquote>
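<p>为了体会“事件分发”和“处理器”解耦这件事，我自己写了一个极简的示意（纯Go伪实现，没有真的用epoll，只是表达Reactor的结构）：</p>
<pre><code>// 极简Reactor示意：一个事件循环 + 按事件类型分发给Handler
type Event struct {
    Type string      // "accept" / "read" / "write"
    Conn interface{} // 就绪的连接（示意）
}

type Handler func(Event)

type Reactor struct {
    handlers map[string]Handler
    events   chan Event // 真实实现里，这些就绪事件由select/epoll产生
}

func (r *Reactor) Register(t string, h Handler) { r.handlers[t] = h }

func (r *Reactor) Loop() {
    for ev := range r.events { // 事件循环：不断取出就绪事件
        if h, ok := r.handlers[ev.Type]; ok {
            h(ev) // 分发给对应处理器，耗时的业务逻辑可以再交给协程/线程池
        }
    }
}
</code></pre>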
<p>零拷贝 zero copy</p>
<p>系统内核处理IO操作分为两个阶段—等待数据和拷贝数据。等待数据，就是系统内核在等待网卡接收到数据后，把数据写到内核中；而拷贝数据，就是系统内核在获取到数据后，将数据拷贝到用户进程的空间中。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869770162.png" alt="" />
<p>应用进程的每一次写操作，都会把数据写到用户空间的缓冲区中，再由CPU将数据拷贝到系统内核的缓冲区中，之后再由DMA将这份数据拷贝到网卡中，最后由网卡发送出去。这里我们可以看到，一次写操作数据要拷贝两次才能通过网卡发送出去，而用户进程的读操作则是将整个流程反过来，数据同样会拷贝两次才能让应用程序读取到数据。</p>
<p>应用进程的一次完整的读写操作，都需要在用户空间与内核空间中来回拷贝，并且每一次拷贝，都需要CPU进行一次上下文切换（由用户进程切换到系统内核，或由系统内核切换到用户进程）。</p>
<p>零拷贝技术</p>
<p>零拷贝，就是取消用户空间与内核空间之间的数据拷贝操作，应用进程每一次的读写操作，都可以通过一种方式，让应用进程向用户空间写入或者读取数据，就如同直接向内核空间写入或者读取数据一样，再通过DMA将内核中的数据拷贝到网卡，或将网卡中的数据拷贝到内核。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869771154.png" alt="" />
<p>零拷贝有两种解决方案</p>
<li>mmap+write方式：核心原理是通过虚拟内存解决。</li>
<li>sendfile方式</li>
<blockquote>mmap+write</blockquote>
<blockquote>实现原理：</blockquote>
<blockquote>- 内存映射机制：通过mmap系统调用将内核读缓冲区直接映射到用户进程的虚拟地址空间，实现内核与用户空间的共享内存。此过程无需将数据从内核缓冲区拷贝到用户缓冲区，仅建立地址映射关系。</blockquote>
<blockquote>数据传输流程：</blockquote>
<blockquote>- 第一次拷贝（DMA）：磁盘数据通过DMA直接传输到内核读缓冲区。</blockquote>
<blockquote>  共享映射：用户通过虚拟内存映射访问内核缓冲区数据。</blockquote>
<blockquote>- 第二次拷贝（CPU）：调用write时，CPU将内核读缓冲区的数据拷贝到内核Socket缓冲区。</blockquote>
<blockquote>- 第三次拷贝（DMA）：DMA将Socket缓冲区的数据发送到网卡。</blockquote>
<blockquote>优势与局限：</blockquote>
<blockquote>优点：</blockquote>
<blockquote>- 减少一次CPU拷贝（内核→用户缓冲区的拷贝被消除）。</blockquote>
<blockquote>- 允许应用程序直接操作映射内存，适合需要对数据进行预处理（如修改、压缩）的场景。</blockquote>
<blockquote>缺点：</blockquote>
<blockquote>- 仍存在4次上下文切换（两次系统调用）和3次数据拷贝。</blockquote>
<blockquote>- 维护内存映射需要额外开销，可能因文件被截断导致异常（如SIGBUS信号）。</blockquote>
<blockquote>sendfile</blockquote>
<blockquote>sendfile方式将read和write合并为一次系统调用，直接在内核空间完成数据传输。</blockquote>
<blockquote>数据传输流程（分两种模式）</blockquote>
<blockquote>基础模式（无SG-DMA支持）：</blockquote>
<blockquote>- 第一次拷贝（DMA）：磁盘→内核读缓冲区。</blockquote>
<blockquote>- 第二次拷贝（CPU）：内核读缓冲区→内核Socket缓冲区。</blockquote>
<blockquote>- 第三次拷贝（DMA）：Socket缓冲区→网卡。</blockquote>
<blockquote>SG-DMA优化模式：</blockquote>
<blockquote>仅需两次DMA拷贝：内核读缓冲区直接通过DMA Scatter/Gather技术传输到网卡，无需CPU参与Socket缓冲区拷贝。</blockquote>
<blockquote>优点：</blockquote>
<blockquote>- 系统调用次数减少到1次，上下文切换仅2次。</blockquote>
<blockquote>- 在支持SG-DMA的硬件下实现真正的零拷贝（仅两次DMA拷贝）。</blockquote>
<blockquote>- 吞吐量提升显著，适合大文件传输。</blockquote>
<blockquote>缺点：</blockquote>
<blockquote>- 数据对用户空间完全不可见，无法在传输前处理数据。</blockquote>
<blockquote>- 依赖操作系统和硬件支持。</blockquote>
<blockquote>若需数据预处理（如修改文件内容），选择mmap+write。</blockquote>
<blockquote>若仅需高效传输且无需处理数据，优先使用sendfile（尤其支持SG-DMA环境）。</blockquote>
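<p>对应到Go里，一个比较容易观察到的体现是：把文件写给TCP连接时用io.Copy，标准库在条件满足时会走sendfile，让拷贝尽量留在内核态完成（是否真正触发取决于操作系统和连接类型，下面只是示意）：</p>
<pre><code>import (
    "io"
    "net"
    "os"
)

func serveFile(conn net.Conn, path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()
    // *net.TCPConn 实现了 io.ReaderFrom，io.Copy 会尽量使用 sendfile/splice，
    // 避免把文件内容先拷贝到用户态缓冲区再写回内核
    _, err = io.Copy(conn, f)
    return err
}
</code></pre>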
<p>Netty中的零拷贝</p>
<p>完全站在了用户空间上，也就是JVM上，它的零拷贝技术主要是偏向于数据操作的优化上。</p>
<li>Netty提供了CompositeByteBuf类，它可以将多个ByteBuf合并为一个逻辑上的Bytebuf，避免了各个Bytebuf之间的拷贝。</li>
<li>ByteBuf支持slice操作，因此可以将ByteBuf分解为多个共享同一个存储区域的Bytebuf，避免了内存的拷贝。</li>
<li>通过wrap操作，我们可以将byte[]数组、ByteBuf、ByteBuffer等包装成一个Netty ByteBuf对象，进而避免拷贝操作。</li>
<p>Netty还提供FileRegion中包装NIO的FileChannel.transferTo()方法实现了零拷贝，这与Linux中的sendfile方式在原理上也是一样的。</p>
<p>动态代理：面向接口编程，屏蔽RPC处理流程（这个没有看过代码说实话不是特别清楚）</p>
<p>关于网络通信，只要记住—可靠的传输。</p>
<p>RPC会自动给接口生成一个代理类，当我们在项目中注入接口的时候，运行过程中实际绑定的是这个接口生成的代理类。这样在接口方法被调用的时候，它实际上是被生成代理类拦截到了，这样我们就可以在生成的代理类里面，加入远程调用逻辑。</p>
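<p>Go里没有Java那种运行时生成字节码的动态代理，但可以用“实现同一个接口的客户端桩”来体会同一件事：调用方只面向接口编程，真正的实现把参数序列化后发到远端（下面的HelloService、callRemote都是我虚构的示意）：</p>
<pre><code>import "encoding/json"

// 业务接口：调用方只依赖它，不关心背后是本地实现还是远程调用
type HelloService interface {
    Hello(name string) (string, error)
}

// 客户端桩：实现同一个接口，内部把方法调用转成网络请求
type helloClientStub struct {
    addr string
}

func (s *helloClientStub) Hello(name string) (string, error) {
    // 把方法名和参数序列化成请求体
    req, _ := json.Marshal(map[string]string{"method": "Hello", "name": name})
    resp, err := callRemote(s.addr, req) // 虚构的网络调用
    if err != nil {
        return "", err
    }
    return string(resp), nil
}

// callRemote 是虚构的：真实RPC框架在这里做协议封装、连接管理、负载均衡、重试等
func callRemote(addr string, payload []byte) ([]byte, error) { return nil, nil }
</code></pre>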
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869772582.png" alt="" />
<li>代理类是在运行中生成的，那么代理框架生成代理类的速度、生成的代理类的字节码大小等等，都会影响到性能：生成的字节码越小，运行所占资源就越小。</li>
<li>我们生成的代理类，是用于接口方法请求拦截的，所以每次调用接口方法的时候，都会执行生成的代理类，这时生成的代理类的执行效率就需要很高效。</li>
<li>我们希望选择一个使用起来方便的代理类框架。API设计是否好理解、社区活跃度、还有就是依赖复杂度。</li>
<p>gRPC</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869773779.png" alt="" />
<p>协议封装</p>
<p>我们需要在方法调用参数的二进制数据后面增加“断句”符号来分隔出不同的请求，在两个“断句”符号中间放的内容就是我们请求的二进制数据，这个过程叫做协议封装。</p>
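<p>“断句”落到代码上，常见做法就是长度前缀：接收端先读定长的长度字段，再按长度把属于同一个请求的字节切出来。下面是一段示意（4字节长度前缀只是假设，具体字段要看协议定义）：</p>
<pre><code>import (
    "encoding/binary"
    "io"
)

// 从TCP字节流里读出一个完整的帧：4字节长度前缀 + 消息体
func ReadFrame(r io.Reader) ([]byte, error) {
    var length uint32
    if err := binary.Read(r, binary.BigEndian, &length); err != nil {
        return nil, err
    }
    body := make([]byte, length)
    // 不足一帧时 ReadFull 会继续读，直到凑齐 length 个字节
    if _, err := io.ReadFull(r, body); err != nil {
        return nil, err
    }
    return body, nil
}
</code></pre>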
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869775929.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869777676.png" alt="" />
<p>服务发现：到底是要CP还是AP？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869779051.png" alt="" />
<p>1. 服务注册：在服务提供方启动的时候，将对外暴露的接口注册到注册中心中，注册中心将这个服务节点的IP和接口保存下来。</p>
<p>2. 服务订阅：在服务调用方启动的时候，去注册中心查找并订阅服务提供方的IP，然后缓存到本地，并用于后续的远程调用。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869780595.png" alt="" />
<p>如果使用DNS来进行服务发现：</p>
<p>如果我们用DNS来实现服务发现，所有的服务提供者节点都配置在了同一个域名下，调用方的确可以通过DNS拿到随机的一个服务提供者的IP，并与之建立长连接，看上去没有问题，但是需要考虑以下情况：</p>
<li>如果IP端口下线，服务调用者能否及时摘除服务节点？</li>
<li>如果在之前已经上线了一部分服务节点，这时突然对服务进行扩容，那么新上线的服务节点能否及时接收流量？</li>
<p>答案都是“不能”。这是因为为了提升性能和减少DNS服务的压力，DNS采取了多级缓存机制，一般配置的缓存时间较长。</p>
<p>基于ZooKeeper的服务发现</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869781800.png" alt="" />
<p>1. 服务平台管理端先在ZooKeeper中创建一个服务根路径，可以根据接口名命名（例如：/service/com.demo.xxService），在这个路径再创建服务提供方目录与服务调用方目录（例如：provider、consumer），分别用来存储服务提供方的节点信息和服务调用方的节点信息。</p>
<p>2. 当服务提供方发起注册时，会在服务提供方目录创建一个临时节点，节点中存储该服务提供方的注册信息。</p>
<p>3. 当服务调用方发起订阅时，则在服务调用方目录中创建一个临时节点，节点中存储该服务调用方的信息，同时服务调用方watch该服务的服务提供方目录（/service/com.demo.xxService/provider）中所有的服务节点数据。</p>
<p>4. 当服务提供方目录下有节点数据发起变更时，ZooKeeper就会通知给发起订阅的服务调用方。</p>
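<p>用go-zookeeper客户端可以大致这样表达“注册临时节点 + watch服务目录”（路径沿用上面的例子，省略了父节点创建、错误重试和重新watch的逻辑，仅为示意）：</p>
<pre><code>import (
    "time"

    "github.com/go-zookeeper/zk"
)

func registerAndSubscribe() error {
    conn, _, err := zk.Connect([]string{"127.0.0.1:2181"}, 5*time.Second)
    if err != nil {
        return err
    }
    // 服务提供方：在provider目录下创建临时节点，会话断开后节点自动被删除
    _, err = conn.Create("/service/com.demo.xxService/provider/192.168.0.1:20880",
        []byte("meta"), zk.FlagEphemeral, zk.WorldACL(zk.PermAll))
    if err != nil {
        return err
    }
    // 服务调用方：拉取并watch整个provider目录，节点变化时会收到事件
    providers, _, watchCh, err := conn.ChildrenW("/service/com.demo.xxService/provider")
    if err != nil {
        return err
    }
    _ = providers // 缓存到本地，用于后续的负载均衡
    <-watchCh     // 收到变更事件后，重新拉取列表并更新本地缓存（示意）
    return nil
}
</code></pre>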
<p>基于消息总线的最终一致性的注册中心</p>
<p>ZooKeeper的一大特点就是强一致性，ZooKeeper集群的每个节点的数据每次发生更新操作，都会通知其它ZooKeeper节点同时执行更新。它要求保证每个节点的数据能够实时的完全一致，这也就直接导致了ZooKeeper集群性能上的下降。</p>
<p>而RPC框架的服务发现，在服务节点刚上线时，服务调用方是可以容忍在一段时间之后（比如几秒钟之后）发现这个新上线的节点的。毕竟服务节点刚上线之后的几秒内，甚至更长的一段时间内没有接收到请求流量，对整个服务集群是没有什么影响的，所以我们可以牺牲掉CP（强一致性），而选择AP（最终一致性），来换取整个注册中心集群的性能和稳定性。</p>
<p>因为要求最终一致性，可以考虑采用消息总线机制。注册数据可以全量缓存在每个注册中心内存中，通过消息总线来同步数据。当有一个注册中心节点接收到服务节点注册，会产生一个消息推送给服务总线，再通过消息总线通知给其它注册中心节点更新数据并进行服务下发，从而达到注册中心间数据最终一致性，具体流程如下图：</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/RPC%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0_1770869783000.png" alt="" />
<h1>后记</h1>
<p>后续应该要解读一些gRPC和Kitex的代码，此外还有字节跳动云原生的公众号的文章可以学习一下。真正重要的事情其实是抛开那些开源项目掌握更基础的知识。</p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[关于黄金作为投资物]]></title>
        <id>https://shemol.tech/gold-as-investment</id>
        <link href="https://shemol.tech/gold-as-investment"/>
        <updated>2025-04-05T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[黄金不是好的投资物。]]></summary>
        <content type="html"><![CDATA[<h1>关于黄金作为投资物</h1>
<p>本意其实是想把看到的知识消化吸收后按照自己的话讲出来，所以仅供自己学习使用。</p>
<p>首先放上巴菲特的话：</p>
<blockquote>我从未想过把我的股票换成黄金，我宁愿去下注那些优秀的企业，并相信其内在价值会稳定增长。这些企业是由优秀的经理人运营，销售人们现在和未来都喜爱的产品。相比从南非地下挖出一些金属，经过运输投保等一系列手续后，再放到诺克斯堡的金库里，人们更愿意用他们辛勤劳动赚来的工资去买喜诗花生糖、可口可乐或者类似的东西。</blockquote>
<blockquote>虽然我的父亲热衷于金本位，但我从来没有为黄金兴奋过，虽然没有真正拥有过黄金，但我在一个崇尚黄金的家庭里长大，我已经给过他机会了，只是我一直不明白黄金的内在价值是什么？你知道我们会在波仙珠宝店售卖黄金制品。但我永远不会卖掉股票去买黄金，用<strong>生产性资产</strong>去交换<strong>非生产性资产</strong>的想法，对我而言非常陌生。</blockquote>
<blockquote>巴菲特在以往的致股东信里也谈到过黄金：</blockquote>
<blockquote>除了类现金资产，还有一类资产也同样不适合持有-自身不产生任何现金流入，只是期望其他人未来会以更高的价格买走的资产，如黄金艺术品古董等。巴菲特将他们称为无生产力资产，与之对应的是是自身能够产出现金流的有生产力的资产。</blockquote>
<blockquote>巴菲特以黄金为例来阐述两种资产的区别，他说<strong>全球黄金储量共计约17万吨，如果熔化重铸可以做成一个边长21米的立方体，这个立方体被人们从地球某处挖出来，后提炼融化再挖个洞埋起来，然后派一堆人站在周围守着，他永远不会铲除任何东西。</strong>大家买它只是希望未来会有更多人愿意出更高的价格买它。</blockquote>
<p>黄金属于不产生任何现金流，只是期望其他人未来会以更高的价格买走的资产，属于非生产性资产。</p>
<p>下面是陈嘉禾老师的观点</p>
<p>黄金不产生价值。好的股票可以不断产生利润，便宜的价格可以让我们以很低的价格拿到利润。但是黄金不会这样，不会因为保管的好就多生出黄金。</p>
<p>每年多产生资产，好的资产日拱一卒，慢慢复利的回报就越来越多。黄金的“增长为0”的属性，在历史角度来看，也不是好的投资物。这个和巴菲特的观点是一样的。</p>
<p>黄金无法交易。当持有股票时，我们在享受股票基本面增长的同时，还可以不停做交易、增加基本面。一旦发现有别的股票性价比更高，我们就可以做个交易：于是基本面又愉快地增加了。长此以往，我们持有股票的投资组合，其基本面增长，可以远快于股票本身基本面的增长。对于其它一些资产，比如房地产、收藏品，事情也是一样。但是黄金不同，所有黄金都是一样的。因为黄金太简单了，简单到没人会傻到把定价定错，因此黄金的持有者就很难在交易中获得基本面的增长。而这种增长，却是价值投资者非常依赖的神器。</p>
<p>黄金在避险中没用。许多人买黄金的理由是“将来灾难也许用得上”。但是如果一个社会乱到需要用黄金，那么今天以同等价格购买的食品（罐头）、武器和药品，绝对比黄金值钱。</p>
<p>之前还和飒飒说过，之前疫情的时候想要买一些黄金，看来这个想法是错误的。太棒了又抛弃了一个自己错误的想法！</p>
<p>那么唐二僧老师给的答案就更直接了，买黄金其实是买自己内心的安全感，买一个情绪价值。</p>
<p>黄金的性质稳定不能成为它珍贵的理由，黄金的稀缺也并不能成为它珍贵的理由，黄金被达成了共识也不能成为珍贵的理由。</p>
<p>（但我又在想，那黄金集三个性质于一身，是不是可以成为它珍贵的理由呢，同理加密货币呢？这个就不懂了，我还是太外行了！）</p>
<p>买黄金是在买一个情绪价值，就当是强制储蓄！</p>
<p>参考资料：</p>
<a href="https://mp.weixin.qq.com/s/9cwjIGgqYPG-6DNk8Z-PAQ">投资闲谈：巴菲特谈黄金</a>
<a href="https://mp.weixin.qq.com/s/fffqAMS3jYIVyyNxpuAREw">猫猫看市：为啥我不爱黄金</a>
<a href="https://mp.weixin.qq.com/s/fffqAMS3jYIVyyNxpuAREw">我们买黄金到底是在买什么？</a>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[KubeEdge-Sedna源码解析（转载）]]></title>
        <id>https://shemol.tech/kubeedge-sedna-sourcecode-analysis</id>
        <link href="https://shemol.tech/kubeedge-sedna-sourcecode-analysis"/>
        <updated>2025-01-09T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[转载：sedna源码解析]]></summary>
        <content type="html"><![CDATA[<h1>KubeEdge-Sedna源码解析（转载）</h1>
<p>原文作者：<a href="https://github.com/jaypume">jaypume</a></p>
<p>原公开课视频：<a href="https://www.bilibili.com/video/BV1hg4y1b78L">https://www.bilibili.com/video/BV1hg4y1b78L</a></p>
<p>原文地址：<a href="https://github.com/jaypume/article/blob/main/sedna/%E8%BE%B9%E4%BA%91%E5%8D%8F%E5%90%8CAI%E6%A1%86%E6%9E%B6Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90/README.MD">https://github.com/jaypume/article/blob/main/sedna/边云协同AI框架Sedna源码解析/README.MD</a></p>
<p>转载供自己学习使用，方便查阅。</p>
<h1>KubeEdge-Sedna概述</h1>
<p>Sedna是在KubeEdge SIG AI中孵化的一个边云协同AI项目。得益于KubeEdge提供的边云协同能力，Sedna可以实现跨边云的协同训练和协同推理能力，如联合推理、增量学习、联邦学习、终身学习等。Sedna支持目前广泛使用的AI框架，如TensorFlow/Pytorch/MindSpore等，现有AI类应用可以无缝迁移到Sedna, 快速实现边云协同的训练和推理，可在降低成本、提升模型性能、保护数据隐私等方面获得提升。</p>
<p>项目主页：</p>
<p>https://github.com/kubeedge/sedna</p>
<p>文档参考：</p>
<p>https://sedna.readthedocs.io</p>
<h2>整体架构</h2>
<p>Sedna的边云协同基于KubeEdge提供的如下能力实现</p>
<li>跨边云应用统一编排</li>
<li>Router: 管理面云边高可靠消息通道</li>
<li>EdgeMesh: 数据面跨边云微服务发现和流量治理</li>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869745557.png" alt="" />
<strong>基本组件</strong>：
<li><strong>GlobalManager</strong></li>
<p>  - 统一边云协同AI任务管理</p>
<p>  - 跨边云协同管理与协同</p>
<p>  - 中心配置管理</p>
<li><strong>LocalController</strong></li>
<p>  - 边云协同AI任务的本地流程控制</p>
<p>  - 本地通用管理: 模型，数据集，状态同步等</p>
<li><strong>Lib</strong></li>
<p>  - 面向AI开发者和应用开发者，暴露边云协同AI功能给应用</p>
<li><strong>Worker</strong></li>
<p>  - 执行训练或推理任务, 基于现有AI框架开发的训练/推理程序</p>
<p>  - 不同特性对应不同的worker组，worker可部署在边上或云上，并进行协同</p>
<h2>工程目录</h2>
<p>| 目录 | 说明 |</p>
<p>| --- | --- |</p>
<p>| .github | Sedna github CICD流水线配置。 |</p>
<p>| LICENSES | Sedna Licenses以及相关vendor Licenses。 |</p>
<p>| build | GM/LC等管理面构建的Dockersfile；生成的CRD定义yaml文件；CRD样例yaml文件; |</p>
<p>| cmd | GM/LC管里面的启动函数。 |</p>
<p>| components | 监控和图形化展示的组件。 |</p>
<p>| docs | proposals和安装文档。 |</p>
<p>| examples | 协同推理、增量学习、终身学习、联邦学习的使用样例。 |</p>
<p>| hack | 面向开发者的代码生成工具、及其他开发会用到的脚本。 |</p>
<p>| lib | Sedna Library，用于开发边云协同AI应用的Python依赖库。 |</p>
<p>| pkg | API定义；生成的CRD的client-go代码；Sedna GM/LC 管理面的核心代码。 |</p>
<p>| scripts | 面向使用者的安装脚本。 |</p>
<p>| test | E2E测试代码及测试工具。 |</p>
<p>| vendor | 依赖的第三方项目源码。 |</p>
<h1>Sedna管理面源码解析（Go）</h1>
<h2>GM: Global Manager</h2>
<h3>GM，一个K8S operator</h3>
<strong>operator是什么？</strong>
<blockquote>An Operator is an application-specific controller that extends the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but also includes domain or application-specific knowledge to automate common tasks better managed by computers. 1</blockquote>
<p>对于Sedna，Sedna控制了边云协同AI应用中，如何配置worker部署启动参数、如何协同、如何流转等，那么我们可以这么定义：<strong>Sedna GM是“边云协同AI应用“这个特定领域的控制器</strong>。</p>
<blockquote>The following components form the three main parts of an operator:</blockquote>
<blockquote>- <em>API</em>: The data that describes the operand’s configuration. The API includes:</blockquote>
<blockquote>  - <strong><em>Custom resource definition (CRD)</em></strong>, which defines a schema of settings available for configuring the operand.</blockquote>
<blockquote>  - <strong><em>Programmatic API</em></strong>, which defines the same data schema as the CRD and is implemented using the operator’s programming language, such as <a href="https://developers.redhat.com/blog/category/go/"><em>Go</em></a><em>.</em></blockquote>
<blockquote>  - <strong><em>Custom resource (CR)</em></strong>, which specifies values for the settings defined by the CRD; these values describe the configuration of an operand.</blockquote>
<blockquote>- <strong><em>Controller</em></strong>: The brains of the operator. The controller creates managed resources based on the description in the custom resource; controllers are implemented using the operator’s programming language, such as Go. <a href="about:blank#fn2">2</a></blockquote>
<p>通过上面Redhat的定义，我们可以看到组成一个k8s operator几个重要的概念包括 CRD、API、CR和Controller。</p>
<p>下面是Sedna GM Operator的示意图：</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869746491.jpg" alt="" />
<p>接下来的章节会按照组成K8S operator的几个组件来展开说明，包括CR、CRD、API、Controller，其中Controller是主要的控制逻辑模块。</p>
<h3>CR</h3>
<p>Sedna本身支持边云协同推理、增量学习、终身学习、联邦学习，为了方便解读代码，本文结合终身学习具体特性和样例来分析。其他三个特性的代码实现存在共通之处，可以类比参考。</p>
<strong>CR样例</strong>
<p>这里贴了一段终身学习<a href="https://github.com/kubeedge/sedna/blob/main/build/crd-samples/sedna/lifelonglearningjobv1alpha1.yaml">CR样例</a>，可以基于这个CR通过kubectl来创建对应的终身学习资源对象，详细使用步骤可以参考<a href="https://github.com/kubeedge/sedna/tree/main/examples/lifelong_learning/atcii">这里</a>。其中关键的字段解释如下：</p>
<li>dataset：指定数据集对象名称，数据集也是一个CR资源。</li>
<li>trainSpec：终身学习中，训练worker的启动参数，包括镜像和环境变量等容器配置。</li>
<li>trigger：终身学习中，启动训练worker的触发条件。</li>
<li>evalSpec：终身学习中，评估work的启动参数，包括镜像和环境变量等容器配置。</li>
<li>deploySpec：终身学习中，推理work的启动参数，包括镜像和环境变量等容器配置。</li>
<li>outputDir：终身学习中，训练生成的模型文件输出路径。</li>
<code>build/crd-samples/sedna/lifelonglearningjobv1alpha1.yaml</code>
<pre><code>apiVersion: sedna.io/v1alpha1
kind: LifelongLearningJob
metadata:
  name: atcii-classifier-demo
spec:
  dataset:
    name: "lifelong-dataset"
    trainProb: 0.8
  trainSpec:
    template:
      spec:
        nodeName: "edge-node"
        containers:
          - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0
            name: train-worker
            imagePullPolicy: IfNotPresent
            args: ["train.py"]
            env:
              - name: "early_stopping_rounds"
                value: "100"
              - name: "metric_name"
                value: "mlogloss"
    trigger:
      checkPeriodSeconds: 60
      timer:
        start: 02:00
        end: 24:00
      condition:
        operator: ">"
        threshold: 500
        metric: num_of_samples
  evalSpec:
    template:
      spec:
        nodeName: "edge-node"
        containers:
          - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0
            name: eval-worker
            imagePullPolicy: IfNotPresent
            args: ["eval.py"]
            env:
              - name: "metrics"
                value: "precision_score"
              - name: "metric_param"
                value: "{'average': 'micro'}"
              - name: "model_threshold"
                value: "0.5"
  deploySpec:
    template:
      spec:
        nodeName: "edge-node"
        containers:
        - image: kubeedge/sedna-example-lifelong-learning-atcii-classifier:v0.3.0
          name: infer-worker
          imagePullPolicy: IfNotPresent
          args: ["inference.py"]
          env:
          - name: "UT_SAVED_URL"
            value: "/ut_saved_url"
          - name: "infer_dataset_url"
            value: "/data/testData.csv"
          volumeMounts:
          - name: utdir
            mountPath: /ut_saved_url
          - name: inferdata
            mountPath: /data/
          resources:
            limits:
              memory: 2Gi
        volumes:
          - name: utdir
            hostPath:
              path: /lifelong/unseen_task/
              type: DirectoryOrCreate
          - name: inferdata
            hostPath:
              path: /data/
              type: DirectoryOrCreate
  outputDir: "/output"
</code></pre>
<h3>CRD</h3>
<p>CRD可以看作是CR的模板，在k8s集群能创建对应CR之前需要将对应的CRD在k8s集群中进行声明。CRD对应的yaml文件可以手动编写或自动生成，对于一些相对复杂的CRD定义建议通过k8s相关工具生成。比如Sedna这里使用的是kubebuilder的<a href="https://book.kubebuilder.io/reference/controller-gen.html#controller-gen-cli">controller-gen</a>进行自动生成与更新，Sedna项目提供了封装好的脚本，直接通过<code>make crds</code>命令即可生成和更新对应<code>build/crds/</code>目录下的CRD文件。相关shell脚本可以参考<code>Makefile</code>中的<code>crds: controller-gen</code>。</p>
<p>想要完成一个CRD定义，最重要的是需要指定group、version和kind，通常简称为GVK。而CR资源对象本身称为Resource，相较于面向对象中的概念，Resouce类比为Object，Kind类比于Class，也就可以说Resource是Kind的实例。下表展示了终身学习CRD和CR对应的GVR和GVK：</p>
<p>|  | Group | Version | Resource | Kind |</p>
<p>| --- | --- | --- | --- | --- |</p>
<p>| CRD | apiextensions.k8s.io | v1 | lifelonglearningjobs.sedna.io | CustomResourceDefinition |</p>
<p>| CR | sedna.io | v1alpha1 | lifelonglearningjob | LifelongLearningJob |</p>
<p>在K8S集群中资源是以REST URI的形式来组织的，组织的路径如下：</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869747439.jpg" alt="" />
<p>了解了上述的规则后，我们可以快速的拼接好要管理的k8s资源对象的REST URI地址，这为某些不能依赖k8s client（kubectl, client-go等）的情况下访问集群资源提供了简便的方式。比如：</p>
<p>通过Rest接口查看终身学习CRD描述：</p>
<pre><code>curl -k --cert ./client.crt --key ./client.key https://127.0.0.1:5443/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/lifelonglearningjobs.sedna.io
</code></pre>
<p>通过Rest接口查看终身学习CR列表：</p>
<pre><code>curl -k --cert ./client.crt --key ./client.key https://127.0.0.1:5443/apis/sedna.io/v1alpha1/lifelonglearningjobs
</code></pre>
<p>比如，如果某些编程语言没有官方的k8s client SDK，那么可以统一采用如上Rest接口形式进行封装。</p>
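<p>用Go的标准库示意一下这种封装（证书路径和地址沿用上面curl的例子，跳过证书校验只是为了对应-k参数，仅作演示）：</p>
<pre><code>import (
    "crypto/tls"
    "io"
    "net/http"
)

func listLifelongLearningJobs() ([]byte, error) {
    cert, err := tls.LoadX509KeyPair("./client.crt", "./client.key")
    if err != nil {
        return nil, err
    }
    client := &http.Client{
        Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                Certificates:       []tls.Certificate{cert},
                InsecureSkipVerify: true, // 对应curl的-k，生产环境应校验服务端证书
            },
        },
    }
    resp, err := client.Get("https://127.0.0.1:5443/apis/sedna.io/v1alpha1/lifelonglearningjobs")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}
</code></pre>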
<p>下面是Sedna 终身学习CRD定义，一些需要关注的字段如下：</p>
<li><code>apiVersion: apiextensions.k8s.io/v1</code>，当前所有的CRD都扩展自apiextensions.k8s.io/v1这个Version。</li>
<li><code>kind: CustomResourceDefinition</code>，当前所有的CRD都继承自CustomResourceDefinition这个Kind。</li>
<li><code>spec.group: sedna.io</code>，自定义资源的Group名称为sedna.io。</li>
<li><code>spec.names.kind: LifelongLearningJob</code>，自定义资源新增加的类型，这里是LifelongLearningJob。</li>
<li><code>spec.names.shortNames: - ll</code>，在使用kubectl时可以使用这个缩写“ll”查询到LifelongLearningJob资源。</li>
<code>build/crds/sedna.io_lifelonglearningjobs.yaml</code>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.4.1
  creationTimestamp: null
  name: lifelonglearningjobs.sedna.io
spec:
  group: sedna.io
  names:
    kind: LifelongLearningJob
    listKind: LifelongLearningJobList
    plural: lifelonglearningjobs
    shortNames:
    - ll
    singular: lifelonglearningjob
  scope: Namespaced
  versions:
  - name: v1alpha1
    ...
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
</code></pre>
<h3>API</h3>
<p>上面提到我们的CRD是自动生成的，那生成这些CRD所需要的API基础定义在哪里呢？</p>
<code>pkg/apis/sedna/v1alpha1/lifelonglearningjob_types.go</code>
<pre><code>package v1alpha1

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// 这里展示了kubebuilder等工具用到的注释标记
// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// +kubebuilder:resource:shortName=ll
// +kubebuilder:subresource:status
// 整体的LifelongLearningJob的API定义，主要包含Spec和Status定义，分别代表期望状态和实际状态。
type LifelongLearningJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata"`
	Spec              LLJobSpec   `json:"spec"`
	Status            LLJobStatus `json:"status,omitempty"`
}

// 在创建LifelongLearningJob时候需要配置的参数；如果需要扩展终身学习字段的接口，可以在这里修改。
type LLJobSpec struct {
	Dataset    LLDataset    `json:"dataset"`
	TrainSpec  LLTrainSpec  `json:"trainSpec"`
	EvalSpec   LLEvalSpec   `json:"evalSpec"`
	DeploySpec LLDeploySpec `json:"deploySpec"`

	// the credential referer for OutputDir
	CredentialName string `json:"credentialName,omitempty"`
	OutputDir      string `json:"outputDir"`
}

type LLDataset struct {
	Name      string  `json:"name"`
	TrainProb float64 `json:"trainProb"`
}

// 剩下还有一些结构体定义省略了。
</code></pre>
<p>上面的代码片段中，补充了额外的说明，需要注意的有如下几点：</p>
<li><code>// +kubebuilder...</code>：注释是给kubebuilder等代码自动生成工具的配置参数，会被这些工具解析。</li>
<li><code>type LifelongLearningJob struct{...}</code>：定义了终身学习CRD整体API，主要包含Spec和Status定义，分别代表期望状态和实际状态。</li>
<li><code>type LLJobSpec struct {...}</code>：在创建LifelongLearningJob CR时需要配置的参数；如果需要扩展终身学习字段的接口，可以在这里修改。</li>
<p>其他协同推理、增量学习、联邦学习相关的API定义都可以在<code>pkg/apis/sedna/v1alpha1/</code>这个目录下找到。</p>
<strong>更新client-go代码</strong>
<p>一旦新增或者更新了<code>*_types.go</code>中的定义，则需要执行如下命令进行client-go代码更新：</p>
<pre><code>bash hack/update-codegen.sh
</code></pre>
<p>生成的代码位于<code>pkg/client</code>：</p>
<pre><code>➜  pkg tree client -L 2
client
├── clientset
│   └── versioned
├── informers
│   └── externalversions
└── listers
    └── sedna
</code></pre>
<p>client-go中的代码会在后面的Controller逻辑中用到。</p>
<strong>更新CRD定义</strong>
<p>一旦新增或者更新了<code>*_types.go</code>中的定义，则需要执行如下命令进行CRD代码更新：</p>
<pre><code>make crds
</code></pre>
<p>生成的CRD定义yaml文件位于<code>build/crds</code>。更新这些定义之后，也需要同步在K8s集群中重新<code>kubectl apply</code>一下，以将新的CRD在集群中生效。</p>
<h3>Controller</h3>
<p>终身学习最主要的控制逻辑在<code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go</code>这个文件里面，包括训练评估Worker什么时候触发、Worker参数如何同步到边缘等。</p>
<p>在进入到终身学习的控制逻辑之前，整体的调用流程可以参考下面伪代码：</p>
<pre><code>cmd/sedna-gm/sedna-gm.go/main() 【1】
pkg/globalmanager/controllers/manager.go/New() 【2】读取GM配置文件。
pkg/globalmanager/controllers/manager.go/Start() 【3】启动GM进程。
    - clientset.NewForConfig()：【4】调用client-go生成了Sedna CRD client。
    - NewUpstreamController()：【5】创建UpstreamController，每个GM进程有一个UpstreamController
    - uc.Run(stopCh)：启动一个for循环协程，来处理
        - pkg/globalmanager/controllers/upstream.go/syncEdgeUpdate()
    - NewRegistry()：【6】注册所有controller。
        - f.SetDownstreamSendFunc()【7】
            -> pkg/globalmanager/controllers/lifelonglearning/downstream.go
        - f.SetUpstreamHandler()【8】
            -> pkg/globalmanager/controllers/lifelonglearning/upstream.go/updateFromEdge()
        - f.Run()【9】
    - ws.ListenAndServe() 【10】
</code></pre>
<p>下面为了讲解Sedna LifelongLearningJob Controller的控制逻辑，也按照上面1~10的标号来讲解：</p>
<h3>【1】main函数入口</h3>
<p><code>sedna-gm.go</code>是GM模块的启动入口，主要包括日志初始化配置、<code>app.NewControllerCommand()</code>中执行的参数解析，以及启动GM对应的controller。</p>
<code>cmd/sedna-gm/sedna-gm.go</code>
<pre><code>func main() {
   rand.Seed(time.Now().UnixNano())
   command := app.NewControllerCommand()
   logs.InitLogs()
   defer logs.FlushLogs()

   if err := command.Execute(); err != nil {
      os.Exit(1)
   }
}
</code></pre>
<h3>【2】GM系统配置加载</h3>
<p>GM加载系统配置，包括K8S集群配置、启动监听的Websocket地址端口、KB服务的地址等。</p>
<code>pkg/globalmanager/controllers/manager.go</code>
<pre><code>// New creates the controller manager
func New(cc *config.ControllerConfig) *Manager {
   config.InitConfigure(cc)
   return &Manager{
      Config: cc,
   }
}
</code></pre>
<code>pkg/globalmanager/config/config.go</code>
<pre><code>// ControllerConfig indicates the config of controller
type ControllerConfig struct {
   // KubeAPIConfig indicates the kubernetes cluster info which controller will connected
   KubeConfig string `json:"kubeConfig,omitempty"`
   // Master indicates the address of the Kubernetes API server. Overrides any value in KubeConfig.
   // such as https://127.0.0.1:8443
   // default ""
   Master string `json:"master"`
   // Namespace indicates which namespace the controller listening to.
   // default ""
   Namespace string `json:"namespace,omitempty"`
   // websocket server config
   // Since the current limit of kubeedge(1.5), GM needs to build the websocket channel for communicating between GM and LCs.
   WebSocket WebSocket `json:"websocket,omitempty"`
   // lc config to info the worker
   LC LCConfig `json:"localController,omitempty"`
   // kb config to info the worker
   KB KBConfig `json:"knowledgeBaseServer,omitempty"`
   // period config min resync period
   // default 30s
   MinResyncPeriodSeconds int64 `json:"minResyncPeriodSeconds,omitempty"`
}
</code></pre>
<h3>【3】GM整体初始化</h3>
<p>GM整体初始化的步骤如下，包括初始化Sedna CRD client、绑定并启动边云消息通信处理函数、启动各个特性对应的controller、启动websocket开始监听消息。</p>
<code>pkg/globalmanager/controllers/manager.go</code>
<pre><code>// Start starts the controllers it has managed
func (m *Manager) Start() error {
   ...
   // 初始化Sedna CRD client，Controller会监听Sedna CR 增删改查的变化，并执行对应的处理逻辑。
   sednaClient, err := clientset.NewForConfig(kubecfg)
   ...
   sednaInformerFactory := sednainformers.NewSharedInformerFactoryWithOptions(sednaClient, genResyncPeriod(minResyncPeriod), sednainformers.WithNamespace(namespace))

   // 初始化UpstreamController，用于处理边缘LC上传的消息
   uc, _ := NewUpstreamController(context)

   downstreamSendFunc := messagelayer.NewContextMessageLayer().SendResourceObject

   stopCh := make(chan struct{})
   go uc.Run(stopCh)

   // 针对每个特性（协同推理、终身学习等），绑定对应的消息处理函数
   for name, factory := range NewRegistry() {
      ...
      f.SetDownstreamSendFunc(downstreamSendFunc)
      f.SetUpstreamHandler(uc.Add)
      ...
      // 启动各个特性对应controller
      go f.Run(stopCh)
   }
   ...
   // 启动整体GM的websocket，默认监听在0.0.0.0:9000这个端口地址
   ws := websocket.NewServer(addr)
   ...
}
</code></pre>
<h3>【4】CRD client初始化</h3>
<p><code>clientset.NewForConfig()</code>调用的原始函数位于<code>pkg/client/clientset/versioned/clientset.go</code>，前面提到这里是由client-go工具根据Sedna CRD定义自动生成的代码，可以通过Go语言对CRD定义的资源对象进行增删改查。</p>
<p>下面代码是LifelongLearningJob Controller初始化的函数，其中就依赖client-go生成的CRD client代码。主要做了这么几件事：</p>
<li>获取LifelongLearningJob的Informer。Informer可以看作是controller的K8S api-server的”本地缓存“，用来减少api-server的数据读取压力。</li>
<li>配置LifelongLearningJob Controller的参数或成员变量，包括k8s client、sedna client、GM controller通用配置。</li>
<li>绑定LifelongLearningJob CRD资源的Add、Update、Delete对应事件的回调函数。</li>
<code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go</code>
<pre><code>// New creates a new LifelongLearningJob controller that keeps the relevant pods
// in sync with their corresponding LifelongLearningJob objects.
func New(cc *runtime.ControllerContext) (runtime.FeatureControllerI, error) {
   cfg := cc.Config

   podInformer := cc.KubeInformerFactory.Core().V1().Pods()

   // 获取LifelongLearningJob的Informer
   jobInformer := cc.SednaInformerFactory.Sedna().V1alpha1().LifelongLearningJobs()

   eventBroadcaster := record.NewBroadcaster()
   eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: cc.KubeClient.CoreV1().Events("")})

   // 配置LifelongLearningJob Controller的参数
   jc := &Controller{
      kubeClient: cc.KubeClient,
      client:     cc.SednaClient.SednaV1alpha1(),
      queue:      workqueue.NewNamedRateLimitingQueue(workqueue.NewItemExponentialFailureRateLimiter(runtime.DefaultBackOff, runtime.MaxBackOff), Name),
      cfg:        cfg,
   }

   // 绑定LifelongLearningJob CRD资源的Add、Update、Delete对应事件的回调函数。
   jobInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
      AddFunc: func(obj interface{}) {
         jc.enqueueController(obj, true)
         jc.syncToEdge(watch.Added, obj)
      },
      UpdateFunc: func(old, cur interface{}) {
         jc.enqueueController(cur, true)
         jc.syncToEdge(watch.Added, cur)
      },
      DeleteFunc: func(obj interface{}) {
         jc.enqueueController(obj, true)
         jc.syncToEdge(watch.Deleted, obj)
      },
   })
   jc.jobLister = jobInformer.Lister()
   jc.jobStoreSynced = jobInformer.Informer().HasSynced

   // 绑定Pod对应的增删改对应事件的回调函数。
   podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
      AddFunc:    jc.addPod,
      UpdateFunc: jc.updatePod,
      DeleteFunc: jc.deletePod,
   })
   jc.podStore = podInformer.Lister()
   jc.podStoreSynced = podInformer.Informer().HasSynced

   return jc, nil
}
</code></pre>
<p>下面截图也展示了Sedna CRD client在其他模块的一些引用。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869748199.png" alt="" />
<h3>【5】消息处理初始化</h3>
<p><code>uc.Run()</code>里面会初始化UpstreamController，UpstreamController用来处理边缘发送过来的所有消息。</p>
<p>for循环持续地监听<code>context.upstreamChannel</code>，一旦有消息则通过<code>uc.updateHandlers[kind]</code>根据kind类型获取对应的handler，并调用此handler回调函数进行消息处理。<code>uc.updateHandlers</code>是一个map，里面存储了协同推理、增量学习、联邦学习、终身学习对应的updateHandler。</p>
<code>pkg/globalmanager/controllers/upstream.go</code>
</code>`<code>go
<p>// syncEdgeUpdate receives the updates from edge and syncs these to k8s.</p>
<p>func (uc *UpstreamController) syncEdgeUpdate() {</p>
<p>   for {</p>
<p>      select {</p>
<p>      case <-uc.messageLayer.Done():</p>
<p>         klog.Info("Stop sedna upstream loop")</p>
<p>         return</p>
<p>      default:</p>
<p>      }</p>
<p>      update, err := uc.messageLayer.ReceiveResourceUpdate()</p>
<p>	  ...</p>
<p>      handler, ok := uc.updateHandlers[kind]</p>
<p>      if ok {</p>
<p>         err := handler(name, namespace, operation, update.Content)</p>
<p>         ...</p>
<p>      }</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<p><code>ReceiveFromEdge</code>提供一个阻塞的通道，用来接收边缘节点LC发送过来的消息，消息的类型为<code>nodeMessage</code>。</p>
</code>pkg/globalmanager/messagelayer/ws/context.go<code>
</code>`<code>go
<p>// ReceiveResourceUpdate receives and handles the update</p>
<p>func (cml *ContextMessageLayer) ReceiveResourceUpdate() (*ResourceUpdateSpec, error) {</p>
<p>   nodeName, msg, err := wsContext.ReceiveFromEdge()</p>
<p>   ...</p>
<p>}</p>
</code>`<code>
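<p>为了把【5】的消息分发和后面【8】的处理函数串起来，这里补充一个极简示意，说明uc.updateHandlers这种"kind到回调函数"的结构大致是怎么工作的（类型名和kind字符串均为示意用的占位，真实定义以源码为准）：</p>
<pre><code class="language-go">
package main

import "fmt"

// 回调签名与正文中 handler(name, namespace, operation, update.Content) 的调用方式对应（示意）
type upstreamHandler func(name, namespace, operation string, content []byte) error

func main() {
	// kind -> handler 的映射，对应正文中的 uc.updateHandlers
	updateHandlers := map[string]upstreamHandler{}

	// 各特性Controller把自己的updateFromEdge注册进来（这里用一个打印函数代替）
	updateHandlers["lifelonglearningjob"] = func(name, namespace, operation string, content []byte) error {
		fmt.Printf("sync %s/%s op=%s payload=%d bytes\n", namespace, name, operation, len(content))
		return nil
	}

	// 收到边缘消息后按kind查表分发，对应syncEdgeUpdate里的 handler, ok := uc.updateHandlers[kind]
	if handler, ok := updateHandlers["lifelonglearningjob"]; ok {
		_ = handler("demo-job", "default", "status", []byte(`{"phase":"train"}`))
	}
}
</code></pre>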
<h3>【6】Controller注册</h3>
<p><code>NewRegistry()</code>函数注册了所有特性的初始化函数，如果想扩展新的边云协同特性，只需要在这里添加对应的New函数（本节末尾给出了一个扩展方式的示意）。</p>
</code>pkg/globalmanager/controllers/registry.go<code>
</code>`<code>go
<p>func NewRegistry() Registry {</p>
<p>   return Registry{</p>
<p>      ji.Name:      ji.New,</p>
<p>      fe.Name:      fe.New,</p>
<p>      fl.Name:      fl.New,</p>
<p>      il.Name:      il.New,</p>
<p>      ll.Name:      ll.New,</p>
<p>      reid.Name:    reid.New,</p>
<p>      va.Name:      va.New,</p>
<p>      dataset.Name: dataset.New,</p>
<p>      objs.Name:    objs.New,</p>
<p>   }</p>
<p>}</p>
</code>`<code>
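<p>作为扩展方式的示意，下面用一段自包含的小例子模拟"特性名到New函数"这种注册表结构（其中的类型和假想特性MyFeature都是为了演示临时定义的，并非Sedna源码）。实际扩展时，就是实现自己的New函数并在上面的NewRegistry()里加一行：</p>
<pre><code class="language-go">
package main

import "fmt"

// 以下类型仅为示意，模仿正文中 Registry（特性名 -> New构造函数）的结构，并非Sedna源码的真实定义
type FeatureControllerI interface {
	Name() string
}

type ControllerContext struct{}

type Registry map[string]func(cc *ControllerContext) (FeatureControllerI, error)

// 假想的新特性：只要提供自己的Name和New函数即可接入
type myFeature struct{}

func (m *myFeature) Name() string { return "MyFeature" }

func newMyFeature(cc *ControllerContext) (FeatureControllerI, error) { return &myFeature{}, nil }

func main() {
	// 等价于在 NewRegistry() 返回的map里追加一行 myfeature.Name: myfeature.New
	registry := Registry{"MyFeature": newMyFeature}
	for name, newFn := range registry {
		if c, err := newFn(&ControllerContext{}); err == nil {
			fmt.Println("registered feature:", name, "->", c.Name())
		}
	}
}
</code></pre>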
<h3>【7】云端消息同步到边缘</h3>
<p><code>f.SetDownstreamSendFunc()</code>绑定了各个特性对应的向边缘同步消息的函数<code>syncToEdge()</code>。</p>
<p>对于终身学习来说，其同步消息的步骤主要包括：</p>
<li>获取数据集所指定的节点（Dataset CRD对象中有一个字段记录了Node名称）。</li>
<li>获取训练、评估、部署各阶段对应的节点名称，这些名称记录在Annotation中。</li>
<li>根据LifelongLearningJob所处训练、评估、部署阶段不同，发送消息到不同的节点上。</li>
</code>pkg/globalmanager/controllers/lifelonglearning/downstream.go<code>
</code>`<code>go
<p>func (c *Controller) syncToEdge(eventType watch.EventType, obj interface{}) error {</p>
<p>   // 获取到对应的数据集指定的节点（Dataset CRD对象中有一个字段记录了Node名称）</p>
<p>   ds, err := c.client.Datasets(job.Namespace).Get(context.TODO(), dataName, metav1.GetOptions{})</p>
<p>   </p>
<p>   // 获取到训练、评估、部署对应的节点名称</p>
<p>   getAnnotationsNodeName := func(nodeName sednav1.LLJobStage) string {</p>
<p>      return runtime.AnnotationsKeyPrefix + string(nodeName)</p>
<p>   }</p>
<p>   ann := job.GetAnnotations()</p>
<p>   if ann != nil {</p>
<p>      trainNodeName = ann[getAnnotationsNodeName(sednav1.LLJobTrain)]</p>
<p>      evalNodeName = ann[getAnnotationsNodeName(sednav1.LLJobEval)]</p>
<p>      deployNodeName = ann[getAnnotationsNodeName(sednav1.LLJobDeploy)]</p>
<p>   }</p>
<p>   </p>
<p>   ...</p>
<p>   // 根据LifelongLearningJob所处阶段不同，发送消息到不同的节点上</p>
<p>   switch jobStage {</p>
<p>   case sednav1.LLJobTrain:</p>
<p>      doJobStageEvent(trainNodeName)</p>
<p>   case sednav1.LLJobEval:</p>
<p>      doJobStageEvent(evalNodeName)</p>
<p>   case sednav1.LLJobDeploy:</p>
<p>      doJobStageEvent(deployNodeName)</p>
<p>   }</p>
<p>   return nil</p>
<p>}</p>
</code>`<code>
<h3>【8】边缘消息同步到云端</h3>
<p><code>f.SetUpstreamHandler()</code>绑定了各个特性对应的向云端同步消息的处理函数<code>updateFromEdge()</code>。</p>
<p>对于终身学习来说，其同步消息主要做了几件事：</p>
<li>根据不同的边缘节点任务完成情况，变更当前LifelongLearningJob的整体状态。</li>
<li>将当前LifelongLearningJob的整体状态写回k8s，也就是LifelongLearningJob这个CR的Status字段。</li>
<li>解析边缘消息结构体，当前是以json的形式定义的，消息体示例如下：</li>
<p>GM接收到的消息体示例：</p>
</code>`<code>json
<p>{</p>
<p>    "phase": "train",</p>
<p>    "status": "completed",</p>
<p>    "output": {</p>
<p>        "models": [{</p>
<p>            "classes":  ["road", "fence"],</p>
<p>            "current_metric": null,</p>
<p>            "format": "pkl",</p>
<p>            "metrics": null,</p>
<p>            "url": "/output/train/1/index.pkl"</p>
<p>        }],</p>
<p>        "ownerInfo": null</p>
<p>    }</p>
<p>}</p>
</code>`<code>
</code>pkg/globalmanager/controllers/lifelonglearning/upstream.go<code>
</code>`<code>go
<p>// updateFromEdge syncs the edge updates to k8s</p>
<p>func (c *Controller) updateFromEdge(name, namespace, operation string, content []byte) error {</p>
<p>   var jobStatus struct {</p>
<p>      Phase  string `json:"phase"`</p>
<p>      Status string `json:"status"`</p>
<p>   }</p>
<p>   </p>
<p>   // 把边缘消息结构体进行解析。</p>
<p>   err := json.Unmarshal(content, &jobStatus)</p>
<p>   ...</p>
<p>   cond := sednav1.LLJobCondition{</p>
<p>      Status:             v1.ConditionTrue,</p>
<p>      LastHeartbeatTime:  metav1.Now(),</p>
<p>      LastTransitionTime: metav1.Now(),</p>
<p>      Data:               string(condDataBytes),</p>
<p>      Message:            "reported by lc",</p>
<p>   }</p>
<p>   // 根据不同的边缘节点任务状态实现，变更当前LifelongLearningJob的整体状态</p>
<p>   switch strings.ToLower(jobStatus.Status) {</p>
<p>   case "ready":</p>
<p>      cond.Type = sednav1.LLJobStageCondReady</p>
<p>   case "completed":</p>
<p>      cond.Type = sednav1.LLJobStageCondCompleted</p>
<p>   case "failed":</p>
<p>      cond.Type = sednav1.LLJobStageCondFailed</p>
<p>   case "waiting":</p>
<p>      cond.Type = sednav1.LLJobStageCondWaiting</p>
<p>   default:</p>
<p>      return fmt.Errorf("invalid condition type: %v", jobStatus.Status)</p>
<p>   }</p>
<p>   // 将当前LifelongLearningJob的整体状态写回k8s，也就是LifelongLearningJob这个CR的Status字段。</p>
<p>   err = c.appendStatusCondition(name, namespace, cond)</p>
<p>   ...</p>
<p>}</p>
</code>`<code>
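<p>上面的代码只解析了phase和status两个字段。作为补充示意，下面按照本节开头的消息体示例手工还原一个可以反序列化完整output的结构体（字段和结构是根据示例JSON推测的，并非源码中的真实定义）：</p>
<pre><code class="language-go">
package main

import (
	"encoding/json"
	"fmt"
)

// 仅为示意：按照上文GM接收到的消息体示例推测的结构，真实定义以Sedna源码为准
type edgeModel struct {
	Classes []string `json:"classes"`
	Format  string   `json:"format"`
	URL     string   `json:"url"`
}

type edgeUpstreamMessage struct {
	Phase  string `json:"phase"`
	Status string `json:"status"`
	Output struct {
		Models []edgeModel `json:"models"`
	} `json:"output"`
}

func main() {
	raw := []byte(`{"phase":"train","status":"completed","output":{"models":[{"classes":["road","fence"],"format":"pkl","url":"/output/train/1/index.pkl"}]}}`)
	var msg edgeUpstreamMessage
	if err := json.Unmarshal(raw, &msg); err != nil {
		panic(err)
	}
	fmt.Println(msg.Phase, msg.Status, msg.Output.Models[0].URL)
}
</code></pre>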
<h3>【9】Controller核心处理逻辑</h3>
<p><code>f.Run()</code>会调用各个特性对应Controller的处理函数，下面是LifelongLearningJob的<code>Run()</code>函数。</p>
<p>先通过<code>WaitForNamedCacheSync</code>等待Pod和LifelongLearningJob资源对象同步到Informer缓存中，同步完成后，再启动指定数量的worker对LifelongLearningJob进行处理。</p>
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// Run starts the main goroutine responsible for watching and syncing jobs.</p>
<p>func (c *Controller) Run(stopCh <-chan struct{}) {</p>
<p>   workers := 1</p>
<p>   defer utilruntime.HandleCrash()</p>
<p>   defer c.queue.ShutDown()</p>
<p>   klog.Infof("Starting %s controller", Name)</p>
<p>   defer klog.Infof("Shutting down %s controller", Name)</p>
<p>   if !cache.WaitForNamedCacheSync(Name, stopCh, c.podStoreSynced, c.jobStoreSynced) {</p>
<p>      klog.Errorf("failed to wait for %s caches to sync", Name)</p>
<p>      return</p>
<p>   }</p>
<p>   klog.Infof("Starting %s workers", Name)</p>
<p>   for i := 0; i < workers; i++ {</p>
<p>      go wait.Until(c.worker, time.Second, stopCh)</p>
<p>   }</p>
<p>   <-stopCh</p>
<p>}</p>
</code>`<code>
<p>在<code>c.worker</code>方法中，会循环调用<code>processNextWorkItem()</code>去处理队列中的LifelongLearningJob资源对象。</p>
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// worker runs a worker thread that just dequeues items, processes them, and marks them done.</p>
<p>// It enforces that the syncHandler is never invoked concurrently with the same key.</p>
<p>func (c *Controller) worker() {</p>
<p>   for c.processNextWorkItem() {</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<p><code>c.processNextWorkItem()</code>会从工作队列中取出一个key，并调用<code>c.sync()</code>函数来处理特性相关的逻辑（其典型写法见下面的示意）。</p>
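<p>正文没有贴出processNextWorkItem的实现，这里给出client-go工作队列的典型写法作为参考（仅为示意，细节与Sedna源码可能不同）：</p>
<pre><code class="language-go">
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// 仅为示意：一个极简的controller骨架，演示processNextWorkItem的典型写法
type controller struct {
	queue workqueue.RateLimitingInterface
}

// sync 这里只做打印，对应正文中处理具体LifelongLearningJob的c.sync()
func (c *controller) sync(key string) (bool, error) {
	fmt.Println("sync", key)
	return true, nil
}

// processNextWorkItem 从队列取出一个key交给sync处理：
// 成功则Forget（清除重试计数），失败则AddRateLimited按退避策略重新入队
func (c *controller) processNextWorkItem() bool {
	key, quit := c.queue.Get()
	if quit {
		return false
	}
	defer c.queue.Done(key)

	forget, err := c.sync(key.(string))
	if err == nil && forget {
		c.queue.Forget(key)
		return true
	}
	c.queue.AddRateLimited(key)
	return true
}

func main() {
	c := &controller{queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())}
	c.queue.Add("default/demo-job")
	c.queue.ShutDown() // 队列排空后让Get返回quit，避免示例阻塞
	for c.processNextWorkItem() {
	}
}
</code></pre>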
</code>pkg/globalmanager/controllers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>func (c *Controller) sync(key string) (bool, error) {</p>
<p>   //省略了部分代码</p>
<p>   ns, name, err := cache.SplitMetaNamespaceKey(key)</p>
<p>   sharedJob, err := c.jobLister.LifelongLearningJobs(ns).Get(name)</p>
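<p>   // 注：原文此处省略的代码中应包含由sharedJob得到job、以及forget、jobFailed等变量的定义（以源码为准）</p>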
<p>   // if job was finished previously, we don't want to redo the termination</p>
<p>   if IsJobFinished(&job) {</p>
<p>      return true, nil</p>
<p>   }</p>
<p>   // transit this job's state machine</p>
<p>   needUpdated, err = c.transitJobState(&job)</p>
<p>   if needUpdated {</p>
<p>      if err := c.updateJobStatus(&job); err != nil {</p>
<p>         return forget, err</p>
<p>      }</p>
<p>      if jobFailed && !IsJobFinished(&job) {</p>
<p>         // returning an error will re-enqueue LifelongLearningJob after the backoff period</p>
<p>         return forget, fmt.Errorf("failed pod(s) detected for lifelonglearningjob key %q", key)</p>
<p>      }</p>
<p>      forget = true</p>
<p>   }</p>
<p>   return forget, err</p>
<p>}</p>
</code>`<code>
<p>sync是对具体的LifelongLearningJob进行逻辑处理，主要做了这么几件事：</p>
<li>通过<code>SplitMetaNamespaceKey</code>将LifelongLearningJob的key切分为namespace和name。</li>
<li>通过<code>c.jobLister</code>获取LifelongLearningJob的资源对象。</li>
<li>通过<code>transitJobState</code>判断当前job接下来应该进入训练、评估还是部署阶段。</li>
<li>如果LifelongLearningJob的Status有更新，则通过<code>c.updateJobStatus()</code>写回k8s资源对象，这样通过<code>kubectl</code>查询到的就是最新状态，比如当前处于评估阶段、生成的模型路径等信息。</li>
<li>任务失败等异常处理。</li>
</code>`<code>go
<p>// transit this job's state machine</p>
<p>needUpdated, err = c.transitJobState(&job)</p>
</code>`<code>
<p>其中<code>transitJobState()</code>是终身学习任务流转的核心逻辑，控制训练、评估、部署分别在什么时候启动和停止，详细流转逻辑可以结合下图的状态流转图进行分析。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/KubeEdge-Sedna%E6%BA%90%E7%A0%81%E8%A7%A3%E6%9E%90%EF%BC%88%E8%BD%AC%E8%BD%BD%EF%BC%89_1770869748995.png" alt="" />
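<p>为便于对照这张状态图，下面给出一个高度简化的阶段流转示意：只表达"训练完成进入评估、评估完成进入部署、部署后随新数据到来可能再次回到训练"这条主干，并不是transitJobState()的真实实现，触发条件和分支以状态图与源码为准：</p>
<pre><code class="language-go">
package main

import "fmt"

// 仅为示意：用本地定义的类型模拟阶段枚举，名称参考正文，但并非引用真实的sednav1包
type llJobStage string

const (
	stageTrain  llJobStage = "Train"
	stageEval   llJobStage = "Eval"
	stageDeploy llJobStage = "Deploy"
)

// nextStage 只表达主干流转：Train -> Eval -> Deploy，之后（有新数据时）再回到Train
func nextStage(current llJobStage) llJobStage {
	switch current {
	case stageTrain:
		return stageEval
	case stageEval:
		return stageDeploy
	default:
		// 终身学习是持续迭代的：部署之后随着新样本积累，还会再次进入训练（具体条件以源码为准）
		return stageTrain
	}
}

func main() {
	stage := stageTrain
	for i := 0; i < 4; i++ {
		fmt.Println("stage:", stage)
		stage = nextStage(stage)
	}
}
</code></pre>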
<h3>【10】websocket监听启动</h3>
<p>启动一个websocket服务，用于接收【8】中提到的边侧传过来的消息，默认监听的IP和端口是<code>0.0.0.0:9000</code>。</p>
</code>pkg/globalmanager/controllers/manager.go<code>
</code>`<code>go
<p>addr := fmt.Sprintf("%s:%d", m.Config.WebSocket.Address, m.Config.WebSocket.Port)</p>
<p>ws := websocket.NewServer(addr)</p>
<p>err = ws.ListenAndServe()</p>
</code>`<code>
<h2>LC: Local Controller</h2>
<p>LC部署在边缘节点上，主要负责本地的任务管理和消息代理。LC的入口函数在<code>cmd/sedna-lc/sedna-lc.go</code>，相关入口分析可以参考GM章节。下面贴一下本地任务管理注册的函数入口：</p>
</code>cmd/sedna-lc/app/server.go<code>
</code>`<code>go
<p>// runServer runs server</p>
<p>func runServer() {</p>
<p>   c := gmclient.NewWebSocketClient(Options)</p>
<p>   if err := c.Start(); err != nil {</p>
<p>      return</p>
<p>   }</p>
<p>   dm := dataset.New(c, Options)</p>
<p>   mm := model.New(c)</p>
<p>   jm := jointinference.New(c)</p>
<p>   fm := federatedlearning.New(c)</p>
<p>   im := incrementallearning.New(c, dm, mm, Options)</p>
<p>   lm := lifelonglearning.New(c, dm, Options)</p>
<p>   s := server.New(Options)</p>
<p>   for _, m := range []managers.FeatureManager{</p>
<p>      dm, mm, jm, fm, im, lm,</p>
<p>   } {</p>
<p>      s.AddFeatureManager(m)</p>
<p>      c.Subscribe(m)</p>
<p>      err := m.Start()</p>
<p>      if err != nil {</p>
<p>         klog.Errorf("failed to start manager %s: %v",</p>
<p>            m.GetName(), err)</p>
<p>         return</p>
<p>      }</p>
<p>      klog.Infof("manager %s is started", m.GetName())</p>
<p>   }</p>
<p>   s.ListenAndServe()</p>
<p>}</p>
</code>`<code>
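<p>从上面的循环可以看出，每个特性管理器只要满足一个统一的接口就能被LC装配进来。下面是根据这段代码对m的调用方式（GetName/Start/AddWorkerMessage）反推出来的接口形状，仅为示意，真实定义在pkg/localcontroller/managers下，方法集合可能更多：</p>
<pre><code class="language-go">
package main

import "fmt"

// 仅为示意：模拟workertypes.MessageContent中与本文相关的字段，真实字段以源码为准
type MessageContent struct {
	Name      string
	OwnerKind string
}

// FeatureManager 根据runServer中对m的调用方式反推的接口形状（示意）
type FeatureManager interface {
	Start() error                    // 启动该特性的本地管理循环
	GetName() string                 // 特性名，用于日志与按OwnerKind路由消息
	AddWorkerMessage(MessageContent) // 接收HTTP server转交的worker消息
}

// 一个什么都不做的假想特性管理器，演示接入方式
type demoManager struct{}

func (d *demoManager) Start() error                    { return nil }
func (d *demoManager) GetName() string                 { return "demo" }
func (d *demoManager) AddWorkerMessage(MessageContent) {}

func main() {
	var m FeatureManager = &demoManager{}
	if err := m.Start(); err == nil {
		fmt.Printf("manager %s is started\n", m.GetName())
	}
}
</code></pre>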
<h3>本地任务管理</h3>
<p><code>Manager</code>这个对象负责边缘侧的任务管理，下面是其数据结构定义：</p>
</code>pkg/localcontroller/managers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// LifelongLearningJobManager defines lifelong-learning-job Manager</p>
<p>type Manager struct {</p>
<p>   Client                 clienttypes.ClientI</p>
<p>   WorkerMessageChannel   chan workertypes.MessageContent</p>
<p>   DatasetManager         *dataset.Manager</p>
<p>   LifelongLearningJobMap map[string]*Job</p>
<p>   VolumeMountPrefix      string</p>
<p>}</p>
</code>`<code>
<p>主要的管理流程可以参考如下<code>startJob()</code>代码，其中主要做了两件事：</p>
<li>监控并处理同步到边缘的Dataset对象，比如监听数据集样本数量是否达到阈值，达到则触发训练。</li>
<li>根据当前任务所处的阶段，触发对应的训练、评估、部署任务。本地并不直接拉起任务，而是把相应状态上报给GM，由GM统一调度来启动对应阶段的任务。</li>
</code>pkg/localcontroller/managers/lifelonglearning/lifelonglearningjob.go<code>
</code>`<code>go
<p>// startJob starts a job</p>
<p>func (lm *Manager) startJob(name string) {</p>
<p>   ...</p>
<p>    </p>
<p>   // 监控并处理同步到边缘的Dataset对象。 </p>
<p>   go lm.handleData(job)</p>
<p>   tick := time.NewTicker(JobIterationIntervalSeconds * time.Second)</p>
<p>   for {</p>
<p>      // 根据当前任务不同阶段，触发不同阶段的训练、评估、部署任务。</p>
<p>      select {</p>
<p>      case <-job.JobConfig.Done:</p>
<p>         return</p>
<p>      case <-tick.C:</p>
<p>         cond := lm.getLatestCondition(job)</p>
<p>         jobStage := cond.Stage</p>
<p>         switch jobStage {</p>
<p>         case sednav1.LLJobTrain:</p>
<p>            err = lm.trainTask(job)</p>
<p>         case sednav1.LLJobEval:</p>
<p>            err = lm.evalTask(job)</p>
<p>         case sednav1.LLJobDeploy:</p>
<p>            err = lm.deployTask(job)</p>
<p>         default:</p>
<p>            klog.Errorf("invalid phase: %s", jobStage)</p>
<p>            continue</p>
<p>         }</p>
<p>		 ...</p>
<p>      }</p>
<p>   }</p>
<p>}</p>
</code>`<code>
<p>除了整体的任务流程管理，LC还提供数据集监控、模型下载、任务本地数据库备份等功能。</p>
<h3>消息代理</h3>
<p>LC除了把状态变化的消息往云端传之外，还会在本地<code>0.0.0.0:9100</code>端口启动一个HTTP Server，用来接收Lib库传过来的消息，整合后统一传输给GM，起到消息代理的作用。下面是注册的REST接口路由和消息处理函数。</p>
</code>pkg/localcontroller/server/server.go<code>
</code>`<code>go
<p>// register registers api</p>
<p>func (s *Server) register(container *restful.Container) {</p>
<p>	ws := new(restful.WebService)</p>
<p>	ws.Path(fmt.Sprintf("/%s", constants.ServerRootPath)).</p>
<p>		Consumes(restful.MIME_XML, restful.MIME_JSON).</p>
<p>		Produces(restful.MIME_JSON, restful.MIME_XML)</p>
<p>	ws.Route(ws.POST("/workers/{worker-name}/info").</p>
<p>		To(s.messageHandler).</p>
<p>		Doc("receive worker message"))</p>
<p>	container.Add(ws)</p>
<p>}</p>
</code>`<code>
</code>pkg/localcontroller/server/server.go<code>
</code>`<code>go
<p>// messageHandler handles message from the worker</p>
<p>func (s *Server) messageHandler(request *restful.Request, response *restful.Response) {</p>
<p>   var err error</p>
<p>   workerName := request.PathParameter("worker-name")</p>
<p>   workerMessage := workertypes.MessageContent{}</p>
<p>   err = request.ReadEntity(&workerMessage)</p>
<p>   if workerMessage.Name != workerName || err != nil {</p>
<p>      var msg string</p>
<p>      if workerMessage.Name != workerName {</p>
<p>         msg = fmt.Sprintf("worker name(name=%s) in the api is different from that(name=%s) in the message body",</p>
<p>            workerName, workerMessage.Name)</p>
<p>      } else {</p>
<p>         msg = fmt.Sprintf("read worker(name=%s) message body failed, error: %v", workerName, err)</p>
<p>      }</p>
<p>      klog.Errorf(msg)</p>
<p>      err = s.reply(response, http.StatusBadRequest, msg)</p>
<p>      if err != nil {</p>
<p>         klog.Errorf("reply messge to worker(name=%s) failed, error: %v", workerName, err)</p>
<p>      }</p>
<p>      // 校验失败时应直接返回，避免继续往下执行并重复写应答（原文摘录疑似漏掉了这行return）</p>
<p>      return</p>
<p>   }</p>
<p>   if m, ok := s.fmm[workerMessage.OwnerKind]; ok {</p>
<p>      m.AddWorkerMessage(workerMessage)</p>
<p>   }</p>
<p>   err = s.reply(response, http.StatusOK, "OK")</p>
<p>   if err != nil {</p>
<p>      klog.Errorf("reply message to worker(name=%s) failed, error: %v", workerName, err)</p>
<p>      return</p>
<p>   }</p>
<p>}</p>
</code>`<code>
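<p>为了直观说明这个接口怎么被调用，下面补充一个（示意性的）客户端例子：worker侧把一条消息POST到LC的该路由上。URL中的根路径用占位符表示，真实值以源码中的constants.ServerRootPath为准；消息体字段也只是示例，真实结构以workertypes.MessageContent为准：</p>
<pre><code class="language-go">
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// 消息体字段仅为示例，真实结构以 workertypes.MessageContent 为准；
	// 注意name需要与URL中的worker-name一致，否则会被messageHandler按BadRequest处理
	msg := map[string]interface{}{
		"name":      "train-worker-0",
		"ownerKind": "lifelonglearningjob",
	}
	body, _ := json.Marshal(msg)

	// URL中的{root}为占位符，对应源码中的constants.ServerRootPath
	url := "http://127.0.0.1:9100/{root}/workers/train-worker-0/info"
	resp, err := http.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("post to LC failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("LC replied with status:", resp.StatusCode)
}
</code></pre>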
<h1>Sedna Lib源码解析（Python）</h1>
<p>Lib是面向AI开发者和应用开发者的Python库，方便开发者把自己已有的代码改造成支持边云协同的版本。</p>
<p>下面是Lib的工程目录结构：</p>
</code>`<code>plain text
<p>➜  sedna tree lib -L 2</p>
<p>lib</p>
<p>├── __init__.py</p>
<p>├── MANIFEST.in</p>
<p>├── OWNERS</p>
<p>├── requirements.dev.txt</p>
<p>├── requirements.txt    // Sedna Python的依赖</p>
<p>├── sedna</p>
<p>│   ├── algorithms  // 边云协同算法</p>
<p>│   ├── backend     // 支持的后端，tensorflow/pytorch</p>
<p>│   ├── common</p>
<p>│   ├── core        // 主要特性的实现逻辑</p>
<p>│   ├── datasources // 支持的数据源格式，比如txt、csv等</p>
<p>│   ├── __init__.py</p>
<p>│   ├── README.md</p>
<p>│   ├── service     // 需要启动server的组件，比如kb等</p>
<p>│   ├── VERSION</p>
<p>│   └── __version__.py</p>
<p>└── setup.py</p>
</code>`<code>
<p>下面摘取每个部分比较典型的代码进行分析讲解。</p>
<h3>core</h3>
<p>core部分主要封装了用户的回调函数，比如下面的train，就是回调用户基于tensorflow、pytorch、mindspore等框架封装的train函数，并在这个过程中主要做了这么几件事：</p>
<li>配置后处理函数。</li>
<li>调用云端知识库进行训练、推理等。</li>
<li>更新云端知识库。在终身学习中，云端知识库用来保存新的模型和样本，会被不断更新。</li>
<li>将当前训练任务执行的情况发送给LC，比如训练任务是否完成、训练后的指标是多少。</li>
</code>lib/sedna/core/lifelong_learning/lifelong_learning.py<code>
</code>`<code>python
<p>def train(self, train_data,</p>
<p>          valid_data=None,</p>
<p>          post_process=None,</p>
<p>          **kwargs):</p>
<p>    is_completed_initilization = \</p>
<p>        str(Context.get_parameters("HAS_COMPLETED_INITIAL_TRAINING",</p>
<p>                                   "false")).lower()</p>
<p>    if is_completed_initilization == "true":</p>
<p>        return self.update(train_data,</p>
<p>                           valid_data=valid_data,</p>
<p>                           post_process=post_process,</p>
<p>                           **kwargs)</p>
<p>    # 配置后处理函数</p>
<p>    callback_func = None</p>
<p>    if post_process is not None:</p>
<p>        callback_func = ClassFactory.get_cls(</p>
<p>            ClassType.CALLBACK, post_process)</p>
<p>    res, seen_task_index = \</p>
<p>        self.cloud_knowledge_management.seen_estimator.train(</p>
<p>            train_data=train_data,</p>
<p>            valid_data=valid_data,</p>
<p>            **kwargs</p>
<p>        ) </p>
<p>    # 调用云端知识库进行训练、或推理</p>
<p>    unseen_res, unseen_task_index = \</p>
<p>        self.cloud_knowledge_management.unseen_estimator.train()</p>
<p>    # 更新云端知识库</p>
<p>    task_index = dict(</p>
<p>        seen_task=seen_task_index,</p>
<p>        unseen_task=unseen_task_index)</p>
<p>    task_index_url = FileOps.dump(</p>
<p>        task_index, self.cloud_knowledge_management.local_task_index_url)</p>
<p>    task_index = self.cloud_knowledge_management.update_kb(task_index_url)</p>
<p>    res.update(unseen_res)</p>
<p>    ...</p>
<p>    </p>
<p>    # 将当前训练任务执行的情况发送给LC，比如训练任务是否完成、训练后的指标是多少</p>
<p>    self.report_task_info(</p>
<p>            None, K8sResourceKindStatus.COMPLETED.value, task_info_res)</p>
<p>    self.log.info(f"Lifelong learning Train task Finished, "</p>
<p>                  f"KB index save in {task_index}")</p>
<p>    return callback_func(self.estimator, res) if callback_func else res</p>
<p>    </p>
<p>    ...</p>
</code>`<code>
<h3>backend</h3>
<p><code>MSBackend</code>定义了Sedna支持的一种后端框架MindSpore。只要为某个框架定义好典型的train、predict、evaluate函数，Sedna Lib就能把它作为backend，从而只需对现有AI代码做简单封装即可获得边云协同的能力。</p>
</code>lib/sedna/backend/mindspore/__init__.py<code>
</code>`<code>python
<p>class MSBackend(BackendBase):</p>
<p>    def __init__(self, estimator, fine_tune=True, **kwargs):</p>
<p>        super(MSBackend, self).__init__(estimator=estimator,</p>
<p>                                        fine_tune=fine_tune,</p>
<p>                                        **kwargs)</p>
<p>        self.framework = "mindspore"</p>
<p>        if self.use_npu:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="Ascend")</p>
<p>        elif self.use_cuda:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="GPU")</p>
<p>        else:</p>
<p>            context.set_context(mode=context.GRAPH_MODE,</p>
<p>                                device_target="CPU")</p>
<p>        if callable(self.estimator):</p>
<p>            self.estimator = self.estimator()</p>
<p>    def train(self, train_data, valid_data=None, **kwargs):</p>
<p>        if callable(self.estimator):</p>
<p>            self.estimator = self.estimator()</p>
<p>        if self.fine_tune and FileOps.exists(self.model_save_path):</p>
<p>            self.finetune()</p>
<p>        self.has_load = True</p>
<p>        varkw = self.parse_kwargs(self.estimator.train, **kwargs)</p>
<p>        return self.estimator.train(train_data=train_data,</p>
<p>                                    valid_data=valid_data,</p>
<p>                                    **varkw)</p>
<p>    def predict(self, data, **kwargs):</p>
<p>        if not self.has_load:</p>
<p>            self.load()</p>
<p>        varkw = self.parse_kwargs(self.estimator.predict, **kwargs)</p>
<p>        return self.estimator.predict(data=data, **varkw)</p>
<p>    def evaluate(self, data, **kwargs):</p>
<p>        if not self.has_load:</p>
<p>            self.load()</p>
<p>        varkw = self.parse_kwargs(self.estimator.evaluate, **kwargs)</p>
<p>        return self.estimator.evaluate(data, **varkw)</p>
</code>`<code>
<h3>datasource</h3>
<p>datasource中封装了常用的数据集格式处理函数，这样开发者就不需要自己另行编写数据解析代码，比如下面针对CSV结构化数据的解析。</p>
</code>lib/sedna/datasources/__init__.py<code>
</code>`<code>python
<p>class CSVDataParse(BaseDataSource, ABC):</p>
<p>    """</p>
<p>    csv file which contain Structured Data parser</p>
<p>    """</p>
<p>    # 提供了方便的数据集解析函数，</p>
<p>    def parse(self, *args, **kwargs):</p>
<p>        x_data = []</p>
<p>        y_data = []</p>
<p>        label = kwargs.pop("label") if "label" in kwargs else ""</p>
<p>        usecols = kwargs.get("usecols", "")</p>
<p>        if usecols and isinstance(usecols, str):</p>
<p>            usecols = usecols.split(",")</p>
<p>        if len(usecols):</p>
<p>            if label and label not in usecols:</p>
<p>                usecols.append(label)</p>
<p>            kwargs["usecols"] = usecols</p>
<p>        for f in args:</p>
<p>            if isinstance(f, (dict, list)):</p>
<p>                res = self.parse_json(f, **kwargs)</p>
<p>            else:</p>
<p>                if not (f and FileOps.exists(f)):</p>
<p>                    continue</p>
<p>                res = pd.read_csv(f, **kwargs)</p>
<p>            if self.process_func and callable(self.process_func):</p>
<p>                res = self.process_func(res)</p>
<p>            if label:</p>
<p>                if label not in res.columns:</p>
<p>                    continue</p>
<p>                y = res[label]</p>
<p>                y_data.append(y)</p>
<p>                res.drop(label, axis=1, inplace=True)</p>
<p>            x_data.append(res)</p>
<p>        if not x_data:</p>
<p>            return</p>
<p>        self.x = pd.concat(x_data)</p>
<p>        self.y = pd.concat(y_data)</p>
</code>`<code>
<h3>algorithms</h3>
<p>Sedna面向边云协同AI场景，需要有针对这类场景的算法。它本身也集成了若干典型的难例识别（hard example mining）算法，比如下面基于交叉熵阈值的算法，能够在边侧模型"不自信"时把对应的样本识别出来。</p>
<p>Sedna的目的不只是集成这些基础算法，更是希望在边云协同框架下能方便地扩展更多实用算法，用来优化边云整体的训练、推理性能，这才是Sedna Lib这套框架真正想实现的。</p>
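<p>结合下面的实现看，这个置信度其实就是"1减去归一化熵"：confidence_score = 1 + Σ p·log(p) / log(n)，其中p为各类别得分（假设近似构成一个概率分布）、n为类别数。举个数值例子：三分类预测为[0.9, 0.05, 0.05]时，confidence_score约为0.64，大于默认阈值0.5，不算难例；预测为[0.4, 0.3, 0.3]时，confidence_score约为0.01，小于0.5，被判定为难例。</p>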
</code>lib/sedna/algorithms/hard_example_mining/hard_example_mining.py<code>
</code>`<code>python
<p>@ClassFactory.register(ClassType.HEM, alias="CrossEntropy")</p>
<p>class CrossEntropyFilter(BaseFilter, abc.ABC):</p>
<p>    """</p>
<p>    **Object detection** Hard samples discovery methods named `CrossEntropy`</p>
<p>    Parameters</p>
<p>    ----------</p>
<p>    threshold_cross_entropy: float</p>
<p>        hard coefficient threshold score to filter img, default to 0.5.</p>
<p>    """</p>
<p>    def __init__(self, threshold_cross_entropy=0.5, **kwargs):</p>
<p>        self.threshold_cross_entropy = float(threshold_cross_entropy)</p>
<p>    def __call__(self, infer_result=None) -> bool:</p>
<p>        """judge the img is hard sample or not.</p>
<p>        Parameters</p>
<p>        ----------</p>
<p>        infer_result: array_like</p>
<p>            prediction classes list, such as</p>
<p>            [class1-score, class2-score, class2-score,....],</p>
<p>            where class-score is the score corresponding to the class,</p>
<p>            class-score value is in [0,1], who will be ignored if its</p>
<p>            value not in [0,1].</p>
<p>        Returns</p>
<p>        -------</p>
<p>        is hard sample: bool</p>
<p>            `True` means hard sample, `False` means not.</p>
<p>        """</p>
<p>        if not infer_result:</p>
<p>            # if invalid input, return False</p>
<p>            return False</p>
<p>        log_sum = 0.0</p>
<p>        data_check_list = [class_probability for class_probability</p>
<p>                           in infer_result</p>
<p>                           if self.data_check(class_probability)]</p>
<p>        if len(data_check_list) != len(infer_result):</p>
<p>            return False</p>
<p>        for class_data in data_check_list:</p>
<p>            log_sum += class_data * math.log(class_data)</p>
<p>        confidence_score = 1 + 1.0 * log_sum / math.log(</p>
<p>            len(infer_result))</p>
<p>        return confidence_score < self.threshold_cross_entropy</p>
</code>``
<hr />
<p>1. <a href="https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator">https://www.redhat.com/en/topics/containers/what-is-a-kubernetes-operator</a></p>
<p>2. <a href="https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work">https://developers.redhat.com/articles/2021/06/22/kubernetes-operators-101-part-2-how-operators-work</a></p>]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[炎拳]]></title>
        <id>https://shemol.tech/fire-punch</id>
        <link href="https://shemol.tech/fire-punch"/>
        <updated>2025-01-07T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[看了炎拳，喜欢。]]></summary>
        <content type="html"><![CDATA[<h1>炎拳</h1>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869252463.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869253503.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869254609.png" alt="" />
<p>第一话就没绷住…</p>
<p>换了pt上的资源看，是这么翻译的…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869255654.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869256745.png" alt="" />
<p>草…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869258164.png" alt="" />
<p>喜欢藤本树画的眼睛。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869259907.png" alt="" />
<p>大义呐。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869260972.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869261944.png" alt="" />
<p>因为他们吃人肉所以要把他们都杀掉…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869263274.png" alt="" />
<p>可怕…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869270079.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869271295.png" alt="" />
<p>真美好…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869272849.png" alt="" />
<p>名为活下去的诅咒…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869274123.png" alt="" />
<p>就算要遭受所有痛苦，一定要抗拒死亡。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869275398.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869276861.png" alt="" />
<p>草原来是这样。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869278216.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869279295.png" alt="" />
<p>傻逼极了。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869280513.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869281851.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869283113.png" alt="" />
<p>真的吗？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869284254.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869285521.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869286779.png" alt="" />
<p>这位是？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869287905.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869289178.png" alt="" />
<p>是她哥哥啊…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869290705.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869292601.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869293806.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869295261.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869297407.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869298730.png" alt="" />
<p>原来如此…所以大家才让萨恩走的…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869299979.png" alt="" />
<p>这是在说明性别嘛。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869301158.png" alt="" />
<p>真果断啊，马上把自己的手臂砍了。他也能再生…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869302417.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869303681.png" alt="" />
<p>？？？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869305243.png" alt="" />
<p>喜欢。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869310702.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869312172.png" alt="" />
<p>这也太变态了？十万个问号。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869313467.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869314785.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869316136.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869317501.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869318992.png" alt="" />
<p>…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869320002.png" alt="" />
<p>封面都很好看的。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869322346.png" alt="" />
<p>神秘的拍电影的人。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869323264.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869324665.png" alt="" />
<p>细节抠鼻屎。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869325713.png" alt="" />
<p>蛊惑人心！</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869326955.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869328078.png" alt="" />
<p>草。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869333507.png" alt="" />
<p>好棒好棒好棒真的好棒好喜欢这个画面和对白。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869335220.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869336344.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869337802.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869339014.png" alt="" />
<p>导演说的对，这男的露出的表情好好笑。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869340460.png" alt="" />
<p>草上一页还在聊天讲闲话，短暂的黑屏之后就是这页，好赞。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869348828.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869350334.png" alt="" />
<p>暴怒！爆赞！</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869351878.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869353222.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869354502.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869355580.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869356869.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869358078.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869359229.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869360536.png" alt="" />
<p>看的我直皱眉…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869361928.png" alt="" />
<p>已经失智了…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869364110.png" alt="" />
<p>这个人在拱火…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869365435.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869366864.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869368752.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869370304.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869371955.png" alt="" />
<p>忠言逆耳！这位才是神！</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869373086.png" alt="" />
<p>就是这样。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869374145.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869375347.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869376735.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869378123.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869379417.png" alt="" />
<p>我们也是电池吗……这是我吗…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869381702.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869382969.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869384476.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869386454.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869387914.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869389235.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869392545.png" alt="" />
<p>出生啊…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869394470.png" alt="" />
<p>！！！！</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869395839.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869397127.png" alt="" />
<p>笑死我了真的。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869398485.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869399698.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869401115.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869402535.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869403920.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869407784.png" alt="" />
<p>啊为了你的电影啊…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869409034.png" alt="" />
<p>怎么沦落到这个地步…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869410471.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869412011.png" alt="" />
<p>出生啊…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869413128.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869414316.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869415792.png" alt="" />
<p>好看。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869417406.png" alt="" />
<p>画面不错。</p>
<p>话说藤本树抽烟吗，怎么总有烟的画面。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869418792.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869420166.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869421529.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869422569.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869423948.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869425237.png" alt="" />
<p>我想救他们！</p>
<p>看，只要活着，事情也许就会有所改变。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869426771.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869428765.png" alt="" />
<p>不想输给这个世界的恶。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869430146.png" alt="" />
<p>剧本嘛，都只是导演的一厢情愿，比起剧本，我更关心演员的意志。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869431402.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869432885.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869433985.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869435394.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869436820.png" alt="" />
<p>傻逼在说话。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869438332.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869440834.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869442184.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869443379.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869444772.png" alt="" />
<p>草。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869446183.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869447662.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869449064.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869450447.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869451755.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869453056.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869454513.png" alt="" />
<p>万磁王上大号说话？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869456144.png" alt="" />
<p>又是这招，电锯人里是心脏，这里是头!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869457445.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869463685.png" alt="" />
<p>是这样呢导演大人。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869465025.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869466440.png" alt="" />
<p>…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869467803.png" alt="" />
<p>这里真的蛮有趣的，一直认为她是露娜，但是她说自己是露娜的时候反而明白了她不是露娜。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869469404.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869470510.png" alt="" />
<p>环境使人麻木了。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869472027.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869473267.png" alt="" />
<p>为什么！求你了告诉我吧。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869474755.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869476157.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869477433.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869478842.png" alt="" />
<p>是真的冰之魔女！</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869480382.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869481864.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869483093.png" alt="" />
<p>谢谢配合…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869484278.png" alt="" />
<p>牛逼。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869485690.jpg" alt="" />
<p>笑死了。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869487151.png" alt="" />
<p>笑死了。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869488696.png" alt="" />
<p>笑死了臭傻逼。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869489902.png" alt="" />
<p>笑死了要。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869491200.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869492867.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869494181.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869495558.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869496727.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869497738.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869499047.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869500286.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869501723.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869503072.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869504697.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869506108.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869507318.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869508642.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869510008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869511209.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869512594.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869514028.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869516409.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869517929.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869519157.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869520444.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869521838.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869525049.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869528940.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869530517.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869532062.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869533671.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869535308.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869539260.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869540515.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869541875.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869543247.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869544590.png" alt="" />
<p>好好笑。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869546165.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869547684.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869549008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869550599.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869551978.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869553361.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869554888.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869556239.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869558092.png" alt="" />
<p>笑死。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869559291.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869560486.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869561764.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869563005.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869564626.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869566097.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869567484.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869568808.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869570100.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869571129.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869572415.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869573976.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869575242.png" alt="" />
<p>已经猜到了，会说，活下去。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869576533.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869577951.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869579740.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869581132.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869582743.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869584044.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869585418.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869587073.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869588474.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869589914.png" alt="" />
<p>所以是不是露娜啊…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869591165.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869592765.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869595802.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869597100.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869598782.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869600118.png" alt="" />
<p>名场面…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869601386.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869602967.png" alt="" />
<p>草这是哪里？？？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869604391.png" alt="" />
<p>撒谎？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869606143.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869607375.png" alt="" />
<p>哦不是她妹，只是那个人变傻了。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869608861.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869610199.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869611506.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869613060.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869614516.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869615947.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869620560.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869621976.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869623420.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869625008.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869626315.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869627730.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869629422.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869630872.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869632348.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869633904.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869635401.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869637562.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869639114.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869640408.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869641905.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869643377.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869644981.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869647455.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869648789.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869650132.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869653935.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869658832.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869660207.png" alt="" />
<p>那个人的孩子吗…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869661955.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869663297.png" alt="" />
<p>好聪明，又感觉其实疯了…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1771040373576.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869697405.png" alt="" />
<p>全是疯批。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869698915.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869700472.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869703744.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869705085.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869706614.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869708151.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869709388.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869710844.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869712200.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869713779.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869715133.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869716660.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869718056.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869719495.png" alt="" />
<p>？？？</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869720863.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869722288.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869723717.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869725057.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%82%8E%E6%8B%B3_1770869727025.png" alt="" />]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
    <entry>
        <title type="html"><![CDATA[电锯人一漫画重看]]></title>
        <id>https://shemol.tech/chainsaw-man-1</id>
        <link href="https://shemol.tech/chainsaw-man-1"/>
        <updated>2025-01-02T00:00:00.000Z</updated>
        <summary type="html"><![CDATA[重看了电锯人一的漫画。]]></summary>
        <content type="html"><![CDATA[<h1>电锯人一漫画重看</h1>
<p>重看电锯人。</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868982280.png" alt="" />
<p>Absolutely brilliant!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868984321.png" alt="" />
<p>Is this why he fell for Makima?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868985528.png" alt="" />
<p>An iconic scene!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868986831.png" alt="" />
<p>Poor Denji!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868988171.png" alt="" />
<p>Tatsuki Fujimoto said in an interview that Denji tends to pull off unexpected developments.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868993032.png" alt="" />
<p>Pochita is so sweet it makes me cry.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868994325.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868995562.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868996892.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868998049.png" alt="" />
<p>That's just how people are: the moment one goal is reached, we start thinking about the next, never satisfied, always with an empty place in the heart. But it is in that endless chasing that we keep bettering ourselves, and even if the dream is never realized, even if we only pursue it until we die, that is fine too. Just like the theme of Millennium Actress: I love the self that keeps on chasing.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770868999220.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869000708.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869002023.png" alt="" />
<p>So cool!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869003476.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869004758.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869006042.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869007151.png" alt="" />
<p>Exactly.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869008308.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869009683.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869010900.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869012200.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869013696.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869015339.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869016576.png" alt="" />
<p>So you just have to not become that kind of person, and then you can make the devils afraid.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869017688.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869019107.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869020455.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869021931.png" alt="" />
<p>I love this scene.</p>
<p>Come to think of it, I don't feel Tatsuki Fujimoto is especially known for fine, delicate drawing (?); isn't that a lot like Fujino and Kyomoto from his own work? You could almost say they are characters he created in his own image.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869023582.png" alt="" />
<p>Hilarious, totally forgiven!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869024838.png" alt="" />
<p>True, you can get to know a person through their interests.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869025962.png" alt="" />
<p>The Fox Devil only cares about looks…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869027404.png" alt="" />
<p>Exactly!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869028735.png" alt="" />
<p>I'm dying of laughter.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869030060.png" alt="" />
<p>So disgusting…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869031337.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869032612.png" alt="" />
<p>Makima's dying face!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869033881.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869035658.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869036821.png" alt="" />
<p>A gun given by the Gun Devil?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869037969.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869039564.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869040860.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869042153.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869043567.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869045034.png" alt="" />
<p>Himeno-senpai is dead…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869046349.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869047570.png" alt="" />
<p>The Future Devil!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869048967.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869050290.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869051487.png" alt="" />
<p>Makima isn't human!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869052876.png" alt="" />
<p>What does that mean?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869054112.png" alt="" />
<p>There it is again: the necessary evil.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869055299.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869056613.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869057908.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869059027.png" alt="" />
<p>And then it just gave it to him, huh.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869060422.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869061637.png" alt="" />
<p>Denji looking cool.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869062965.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869064020.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869065232.png" alt="" />
<p>A date with Makima!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869066556.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869068498.png" alt="" />
<p>That's Tatsuki Fujimoto for you…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869069610.png" alt="" />
<p>Change your life, huh…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869070753.png" alt="" />
<p>Denji cried, and Makima cried too!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869072043.png" alt="" />
<p>Reze is about to appear!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869073393.png" alt="" />
<p>???</p>
<p>This differs a bit from how I remembered it; somehow I didn't remember this line at all. But is it really okay to say that… well, since it's being said to Denji it should be fine…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869077618.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869079995.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869081148.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869082302.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869083594.png" alt="" />
<p>Very flexible standards in a partner!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869084644.png" alt="" />
<p>Exactly.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869085845.png" alt="" />
<p>…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869087107.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869088450.png" alt="" />
<p>Anyone who finds Denji's jokes funny must be something special. If I remember right, she's also after Denji's heart…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869089691.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869090986.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869094281.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869095577.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869096779.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869097959.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869099569.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869101091.png" alt="" />
<p>So this is where that fan-made material I saw before came from!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869102687.png" alt="" />
<p>Which song is this?</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869103910.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869105231.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869113596.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869114942.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869116068.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869117636.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869119024.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869120537.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869122027.png" alt="" />
<p>Denji has been turned into a "human swine"…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869124722.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869126124.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869128178.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869129551.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869130808.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869132070.png" alt="" />
<p>Makima really is terrifying…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869133472.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869134784.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869136230.png" alt="" />
<p>Badass.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869137610.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869139646.png" alt="" />
<p>Makima radiates sheer intimidation.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869141042.png" alt="" />
<p>Such long arms.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869142255.png" alt="" />
<p>That's how it is.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869143480.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869147676.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869149244.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869150366.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869151831.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869152891.png" alt="" />
<p>Because of family.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869154160.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869155367.png" alt="" />
<p>Denji is so smart.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869156661.png" alt="" />
<p>His weak spot…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869157987.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869159591.png" alt="" />
<p>Genius!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869160750.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869162242.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869163829.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869165284.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869166764.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869168290.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869169556.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869170651.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869172117.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869173472.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869182106.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869183423.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869184732.png" alt="" />
<p>Interesting how the fight scene is intercut with Aki's childhood snowball fight.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869185973.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869187345.png" alt="" />
<p>May he rest in peace…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869188983.png" alt="" />
<p>Aki is dead. I killed Aki…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869189742.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869191054.png" alt="" />
<p>???? A hundred thousand question marks.</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869192380.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869193748.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869195086.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869196400.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869197551.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869198797.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869200246.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869201638.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869204552.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869206062.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869207503.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869208830.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869210374.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869211564.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869216628.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869218007.png" alt="" />
<p>Insane, he even gets knocked all the way into space.</p>
<p>Using the heart to move around at high speed!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869219826.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869220953.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869222454.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869223901.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869225114.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869226526.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869227991.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869229119.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869230462.png" alt="" />
<p>Oh right, the ending is Makima being chopped into pieces and eaten; I can't even remember whether I made it to the end on my first read…</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869231707.png" alt="" />
<p>Makima recognizes people by smell, so Makima is the real dog here!</p>
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869232988.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869235772.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869237686.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869238987.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869240145.png" alt="" />
<img src="https://pub-187e5a9ad2b040c7aa8e0208d1291b32.r2.dev/%E7%94%B5%E9%94%AF%E4%BA%BA%E4%B8%80%E6%BC%AB%E7%94%BB%E9%87%8D%E7%9C%8B_1770869241452.png" alt="" />]]></content>
        <author>
            <name>Shemol</name>
            <email>shemol106@gmail.com</email>
            <uri>https://shemol.tech</uri>
        </author>
    </entry>
</feed>