search.json · 1 lines (1 loc) · 107 KB
[{"title":"圆","url":"https://xhw994.github.io/2021/01/16/20200116/","content":"<p>十年前的今天,小圆播出了最有名的第三集。和许多人一样,我也是第三集播出后才乘上这股热潮。小圆对我们这个时代的人来说毫无疑问是一部里程碑式的动画作品,对日本本土如此,对国内也是如此。小圆是第一部让全B站疯狂的动画:考据、剪辑、音乐和各式各样的二次创作霸占了整个首页,并在当时还有人看的周刊排行上霸占八成以上的排名,其统治力与我所见到的创作者们的热忱远甚于之后的F/Z、刀剑神域、巨人,并一定程度上为B站之后的发展铺下道路。那全民的狂热让仅仅是观众的我也有莫大的参与感。用现在的话讲,是我第一份“集体记忆”。</p>\n<p>对小圆顶礼膜拜的我不满于BT站上流通的各种版本,为了一份用作收藏,可以反复观看的,完美的全集动画,我自学字幕、嵌字、转码、压缩。最后我为了这份收藏所做的付出和收获的快乐甚至凌驾于看动画和它的各种衍生作品。现在看虽然这一切都是无用功,但正是这份快乐化作的对电脑本身的兴趣让它不再仅仅是我打游戏的工具,更成为了养活自己的方式。可以说,小圆是近乎直接改变我人生轨迹的作品。如果没有小圆,我现在要么在非洲某个国家和一群十几年交情的朋友吃没有猪肉的火锅,要么就自诩建筑师在某个工地玩泥巴。小圆好看吗?我现在可能已经看不进去了。但这并不重要。小圆对我已经是超越了作品本身的符号,我对它也有太多这样那样的回忆。</p>\n<p>再后来,我第一次在电影院看剧场版动画。我发现我自己都能听懂,我才知道原来仅仅是看了几年动画日语就已经有如此水平了,似乎比认真学了三年的法语还强。也不知道是该喜还是该悲。</p>\n<p>再再后来,我第一次在微博上中奖,是花洒给画的头像。我选了焰魔,但没用多久。</p>\n<p>更后来,小圆外传手游放出消息。我第一时间在推特上预约,并在几个月后第一时间将开服通知标记为垃圾邮件。</p>\n<p>最近,小圆外传动画播了。因为不好看,我看过三集就弃了。</p>\n<p>希望还能看到老虚的小圆。如果没有,也不错。</p>\n","categories":["随笔"],"tags":["生活","动画"]},{"title":"猴戏","url":"https://xhw994.github.io/2021/01/15/20200115/","content":"<p>我本可以忍受黑暗,如果我不曾见过太阳。然而阳光已使我的荒凉成为更新的荒凉。从高贵的C系语言到爪哇,种种不适带来的巨大落差让我无所适从,让我怀疑起了人生的意义:我想起动物园里抓耳挠腮的猴子,想起我主动踏入失败的深渊,想起了再也看不到的广阔天空。我在深夜的浅色床单下痛哭。我是傻逼。</p>\n<p>我记得,某个人死了。我也记得,那个人是我。我终究没能战胜生活,成为了那99.99%的失败者。固然,多数人的一生都是这样愚蠢:从懵懂,到自命不凡,醒悟,最终妥协。我知道每一分钟都可以是改变生活的机会,而我没有这种机会。我可以是例外,但我不是例外。拥抱着屎黄色的咖喱味天空,我向着因陀罗缓缓跪下。轻吻,他的脚背。</p>\n<p>我选择好好上班,买口好的棺材。</p>\n","categories":["随笔"],"tags":["生活"]},{"title":"搭建基于.NET生态的聚焦网络爬虫","url":"https://xhw994.github.io/2019/07/29/20190729/","content":"<h1 id=\"萌芽\"><a href=\"#萌芽\" class=\"headerlink\" title=\"萌芽\"></a>萌芽</h1><p> 我曾经有段时间是很讨厌网络爬虫这种东西的。它们漫无目的地在互联网上东奔西走,窥探着一切想要被看到的和不想被看到的东西。回想起来,我初中二三年级的时候还会在网络上放一些初中生的无病呻吟和风花雪月的诗词。当时还和班里的才子和真·才女组成了一个诗词同好会之类的东西,每日浸淫于诗赋之间,互相切磋互相吹捧,颇有古时候举人的风采。但哪怕是中二的我也懂得,这些东西大多是见不得人的自娱自乐的产物(和现在如出一辙)。不是说有哪些会伤着人的语句,只是怕自己那稚嫩的文青风格被当作刻意的少年老成——也许我潜意识里确实有这样卖弄的心态——所以很怕被人看到吧。</p>\n<p> 
那时流行一种被我们称作“萌芽体”的作文,是一种肆意引经据典,滥用排比,要么如黛玉般无端伤怀、要么装成个飞马的武将故作豪迈的文风,其命名来源于一本叫做《萌芽》的青年杂志,每期千篇一律的都是这种风格的文章,乍一看根本瞧不出这篇和那篇之间的区别,全是虚假空洞的屁话。但不幸的是,这些屁话在教师之间很是流行。它们通过堆砌华丽的辞藻,就像是要蹦出纸面一样地朝读者嘶吼着它们是多么优雅的艺术品。时间不充裕的阅卷老师看到这溢出纸面的“文气”,不消细看文章内容,只凭文章首尾的结构就可以打分,实在是方便还挑不出错。诚然,萌芽体对于作者的文学底蕴还是有很高的要求的,能从唐诗三百首里随手抄出几篇适用的句子的人,其文学功底一定能吊打大部分学生了。且比起初中生贫弱的生活经验,教师们更认同既定成型的诗词和习语这件事虽然令人不爽却也不是不能理解。但这并不能改变萌芽文等同于屁话这一真理。连散文都要借景抒情,言之有物。你一个萌芽体算老几?</p>\n<p> 我的作文虽说也会用些形容比喻,但终究没有跳出以叙事为主抒怀为辅的传统文章的路数。众所周知,叙事题材的质与量是和作者的阅历成正比的。应试教育的皮鞭逼着我每两三天就要写一篇作文出来,一个月一个月过去,我能写的内容越来越少,写出的内容也大同,成了只为了考试拿分的存在。后来我甚至开始在内容上胡编滥造,要按现在的话说就是失去了写作的灵魂。我明明把写作当成自我的诉求和与大人的对话方式,结果不论内容真假,反应在纸面上的评价并不会有什么区别。我也有想过不如放下自己的矜持,向萌芽体妥协。可惜做不到的事情就是做不到,在QQ空间上试水模仿的几篇文章都成了四不像,越看越恶心,被毕业后的我光速扫入了青春的垃圾堆。终于我醒悟了,明明用自己的方式写作就已经能拿学年第一了,为什么非要降维去学写不出实质内容的人用的旁门左道?既然自己这一路走来并无犯错,何必抛弃自己的风格呢?</p>\n<p> 顺带一提,如今的《萌芽》上萌芽体也不再泛滥,其内容回归《格言》之流的青少年文学,实在是令人欣慰。而不幸的是,我这几年的内心变化影响到了我的文字,写出来的东西像是变了质的萌芽体,成了旁门左道中的旁门左道。如果没有心境上的变化怕是很难回到当初大开大阖的文风了。那句话怎么说的来着:人终于会变成他们最讨厌的存在。</p>\n<p> 唉怎么又讲了一堆屁话,我明明只是想表达我有一些不想被爬虫看到的内容的。总之,和一切科技一样,爬虫这一概念是不带有善恶的标签的。通用的爬虫,如Google搜索引擎的前端,是不会在乎你我的喜悲的。它们只会遵循着繁复的规则,日复一日地构建互联网的索引。而如果你像我一样多愁善感,你可以编写一个在乎你的感受的爬虫,意即聚焦型网络爬虫。而这这就是本文的目的了。</p>\n<h1 id=\"爬虫\"><a href=\"#爬虫\" class=\"headerlink\" title=\"爬虫\"></a>爬虫</h1><p> 我选择使用.NET生态中人气最旺的Abot来完成这项任务。Abot爬虫由许多个小型的组件组成,拥有极高的可插拔性和可延伸性。它还使用了大量惰性初始化,最大化去掉了多余的运算,因此它的运行速度很快。可惜的是,由于维护人手的不足,Abot目前是不支持.NET Standard的。但如果不嫌麻烦的话稍微修改源代码再编译应该也不会很麻烦。</p>\n<p> 首先创建一个空的.NET Framework控制台程序,然后在NuGet包管理器中安装Abot(<a href=\"https://github.com/sjdirect/abot\" target=\"_blank\" rel=\"noopener\">源代码</a>)。注意不是AbotX,那是Abot的商业版本。</p>\n<center><img src=\"/2019/07/29/20190729/1.png\"></center>\n\n<h2 id=\"配置\"><a href=\"#配置\" class=\"headerlink\" title=\"配置\"></a>配置</h2><p> Abot提供了三种不同的配置方法:配置文件、配置对象、或是两者混用。</p>\n<h3 id=\"使用配置文件\"><a href=\"#使用配置文件\" class=\"headerlink\" title=\"使用配置文件\"></a>使用配置文件</h3><p> 此方式适用于.NET及ASP.NET环境下的程序。</p>\n<p> 在<code>app.config</code>或是<code>web.config</code>文件中加入如下字段,并进行相应的调整。</p>\n<figure class=\"highlight xml\"><table><tr><td 
class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span class=\"line\">31</span><br><span class=\"line\">32</span><br><span class=\"line\">33</span><br><span class=\"line\">34</span><br><span class=\"line\">35</span><br><span class=\"line\">36</span><br><span class=\"line\">37</span><br><span class=\"line\">38</span><br><span class=\"line\">39</span><br><span class=\"line\">40</span><br><span class=\"line\">41</span><br><span class=\"line\">42</span><br><span class=\"line\">43</span><br><span class=\"line\">44</span><br><span class=\"line\">45</span><br><span class=\"line\">46</span><br><span class=\"line\">47</span><br><span class=\"line\">48</span><br><span class=\"line\">49</span><br><span class=\"line\">50</span><br><span class=\"line\">51</span><br><span class=\"line\">52</span><br><span class=\"line\">53</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"tag\"><<span class=\"name\">configuration</span>></span></span><br><span class=\"line\"> <span 
class=\"tag\"><<span class=\"name\">configSections</span>></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">section</span> <span class=\"attr\">name</span>=<span class=\"string\">\"abot\"</span> <span class=\"attr\">type</span>=<span class=\"string\">\"Abot.Core.AbotConfigurationSectionHandler, Abot\"</span>/></span></span><br><span class=\"line\"> <span class=\"tag\"></<span class=\"name\">configSections</span>></span></span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">abot</span>></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">crawlBehavior</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxConcurrentThreads</span>=<span class=\"string\">\"10\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxPagesToCrawl</span>=<span class=\"string\">\"1000\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxPagesToCrawlPerDomain</span>=<span class=\"string\">\"0\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxPageSizeInBytes</span>=<span class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">userAgentString</span>=<span class=\"string\">\"Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">crawlTimeoutSeconds</span>=<span class=\"string\">\"0\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">downloadableContentTypes</span>=<span class=\"string\">\"text/html, text/plain\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isUriRecrawlingEnabled</span>=<span class=\"string\">\"false\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span 
class=\"attr\">isExternalPageCrawlingEnabled</span>=<span class=\"string\">\"false\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isExternalPageLinksCrawlingEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">httpServicePointConnectionLimit</span>=<span class=\"string\">\"200\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">httpRequestTimeoutInSeconds</span>=<span class=\"string\">\"15\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">httpRequestMaxAutoRedirects</span>=<span class=\"string\">\"7\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isHttpRequestAutoRedirectsEnabled</span>=<span class=\"string\">\"true\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isHttpRequestAutomaticDecompressionEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isSendingCookiesEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isSslCertificateValidationEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isRespectUrlNamedAnchorOrHashbangEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">minAvailableMemoryRequiredInMb</span>=<span class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxMemoryUsageInMb</span>=<span class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxMemoryUsageCacheTimeInSeconds</span>=<span 
class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxCrawlDepth</span>=<span class=\"string\">\"1000\"</span></span></span><br><span class=\"line\"><span class=\"tag\">\t <span class=\"attr\">maxLinksPerPage</span>=<span class=\"string\">\"1000\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isForcedLinkParsingEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxRetryCount</span>=<span class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">minRetryDelayInMilliseconds</span>=<span class=\"string\">\"0\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> /></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">authorization</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isAlwaysLogin</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">loginUser</span>=<span class=\"string\">\"\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">loginPassword</span>=<span class=\"string\">\"\"</span> /></span>\t </span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">politeness</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isRespectRobotsDotTextEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isRespectMetaRobotsNoFollowEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\">\t <span class=\"attr\">isRespectHttpXRobotsTagHeaderNoFollowEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span 
class=\"attr\">isRespectAnchorRelNoFollowEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">isIgnoreRobotsDotTextIfRootDisallowedEnabled</span>=<span class=\"string\">\"false\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">robotsDotTextUserAgentString</span>=<span class=\"string\">\"abot\"</span></span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">maxRobotsDotTextCrawlDelayInSeconds</span>=<span class=\"string\">\"5\"</span> </span></span><br><span class=\"line\"><span class=\"tag\"> <span class=\"attr\">minCrawlDelayPerDomainMilliSeconds</span>=<span class=\"string\">\"0\"</span>/></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">extensionValues</span>></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">add</span> <span class=\"attr\">key</span>=<span class=\"string\">\"key1\"</span> <span class=\"attr\">value</span>=<span class=\"string\">\"value1\"</span>/></span></span><br><span class=\"line\"> <span class=\"tag\"><<span class=\"name\">add</span> <span class=\"attr\">key</span>=<span class=\"string\">\"key2\"</span> <span class=\"attr\">value</span>=<span class=\"string\">\"value2\"</span>/></span></span><br><span class=\"line\"> <span class=\"tag\"></<span class=\"name\">extensionValues</span>></span></span><br><span class=\"line\"> <span class=\"tag\"></<span class=\"name\">abot</span>></span> </span><br><span class=\"line\"><span class=\"tag\"></<span class=\"name\">configuration</span>></span> </span><br></pre></td></tr></table></figure>\n\n<h3 id=\"使用配置对象\"><a href=\"#使用配置对象\" class=\"headerlink\" title=\"使用配置对象\"></a>使用配置对象</h3><p> 创建一个<code>Abot.Poco.CrawlConfiguration</code>对象并修改其中的内容:</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span 
class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br></pre></td><td class=\"code\"><pre><span class=\"line\">CrawlConfiguration crawlConfig = <span class=\"keyword\">new</span> CrawlConfiguration</span><br><span class=\"line\">{</span><br><span class=\"line\"> CrawlTimeoutSeconds = <span class=\"number\">100</span>,</span><br><span class=\"line\"> MaxConcurrentThreads = <span class=\"number\">10</span>,</span><br><span class=\"line\"> MaxPagesToCrawl = <span class=\"number\">1000</span>,</span><br><span class=\"line\">};</span><br></pre></td></tr></table></figure>\n\n<h3 id=\"配置文件与配置对象混用\"><a href=\"#配置文件与配置对象混用\" class=\"headerlink\" title=\"配置文件与配置对象混用\"></a>配置文件与配置对象混用</h3><figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br></pre></td><td class=\"code\"><pre><span class=\"line\">CrawlConfiguration crawlConfig = AbotConfigurationSectionHandler.LoadFromXml().Convert();</span><br><span class=\"line\">crawlConfig.MaxPagesToCrawl = <span class=\"number\">0</span>; <span class=\"comment\">// 无抓取上限</span></span><br></pre></td></tr></table></figure>\n\n<h2 id=\"上手\"><a href=\"#上手\" class=\"headerlink\" title=\"上手\"></a>上手</h2><p> 新建一个Crawler类,并声明一个<code>PoliteWebCrawler</code>对象。在Abot框架中,<code>PoliteWebCrawler</code>是一切指令和组件的入口。是的,组件,Abot提供了一套高度可自定义的插件接口,它们都可以通过导入<code>PoliteWebCrawler</code>的构造函数来使用。这一点后面会详解。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span 
class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"keyword\">using</span> Abot.Crawler;</span><br><span class=\"line\"><span class=\"keyword\">using</span> System;</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"keyword\">namespace</span> <span class=\"title\">SampleSearchEngine</span></span><br><span class=\"line\">{</span><br><span class=\"line\"> <span class=\"keyword\">public</span> <span class=\"keyword\">class</span> <span class=\"title\">Crawler</span></span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">private</span> <span class=\"keyword\">readonly</span> PoliteWebCrawler _crawler;</span><br><span class=\"line\"> <span class=\"keyword\">public</span> Uri RootUrl { <span class=\"keyword\">get</span>; <span class=\"keyword\">set</span>; } = <span class=\"keyword\">new</span> Uri(<span class=\"string\">\"https://github.com/\"</span>);</span><br><span class=\"line\"> <span class=\"comment\">// 储存结果的容器,SitePage包含标题、URL、内容等页面基本元素</span></span><br><span class=\"line\"> <span class=\"keyword\">public</span> Dictionary<<span class=\"keyword\">string</span>, SitePage> Pages { <span class=\"keyword\">get</span>; <span class=\"keyword\">private</span> <span class=\"keyword\">set</span>; } = <span class=\"keyword\">new</span> Dictionary<<span class=\"keyword\">string</span>, SitePage>();</span><br><span class=\"line\"> 
<span class=\"keyword\">private</span> <span class=\"keyword\">int</span> _totalPages; <span class=\"comment\">// 统计所有找到的页面</span></span><br><span class=\"line\"> <span class=\"keyword\">private</span> <span class=\"keyword\">int</span> _pagesCrawled; <span class=\"comment\">// 统计爬过的页面</span></span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"function\"><span class=\"keyword\">public</span> <span class=\"title\">Crawler</span>(<span class=\"params\"></span>)</span></span><br><span class=\"line\"><span class=\"function\"></span> {</span><br><span class=\"line\"> _crawler = <span class=\"keyword\">new</span> PoliteWebCrawler(); <span class=\"comment\">// 简单构造</span></span><br><span class=\"line\"> <span class=\"comment\">// _crawler = new PoliteWebCrawler(crawlConfig, null, null, null, null, null, null, null, null); // 复杂构造</span></span><br><span class=\"line\"> }</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> [<span class=\"meta\">Serializable</span>]</span><br><span class=\"line\"> <span class=\"keyword\">public</span> <span class=\"keyword\">class</span> <span class=\"title\">SitePage</span></span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">public</span> <span class=\"keyword\">string</span> Url { <span class=\"keyword\">get</span>; <span class=\"keyword\">set</span>; }</span><br><span class=\"line\"> <span class=\"keyword\">public</span> <span class=\"keyword\">string</span> Title { <span class=\"keyword\">get</span>; <span class=\"keyword\">set</span>; }</span><br><span class=\"line\"> <span class=\"keyword\">public</span> <span class=\"keyword\">string</span> Content { <span class=\"keyword\">get</span>; <span class=\"keyword\">set</span>; }</span><br><span class=\"line\"> }</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<p> 
如代码所示,可以看出<code>PoliteWebCrawler</code>有简单和复杂的构造函数。在复杂函数的参数全为<code>null</code>时两者相同,都是使用Abot默认的构建方法。如果有自定义的插件则可以在调用复杂构造函数时引入,但不能在对象已被创建后替换。</p>\n<h2 id=\"自定义爬虫\"><a href=\"#自定义爬虫\" class=\"headerlink\" title=\"自定义爬虫\"></a>自定义爬虫</h2><p> Abot的爬虫由9个部件组成:配置、决策、线程管理、资源调度、页面请求、超链接解析、内存管理,还有robot.txt文件的处理机制。我认为Abot提供的几个默认组件已经能满足绝大多数用户的需求。出于不想跑题的理由,这里只详细谈一谈最重要的决策器。</p>\n<h3 id=\"决策器\"><a href=\"#决策器\" class=\"headerlink\" title=\"决策器\"></a>决策器</h3><p> 这一定是每个写爬虫的人最关心的内容之一。开发者可以通过这个接口来决定爬哪些网页,抓取哪些内容,放弃哪些嵌套的链接以及何时需要重新抓取页面。</p>\n<figure class=\"highlight csharp\"><figcaption><span>自定义决策逻辑</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span class=\"line\">31</span><br><span class=\"line\">32</span><br><span class=\"line\">33</span><br><span class=\"line\">34</span><br><span class=\"line\">35</span><br><span class=\"line\">36</span><br><span class=\"line\">37</span><br><span class=\"line\">38</span><br><span class=\"line\">39</span><br></pre></td><td class=\"code\"><pre><span 
class=\"line\"><span class=\"keyword\">public</span> <span class=\"keyword\">class</span> <span class=\"title\">CustomDecisionMaker</span> : <span class=\"title\">CrawlDecisionMaker</span>, <span class=\"title\">ICrawlDecisionMaker</span></span><br><span class=\"line\">{</span><br><span class=\"line\"> <span class=\"function\"><span class=\"keyword\">public</span> CrawlDecision <span class=\"title\">ShouldCrawlPage</span>(<span class=\"params\">PageToCrawl pageToCrawl, CrawlContext crawlContext</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span> {</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (pageToCrawl.Uri.Authority == <span class=\"string\">\"google.com\"</span>)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">false</span>, Reason = <span class=\"string\">\"不爬取没用的\"</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">true</span> }; <span class=\"comment\">// 默认爬取所有网页</span></span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"function\"><span class=\"keyword\">public</span> CrawlDecision <span class=\"title\">ShouldCrawlPageLinks</span>(<span class=\"params\">CrawledPage crawledPage, CrawlContext crawlContext</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span> {</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (!crawlContext.CrawlConfiguration.IsExternalPageLinksCrawlingEnabled && !crawledPage.IsInternal)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">false</span>, Reason = <span 
class=\"string\">\"不爬取外部链接\"</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">true</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"function\"><span class=\"keyword\">public</span> CrawlDecision <span class=\"title\">ShouldDownloadPageContent</span>(<span class=\"params\">CrawledPage crawledPage, CrawlContext crawlContext</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span> {</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (crawledPage.HttpWebResponse.StatusCode != HttpStatusCode.OK)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">false</span>, Reason = <span class=\"string\">\"只爬取返回OK的网页\"</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">true</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"comment\">// 使用默认决策</span></span><br><span class=\"line\"> <span class=\"comment\">//public CrawlDecision ShouldRecrawlPage(CrawledPage crawledPage, CrawlContext crawlContext)</span></span><br><span class=\"line\"> <span class=\"comment\">//{</span></span><br><span class=\"line\"> <span class=\"comment\">// if (crawledPage.WebException == null)</span></span><br><span class=\"line\"> <span class=\"comment\">// {</span></span><br><span class=\"line\"> <span class=\"comment\">// return new CrawlDecision { Allow = false, Reason = \"无异常\" };</span></span><br><span class=\"line\"> <span class=\"comment\">// }</span></span><br><span class=\"line\"> <span 
class=\"comment\">// return new CrawlDecision { Allow = true };</span></span><br><span class=\"line\"> <span class=\"comment\">//}</span></span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<p> 如果逻辑不复杂,开发者也可以通过植入的方式替换默认的决策逻辑。</p>\n<figure class=\"highlight csharp\"><figcaption><span>植入式修改决策逻辑</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br></pre></td><td class=\"code\"><pre><span class=\"line\">crawler.ShouldCrawlPage((pageToCrawl, crawlContext) => </span><br><span class=\"line\">{</span><br><span class=\"line\">\t<span class=\"keyword\">if</span> (pageToCrawl.Uri.Authority == <span class=\"string\">\"google.com\"</span>)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">false</span>, Reason = <span class=\"string\">\"不爬取没用的\"</span> };</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">new</span> CrawlDecision { Allow = <span class=\"literal\">true</span> }; <span class=\"comment\">// 默认爬取所有网页</span></span><br><span class=\"line\">});</span><br></pre></td></tr></table></figure>\n\n<h3 id=\"其他的组件\"><a href=\"#其他的组件\" class=\"headerlink\" 
title=\"其他的组件\"></a>其他的组件</h3><ul>\n<li>线程管理器:负责多线程的逻辑,我认为使用配置参数来调整其行为足以满足大部分的需求了。</li>\n<li>资源管理器:负责调度已被爬取和将要被爬取的页面的仓库。除非使用分布式的架构否则也不需要调整。</li>\n<li>页面请求器:负责发送HTTP请求并下载其内容。如果有比较复杂的重定向或用户认证机制便需要重新编写这一类,但如果是常规逻辑则只需要调整配置参数(如使用明文登陆或系统证书等)。</li>\n<li>超链接解析器:负责获取页面上的所有超链接并进行筛选。和决策器所不同的是,决策器掌管整个页面的决定,而解析器负责页面内的细节。</li>\n<li>内存管理单元:负责统筹进程的内存占用,并在空间不够的情况下做出相应的对策。可以配合配置文件里的最大线程参数一起使用。</li>\n<li>最后,robot.txt处理器决定是否听从文件的指示,听从哪些等。一般用户使用默认的就可以。</li>\n</ul>\n<h2 id=\"注册监听器\"><a href=\"#注册监听器\" class=\"headerlink\" title=\"注册监听器\"></a>注册监听器</h2><p> <code>PoliteWebCrawler</code>会监听5种不同的事件,每种监听又有同步和异步两种模式。如何处理这些事件是业务逻辑中最关键的地方。因此,关于爬虫的绝大部分逻辑都会嵌套进这些监听器中。</p>\n<figure class=\"highlight csharp\"><figcaption><span>监听异步事件</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br></pre></td><td class=\"code\"><pre><span class=\"line\">_crawler.PageCrawlStartingAsync += Crawler_ProcessPageCrawlStarting;</span><br><span class=\"line\">_crawler.PageCrawlCompletedAsync += Crawler_ProcessPageCrawlCompleted;</span><br><span class=\"line\">_crawler.PageCrawlDisallowedAsync += Crawler_PageCrawlDisallowed;</span><br><span class=\"line\">_crawler.PageLinksCrawlDisallowedAsync += Crawler_PageLinksCrawlDisallowed;</span><br><span class=\"line\">_crawler.RobotsDotTextParseCompletedAsync += Crawler_RobotsDotTextParseCompleted;</span><br></pre></td></tr></table></figure>\n\n\n<h3 id=\"处理页面前\"><a href=\"#处理页面前\" class=\"headerlink\" title=\"处理页面前\"></a>处理页面前</h3><p> 这一事件触发于从资源仓库提取之后,处理页面之前。一些记录、页面的预处理可以在这一步进行。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span 
class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">Crawler_ProcessPageCrawlStarting</span>(<span class=\"params\"><span class=\"keyword\">object</span> sender, PageCrawlStartingArgs e</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> _totalPages++;</span><br><span class=\"line\"> PageToCrawl pageToCrawl = e.PageToCrawl;</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Crawling <span class=\"subst\">{pageToCrawl.Uri.AbsoluteUri}</span> which was found on page <span class=\"subst\">{pageToCrawl.ParentUri.AbsoluteUri}</span>...\"</span>);</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h3 id=\"处理页面后\"><a href=\"#处理页面后\" class=\"headerlink\" title=\"处理页面后\"></a>处理页面后</h3><p> 这一事件触发于处理页面之后,从仓库中提取下一页面之前。爬虫的主要逻辑都应该放在这里。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span 
class=\"line\">31</span><br><span class=\"line\">32</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">Crawler_ProcessPageCrawlCompleted</span>(<span class=\"params\"><span class=\"keyword\">object</span> sender, PageCrawlCompletedArgs e</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> CrawledPage crawledPage = e.CrawledPage;</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"keyword\">if</span> (crawledPage.WebException != <span class=\"literal\">null</span> || crawledPage.HttpWebResponse.StatusCode != HttpStatusCode.OK)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Warn(crawledPage.WebException,</span><br><span class=\"line\"> <span class=\"string\">$\"Crawl failed for <span class=\"subst\">{crawledPage.Uri.AbsoluteUri}</span>, the host returned <span class=\"subst\">{crawledPage.HttpWebResponse.StatusCode}</span> status code.\"</span>);</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">else</span></span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Crawl of page succeeded <span class=\"subst\">{crawledPage.Uri.AbsoluteUri}</span>\"</span>);</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (<span class=\"keyword\">string</span>.IsNullOrEmpty(crawledPage.Content.Text))</span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Page had no content <span class=\"subst\">{crawledPage.Uri.AbsoluteUri}</span>\"</span>);</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"comment\">// Use AngleSharp HTML parser.</span></span><br><span class=\"line\"> <span 
class=\"keyword\">var</span> document = crawledPage.AngleSharpHtmlDocument;</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (!Pages.ContainsKey(crawledPage.Uri.AbsoluteUri))</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"comment\">// Store page information</span></span><br><span class=\"line\"> Pages.Add(crawledPage.Uri.AbsoluteUri, <span class=\"keyword\">new</span> SitePage</span><br><span class=\"line\"> {</span><br><span class=\"line\"> Url = crawledPage.Uri.AbsoluteUri,</span><br><span class=\"line\"> Title = document.Title,</span><br><span class=\"line\"> Content = document.TextContent, <span class=\"comment\">// 进行一些自定义的页面处理</span></span><br><span class=\"line\"> });</span><br><span class=\"line\"> _pagesCrawled++;</span><br><span class=\"line\"> }</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h3 id=\"禁止爬取页面\"><a href=\"#禁止爬取页面\" class=\"headerlink\" title=\"禁止爬取页面\"></a>禁止爬取页面</h3><p> 这一事件触发于被禁止获取页面内容后,可以配合robots.txt处理机制和决定器一起使用。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">Crawler_PageCrawlDisallowed</span>(<span class=\"params\"><span class=\"keyword\">object</span> sender, PageCrawlDisallowedArgs e</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> PageToCrawl pageToCrawl = e.PageToCrawl;</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Did not crawl page <span class=\"subst\">{pageToCrawl.Uri.AbsoluteUri}</span> due to <span class=\"subst\">{e.DisallowedReason}</span>.\"</span>);</span><br><span
class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h3 id=\"禁止爬取页面内超链接\"><a href=\"#禁止爬取页面内超链接\" class=\"headerlink\" title=\"禁止爬取页面内超链接\"></a>禁止爬取页面内超链接</h3><p> 这一事件触发于被禁止获取页面内某一超链接后,同样需要配合robots.txt处理机制和决定器一起使用。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">Crawler_PageLinksCrawlDisallowed</span>(<span class=\"params\"><span class=\"keyword\">object</span> sender, PageLinksCrawlDisallowedArgs e</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> CrawledPage crawledPage = e.CrawledPage;</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Did not crawl the links on page <span class=\"subst\">{crawledPage.Uri.AbsoluteUri}</span> due to <span class=\"subst\">{e.DisallowedReason}</span>\"</span>);</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h2 id=\"提取DOM内所有的文字信息\"><a href=\"#提取DOM内所有的文字信息\" class=\"headerlink\" title=\"提取DOM内所有的文字信息\"></a>提取DOM内所有的文字信息</h2><p> 由于我的最终目的是要做一个全文字搜索引擎,我需要提取出HTML页面中所有的文字信息。然而我用到的<code>AngleSharp</code>HTML解析器不但会提取出有用的文字信息,还会提取出<code><script></code>、<code><style></code>标签中的JavaScript和CSS代码…这设计的也太蠢了。于是我只好自己写一个。逻辑不难懂,就是对DOM树的递归遍历。但在彻底理解DOM的定义前走了不少弯路。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span
class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span class=\"line\">31</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"keyword\">private</span> <span class=\"keyword\">readonly</span> IEnumerable<<span class=\"keyword\">string</span>> filteredTags = <span class=\"keyword\">new</span> <span class=\"keyword\">string</span>[]</span><br><span class=\"line\">{</span><br><span class=\"line\"> <span class=\"string\">\"STYLE\"</span>, <span class=\"string\">\"SCRIPT\"</span></span><br><span class=\"line\">};</span><br><span class=\"line\"><span class=\"keyword\">private</span> <span class=\"keyword\">readonly</span> IEnumerable<<span class=\"keyword\">string</span>> filteredClassNames = <span class=\"keyword\">new</span> <span class=\"keyword\">string</span>[]</span><br><span class=\"line\">{</span><br><span class=\"line\"> <span class=\"string\">\"aspnethidden\"</span>, <span class=\"string\">\"header\"</span>, <span class=\"string\">\"footer\"</span>, <span class=\"string\">\"modal\"</span></span><br><span class=\"line\">};</span><br><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">string</span> <span class=\"title\">TraverseDomGetContent</span>(<span class=\"params\">IElement node</span>)</span></span><br><span class=\"line\"><span 
class=\"function\"></span>{</span><br><span class=\"line\"> <span class=\"comment\">// Filter unwanted tags and class names (e.g. footers)</span></span><br><span class=\"line\"> <span class=\"keyword\">if</span> (filteredTags.Contains(node.TagName)) <span class=\"keyword\">return</span> <span class=\"literal\">null</span>;</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (node.ClassName != <span class=\"literal\">null</span>)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">string</span> lowerClassName = node.ClassName.ToLower();</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (filteredClassNames.Any(n => lowerClassName.Contains(n))) <span class=\"keyword\">return</span> <span class=\"literal\">null</span>;</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"comment\">// Get and join all text content from children</span></span><br><span class=\"line\"> <span class=\"keyword\">string</span> childrenJoin = <span class=\"keyword\">string</span>.Empty;</span><br><span class=\"line\"> <span class=\"keyword\">foreach</span> (IElement child <span class=\"keyword\">in</span> node.Children)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">string</span> childString = TraverseDomGetContent(child);</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (childString != <span class=\"literal\">null</span>) childrenJoin += childString + <span class=\"string\">\" \"</span>;</span><br><span class=\"line\"> }</span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"comment\">// If all children have no content, return text content of current node.</span></span><br><span class=\"line\"> <span class=\"comment\">// Else return the content of children.</span></span><br><span class=\"line\"> <span class=\"keyword\">string</span> content = childrenJoin == <span 
class=\"keyword\">string</span>.Empty ? node.TextContent.Trim(trimmedCharacters) : childrenJoin.Trim();</span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"keyword\">string</span>.IsNullOrEmpty(content) ? <span class=\"literal\">null</span> : content;</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h2 id=\"运行\"><a href=\"#运行\" class=\"headerlink\" title=\"运行\"></a>运行</h2><p> 一切准备完毕后,运行爬虫并验证结果。最后将结果导出至文件或数据库。</p>\n<figure class=\"highlight csharp\"><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span class=\"line\">31</span><br><span class=\"line\">32</span><br><span class=\"line\">33</span><br><span class=\"line\">34</span><br><span class=\"line\">35</span><br><span class=\"line\">36</span><br><span class=\"line\">37</span><br><span class=\"line\">38</span><br><span class=\"line\">39</span><br><span class=\"line\">40</span><br><span class=\"line\">41</span><br><span class=\"line\">42</span><br><span 
class=\"line\">43</span><br><span class=\"line\">44</span><br><span class=\"line\">45</span><br><span class=\"line\">46</span><br><span class=\"line\">47</span><br><span class=\"line\">48</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"function\"><span class=\"keyword\">public</span> <span class=\"keyword\">void</span> <span class=\"title\">Run</span>(<span class=\"params\"></span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> CrawlResult result = _crawler.Crawl(RootUrl); <span class=\"comment\">// 注意此为同步任务,且必须同步</span></span><br><span class=\"line\"></span><br><span class=\"line\"> <span class=\"keyword\">if</span> (result.ErrorOccurred)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Error(<span class=\"string\">$\"Crawl of <span class=\"subst\">{result.RootUri.AbsoluteUri}</span> completed with error: <span class=\"subst\">{result.ErrorException.Message}</span>\"</span>);</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">else</span></span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Crawl of <span class=\"subst\">{result.RootUri.AbsoluteUri}</span> completed.\"</span>);</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Total pages found: <span class=\"subst\">{_totalPages}</span>. 
Pages crawled: <span class=\"subst\">{_pagesCrawled}</span>.\"</span>);</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"meta\">#<span class=\"meta-keyword\">if</span> DEBUG</span></span><br><span class=\"line\"> ExportToJson(Pages);</span><br><span class=\"line\"><span class=\"meta\">#<span class=\"meta-keyword\">else</span></span></span><br><span class=\"line\"> ExportToDatabase(Pages);</span><br><span class=\"line\"><span class=\"meta\">#<span class=\"meta-keyword\">endif</span></span></span><br><span class=\"line\"> }</span><br><span class=\"line\">}</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">ExportToJson</span>(<span class=\"params\">Dictionary<<span class=\"keyword\">string</span>, SitePage> pages</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> <span class=\"keyword\">string</span> fileName = <span class=\"string\">\"output.json\"</span>;</span><br><span class=\"line\"> <span class=\"keyword\">string</span> content = JsonConvert.SerializeObject(pages);</span><br><span class=\"line\"> File.WriteAllText(fileName, content);</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">$\"Wrote to <span class=\"subst\">{Path.GetFullPath(Directory.GetCurrentDirectory() + <span class=\"string\">'\\\\'</span> + fileName)}</span>\"</span>);</span><br><span class=\"line\">}</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"function\"><span class=\"keyword\">private</span> <span class=\"keyword\">void</span> <span class=\"title\">ExportToDatabase</span>(<span class=\"params\">Dictionary<<span class=\"keyword\">string</span>, SitePage> pages</span>)</span></span><br><span class=\"line\"><span class=\"function\"></span>{</span><br><span class=\"line\"> <span class=\"keyword\">try</span></span><br><span
class=\"line\"> {</span><br><span class=\"line\"> _entities.SitePages.AddOrUpdate(pages.Values.ToArray());</span><br><span class=\"line\"> _entities.SaveChanges();</span><br><span class=\"line\"> }</span><br><span class=\"line\"> <span class=\"keyword\">catch</span> (DbEntityValidationException ex)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">foreach</span> (<span class=\"keyword\">var</span> err <span class=\"keyword\">in</span> ex.EntityValidationErrors)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> <span class=\"keyword\">foreach</span> (<span class=\"keyword\">var</span> subErr <span class=\"keyword\">in</span> err.ValidationErrors)</span><br><span class=\"line\"> {</span><br><span class=\"line\"> _logger.Error(<span class=\"string\">$\"Validation failed for property <span class=\"subst\">{subErr.PropertyName}</span> because <span class=\"subst\">{subErr.ErrorMessage}</span>.\"</span>);</span><br><span class=\"line\"> }</span><br><span class=\"line\"> }</span><br><span class=\"line\"> }</span><br><span class=\"line\"> _logger.Info(<span class=\"string\">\"Wrote to database.\"</span>);</span><br><span class=\"line\">}</span><br></pre></td></tr></table></figure>\n\n<h1 id=\"小结\"><a href=\"#小结\" class=\"headerlink\" title=\"小结\"></a>小结</h1><p> 至此关于Abot爬虫的概览便总结完毕。下一步我想做的是基于SQL Server的简易全文字搜索引擎。具体怎么实现等下回再说。</p>\n","categories":["随笔"],"tags":[".NET"]},{"title":"和密码库泄露的斗争","url":"https://xhw994.github.io/2019/04/15/20190414/","content":"<h1 id=\"要防弱密码,也要防数据库。但主要还是防数据库。\"><a href=\"#要防弱密码,也要防数据库。但主要还是防数据库。\" class=\"headerlink\" title=\"要防弱密码,也要防数据库。但主要还是防数据库。\"></a>要防弱密码,也要防数据库。但主要还是防数据库。</h1><p> 任何一个程序员,哪怕是软件工程的初学者,都应该清楚创建一个坚不可摧的密码的重要性。事实上,随着学习的深入,这四年内我的通用密码也已经改变了两次,并且对于那些使用一次后再也不需要访问的网站,我也开始使用<a href=\"https://keepersecurity.com/\" target=\"_blank\" rel=\"noopener\">Keeper</a>或是Google的随机密码服务了。但强大的密码并不能让我高枕无忧,因为大部分情况下,问题会出在储存密码的那一边。我到现在都有些不敢相信为什么Facebook会<a 
href=\"https://krebsonsecurity.com/2019/03/facebook-stored-hundreds-of-millions-of-user-passwords-in-plain-text-for-years/\" target=\"_blank\" rel=\"noopener\">明文存储</a>密码。但既然接受了这个事实,我便清楚地明白这世上没有几个可以让我放心的数据库。</p>\n<p> 我已经忘了第一次被盗用的情况了,能追溯的最早的是QQ号和VIP邮箱的盗用。某一段时间某个缺德份子在我的QQ空间里上传了大量的情色广告图片,但出人意料的是,他没有顺走我拿来养QQ宠物的Q币以及云盘(当时还不叫这个名字)里的敏感文件。真是一个有职业道德的人。后来我的VIP邮箱也被盗用了。缺德份子用我的账户信息在其他网站上进行了碰撞测试,并成功登陆了我的台服战网和一个我忘记我注册过的Steam账号。虽然没有造成太大的影响,但总归是心里面一根刺。雪上加霜的是,腾讯是不允许人工删除QQ号的。哪怕5年没有登陆我的QQ账号都没有被系统注销,更不用说VIP邮箱了。对它们我就只好不理不问。</p>\n<p> 那么我的密码是腾讯泄露的吗?QQ号我不知道,那是太久之前的事儿了,但VIP邮箱的泄露是有迹可循的。这里要感谢Youtube上 <a href=\"https://www.youtube.com/user/Computerphile/\" target=\"_blank\" rel=\"noopener\">Computerphile</a> 频道提供的网站 <a href=\"https://haveibeenpwned.com/\" target=\"_blank\" rel=\"noopener\">have i been pwned</a>(HIBP),它提供了快捷且安全的用户界面和API用来查询数据库泄露。</p>\n<h2 id=\"使用HIBP查验邮箱\"><a href=\"#使用HIBP查验邮箱\" class=\"headerlink\" title=\"使用HIBP查验邮箱\"></a>使用HIBP查验邮箱</h2><p> 在HIBP主界面键入邮箱,它很快便告诉我这次泄露的罪魁祸首是网易。</p>\n<center><img src=\"/2019/04/15/20190414/1.jpg\" title=\"This is an image\"></center>\n\n<p> 这件事我当时在微博上也有耳闻,但并没有想过太多,因为我本以为我没有用过网易服务的。我对着这个结果想了很久才意识到:十几年前,在我小学二年级的时候,我玩过《梦幻西游》,那是一款网易出品的游戏。当年的我还用零花钱买了将军令,也就是专门为《梦幻西游》开发的电子密保设备,这足可以看出我在小学就已经有一定的网络安全意识了。但又能如何呢?问题不是出在我身上,但泄露还是发生了。</p>\n<p> 根据链接的<a href=\"http://news.mydrivers.com/1/452/452173.htm\" target=\"_blank\" rel=\"noopener\">国内新闻</a>,这次发生在2015年的泄露总共包含五亿多条数据信息,包含用户名、MD5加密的密码、密码提示问题和答案、以及注册的IP和生日等。而HIBP官方给出的详细数据和新闻有相当的<a href=\"https://www.troyhunt.com/handling-chinese-data-breaches-in-have-i-been-pwned/\" target=\"_blank\" rel=\"noopener\">出入</a>:约有两亿三千万账户被拖库,且泄露的信息包含明文密码。且不说MD5的密码已经可以被轻松<a href=\"https://www.md5online.org/md5-decrypt.html\" target=\"_blank\" rel=\"noopener\">暴力破解</a>,就凭HIBP给出的数据更为详实这一点我就愿意相信我的明文密码已经被泄露了。</p>\n<p> 测试其他邮箱的结果也令我失望:<a href=\"https://www.troyhunt.com/adobe-credentials-and-serious/\" target=\"_blank\" rel=\"noopener\">Adobe</a>,<a href=\"https://www.nexusmods.com/games/news/12670/\" target=\"_blank\" 
rel=\"noopener\">Nexus Mod</a>都泄露过我的用户信息。其中Nexus只泄露了加盐的密码,而Adobe因为加密方式太过幼稚完全可以看作是明文泄露。</p>\n<h2 id=\"使用HIBP查验密码\"><a href=\"#使用HIBP查验密码\" class=\"headerlink\" title=\"使用HIBP查验密码\"></a>使用HIBP查验密码</h2><p> 泄露邮箱已经是件大事了,但明文密码的泄露要更加严峻。很多人一生只用一个弱密码,如果这个密码被泄露了那他所有的信息都玩儿完了。我虽然有一定的安全意识,但让我给每个服务都设置一个独立且能记住的密码也是不现实的。我的方法不过是牢记三种独特的密码,再根据网站名添加特殊符号罢了。</p>\n<p> HIBP提供了查验密码的用户界面和API。这里我使用用户界面来查询我的QQ密码:</p>\n<center><img src=\"/2019/04/15/20190414/2.jpg\" title=\"This is an image\"></center>\n\n<p> 结果残酷到让我想拿块豆腐撞死。其实我很想在这里写下我的密码的,但因为我的密保手机号处在冻结状态,我没办法更改我的QQ密码,因此只能作罢。如果用一句话来形容的话,那是一个看上去很酷也很安全,但仔细一想一定有无数人用过的密码。虽然我已经不用了但里面仍然有一些没清干净的信息,如果你们有人猜到那是什么密码的话还请手下留情。</p>\n<p> 查验其他密码的结果令我欣慰:我的三种密码和十几种变体没有任何一个被明文泄露过。这虽然不代表它们就一定是牢不可破的,但至少我现在不需要担忧过多。</p>\n<h1 id=\"HIBP真的安全吗?\"><a href=\"#HIBP真的安全吗?\" class=\"headerlink\" title=\"HIBP真的安全吗?\"></a>HIBP真的安全吗?</h1><p> 看到这里,有些人会说:你明文传送密码给HIBP难道就是安全的吗?你能保证这不是钓鱼吗?哪怕HIBP是可以信任的,哪怕它发送给后端的是加密过的密码,难道它就不能用加密好的密码去撞库吗?放心,HIBP早已经想好怎么解决这些问题了。</p>\n<p> HIBP使用SHA-1来存储密码,它虽然不如其他加密手段安全,但却是明文或MD5这样的弱加密与bcrypt等强加密的良好折中。毕竟这些不是敏感的用户信息,而是网上随处可见的泄露数据。</p>\n<p> 假设我想验证我的密码<code>P@ssw0rd</code>是否已被泄露,我首先要生成它的SHA-1的哈希值<code>21BD12DC183F740EE76F27B78EB39C8AD972A757</code>。现在如果我完全相信HIBP的话,我可以发送一个GET请求给HIBP来验证这段哈希值是否存在在它的数据库里。如果是的话我就可以改密码,如果不是的话我就可以关掉网页,高枕无忧了。但事实上,且不说我的包会不会被劫持,我真的愿意相信HIBP不会反向查询我的密码吗?答案是否定的。</p>\n<p> HIBP真正的安全之处在于它使用了k-Anonymity查询法。关于它的详细以及整个后端如何运作可以参考他们的<a href=\"https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/\" target=\"_blank\" rel=\"noopener\">博客文章</a>。这里我只概述一下它的运行原理:</p>\n<p> 在上文中,我已经得到了我的密码的哈希值。但我并不需要将完整的哈希值发送给HIBP。相反,我只需要发送一定长度的子字符串给HIBP,配合本地验证我也可以得到我想要的结果。HIBP在这一步取k为5,也就是说,我只要发送哈希值的头5个字符<code>21BD1</code>就可以了。</p>\n<p> 收到请求后,HIBP会查询所有头5位为<code>21BD1</code>的哈希值并返回。返回的值大概有500条左右,其中每一条都对应着不同的密码,但它们的哈希值的前5位都是相同的。注意冒号后面的数字是这个哈希值在所有数据库泄露中出现的频率:</p>\n<blockquote>\n<p>(21BD1) 0018A45C4D1DEF81644B54AB7F969B88D65:1 (对应 “lauragpe”)<br>(21BD1) 00D4F6E8FA6EECAD2A3AA415EEC418D38EC:2 (对应 “alexguo029”)<br>(21BD1) 011053FD0102E94D6AE2F8B83D76FAF94F6:1 
(对应 “BDnd9102”)<br>(21BD1) 012A7CA357541F0AC487871FEEC1891C49C:2 (对应 “melobie”)<br>(21BD1) 0136E006E24E7D152139815FB0FC6A50B15:2 (对应 “quvekyny”)<br>…</p>\n</blockquote>\n<p> 我收到这500多个哈希后,只需要搜索自己的哈希值是否存在于这一串返回值里就可以判断我的密码是否被盗用了,并且因为HIBP采用了固定的<code>k</code>值,数据库索引非常有效,所以整个流程毫不费时。</p>\n<p> 使用Postman模拟这一过程,我得到了527个结果。每一条都代表着一个截取掉头5位的哈希值:</p>\n<center><img src=\"/2019/04/15/20190414/3.jpg\"></center>\n\n<p> 搜寻哈希值的后35位<code>2DC183F740EE76F27B78EB39C8AD972A757</code>,可知<code>P@ssw0rd</code>这个密码在所有已知的数据库泄露中共出现五万余次——不愧是非常差的密码。而从它的“前辈”<code>password</code>“仅仅”被泄露了三百多万次这一点可以看出使用Leet文字修饰密码这一方法也不能保证有效。</p>\n<center><img src=\"/2019/04/15/20190414/4.jpg\"></center>\n\n<h1 id=\"简易终端\"><a href=\"#简易终端\" class=\"headerlink\" title=\"简易终端\"></a>简易终端</h1><p> 某些密码管理软件(如<a href=\"https://1password.com/\" target=\"_blank\" rel=\"noopener\">1password</a>)默认使用HIBP来探测潜在的风险。但那些不使用密码管理软件,或者是用Chrome自带密码管理软件的人(比如我)也有查验密码的需求。所以我需要一个能快速查询大量密码的脚本:</p>\n<figure class=\"highlight python\"><figcaption><span>查询脚本,支持csv和txt文件</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br><span class=\"line\">13</span><br><span class=\"line\">14</span><br><span class=\"line\">15</span><br><span class=\"line\">16</span><br><span class=\"line\">17</span><br><span class=\"line\">18</span><br><span class=\"line\">19</span><br><span class=\"line\">20</span><br><span class=\"line\">21</span><br><span class=\"line\">22</span><br><span class=\"line\">23</span><br><span class=\"line\">24</span><br><span class=\"line\">25</span><br><span class=\"line\">26</span><br><span class=\"line\">27</span><br><span
class=\"line\">28</span><br><span class=\"line\">29</span><br><span class=\"line\">30</span><br><span class=\"line\">31</span><br><span class=\"line\">32</span><br><span class=\"line\">33</span><br><span class=\"line\">34</span><br><span class=\"line\">35</span><br><span class=\"line\">36</span><br><span class=\"line\">37</span><br><span class=\"line\">38</span><br><span class=\"line\">39</span><br><span class=\"line\">40</span><br><span class=\"line\">41</span><br><span class=\"line\">42</span><br><span class=\"line\">43</span><br><span class=\"line\">44</span><br><span class=\"line\">45</span><br><span class=\"line\">46</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"keyword\">import</span> pandas <span class=\"keyword\">as</span> pd</span><br><span class=\"line\"><span class=\"keyword\">import</span> hashlib <span class=\"keyword\">as</span> hl</span><br><span class=\"line\"><span class=\"keyword\">import</span> requests <span class=\"keyword\">as</span> req</span><br><span class=\"line\"><span class=\"keyword\">import</span> sys</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"comment\"># Send GET request to HIBP API</span></span><br><span class=\"line\"><span class=\"comment\"># to check if the given password has been pwned</span></span><br><span class=\"line\"><span class=\"function\"><span class=\"keyword\">def</span> <span class=\"title\">check_one</span><span class=\"params\">(p)</span>:</span></span><br><span class=\"line\"> api = <span class=\"string\">'https://api.pwnedpasswords.com/range/'</span></span><br><span class=\"line\"> <span class=\"comment\"># HIBP stores password with SHA-1 in uppercase</span></span><br><span class=\"line\"> h = hl.sha1(p.encode(<span class=\"string\">'utf-8'</span>)).hexdigest().upper()</span><br><span class=\"line\"> <span class=\"comment\"># k-Anonymity, k=5</span></span><br><span class=\"line\"> r = req.get(api + h[:<span class=\"number\">5</span>])</span><br><span class=\"line\"> <span class=\"keyword\">for</span> line <span
class=\"keyword\">in</span> r.text.split(<span class=\"string\">'\\r\\n'</span>):</span><br><span class=\"line\"> <span class=\"keyword\">if</span> h[<span class=\"number\">5</span>:] <span class=\"keyword\">in</span> line:</span><br><span class=\"line\"> <span class=\"comment\"># Return breach count</span></span><br><span class=\"line\"> <span class=\"keyword\">return</span> int(line.split(<span class=\"string\">':'</span>)[<span class=\"number\">1</span>])</span><br><span class=\"line\"> <span class=\"comment\"># Safe password</span></span><br><span class=\"line\"> <span class=\"keyword\">return</span> <span class=\"number\">0</span></span><br><span class=\"line\"></span><br><span class=\"line\">fname = sys.argv[<span class=\"number\">1</span>]</span><br><span class=\"line\">df = pd.read_csv(fname)</span><br><span class=\"line\"><span class=\"comment\"># Format dataframe if input is a text file</span></span><br><span class=\"line\"><span class=\"keyword\">if</span> fname.endswith(<span class=\"string\">'.txt'</span>):</span><br><span class=\"line\"> df.columns = [<span class=\"string\">'password'</span>]</span><br><span class=\"line\"><span class=\"comment\"># Group by password to reduce the number of requests</span></span><br><span class=\"line\">dfg = df.groupby(<span class=\"string\">'password'</span>)</span><br><span class=\"line\"></span><br><span class=\"line\">flag = <span class=\"literal\">False</span></span><br><span class=\"line\"><span class=\"comment\"># Iterate through all passwords</span></span><br><span class=\"line\"><span class=\"keyword\">for</span> name <span class=\"keyword\">in</span> dfg.groups.keys():</span><br><span class=\"line\"> t = check_one(name)</span><br><span class=\"line\"> <span class=\"keyword\">if</span> (t > <span class=\"number\">0</span>): <span class=\"comment\"># Bad password</span></span><br><span class=\"line\"> flag = <span class=\"literal\">True</span></span><br><span
class=\"line\"> print(<span class=\"string\">\"'{}' has been compromised {} times!\"</span>.format(name, t))</span><br><span class=\"line\"> <span class=\"comment\"># Print more information if given a Chrome export</span></span><br><span class=\"line\"> <span class=\"keyword\">if</span> (fname.endswith(<span class=\"string\">'.csv'</span>)):</span><br><span class=\"line\"> g = dfg.get_group(name)[[<span class=\"string\">'name'</span>,<span class=\"string\">'username'</span>]].rename(columns={<span class=\"string\">'name'</span>:<span class=\"string\">'website'</span>})</span><br><span class=\"line\"> g.index = [<span class=\"string\">''</span>] * len(g)</span><br><span class=\"line\"> print(<span class=\"string\">\"It has been used in the following websites:\"</span>)</span><br><span class=\"line\"> print(g)</span><br><span class=\"line\"> print(<span class=\"string\">\"\"</span>)</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"comment\"># All passwords pass the test, amazing!</span></span><br><span class=\"line\"><span class=\"keyword\">if</span> <span class=\"keyword\">not</span> flag:</span><br><span class=\"line\"> print(<span class=\"string\">\"Wow, such good passwords\"</span>)</span><br></pre></td></tr></table></figure>\n\n<p> 我的初衷是只读取文本文档的,但后来我发现Chrome是可以<a href=\"https://support.google.com/chrome/answer/95606?co=GENIE.Platform%3DDesktop&hl=zh-Hans\" target=\"_blank\" rel=\"noopener\">导出</a>存储的密码到csv文件的,于是我便引入了pandas来读取csv文件。下面是部分运行结果:</p>\n<blockquote>\n<p>‘admin’ has been compromised 44432 times!<br>It has been used in the following websites:<br> website username<br> <a href=\"http://www.somewebsite.com\" target=\"_blank\" rel=\"noopener\">www.somewebsite.com</a> admin</p>\n</blockquote>\n<blockquote>\n<p>‘ak607425’ has been compromised 5 times!<br>It has been used in the following websites:<br> website username<br> wc.wolframalpha.com <a href=\"mailto:fakeemail@gmail.com\" target=\"_blank\" 
rel=\"noopener\">fakeemail@gmail.com</a></p>\n</blockquote>\n<blockquote>\n<p>‘group13’ has been compromised 134 times!<br>It has been used in the following websites:<br> website username<br> quizlet.com NaN</p>\n</blockquote>\n<blockquote>\n<p>‘password’ has been compromised 3645804 times!<br>It has been used in the following websites:<br> website username<br> proj1.herokuapp.com <a href=\"mailto:fake@uni.ca\" target=\"_blank\" rel=\"noopener\">fake@uni.ca</a><br> proj2.herokuapp.com <a href=\"mailto:fake@uni.ca\" target=\"_blank\" rel=\"noopener\">fake@uni.ca</a><br> proj3.herokuapp.com <a href=\"mailto:myadmin@uni.ca\" target=\"_blank\" rel=\"noopener\">myadmin@uni.ca</a></p>\n</blockquote>\n<blockquote>\n<p>‘taiyuan’ has been compromised 90 times!<br>It has been used in the following websites:<br> website username<br> services.uni.ca In which city I was born (all lower case)?</p>\n</blockquote>\n<p> 测试的结果还是相当让我满意的,被泄露的要么是别人的账号(我之后会通知他们的),要么就是我根本不在乎的账号或是假账号。这里的排版因为Markdown的缘故被打乱了,实际的输出会好看一点。值得注意的是最后一条,Chrome不仅记录密码还记录了密保问题的答案。我可以通过验证用户名和密码的格式来解决这个问题,但目前这段代码已经够用了。</p>\n<h1 id=\"小结和反思\"><a href=\"#小结和反思\" class=\"headerlink\" title=\"小结和反思\"></a>小结和反思</h1><p> 对于密码库泄露和HIBP的探究就到此为止。毕竟我没有阻止密码库泄露的能力,碰上这种问题只能当作是天灾了。但这不代表我就应该对可能存在的风险视若无睹:使用密码管理器生成随机密码是最好的解决方法。如果网站不使用弱加密法或是明文存储密码,最好还能加点盐,那只要我的密码足够强,我的明文密码就不可能被泄露到网上去。缺德份子既然不会专门针对我的密码进行破解,那我只要尽力而为就足够了。</p>\n","categories":["随笔"],"tags":["算法"]},{"title":"不单纯程序员们的婚恋与语言困境以及博弈","url":"https://xhw994.github.io/2019/04/14/20190413/","content":"<h1 id=\"“单纯”的程序员\"><a href=\"#“单纯”的程序员\" class=\"headerlink\" title=\"“单纯”的程序员\"></a>“单纯”的程序员</h1><p> 根据我在社交平台上的观察,国内的程序员们总是要和HR们怼着干的。具体缘由按下不谈,但促使我写这篇文章的理由来自几天前网络平台上程序员与某HR的又一次博弈。具体来说就是这张网上流传出来的“求偶”文。</p>\n<center><img src=\"/2019/04/14/20190413/1.jpg\"></center>\n\n<p> 资产过亿的清华高材竟然想找个靠谱IT男过日子,除了“不要特别矮或者胖”和“88年”两条信息外没有任何附加需求。看完我觉得这真是对广大程序员的双商赤裸裸的侮辱。但知道这是骗局又能怎样呢?“高智商”这三个字对工科生们来说比任何挑逗都要有效。明知山有HR,挑战者们也要向HR山行,这才是有实干精神、不畏挑战的标兵程序员。当然,出现接下来的情况便是意料之中了。</p>\n<center><img 
src=\"/2019/04/14/20190413/2.jpg\"></center>\n<center><img src=\"/2019/04/14/20190413/3.jpg\"></center>\n\n<p> 翻看原微博的评论区,我发现这条羊头狗肉的招聘信息比我预想中的还有效。此HR的微信的好友位在当天晚上就已经被占据一空,并且她至少注册了另外两个微信号来处理大量的好友通知。我虽然觉得这种猎头方式很下作愚蠢,但不得不说在中国这片辽阔的土地上存在太多的冒险家和娱乐家。在互联网精神的指引下这俨然已经成为了一场狂欢。可是,这场智斗表面上虽然是成功揭发伪装的程序员们赢了,其实真正的胜者是收获了大量的简历的HR们。这里要赞一句道高一尺魔高一丈。</p>\n<p> 那么本着程序员的实干精神,我也来试试这道题,全当娱乐了。</p>\n<h1 id=\"娱乐与被娱乐\"><a href=\"#娱乐与被娱乐\" class=\"headerlink\" title=\"娱乐与被娱乐\"></a>娱乐与被娱乐</h1><p> 这两道都是数学题,那就用Haskell写好了。</p>\n<h2 id=\"双因子问题\"><a href=\"#双因子问题\" class=\"headerlink\" title=\"双因子问题\"></a>双因子问题</h2><p> 首先解决质数因子的问题。这里使用的是最直白最容易实现的<a href=\"https://zh.wikipedia.org/wiki/试除法\" target=\"_blank\" rel=\"noopener\">试除法</a>:</p>\n<figure class=\"highlight haskell\"><figcaption><span>试除法</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br><span class=\"line\">6</span><br><span class=\"line\">7</span><br><span class=\"line\">8</span><br><span class=\"line\">9</span><br><span class=\"line\">10</span><br><span class=\"line\">11</span><br><span class=\"line\">12</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">divisors</span>:: <span class=\"type\">Int</span> -> [<span class=\"type\">Int</span>]</span><br><span class=\"line\"><span class=\"title\">divisors</span> n = [i | i <- [<span class=\"number\">2.</span>.(n `div` <span class=\"number\">2</span>)], n `mod` i == <span class=\"number\">0</span>]</span><br><span class=\"line\"></span><br><span class=\"line\"><span class=\"comment\">-- 并未用到的无限质数列表</span></span><br><span class=\"line\"><span class=\"title\">primes</span> :: [<span class=\"type\">Int</span>]</span><br><span class=\"line\"><span class=\"title\">primes</span> = [i | i <- [<span class=\"number\">2.</span>.], divisors i == []]</span><br><span class=\"line\"></span><br><span 
class=\"line\"><span class=\"title\">primeFactors</span> :: <span class=\"type\">Int</span> -> [<span class=\"type\">Int</span>]</span><br><span class=\"line\"><span class=\"title\">primeFactors</span> n = <span class=\"keyword\">case</span> divs <span class=\"keyword\">of</span></span><br><span class=\"line\"> [] -> [n]</span><br><span class=\"line\"> _ -> reverse $ divs ++ primeFactors(n `div` (head divs))</span><br><span class=\"line\"> <span class=\"keyword\">where</span> divs = take <span class=\"number\">1</span> $ divisors n</span><br></pre></td></tr></table></figure>\n\n<p> 因为这里的返回结果必定是从小到大排列的,我需要用到<code>reverse</code>函数把它翻过来。得到的结果是86627和8171,用时小于1毫秒。因为我的目标(707829217)并不是一个很大的数字,这种算法比<a href=\"https://zh.wikipedia.org/wiki/埃拉托斯特尼筛法\" target=\"_blank\" rel=\"noopener\">埃拉托斯特尼筛法</a>和<a href=\"https://en.wikipedia.org/wiki/Wheel_factorization\" target=\"_blank\" rel=\"noopener\">Wheel factorization</a>(第三方库<a href=\"https://hackage.haskell.org/package/primes-0.2.1.0/docs/Data-Numbers-Primes.html\" target=\"_blank\" rel=\"noopener\">Data.Numbers.Primes</a>的默认算法,第一步取n=6)要快一些。至于<a href=\"http://www.csie.ntnu.edu.tw/~u91029/Prime.html\" target=\"_blank\" rel=\"noopener\">更复杂的</a>筛选法,那就是用大炮打蚊子了。</p>\n<p> 接下来我写了一个<code>join</code>函数将返回的两个值并成一个,方便我接下来的计算,免得重复进行复制粘贴。注意这里要用Integer来防止数值溢出:</p>\n<figure class=\"highlight haskell\"><figcaption><span>合并数字</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">join</span> :: [<span class=\"type\">Int</span>] -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">join</span> = read . 
concatMap show</span><br></pre></td></tr></table></figure>\n\n<p> 这一步的结果自然是866278171。这大概是那位HR的QQ号吧。如果是真的话我怀疑她的QQ号也被好友塞爆了。</p>\n<h2 id=\"数3的第一种方法\"><a href=\"#数3的第一种方法\" class=\"headerlink\" title=\"数3的第一种方法\"></a>数3的第一种方法</h2><p> 然后是第二个问题,计算从1到866278171为止的所有奇数总共有多少个3。比如3333这个数就含有4个3,1不包含3,这样。那么首先我要写一个函数计算单一整数有几个3:</p>\n<figure class=\"highlight haskell\"><figcaption><span>计算整数中3的数量</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">count</span> :: <span class=\"type\">Integer</span> -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">count</span> <span class=\"number\">0</span> = <span class=\"number\">0</span></span><br><span class=\"line\"><span class=\"title\">count</span> n = (<span class=\"keyword\">if</span> n `mod` <span class=\"number\">10</span> == <span class=\"number\">3</span> <span class=\"keyword\">then</span> <span class=\"number\">1</span> <span class=\"keyword\">else</span> <span class=\"number\">0</span>) + count (n `div` <span class=\"number\">10</span>)</span><br></pre></td></tr></table></figure>\n\n<p> 这里使用了模除法,从个位数向首位逐一判断当前数位是否为3。接下来我就可以计算从1到n的奇数中总共有多少个3了。因为除了2以外的质数都是奇数,而奇数相乘只能获得奇数,我只需要使用<code>n - 2</code>就能完成迭代的条件了。又因为比3小的数字不可能含有3,所以基准情况可以设为3:</p>\n<figure class=\"highlight haskell\"><figcaption><span>计算从3到n的奇数中总共出现了多少个3</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">countThree</span> :: <span class=\"type\">Integer</span> -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">countThree</span> n</span><br><span class=\"line\"> | n == <span class=\"number\">3</span> = <span 
class=\"number\">1</span></span><br><span class=\"line\"> | otherwise = count n + countThree (n - <span class=\"number\">2</span>)</span><br></pre></td></tr></table></figure>\n\n<p> 这一步的结果是,喜出望外、意料之中的,堆栈溢出!</p>\n<blockquote>\n<p>Prelude Main> countThree $ join $ primeFactors o<br>*** Exception: stack overflow</p>\n</blockquote>\n<p> 毕竟这是Haskell,只有递归嘛,4亿个函数堆叠在一起,溢出是可以理解的。</p>\n<p> 因为逻辑非常简单,我可以确保这个函数是正确的。事实上,它的确可以计算到7位为止的整数:</p>\n<blockquote>\n<p>Prelude Main> countThree 8662781<br>3550568<br>(2.06 secs, 1,150,702,176 bytes)</p>\n</blockquote>\n<p> 7位数就已经吞掉一个多GB,我的电脑只有8GB内存当然是不够它看的了。</p>\n<h2 id=\"数3的第二种方法:严格求值\"><a href=\"#数3的第二种方法:严格求值\" class=\"headerlink\" title=\"数3的第二种方法:严格求值\"></a>数3的第二种方法:严格求值</h2><p> 既然结果是正确的,那我只需要尝试优化这个函数就可以了。首先我想到的是<a href=\"https://zh.wikipedia.org/wiki/惰性求值\" target=\"_blank\" rel=\"noopener\">惰性求值</a>问题。Haskell是一个惰性语言,这在许多情况下都能优化运算效率。毕竟Spark和Linq都是惰性求值的,我没有道理怀疑它的优点。但Haskell是一个没有循环只有递归的语言,这就会造成许多问题了。我现在遇到的问题也许便是如此。如果展开<code>countThree</code>函数的话,它在堆栈上长这个样子:</p>\n<blockquote>\n<p>count n + countThree(n-2)<br>count n + (count(n-2) + countThree(n-4))<br>count n + (count(n-2) + (count(n-4) + countThree(n-6)))<br>…<br>count n + (count(n-2) + … + count(5) + count(3))</p>\n</blockquote>\n<p> 注意到Haskell的<code>+</code>运算符是严格求值的,它必须等待左右两边完全展开后才会进行运算,因此除非我优化递归函数本身,它必定会进行惰性运算——哪怕我用<code>seq</code>函数也不例外:</p>\n<figure class=\"highlight haskell\"><figcaption><span>使用seq的严格求值</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">countThreeSeq</span> :: <span class=\"type\">Integer</span> -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">countThreeSeq</span> n</span><br><span class=\"line\"> | n == <span class=\"number\">3</span> = <span class=\"number\">1</span></span><br><span class=\"line\"> | 
otherwise = count n `seq` (count n + countThreeSeq (n - <span class=\"number\">2</span>))</span><br></pre></td></tr></table></figure>\n\n<p> 为了验证这一点,我启用了GHCI的最优化编译模式<code>ghci -fobject-code -O2</code>,编译后的结果仍然是堆栈溢出,只不过这次只用了不到5秒就报错了,比起之前的将近10秒还是有一定的进步。这里我想GHCI应该是使用了类似C++的Inline Function的方式来编译Count函数吧,待验证。</p>\n<h2 id=\"数3的第三种方法:尾调用\"><a href=\"#数3的第三种方法:尾调用\" class=\"headerlink\" title=\"数3的第三种方法:尾调用\"></a>数3的第三种方法:尾调用</h2><p> 尝试严格求值并没有取得任何成果,另一个方式便是优化递归函数了。经常写递归的人应该已经发现我第一个<code>countThree</code>函数的问题了:这不是一个<a href=\"https://zh.wikipedia.org/zh-cn/尾调用\" target=\"_blank\" rel=\"noopener\">尾调用</a>函数。那么使用尾调用的结果如何呢?</p>\n<figure class=\"highlight haskell\"><figcaption><span>使用尾调用</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br><span class=\"line\">3</span><br><span class=\"line\">4</span><br><span class=\"line\">5</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">countThree</span> :: <span class=\"type\">Integer</span> -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">countThree</span> n = countThree' n <span class=\"number\">0</span> <span class=\"keyword\">where</span></span><br><span class=\"line\"> countThree' n c</span><br><span class=\"line\"> | n == <span class=\"number\">3</span> = c</span><br><span class=\"line\"> | otherwise = countThree' (n - <span class=\"number\">2</span>) (c + (count n))</span><br></pre></td></tr></table></figure>\n\n<blockquote>\n<p>Prelude Main> countThree $ join $ primeFactors o<br>441684626<br>(250.44 secs, 136,963,604,072 bytes)</p>\n</blockquote>\n<p> 虽然用了4分多钟才运行完,但总算有结果了,它似乎又是一个QQ号。虽然应该没有这么巧的事情吧,但没准这个号的背后也是一个居心叵测的HR呢。我已经停用QQ很多年了就不去验证这么无聊的问题了,螃蟹留给其他人吃就好。(细心的读者也许会发现,这个版本的基准情况在n == 3时直接返回了累加值c,漏掉了3本身含有的那一个3,所以这个结果其实比正确值少1。)</p>\n<h2 id=\"数3的第四种方法:fold\"><a href=\"#数3的第四种方法:fold\" class=\"headerlink\" title=\"数3的第四种方法:fold\"></a>数3的第四种方法:fold</h2><p> 
真正的鲁迅粉丝是要凑够4种写法的。这里我想试验的方式分为3个步骤:</p>\n<ol>\n<li>生成从3到866278171的奇数序列</li>\n<li>分别计算每个奇数中3的个数,生成一个新的序列</li>\n<li>求整个序列的和</li>\n</ol>\n<figure class=\"highlight haskell\"><figcaption><span>使用fold遍历列表</span></figcaption><table><tr><td class=\"gutter\"><pre><span class=\"line\">1</span><br><span class=\"line\">2</span><br></pre></td><td class=\"code\"><pre><span class=\"line\"><span class=\"title\">countThreeF</span> :: <span class=\"type\">Integer</span> -> <span class=\"type\">Integer</span></span><br><span class=\"line\"><span class=\"title\">countThreeF</span> n = foldl1 (+) $ map count [<span class=\"number\">3</span>,<span class=\"number\">5.</span>.n]</span><br></pre></td></tr></table></figure>\n\n<p>效率如下:</p>\n<blockquote>\n<p>Prelude Main> countThreeF $ join $ primeFactors o<br>441684627<br>(254.24 secs, 171,614,745,432 bytes)</p>\n</blockquote>\n<p> 遍历了两遍并且使用了更多的内存,实在不能说是更优解。当然,因为866278171小于int32的最大值,我可以把Integer替换为Int来获取一定的空间效率。结果的确是快多了:</p>\n<blockquote>\n<p>Prelude Main> countThreeF' 866278171<br>441684627<br>(85.30 secs, 41,581,489,720 bytes)</p>\n</blockquote>\n<h2 id=\"随想\"><a href=\"#随想\" class=\"headerlink\" title=\"随想\"></a>随想</h2><p> 对这两道题的探究就到此为止。数学上的推导不复杂,但Haskell果然不管用多少次都很有趣!虽然函数式在未来的任意时间点都不会超越面向对象编程语言的热度,但不可否认的是,它对计算机语言发展的贡献极大。匿名函数、头等函数、惰性运算等特性都已经在大多数主流编程语言中实现,这足以证明它的价值。</p>\n<h1 id=\"非英语母语的困境\"><a href=\"#非英语母语的困境\" class=\"headerlink\" title=\"非英语母语的困境\"></a>非英语母语的困境</h1><p> 说起HR就让我想起一个月前的面试经历了。我面的是某人力资源软件的.NET开发部门,在挺过HR的电话和第一轮Phone Screen后,HR发了一份网上测试让我做。单纯的我当然以为她发给我的是编程题。讲道理我也没见过哪个不是技术性问题的测试。但这次还真就被我撞上了。</p>\n<p> 第一部分类似智商问卷。我中学的时候测出来的智商是143,就算我熬夜这么多年我想我的智商也不会掉到100去。事实也正是如此,不管是计算题、瑞文图形题还是3D空间变换都没有难到我。但是我万万没想到接下来的题竟然是考验英语能力的题。这一部分总共有三种题,平均每道题有6秒的答题时间。总题量超过之前的数学问题和智商问题的总和:</p>\n<p> 第一种是字母排列问题。比如ABC和DEF是相似的排列,因为它们都有123的顺序。同理,ZYX和CBA也有着相同的321顺序。</p>\n<p> 第二种是造词题。给出随机的至少10个英文字母A和目标长度L,要求从A中选取L个字母组成一个单词。比如,给出CENZOKX和长度4,我可以组成XEON这个单词。可以看出随着A和L的增长,这道题的难度呈指数级增长。</p>\n<p> 第三种是乱序单词游戏(Scrambled Word Game)。给出随机的至少12个英文字母A,要求组成英文单词。</p>\n<p> 
第一种题并不困难,因为这其实不算英文题。但第二和第三种题对汉语母语的人太不友好了。因为汉字环境中成长的大脑根本无法形成还原乱序单词的功能。这是因为中文是世界上唯一存活的语素文字,而世界上大部分的人使用的都是表音文字。我可以还原一个乱序的句子或是成语,但我做不到拼凑乱序单词。就像你给我十几个笔画,我需要很长的时间才能凑出一个文字一样。也许对于拉丁语系的人来说这是个很容易的题,对于汉语背景的人,在6秒钟内做出来真的是太难了。难到我甚至怀疑这是不是专门用来筛选华侨的问卷。</p>\n<p> 后来我去看了他们公司的领英,又查了查他们的员工。这是个不到60人的小公司,查起来非常容易。结果是除了一个加拿大土生土长的华裔外,这个公司的确没有任何一个汉语姓的员工。唉,我想如果他们要筛选中国人的话这应该是最不显眼的方法了。表面上它的确是在考验你的英语能力,但背后却考验你的生长环境,偏偏你还不能说他们这是故意的,实在是用心良苦啊。</p>\n<p> 事情的后续是,因为有很多公司使用这个人力资源软件,我在这个平台上投递的简历统统没有任何回复。而我在其他平台投递的简历至少有1/5~1/8的几率被邀请面试。这也许是一个概率问题,但真的没法让我不多想。</p>\n<p> 更让我苦恼的是,这次经历让我有种智商被怀疑的挫败感。的确,这不是一个理性的想法,毕竟第一部分的智商问卷我不可能错到哪里去。这种智商题除了手滑选错答案外很难出现出现蒙对或是粗心答错之类的失误。但它毕竟跟第二部分的英文题在同一张试卷上。虽然我对第二部分有诸多不满,但如果两份问卷是分开作答,分开给结果的,那我根本不会在乎我在英语题上的折戟。可恰恰因为两份卷子是并在一起的,且事后我只收到一份HR方面不作任何解释的拒信,这就让我怀疑是不是我第一部分也出了什么差错——明明不可能有什么差错才对的。唉,实在是,没话说,不知道该怎么说才好了。</p>\n","categories":["随笔"],"tags":["生活","算法"]},{"title":"凉亭","url":"https://xhw994.github.io/2019/04/13/20190412/","content":"<h1 id=\"缘起\"><a href=\"#缘起\" class=\"headerlink\" title=\"缘起\"></a>缘起</h1><p> 我喜欢爬山,这大概是我生来便有的习惯。我出生在四面环山的盆地地区,那里最不缺的除了世人皆知的煤、面、醋、商,便是四顾茫茫的群山。我虽然不是一个爱出远门的人,但不论是太原或是温哥华,又或是被我看作未来养老之地的杭州的山,都已经被我来来回回攀了无数次。</p>\n<p> 爬山,固然是很累的活。许多人以为爬山的乐趣来源于登顶的成就感,我并不这么认为:毕竟我们爬的都是无数前人已经爬过的山。既然已被踏破、踏烂,那所谓的征服都不过是自我欺骗——每次爬山都只是一次必定成功的征程。主张结果论的人看到这里会说:“那么爬山还有什么意义?”其实爬山的乐趣就在这辛劳之间。哪怕登顶的成就是自欺欺人,炎日下的汗水、双腿的酸痛、同行的伙伴、沿途的趣闻、山顶的风光,这一切所见所感都是真实的,是主观能动性与付出换来的回报。</p>\n<p> 因此,我所以为的爬山是站在前人的肩膀上享受风景的一项十分安逸的运动。这份安逸不仅来源于得知此山已被踏破的安心,也来自于前人已经精心设计好的路径以及歇脚处。在温哥华,这些歇脚处可能只是几个没削皮的原木。而在中国的大多数名山,它们是一个个凉亭。登山者们可以在凉亭里休息,可以和其他登山者们一起聚餐、闲聊,分享沿途的心得。他们也可以通过研究凉亭的风格、维护情况、题字与刻字等来了解脚下的山和生活在山里的人们。就如同登山者用相机记录下凉亭和凉亭后的群山一样,凉亭不仅记着山的风景,也同样记下了来看山的人们。</p>\n<p> 软件工程师的生涯不过也是另一种登山罢了。我对这一概念的感触在毕业之后愈发的深厚。每个工程师都想造轮子,我曾经也不例外。我甚至妄想过把从操作系统到终端的所有软件都烙上我的印记。可是先不提现实与能力,这世上哪有那么多轮子需要造,不论是黑轮子还是白轮子,能滚的就是好轮子了。学会怎么用轮子造车才是切实可行的。工程师应当先是实干家,然后才是梦想家,而不是反过来。我曾经的想法与其说是狂妄,不如说是根本不知道自己应该做什么。</p>\n<p> 那么现在我知道我要做什么了吗?其实还是很朦胧的,但我不想再像飞蛾一样乱撞。如果生活和前途是一座大山,那我希望我能在它的肩上为自己留下一座座凉亭,给现在的自己歇歇脚,也能作为通往未来的路标。这便是我创立博客的动机。</p>\n<h1 id=\"路程\"><a href=\"#路程\" class=\"headerlink\" 
title=\"路程\"></a>路程</h1><p> 我尝试搭建过许多种博客,但没有哪个是真正让我满意的。价格、界面、维护成本…造出来的东西总是会有各种各样的问题。我想我又犯了上文提到的毛病了——不管是什么架构,能写文章、界面舒服就行。哪需要在乎是不是自己设计的,更不需要追求什么前后分离,SPA,那些都是虚的,只有自己写出来的东西才是实在的。</p>\n<p> 还好我不算太蠢,至少能在做了大量无用功后悬崖勒马。但这不代表那些无用功是一无是处的————至少我从中学到了很多东西,拓展了视野。以后有机会我也会记录那些废案的心得的。而且也正是因为漫长的积累与迷茫,我才能最终下定决心使用现在的轻量架构。弯路毕竟不是岔路,殊途终究要同归的。</p>\n<p> 本博客采用<a href=\"https://hexo.io\" target=\"_blank\" rel=\"noopener\">Hexo</a>引擎和<a href=\"https://github.com/forsigner/fexo\" target=\"_blank\" rel=\"noopener\">Fexo</a>主题。目前没有添加其他插件的想法。人要吸取教训,按需加量才是最好的。</p>\n<h1 id=\"面向\"><a href=\"#面向\" class=\"headerlink\" title=\"面向\"></a>面向</h1><p> 因为这个博客主要用来记录学习和工作的心得,而我又生活在北美区域,我本来是想用英文来写作的。可仔细一想,使用英文固然能方便也许存在的猎头们,却并不方便熟人们阅读,而只有后者才有可能是我的长期读者。我当然可以将某些技术文章翻译成其余四种我能使用的语言,但何必呢?代码和注释都是英文的了,难道还需要更多的解释吗?夹杂英文名词一事我也会尽量避免。毕竟中文还没有不可名状到部分华侨所鼓吹的地步,夹生饭也一定没有全熟的米饭好吃。</p>\n<p> 当然,选择中文有很大一部分是我的私心。我在国内的语文成绩优秀,但耐不住在加拿大的漫长时间严重影响了我的写字和写作水平。作为一个土生土长的中国人和一个业余的语言学爱好者,我深刻的了解中文在琳琅的语种中的独特性,它在我心里的地位高于其他任何语言。因此虽然字丑的问题大概是解决不了了,但写文章的本领我实在是不想丢。</p>\n<p> Xegnal这个名字,其实是基于我中文名的一个文字游戏。选择它作为博客的名字只是因为我不擅长起名罢了。Xegnal同时也是我规划中的某个项目的名字,现在就先让这个博客用着吧。还是那句话,开始写最重要,内容最重要。</p>\n<h1 id=\"结语\"><a href=\"#结语\" class=\"headerlink\" title=\"结语\"></a>结语</h1><p>文章要有结语才是文章,但世事却并不都有结语。没有的奶水是硬挤不出来的,所以今天就写到这里。什么时候又想废话了再回来添几笔罢。</p>\n","categories":["随笔"],"tags":["生活"]},{"title":"about","url":"https://xhw994.github.io/about/index.html","content":"","categories":[],"tags":[]},{"title":"category","url":"https://xhw994.github.io/category/index.html","content":"","categories":[],"tags":[]},{"title":"","url":"https://xhw994.github.io/css/personal-style.css","content":"article {\n padding-bottom: 5rem;\n}\n\n@font-face {\n font-family: \"Meiryo\";\n src: url(\"/fonts/Meiryo.eot\");\n /* IE9 */\n src: url(\"/fonts/Meiryo.eot?#iefix\") format(\"embedded-opentype\"), /* IE6-IE8 */\n url(\"/fonts/Meiryo.woff\") format(\"woff\"), /* chrome, firefox */\n url(\"/fonts/Meiryo.ttf\") format(\"truetype\"), /* chrome, firefox, opera, Safari, Android, iOS 4.2+ */\n 
url(\"/fonts/Meiryo.svg#Meiryo\") format(\"svg\");\n /* iOS 4.1- */\n font-style: normal;\n font-weight: normal;\n}","categories":[],"tags":[]},{"title":"link","url":"https://xhw994.github.io/link/index.html","content":"","categories":[],"tags":[]},{"title":"project","url":"https://xhw994.github.io/project/index.html","content":"","categories":[],"tags":[]},{"title":"牢骚","url":"https://xhw994.github.io/ramble/index.html","content":"<h3 id=\"2021-01-16\"><a href=\"#2021-01-16\" class=\"headerlink\" title=\"2021-01-16\"></a>2021-01-16</h3><p>我从来不是那根愤世嫉俗的鸡巴。我可能生下来就阳痿了。</p>\n<h3 id=\"2021-01-15\"><a href=\"#2021-01-15\" class=\"headerlink\" title=\"2021-01-15\"></a>2021-01-15</h3><p>曾经对着女性朋友说,二次元游戏的本质就是性欲。看着她们满脸尴尬我也自觉,无地自容。现在想想,我是傻逼,但我是对的,但我还是傻逼,但我还是对的。 </p>\n","categories":[],"tags":[]},{"title":"search","url":"https://xhw994.github.io/search/index.html","content":"","categories":[],"tags":[]},{"title":"tag","url":"https://xhw994.github.io/tag/index.html","content":"","categories":[],"tags":[]},{"title":"","url":"https://xhw994.github.io/images/favicon/manifest.json","content":"{\"name\":\"App\",\"icons\":[{\"src\":\"/android-icon-36x36.png\",\"sizes\":\"36x36\",\"type\":\"image/png\",\"density\":\"0.75\"},{\"src\":\"/android-icon-48x48.png\",\"sizes\":\"48x48\",\"type\":\"image/png\",\"density\":\"1.0\"},{\"src\":\"/android-icon-72x72.png\",\"sizes\":\"72x72\",\"type\":\"image/png\",\"density\":\"1.5\"},{\"src\":\"/android-icon-96x96.png\",\"sizes\":\"96x96\",\"type\":\"image/png\",\"density\":\"2.0\"},{\"src\":\"/android-icon-144x144.png\",\"sizes\":\"144x144\",\"type\":\"image/png\",\"density\":\"3.0\"},{\"src\":\"/android-icon-192x192.png\",\"sizes\":\"192x192\",\"type\":\"image/png\",\"density\":\"4.0\"}]}","categories":[],"tags":[]}]