Distributed Search with Elasticsearch: Repairing a Corrupted Index
When running a Lucene-based search engine such as Elasticsearch or Solr, you may occasionally hit an error like this:

```
Caused by: java.io.EOFException: read past EOF: NIOFSIndexInput(path="/usr/local/sas/escluster/data/cluster/nodes/0/indices/index/5/index/_59ct.fdt")
```

It means the Lucene index is corrupted. There are many possible causes, such as an incomplete copy of the index files or a failing disk.
Because Elasticsearch is built on Lucene, Lucene's small tools (Luke, for example) can be used with Elasticsearch as well. Lucene also ships some handy utilities of its own, such as CheckIndex, introduced below. It lives in the org.apache.lucene.index package of the lucene-core jar, and its job is to check the health of an index and, if necessary, repair it. If it detects broken segments, you can run it again with the -fix option: the repair writes a new segments file in which every reference to the broken segments is removed, discarding the index data they contained.
To use it, first change into the lib directory of your Elasticsearch installation:

```
cd es_home/lib
```
Then run the following command to check the index:

```
java -cp lucene-core-3.6.1.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex /usr/local/sas/escluster/data/cluster/nodes/0/indices/index/5/index/
```
The check produces output like this:

```
Segments file=segments_2cg numSegments=26 version=3.6.1 format=FORMAT_3_1 [Lucene 3.1+] userData={translog_id=1347536741715}
  1 of 26: name=_59ct docCount=4711242
    compound=false
    hasProx=true
    numFiles=9
    size (MB)=6,233.694
    diagnostics = {mergeFactor=13, os.version=2.6.32-71.el6.x86_64, os=Linux, lucene.version=3.6.1 1362471 - thetaphi - 2012-07-17 12:40:12, source=merge, os.arch=amd64, mergeMaxNumSegments=-1, java.version=1.6.0_24, java.vendor=Sun Microsystems Inc.}
    has deletions [delFileName=_59ct_1b.del]
    test: open reader.........OK [3107 deleted docs]
    test: fields..............OK [25 fields]
    test: field norms.........OK [10 fields]
    test: terms, freq, prox...OK [36504908 terms; 617641081 terms/docs pairs; 742052507 tokens]
    test: stored fields.......ERROR [read past EOF: MMapIndexInput(path="/usr/local/sas/escluster/data/cluster/nodes/0/indices/index/5/index/_59ct.fdt")]
java.io.EOFException: read past EOF: MMapIndexInput(path="/usr/local/sas/escluster/data/cluster/nodes/0/indices/index/5/index/_59ct.fdt")
	at org.apache.lucene.store.MMapDirectory$MMapIndexInput.readBytes(MMapDirectory.java:307)
	at org.apache.lucene.index.FieldsReader.addField(FieldsReader.java:400)
	at org.apache.lucene.index.FieldsReader.doc(FieldsReader.java:253)
	at org.apache.lucene.index.SegmentReader.document(SegmentReader.java:492)
	at org.apache.lucene.index.IndexReader.document(IndexReader.java:1138)
	at org.apache.lucene.index.CheckIndex.testStoredFields(CheckIndex.java:852)
	at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:581)
	at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1064)
    test: term vectors........OK [0 total vector count; avg 0 term/freq vector fields per doc]
FAILED
    WARNING: fixIndex() would remove reference to this segment; full exception:
java.lang.RuntimeException: Stored Field test failed
	at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:593)
	at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:1064)

WARNING: 1 broken segments (containing 4708135 documents) detected
WARNING: 4708135 documents will be lost
```
The output shows that the _59ct.fdt file in shard 5 is corrupted. .fdt files hold the stored fields of a Lucene index, which is why the failure shows up during the "test: stored fields" check. The warnings at the end say that one broken segment was detected, containing 4708135 documents.
Adding the -fix option to the same command repairs the index. (Note: back up the index before attempting a repair, and never run the repair against an index that is currently being written to.)
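That backup can be as simple as copying the shard's index directory before touching it. A minimal sketch, assuming the node is stopped; the temporary directory below is only a stand-in for the real shard path (e.g. .../nodes/0/indices/index/5/index) so the example is self-contained:

```shell
# Stand-in for the real shard index directory (substitute your actual path).
INDEX_DIR=$(mktemp -d)
touch "$INDEX_DIR/segments_2cg" "$INDEX_DIR/_59ct.fdt"

# Copy the whole directory; -a preserves permissions and timestamps so the
# backup can be dropped back in place if the repair makes things worse.
BACKUP_DIR="${INDEX_DIR}.bak"
cp -a "$INDEX_DIR" "$BACKUP_DIR"

# Sanity check: the backup should list exactly the same files as the source.
[ "$(ls "$INDEX_DIR")" = "$(ls "$BACKUP_DIR")" ] && echo "backup ok"
```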
```
java -cp lucene-core-3.6.1.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex /usr/local/sas/escluster/data/cluster/nodes/0/indices/index/5/index/ -fix
```
When it finishes, the following appears after the usual check output:

```
NOTE: will write new segments file in 5 seconds; this will remove 4708135 docs from the index. THIS IS YOUR LAST CHANCE TO CTRL+C!
  5...
  4...
  3...
  2...
  1...
Writing...
OK
Wrote new segments file "segments_2ch"
```
This means the repair is complete and the 4708135 documents in the broken segment have been removed. That is a lot of documents to lose. When the damage is that large, there are two options: restore the data from a previous backup (if one exists), or read the index, record the IDs of the damaged documents, and re-index them after the repair.
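For the first option, restoring from a backup is just copying the saved files back over the shard directory while the node is stopped. A minimal sketch, again using temporary stand-in directories instead of the real shard and backup paths:

```shell
# Stand-ins for the real shard directory and its earlier backup.
INDEX_DIR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
touch "$INDEX_DIR/_59ct.fdt"                               # damaged copy
touch "$BACKUP_DIR/segments_2cg" "$BACKUP_DIR/_59ct.fdt"   # good copy

# With the node stopped, wipe the damaged shard and restore the backup.
# ${INDEX_DIR:?} aborts if the variable is unset, guarding the rm -rf.
rm -rf "${INDEX_DIR:?}"/*
cp -a "$BACKUP_DIR/." "$INDEX_DIR/"
ls "$INDEX_DIR"
```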
Permalink: http://www.chepoo.com/elasticsearch-repair-index.html | IT技术精华网