
Writing Heritrix3 crawl data directly to MySQL


During a Heritrix3 crawl, we often want to analyze each fetched page and write the results into a database. The way to do this is to subclass Processor and override its innerProcess(CrawlURI curi) method. An example follows:

package com.hq.override;
 
import java.io.IOException;
 
import org.archive.io.RecordingInputStream;
import org.archive.io.ReplayInputStream;
import org.archive.modules.CrawlURI;
import org.archive.modules.Processor;
import org.archive.net.UURI;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
 
import com.hq.beans.News;
import com.hq.jsoup.SingleJsoup;
import com.mysql.SqlExecutor;
 
public class MysqlWriterProccessor extends Processor {
 
	@Override
    protected void innerProcess(CrawlURI curi) {
        UURI uuri = curi.getUURI(); // Current URI.
 
        String uri = curi.getURI();
 
        // Check for null first, and use the short-circuiting && operator,
        // so uri.equals()/endsWith() are never called on a null reference.
        if (uri != null && !uri.isEmpty() && uri.endsWith(".html")) {
            System.out.println("uri-----" + uri);
        } else {
            return;
        }
        // Only http and https schemes are supported.
        String scheme = uuri.getScheme();
        if (!"http".equalsIgnoreCase(scheme)
                && !"https".equalsIgnoreCase(scheme)) {
            return;
        }
        RecordingInputStream recis = curi.getRecorder().getRecordedInput();
        if (0L == recis.getResponseContentLength()) {
            return;
        }
 
        try {
            ReplayInputStream replayis = recis.getMessageBodyReplayInputStream();
            try {
                // Parse the fetched page body with jsoup.
                Document doc = Jsoup.parse(replayis, "GBK", "http://news.163.com/");
                // Extract the fields you need from doc and write them to MySQL here.
            } finally {
                replayis.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
 
 
	@Override
    protected boolean shouldProcess(CrawlURI curi) {
        return isSuccess(curi);
    }
}
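Getting the URI check right matters: the test for null must come before any method call on the URI, and the short-circuiting `&&` operator (not the non-short-circuiting `&`) must be used, or a null URI throws a NullPointerException. A minimal standalone sketch of such a filter (`UriFilter` and `shouldWrite` are hypothetical names, not part of Heritrix):

```java
// Standalone sketch of the URI filter logic used in innerProcess.
public class UriFilter {

    // Accept only non-null, non-empty http(s) URIs ending in ".html".
    // Because && short-circuits, the later calls are never reached
    // when uri is null.
    public static boolean shouldWrite(String uri) {
        return uri != null
                && !uri.isEmpty()
                && (uri.startsWith("http://") || uri.startsWith("https://"))
                && uri.endsWith(".html");
    }

    public static void main(String[] args) {
        System.out.println(shouldWrite("http://news.163.com/a.html")); // true
        System.out.println(shouldWrite(null));                         // false
    }
}
```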
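For the processor to actually run, it also has to be wired into the job's Spring configuration (crawler-beans.cxml). A minimal sketch, assuming the disposition-chain layout of a stock Heritrix 3 job (the `warcWriter`, `candidates`, and `disposition` bean ids come from the default configuration; `mysqlWriter` is our addition):

```xml
<!-- Register the custom processor as a bean. -->
<bean id="mysqlWriter" class="com.hq.override.MysqlWriterProccessor"/>

<!-- Insert it into the disposition chain so it sees fetched content. -->
<bean id="dispositionProcessors" class="org.archive.modules.DispositionChain">
  <property name="processors">
    <list>
      <ref bean="warcWriter"/>
      <ref bean="mysqlWriter"/>
      <ref bean="candidates"/>
      <ref bean="disposition"/>
    </list>
  </property>
</bean>
```

The processor's jar (and its dependencies such as jsoup and the MySQL driver) must be on the Heritrix classpath, e.g. dropped into the lib directory of the Heritrix installation.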

Permanent link to this article: http://www.chepoo.com/heritrix3-crawled-data-is-written-directly-to-the-mysql.html | IT技术精华网
