
Batch Querying Baidu Indexing and Ranking with a Java Crawler

June 15, 2017 • Technical Notes

Background:

1. As an SEO, I have a batch of data to record every day: crawl stats, indexing stats, traffic stats, and so on.
2. Among these, the indexing rate is a very meaningful metric: it directly reflects how well the whole site or a given channel is indexed by Baidu.
3. I recently finished 天码营's cloud-music crawler course, so a crawler that queries Baidu indexing and ranking is a good way to review what the course covered.

Approach:

1. Build a Baidu search URL from the search keyword.
2. Fetch the content of the Baidu search result page.
3. Extract the Baidu search results (rank, title, URL); the extracted fields map onto the SearchResult sketch after this list.
4. Determine whether the queried URL is indexed.

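The parser below depends on a simple SearchResult value object that the original post does not show. Here is a minimal sketch of what it is assumed to look like; the field and getter names are inferred from the constructor call in the parser, not taken from the original code:

/**
 * Assumed value object for one Baidu search result
 * (not shown in the original post; names are inferred).
 */
public class SearchResult {
    private final int id;           // rank position on the result page
    private final String title;     // result title
    private final String baiduUrl;  // Baidu redirect link (www.baidu.com/link?url=...)
    private final String url;       // real landing URL after resolving the redirect

    public SearchResult(int id, String title, String baiduUrl, String url) {
        this.id = id;
        this.title = title;
        this.baiduUrl = baiduUrl;
        this.url = url;
    }

    public int getId() { return id; }
    public String getTitle() { return title; }
    public String getBaiduUrl() { return baiduUrl; }
    public String getUrl() { return url; }
}
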
Code:
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.util.ArrayList;
import java.util.List;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

// LogConstant is the author's own logging helper and is not shown in the post
public class PCBaiduHtmlParser {

    private static final String BASE_URL = "http://www.baidu.com/s?wd=";
    
    /**
     * Fetch the Baidu search result page and return its HTML source.
     * @param keywords search keywords
     * @return the page HTML as a string, or null if the request fails
     */
    private String html(String keywords) {
        String html = null;
        try {
            String url = BASE_URL + URLEncoder.encode(keywords,"utf-8");
            HttpGet get = new HttpGet(url);
            get.setHeader("Cache-Control", "no-cache, must-revalidate");
            get.addHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8");
            get.addHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36");
            // Use try-with-resources so the client and response are always closed
            try (CloseableHttpClient client = HttpClients.createDefault();
                 CloseableHttpResponse response = client.execute(get)) {
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    html = EntityUtils.toString(entity);
                }
                EntityUtils.consume(entity);
            } catch (IOException e) {
                LogConstant.spiderLog.error("html client.execute(get) error [" + e + "]");
            }
        } catch (UnsupportedEncodingException e) {
            LogConstant.spiderLog.error("html URLEncoder.encode error [" + e + "]");
        }
        return html;
    }
    
    /**
     * Parse the search result page for a keyword.
     * @param keywords search keywords
     * @return list of SearchResult entries (rank, title, Baidu link, real URL)
     */
    public List<SearchResult> htmlParser(String keywords) {
        List<SearchResult> searchResults = new ArrayList<SearchResult>();
        String html = this.html(keywords);
        if (html == null) {
            // The request failed, so there is nothing to parse
            return searchResults;
        }
        Document doc = Jsoup.parse(html);
        Elements contents = doc.select("div.result.c-container");
        for (Element content : contents) {
            int id = Integer.parseInt(content.attr("id"));      // rank position on the page
            String baiduurl = content.select("a").attr("href"); // Baidu redirect link
            String url = reUrl(baiduurl);                       // real URL behind the redirect
            String title = content.select("a").first().text();  // result title
            searchResults.add(new SearchResult(id, title, baiduurl, url));
        }
        }
        return searchResults;
    }
    
    /**
     * Resolve a Baidu redirect URL to the real link.
     * @param url Baidu redirect URL
     * @return the real URL from the Location header, or null on failure
     */
    private String reUrl(String url) {
        try {
            // Baidu result links are redirects; read the Location header instead of following it
            Connection.Response response = Jsoup.connect(url)
                    .timeout(3000)
                    .method(Connection.Method.GET)
                    .followRedirects(false)
                    .execute();
            return response.header("Location");
        } catch (Exception e) {
            LogConstant.spiderLog.error("reUrl Jsoup.connect(url) error [" + e + "]");
        }
        return null;
    }

    /**
     * Check whether a URL is indexed by Baidu.
     * @param url the URL to check
     * @return boolean indexing result: false means not indexed, true means indexed
     */
    public boolean baiduRecord(String url) {
        List<SearchResult> searchResults = this.htmlParser(url);
        for (SearchResult searchResult : searchResults) {
            // Compare against the resolved real URL rather than the Baidu redirect link
            if (searchResult.getUrl().equals(url)) {
                return true;
            }
        }
        return false;
    }
}
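
For reference, here is a minimal, hypothetical usage sketch. The demo class name, the sample keyword, and the sample URL are illustrative assumptions, not part of the original post:

public class BaiduSpiderDemo {

    public static void main(String[] args) {
        PCBaiduHtmlParser parser = new PCBaiduHtmlParser();

        // Ranking query: list the results returned for a keyword
        for (SearchResult result : parser.htmlParser("天码营")) {
            System.out.println(result.getId() + "\t" + result.getTitle() + "\t" + result.getUrl());
        }

        // Indexing query: search for the exact URL and check whether Baidu returns it
        String pageUrl = "http://www.example.com/some-page.html";  // hypothetical page to check
        System.out.println(pageUrl + " indexed: " + parser.baiduRecord(pageUrl));
    }
}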

Original article link

Last Modified: February 21, 2018