As usual, let's start by listing the relevant git repositories:

scrapyjs: ===>scrapyjs<===

splash: ===>splash<===

Documentation: ===>doc for splash<===

In the previous post on handling dynamic js and ajax pages, I briefly went over how to use splash on its own. To integrate it with scrapy we follow the approach recommended by the official docs: install the scrapy-splash library (formerly scrapyjs), then get splash running under docker. For this example we will again use Sogou's WeChat search.

Preparing the environment

First install the scrapy-splash library:

pip install scrapy-splash

Then start the splash container with docker:

docker run -p 8050:8050 scrapinghub/splash

If you run into more problems with the docker installation, see:

the splash installation docs
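
Before wiring splash into scrapy, it can help to confirm the container is actually reachable. A minimal sketch using plain requests (the host below is the docker-machine address used throughout this post; adjust it to wherever your splash instance runs):

import requests

# quick sanity check: ask splash's render.html endpoint to render a page directly
splash = 'http://192.168.99.100:8050'   # replace with your own splash address
resp = requests.get(splash + '/render.html',
                    params={'url': 'http://www.cnblogs.com', 'wait': 0.5},
                    timeout=30)
print(resp.status_code)    # 200 means splash rendered the page
print(resp.text[:200])     # beginning of the rendered html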

Scrapy configuration

  1. Put the splash server address in your settings.py file. If you started it locally, the address should be http://127.0.0.1:8050; mine is SPLASH_URL = 'http://192.168.99.100:8050'

  2. Enable the following downloader middlewares in DOWNLOADER_MIDDLEWARES, and note the order in which they are enabled:

  DOWNLOADER_MIDDLEWARES = {
      'scrapy_splash.SplashCookiesMiddleware': 723,
      'scrapy_splash.SplashMiddleware': 725,
      'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
  }

  3. If you have DEFAULT_REQUEST_HEADERS enabled in your own settings.py, be sure to comment it out. This currently looks like a bug, and I have already reported it to the scrapy-splash maintainers:

         https://github.com/scrapy-plugins/scrapy-splash/issues/67

          The bug is caused by the Host value in DEFAULT_REQUEST_HEADERS not matching the sogou site I am crawling, which of course fails. I have to say the scrapy maintainers responded very quickly. Keep these details in mind when adding your own headers; a consolidated settings.py sketch is shown after this list.
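
Putting the three configuration steps above together, a minimal settings.py sketch looks like this (SPLASH_URL uses my docker-machine address; point it at your own splash instance):

# settings.py (sketch: only the splash-related settings are shown)

# step 1: address of the running splash instance
SPLASH_URL = 'http://192.168.99.100:8050'

# step 2: downloader middlewares required by scrapy-splash, in this order
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# step 3: leave DEFAULT_REQUEST_HEADERS commented out; a mismatched Host
# header breaks the splash request (see issue #67 above)
# DEFAULT_REQUEST_HEADERS = {...}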

Writing the code

# -*- coding: utf-8 -*-
from scrapy import Request
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest


class WeiXinSpider(Spider):
    name = 'weixin'
    # search-result pages for the query 中国 (pages 1 through 9)
    start_urls = [
        'http://weixin.sogou.com/weixin?page={}&type=2&query=%E4%B8%AD%E5%9B%BD'.format(a) for a in xrange(1,10)
    ]

    # allowed_domains = [
    #     'sogou.com'
    # ]

    # def __init__(self, *args, **kwargs):
    #      super(WeiXinSpider, self).__init__(*args, **kwargs)

    def start_requests(self):
        for url in self.start_urls:
            # render each page through splash and wait 0.5s for the js to execute
            yield SplashRequest(url,
                                self.parse,
                                args={'wait': 0.5},
                                # endpoint='render.json'  # render.html is the default
                                )

    def parse(self, response):
        self.logger.info('parsing rendered page %s' % response.url)
        div_results = response.xpath('//div[@class="results"]/div')
        if not div_results:
            self.logger.error('no result divs found in response from %s' % response.url)
            return
        for div_item in div_results:
            # the article title lives in the h4 inside the txt-box div
            title = div_item.xpath('descendant::div[@class="txt-box"]//h4//text()')
            if title:
                txt = ''.join(title.extract())
                yield {'title': txt}

Code analysis

SplashRequest is really just another layer of wrapping around the splash HTTP API. Some may ask: what if I don't want to use scrapy at all and just want plain requests? That's easy too:

import requests
import json

def get_content_from_splash():
    # post the target url to splash's render.html endpoint and print the rendered html
    render_html = 'http://192.168.99.100:8050/render.html'
    url = 'http://www.cnblogs.com'
    body = json.dumps({'url': url, 'wait': 5, 'images': 0,
                       'allowed_content_types': 'text/html; charset=utf-8'})
    headers = {'Content-Type': 'application/json'}
    print(requests.post(url=render_html, headers=headers, data=body).text)


if __name__ == '__main__':
    get_content_from_splash()

Just a few lines of code. I won't go into the implementation details here; there is nothing particularly special about them.

Running it