
I just started learning Scrapy, so I followed the Scrapy documentation and wrote the first spider mentioned on that site.

import scrapy

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename, 'wb') as f:
            f.write(response.body)

Upon running the `scrapy crawl dmoz` command in the project's root directory, it shows the error below.

2015-06-07 21:53:06+0530 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)
2015-06-07 21:53:06+0530 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
Traceback (most recent call last):
  File "/usr/bin/scrapy", line 4, in <module>
    execute()
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 132, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 97, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/scrapy/cmdline.py", line 139, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/dist-packages/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/usr/lib/python2.7/dist-packages/scrapy/command.py", line 34, in crawler
    self._crawler.configure()
  File "/usr/lib/python2.7/dist-packages/scrapy/crawler.py", line 36, in configure
    self.spiders = spman_cls.from_crawler(self)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 37, in from_crawler
    return cls.from_settings(crawler.settings)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 33, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermanager.py", line 23, in __init__
    for module in walk_modules(name):
  File "/usr/lib/python2.7/dist-packages/scrapy/utils/misc.py", line 65, in walk_modules
    submod = __import__(fullpath, {}, {}, [''])
  File "/home/avinash/tutorial/tutorial/spiders/dmoz_spider.py", line 3, in <module>
    class DmozSpider(scrapy.Spider):
AttributeError: 'module' object has no attribute 'Spider'
Avinash Raj

3 Answers


You are using an old Scrapy version (0.14.4) with the latest documentation.

Solution: upgrade to the latest version of Scrapy, or read the old docs that match the currently installed version.

Alik
  • How do I find the Scrapy version? – Avinash Raj Jun 07 '15 at 17:03
  • @AvinashRaj it is usually printed when you run `scrapy`. Your question already contains `2015-06-07 21:53:06+0530 [scrapy] INFO: Scrapy 0.14.4 started (bot: tutorial)` ;-) – Alik Jun 07 '15 at 17:04
  • @AvinashRaj the latest version is 0.24.6. 0.14 was released more than 3 years ago! – Alik Jun 07 '15 at 17:06
  • @AvinashRaj install it with pip – Alik Jun 07 '15 at 17:08
  • but I installed other modules with `sudo apt-get install ...` – Avinash Raj Jun 07 '15 at 17:09
  • @AvinashRaj then make a virtual environment and install Scrapy inside the environment with pip. Or use Scrapy's repository. Installation instructions are available at http://doc.scrapy.org/en/latest/topics/ubuntu.html – Alik Jun 07 '15 at 17:11
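The mismatch discussed above (installed 0.14.4 vs. latest 0.24.6) is easy to check programmatically. This is a minimal sketch, not part of Scrapy; the `parse_version` helper is hypothetical and only handles plain dotted versions:

```python
def parse_version(version):
    """Split a dotted version string like '0.14.4' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

installed = parse_version("0.14.4")    # version shown in the log line above
documented = parse_version("0.24.6")   # latest release mentioned in the comments

# Tuples compare element-wise, so 0.14.4 correctly sorts before 0.24.6
# (a plain string comparison would get this wrong: "0.14.4" > "0.24.6" is False
# here only by luck; "0.9" > "0.14" as strings, which is why tuples are used).
print(installed < documented)  # True
```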

Use `sudo pip install scrapy`. If you get a `Python.h` missing error, install the Python header files with `sudo apt-get install python-dev` (reference)

Shadi

Maybe try:

from scrapy import Spider

Just importing the module isn't enough if you want to use its classes
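For what it's worth, on recent Scrapy versions `import scrapy` followed by `scrapy.Spider` does work, because the class is exposed at package level; the real cause here is the old version. When the attribute exists, the two import styles are interchangeable, as this sketch using a standard-library module (in place of Scrapy) shows:

```python
# Style 1: import the module, then access the class as an attribute
import collections
d1 = collections.OrderedDict(a=1)

# Style 2: import the class directly from the module
from collections import OrderedDict
d2 = OrderedDict(a=1)

# Both names refer to the same class object
print(type(d1) is type(d2))  # True
```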

kedam6