scrapy parse_item method is not getting called

Here is my code. My parse_item method is not getting called.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class SjsuSpider(CrawlSpider):

    name = 'sjsu'
    allowed_domains = ['']
    start_urls = ['']
    # allow=() is used to match all links
    rules = [Rule(SgmlLinkExtractor(allow=()), follow=True),
             Rule(SgmlLinkExtractor(allow=()), callback='parse_item')]

    def parse_item(self, response):
        print "some message"
        open("sjsupages", 'a').write(response.body)


Your allowed domain should be ''.

Note that entries in allowed_domains are bare domain names, without the http:// scheme or a path; Scrapy's offsite filtering then accepts that domain and its subdomains.
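For illustration only (example.com is a placeholder, not your site), the expected shapes are:

```python
# Placeholder values -- substitute your real domain and start page.
allowed_domains = ['example.com']     # bare domain: no scheme, no path
start_urls = ['http://example.com/']  # start URLs DO include the scheme

print(allowed_domains, start_urls)
```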

Also, the reason parse_item is never called: when more than one rule matches a link, CrawlSpider applies only the first matching rule. Your first rule matches every link and has no callback, so the second rule never fires. Combine them into a single rule:

rules = [Rule(SgmlLinkExtractor(), follow=True, callback='parse_item')]
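The first-match-wins behavior can be sketched in plain Python (a simplified model, not Scrapy's actual implementation; the Rule class and dispatch helper here are invented for illustration):

```python
# Simplified model of how CrawlSpider picks a rule for each extracted link:
# the FIRST rule whose extractor matches wins, so a callback-less catch-all
# rule placed first starves every later rule.

class Rule:
    def __init__(self, matches, callback=None, follow=True):
        self.matches = matches    # predicate standing in for a LinkExtractor
        self.callback = callback  # None means "follow only, don't parse"
        self.follow = follow

def dispatch(link, rules):
    """Return the callback chosen for `link` under first-match-wins."""
    for rule in rules:
        if rule.matches(link):
            return rule.callback
    return None

match_all = lambda link: True

# The question's setup: catch-all rule first, callback rule second.
broken = [Rule(match_all), Rule(match_all, callback='parse_item')]
# The suggested fix: one rule that both follows and parses.
fixed = [Rule(match_all, callback='parse_item')]

print(dispatch('http://example.com/page', broken))  # None: parse_item never runs
print(dispatch('http://example.com/page', fixed))   # 'parse_item'
```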

