How to return to the calling parse function while using yield in Scrapy?

Here's what I want to achieve:

class Hello(Spider):
    # some stuff
    def parse(self, response):
        # get a list of city URLs using pickle and store them in a list;
        # then, for each city URL, get the list of monuments (using selenium)
        # via the loops below
        for c in cities:
            # get the list of monuments using selenium and iterate through
            # each monument URL contained in the division
            divs = sel.xpath('some xpath/div')
            for div in divs:
                monument_url = ''.join(div.xpath('some xpath'))
                # for each monument URL, get the response and scrape the information
                yield Request(monument_url, self.parse_monument)

    def parse_monument(self, response):
        # scrape some information and return to the loop
        # (i.e. return to "for div in divs:")
        pass

Now what's happening is:

1. I get the list of all the monuments in all the cities before the yield statement is executed.
2. Whenever the yield statement is executed, control goes to the parse_monument function and never returns to the loop, so only the monuments of the first city get scraped.

Is there any way to do this? Is there any way to get the response object that the Request passes to parse_monument without going through the parse_monument method, so that I can use selectors to select the elements I need from the response?

Thank you!


ANSWERS:


I don't think you can use a callback the way you did. Here's a refactor:

import scrapy
from scrapy import Request


class HelloSpider(scrapy.Spider):
    name = "hello"
    allowed_domains = ["hello.com"]
    start_urls = (
        'http://hello.com/cities',  # trailing comma needed: without it this is a string, not a tuple
    )

    def parse(self, response):
        cities = ['London', 'Paris', 'New-York', 'Shanghai']
        for city in cities:
            xpath_exp = 'some xpath[city="' + city + '"]/div/some xpath'
            for monument_url in response.xpath(xpath_exp).extract():
                yield Request(monument_url, callback=self.parse_monument)

    def parse_monument(self, response):
        pass
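
One practical caveat worth adding here: extract() may give you relative URLs, in which case the Request will be rejected. If so, you can resolve them against the page URL, e.g. with response.urljoin, or use response.follow (available since Scrapy 1.4), which accepts relative URLs directly. A minimal sketch of the same loop (the XPath placeholders are still made up):

    def parse(self, response):
        cities = ['London', 'Paris', 'New-York', 'Shanghai']
        for city in cities:
            xpath_exp = 'some xpath[city="' + city + '"]/div/some xpath'
            for monument_url in response.xpath(xpath_exp).extract():
                # response.follow resolves relative URLs against response.url
                yield response.follow(monument_url, callback=self.parse_monument)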

Request is an object, not a method. Scrapy processes each yielded Request object and executes its callback asynchronously, whenever the corresponding response arrives; control never returns to the loop that yielded it. You can think of a yielded Request as a task handed off to a scheduler rather than as a function call.
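
To see why the loop never "resumes", here is a toy, pure-Python sketch of the scheduling idea (this is only an illustration, not Scrapy's actual engine; the URLs and helpers are made up):

    # toy illustration: the "engine" drains the generator, collecting
    # (url, callback) pairs, and runs the callbacks separately later
    def parse():
        for city in ['London', 'Paris']:
            for i in range(2):
                # each yield hands a request to the engine; the loop keeps
                # going, but the callback's result never flows back into it
                yield ('http://example.com/%s/%d' % (city, i), parse_monument)

    def parse_monument(url):
        print('callback ran for', url)

    queue = list(parse())          # all requests are collected first
    for url, callback in queue:    # then callbacks run, detached from the loop
        callback(url)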

The solution is to do the reverse: pass the data you need from the parse method along with the Request, so you can process it inside parse_monument.

class Hello(Spider):

    def parse(self, response):
        for c in cities:
            divs = sel.xpath('some xpath/div')
            for div in divs:
                monument_url = ''.join(div.xpath('some xpath'))

                data = ...   # set the data that you need from this loop

                # pass the data into the request's meta
                yield Request(monument_url, self.parse_monument, meta={'data': data})

    def parse_monument(self, response):
        # retrieve the data from the response's meta
        data = response.meta.get('data')
        ...
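
Worth noting: since Scrapy 1.7 the documented alternative to meta for passing data to callbacks is cb_kwargs, which delivers the values as regular keyword arguments. A sketch of the same idea, assuming the surrounding spider class from above:

    def parse(self, response):
        ...
        yield Request(monument_url, self.parse_monument,
                      cb_kwargs={'data': data})

    def parse_monument(self, response, data):
        # 'data' arrives as a plain keyword argument, no meta lookup needed
        ...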


 MORE:


 - Nested yield requests in Scrapy
 - Python Yield prevents output/execution in Scrapy Web Spider Crawler
 - Yield both items and callback request in scrapy
 - Getting error with yield in scrapy python
 - Combining base url with resultant href in scrapy
 - Scrapy not working with return and yield together