
I would like to download images in bulk using Google image search.

My first method, downloading the page source to a file and then opening it with open(), works fine, but I would like to be able to fetch image URLs by just running the script and changing the keywords.

First method: go to the image search (https://www.google.no/search?q=tower&client=opera&hs=UNl&source=lnms&tbm=isch&sa=X&ved=0ahUKEwiM5fnf4_zKAhWIJJoKHYUdBg4Q_AUIBygB&biw=1920&bih=982), view the page source in the browser, and save it to an HTML file. When I then open() that HTML file with the script, the script works as expected and I get a neat list of the URLs of all the images on the search page. This is what line 6 of the script does (uncomment to test).

If, however, I use the requests.get() function to fetch the web page, as shown in line 7 of the script, it receives a different HTML document that does not contain the full URLs of the images, so I cannot extract them.

Please help me extract the correct urls of the images.

Edit: here is a link to the tower.html I am using: https://www.dropbox.com/s/yy39w1oc8sjkp3u/tower.html?dl=0

This is the code I have written so far:

import requests
from bs4 import BeautifulSoup

# define the url to be scraped
url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982'

# top line is using the attached "tower.html" as source, bottom line is using the url. The html file contains the source of the above url.
#page = open('tower.html', 'r').read()
page = requests.get(url).text

# parse the text as html
soup = BeautifulSoup(page, 'html.parser')

# iterate over all "a" elements
for raw_link in soup.find_all('a'):
    link = raw_link.get('href')
    # only keep links that are strings and contain "imgurl" (there are other,
    # uninteresting links on the page)
    if type(link) == str and 'imgurl' in link:
        # print the part of the link between "=" and "&", which is the actual
        # URL of the image
        print(link.split('=')[1].split('&')[0])
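As a side note, splitting on "=" and "&" is fragile if parameter order ever changes. Python's standard urllib.parse module can pull the imgurl parameter out of the query string more robustly. A minimal sketch (the example href below is made up to match the shape of links in the saved page source):

```python
from urllib.parse import urlparse, parse_qs

def extract_imgurl(href):
    """Return the imgurl query parameter from a Google Images link, or None."""
    query = parse_qs(urlparse(href).query)
    values = query.get('imgurl')
    return values[0] if values else None

# Hypothetical link of the shape found in the saved tower.html source:
href = '/imgres?imgurl=http://example.com/tower.jpg&imgrefurl=http://example.com/'
print(extract_imgurl(href))  # http://example.com/tower.jpg
```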
  • Possibly related: https://stackoverflow.com/a/22871658/1291371, https://stackoverflow.com/a/35810041/1291371, https://stackoverflow.com/a/28487500/1291371 – Illia Zub Mar 27 '20 at 18:55

2 Answers


Just so you're aware:

# http://www.google.com/robots.txt

User-agent: *
Disallow: /search
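As an aside, Python's standard urllib.robotparser module can check rules like these programmatically. A small offline sketch, feeding the parser the two lines above directly rather than fetching them over the network:

```python
from urllib.robotparser import RobotFileParser

# Parse the relevant lines from Google's robots.txt directly.
rp = RobotFileParser()
rp.parse([
    'User-agent: *',
    'Disallow: /search',
])

print(rp.can_fetch('*', 'http://www.google.com/search?q=tower'))  # False
```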



I would like to preface my answer by saying that Google relies heavily on scripting. It's very possible that you're getting different results because the page you request via requests doesn't execute the scripts supplied on the page, whereas loading the page in a web browser does.

When I request the URL you supplied, the text I get back from requests.get(url).text doesn't contain 'imgurl' anywhere. Your script is looking for that as part of its criteria, and it's not there.

I do, however, see a bunch of <img> tags with the src attribute set to an image URL. If that's what you're after, then try this script:

import requests
from bs4 import BeautifulSoup

url = 'https://www.google.no/search?q=tower&client=opera&hs=cTQ&source=lnms&tbm=isch&sa=X&ved=0ahUKEwig3LOx4PzKAhWGFywKHZyZAAgQ_AUIBygB&biw=1920&bih=982'

# page = open('tower.html', 'r').read()
page = requests.get(url).text

soup = BeautifulSoup(page, 'html.parser')

for raw_img in soup.find_all('img'):
    link = raw_img.get('src')
    if link:
        print(link)

Which returns the following results:

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQyxRHrFw0NM-ZcygiHoVhY6B6dWwhwT4va727380n_IekkU9sC1XSddAg
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRfuhcCcOnC8DmOfweuWMKj3cTKXHS74XFh9GYAPhpD0OhGiCB7Z-gidkVk
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSOBZ9iFTXR8sGYkjWwPG41EO5Wlcv2rix0S9Ue1HFcts4VcWMrHkD5y10
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcTEAZM3UoqqDCgcn48n8RlhBotSqvDLcE1z11y9n0yFYw4MrUFucPTbQ0Ma
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcSJvthsICJuYCKfS1PaKGkhfjETL22gfaPxqUm0C2-LIH9HP58tNap7bwc
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQGNtqD1NOwCaEWXZgcY1pPxQsdB8Z2uLGmiIcLLou6F_1c55zylpMWvSo
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSdRxvQjm4KWaxhAnJx2GNwTybrtUYCcb_sPoQLyAde2KMBUhR-65cm55I
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQLVqQ7HLzD7C-mZYQyrwBIUjBRl8okRDcDoeQE-AZ2FR0zCPUfZwQ8Q20
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQHNByVCZzjSuMXMd-OV7RZI0Pj7fk93jVKSVs7YYgc_MsQqKu2v0EP1M0
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcS_RUkfpGZ1xJ2_7DCGPommRiIZOcXRi-63KIE70BHOb6uRk232TZJdGzc
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSxv4ckWM6eg_BtQlSkFP9hjRB6yPNn1pRyThz3D8MMaLVoPbryrqiMBvlZ
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcQWv_dHMr5ZQzOj8Ort1gItvLgVKLvgm9qaSOi4Uomy13-gWZNcfk8UNO8
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcRRwzRc9BJpBQyqLNwR6HZ_oPfU1xKDh63mdfZZKV2lo1JWcztBluOrkt_o
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcQdGCT2h_O16OptH7OofZHNvtUhDdGxOHz2n8mRp78Xk-Oy3rndZ88r7ZA
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcRnmn9diX3Q08e_wpwOwn0N7L1QpnBep1DbUFXq0PbnkYXfO0wBy6fkpZY
https://encrypted-tbn2.gstatic.com/images?q=tbn:ANd9GcSaP9Ok5n6dL5K1yKXw0TtPd14taoQ0r3HDEwU5F9mOEGdvcIB0ajyqXGE
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTcyaCvbXLYRtFspKBe18Yy5WZ_1tzzeYD8Obb-r4x9Yi6YZw83SfdOF5fm
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTnS1qCjeYrbUtDSUNcRhkdO3fc3LTtN8KaQm-rFnbj_JagQEPJRGM-DnY0
https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSiX_elwJQXGlToaEhFD5j2dBkP70PYDmA5stig29DC5maNhbfG76aDOyGh
https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQb3ughdUcPUgWAF6SkPFnyiJhe9Eb-NLbEZl_r7Pvt4B3mZN1SVGv0J-s
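If those thumbnail URLs are all you need, they can be saved to disk with a small loop. A rough sketch using only the standard library (the destination directory name thumbs/ and the filename scheme are made up for illustration):

```python
import os
from urllib.request import urlopen

def thumb_filename(index):
    # Derive a zero-padded filename from the loop index.
    return "thumb_%03d.jpg" % index

def download_thumbs(urls, dest="thumbs"):
    # Create the destination directory, then fetch and save each thumbnail.
    os.makedirs(dest, exist_ok=True)
    for i, url in enumerate(urls):
        with urlopen(url, timeout=10) as resp:
            with open(os.path.join(dest, thumb_filename(i)), "wb") as f:
                f.write(resp.read())

print(thumb_filename(0))  # thumb_000.jpg
```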
– ngoue
  • I have tried scraping using urllib, which mostly gives me "Forbidden" back, which I believe is because of the Disallow rule that you mention. urllib works fine for anything but Google Images. I am aware that there are no "imgurl"s in the text that requests fetches. The results that you get are the thumbnails of the images. That is better than nothing, but I would like to harvest the full-resolution images, and the fetched text never contains them. Is there any way to make requests follow the scripts and actually fetch the address of the source image? – Mikkel Bue Tellus Feb 16 '16 at 18:22
  • That's exactly why it gives you the "Forbidden" back. They've got a whole module built to parse a website's robots.txt file and determine if scraping is allowed. You could try using the `re` library and use regular expressions to find values instead. However, I think it'll be difficult to scrape Google's search pages... They make it hard for a reason. – ngoue Feb 16 '16 at 18:24
  • Anyway, thanks for the edit that fetches the thumbnails :) – Mikkel Bue Tellus Feb 16 '16 at 18:31

You can find the image URLs by looking at the 'data-src' or 'src' attribute of each <img> tag.


import os
from urllib.request import urlopen, Request

from bs4 import BeautifulSoup

REQUEST_HEADER = {
    'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.134 Safari/537.36"}

def get_images_new(prod_id):
    man_code = "apple"  # anything you want to search for
    url = "https://www.google.com.au/search?q=%s&source=lnms&tbm=isch" % man_code
    response = urlopen(Request(url, headers=REQUEST_HEADER))
    html = response.read().decode('utf-8')
    soup = BeautifulSoup(html, "html.parser")
    # The thumbnails carry their URL in "data-src" (lazy-loaded) or "src".
    image_elements = soup.find_all("img", {"class": "rg_i Q4LuWd"})
    i = 1
    for img in image_elements:
        image = img.get('data-src')
        if image and i < 7:  # only keep the first six images
            path = "/your/directory/" + str(prod_id)  # your directory
            if not os.path.exists(path):
                os.mkdir(path)
            # Fetch the thumbnail and save it under a numbered filename.
            req = Request(image, headers=REQUEST_HEADER)
            resp = urlopen(req)
            with open(path + "/" + str(i) + ".png", 'wb') as imagefile:
                imagefile.write(resp.read())
            i += 1

  • Please elaborate your answer further with more details. Your current answer *You can find the attributes by using 'data-src' or 'src' attribute.* is very generic and doesn't seem to actually answer OP's question / problem as given in the question's description. – Ivo Mori Sep 29 '20 at 00:36