
I'm trying to get my script to fetch the contents of a URL, or otherwise log an error.

I saw an example in Correct way to try/except using Python requests module?, but it doesn't seem to work for the URL in the code below.

The URL is broken, so I would expect the script to hit the except block and log the error. Instead it just gets stuck, with no result and no error.

import requests
import sys

url = 'https://m.sportsinteraction.com/fr/football/international/coupe-du-monde-feminine-pari/fifawomen-wc-219-reach-the-semi-finals-scotland-05-21-2019-322-1609228/'
try:
    r = requests.get(url)
except requests.exceptions.RequestException as e:
    print(e)
    sys.exit(1)

Below is a snip of the error I get:

(screenshot of the error omitted)

Michael Okelola

1 Answer


This problem is quite an interesting one because of the following:

  1. The script is syntactically correct

  2. The URL opens in certain locations

Since I'm using an older Chrome, I initially tried the solution from Python - selenium webdriver stuck at .get() in a loop, but the problem persisted.

The next thing I tried was to put a timeout on the get() call:

import requests
import sys

url = 'https://m.sportsinteraction.com/fr/football/international/coupe-du-monde-feminine-pari/fifawomen-wc-219-reach-the-semi-finals-scotland-05-21-2019-322-1609228/'
try:
    r = requests.get(url, timeout=3)
except requests.exceptions.RequestException as e:
    print(e)
    sys.exit(1)

This solution worked: after the stipulated time the request is aborted and a requests.exceptions.Timeout is raised. Since Timeout is a subclass of RequestException, the existing except clause catches it and control passes to the except block as expected.
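For what it's worth, requests also accepts a (connect, read) tuple as the timeout, which lets you bound the connection phase and the server-response phase separately. A minimal sketch (the URL here is a placeholder, not the one from the question, and the 3/5 second values are arbitrary):

```python
import requests
import sys

url = 'https://example.com/'  # placeholder URL for illustration

try:
    # timeout=(connect, read): fail if establishing the connection takes
    # more than 3 s, or if the server takes more than 5 s to send data
    r = requests.get(url, timeout=(3, 5))
except requests.exceptions.Timeout as e:
    # Timeout (and its subclasses ConnectTimeout / ReadTimeout) inherits
    # from RequestException, so a broad RequestException handler like the
    # one above would catch these as well
    print(e)
    sys.exit(1)
```

Catching Timeout separately is useful when you want to retry on timeouts but treat other request errors (DNS failure, bad status after raise_for_status(), etc.) as fatal.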

Michael Okelola