I want to scrape some data from a webpage (https://www.evaschulze-aufgabenpool.de/index.php/s/smwP6ygck2SXRtF?path=%2FKlasse12) with Python and Selenium, but the content I'm after is generated dynamically, and to see all of it you have to scroll down the page. More specifically, I want to get all the folder names shown on the site, but it doesn't work. My attempt to simply scroll the whole page down with Selenium also doesn't seem to work correctly, and I don't know what I'm doing wrong or what else I could try. So my question is: how can I make sure I always get all of the dynamically generated folders on the website?
Here's the code I'm using:
from time import sleep
from selenium import webdriver

url = "https://www.evaschulze-aufgabenpool.de/index.php/s/smwP6ygck2SXRtF?path=%2FKlasse12"

driver = webdriver.Chrome("chromedriver.exe")
driver.get(url)
driver.maximize_window()
sleep(3)

# try to scroll the page down so the lazy-loaded rows appear
for i in range(5):
    driver.execute_script("window.scrollTo(0, 1080)")
    sleep(3)

# read the folder names from the rows of the file table
data = driver.find_element_by_tag_name("table")
data = data.find_elements_by_tag_name("tr")
for element in data:
    name = element.get_attribute("data-file")
    if name is not None:
        print(name)

driver.quit()
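
One idea I've seen for pages like this is to keep scrolling to the bottom until document.body.scrollHeight stops changing, and only then read the rows. Below is a rough sketch of that idea adapted to my code. I'm guessing that the window itself is the scrolling element (rather than some inner container), and the tr[data-file] selector is also an assumption on my part, so I'm not sure this is the right direction either:

from time import sleep
from selenium import webdriver
from selenium.webdriver.common.by import By

url = "https://www.evaschulze-aufgabenpool.de/index.php/s/smwP6ygck2SXRtF?path=%2FKlasse12"

driver = webdriver.Chrome("chromedriver.exe")
driver.get(url)
driver.maximize_window()
sleep(3)

# scroll to the bottom repeatedly until the page height stops growing,
# i.e. (hopefully) until no more rows are being lazy-loaded
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight)")
    sleep(2)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break
    last_height = new_height

# collect the data-file attribute of every table row that has one
# (tr[data-file] is my guess at how the rows are marked up)
for row in driver.find_elements(By.CSS_SELECTOR, "table tr[data-file]"):
    print(row.get_attribute("data-file"))

driver.quit()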