I am trying to use a loop to download a bunch of HTML pages and scrape data out of them. But those pages run some JavaScript on load, so I am thinking WebClient may not be a good choice. If I use WebBrowser like below, it returns an empty HTML string after the first call in the loop.

WebBrowser wb = new WebBrowser();
wb.ScrollBarsEnabled = false;
wb.ScriptErrorsSuppressed = true;
wb.Navigate(url);
while (wb.ReadyState != WebBrowserReadyState.Complete)
{
    Application.DoEvents();
    Thread.Sleep(1000);
}
html = wb.Document.DomDocument.ToString();
Mike Long

1 Answer

You are correct that WebClient and all of the other HTTP client interfaces will completely ignore JavaScript; none of them are browsers, after all.

You want:

var html = wb.Document.GetElementsByTagName("HTML")[0].OuterHtml;

Note that if you load via a WebBrowser you don't need to scrape the raw markup; you can use DOM methods like GetElementById/GetElementsByTagName and so on.

The while loop is very VBScript; there is a DocumentCompleted event you should wire your code into instead.


private void Whatever()
{
    WebBrowser wb = new WebBrowser();
    wb.DocumentCompleted += Wb_DocumentCompleted;

    wb.ScriptErrorsSuppressed = true;
    wb.Navigate("http://stackoverflow.com");
}

private void Wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    var wb = (WebBrowser)sender;

    var html = wb.Document.GetElementsByTagName("HTML")[0].OuterHtml;
    var domd = wb.Document.GetElementById("copyright").InnerText;
    /* ... */
}
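Since the original question was about looping over many pages, one way to adapt the event-based approach above is to queue the URLs and navigate to the next one from inside DocumentCompleted, so each page finishes loading (including its JavaScript) before the next request starts. This is a sketch, not from the answer; the class and field names (`PageScraper`, `_pending`) are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Windows.Forms;

public class PageScraper
{
    private readonly WebBrowser _wb = new WebBrowser();
    private readonly Queue<string> _pending;

    public PageScraper(IEnumerable<string> urls)
    {
        // Remember the pages still to visit.
        _pending = new Queue<string>(urls);

        _wb.ScriptErrorsSuppressed = true;
        _wb.DocumentCompleted += Wb_DocumentCompleted;
    }

    public void Start()
    {
        // Kick off the first navigation; the rest are chained
        // from the DocumentCompleted handler below.
        if (_pending.Count > 0)
            _wb.Navigate(_pending.Dequeue());
    }

    private void Wb_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
    {
        var wb = (WebBrowser)sender;

        // The page (and its scripts) have finished loading here.
        var html = wb.Document.GetElementsByTagName("HTML")[0].OuterHtml;
        /* ... scrape html ... */

        // Move on to the next page, if any remain.
        if (_pending.Count > 0)
            wb.Navigate(_pending.Dequeue());
    }
}
```

Note that WebBrowser must live on an STA UI thread with a running message loop, so this belongs inside a WinForms application rather than a console loop.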
Alex K.