
I want to save multiple websites that use HTML and JavaScript. Since some of the information will be "processed" later on, it would be useful to save the sites as, let's say, "pure" HTML; JavaScript unfortunately hides some of the information and links. The result should be similar to what Firefox produces when you save a page: the saved page doesn't contain any JavaScript elements.

What I've tried so far:

  • Finding a way to drive Firefox from the command line: didn't work.
  • Looking for the often-mentioned JavaScript plugin for wget: it hasn't been released yet.

Any ideas? Thank you! (A rough sketch of the kind of workflow I'm after is below.)
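To make the goal concrete: I imagine something like the sketch below, which drives a browser engine without a window, lets it execute the page's JavaScript, and then writes out the resulting DOM as plain HTML. This is only a rough, untested idea using Selenium with headless Firefox; the URL and output filename are placeholders, not a site I'm actually scraping.

```python
# Rough sketch (untested): dump the JavaScript-rendered DOM of a page
# using Selenium with headless Firefox. URL and output file are placeholders.
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("-headless")          # run Firefox without a window

driver = webdriver.Firefox(options=options)
try:
    driver.get("http://example.com")       # placeholder URL
    # page_source returns the DOM as HTML after scripts have run
    html = driver.page_source
    with open("page.html", "w", encoding="utf-8") as f:
        f.write(html)
finally:
    driver.quit()
```

Presumably grabbing page_source right after get() could miss content loaded late by AJAX, so in practice some kind of wait might be needed, but this is the general shape of what I'm looking for.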

Tomukas
  • possible duplicate of [Interpret and execute arbitrary Javascript in Linux CLI](http://stackoverflow.com/questions/957375/interpret-and-execute-arbitrary-javascript-in-linux-cli) – plasmid87 Nov 13 '13 at 12:17
  • If Java wasn't a limitation, you could use the WebKit or Gecko engines in a custom app. – CodeChimp Nov 13 '13 at 13:13
  • duplicate of [get a browser rendered html+javascript](http://stackoverflow.com/q/18720218) – Lucas Cimon Jan 10 '14 at 19:16

0 Answers