
xidel
List of commands for xidel:
-
Print every row of a table in a separate variable:
$ xidel input.html -e 'table_rows:=(//table//tr)' --output-format=json
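For a quick local check, the command can be run against a small table created on the spot (input.html here is a hypothetical sample file); each <tr> element should come back bound to the table_rows variable in the JSON output:
$ printf '<table><tr><td>a</td></tr><tr><td>b</td></tr></table>' > input.html  # hypothetical sample
$ xidel input.html -e 'table_rows:=(//table//tr)' --output-format=json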
-
Print all URLs found by a Google search:
$ xidel https://www.google.com/search?q=test --extract "//a/extract(@href, 'url[?]q=([^&]+)&', 1)[. != '']"
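The inner extract() call is xidel's regex function (input string, pattern, capture group), so it can be tried in isolation on a sample redirect URL (the URL below is a hypothetical input, piped in so xidel has a data source):
$ echo | xidel -s - -e "extract('https://www.google.com/url?q=https://example.org&sa=U', 'url[?]q=([^&]+)&', 1)"  # hypothetical input URL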
-
Print the newest Stack Overflow questions with title and URL, using pattern matching on their RSS feed:
$ xidel http://stackoverflow.com/feeds --extract "<entry><title>{title:=.}</title><link>{uri:=@href}</link></entry>+"
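The trailing + repeats the pattern for every <entry> in the feed, and title:= and uri:= bind the matched values to variables. A minimal local test against a hypothetical one-entry feed file:
$ printf '<entry><title>Q1</title><link href="https://example.org/q1"/></entry>' > feed.xml  # hypothetical feed
$ xidel feed.xml --extract "<entry><title>{title:=.}</title><link>{uri:=@href}</link></entry>+"
-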
Read the pattern from example.xml (which will also check that the element containing "ood" is there, and fail otherwise):
$ xidel path/to/example.xml --extract "<x><foo>ood</foo><bar>{.}</bar></x>"
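Given that the pattern only requires the <foo> text to contain "ood" (per the description above), a hypothetical example.xml like the following would match, and xidel would print the <bar> text:
<x><foo>food</foo><bar>42</bar></x>
-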
Follow all links on a page and print the titles, with pattern matching:
$ xidel https://example.org --follow "<a>{.}</a>*" --extract "<title>{.}</title>"
-
Follow all links on a page and print the titles, with XPath:
$ xidel https://example.org --follow //a --extract //title
-
Check for unread Reddit mail (web scraping combining CSS, XPath, JSONiq, and automatic form evaluation):
$ xidel https://reddit.com --follow "form(css('form.login-form')[1], {'user': '$your_username', 'passwd': '$your_password'})" --extract "css('#mail')/@title"
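The form() function fills in and submits any HTML form, not just logins; as a sketch, this would submit a hypothetical search form found as the first <form> on a page and print the title of the result:
$ xidel https://example.org --follow "form(//form[1], {'q': 'xidel'})" --extract //title  # hypothetical form and field name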
-
Follow all links on a page and print the titles, with CSS selectors:
$ xidel https://example.org --follow "css('a')" --css title
-
Print the title of all pages found by a Google search and download them:
$ xidel https://www.google.com/search?q=test --follow "//a/extract(@href, 'url[?]q=([^&]+)&', 1)[. != '']" --extract //title --download '{$host}/'
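The {$host} in the --download target is expanded per followed page, so downloads are grouped into one directory per host, and the trailing / keeps each remote file name. The same idea works with any follow expression, e.g. saving every page linked from a hypothetical start URL:
$ xidel https://example.org --follow //a --download '{$host}/'  # hypothetical start URL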