Curlgrep (often stylized as curlgrep or curl | grep) is the act of retrieving and filtering information from an external source, such as the Internet. The term is a concatenation of the names of two common command-line utilities, cURL and grep. Since many command-line interpreters allow the use of pipes and redirections, the two utilities can be combined to extract specific information from plaintext documents (e.g. HTML) stored on computer networks.


In computer administration

A curlgrep process may be invoked by retrieving content with the curl command, then using the pipe operator (|) to pass the printed output of curl to the grep command-line utility for filtering.

For example, to fetch a list of external links found on a webpage, a user may use the following command:

curl | grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*"

The curl utility fetches and displays the full content of the webpage hosted at the given address, while the grep utility filters the displayed result according to a regular expression, selecting only the parts of the text that form valid HTTP(S) URLs.
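A minimal runnable sketch of the pipeline above. Since the article does not name a target address, the fetched HTML is simulated here with printf; with a real address, the printf stage would be replaced by something like curl -s followed by the address.

```shell
# Simulated curl output (hypothetical HTML), piped into the same grep
# filter as above; grep -Eo prints each matching URL on its own line.
printf '<a href="https://example.com/a">a</a> <a href="http://example.org/b?q=1">b</a>\n' |
  grep -Eo "(http|https)://[a-zA-Z0-9./?=_%:-]*"
```

The -E flag enables extended regular expressions, and -o restricts the output to the matched portions rather than whole lines.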

In rootprints

Many rootheads use the word curlgrep to refer to the retrieval and filtering of external content, while the term catgrep may be used for retrieving and filtering information from an internal source (such as dotfiles). Some valid examples (in the English variant of rootprints) include:

  • i have curlgrepped that there are exactly 7114 711As on that website;
  • let’s see if eve can even curlgrep our encrypted feelings[], alice!
  • remember guys[], curlgrepping without *ptrs[] means plagiarism!
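The catgrep counterpart mentioned above can be sketched in the same way; the dotfile path and its contents here are hypothetical stand-ins.

```shell
# A minimal catgrep sketch: filter an internal source (a dotfile) rather
# than an external one. /tmp/.demo_rc and its lines are made up for the demo.
printf 'export EDITOR=vi\nalias ll="ls -l"\n' > /tmp/.demo_rc  # stand-in dotfile
cat /tmp/.demo_rc | grep -E '^alias'
```

The only difference from curlgrep is the source of the text: cat reads a local file, while curl retrieves a remote one; grep filters both identically.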