## Task 6: Query the full CDX index and download those captures from AWS S3
Some of our users only want to download a small subset of the crawl. They want to run queries against an index: either the CDX index we just talked about, or the columnar index, which we'll talk about later.
The CDX server API is documented [here](https://github.com/webrecorder/pywb/wiki/CDX-Server-API#api-reference) and can be accessed through an HTTP API.
Right now there is no Java-specific tool for querying the CDX index; nevertheless, we do have a very useful Python tool for working with it: [cdx_toolkit](https://github.com/cocrawler/cdx_toolkit). Please refer to the [Python Whirlwind Tour](https://github.com/commoncrawl/whirlwind-python) for more details.
In this task we will achieve the same results using direct HTTP API calls and JWARC.
There's a lot going on here so let's unpack it a little.
#### Check that the crawl has a record for the page we are interested in
We check for capture results by querying index.commoncrawl.org with GET parameters, specifying the crawl (`CC-MAIN-2024-22-index`), the exact URL `an.wikipedia.org/wiki/Escopete`, and the timestamp range `from=20240518015810` to `to=20240518015810`.
The result of this tells us that the crawl successfully fetched this page at timestamp `20240518015810`.
* Captures are named by their SURT key and timestamp.
* You can use the parameter `limit=<N>` to limit the number of results returned; in this case, because we have restricted the timestamp range to a single value, we expect only one result.
* URLs may be specified with wildcards to return even more results: `"an.wikipedia.org/wiki/Escop*"` matches `an.wikipedia.org/wiki/Escopulión` and `an.wikipedia.org/wiki/Escopete`.
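As a sketch of what such a query looks like from Java, the following builds the index URL with the parameters described above. The class name and helper are illustrative, and `output=json` is one of the CDX Server API's output options; see the API reference for the full parameter list.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class CdxQuery {
    // Builds a CDX Server API query URL for a given crawl index and parameters.
    static String buildQueryUrl(String crawlIndex, Map<String, String> params) {
        String query = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
                .collect(Collectors.joining("&"));
        return "https://index.commoncrawl.org/" + crawlIndex + "?" + query;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("url", "an.wikipedia.org/wiki/Escopete");
        params.put("from", "20240518015810");
        params.put("to", "20240518015810");
        params.put("output", "json");  // one JSON object per matching capture, one per line
        System.out.println(buildQueryUrl("CC-MAIN-2024-22-index", params));
    }
}
```

Fetching the printed URL (with `java.net.http.HttpClient`, cURL, or a browser) returns the capture records for this page.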
#### Retrieve the fetched content as WARC
Next, we make another HTTP call to retrieve the content and save it locally as a new WARC file, again specifying the exact URL, crawl identifier, and timestamp range.
This creates the WARC file `TEST-000000.extracted.warc.gz`.
* If you check the cURL command, you'll see that it uses the offset and length of the WARC record (as returned by the CDX index query) to make an HTTP byte-range request to `data.commoncrawl.org`, isolating and returning just the single record we want from the full file. Only the response record is downloaded, because our CDX index indexes only the response records.
* The same limit, timestamp, and crawl index parameters apply here, as do URL wildcards.
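A minimal Java sketch of the byte-range request described above. The URL path, offset, and length are placeholders; in practice you would take the `filename`, `offset`, and `length` fields from the index query result.

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RangeFetch {
    // The CDX index gives us the record's byte offset and compressed length;
    // an HTTP Range header of the form "bytes=first-last" (inclusive) isolates it.
    static String rangeHeader(long offset, long length) {
        return "bytes=" + offset + "-" + (offset + length - 1);
    }

    public static void main(String[] args) {
        long offset = 123456789L; // placeholder: the "offset" field from the index result
        long length = 4321L;      // placeholder: the "length" field from the index result
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://data.commoncrawl.org/example.warc.gz")) // hypothetical; use the "filename" field
                .header("Range", rangeHeader(offset, length))
                .build();
        // Sending this request returns just the gzipped WARC record, which can
        // be written directly to a local .warc.gz file.
        System.out.println(request.headers().firstValue("Range").orElse(""));
    }
}
```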
### Indexing the WARC and viewing its contents
Finally, we run `jwarc cdxj`, which processes the WARC to produce a CDXJ index of it as in Task 3, and then we list the records using `jwarc ls` as in Task 2.
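Entries in a CDXJ index are keyed by SURT key and timestamp, followed by a JSON blob. A minimal sketch of pulling those two fields out of one line (the example line itself is hypothetical):

```java
public class CdxjLine {
    public static void main(String[] args) {
        // A hypothetical line in CDXJ shape: SURT key, timestamp, JSON blob.
        String line = "org,wikipedia,an)/wiki/escopete 20240518015810 "
                + "{\"url\": \"https://an.wikipedia.org/wiki/Escopete\"}";
        // Split into at most three fields so spaces inside the JSON are preserved.
        String[] parts = line.split(" ", 3);
        System.out.println("surt=" + parts[0] + " timestamp=" + parts[1]);
        // prints: surt=org,wikipedia,an)/wiki/escopete timestamp=20240518015810
    }
}
```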
## Task 7: Find the right part of the columnar index
1. Use the DuckDB techniques from [Task 8](#task-8-query-using-the-columnar-index--duckdb-from-outside-aws) and the [Index Server](https://index.commoncrawl.org) to find a new webpage in the archives.
2. Note its URL, WARC filename, and timestamp.
3. Now open up the Makefile from [Task 6](#task-6-query-the-full-cdx-index-and-download-those-captures-from-aws-s3) and look at the actions from the cdx_toolkit section.
4. Repeat the cdx_toolkit steps, but for the page and date range you found above.
## Congratulations!
You have completed the Whirlwind Tour of Common Crawl's Datasets using Java! You should now understand different filetypes we have in our corpus and how to interact with Common Crawl's datasets using Java. To see what other people have done with our data, see the [Examples page](https://commoncrawl.org/examples) on our website. Why not join our Discord through the Community tab?