{{Short description|Nonprofit web crawling and archive organization}}
{{Infobox dot-com company
| name = Common Crawl
| company_type = [[501(c)(3)]] non-profit
| foundation = 2007
| location = [[San Francisco, California]]; [[Los Angeles, California]], United States
| founder = [[Gil Elbaz]]
| key_people = [[Peter Norvig]], [[Nova Spivack]], [[Carl Malamud]], [[Rich Skrenta]], [[Eva Ho]]
| url = {{url|commoncrawl.org}}
| language = [[English language|English]]
| license = [[Apache 2.0]] (software) {{clarify|reason=dataset license?|date=November 2024}}
}}
 
'''Common Crawl''' is a [[nonprofit organization|nonprofit]] [[501(c) organization#501.28c.29.283.29|501(c)(3)]] organization that [[web crawler|crawls]] the web and freely provides its archives and datasets to the public.<ref name=latimes>{{cite news |title=Tech entrepreneur Gil Elbaz made it big in L.A. |author=Rosanna Xia |work=Los Angeles Times |date=February 5, 2012 |access-date=July 31, 2014 |url=https://www.latimes.com/business/la-xpm-2012-feb-05-la-fi-himi-elbaz-20120205-story.html}}</ref><ref name=pressheretv>{{cite news |title=Gil Elbaz and Common Crawl |work=NBC News |date=April 4, 2013 |access-date=July 31, 2014 |url=http://www.pressheretv.com/gil-elbaz-and-common-crawl/}}</ref> Common Crawl's [[Web archiving|web archive]] consists of [[petabyte|petabytes]] of data collected since 2008.<ref name="ready">{{cite web |url=https://commoncrawl.org/the-data/get-started/ |title=So you're ready to get started |publisher=Common Crawl |access-date=9 June 2023}}</ref> It completes crawls approximately once a month.<ref name=theverge>{{cite news |title=Winter 2013 Crawl Data Now Available |author=Lisa Green |date=January 8, 2014 |access-date=June 2, 2018 |url=https://commoncrawl.org/2014/01/winter-2013-crawl-data-now-available/}}</ref>
 
Common Crawl was founded by [[Gil Elbaz]].<ref name=twist>{{cite news |title=Startups - Gil Elbaz and Nova Spivack of Common Crawl - TWiST #222|publisher=This Week In Startups|date=January 10, 2012}}</ref> Advisors to the non-profit include [[Peter Norvig]] and [[Joi Ito]].<ref name=technologyreview>{{cite news |title=A Free Database of the Entire Web May Spawn the Next Google|author=Tom Simonite|publisher=MIT Technology Review|date=January 23, 2013|access-date=July 31, 2014|url=http://www.technologyreview.com/news/509931/a-free-database-of-the-entire-web-may-spawn-the-next-google/|archive-date=June 26, 2014|archive-url=https://web.archive.org/web/20140626114525/http://www.technologyreview.com/news/509931/a-free-database-of-the-entire-web-may-spawn-the-next-google/|url-status=dead}}</ref> The organization's crawlers respect [[nofollow]] and [[Robot exclusion standard|robots.txt]] policies. Open source code for processing Common Crawl's data set is publicly available.
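
The archives are distributed as [[Web ARChive|WARC]] files, which can be processed with standard open-source tooling. The following is a minimal sketch, not Common Crawl's own code, of iterating over the records of a downloaded archive segment using the third-party <code>warcio</code> Python library; the file name is a placeholder.

<syntaxhighlight lang="python">
# Minimal sketch (illustrative, not Common Crawl's own tooling): iterate over
# the records of a locally downloaded Common Crawl WARC segment with the
# open-source warcio library (pip install warcio). The file name is a placeholder.
from warcio.archiveiterator import ArchiveIterator

with open("segment.warc.gz", "rb") as stream:
    for record in ArchiveIterator(stream):
        # Each fetched page is stored as a "response" record, alongside
        # separate "request" and "metadata" records.
        if record.rec_type == "response":
            url = record.rec_headers.get_header("WARC-Target-URI")
            body = record.content_stream().read()  # raw HTTP payload bytes
            print(url, len(body))
</syntaxhighlight>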
 
The Common Crawl dataset includes copyrighted work and is distributed from the US under [[fair use]] claims. Researchers in other countries have used techniques such as shuffling sentences or referencing the Common Crawl dataset to work around copyright law in other [[Jurisdiction|legal jurisdictions]].<ref>{{Cite journal |last=Schäfer |first=Roland |title=CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws |url=https://aclanthology.org/L16-1712 |journal=Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16) |date=May 2016 |location=Portorož, Slovenia |publisher=European Language Resources Association (ELRA) |pages=4501}}</ref>
 
English is the primary language for 46% of documents in the March 2023 version of the Common Crawl dataset. The next most common primary languages are German, Russian, Japanese, French, Spanish and Chinese, each with less than 6% of documents.<ref>{{Cite web |title=Statistics of Common Crawl Monthly Archives by commoncrawl |url=https://commoncrawl.github.io/cc-crawl-statistics/plots/languages.html |access-date=2023-04-02 |website=commoncrawl.github.io}}</ref>
 
==History==
[[Amazon Web Services]] began hosting Common Crawl's archive through its Public Data Sets program in 2012.<ref name=semanticweb_1>{{cite news |title=Common Crawl to Add New Data in Amazon Web Services Bucket |author=Jennifer Zaino |publisher=Semantic Web |date=March 13, 2012 |access-date=July 31, 2014 |url=http://semanticweb.com/common-crawl-to-add-new-data-in-amazon-web-services-bucket_b27341 |archive-url=https://web.archive.org/web/20140701235708/http://semanticweb.com/common-crawl-to-add-new-data-in-amazon-web-services-bucket_b27341 |archive-date=July 1, 2014 |url-status=dead}}</ref>
 
The organization began releasing [[metadata]] files and the text output of the crawlers alongside [[Heritrix#Arc files|.arc]] files in July 2012.<ref name=semanticweb_2>{{cite news |author=Jennifer Zaino |date=July 16, 2012 |title=Common Crawl Corpus Update Makes Web Crawl Data More Efficient, Approachable for Users to Explore |url=http://semanticweb.com/common-crawl-corpus-update-makes-web-crawl-data-more-efficient-approachable-for-users-to-explore_b30771 |archive-url=https://web.archive.org/web/20140812101154/http://semanticweb.com/common-crawl-corpus-update-makes-web-crawl-data-more-efficient-approachable-for-users-to-explore_b30771 |archive-date=August 12, 2014 |url-status=dead |publisher=Semantic Web |access-date=July 31, 2014}}</ref> Common Crawl's archives had only included .arc files previously.<ref name=semanticweb_2 />
 
In December 2012, [[blekko]] donated to Common Crawl the search engine [[metadata]] it had gathered from crawls conducted from February to October 2012.<ref name=semanticweb_3>{{cite news |author=Jennifer Zaino |date=December 18, 2012 |title=Blekko Data Donation Is a Big Benefit to Common Crawl |url=http://semanticweb.com/blekko-data-donation-is-a-big-benefit-to-common-crawl_b34177 |archive-url=https://web.archive.org/web/20140812101151/http://semanticweb.com/blekko-data-donation-is-a-big-benefit-to-common-crawl_b34177 |archive-date=August 12, 2014 |url-status=dead |publisher=Semantic Web |access-date=July 31, 2014}}</ref> The donated data helped Common Crawl "improve its crawl while avoiding spam, porn and the influence of excessive [[search engine optimization|SEO]]."<ref name=semanticweb_3 />
In 2013, Common Crawl began using the [[Apache Software Foundation]]'s [[Nutch]] web crawler instead of a custom crawler.<ref name=ccnutch>{{cite web |author=Jordan Mendelson |date=February 20, 2014 |title=Common Crawl's Move to Nutch |url=http://commoncrawl.org/common-crawl-move-to-nutch/ |publisher=Common Crawl |access-date=July 31, 2014}}</ref> Common Crawl switched from using .arc files to [[Web ARChive|.warc]] files with its November 2013 crawl.<ref name=ccnov2013>{{cite web |author=Jordan Mendelson |date=November 27, 2013 |title=New Crawl Data Available! |url=http://commoncrawl.org/new-crawl-data-available/ |publisher=Common Crawl |access-date=July 31, 2014}}</ref>
 
A filtered version of Common Crawl was used to train OpenAI's [[GPT-3]] language model, announced in 2020.<ref>{{Cite arXiv |last1=Brown |first1=Tom |last2=Mann |first2=Benjamin |last3=Ryder |first3=Nick |last4=Subbiah |first4=Melanie |last5=Kaplan |first5=Jared |last6=Dhariwal |first6=Prafulla |last7=Neelakantan |first7=Arvind |last8=Shyam |first8=Pranav |last9=Sastry |first9=Girish |last10=Askell |first10=Amanda |last11=Agarwal |first11=Sandhini |date=2020-06-01 |title=Language Models Are Few-Shot Learners |page=14 |eprint=2005.14165 |class=cs.CL |quote="the majority of our data is derived from raw Common Crawl with only quality-based filtering."}}</ref> One challenge of using Common Crawl data is that, although it documents a vast amount of web content, individual crawled pages are not well documented. This can make it difficult to diagnose problems in projects that use the data. A solution proposed by [[Timnit Gebru]] et al. in 2020 to this industry-wide documentation shortfall is that every dataset should be accompanied by a datasheet documenting its motivation, composition, collection process, and recommended uses.<ref>{{cite arXiv |first1=Timnit |last1=Gebru |first2=Jamie |last2=Morgenstern |first3=Briana |last3=Vecchione |first4=Jennifer |last4=Wortman Vaughan |first5=Hanna |last5=Wallach |first6=Hal |last6=Daumé III |first7=Kate |last7=Crawford |title=Datasheets for Datasets |date=March 19, 2020 |class=cs.DB |eprint=1803.09010}}</ref>
 
==Timeline of Common Crawl data==
 
The following data have been collected from the official Common Crawl Blog<ref>{{Cite web |url=https://commoncrawl.org/connect/blog/ |title=Blog – Common Crawl}}</ref> and Common Crawl's API.<ref>{{Cite web |url=https://index.commoncrawl.org/collinfo.json |title=Collection info - Common Crawl}}</ref>
 
{| class="wikitable"
|-
! Crawl date !! Size in [[Tebibyte|TiB]] !! Billions of pages !! Comments
|-
|April 2024
|386
|2.7
|Crawl conducted from April 12 to April 24, 2024
|-
|February/March 2024
|425
|3.16
|Crawl conducted from February 20 to March 5, 2024
|-
|December 2023
|454
|3.35
|Crawl conducted from November 28 to December 12, 2023
|-
|June 2023
|390
|3.1
|Crawl conducted from May 27 to June 11, 2023
|-
|April 2023
|400
|3.1
|Crawl conducted from March 20 to April 2, 2023
|-
|February 2023
|400
|3.15
|Crawl conducted from January 26 to February 9, 2023
|-
|December 2022
|420
|3.35
|Crawl conducted from November 26 to December 10, 2022
|-
|October 2022
|-
|August 2018
|220
|2.65
|—
|-
| March 2014 || 223 || 2.8 || First Nutch crawl
|-
| Winter 2013 || 148 || 2.3 || Crawl conducted from December 4 through December 22, 2013
|-
| Summer 2013 || ? || ? || Crawl conducted from May 2013 through June 2013. First WARC crawl
|-
| July 2012 || ? || ? || Crawl conducted from January 2012 through June 2012. Final ARC crawl
|-
| 2009-2010 || ? || ? || Crawl conducted from July 2009 through September 2010
|-
| 2008-2009 || ? || ? || Crawl conducted from May 2008 through January 2009
|}
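
The crawls listed above, together with their identifiers, can also be retrieved programmatically from the collection-info endpoint cited above. The following is a minimal sketch, assuming the third-party <code>requests</code> Python library and the field names the endpoint returned at the time of writing.

<syntaxhighlight lang="python">
# Minimal sketch (assumptions: the third-party requests library is installed,
# and each entry in collinfo.json carries "id" and "name" fields).
import requests

resp = requests.get("https://index.commoncrawl.org/collinfo.json", timeout=30)
resp.raise_for_status()
for crawl in resp.json():
    # e.g. "CC-MAIN-2024-18" - "April 2024 Index"
    print(crawl.get("id"), "-", crawl.get("name"))
</syntaxhighlight>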
 
==Norvig Web Data Science Award==
In collaboration with [[SURFsara]], Common Crawl sponsors the Norvig Web Data Science Award, a competition open to students and researchers in [[Benelux]].<ref name=ccaward>{{cite web |title=The Norvig Web Data Science Award |author=Lisa Green |publisher=Common Crawl |date=November 15, 2012 |access-date=July 31, 2014 |url=http://commoncrawl.org/the-norvig-web-data-science-award/}}</ref><ref name=dtlsaward>{{cite web |title=Norvig Web Data Science Award 2014 |publisher=Dutch Techcentre for Life Sciences |access-date=July 31, 2014 |url=http://www.dtls.nl/dtl/news/norvig-web-data-science-award-2014.html |archive-url=https://web.archive.org/web/20140815035946/http://www.dtls.nl/dtl/news/norvig-web-data-science-award-2014.html |archive-date=August 15, 2014 |url-status=dead}}</ref> The award is named for [[Peter Norvig]], who also chairs the judging committee for the award.<ref name=ccaward />
 
== Colossal Clean Crawled Corpus ==
{{Anchor|Colossal Clean Crawled Corpus}}
 
Google's version of the Common Crawl is called the Colossal Clean Crawled Corpus, or C4 for short. It was constructed for the training of the [[T5 (language model)|T5 language model series]] in 2019.<ref name=":0">{{Cite journal |last=Raffel |first=Colin |last2=Shazeer |first2=Noam |last3=Roberts |first3=Adam |last4=Lee |first4=Katherine |last5=Narang |first5=Sharan |last6=Matena |first6=Michael |last7=Zhou |first7=Yanqi |last8=Li |first8=Wei |last9=Liu |first9=Peter J. |date=2020 |title=Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |url=http://jmlr.org/papers/v21/20-074.html |journal=Journal of Machine Learning Research |volume=21 |issue=140 |pages=1–67 |issn=1533-7928}}</ref> There are some concerns over copyrighted content in C4.<ref>{{Cite news |last=Hern |first=Alex |date=2023-04-20 |title=Fresh concerns raised over sources of training material for AI systems |url=https://www.theguardian.com/technology/2023/apr/20/fresh-concerns-training-material-ai-systems-facist-pirated-malicious |access-date=2023-04-21 |work=The Guardian |language=en-GB |issn=0261-3077}}</ref>
 
==References==
{{Reflist}}

==External links==
*[https://github.com/commoncrawl/ Common Crawl GitHub Repository] with the crawler, libraries and example code
*[https://groups.google.com/forum/?fromgroups#!forum/common-crawl Common Crawl Discussion Group]
*[https://commoncrawl.org/connect/blog/ Common Crawl Blog]
 
[[Category:Internet-related organizations]]