Production-ready web scraping. Out of the box.
CrawleeOne wraps Crawlee with everything production scrapers need -- data transforms, privacy compliance, error tracking, caching, and more -- in a single function call. Write the extraction logic. CrawleeOne handles the rest.
Works seamlessly with Apify, but the storage backend is pluggable -- you're not locked in.
```sh
npm install crawlee-one
```

```ts
import { crawleeOne } from 'crawlee-one';

await crawleeOne({
  type: 'cheerio',
  routes: {
    mainPage: {
      match: /example\.com\/home/i,
      handler: async (ctx) => {
        const { $, pushData, pushRequests } = ctx;
        await pushData([{ title: $('h1').text() }], {
          privacyMask: { author: true },
        });
        await pushRequests([{ url: 'https://example.com/page/2' }]);
      },
    },
    otherPage: {
      match: (url, ctx) => url.startsWith('/') && ctx.$('.author').length > 0,
      handler: async (ctx) => {
        /* ... */
      },
    },
  },
});
```

That's it. No `Actor.main()` boilerplate, no manual router setup, no input wiring. CrawleeOne handles initialization, routing, input resolution, error handling, and teardown.
Replace 100+ lines of Actor + Router + input boilerplate with a single crawleeOne() call.
Go from cheerio to playwright by changing one prop. Your route handlers stay the same.
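This works because the handler context type follows the `type` prop. Here is a rough, self-contained sketch of the pattern (the names `makeCrawler`, `CheerioCtx`, and `PlaywrightCtx` are illustrative, not CrawleeOne internals):

```typescript
// Sketch of crawler-type-dependent context typing via a discriminated generic.
type CheerioCtx = { $: (selector: string) => { text(): string } };
type PlaywrightCtx = { page: { title(): Promise<string> } };

type CtxFor<T extends 'cheerio' | 'playwright'> =
  T extends 'cheerio' ? CheerioCtx : PlaywrightCtx;

// The `type` prop fixes T, which in turn types the handler's context.
function makeCrawler<T extends 'cheerio' | 'playwright'>(config: {
  type: T;
  handler: (ctx: CtxFor<T>) => void;
}) {
  return config;
}

const cheerio = makeCrawler({
  type: 'cheerio',
  handler: (ctx) => ctx.$('h1').text(), // TypeScript knows ctx.$ exists here
});

const playwright = makeCrawler({
  type: 'playwright',
  handler: (ctx) => ctx.page.title(), // ...and ctx.page exists here
});
```

Swapping `type` changes which context the handler receives, so the handler bodies are the only place that needs to care about the difference.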
Users filter, transform, rename, and limit results via input config -- no code changes needed.
```json
{
  "outputPickFields": ["name", "email"],
  "outputRenameFields": { "photo": "media.photos[0].url" },
  "outputMaxEntries": 500,
  "outputFilter": "(entry) => entry.rating > 4.0"
}
```

Route handlers and context objects are typed based on your crawler type. TypeScript knows whether you have `ctx.page` or `ctx.$` -- no extra setup.
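Conceptually, these output options behave like a small pipeline over scraped entries. The sketch below is illustrative plain TypeScript, not CrawleeOne's implementation -- the library's actual ordering and `outputRenameFields` path handling may differ:

```typescript
// Illustrative sketch of filter -> pick -> limit over scraped entries.
type Entry = Record<string, unknown>;

interface OutputOptions {
  outputPickFields?: string[];
  outputMaxEntries?: number;
  outputFilter?: (entry: Entry) => boolean;
}

function applyOutputOptions(entries: Entry[], opts: OutputOptions): Entry[] {
  let result = entries;
  // Drop entries the user's filter rejects.
  if (opts.outputFilter) result = result.filter(opts.outputFilter);
  // Keep only the requested fields.
  if (opts.outputPickFields) {
    const fields = opts.outputPickFields;
    result = result.map((entry) =>
      Object.fromEntries(fields.map((field) => [field, entry[field]])),
    );
  }
  // Cap the number of entries.
  if (opts.outputMaxEntries != null) result = result.slice(0, opts.outputMaxEntries);
  return result;
}

const entries: Entry[] = [
  { name: 'A', email: 'a@x.com', rating: 4.5 },
  { name: 'B', email: 'b@x.com', rating: 3.0 },
];
const out = applyOutputOptions(entries, {
  outputFilter: (e) => (e.rating as number) > 4.0,
  outputPickFields: ['name', 'email'],
  outputMaxEntries: 500,
});
// out: [{ name: 'A', email: 'a@x.com' }]
```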
Mark fields as personal data. CrawleeOne redacts them automatically when includePersonalData is off.
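The redaction idea can be sketched as a function over a field mask. This is illustrative only -- the `'<redacted>'` placeholder and the function shape are assumptions, not CrawleeOne's actual output:

```typescript
// Sketch: redact fields flagged in the privacy mask unless personal data is allowed.
type Entry = Record<string, unknown>;

function maskPersonalData(
  entry: Entry,
  privacyMask: Record<string, boolean>,
  includePersonalData: boolean,
): Entry {
  if (includePersonalData) return entry; // user opted in; pass through untouched
  const masked = { ...entry };
  for (const field of Object.keys(privacyMask)) {
    if (privacyMask[field] && field in masked) masked[field] = '<redacted>';
  }
  return masked;
}

const entry = { title: 'Post', author: 'Jane Doe' };
const safe = maskPersonalData(entry, { author: true }, false);
// safe.author is '<redacted>'; safe.title is unchanged
```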
Only process entries you haven't seen before. Built-in cache with KeyValueStore tracks what's been scraped across runs.
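The idea can be sketched with an in-memory stand-in for the KeyValueStore-backed cache (`SeenCache` and `filterNew` are illustrative names, not CrawleeOne's API):

```typescript
// Sketch of incremental scraping: skip items whose key was seen in a prior run.
class SeenCache {
  private seen = new Set<string>();

  // Return only items not yet seen, and remember them for next time.
  filterNew<T>(items: T[], keyOf: (item: T) => string): T[] {
    const fresh = items.filter((item) => !this.seen.has(keyOf(item)));
    for (const item of fresh) this.seen.add(keyOf(item));
    return fresh;
  }
}

const cache = new SeenCache();
const run1 = cache.filterNew([{ id: '1' }, { id: '2' }], (e) => e.id);
// run1 contains both entries
const run2 = cache.filterNew([{ id: '2' }, { id: '3' }], (e) => e.id);
// run2 contains only id '3' -- id '2' was already seen
```

In CrawleeOne the "seen" set is persisted via a KeyValueStore, so it survives across runs rather than living in memory.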
Failed requests are saved to a dataset automatically. Plug in Sentry with one line, or implement your own telemetry.
Regex, functions, or both. CrawleeOne auto-routes unlabeled requests to the right handler.
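The matching logic amounts to trying each route's matcher in order. A minimal sketch (CrawleeOne's real matchers also receive the crawling context, as shown in the example above; `pickRoute` is an illustrative name):

```typescript
// Sketch: route an unlabeled URL to the first route whose matcher accepts it.
type Matcher = RegExp | ((url: string) => boolean);

interface Route {
  name: string;
  match: Matcher;
}

function pickRoute(url: string, routes: Route[]): string | undefined {
  for (const route of routes) {
    const matched =
      route.match instanceof RegExp ? route.match.test(url) : route.match(url);
    if (matched) return route.name;
  }
  return undefined; // no handler claims this URL
}

const routes: Route[] = [
  { name: 'mainPage', match: /example\.com\/home/i },
  { name: 'otherPage', match: (url) => url.includes('/page/') },
];

const a = pickRoute('https://example.com/home', routes); // 'mainPage'
const b = pickRoute('https://example.com/page/2', routes); // 'otherPage'
```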
What CrawleeOne replaces
With CrawleeOne:
```ts
await crawleeOne({
  type: 'cheerio',
  routes: {
    mainPage: {
      match: /example\.com\/home/i,
      handler: async (ctx) => {
        const data = [
          /* ... */
        ];
        await ctx.pushData(data, { privacyMask: { author: true } });
        await ctx.pushRequests([{ url: 'https://...' }]);
      },
    },
  },
});
```

Without CrawleeOne (vanilla Crawlee + Apify):
```ts
import { Actor } from 'apify';
import { CheerioCrawler, createCheerioRouter } from 'crawlee';

await Actor.main(async () => {
  const rawInput = await Actor.getInput();
  const input = {
    ...rawInput,
    ...(await fetchInput(rawInput.inputFromUrl)),
    ...(await runFunc(rawInput.inputFromFunc)),
  };

  const router = createCheerioRouter();

  router.addHandler('mainPage', async (ctx) => {
    await onBeforeHandler(ctx);
    const data = [
      /* ... */
    ];
    const finalData = await transformAndFilterData(data, ctx, input);
    const dataset = await Actor.openDataset(input.datasetId);
    await dataset.pushData(finalData);

    const reqs = ['https://...'].map((url) => ({ url }));
    const finalReqs = await transformAndFilterReqs(reqs, ctx, input);
    const queue = await Actor.openRequestQueue(input.requestQueueId);
    await queue.addRequests(finalReqs);
    await onAfterHandler(ctx);
  });

  router.addDefaultHandler(async (ctx) => {
    await onBeforeHandler(ctx);
    const url = ctx.request.loadedUrl || ctx.request.url;
    if (url.match(/example\.com\/home/i)) {
      const req = { url, userData: { label: 'mainPage' } };
      const finalReqs = await transformAndFilterReqs([req], ctx, input);
      const queue = await Actor.openRequestQueue(input.requestQueueId);
      await queue.addRequests(finalReqs);
    }
    await onAfterHandler(ctx);
  });

  const crawler = new CheerioCrawler({ ...input, requestHandler: router });
  await crawler.run(['https://...']);
});
```

And that's far from everything -- the vanilla version still doesn't include data transforms, privacy masking, error tracking, caching, or input validation.
CrawleeOne scrapers support these out of the box, all configurable via input:
| Use case | What it does |
|---|---|
| Import URLs | Load URLs from databases, datasets, or custom functions. |
| Data transforms | Rename, select, limit, and reshape output without code changes. |
| Request filtering | Control what gets scraped to save time and money. |
| Caching | Incremental scraping -- only process new entries. |
| Privacy compliance | Redact personal data with a single toggle. |
| Error capture | Centralized error tracking across scrapers. |
```sh
npm install crawlee-one
```

- Read the getting started guide for a full walkthrough of `crawleeOne()` and its options.
- See example projects for real-world usage.
- Use `crawlee-one gen` to generate types, actor.json, actorspec.json, and README from a single config file.
Scrapers built with CrawleeOne are configurable by end users (via the Apify platform). Transform, filter, limit, and reshape scraped data and requests -- all through input fields, no code changes needed.
| Document | Description |
|---|---|
| Getting started | Developer guide with full crawleeOne() options reference. |
| Features | Complete feature catalog with code examples. |
| Use cases | All 12 use cases with links to detailed guides. |
| Input reference | All available input fields. |
| Deploying to Apify | Step-by-step Apify deployment guide. |
| Code generation | Generate types, actor.json, actorspec, and README from config. |
| Integrations | Custom telemetry and storage backends. |
| User guide | Guide for end users of CrawleeOne scrapers. |
| API reference | Auto-generated TypeScript API docs. |
| Crawlee & Apify overview | Background on how Crawlee and Apify work. |
- SKCRIS Scraper -- Slovak research database scraper.
- Profesia.sk Scraper -- Slovak job board scraper.
Found a bug or have a feature request? Please open an issue.
When contributing code, please fork the repo and submit a pull request. See CONTRIBUTING.md for dev setup and guidelines.
Want to build, test, or hack on CrawleeOne? The development guide covers prerequisites, all npm scripts, project structure, architecture, and testing strategy.
CrawleeOne is a labour of love. If you find it useful, you can support the project on Buy Me a Coffee.
