Web Scraping Basics: Your First API Call

FastWebScraper Team · 2 min read


This guide walks you through the core concepts of using the FastWebScraper API. By the end, you'll understand authentication, request parameters, response handling, and HTML parsing.

Prerequisites

  • A FastWebScraper account (sign up free)
  • Your API key (found in the dashboard under Settings)
  • A development environment for your language of choice

Authentication

Every API request requires your API key in the X-API-Key header:

X-API-Key: your_api_key_here

Your API key identifies your account and tracks your usage. Keep it secret — never commit it to version control or expose it in client-side code.
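A common way to keep the key out of your source tree is to read it from an environment variable at startup. A minimal Python sketch — the `FASTWEBSCRAPER_API_KEY` variable name is this example's convention, not part of the API:

```python
import os

def get_api_key(env_var: str = "FASTWEBSCRAPER_API_KEY") -> str:
    """Read the API key from the environment instead of hard-coding it.

    The variable name is just a convention used here; pick any name
    that fits your deployment. Raises if the variable is missing or empty.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set the {env_var} environment variable")
    return key

# Usage: headers = {"X-API-Key": get_api_key(), "Content-Type": "application/json"}
```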

Making Your First Request

The FastWebScraper API offers two scraping modes:

Mode  | Endpoint               | Behavior
Async | POST /v1/scrape/async  | Returns a job ID immediately. Poll for results.
Sync  | POST /v1/scrape/sync   | Waits for the scrape to complete and returns the result.

Use sync for simple one-off scrapes and testing. Use async for production workloads and batch scraping.

Sync Request Example

JavaScript:

const response = await fetch('https://api.fastwebscraper.com/v1/scrape/sync', {
  method: 'POST',
  headers: {
    'X-API-Key': 'YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com',
    mode: 'auto',
  }),
});

const result = await response.json();
console.log(result.data.html); // Full page HTML
Python:

import requests

response = requests.post(
    'https://api.fastwebscraper.com/v1/scrape/sync',
    headers={
        'X-API-Key': 'YOUR_API_KEY',
        'Content-Type': 'application/json',
    },
    json={
        'url': 'https://example.com',
        'mode': 'auto',
    }
)

result = response.json()
print(result['data']['html'])  # Full page HTML
C#:

using System.Net.Http.Json;
using System.Text.Json;

using var client = new HttpClient();
client.DefaultRequestHeaders.Add("X-API-Key", "YOUR_API_KEY");

var request = new { url = "https://example.com", mode = "auto" };
var response = await client.PostAsJsonAsync(
    "https://api.fastwebscraper.com/v1/scrape/sync", request);

var result = await response.Content.ReadFromJsonAsync<JsonElement>();
var html = result.GetProperty("data").GetProperty("html").GetString();
Console.WriteLine(html); // Full page HTML

Async Request Example

For the async flow, you submit a job and then poll for the result:

// Step 1: Submit the scrape job
const submitResponse = await fetch('https://api.fastwebscraper.com/v1/scrape/async', {
  method: 'POST',
  headers: {
    'X-API-Key': 'YOUR_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com',
    mode: 'auto',
  }),
});

const { data } = await submitResponse.json();
const jobId = data.jobId;
console.log('Job submitted:', jobId);

// Step 2: Poll for the result
let job;
do {
  await new Promise(resolve => setTimeout(resolve, 2000)); // Wait 2 seconds
  const statusResponse = await fetch(
    `https://api.fastwebscraper.com/v1/jobs/${jobId}`,
    { headers: { 'X-API-Key': 'YOUR_API_KEY' } }
  );
  job = await statusResponse.json();
} while (job.data.status === 'PENDING' || job.data.status === 'IN_PROGRESS');

if (job.data.status === 'COMPLETED') {
  console.log('HTML:', job.data.html);
} else {
  console.error('Job failed:', job.data.error);
}
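The same submit-then-poll loop can be written generically with a timeout cap, so a stuck job doesn't poll forever. In this Python sketch, `fetch_status` stands in for whatever function performs the GET /v1/jobs/{jobId} call and returns its `data` object — only the polling logic itself is shown:

```python
import time

TERMINAL_STATES = {"COMPLETED", "FAILED"}

def poll_job(fetch_status, interval_s=2.0, timeout_s=120.0, sleep=time.sleep):
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status: callable returning the job's `data` dict, e.g. the
    parsed body of GET /v1/jobs/{jobId} -> {"status": ..., "html": ...}.
    Raises TimeoutError if the job doesn't finish within timeout_s.
    """
    waited = 0.0
    while True:
        data = fetch_status()
        if data["status"] in TERMINAL_STATES:
            return data
        if waited >= timeout_s:
            raise TimeoutError(f"job still {data['status']} after {timeout_s}s")
        sleep(interval_s)
        waited += interval_s
```

Injecting `sleep` as a parameter keeps the loop testable without real delays.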

Request Parameters

Parameter       | Type   | Required | Description
url             | string | Yes      | The URL to scrape
mode            | string | No       | auto, http, browser, browser_stealth, or http_stealth. Default: auto
country         | string | No       | Two-letter country code for geo-targeted requests (e.g., US, GB, DE)
waitForSelector | string | No       | CSS selector to wait for before capturing HTML
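A small helper can assemble the request body from these parameters, omitting any optional ones left unset so the API applies its own defaults. The helper itself is illustrative, not part of an official client:

```python
def build_scrape_payload(url, mode=None, country=None, wait_for_selector=None):
    """Build the JSON body for POST /v1/scrape/sync or /v1/scrape/async.

    Only `url` is required; optional parameters are left out entirely
    when not provided, so the API falls back to its defaults (mode=auto).
    """
    payload = {"url": url}
    if mode is not None:
        payload["mode"] = mode
    if country is not None:
        payload["country"] = country
    if wait_for_selector is not None:
        payload["waitForSelector"] = wait_for_selector
    return payload
```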

Scraping Modes

  • auto: Smart selection based on domain history (recommended)
  • http: Fast HTTP requests for simple pages (1 credit)
  • browser: Full browser rendering for SPAs (7 credits)
  • browser_stealth: Browser with anti-detection (10 credits)
  • http_stealth: HTTP with anti-bot bypass (15 credits)
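Since each explicit mode has a fixed credit cost, you can estimate a batch's spend up front. A sketch using the costs listed above — auto is excluded because its per-request cost depends on which underlying mode the smart selection picks:

```python
# Credit costs per request, taken from the mode list above.
CREDIT_COST = {
    "http": 1,
    "browser": 7,
    "browser_stealth": 10,
    "http_stealth": 15,
}

def estimate_credits(pages: int, mode: str) -> int:
    """Estimate total credits for scraping `pages` URLs with a fixed mode.

    `auto` is deliberately absent: its cost varies per request depending
    on which underlying mode is selected for each domain.
    """
    return pages * CREDIT_COST[mode]
```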

Wait For Selector

JavaScript-heavy sites (React, Vue, Angular) load content dynamically. Use waitForSelector to wait until specific elements appear:

{
  "url": "https://spa-site.com/products",
  "mode": "browser",
  "waitForSelector": ".product-card"
}

Without this parameter, you might get the HTML shell before the actual content loads.

Response Structure

Successful Response

{
  "success": true,
  "data": {
    "jobId": "clx1234567890",
    "status": "COMPLETED",
    "url": "https://example.com",
    "html": "<html>...</html>",
    "statusCode": 200
  }
}

Error Response

{
  "success": false,
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid URL format"
  }
}
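Whichever language you use, check the `success` flag before reading `data`. A minimal Python sketch that turns the error envelope into an exception — the `ScrapeError` class is this example's invention, not part of the API:

```python
class ScrapeError(Exception):
    """Raised when the API returns success: false. Illustrative only."""

    def __init__(self, code, message):
        super().__init__(f"{code}: {message}")
        self.code = code

def unwrap(body: dict) -> dict:
    """Return body['data'] on success; raise ScrapeError otherwise."""
    if body.get("success"):
        return body["data"]
    err = body.get("error", {})
    raise ScrapeError(err.get("code", "UNKNOWN"), err.get("message", ""))
```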

Parsing HTML

Once you have the HTML, use a parsing library to extract data.

Node.js with cheerio

import * as cheerio from 'cheerio';

const $ = cheerio.load(html);

// Extract text content
const title = $('h1').text();
const paragraphs = $('p').map((_, el) => $(el).text()).get();

// Extract attributes
const images = $('img').map((_, el) => ({
  src: $(el).attr('src'),
  alt: $(el).attr('alt'),
})).get();

// Extract table data (each row's cells are wrapped in an extra array
// so .map() doesn't flatten them into one long list)
const rows = $('table tr').map((_, row) => {
  return [$(row).find('td').map((_, cell) => $(cell).text()).get()];
}).get();

Python with BeautifulSoup

from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'html.parser')

# Extract text content
title = soup.find('h1').text
paragraphs = [p.text for p in soup.find_all('p')]

# Extract attributes
images = [{'src': img['src'], 'alt': img.get('alt', '')}
          for img in soup.find_all('img', src=True)]

# Extract table data
rows = []
for row in soup.find_all('tr'):
    cells = [cell.text.strip() for cell in row.find_all('td')]
    if cells:
        rows.append(cells)

C# with AngleSharp

using System.Linq;
using AngleSharp.Html.Parser;

var parser = new HtmlParser();
var document = await parser.ParseDocumentAsync(html);

// Extract text content
var title = document.QuerySelector("h1")?.TextContent;
var paragraphs = document.QuerySelectorAll("p")
    .Select(p => p.TextContent).ToList();

// Extract attributes
var images = document.QuerySelectorAll("img[src]")
    .Select(img => new {
        Src = img.GetAttribute("src"),
        Alt = img.GetAttribute("alt") ?? ""
    }).ToList();

// Extract table data
var rows = document.QuerySelectorAll("tr")
    .Select(row => row.QuerySelectorAll("td")
        .Select(cell => cell.TextContent.Trim()).ToList())
    .Where(cells => cells.Count > 0).ToList();

Next Steps