Welcome to the Ultimate Guide to the Grabity API
Grabity is a Node.js library for extracting metadata, such as titles, descriptions, images, and Open Graph tags, from web pages. This guide introduces its most useful APIs, complete with code snippets and an application example.
Getting Started with Grabity
Installation is straightforward. Install Grabity from npm with the following command:
npm install grabity
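This adds Grabity to your project's dependencies, and your package.json ends up with an entry roughly like the one below (the version shown is illustrative, not a pinned recommendation):

{
  "dependencies": {
    "grabity": "^1.0.0"
  }
}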
Basic Usage
Below is a basic usage example that extracts metadata from a web page.
const grabity = require("grabity");

(async () => {
  // grabIt resolves with a summary of the page's metadata.
  const it = await grabity.grabIt("https://example.com");
  console.log(it);
})();
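For a typical page, the promise resolves with a plain object summarizing the page's metadata. The exact fields depend on what the page declares; the shape below is purely illustrative rather than a guaranteed schema:

{
  title: "Page title",
  description: "Page description",
  image: "https://example.com/preview.png",
  favicon: "https://example.com/favicon.ico"
}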
Advanced Features
Extracting Open Graph Data
Grabity also makes it easy to extract Open Graph data.
(async () => {
  // Options are passed as a second argument; ogOnly limits the result to Open Graph tags.
  const ogData = await grabity.grabIt("https://example.com", { ogOnly: true });
  console.log(ogData);
})();
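If you would rather not assume specific key names in the result, here is a minimal sketch that simply iterates over whatever fields come back, reusing the same call as above:

const grabity = require("grabity");

(async () => {
  const ogData = await grabity.grabIt("https://example.com", { ogOnly: true });
  // Print every returned field; the key names depend on the tags the page declares.
  for (const [key, value] of Object.entries(ogData)) {
    console.log(`${key}: ${value}`);
  }
})();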
Extracting Specific Metadata
If you are only interested in specific metadata, you can specify the fields you need.
(async () => {
  // Request only the metadata fields you care about.
  const metaTags = ["description", "keywords"];
  const options = { filter: metaTags };
  const metaData = await grabity.grabIt("https://example.com", options);
  console.log(metaData);
})();
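As an alternative that does not depend on any library options, you can fetch the full result and pick the fields you need yourself. The pick helper below is a local utility written for this sketch, not part of Grabity:

const grabity = require("grabity");

// Local helper: keep only the listed fields that actually exist on the object.
const pick = (obj, fields) =>
  Object.fromEntries(fields.filter((f) => f in obj).map((f) => [f, obj[f]]));

(async () => {
  const full = await grabity.grabIt("https://example.com");
  console.log(pick(full, ["title", "description"]));
})();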
Error Handling
Proper error handling is crucial in a production environment. Requests to unreachable hosts or malformed URLs cause the promise to reject, so wrap calls in try/catch.
(async () => {
  try {
    const data = await grabity.grabIt("https://example.com");
    console.log(data);
  } catch (error) {
    // Network or parsing failures land here.
    console.error("An error occurred:", error);
  }
})();
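Network requests can also hang rather than fail outright. The sketch below adds a simple timeout around grabIt using Promise.race; withTimeout is a local helper defined here, not a Grabity API:

const grabity = require("grabity");

// Local helper: reject if the wrapped promise does not settle within ms milliseconds.
const withTimeout = (promise, ms) =>
  Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms)
    ),
  ]);

(async () => {
  try {
    const data = await withTimeout(grabity.grabIt("https://example.com"), 5000);
    console.log(data);
  } catch (error) {
    console.error("An error occurred:", error.message);
  }
})();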
Application Example: Creating a Metadata Scraping App
Below is a small Express application that uses Grabity to scrape metadata from multiple websites in parallel.
const express = require("express");
const grabity = require("grabity");

const app = express();
const PORT = 3000;

app.get("/scrape", async (req, res) => {
  try {
    // Fetch metadata for all URLs in parallel.
    const urls = ["https://example.com", "https://anotherexample.com"];
    const results = await Promise.all(urls.map((url) => grabity.grabIt(url)));
    res.json(results);
  } catch (error) {
    res.status(500).send("An error occurred: " + error.message);
  }
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
With this application running, you can scrape metadata for the hard-coded URLs by navigating to http://localhost:3000/scrape in your web browser.
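To scrape arbitrary pages instead of a hard-coded list, you can add a route that reads the target from a query parameter. This is a sketch layered on the app above; the /scrape-one path and the url parameter are names chosen for this example, not part of Grabity:

app.get("/scrape-one", async (req, res) => {
  const url = req.query.url;
  if (!url) {
    return res.status(400).send("Missing required query parameter: url");
  }
  try {
    const data = await grabity.grabIt(url);
    res.json(data);
  } catch (error) {
    res.status(500).send("An error occurred: " + error.message);
  }
});

With the server running, you can try it from the command line, for example: curl "http://localhost:3000/scrape-one?url=https://example.com".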