In this article, we will see how to start a crawler in the AWS Glue Data Catalog using Python.
Example
Problem Statement: Use the boto3 library in Python to start a crawler in the AWS Glue Data Catalog.
Approach/Algorithm to solve this problem
Step 1: Import boto3 and botocore exceptions to handle exceptions.
Step 2: crawler_name is the required parameter in this function. It is the name of the crawler that should be started.
Step 3: Create an AWS session using the boto3 library. Make sure region_name is mentioned in the default profile. If it is not mentioned, explicitly pass the region_name while creating the session, as shown in the sketch after these steps.
Step 4: Create an AWS client for Glue.
Step 5: Now use the start_crawler function and pass the parameter crawler_name as Name.
Step 6: It returns the response metadata and starts the crawler irrespective of its schedule. If the crawler is already running, it throws CrawlerRunningException.
Step 7: Handle the generic exception if something goes wrong while starting the crawler.
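Regarding Step 3, if the default profile does not specify a region, the region can be passed explicitly when creating the session. The following is a minimal sketch of that variant; 'us-east-1' is only a placeholder region, not a requirement −

import boto3

# Sketch: pass the region explicitly when it is not set in the
# default profile. 'us-east-1' here is only a placeholder value.
session = boto3.session.Session(region_name='us-east-1')
glue_client = session.client('glue')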
Example Code
The following code starts a crawler in AWS Glue Data Catalog −
import boto3
from botocore.exceptions import ClientError

def start_a_crawler(crawler_name):
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      # Start the crawler; Name is the only required parameter
      response = glue_client.start_crawler(Name=crawler_name)
      return response
   except ClientError as e:
      raise Exception("boto3 client error in start_a_crawler: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in start_a_crawler: " + e.__str__())

#1st time start the crawler
print(start_a_crawler("Data Dimension"))

#2nd time run, before crawler completes the operation
print(start_a_crawler("Data Dimension"))
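The code above maps every ClientError to a generic exception. As a variation, boto3 also exposes the modeled Glue exceptions on the client, so the CrawlerRunningException from Step 6 can be caught on its own. The sketch below shows that alternative; the function name is illustrative and not part of the original example −

def start_crawler_if_ready(glue_client, crawler_name):
   # Sketch: catch the modeled exception directly instead of the
   # generic ClientError. glue_client is the client created above.
   try:
      return glue_client.start_crawler(Name=crawler_name)
   except glue_client.exceptions.CrawlerRunningException:
      print("Crawler " + crawler_name + " is already running")
      return None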
Output
#1st time start the crawler
{'ResponseMetadata': {'RequestId': '73e50130-*****************8e', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Sun, 28 Mar 2021 07:26:55 GMT', 'content-type': 'application/x-amz-json-1.1', 'content-length': '2', 'connection': 'keep-alive', 'x-amzn-requestid': '73e50130-***************8e'}, 'RetryAttempts': 0}}

#2nd time run, before crawler completes the operation
Exception: boto3 client error in start_a_crawler: An error occurred (CrawlerRunningException) when calling the StartCrawler operation: Crawler with name Data Dimension has already started
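To avoid the CrawlerRunningException shown in the second run, the crawler's state can be checked with get_crawler before calling start_crawler again. The following is a minimal sketch assuming the glue_client from the example above; the polling delay is an arbitrary choice for illustration −

import time

def wait_until_ready(glue_client, crawler_name, delay=10):
   # Sketch: poll get_crawler until the crawler reports READY,
   # then it is safe to start it again.
   while True:
      crawler = glue_client.get_crawler(Name=crawler_name)['Crawler']
      if crawler['State'] == 'READY':
         return
      time.sleep(delay)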