Example: Get the details of a crawler, crawler_for_s3_file_job.
Approach/Algorithm to solve this problem
Step 1 − Import boto3 and the botocore.exceptions module to handle errors.
Step 2 − crawler_names is the mandatory parameter. It is a list, so a user can send multiple crawler names at a time to fetch their details.
Step 3 − Create an AWS session using the boto3 library. Make sure region_name is set in the default profile; if it is not, explicitly pass region_name while creating the session (a short sketch follows these steps).
Step 4 − Create an AWS client for Glue.
Step 5 − Now use the batch_get_crawlers function and pass the crawler_names.
Step 6 − It returns the metadata of crawlers.
Step 7 − Handle the generic exception if something goes wrong while fetching the crawler details.
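As mentioned in Step 3, the session accepts the region as a keyword argument when the default profile does not define one. A minimal sketch, where 'us-east-1' is only a placeholder region (substitute your own):

import boto3

# 'us-east-1' is an example value; use the region your Glue
# resources actually live in.
session = boto3.session.Session(region_name='us-east-1')
glue_client = session.client('glue')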
Example
Use the following code to fetch the details of a crawler −
import boto3
from botocore.exceptions import ClientError

def get_crawler_details(crawler_names: list):
   # Create a session; region_name is read from the default profile
   session = boto3.session.Session()
   glue_client = session.client('glue')
   try:
      # Fetch metadata for all requested crawlers in one call
      crawler_details = glue_client.batch_get_crawlers(CrawlerNames=crawler_names)
      return crawler_details
   except ClientError as e:
      raise Exception("boto3 client error in get_crawler_details: " + e.__str__())
   except Exception as e:
      raise Exception("Unexpected error in get_crawler_details: " + e.__str__())

# Pass the crawler names as a list, not a string
print(get_crawler_details(["crawler_for_s3_file_job"]))
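batch_get_crawlers returns a dictionary with a Crawlers list (metadata for each crawler that was found) and a CrawlersNotFound list (names that matched no crawler). A minimal sketch of reading the response, reusing the function above:

# Print the name and state of each crawler that was found,
# plus any names that did not resolve to a crawler.
details = get_crawler_details(["crawler_for_s3_file_job"])
for crawler in details['Crawlers']:
   print(crawler['Name'], crawler.get('State'))
print("Not found:", details['CrawlersNotFound'])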