Overall Experience: 5.5 years of experience in the IT industry, covering the Snowflake database, Hadoop, and Linux application support. Highly skilled Hadoop Administrator with extensive knowledge and strong abilities in administering large data clusters in big data environments; highly analytical, with excellent problem-solving skills.
PROFILE SUMMARY:
Hadoop & Snowflake DBA
HCL TECHNOLOGIES: MARCH 2019 – Till date
- Knowledge of Snowflake cloud technology.
- Knowledge of Snowflake multi-cluster warehouse sizing and credit usage.
- Experience creating Snowflake virtual warehouses and granting the Usage, Monitor, and Operate privileges on them.
- Experience creating Snowflake databases, schemas, and table structures, working with the users.
- Experience creating users and service accounts with AWS public keys or basic authentication.
- Good knowledge of disabling users and resetting passwords for users inactive for more than 90 days.
- Experience creating Snowflake roles and cloud security roles and adding users to roles.
- Experience granting read and write privileges on current and future objects at the schema level.
- Good knowledge of creating inbound and outbound databases with the account admin role.
- Worked with the application and modeling teams to create tables and secured views in the databases.
- Worked with application teams to create internal stages and AWS external stages and apply privileges to the specified roles.
- Good knowledge of data sharing from one Snowflake account to another.
- Good knowledge of loading and unloading data into and out of tables via internal and external stages.
- Experience creating file formats in order to load data.
- Data replication from PostgreSQL to Snowflake using AWS services.
- Performance tuning of Snowflake databases.
- Deploying objects to production databases using CI/CD pipelines.
- Fixing data issues while replicating data into Snowflake.
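The warehouse creation and privilege grants described above boil down to a handful of Snowflake SQL statements; a minimal Python sketch that only builds those statements is shown below. The warehouse, role, and database names (REPORTING_WH, ANALYST_ROLE, SALES_DB) are hypothetical examples, not taken from the resume.

```python
# Sketch of the Snowflake warehouse-and-privileges setup described above.
# All object names here are hypothetical illustrations.

def warehouse_grant_statements(warehouse: str, role: str) -> list[str]:
    """Build statements that create a virtual warehouse and grant a role
    the USAGE, MONITOR, and OPERATE privileges on it."""
    return [
        f"CREATE WAREHOUSE IF NOT EXISTS {warehouse} "
        f"WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE",
        f"GRANT USAGE, MONITOR, OPERATE ON WAREHOUSE {warehouse} TO ROLE {role}",
    ]

def schema_read_write_statements(database: str, schema: str, role: str) -> list[str]:
    """Build statements that grant read/write on current and future tables
    at the schema level, as described above."""
    target = f"{database}.{schema}"
    return [
        f"GRANT USAGE ON SCHEMA {target} TO ROLE {role}",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA {target} TO ROLE {role}",
        f"GRANT SELECT, INSERT, UPDATE, DELETE ON FUTURE TABLES IN SCHEMA {target} TO ROLE {role}",
    ]

if __name__ == "__main__":
    for stmt in warehouse_grant_statements("REPORTING_WH", "ANALYST_ROLE"):
        print(stmt)
    for stmt in schema_read_write_statements("SALES_DB", "PUBLIC", "ANALYST_ROLE"):
        print(stmt)
```

In practice these statements would be executed through a Snowflake session (for example via the Snowflake Python connector) by a role with the necessary privileges.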
Hadoop Admin:
FLYTXT: OCT 2017 – JAN 2019
Have 2.5 years of experience in Database Administration and Hadoop Administration in the production support domain.
- Good knowledge of Hadoop clusters; experienced in monitoring the cluster.
- Creating profiles, attaching them to users, and monitoring the users.
- Flexible with a 24x7 environment, including on-call support as required.
- Involved in change creation: Normal, Expedite, and Emergency changes.
- Maintaining cluster health and HDFS space for better performance.
- Responsible for commissioning and decommissioning nodes in clusters.
- Experienced in managing and reviewing Hadoop log files.
- Experienced in troubleshooting and resolving server-related tickets in production and development.
- Monitoring file system usage and cleaning up unwanted data.
- Responsible for installing and configuring Hadoop clusters (for large data sets) on Linux systems.
- Worked with multiple teams on the design and implementation of Hadoop clusters.
- Responsible for day-to-day activities, including HDFS support and maintenance, cluster maintenance, creation and removal of nodes, cluster monitoring and troubleshooting, managing and reviewing Hadoop log files, backup and restore, and capacity planning.
- Worked on setting up NameNode high availability and designed automatic failover using ZooKeeper and quorum journal nodes.
- Configured property files such as core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml based on job requirements.
- Monitored scheduled jobs (daily, weekly, monthly, and backup jobs) and investigated the logs of failed jobs to find the root cause.
- Involved in benchmarking Hadoop cluster file systems with various batch jobs and workloads.
- Responsible for operating system and Hadoop cluster monitoring using tools such as Nagios, Ganglia, and Ambari.
- Involved in minor and major upgrades of Hadoop and the Hadoop ecosystem.
- Involved in troubleshooting issues across the Hadoop ecosystem, with an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
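The NameNode high-availability setup with quorum journal nodes mentioned above is driven by a few hdfs-site.xml properties; a minimal sketch follows, assuming a nameservice called mycluster with hosts nn1/nn2, jn1–jn3, and zk1–zk3 (all hypothetical). Note that ha.zookeeper.quorum actually belongs in core-site.xml.

```xml
<!-- hdfs-site.xml: NameNode HA with quorum journal nodes (host names hypothetical) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- goes in core-site.xml: ZooKeeper ensemble used by the failover controllers -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
```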