
Senior Engineer Integration Kafka Confluent Kubernetes AWS Job Description

AWS Job description

Uploaded by

Raveendra Kumar

Integration Engineer (Kafka, Confluent, Kubernetes, AWS)

We're seeking a seasoned Integration Engineer with deep expertise in Apache Kafka, Confluent
Platform, Kubernetes, and cloud technologies, specifically Amazon Web Services (AWS) EKS. In this
pivotal role, you'll design, build, and maintain robust, scalable integration solutions that form the
backbone of our data-driven systems. You'll act as the bridge between disparate systems and
technologies, ensuring seamless data flow and maximizing the reliability of our platforms.

Location
Bangalore, India

Job Profile
Your work will involve the following responsibilities:

● Architect and Implement Kafka-based Integrations: Design and build highly available and
performant integration solutions leveraging Apache Kafka and Confluent Platform.
● Kubernetes Orchestration and Deployment: Package, deploy, and manage Kafka and Confluent
components on Kubernetes, including within AWS EKS environments.
● Streamline Data Pipelines: Develop event-driven architectures and real-time data pipelines to
enable efficient data processing and analytics.
● AWS Integration and Troubleshooting: Proactively troubleshoot AWS EKS clusters, containers, and
related integration issues, ensuring high availability and platform stability.
● Collaboration and Best Practices: Work closely with development teams to understand integration
needs, define strategies, and enforce best practices for integration methodologies.
● Real-time Data Processing: Develop and maintain streaming pipelines for real-time data ingestion,
processing, and analytics.
● Security and Compliance: Ensure the security and compliance of Kafka clusters and data pipelines,
adhering to industry standards and regulations.
● Performance Tuning and Scalability: Monitor and optimize Kafka clusters for peak performance,
ensuring scalability and high throughput.
● Messaging and Event Brokerage: Implement messaging patterns and event-driven architectures
using Kafka topics, partitions, and consumers.
● Data Transformation and Enrichment: Apply data transformation techniques to cleanse, enrich,
and prepare data for downstream consumption.
● Monitoring and Observability: Set up monitoring and observability tools (e.g., Prometheus,
Grafana, Jaeger) to track Kafka metrics and troubleshoot issues.
● Disaster Recovery and High Availability: Design and implement disaster recovery and high
availability strategies for Kafka clusters to ensure continuous availability.
● Interoperability and Integration: Integrate Kafka with other systems and technologies, such as
databases, NoSQL stores, and message queues, to enable seamless data exchange.
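As a rough illustration of the messaging bullet above (the sketch is not part of the role description), key-based partition assignment is what keeps events for the same entity in order: a producer hashes each record key to a partition, so one consumer in the group sees all events for that key in sequence. The example below mimics this with a simple stand-in hash; real Kafka producers use a murmur2-based partitioner, and the topic size and keys here are invented for the example.

```python
# Illustrative sketch of key-based partitioning, as used by Kafka producers.
# NOTE: real Kafka uses a murmur2 hash; hashlib.md5 is a stand-in here so the
# example stays self-contained and deterministic.
import hashlib

NUM_PARTITIONS = 6  # hypothetical topic size, chosen for the example


def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition deterministically."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions


# All events for one entity (e.g. one order ID) share a key, so they land in
# the same partition and keep their relative order for a single consumer.
events = [("order-42", "created"), ("order-7", "created"), ("order-42", "paid")]
placement = {key: partition_for(key) for key, _ in events}
```

Because the mapping is deterministic, scaling consumers only requires rebalancing partitions, never re-sorting data.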

Requirements:

● Proven Experience: Minimum [5-7+] years of hands-on integration engineering experience,
including a strong track record with Apache Kafka and Confluent Platform.
● Kubernetes Mastery: In-depth knowledge of Kubernetes concepts, resource management, and
deployment strategies. Solid AWS EKS and container experience is a major plus.
● Cloud Agility: Proficiency in AWS services, particularly EKS, EC2, S3, and other relevant
technologies.
● Integration Mindset: A deep understanding of integration patterns, data transformation
techniques (ETL/ELT), and API protocols (REST, SOAP, etc.).
● Troubleshooting Acumen: Excellent diagnostic and problem-solving skills, with the ability to
quickly isolate and resolve issues in complex distributed systems.
● Nice to Have: Experience with configuration management tools (Terraform, Ansible), CI/CD
pipelines, and monitoring/observability tools.
● Languages: .NET Core and Python are the languages used for the microservices being monitored.
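As a small sketch of the ETL/ELT data-transformation skill listed above (illustrative only, not part of the role), a typical pipeline step cleanses invalid records, normalizes fields, and enriches each record with derived metadata. The field names below are hypothetical, chosen just for the example.

```python
# Minimal cleanse-and-enrich step of the kind an ETL/ELT pipeline might run.
# Field names are hypothetical, chosen only for this example.
from datetime import datetime, timezone
from typing import Optional


def transform(record: dict) -> Optional[dict]:
    """Drop invalid records, normalize fields, and add derived metadata."""
    if not record.get("user_id"):  # cleanse: reject records missing a key field
        return None
    return {
        "user_id": str(record["user_id"]).strip(),                # normalize
        "amount_cents": int(round(float(record.get("amount", 0)) * 100)),
        "processed_at": datetime.now(timezone.utc).isoformat(),   # enrich
    }


raw = [{"user_id": " u1 ", "amount": "12.5"}, {"amount": "3.0"}]
clean = [t for r in raw if (t := transform(r)) is not None]
```

The same shape applies whether the step runs inside a Kafka Streams topology, a consumer loop, or a batch job: validate, normalize, enrich, and pass clean records downstream.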

Educational Background
BE / B Tech / MS in Computer Science, Information Technology, or equivalent degree in related
discipline.

About SMC Squared

Dallas-based SMC Squared provides US companies with IT talent to accelerate innovation, develop
digital products and services, or simply get basic operational work done. Our GIC (Global In-house
Center) approach helps CIOs minimize risk and cost while improving quality and retention.

Whether clients need a small team of 5 developers or 500 engineers in a dedicated facility, our
employee-based approach builds dedicated teams of top technology talent with the same mission,
vision, and values as their US counterparts. We recruit-for-fit rather than assign-for-need, thus
creating a common culture, establishing stronger control, and radically improving resource retention.

Our leadership team has walked in your shoes, working alongside talent in the US and India for
more than 70 collective years. We bring a people-first approach and a culture of respect that
attracts and retains high-quality Bangalore-based talent.

In doing so, we provide:


• US leadership that coaches leaders and managers through organizational change, both in the US
and in India.
• A 1-to-1 FTE productivity guarantee compared to US employees.
• A unique build-optimize-transition model (“rent to own”) that eliminates start-up risks for
companies seeking to access overseas talent.
• Transparent pricing with mutually agreed-upon total cost of ownership targets, saving clients on
average 40% of existing costs.

For more details, please visit https://www.smc2.com
