Senior Integration Engineer (Kafka, Confluent, Kubernetes, AWS) Job Description
We're seeking a seasoned Integration Engineer with deep expertise in Apache Kafka, Confluent
Platform, Kubernetes, and cloud technologies, specifically Amazon Web Services (AWS) EKS. In this
pivotal role, you'll design, develop, and maintain the robust, scalable integration solutions that form
the backbone of our data-driven systems. You'll act as the bridge between disparate systems and
technologies, ensuring seamless data flow and maximizing the reliability of our platforms.
Location
Bangalore, India
Job Profile
Responsibilities:
● Architect and Implement Kafka-based Integrations: Design and build highly available and
performant integration solutions leveraging Apache Kafka and Confluent Platform.
● Kubernetes Orchestration and Deployment: Package, deploy, and manage Kafka and Confluent
components on Kubernetes, including within AWS EKS environments.
● Streamline Data Pipelines: Develop event-driven architectures and maintain real-time streaming
pipelines for efficient data ingestion, processing, and analytics.
● AWS Integration and Troubleshooting: Proactively troubleshoot AWS EKS clusters, containers, and
related integration issues, ensuring high availability and platform stability.
● Collaboration and Best Practices: Work closely with development teams to understand integration
needs, define strategies, and enforce best practices for integration methodologies.
● Security and Compliance: Ensure the security and compliance of Kafka clusters and data pipelines,
adhering to industry standards and regulations.
● Performance Tuning and Scalability: Monitor and optimize Kafka clusters for peak performance,
ensuring scalability and high throughput.
● Messaging and Event Brokering: Implement messaging and publish/subscribe patterns using Kafka
topics, partitions, and consumer groups.
● Data Transformation and Enrichment: Apply data transformation techniques to cleanse, enrich,
and prepare data for downstream consumption.
● Monitoring and Observability: Set up monitoring and observability tools (e.g., Prometheus,
Grafana, Jaeger) to track Kafka metrics and troubleshoot issues.
● Disaster Recovery and High Availability: Design and implement disaster recovery and high-availability
strategies for Kafka clusters to ensure business continuity.
● Interoperability and Integration: Integrate Kafka with other systems and technologies, such as
databases, NoSQL stores, and message queues, to enable seamless data exchange.
Requirements:
Educational Background
BE / B.Tech / MS in Computer Science, Information Technology, or an equivalent degree in a related
discipline.
About SMC Squared
Dallas-based SMC Squared provides US companies with IT talent to accelerate innovation, develop
digital products and services, or simply get basic operational work done. Our GIC approach helps CIOs
minimize risk and cost while improving quality and retention.
Whether clients need a small team of 5 developers or 500 engineers in a dedicated facility, our
employee-based approach builds teams of top technology talent with the same mission, vision, and
values as their US counterparts. We recruit for fit rather than assign for need, creating a common
culture, establishing stronger control, and radically improving resource retention.
Our leadership team has walked in your shoes, with more than 70 collective years working alongside
talent in the US and India. We bring a people-first approach and a culture of respect that attract and
retain high-quality, Bangalore-based talent.