Are you a Database Admin extraordinaire? 🤓 TechTammina LLC is seeking a data guru to join our rockstar team! 🚀 If you live and breathe databases like MariaDB, MySQL, MSSQL, PostgreSQL, MongoDB, and Cassandra, this could be your dream gig! 💻

We're looking for someone who can:
> Manage and optimize our database systems like a boss 💪
> Collaborate with our DevOps team to implement best practices
> Work with cutting-edge technologies like AppianCloud

Sound like a perfect fit? Then what are you waiting for? 🔥 Drop your resume at [email protected] and let's take your database skills to new heights!

#NowHiring #WeAreHiring #DatabaseAdministrator #TechJobs #MariaDB #DataCareers #DevOps #AppianCloud #DatabaseAdmin #TechTammina
-
Actively looking for opportunities in Data Engineer roles | Big Data | Hadoop | Hive | PySpark | SQL | Python | Oracle | Power BI | Tableau | Cloud Computing | Kafka | Azure | AWS | currently open to work, available immediately
🚀 Seeking Exciting Opportunities as a Senior Data Engineer! 📊

Hello LinkedIn community, I hope this post finds you well. I am excited to announce that I am actively looking for new opportunities as a Senior Data Engineer on a corp-to-corp basis. 👨‍💼 With a strong background in data engineering and a proven track record of turning data into actionable insights, I am eager to take on new challenges and contribute my expertise to a forward-thinking organization.

Here's a glimpse of what I bring to the table:
📈 Expertise: A deep understanding of data pipelines, ETL processes, and data warehousing; I am well-versed in a variety of data technologies and cloud platforms.
🛠 Technical Skills: Proficient in tools like Python, SQL, Hadoop, Spark, and more, with hands-on experience building and optimizing data architectures.
🌐 Data-Driven Mindset: I believe in the power of data to drive informed decision-making and can translate business requirements into scalable data solutions.
🔗 Team Collaboration: I thrive in cross-functional teams and am adept at communicating complex technical concepts to non-technical stakeholders.
📊 Data Security & Compliance: I take data security and compliance seriously and ensure that data remains secure and meets all regulatory requirements.

🌟 If your organization needs a Senior Data Engineer who is passionate about leveraging data to create value, I'd love to connect and discuss potential opportunities. Feel free to reach out to me here on LinkedIn or via email at [[email protected]]. 🔗 Let's connect and explore how we can work together to drive data-driven success!

#c2c #corp2corp #uscontractjobs #corptocorp #contracttohire #CTH #remote #onsite #dataengineer #etl #aws #Azure #python #cloud #cloudservices #datalake #bigdata #hadoop #snowflake #databricks #datamodeling #mysql #postgres #apache #airflow #contract #dataengineerjobs #etldeveloper #pythondeveloper #bigdatadeveloper #opentorelocate #relocate #dataengineercontract #usitrecruiters #gc #greencard #hadoopdeveloper #TechJobs #Hiring #JobSearch #usrecruiter #ITrecruiter
-
Hello everyone, greetings of the day! We are #urgenthiring for the below position.

Role: Data Architect
Location: Remote
Type: Contract #C2C #W2 #uscitizens #greencard #independentcontractors
Please reach out to me at [email protected] or [email protected].

Job Description
• This position is a blend of senior architect and engineer: data migration of an on-prem relational database into a cloud data lakehouse.
• 8+ years of coding (Python, Java, Scala, or other programming languages), data warehousing, database systems (relational and NoSQL), data analysis, cloud computing (Azure), Big Data (Hadoop, HBase), PySpark, Azure Delta Lake, Databricks, DBT, and healthcare domain experience.
• Azure Data Factory, Databricks, SQL, data architecture, data modeling.

Skills:
• Programming Languages: Familiarity with languages like Python or Java for data analysis and application development.
• Data Modeling: Proficiency in creating logical and physical data models to represent business requirements and system architecture.
• Data Warehousing: Understanding of data warehousing concepts, including data storage, retrieval, and optimization.
• Data Security: Knowledge of data security best practices, encryption, and access controls.
• ETL (Extract-Transform-Load): Expertise in ETL processes and tools like DBT (Data Build Tool) [DBT Core] and DBT frameworks/packages:
  - dbt-labs/dbt_utils
  - calogica/dbt_date
  - dbt-labs/spark_utils
  - yu-iskw/dbt_unittest
  - mjirv/dbt_datamocktool
  - Datavault-UK/automate_dv
• Database Management Systems (DBMS): Proficiency in relational databases (e.g., Microsoft SQL Server) and NoSQL databases.
• Cloud Computing: Understanding of cloud platforms (e.g., Azure, AWS, GCP) and their data services.
• Delta Lake: Familiarity with Delta Lake, an open table format that combines data lake flexibility with ACID transactions.
• Medallion Architecture: Knowledge of the medallion architecture, which organizes data layers (Bronze, Silver, Gold) in a lakehouse.
• Data Vault: Understanding of the Data Vault methodology for scalable and flexible data warehousing.
• Azure Data Factory (ADF): Proficiency in using ADF for orchestrating data workflows, data movement, and transformation.
• Healthcare domain knowledge.

Saif Shekh Madhuri Chaudhari Jeet Kumar Akanksha P Sachin Tiwari Sunanda Pandey Ashis Kumar Singh Nitesh K. Laxmi Narayan Shivam Rajput Pooja Kumari Manisha Pal Sourabh Singh Chauhan
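Since the role centers on Delta Lake and the medallion architecture, here is a minimal PySpark sketch of a Bronze-to-Silver hop under those patterns. This is an illustrative sketch, not the employer's pipeline: the paths, the `raw_claims` source, and the `claim_id`/`claim_amount` columns are hypothetical, and it assumes a Spark environment with the delta-spark package available.

```python
# Minimal medallion (Bronze -> Silver) sketch with Delta Lake.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("medallion-sketch")
    # Register Delta Lake's SQL extension and catalog (delta-spark package).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: land the raw source as-is, plus an ingestion timestamp.
bronze = (
    spark.read.json("/landing/raw_claims/")        # hypothetical source path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/lakehouse/bronze/claims")

# Silver: deduplicated, typed records built from Bronze.
silver = (
    spark.read.format("delta").load("/lakehouse/bronze/claims")
    .dropDuplicates(["claim_id"])                  # hypothetical business key
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
)
silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/claims")
```

A Gold layer would typically follow the same pattern, aggregating Silver tables into consumption-ready marts.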
-
#linkedinconnection #linkedinfamily #hiringimmediately

Data Engineer with AWS Redshift (Mandatory)
Harrisburg, PA (Onsite)
H4-EAD/GC-EAD/GC/USC
12 Months, C2C
Send resumes to [email protected] for an immediate response.

As an AWS Redshift Data Engineer, the primary responsibility is to design, implement, and maintain data solutions using Amazon Redshift. The ideal candidate should possess the following skills:

Data Modeling and Design: Develop and maintain data models for Redshift databases, including schema design, table structures, and optimization techniques. Collaborate with data architects and stakeholders to understand requirements and translate them into efficient data structures.

ETL Development: Design and implement Extract, Transform, Load (ETL) processes to extract data from various sources, transform it per business requirements, and load it into Redshift. Develop efficient and scalable ETL workflows, considering data quality, performance, and data governance.

Performance Optimization: Optimize query performance by creating appropriate data distribution keys, sort keys, and compression techniques. Identify and resolve performance bottlenecks, fine-tune queries, and optimize data processing to enhance Redshift's performance.

Data Integration: Integrate Redshift with other AWS services, such as AWS Glue, AWS Lambda, Amazon S3, and more, to build end-to-end data pipelines. Ensure seamless data flow between different systems and platforms, maintaining data integrity and consistency.

Monitoring and Troubleshooting: Implement monitoring and alerting systems to proactively identify issues and ensure the stability and availability of the Redshift cluster. Perform troubleshooting, diagnose and resolve data-related issues, and work closely with support teams to resolve any performance or operational problems.

Security and Compliance: Implement security best practices to protect data stored in Redshift. Ensure compliance with data privacy regulations and industry standards, such as GDPR and HIPAA. Implement encryption, access controls, and data masking techniques to secure sensitive data.

Documentation and Collaboration: Maintain documentation of data models, ETL processes, and system configurations. Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and provide data solutions that meet their needs.

Scalability and Capacity Planning: Plan and execute strategies for scaling Redshift clusters to handle increasing data volumes and user demands. Monitor resource utilization, track data growth, and make recommendations for capacity planning and infrastructure scaling.

Knowledge of or previous experience in Oracle PL/SQL is an added advantage.

#dataengineer #dataengineerjobs #usitrecruitment #c2crequirements #usitstaffing #immediatehiring #itandsoftware #usaitjobs #corp2corp
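To ground the "distribution keys, sort keys, and compression" requirement, here is a minimal sketch of what that design work looks like in practice. It is an illustrative example only: the `sales_fact` table, its columns, the S3 bucket, the IAM role, and the cluster endpoint are all hypothetical, and it assumes the psycopg2 driver with network access to a Redshift cluster.

```python
# Sketch: create a Redshift fact table with an explicit DISTKEY, SORTKEY,
# and column encodings, then bulk-load it from S3 via COPY.
# Every identifier and credential below is a hypothetical placeholder.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT        ENCODE az64,
    customer_id BIGINT        ENCODE az64,
    sale_date   DATE          ENCODE az64,
    amount      DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows joined on customer_id across slices
SORTKEY (sale_date);    -- date-range predicates can skip unneeded blocks
"""

COPY = """
COPY sales_fact
FROM 's3://example-bucket/sales/2024/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="example",
)
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute(DDL)   # define distribution, sort order, and compression
    cur.execute(COPY)  # parallel bulk load from S3
conn.close()
```

The key design choice is matching the DISTKEY to the most common join column and the SORTKEY to the most common filter column, which is exactly what the "Performance Optimization" bullet above is asking a candidate to reason about.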
-
Recruitment Specialist - IT/Non-IT / Powering Your Success with Job Placement & HR Consultancy | QlikView | Qlik Sense | NPrinting | Power BI | Mashup | SQL | SSRS | SSIS | GCP | GLOVIA G2 | Python | AWS | VAPT
#Hiring #HiringKafka #SeniorKafkaArchitect #ArchitecturalComponents #Storm

Hello folks, we are hiring for the position of Senior Kafka Architect; please find the JD below.

Job Title: Senior Kafka Architect
Location: Bangalore
Experience: 8 to 12 years
Skill Set: Kafka and Storm
Duration: Full Time

MUST-HAVE SKILLS:
• Kafka
• Storm
• Architectural components: brokers, ZooKeeper, producers/consumers

JOB DESCRIPTION:
Looking for a senior Kafka engineer/architect with an established track record in Kafka technology, with hands-on production experience and a deep understanding of the Kafka architecture and the internals of how it works, along with the interplay of architectural components: brokers, ZooKeeper, producers/consumers, etc.

SOME OF THE KEY RESPONSIBILITIES ARE LISTED BELOW:
• Architecture Design: Design Kafka-based data streaming architectures that meet scalability, fault-tolerance, and performance requirements.
• Cluster Sizing: Determine the appropriate cluster size, partitioning, and replication factors based on workload and data volume.
• Topic Modeling: Define Kafka topic schemas, naming conventions, and access control policies for efficient data organization.
• Security Planning: Implement security measures like SSL/TLS encryption, SASL authentication, and ACLs to protect data and Kafka infrastructure.
• High Availability: Ensure Kafka clusters are highly available and fault-tolerant by configuring replication, leader election, and monitoring.
• Disaster Recovery: Plan and implement disaster recovery strategies to ensure data integrity in case of cluster failures.
• Performance Optimization: Fine-tune Kafka configurations, monitor performance metrics, and optimize resource utilization for efficient data processing.
• Monitoring and Alerting: Set up monitoring tools and alerting systems to detect issues, track performance, and respond to anomalies.
• Load Testing: Conduct load testing and benchmarking to validate the Kafka architecture's performance and scalability.
• Cost Management: Optimize resource usage to control operational costs while maintaining performance and reliability.
• Training and Support: Provide training to other team members and support staff on Kafka best practices and troubleshooting.
• Documentation: Maintain comprehensive documentation of Kafka configurations, data flows, and architecture diagrams.
• Scaling Strategies: Develop strategies for horizontally and vertically scaling Kafka clusters based on changing workloads.
• Hands-On Experience: Hands-on experience as a developer who has used the Kafka API to build producer and consumer applications/components, and the ability to build libraries/frameworks for common use cases so that teams reuse code rather than writing new code.
• Message Formats: Strong familiarity with message formats such as XML, JSON, CSV, etc.

Recruiters: If you are interested, please drop an e-mail to [email protected].
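For the hands-on producer/consumer requirement above, here is a minimal sketch using the kafka-python client. The broker address, `orders` topic, consumer group name, and JSON payload are hypothetical placeholders, and production code would add error handling, schemas, and security configuration (SSL/SASL) as the JD describes.

```python
# Minimal Kafka producer/consumer sketch using the kafka-python client.
# Broker list, topic, group id, and payload are illustrative only.
import json
from kafka import KafkaProducer, KafkaConsumer

BROKERS = ["localhost:9092"]   # hypothetical broker list
TOPIC = "orders"               # hypothetical topic

# Producer: serialize dicts to JSON and require full ISR acknowledgement.
producer = KafkaProducer(
    bootstrap_servers=BROKERS,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",                # durability over latency
)
producer.send(TOPIC, {"order_id": 42, "status": "created"})
producer.flush()               # block until the broker has the record

# Consumer: join a consumer group and read from the earliest offset.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKERS,
    group_id="orders-audit",   # hypothetical consumer group
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.offset, message.value)
    break                      # demo: stop after the first record
```

Wrapping exactly this kind of boilerplate (serializers, acks, offset policy) into a shared library is what the "reuse code rather than writing new code" bullet is pointing at.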
-
#hiring *Lead Engineer - Big Data Platform/Infra (Hadoop, Spark Streaming, Druid)*, Minneapolis, *United States*, $196K, fulltime #jobs #jobseekers #careers $196K #Minneapolisjobs #Minnesotajobs #ITCommunications *Apply*: https://lnkd.in/gQw-sMG6

Location: 7000 Target Pkwy N, Brooklyn Park, Minnesota, United States, 55445. The pay range is $109,000.00 - $196,200.00. Pay is based on several factors which vary based on position. These include labor markets and in some instances may include education, work experience and certifications. In addition to your pay, Target cares about and invests in you as a team member, so that you can take care of yourself and your family. Target offers eligible team members and their dependents comprehensive health benefits and programs, which may include medical, vision, dental, life insurance and more, to help you and your family take care of your whole selves. Other benefits for eligible team members include 401(k), employee discount, short term disability, long term disability, paid sick leave, paid national holidays, and paid vacation. Find competitive benefits from financial and education to well-being and beyond at .

JOIN US AS A LEAD ENGINEER - BIG DATA PLATFORM

About us: As a Fortune 50 company with more than 350,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Working at Target means the opportunity to help all families discover the joy of everyday life. Caring for our communities is woven into who we are, and we invest in the places we collectively live, work and play. We prioritize relationships, fuel and develop talent by creating growth opportunities, and succeed as one Target team. At our core, our purpose is ingrained in who we are, what we value, and how we work. It's how we care, grow, and win together.

The Target High Performance Distributed Computing team creates the platforms and tools to enable our business partners to make data-based decisions at Target. This team helps to manage hardware and software for large scale distributed computing, frequently angling towards data analytics and Artificial Intelligence/Machine Learning type applications. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We're also the source of the data and analytics behind Target's Supply Chain optimization, fraud detection, demand forecasting (DFE) and metrics to support our stores. We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether they love to shop in stores or online.

As a Lead Engineer, you serve as the technical anchor for the engineering team that supports a product. You create, own and are responsible for the application and platform architecture that best serves the product in its functional and non-functional requirements.
-
Senior Data Engineer at INTEGRIS Health || Skilled in Azure, AWS, GCP, ETL, Big Data, Python, Hadoop, Spark, SQL, Snowflake, Databricks, PySpark || Actively looking for new opportunities.
🔍🚀 Experienced Data Engineer Seeking Exciting C2C Opportunities! 🔍🚀

🔍📊 Are you in search of a seasoned Data Engineer with over 10 years of hands-on experience? Look no further! With over a decade of hands-on experience in the dynamic field of data engineering, I am excited to explore new career opportunities. I am currently seeking Corp-to-Corp (C2C) engagements where I can leverage my expertise to drive impactful data-driven solutions.

💼 About Me:
🔹 10+ years of proven experience in designing, implementing, and maintaining scalable data pipelines.
🔹 Hands-on experience with cloud platforms like Azure, AWS, and GCP for building data solutions.
🔹 Proficient in a variety of programming languages including Python, SQL, and Java.
🔹 Extensive knowledge of big data technologies such as Hadoop, Spark, and Kafka.
🔹 Skilled in data warehousing, ETL processes, and data modeling techniques.
🔹 Strong problem-solving abilities with a keen eye for detail and efficiency.

📧 Let's Connect: If you're in search of a seasoned data engineer to join your team on a contract basis, I'd love to discuss how my skills and experience align with your needs. Feel free to reach out via LinkedIn message 📩 or 📫 email at [email protected].

Thanks & Regards,
Jaya Chandu Mandava

#DataEngineer #DataAnalytics #ContractToContract #C2C #DataPipeline #CloudComputing #BigData #ETL #DataArchitecture #AWS #Azure #GCP #SQL #NoSQL #kafka #spark #MachineLearning #DataScience #Scala #hadoop #HDFS #mapreduce #databricks #database #ec2 #s3 #bigquery #snowflake #python #agile #scrum #redshift #lambda #synapse #azurefunctions #azurecloud #azuredevops #awsdataengineer #gcpdataengineer #gcpengineer #mongodb #cassandra
-
#hiring *Sr Engineer - Big Data Infra (Hadoop, Spark, Linux, Java)*, Minneapolis, *United States*, $151K, fulltime #jobs #jobseekers #careers $151K #Minneapolisjobs #Minnesotajobs #ITCommunications *Apply*: https://lnkd.in/gQEXgmu2

Location: 7000 Target Pkwy N, Brooklyn Park, Minnesota, United States, 55445. The pay range is $83,800.00 - $150,800.00. Pay is based on several factors which vary based on position. These include labor markets and in some instances may include education, work experience and certifications. In addition to your pay, Target cares about and invests in you as a team member, so that you can take care of yourself and your family. Target offers eligible team members and their dependents comprehensive health benefits and programs, which may include medical, vision, dental, life insurance and more, to help you and your family take care of your whole selves. Other benefits for eligible team members include 401(k), employee discount, short term disability, long term disability, paid sick leave, paid national holidays, and paid vacation. Find competitive benefits from financial and education to well-being and beyond at .

JOIN US AS A SR ENGINEER - BIG DATA INFRA ENGINEERING (HADOOP, SPARK, DRUID)

About Us: As a Fortune 50 company with more than 350,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Working at Target means the opportunity to help all families discover the joy of everyday life. Caring for our communities is woven into who we are, and we invest in the places we collectively live, work and play. We prioritize relationships, fuel and develop talent by creating growth opportunities, and succeed as one Target team. At our core, our purpose is ingrained in who we are, what we value, and how we work. It's how we care, grow, and win together.

The Target High Performance Distributed Computing team creates the platforms and tools to enable our business partners to make great data-based decisions at Target. This team helps to manage hardware and software for large scale distributed computing, frequently angling towards data analytics and Artificial Intelligence/Machine Learning type applications. We help develop the technology that personalizes the guest experience, from product recommendations to relevant ad content. We're also the source of the data and analytics behind Target's Supply Chain optimization, fraud detection, demand forecasting and metrics to support our stores. We play a key role in identifying the test-and-measure or A/B test opportunities that continuously help Target improve the guest experience, whether they love to shop in stores or at Target.com.

As a Senior Engineer, High Performance Distributed Computing, you'll have the opportunity to create software solutions using Agile practices and DevOps principles. Your responsibilities will include designing, programming, debugging and supporting…
https://www.jobsrmine.com/us/minnesota/minneapolis/sr-engineer-big-data-infra-hadoop-spark-linux-java/461256499