At Impetus Technologies, we drive innovation and deliver cutting-edge solutions to our clients. We are hiring an experienced Big Data Engineer with a strong focus on GCP/Azure/AWS to join our team in Phoenix, AZ. The ideal candidate will have extensive experience with Hadoop, Spark (Batch/Streaming), Hive, and Shell scripting, along with solid programming skills in Java or Scala. A deep understanding of and hands-on experience with GCP/Azure/AWS are critical for this role.
Qualifications:
Proven experience with Hadoop, Spark (Batch/Streaming), and Hive (an illustrative sketch of this core skill set follows this list).
Proficiency in Shell scripting and programming languages such as Java and/or Scala.
Strong hands-on experience with GCP/Azure/AWS and a deep understanding of the chosen platform's services and tools.
Ability to design, develop, and deploy big data solutions in a GCP/Azure/AWS environment.
Experience migrating data systems to GCP/Azure/AWS.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Strong communication skills to effectively collaborate with team members and stakeholders.
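For illustration, a minimal sketch of the kind of Spark batch work this role involves: reading a Hive table, aggregating, and writing the result back as a partitioned table. The database, table, and column names (analytics.events, user_id, event_date) are hypothetical.

```scala
// Illustrative sketch only; table and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventCounts {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark read and write managed Hive tables directly.
    val spark = SparkSession.builder()
      .appName("daily-event-counts")
      .enableHiveSupport()
      .getOrCreate()

    // Read a Hive table, count events per user per day, and write the
    // result back as a table partitioned by date.
    spark.table("analytics.events")
      .groupBy(col("user_id"), col("event_date"))
      .agg(count("*").as("event_count"))
      .write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_event_counts")

    spark.stop()
  }
}
```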
Responsibilities:
Development: Design and develop scalable big data solutions using Hadoop, Spark, Hive, and GCP/Azure/AWS services (see the streaming sketch after this list).
Design: Architect and implement big data pipelines and workflows optimized for GCP/Azure/AWS, ensuring efficiency, security, and reliability.
Deployment: Deploy big data solutions on GCP/Azure/AWS, leveraging best practices for cloud-based environments.
Migration: Lead the migration of existing data systems to GCP/Azure/AWS, ensuring a smooth transition with minimal disruption and optimal performance.
Collaboration: Work closely with cross-functional teams to integrate big data solutions with other cloud-based services and align them with business goals.
Optimization: Continuously optimize big data solutions on GCP/Azure/AWS to improve performance, scalability, and cost-efficiency.
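And a minimal Structured Streaming sketch of the pipeline work described above, assuming a hypothetical bucket layout and a simplified event schema; the gs:// paths used here would be s3:// on AWS or abfss:// on Azure.

```scala
// Illustrative sketch only; bucket paths and schema are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object ClickstreamIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("clickstream-ingest")
      .getOrCreate()

    // Streaming file sources require an explicit schema by default
    // (inference is off unless spark.sql.streaming.schemaInference is set).
    val schema = StructType(Seq(
      StructField("user_id", StringType),
      StructField("url", StringType),
      StructField("ts", TimestampType)
    ))

    // Stream newline-delimited JSON as it lands in cloud storage and write
    // it out as Parquet; the checkpoint directory gives exactly-once output.
    spark.readStream
      .schema(schema)
      .json("gs://example-bucket/clickstream/raw/")
      .withColumn("event_date", to_date(col("ts")))
      .writeStream
      .format("parquet")
      .option("path", "gs://example-bucket/clickstream/parquet/")
      .option("checkpointLocation", "gs://example-bucket/clickstream/_checkpoints/")
      .partitionBy("event_date")
      .start()
      .awaitTermination()
  }
}
```

A job like this would typically be packaged as a jar and submitted via spark-submit to the platform's managed Spark service, such as Dataproc on GCP, EMR on AWS, or HDInsight/Databricks on Azure.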