We are seeking an experienced ETL Architect to design, implement, and manage ETL solutions that ensure seamless data integration across enterprise systems. The ideal candidate will have a strong technical background, a deep understanding of data management principles, and expertise in transforming complex business requirements into scalable ETL frameworks.
Key Responsibilities
Architecture Design:
Develop and document scalable ETL architectures for data integration and transformation processes.
Design data flow processes that align with enterprise data strategy and business needs.
ETL Development:
Build, test, and optimize ETL processes using tools such as Informatica, Talend, Apache NiFi, Microsoft SSIS, or equivalent.
Implement robust error handling, logging, and performance monitoring mechanisms.
Data Integration:
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
Integrate data from various sources, including on-premises databases, cloud platforms, and third-party APIs.
Performance Optimization:
Monitor and enhance ETL performance to meet SLAs and handle large datasets efficiently.
Troubleshoot bottlenecks and resolve issues in data workflows.
Governance and Standards:
Ensure compliance with data governance policies, security standards, and regulatory requirements.
Establish best practices and documentation for ETL processes.
Team Collaboration:
Provide technical guidance to ETL developers and other team members.
Act as a subject matter expert in ETL architecture and data integration technologies.
Qualifications and Skills
Required:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Proven experience as an ETL Architect or in a similar role.
Expertise in Informatica 10.5, UNIX, mainframe scheduling, database view creation, stored procedures, and database tuning.
Strong knowledge of SQL, database systems (e.g., Oracle, SQL Server, PostgreSQL), and data modeling.
Proficiency in programming or scripting languages (e.g., Python, Java, Shell scripting).
Experience with cloud platforms (AWS, Azure, GCP) and associated data services (e.g., AWS Glue, Azure Data Factory).
Solid understanding of data warehousing, data lakes, and real-time data processing.
Preferred:
Familiarity with Big Data frameworks (e.g., Hadoop, Spark).
Certification in ETL tools or cloud data services.
Experience with Agile/Scrum methodologies.
Strong problem-solving and analytical skills.
Key Competencies
Strong communication and interpersonal skills.
Ability to handle multiple priorities in a fast-paced environment.
Detail-oriented with a focus on data accuracy and quality.
Team leadership and mentoring capabilities.
Benefits
Competitive salary and performance-based incentives
Comprehensive health and wellness benefits
Opportunities for career development and growth
Access to the latest technologies and resources for continued professional development