A local candidate is strongly preferred, as this position requires reporting onsite from day one.
Top Skills:
Confluent Kafka (Kafka SME), automation (Ansible, Python), hands-on experience deploying to AWS (Kubernetes), Linux
General Experience:
• Ability to troubleshoot and diagnose complex issues, including internal and external SaaS/PaaS and network flows.
• Demonstrated experience supporting technical users and conducting requirements analysis.
• Ability to work independently with minimal guidance and oversight.
• Familiarity with IT Service Management, including Incident & Problem Management.
• Skilled in identifying performance bottlenecks and anomalous system behavior, and in resolving the root causes of service issues.
• Proven ability to work effectively across teams and functions to influence the design, operations, and deployment of highly available software.
• Knowledge of standard practices related to security, performance, and disaster recovery.
• Advanced understanding of modern agile delivery practices, including CI/CD, application resiliency, and security.
Required Technical Expertise:
• Deep understanding of Kafka and its various components.
• Strong knowledge of Kafka Connect, KSQL, and KStreams.
• Experience designing and building secure Kafka/streaming/messaging platforms at enterprise scale and integrating them with other data systems in hybrid multi-cloud environments.
• Experience with Confluent Kafka, Confluent Cloud, Schema Registry, and KStreams.
• Proficiency in Infrastructure as Code (IaC) using tools like Terraform.
• Strong operational background running Kafka clusters at scale.
• Knowledge of physical/on-prem systems and public cloud infrastructure.
• Understanding of Kafka broker, connect, and topic tuning and architectures.
• Strong understanding of Linux fundamentals as they relate to Kafka performance.
• Background in both systems and software engineering.
• Experience with containers and Kubernetes clusters.
• Proven experience as a DevOps Engineer with a focus on AWS.
• Proficiency in AWS services such as EC2, IAM, S3, RDS, Lambda, EKS, and VPC.
• Working knowledge of networking, including VPCs, Transit Gateways, firewalls, load balancers, etc.
• Experience with monitoring and visualization tools such as Prometheus, Grafana, and Kibana.
• Competency in developing solutions in high-level languages such as Java and Python.
• Experience with configuration management in code/IaC, including Ansible and Terraform.
• Hands-on experience delivering complex software in an enterprise environment.
• 3+ years of experience with Python and shell scripting.
• 3+ years of AWS DevOps experience.
• Proficiency in distributed Linux environments.
Preferred Technical Experience: • Certification in Confluent Kafka and/or Kubernetes is a plus.