This Chicago-based financial services company has continued to grow over the past year and is adding a Data Engineer to its team. The company is building a product that uses machine learning to support better financial decisions, and this engineer will work on the backend API and the data pipelines feeding the data lake. The ideal candidate has 5+ years of Python development experience and has built custom ETL pipelines in Python; additional experience with Spark, PySpark, Kafka, or Apache Beam is a strong plus. This is a hybrid role, so the candidate must be located in the Chicago metro area.
Requirements:
5+ years of data engineering experience with strong Python coding skills
Experience working in a cloud and microservices environment with GCP, AWS, or Azure
Experience with ETL tools such as Apache Beam, Kafka, or Spark
Experience working with ML engineers to move models into production
Strong SQL skills; PostgreSQL experience is ideal
Bachelor's degree in Computer Science, Computer Engineering, Mathematics, or a related engineering field
Nice to Have:
Experience building APIs and working with Flask, Django, or FastAPI
Specific experience with GCP, BigQuery, and Elasticsearch
Kafka and Spark are strongly preferred
This role cannot accommodate C2C arrangements or visa sponsorship at this time.