Client is looking for an innovative software engineering lead who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities (think prediction as a service, matching as a service, or forecasting as a service) backed by a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights drawn from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.
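To make the delivery model concrete, here is a minimal, purely illustrative sketch of how a consumer might call one of these services. The endpoint, payload fields, and auth scheme below are hypothetical assumptions for illustration, not the client's actual interface:

```python
import requests  # third-party HTTP library

# Hypothetical base URL; the real Analytic Foundation hosts, paths, and
# auth scheme are not specified in this posting.
BASE_URL = "https://api.example.com/analytic-foundation/v1"

def get_forecast(merchant_id: str, horizon_weeks: int, token: str) -> dict:
    """Request a spend forecast from a forecasting-as-a-service endpoint."""
    response = requests.post(
        f"{BASE_URL}/forecast",
        json={"merchant_id": merchant_id, "horizon_weeks": horizon_weeks},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"weekly_spend": [...], "model_version": "..."}
```

Each capability (prediction, matching, forecasting) would expose a similar endpoint over the shared data store, which is what makes the capabilities individually commercializable.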
Engineers work in small, flexible teams. Every team member contributes to designing, building, and testing features. The work ranges from building intuitive, responsive UIs to designing backend data models, architecting data flows, and beyond. There are no rigid organizational structures, and each team uses the processes that work best for its members and projects.
Here are a few examples of products in our space:
Portfolio Optimizer (PO) is a solution that leverages Client's data assets and analytics to allow issuers to identify and increase revenue opportunities within their credit and debit portfolios.
Audiences uses anonymized and aggregated transaction insights to offer targeting segments with a high likelihood of purchasing within a category, enabling more effective campaign planning and activation.
Credit Risk products are a new suite of APIs and tooling that give lenders real-time access to KPIs and insights built on Client data, serving thousands of clients as they make smarter risk decisions.
Help found a new, fast-growing engineering team!
The client seeks a dynamic software engineering lead to spearhead the technical design and development of the Analytic Foundation. Proficiency in Python and/or Scala is required, along with hands-on experience in the Hadoop ecosystem (Hive, Impala, NiFi, Oozie, and Sqoop) and Databricks.
1. Core Technical Skills:
Programming Languages: Strong command of Python and/or Scala is essential for data processing, scripting, and building automation pipelines.
Big Data Technologies: Experience with the Hadoop ecosystem (Hive, Impala, NiFi, Oozie, and Sqoop) and Databricks is necessary for large-scale data storage, processing, and analysis.
SQL Expertise: High proficiency in SQL to pull, analyze, and manage data within a large-scale analytics environment (a sketch follows this list).
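As a rough illustration of how these three skills combine in practice, here is a minimal PySpark sketch; the database, table, and column names are invented for this example and assume a Hive metastore is available, as on a typical Hadoop or Databricks deployment:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("weekly-kpi-example")
    .enableHiveSupport()  # lets Spark SQL resolve Hive metastore tables
    .getOrCreate()
)

# SQL over a hypothetical Hive table of transactions: aggregate weekly
# spend per merchant category, the kind of KPI an analytics API exposes.
weekly_kpis = spark.sql("""
    SELECT merchant_category,
           weekofyear(txn_date) AS week,
           SUM(amount)          AS total_spend,
           COUNT(*)             AS txn_count
    FROM analytics.transactions
    GROUP BY merchant_category, weekofyear(txn_date)
""")

# Persist the aggregate back to the warehouse for downstream services.
weekly_kpis.write.mode("overwrite").saveAsTable("analytics.weekly_kpis")
```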
2. Cloud Infrastructure & Data Management:
Ability to work within distributed computing frameworks and experience with cloud-based data platforms to deploy scalable, efficient data solutions.
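As one hedged sketch of what scalable, efficient data solutions can look like on a cloud data platform (the bucket paths and column names here are invented; the client's actual storage layer is not specified in the posting), a batch job might read raw data from object storage and write it back partitioned by day so that downstream reads stay selective:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-transactions-example").getOrCreate()

# Hypothetical object-storage paths (S3 shown; ADLS/GCS work the same way).
raw = spark.read.parquet("s3a://example-bucket/raw/transactions/")

# Partitioning by day lets the cluster write in parallel and lets readers
# prune partitions instead of scanning the full history.
(raw
 .withColumn("txn_day", F.to_date("txn_timestamp"))
 .write
 .partitionBy("txn_day")
 .mode("overwrite")
 .parquet("s3a://example-bucket/curated/transactions/"))
```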
3. Data & Services Application Understanding:
Experience developing data-driven applications, ideally in payment processing, finance, or analytics services; this experience is central to the role's purpose of supporting business decisions through data insights.
4. Collaboration & Self-Sufficiency:
Strong communication skills and a collaborative spirit to work effectively with cross-functional teams, especially given the distributed team structure across geographies.
Self-direction and initiative, as the role requires the ability to troubleshoot and resolve issues independently.
5. Nice-to-Have Skills:
Familiarity with tools like Git and Jenkins for version control and continuous integration.
Knowledge of monitoring and alerting solutions (such as Splunk) would be advantageous for supporting system reliability and security.