Who We Are
At KLS Martin, we offer a unique opportunity to contribute to the success of a dynamic and thriving company whose products are used daily across the world to help surgical patients.
The KLS Martin Group is a worldwide leader in creating surgical solutions for the craniomaxillofacial and cardiothoracic fields. Surgical innovation is our passion, and we are constantly working with surgeons to improve surgical care for their patients. Our product portfolio includes titanium and resorbable implants for reconstruction, innovative distraction devices to stimulate bone lengthening, over 4,000 surgical instruments, and other surgical products designed specifically for CMF and cardiothoracic surgeons.
KLS Martin is an innovative leader in the treatment of CMF deformities and trauma cases. With our proprietary Individual Patient Solutions (IPS) products, CT scans are used to custom design implants created specifically for each individual patient. This technology allows our surgeons to provide best-in-class treatment for their patients.
KLS Martin Guiding Principles
Established, Privately Held Business Group - Responsive to customers, not shareholders. KLS Martin has manufactured medical products since 1896, and we have sold our products in the United States under the KLS name since 1993. We have always been, and always will be, privately owned.
Patient Focus - We design products with the patient in mind - CMF, Thoracic & Hand
Product to Table - Integrated planning, design, manufacturing and distribution process
Educational Partner - Our primary focus for support is on education
Inventory Alliance - Inventory management is critical to patient treatment/outcome
Surgical Innovation is Our Passion - More than just a tagline
What We Offer
We provide full-time employees with a competitive benefits package, including paid parental leave
In-house training and professional development opportunities
A culture of creativity that draws on diverse perspectives and ideas to drive surgical innovation
Job Summary
This position is responsible for designing, building, and maintaining the data infrastructure and systems required for collecting, storing, and analyzing large data sets. You will work closely with data analysts, ETL developers, and other stakeholders to create a data architecture supporting high-visibility data pipelines and products.
Essential Functions, Duties, and Responsibilities
Data Infrastructure Development:
Design, construct, install, test, and maintain highly scalable data management systems.
Develop and maintain efficient data pipelines for ETL (Extract, Transform, Load) processes (see the sketch after this list).
Optimize data storage and retrieval processes for performance and reliability.
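For illustration only, a minimal sketch of the kind of ETL pipeline this duty describes. It assumes a hypothetical orders.csv source with order_id, product, and amount columns, and uses a local SQLite file as a stand-in for the warehouse; none of these names come from KLS Martin's actual systems.

    import csv
    import sqlite3

    def extract(path):
        # Read raw rows from a CSV source file.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def transform(rows):
        # Normalize types and drop rows missing a primary key.
        cleaned = []
        for row in rows:
            if not row.get("order_id"):
                continue
            cleaned.append((row["order_id"], row["product"].strip(), float(row["amount"])))
        return cleaned

    def load(records, db_path="warehouse.db"):
        # Load transformed records into a staging table.
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS stg_orders (order_id TEXT, product TEXT, amount REAL)")
        con.executemany("INSERT INTO stg_orders VALUES (?, ?, ?)", records)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("orders.csv")))  # orders.csv is a hypothetical source file

In practice the same extract/transform/load structure would be expressed in whatever orchestration and warehouse tooling the team uses; the point of the sketch is only the separation of steps.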
Data Warehousing:
Design and optimize data warehouse architecture to support the storage and retrieval of structured and unstructured data.
Implement data partitioning, indexing, and optimization techniques to enhance query performance.
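As a rough illustration of partitioning for query performance (a sketch only, not a description of KLS Martin's environment): writing a dataset as date-partitioned Parquet lets downstream engines prune partitions instead of scanning everything. The column names are hypothetical, and writing partitioned Parquet requires the pyarrow package.

    import pandas as pd

    # Hypothetical sales data; in practice this would come from the warehouse load.
    df = pd.DataFrame({
        "order_date": ["2024-01-05", "2024-01-06", "2024-02-01"],
        "region": ["US", "EU", "US"],
        "amount": [120.0, 75.5, 310.0],
    })
    df["order_month"] = pd.to_datetime(df["order_date"]).dt.to_period("M").astype(str)

    # Partition the Parquet output by month so queries filtered on order_month
    # only read the matching directories (partition pruning).
    df.to_parquet("sales_parquet", partition_cols=["order_month"], index=False)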
Database Management:
Create and maintain data structures and database schemas that support data warehousing and business intelligence (see the sketch after this list).
Implement and manage database solutions using SQL and NoSQL databases.
Ensure data quality, integrity, and security across all databases.
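A minimal sketch of the kind of star-style schema this duty refers to, using SQLite purely as a stand-in; the table and column names are illustrative, not the actual business model.

    import sqlite3

    con = sqlite3.connect("warehouse.db")

    # A small dimensional model: one fact table keyed to two dimension tables.
    con.executescript("""
    CREATE TABLE IF NOT EXISTS dim_product (
        product_key INTEGER PRIMARY KEY,
        product_name TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key INTEGER PRIMARY KEY,
        calendar_date TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_key INTEGER REFERENCES dim_product(product_key),
        date_key INTEGER REFERENCES dim_date(date_key),
        amount REAL NOT NULL
    );
    -- Index the foreign keys that BI queries typically filter and join on.
    CREATE INDEX IF NOT EXISTS ix_fact_sales_product ON fact_sales(product_key);
    CREATE INDEX IF NOT EXISTS ix_fact_sales_date ON fact_sales(date_key);
    """)
    con.commit()
    con.close()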
Data Integration:
Integrate data from various sources into the data warehouse or data lake.
Work with APIs to extract data from external systems (see the sketch after this list).
Collaborate with software engineers to integrate data processing and storage solutions within the broader technical architecture.
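For the API-extraction duty above, a minimal sketch of pulling paginated records from a hypothetical REST endpoint. The URL, query parameters, and response shape are assumptions for illustration, not a real external system.

    import requests

    def fetch_all(base_url="https://api.example.com/v1/shipments", page_size=100):
        """Pull every page of records from a hypothetical paginated REST API."""
        records, page = [], 1
        while True:
            resp = requests.get(base_url, params={"page": page, "per_page": page_size}, timeout=30)
            resp.raise_for_status()
            batch = resp.json().get("results", [])
            if not batch:
                break
            records.extend(batch)
            page += 1
        return records

    if __name__ == "__main__":
        rows = fetch_all()
        print(f"extracted {len(rows)} records")  # hand off to the transform/load steps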
Data Modeling and Transformation:
Apply best practices in data modeling and transformation to ensure data consistency, integrity, and quality throughout the integration process.
Implement data enrichment, cleansing, and normalization techniques as necessary.
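As an illustration of the cleansing and normalization step mentioned above (the column names and rules are hypothetical):

    import pandas as pd

    def cleanse(df: pd.DataFrame) -> pd.DataFrame:
        """Basic cleansing: trim text, standardize codes, drop duplicates and null keys."""
        out = df.copy()
        out["customer_name"] = out["customer_name"].str.strip()
        out["country_code"] = out["country_code"].str.upper()
        out = out.dropna(subset=["customer_id"])           # keys must be present
        out = out.drop_duplicates(subset=["customer_id"])  # one row per customer
        return out

    raw = pd.DataFrame({
        "customer_id": [1, 1, None],
        "customer_name": [" Acme Surgical ", " Acme Surgical ", "Unknown"],
        "country_code": ["us", "us", "de"],
    })
    print(cleanse(raw))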
Data Governance and Compliance:
Ensure compliance with data governance and security policies.
Implement data quality measures and monitor data integrity (see the sketch after this list).
Maintain documentation for data processes, structures, and architecture.
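A minimal sketch of the kind of automated data quality check that duty might involve; the table names, rules, and thresholds are assumptions for illustration only.

    import sqlite3

    def run_quality_checks(db_path="warehouse.db"):
        """Run simple integrity checks and report any failures."""
        con = sqlite3.connect(db_path)
        checks = {
            # Rule name -> SQL returning a count of violating rows (hypothetical table).
            "null order_id in stg_orders":
                "SELECT COUNT(*) FROM stg_orders WHERE order_id IS NULL",
            "negative amounts in stg_orders":
                "SELECT COUNT(*) FROM stg_orders WHERE amount < 0",
        }
        failures = {}
        for name, sql in checks.items():
            violations = con.execute(sql).fetchone()[0]
            if violations:
                failures[name] = violations
        con.close()
        return failures

    if __name__ == "__main__":
        for rule, count in run_quality_checks().items():
            print(f"QUALITY FAILURE: {rule}: {count} rows")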
Collaboration and Support:
Work closely with data scientists, analysts, and other stakeholders to understand their data needs.
Provide technical support and troubleshooting for data-related issues.
Train and mentor junior data engineers and other team members.
Application Integration:
Collaborate with application developers and IT teams to integrate data from business-critical applications into our data ecosystem.
Develop APIs, connectors, and middleware solutions to facilitate seamless data exchange between applications and data repositories.
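For the API and connector work described above, a sketch of a minimal read-only endpoint that exposes warehouse rows to another application, using Flask as one possible framework; the route and table are hypothetical.

    import sqlite3
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/orders")
    def list_orders():
        # Serve recently staged orders to a consuming application as JSON.
        con = sqlite3.connect("warehouse.db")
        con.row_factory = sqlite3.Row
        rows = con.execute("SELECT order_id, product, amount FROM stg_orders LIMIT 100").fetchall()
        con.close()
        return jsonify([dict(r) for r in rows])

    if __name__ == "__main__":
        app.run(port=5000)  # development server only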
Performance Monitoring and Optimization:
Monitor and optimize data processing and storage infrastructure for performance, scalability, and cost efficiency.
Identify and address performance bottlenecks, latency issues, and resource constraints.
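A small sketch of one way to surface bottlenecks: timing each pipeline step and logging the slow ones. The threshold and step name are assumptions, not a prescribed approach.

    import logging
    import time
    from functools import wraps

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("pipeline")

    def timed(threshold_seconds=5.0):
        """Decorator that logs how long a pipeline step takes and flags slow steps."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                elapsed = time.perf_counter() - start
                level = logging.WARNING if elapsed > threshold_seconds else logging.INFO
                log.log(level, "%s finished in %.2fs", func.__name__, elapsed)
                return result
            return wrapper
        return decorator

    @timed(threshold_seconds=2.0)
    def load_orders():
        time.sleep(0.1)  # placeholder for real work

    if __name__ == "__main__":
        load_orders()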
The duties and responsibilities cited above describe the general nature and level of work performed by people assigned to this job. They are not intended to be an exhaustive list of all the duties and responsibilities that an incumbent may be expected or asked to perform.
Qualifications
Educational and Experience Requirements
Bachelor's Degree in a technical field or equivalent work experience.
Proven experience as a Data Engineer or in a similar role.
Experience with data warehousing, ETL processes, and big data technologies.
Proficiency in SQL and experience with NoSQL databases.
Experience with Informatica preferred.
Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud).
Experience with cloud-based data platforms (e.g., Snowflake, Azure Synapse).
Experience with Azure Data Factory, Synapse, and Spark preferred.
Relevant certifications (e.g., Microsoft Certified: Azure Data Engineer Associate) are a plus.
Experience with Data Warehouse Automation & Data Vault 2.0 preferred.
Experience with Microsoft Fabric/Synapse/OneLake preferred.
Experience understanding data requirements and performing data preparation and feature engineering tasks to support model training and evaluation preferred.
Experience with EDI preferred.
Knowledge, Skills, and Abilities
Proficient in the SQL query language and in SQL Server management.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with data pipeline and data warehouse automation tools (e.g., Wherescape).
Knowledge of distributed systems and big data processing frameworks (e.g., Hadoop, Spark).
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Proficient with Microsoft Office applications.
Experience with formal software development life cycle is preferred.
Excellent analytical skills.
Ability to quickly learn and adapt to new technologies, tools, and techniques.
Ability to work both independently and within a team environment.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
KLS Martin is a drug-free employer.