IT Consultant IV, Solutions - Data Engineer
As an individual contributor, you will take ownership of the most challenging technical initiatives: solving problems at scale, driving improvements in performance and reliability, and setting the standards that others follow. You will partner closely with Data Engineering, Platform Engineering, Data Science, Architecture, and other technical teams to evaluate solution designs, influence engineering decisions, and mentor developers in modern data engineering practices.
Responsibilities and Accountabilities:
- Develop & Maintain Scalable Data Pipelines: Architect, build, and optimize ETL/ELT pipelines using PySpark, Spark SQL, Auto Loader, and Delta Live Tables to support end-to-end ingestion and transformation (see the first sketch after this list).
- Implement Robust Lakehouse Architecture: Design and enhance medallion-architecture (Bronze/Silver/Gold) data models, applying Delta Lake features such as schema evolution, Change Data Feed (CDF), OPTIMIZE, and Z-Ordering to deliver performant, reliable, and cost-efficient data layers (second sketch below).
- Integrate Data Across Cloud Platforms: Ingest and harmonize structured, semi-structured, and unstructured data from multiple cloud environments, including Azure and enterprise object storage.
- Develop Reusable Engineering Frameworks: Create and maintain reusable Python, PySpark, and YAML-based libraries and patterns to standardize ingestion, transformation, automation, and engineering workflows across teams.
- Enforce Data Quality & Governance: Implement and operationalize automated data validation frameworks (DLT expectations, data contracts; the first sketch below includes an expectation) while applying Unity Catalog governance covering permissions, lineage, external locations, and PII/PHI controls.
- Optimize Performance & Cost Efficiency: Tune Spark workloads by applying partitioning, caching, and join-optimization strategies; leverage Photon, serverless SQL, and cluster right-sizing to improve runtime performance and reduce compute costs (third sketch below).
- Collaborate with Data & Platform Teams: Partner closely with Data Scientists, Analysts, SMEs, and Platform Engineering teams to translate requirements into scalable data solutions and align on architectural, governance, and operational standards.
- Operationalize Data Science Workflows: Convert prototype notebooks into production-ready pipelines, support feature engineering and batch/real-time scoring, and manage MLflow tracking and Model Registry operations (fourth sketch below).
- Lead cross-team alignment on Databricks standards (CI/CD, governance, data quality, and operational readiness), ensuring consistent adoption across domains and delivery teams.
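The first sketch is a minimal Delta Live Tables pipeline of the kind this role builds: Auto Loader ingestion into a Bronze table, and a Silver table guarded by a DLT expectation. Paths and table names are hypothetical, and it relies on the implicit `spark` session available inside a DLT pipeline.

```python
# Minimal DLT sketch; paths and table names are hypothetical placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw claims files ingested incrementally via Auto Loader")
def claims_bronze():
    return (
        spark.readStream.format("cloudFiles")                        # Auto Loader source
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", "/mnt/schemas/claims")  # hypothetical schema path
        .load("/mnt/landing/claims")                                 # hypothetical landing zone
    )

@dlt.table(comment="Silver: validated, deduplicated claims")
@dlt.expect_or_drop("valid_member_id", "member_id IS NOT NULL")      # DLT expectation: drop invalid rows
def claims_silver():
    return (
        dlt.read_stream("claims_bronze")
        .withColumn("ingested_at", F.current_timestamp())
        .dropDuplicates(["claim_id"])  # note: keeps unbounded state unless a watermark is added
    )
```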
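The second sketch exercises the Delta Lake levers named above (Change Data Feed, schema evolution on append, and OPTIMIZE with Z-Ordering) against a hypothetical Unity Catalog table, `main.silver.claims`:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Enable Change Data Feed so downstream consumers can read row-level changes.
spark.sql(
    "ALTER TABLE main.silver.claims "
    "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)"
)

# Append new rows, letting Delta merge any new columns into the table schema.
updates_df = spark.createDataFrame(
    [("c-1001", "m-42", 125.50)],
    "claim_id STRING, member_id STRING, amount DOUBLE",
)
(updates_df.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # schema evolution on append
    .saveAsTable("main.silver.claims"))

# Compact small files and co-locate rows on a frequently filtered column.
spark.sql("OPTIMIZE main.silver.claims ZORDER BY (member_id)")

# Read the change feed incrementally from a given table version.
changes = (spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)    # hypothetical starting version
    .table("main.silver.claims"))
```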
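The third sketch illustrates the join-tuning levers mentioned above: repartitioning the large side on the join key, caching a DataFrame that is reused downstream, and broadcasting a small dimension table. Table names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.table("main.gold.claim_facts")    # large fact table (hypothetical)
members = spark.table("main.gold.member_dim")   # small dimension table (hypothetical)

# Repartition the large side on the join key so downstream joins and
# aggregations on member_id shuffle once instead of repeatedly.
facts = facts.repartition(200, "member_id")

# Cache a DataFrame that several downstream queries will reuse.
facts.cache()

# Broadcast the small dimension table to turn a shuffle join into a map-side join.
joined = facts.join(broadcast(members), "member_id")
```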
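The fourth sketch covers the MLflow side of operationalizing data science work: tracking a training run and registering the resulting model. The experiment path and model name are hypothetical, and a scikit-learn classifier stands in for whatever model the prototype notebook produces.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in training data for the sketch.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

mlflow.set_experiment("/Shared/claims-risk")   # hypothetical experiment path
with mlflow.start_run() as run:
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Promote the logged model into the registry under a hypothetical name.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "claims_risk_model")
```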
In addition to the responsibilities listed below, this position is responsible for providing support for customers (users) and for assigned applications and/or information systems, including software implementation, integration, configuration, and testing. Additional responsibilities include supporting solution design, researching and helping translate requirements into workable technical solutions, and supporting the evaluation of third-party vendors as directed.
Essential Responsibilities:
- Completes work assignments and supports business-specific projects by applying expertise in subject area; supporting the development of work plans to meet business priorities and deadlines; ensuring team follows all procedures and policies; coordinating and assigning resources to accomplish priorities and deadlines; collaborating cross-functionally to make effective business decisions; solving complex problems; escalating high priority issues or risks, as appropriate; and recognizing and capitalizing on improvement opportunities.
- Practices self-development and promotes learning in others by proactively sharing information, resources, advice, and expertise with coworkers and customers; building relationships with cross-functional stakeholders; influencing others through technical explanations and examples; adapting to competing demands and new responsibilities; listening and responding to, seeking, and addressing performance feedback; providing feedback to others and managers; creating and executing plans to capitalize on strengths and address weaknesses; supporting team collaboration; and adapting to and learning from change, difficulties, and feedback.
- Develops requirements for complex process or system solutions within assigned business domain(s) by interfacing with stakeholders and appropriate IT teams (for example, Solutions Delivery, Infrastructure, Enterprise Architecture) and leading junior team members in the development process as appropriate.
- Leverages multiple business requirements gathering methodologies to identify business, functional, and non-functional requirements (for example, SMART) across multiple business domains.
- Develops and documents comprehensive business cases to assess the costs, benefits, ROI, and Total Cost of Ownership (TCO) of proposed solutions.
- Provides insight and supports the evolution of applications, systems, and/or processes to a desired future state by maintaining a comprehensive understanding of how current processes impact business operations across multiple domains.
- Maps current state against future state processes.
- Identifies the impact of requirements on upstream and downstream solution components.
- Provides recommendations to management and business stakeholders on how to integrate requirements with current systems and business processes across regions or domains.
- Identifies and validates value gaps and opportunities for process enhancements or efficiencies.
- Supports solution design by providing insight at design sessions with IT teams to help translate requirements into workable business solutions.
- Identifies and recommends additional data and/or services needed to address key business issues related to process or solutions design.
- Participates in evaluating third-party vendors as directed.
- Supports continuous process improvement by participating in the development, implementation, and maintenance of standardized tools, templates, and processes across multiple business domains.
- Recommends regional and/or national process improvements which align with sustainable best practices and the strategic and tactical goals of the business.
Knowledge, Skills, and Abilities:
- Ambiguity/Uncertainty Management
- Attention to Detail
- Business Knowledge
- Communication
- Critical Thinking
- Cross-Group Collaboration
- Decision Making
- Dependability
- Diversity, Equity, and Inclusion Support
- Drives Results
- Facilitation Skills
- Health Care Industry
- Influencing Others
- Integrity
- Learning Agility
- Organizational Savvy
- Problem Solving
- Short- and Long-term Learning & Recall
- Teamwork
- Topic-Specific Communication
- Software Development Life Cycle
- Analytical Skills
- Business Case Development
- Business Planning
- Business Process Improvement
- Client Focus
- Crisis Incident Management
- Debugging and Troubleshooting
- Demonstrating Personal Flexibility
- Managing Diverse Relationships
- Model Development
- Negotiation
- Organizational Skills
- Prioritization
- Process Validation
- Project Management
- Relationship Building
- Requirements Elicitation & Analysis
- Technical Documentation
- Vendor Management
Minimum Qualifications:
- Bachelor's degree in Business Administration, Computer Science, CIS, or a related field and a minimum of six (6) years of experience in IT consulting, business analysis, or a related field. Additional equivalent work experience may be substituted for the degree requirement.
Preferred Qualifications:
- 10+ years of Data Engineering experience, including 4+ years working on Databricks.
- Proven experience designing enterprise-scale data architectures and distributed systems.
- Deep expertise in Delta Lake internals (file pruning, compaction, metadata management, and CDF tuning).
- Experience leading complex migrations (legacy ETL, cloud migrations, warehouse consolidation).
- Experience developing reusable engineering frameworks, libraries, and standards.
- Strong proficiency in Python, SQL, and PySpark for building scalable data pipelines.
- Experience with cloud platforms such as Azure, AWS, or GCP, including working with object storage.
- Hands-on experience with data warehouse/lakehouse technologies, including Synapse, Snowflake, or Redshift.
- Knowledge of traditional ETL tools, such as Informatica, Talend, or equivalent.
- Proficiency with Git-based version control and DevOps tooling (Azure DevOps, GitHub, Bitbucket).
- Experience with Databricks Workflows and orchestration tools for automated data processing.
Locations:
- Corona
- Lake Oswego
- Greenwood Village
- Atlanta
- Hyattsville
- Renton
- Honolulu
Navigating the Hiring Process
We're here to support you!
Having trouble with your account or have questions on the hiring process?
Please visit the FAQ page on our website for assistance.
Need help with your computer and browser settings?
Please visit the Technical Information page for assistance or reach out to the web manager at kp-hires@kp.org.
Do you need a reasonable accommodation due to a disability?
Reasonable accommodations may be available to facilitate access to, or provide modifications to the following:
- Online Submissions
- Pre-Hire Assessments
- Interview Process
If you have a disability-related need for accommodation, please submit your accommodation request and someone will contact you.