Lead Data Engineer

We are looking for a highly skilled Lead Data Engineer to architect, build, and scale our modern data platform. This role will be responsible for designing robust data models, building scalable ELT pipelines, optimizing Snowflake performance, and establishing engineering best practices across the data team.


You will work cross-functionally with Analytics, Product, and Engineering teams to ensure reliable, high-performance data solutions that power business decisions.

Roles and Responsibilities

  1. Architecture & Data Platform Ownership

  • Design and implement scalable, high-performance data warehouse solutions using Snowflake

  • Define and maintain enterprise-grade data modeling standards (dimensional modeling, star schema, slowly changing dimensions, etc.)

  • Lead the migration or modernization of legacy data systems (if applicable)

  • Define best practices for data architecture, testing, documentation, and deployment

  2. Data Pipeline Development

  • Build and maintain scalable ELT pipelines using Python and SQL

  • Develop and manage transformation layers using dbt

  • Optimize large-scale data transformations and ensure cost efficiency in Snowflake

  • Implement data validation and automated testing within dbt
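
  To give a flavor of the validation work this role owns, here is a minimal plain-Python sketch of two standard checks (not-null and unique); the table and column names are hypothetical:

  ```python
  def check_not_null(rows, column):
      """Return the rows where `column` is missing or None."""
      return [r for r in rows if r.get(column) is None]

  def check_unique(rows, column):
      """Return the values of `column` that appear more than once."""
      seen, dupes = set(), set()
      for r in rows:
          value = r.get(column)
          if value in seen:
              dupes.add(value)
          seen.add(value)
      return sorted(dupes)

  # Hypothetical batch of order rows with one null and one duplicate key.
  orders = [
      {"order_id": 1, "customer_id": "a"},
      {"order_id": 2, "customer_id": None},
      {"order_id": 2, "customer_id": "b"},
  ]

  null_failures = check_not_null(orders, "customer_id")
  duplicate_ids = check_unique(orders, "order_id")
  ```

  In practice these checks are declared as dbt tests in YAML rather than hand-written, but the underlying assertions are the same.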

  3. Performance & Optimization

  • Optimize Snowflake compute and storage usage

  • Improve query performance and warehouse efficiency

  • Implement partitioning, clustering, and performance tuning strategies

  • Monitor data pipeline health and troubleshoot failures
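
  As one common pattern behind pipeline health monitoring, here is a hedged sketch of retry-with-backoff around a task, logging each failed attempt; the function and logger names are illustrative:

  ```python
  import logging
  import time

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("pipeline")

  def run_with_retries(task, max_attempts=3, base_delay=0.1):
      """Run `task` (a zero-argument callable), retrying transient failures
      with exponential backoff and logging each failed attempt."""
      for attempt in range(1, max_attempts + 1):
          try:
              return task()
          except Exception as exc:
              log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
              if attempt == max_attempts:
                  raise  # exhausted retries: surface the failure
              time.sleep(base_delay * 2 ** (attempt - 1))
  ```

  In production an orchestrator such as Airflow typically provides retries and alerting; the sketch only shows the shape of the pattern.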

  4. Leadership & Collaboration

  • Provide technical leadership to Data Engineers

  • Conduct code reviews and enforce coding standards

  • Mentor team members on SQL optimization, modeling, and pipeline design

  • Work closely with business stakeholders to translate requirements into technical solutions

  5. Governance & Quality

  • Ensure data accuracy, integrity, and consistency

  • Define and implement data quality frameworks

  • Establish version control and CI/CD processes for data workflows

  • Maintain documentation for models, pipelines, and architecture



Requirements

Skills and Qualifications


Technical Skills

  • Bachelor’s degree in Computer Science, Information Technology, or a related field, with 5+ years of IT experience

  • 4+ years of experience in Data Engineering

  • Deep hands-on expertise in Snowflake (architecture, optimization, security, RBAC)

  • Strong experience in dbt (models, macros, testing, incremental models)

  • Advanced SQL skills (complex joins, window functions, query optimization)

  • Strong proficiency in Python (data processing, automation, scripting)

  • Solid understanding of data modeling concepts (fact/dimension tables, SCD types, normalization vs denormalization)

  • Experience designing production-grade data pipelines
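
  For illustration, a minimal plain-Python sketch of a Type 2 slowly changing dimension update, one of the modeling concepts listed above: on an attribute change, the current row is closed out and a new current row is appended. All field names here are hypothetical.

  ```python
  from datetime import date

  def apply_scd2(dimension, key, new_attrs, today):
      """Close the current row for `key` if its attributes changed,
      then append a new current row (SCD Type 2)."""
      current = next(
          (r for r in dimension if r["key"] == key and r["is_current"]), None
      )
      if current and current["attrs"] == new_attrs:
          return dimension  # no change: keep history as-is
      if current:
          current["is_current"] = False
          current["valid_to"] = today
      dimension.append(
          {"key": key, "attrs": new_attrs, "valid_from": today,
           "valid_to": None, "is_current": True}
      )
      return dimension

  # A customer moves city: the old row is closed, a new current row added.
  dim = [{"key": "c1", "attrs": {"city": "Pune"},
          "valid_from": date(2023, 1, 1), "valid_to": None, "is_current": True}]
  apply_scd2(dim, "c1", {"city": "Mumbai"}, date(2024, 6, 1))
  ```

  At warehouse scale, dbt snapshots implement this same pattern declaratively.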

Preferred / Good to Have

  • Experience with orchestration tools (Airflow, Dagster, etc.)

  • Experience with cloud platforms (AWS / Azure / GCP)

  • CI/CD implementation for data workflows

  • Experience handling large-scale datasets (TB+ scale)

  • Exposure to data governance and compliance standards


What we offer

  • Impact: Play a pivotal role in shaping a rapidly growing venture studio.

  • Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.

  • Growth: Access professional development opportunities and mentorship.

  • Benefits: Competitive salary, health/wellness packages, and flexible work options.