Senior Databricks Engineer
About the Role
We are seeking an experienced Senior Databricks Engineer to design, build, and scale modern data platforms using Databricks and cloud-based data technologies. In this role, you will be responsible for developing high-performance data pipelines, enabling advanced analytics, and supporting AI/ML workloads across the organization.
You will collaborate closely with data engineers, data scientists, analysts, and platform teams to build scalable, reliable, and secure data solutions. The ideal candidate is passionate about big data processing, lakehouse architectures, and modern cloud data platforms, and enjoys solving complex data engineering challenges.
Roles & Responsibilities
Design, develop, and maintain scalable data pipelines and ETL/ELT workflows using Databricks and Apache Spark.
Build and manage data lakehouse architectures leveraging Delta Lake and cloud-based storage solutions.
Optimize data processing jobs for performance, reliability, and cost efficiency across large-scale datasets.
Collaborate with data scientists and analysts to prepare and transform data for analytics, reporting, and AI/ML use cases.
Implement data quality, governance, and security best practices across data platforms.
Develop reusable data frameworks, libraries, and automation to improve engineering productivity.
Integrate Databricks with other cloud-native services, orchestration tools, and data platforms.
Monitor and troubleshoot data pipelines and production workloads to ensure high availability and reliability.
Contribute to data architecture design, documentation, and best practices across the organization.
Mentor junior engineers and support knowledge sharing within the data engineering team.
Requirements
Qualifications & Skills
Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
5–8 years of experience in data engineering or big data development.
Strong hands-on experience with Databricks, Apache Spark, and Delta Lake.
Advanced programming skills in Python or Scala, along with strong SQL expertise.
Experience building and optimizing ETL/ELT pipelines and large-scale data workflows.
Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
Strong knowledge of data lakehouse architectures and distributed data processing systems.
Experience with workflow orchestration tools such as Airflow or similar platforms.
Strong understanding of data modeling, performance optimization, and data architecture best practices.
Excellent analytical, problem-solving, and communication skills.
Preferred Skills
Experience with streaming data platforms such as Kafka or Spark Structured Streaming.
Familiarity with Snowflake, Redshift, or other modern data warehouse platforms.
Experience supporting machine learning or advanced analytics workloads.
Exposure to CI/CD pipelines, DataOps, and infrastructure automation practices.
Signs you may be a great fit
Impact: Play a pivotal role in shaping a rapidly growing venture studio.
Culture: Thrive in a collaborative, innovative environment that values creativity and ownership.
Growth: Access professional development opportunities and mentorship.
Benefits: Competitive salary, health/wellness packages, and flexible work options.