Required Skills
Experience with ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes
Experience with the AWS cloud platform
Experience as an AWS administrator, including a current AWS certification
Experience with Python or SQL
Experience with Delta Lake
Experience with Dataiku
Understanding of DevOps principles and practices
About the role
Design, develop, and optimize data workflows and notebooks using Databricks to ingest, transform, and load data from various sources into the data lake.
Build and maintain scalable, efficient data processing workflows in Spark (PySpark or Spark SQL), following coding standards and best practices (see the sketch after this list).
Develop data models and schemas to support reporting and analytics needs.
Ensure data quality, integrity, and security by implementing appropriate checks and controls.
Monitor and optimize data processing performance, identifying and resolving bottlenecks.
Stay up to date with the latest advancements in data engineering and Databricks technologies.
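As an illustration of the ingest-transform-load work described above, the following is a minimal PySpark sketch of one such step with a simple row-level quality check. All paths, column names, and thresholds are hypothetical placeholders, and the Delta write assumes a Databricks or delta-spark-enabled environment.

# Minimal ingest -> transform -> quality check -> Delta load sketch.
# Paths, column names, and the table layout are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest: read raw JSON landed by an upstream process (path is illustrative).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: normalize types and derive a load date for partitioning.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .withColumn("load_date", F.current_date())
)

# Quality check: reject the batch if required fields are missing.
bad_rows = orders.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed validation; aborting load")

# Load: append to a Delta table in the data lake (location is illustrative).
(orders.write.format("delta")
       .mode("append")
       .partitionBy("load_date")
       .save("/mnt/lake/silver/orders"))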