Position Overview
We are seeking an experienced Transformation Speed Layer Engineer to join our data engineering team. The successful candidate will be responsible for designing, developing, and maintaining large-scale data pipelines and architectures that enable fast and reliable data transformation and processing.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and architectures for fast data transformation and processing
- Collaborate with cross-functional teams to identify and prioritize data transformation requirements
- Ensure data quality, integrity, and security across data workflows
- Work closely with data scientists and analysts to optimize data models and algorithms
- Optimize performance, scalability, and reliability of data pipelines
- Develop and maintain technical documentation
Required Qualifications & Skills
- Bachelor’s degree in Computer Science, Engineering, or a related field
- Minimum 5 years of experience in data engineering or software development
Technical Skills (Must Have)
- PySpark and Java
- Data Modelling, Data Ingestion, Data Warehousing
- Real-time Services: Kafka, Spark Streaming, Flink
- Distributed Processing: Hadoop, HDFS, Cloudera, Hive
- Experience working on Azure platform
Good to Have
- Experience with Apache Beam, Apache Spark, or similar processing technologies
- Experience with cloud platforms (Azure, AWS, Google Cloud)
- Experience with containerization technologies (e.g., Docker, Kubernetes)
- Familiarity with Machine Learning and Data Science concepts
Apply