About the Role
We are looking for a highly skilled Data Engineer to join our Data & Analytics team in Herstal.
You will play a key role in designing, building, and optimizing modern data pipelines and cloud-based architectures leveraging AWS, Snowflake, Spark, and a broad integration ecosystem.
You will collaborate with cross-functional teams to ensure reliable data flows, scalable infrastructures, and high-quality datasets powering analytical and operational use cases.
Key Responsibilities
Data Engineering
- Design, build, and optimize data pipelines (batch and real-time)
- Develop data transformations using Python, SQL, and Spark
- Integrate diverse data sources via AWS Glue, MuleSoft, Kafka, and API-based ingestion
Cloud Architecture (AWS)
- Implement and maintain scalable cloud data architectures on AWS
- Design data models and ingestion frameworks for Snowflake
- Ensure availability, security, and performance of data platforms
Infrastructure & Automation
- Deploy infrastructure as code using Terraform
- Contribute to CI/CD pipelines and automated deployment processes
- Monitor and troubleshoot data workflows in production
Required Skills & Experience
Must-have
- 3+ years of experience as a Data Engineer
- Strong knowledge of AWS (Glue, Lambda, S3, IAM, Step Functions…)
- Solid experience with Snowflake
- Proficiency in Python, Spark, and SQL
- Experience with Kafka (streaming/real-time processing)
- Experience with Terraform for IaC
- Strong understanding of data modeling and ETL/ELT design
Nice-to-have
- MuleSoft integration experience
- Knowledge of CI/CD (GitLab CI, GitHub Actions, Jenkins, etc.)
- Experience in industrial/operational environments is a plus
What We Offer
- A stimulating environment in a fast-growing, data-driven organization
- Work on cutting-edge cloud technologies and large-scale data projects
- Hybrid working model based in Herstal
- Attractive compensation package based on experience
Apply