AI Platform Engineer


AI Platform Operations, Enablement & Open-Source Build-Out (with Expert Support)
Location: Brussels (on-site)
Working language: English and either French or Dutch
Context
TreeTop Asset Management is building a state-of-the-art, open-source-first internal AI platform for a regulated environment. We are a small team, so we ship fast and we keep governance pragmatic but strict where it matters: confidentiality, traceability, and reliability.
We will work with an experienced external consultant to accelerate architecture and initial delivery. Your role is to learn fast, co-build, and become the internal owner who can operate, extend, and continuously improve the platform.
Mission
Become TreeTop's internal AI platform owner over time by:
· co-building the foundation with the external consultant
· operating the platform day-to-day (stability, upgrades, monitoring, documentation)
· converting team needs (Marketing, Compliance, Operations, Research) into repeatable templates and workflows
· progressively taking ownership of new features, integrations, and automation
What you will build (LLM-first, platform-centric)
Phase 1 – Internal portal + routing + guardrails
With the consultant, you will:
· Deploy a self-hosted internal AI portal such as Open WebUI.
· Implement a gateway/routing layer so teams can use one interface while the platform selects the right approved provider/model based on task and data sensitivity (example gateway: LiteLLM Proxy).
· Implement pragmatic guardrails:
o in-product policy prompts and warnings
o sensitivity-aware routing and basic redaction/blocking patterns where relevant
o audit-friendly logs (who/when/template/model/provider)
o ability to rapidly disable a provider/model when required
· Ship the first internal workflows with reliability controls:
o structured outputs where needed (schema/tool calling patterns)
o validation, retries, fallbacks
Your focus: absorb the architecture, document it (runbook), and ensure TreeTop is not dependent on external parties.
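To give a flavour of the reliability controls above, here is a minimal sketch of the validation-retry-fallback pattern. The provider names, the call_model stub, and the required keys are illustrative placeholders, not the actual gateway API or TreeTop's stack:

```python
import json

# Stub standing in for a real gateway call (e.g. an OpenAI-compatible API
# behind a proxy). "primary" deliberately returns truncated JSON to show
# the fallback path; these canned responses are purely illustrative.
def call_model(provider: str, prompt: str) -> str:
    canned = {
        "primary": '{"summary": "Q3 report", "sentiment"',   # invalid JSON
        "fallback": '{"summary": "Q3 report", "sentiment": "neutral"}',
    }
    return canned[provider]

REQUIRED_KEYS = {"summary", "sentiment"}

def validated_call(prompt: str, providers=("primary", "fallback"), retries=2):
    """Try each approved provider in order, retrying on invalid output."""
    for provider in providers:
        for _ in range(retries):
            raw = call_model(provider, prompt)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError:
                continue  # malformed output: retry, then fall through
            if REQUIRED_KEYS <= data.keys():
                return provider, data  # schema check passed
    raise RuntimeError("all providers failed validation")

provider, result = validated_call("Summarise the Q3 report")
print(provider, result["sentiment"])  # prints: fallback neutral
```

In a real deployment the schema check would typically use a proper validator and the retry would re-prompt the model, but the control flow (validate, retry, fall back to the next approved provider) is the same.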
Phase 2 – Controlled open-source model capability
· Add one or more open-source models hosted in a controlled environment (compute hosted by a third-party infrastructure provider, under our control and policies).
· Define routing rules for internal/sensitive workloads and internal knowledge use cases.
· Establish lifecycle basics: versioning, rollbacks, performance/cost visibility, and small evaluation sets.
Phase 3 – Workflow automation + agents (ongoing)
· Integrate workflow automation such as n8n for repeatable business processes.
· Add observability/tracing so you can debug and improve workflows over time (example: Langfuse, open source).
· Build agentic workflows with explicit tool access, constrained outputs, and human validation where appropriate.
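The "explicit tool access + human validation" pattern above can be sketched as follows; the tool names, registry, and approval hook are hypothetical, not tied to any specific agent framework:

```python
# Hypothetical illustration: an explicit allow-list of tools and an
# injectable approval gate. In production the approver would be a human
# sign-off step; here it is a callable so the flow can be exercised.
ALLOWED_TOOLS = {
    "lookup_fund": lambda name: {"fund": name, "nav": 102.5},
}

def run_agent_step(tool_name: str, arg: str, approver=lambda action: True):
    # Refuse anything outside the explicit allow-list.
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not allow-listed")
    # Human-validation gate before the tool actually runs.
    if not approver(f"{tool_name}({arg})"):
        return {"status": "rejected"}
    return {"status": "ok", "result": ALLOWED_TOOLS[tool_name](arg)}

print(run_agent_step("lookup_fund", "Global Equity")["status"])  # prints: ok
```

The key design point is that the agent can only invoke tools that were explicitly registered, and every invocation passes through a gate that a human (or policy) can veto.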
Must-have skills
· Strong Docker + Linux fundamentals (deploy, debug, logs, networking basics).
· Solid Python and API integration skills.
· Comfort with modern AI stacks and the open-source ecosystem (pragmatic deployment mindset).
· Reliability mindset: validation, structured outputs, regression tests for templates/workflows.
· Clear documentation and communication (runbooks, "how to use safely" guides).
Nice-to-have
· Identity/access control concepts (OIDC/SSO, RBAC).
· Observability basics (metrics, alerting, incident hygiene).
· RAG/embeddings experience (ingestion, retrieval evaluation).
Who should apply
Junior–mid engineers who learn fast and like ownership. Strong signals include:
· shipped internal tools that others rely on
· open-source contributions (maintainer or meaningful contributor)
· comfort operating real services (updates, stability, incident handling)
How to apply
Send:
1. CV
2. GitHub (or evidence of shipped work)
3. A short note covering:
· a Docker-deployed service you operated (monitoring, upgrades, incidents)
· an example where you made machine outputs reliable (validation, structured outputs, tests)
