Career & Certifications
A timeline of my professional experience and validated certifications.
Infrastructure & Automation Engineer (Consultancy)
At GreenCodes Digital Core I acted as Infrastructure & Automation Engineer in a consultancy capacity. I designed and built a production-grade, highly available Kubernetes platform and a GitOps delivery workflow so that multiple e‑commerce and business clients could run their workloads in a secure, repeatable, and observable way.
Platform and resilience:
- Kubernetes on Rancher: Production-ready, highly available clusters managed with Rancher, Longhorn for replicated block storage, and Velero for automated disaster recovery of both cluster state and application data.
- Security: Falco for runtime detection; Nginx Proxy Manager and cloudflared tunnel operators for secure exposure without opening inbound ports; corporate access over tunnels.
- GitOps with Argo CD: Client-specific stacks (PrestaShop, WordPress, Saleor) were managed via Argo CD and isolated with GitLab groups, so deployments were repeatable across multiple e‑commerce tenants.
- ERPNext multi-tenant: Deployed ERPNext in multi-tenant mode with custom Frappe modules so each business had an isolated instance while management stayed centralised.
- Gotifier: A global notification system for incidents, attacks, or downtime so that teams could react quickly across all client environments.
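Disaster recovery of this kind is typically declared as a Velero Schedule resource. A minimal sketch, with an illustrative name, backup window, and retention (not the actual client configuration):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-cluster-backup     # illustrative name
  namespace: velero
spec:
  schedule: "0 2 * * *"          # nightly, during a low-traffic window
  template:
    includedNamespaces:
      - "*"                      # whole cluster state
    snapshotVolumes: true        # capture Longhorn volumes as well
    ttl: 720h0m0s                # keep backups for 30 days
```

Pairing a schedule like this with object storage (S3-compatible) is what makes both cluster state and application data restorable after an incident.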
Automation and AI:
- WhatsApp automation (WAHA + n8n): Workflows that read product data from Google Drive, auto-reply to customer inquiries, create orders in ERPNext, and notify production—reducing manual handling and shortening the order-to-fulfilment cycle.
- MCP ERPNext AI assistant: An assistant that let employees query sales, customers, and inventory in natural language from ERPNext, improving internal productivity.
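The order-creation step of the WhatsApp flow can be sketched as a small transformation from a parsed inquiry to an ERPNext-style Sales Order payload. The catalogue, customer, and field values below are hypothetical; the field names follow ERPNext conventions but this is an illustration, not the production workflow:

```python
# Hypothetical product catalogue, standing in for data read from Google Drive.
CATALOGUE = {"SKU-001": {"item_name": "Olive Oil 1L", "rate": 12.5}}

def build_sales_order(customer: str, sku: str, qty: int) -> dict:
    """Turn a parsed WhatsApp inquiry into an ERPNext Sales Order payload."""
    item = CATALOGUE[sku]
    return {
        "doctype": "Sales Order",
        "customer": customer,
        "items": [{
            "item_code": sku,
            "item_name": item["item_name"],
            "qty": qty,
            "rate": item["rate"],
            "amount": qty * item["rate"],
        }],
    }

order = build_sales_order("Acme Retail", "SKU-001", 4)
print(order["items"][0]["amount"])  # 50.0
```

In the real workflow, n8n posts a payload like this to ERPNext's REST API and then notifies production, so no order is ever keyed in by hand.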
End-to-end automation and platform work improved operational efficiency by ~70% across multiple e‑commerce clients, with faster response times and less manual data entry.
Platform & Build Engineer
Over the course of two freelance engagements, I helped clients modernise their delivery pipelines and build systems—from full‑stack web apps to polyglot monorepos with mobile apps and microservices.
Project 1: Next.js Cloud Automation (Sept–Nov 2024)
Took a Next.js project from manual deployment to a fully automated lifecycle on AWS:
- CI/CD: GitHub Actions workflows for build, test, and code quality on every push/PR.
- Deployment automation: Merge‑to‑production with zero‑touch pipelines.
- Infrastructure as Code: Provisioned EC2, IAM, networking, and related services using Terraform – versioned, repeatable, and secure.
- Documentation: Clear runbooks and pipeline descriptions for client maintenance.
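The CI side of this pipeline can be sketched as a GitHub Actions workflow. Job names, Node version, and npm scripts are illustrative, not the client's actual configuration:

```yaml
# .github/workflows/ci.yml — build, test, and quality on every push/PR
name: ci
on:
  push:
    branches: [main]
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint && npm test   # quality gates before build
      - run: npm run build              # Next.js production build
```

A second workflow, triggered on merge to the production branch, handed the artefact to the Terraform-provisioned AWS environment with no manual steps.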
Project 2: Bazel Monorepo for E‑commerce (Mar–Dec 2025)
Architected a Bazel‑powered monorepo unifying 10+ microservices (Java, Go, Python) and mobile apps (iOS/Android with 4 flavors each). Delivered a secure, scalable build system with measurable impact:
- Unified build: Custom Bazel macros reduced boilerplate by 70%; used rules_apple, rules_android, rules_go, etc. for polyglot support.
- Performance: Incremental builds and remote caching (RBE) cut full‑workspace CI times from 45 min → 12 min; mobile flavor builds from 20 min → under 5 min.
- SSDLC integration: Custom SonarQube rule for SAST, dependency scanning (bazel query + SBOM), and mandatory container scanning (Trivy/Grype) – zero critical vulnerabilities in production for 6 months.
- GitLab CI: Dynamic child pipelines, remote cache sharing, and artifact promotion (container images, IPA, APK) via the GitLab registry.
- Developer experience: Onboarded 20+ engineers; pre‑commit hooks reduced failed CI runs by 40%; built remote execution clusters for consistent builds.
- Outcome: 70% build time reduction, 40% cloud cost savings—all while maintaining GitOps deployments to Kubernetes.
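The boilerplate-reducing macros were Starlark wrappers over the standard rules. A hypothetical sketch (this is not the client's actual macro; names and defaults are illustrative):

```python
# java_microservice.bzl — Starlark macro collapsing the per-service
# library + binary boilerplate into one call.
load("@rules_java//java:defs.bzl", "java_binary", "java_library")

def java_microservice(name, srcs, deps = [], main_class = None, **kwargs):
    java_library(
        name = name + "_lib",
        srcs = srcs,
        deps = deps,
        **kwargs
    )
    java_binary(
        name = name,
        main_class = main_class or "com.example.%s.Main" % name,
        runtime_deps = [":" + name + "_lib"],
    )
```

One macro call per service keeps BUILD files short and makes conventions (naming, visibility, image targets) uniform across all 10+ services.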
Freelance work lets me apply platform engineering patterns—CI/CD, IaC, security—across different domains, from lean web apps to complex monorepos. These projects also keep my skills sharp with tools like Bazel, GitLab CI, and AWS.
SysOps Engineer — IT Team
After the TGO chapter, I moved into the IT team at Proxym IT as a SysOps Engineer. My role was to be the bridge between TGO (governance, DevOps, security) and IT (servers, hypervisors, networks): I understood both worlds and built solutions that made both teams work together smoothly. That meant upgrading core platforms (GitLab, SonarQube), automating client infrastructure (e.g. Bankerise at La Poste Tunisie), and improving resilience—including responding to a ransomware incident.
A zero-day exploit led to an Akira infection in one part of the company DC; years of data were encrypted. With the team, we recovered in about 24 hours using existing DR plans and clear coordination. I helped identify and isolate the source, apply remediations, and then roll out AdGuard across the company to harden DNS and reduce exposure. That period reinforced how critical DR and calm incident response are.
La Poste Tunisie (end-to-end):
I supported La Poste Tunisie from both the TGO and IT sides: CI/CD and repo governance as DevOps, then production infrastructure as SysOps. With the La Poste team we automated provisioning using ProxGen, Kubespray, ECK, and Longhorn so that a full, production-grade environment could be brought up in a fraction of the time that manual setup used to take.
Rancher and Kubernetes governance:
We moved from vanilla K8s dev clusters to Rancher for central management. I set up Rancher HA, integrated it with our vSphere environment so that new VMs were provisioned automatically for scaling dev clusters, and we replaced NFS with Longhorn for distributed storage. RBAC and OPA policies (enforced via admission webhooks) were used to block unsafe practices in manifests; Velero operators and CRDs automated backups of cluster state to S3 and supported DR plans.
ProxGen — VM provisioning in about a minute:
IT was spending 30+ minutes per VM (ISO, manual install, config drift). I led the shift to cloud-init templates and Terraform: the ProxGen project lives in GitLab, with centralized tfstate, so an operator only needs to open a branch, set tfvars (OS, CPU, RAM, IP, etc.), and run the pipeline—about one minute later the VM is provisioned in Proxmox or vSphere. ProxGen snippet on GitHub.
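The operator-facing input is a small tfvars file on a branch. Variable names below are illustrative, not ProxGen's actual schema:

```hcl
# terraform.tfvars — per-branch inputs edited before running the pipeline
vm_name   = "gitlab-runner-07"
target    = "proxmox"               # or "vsphere"
os_image  = "ubuntu-22.04-cloudinit"
cpu_cores = 4
memory_mb = 8192
ip_addr   = "10.10.20.57/24"
gateway   = "10.10.20.1"
```

Because tfstate is centralized in GitLab, every provisioned VM has a reviewable history, and config drift between machines effectively disappears.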
Total infra automation (Kubespray, Longhorn, ECK):
For client deployments (e.g. Bankerise at La Poste), we cut preparation from ~2 days to ~1.5 hours. Using ProxGen for VMs, a hardened Kubespray for production-ready Kubernetes, and Ansible playbooks for Velero and other components, we then relied on operators (Keycloak, Redis, ECK for the Elastic stack) instead of ad-hoc installs. That reduced config drift between clients and made environments stable and DR-ready.
Bankerise Platform — DevOps & SysOps
Led the DevOps and SysOps efforts for the implementation of the Bankerise® Omnichannel Digital Banking Solution at La Poste Tunisienne. Collaborated with a 15+ member cross-functional team to streamline development, deployment, and production operations, ensuring a smooth and highly reliable banking platform experience.
This project spanned both my TGO (DevOps) and IT (SysOps) roles at Proxym IT—from pipelines and code quality to production infrastructure and knowledge transfer with the La Poste team.
Key contributions:
- GitLab CI/CD pipelines: Designed and managed pipelines for testing, building, scanning, and deploying across Dev, QA, and SIT environments. Pipelines covered both Android and iOS applications with clear quality gates and deployment stages.
- Argo CD: Implemented Argo CD to automate deployments and support continuous delivery, keeping application state in sync with Git and enabling repeatable, auditable rollouts.
- Team guidance: Guided frontend, backend, and mobile teams on code quality, Git practices, and workflow standards so the whole team could ship with confidence.
- Production infrastructure: Coordinated with IT to set up highly available production infrastructure, including Kubernetes clusters, Elasticsearch & Kibana (ECK), and Keycloak—leveraging ProxGen, Kubespray, Longhorn, and ECK operators for rapid, consistent provisioning.
- Knowledge transfer: Conducted workshops with La Poste technical representatives to explain infrastructure and Bankerise solution setup, ensuring the client team could operate and maintain the platform.
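An Argo CD deployment like this is declared per environment as an Application resource. A sketch with illustrative repo URL, paths, and namespaces (not the actual Bankerise configuration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bankerise-qa            # one Application per environment (Dev/QA/SIT)
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/bankerise/deploy.git
    targetRevision: main
    path: overlays/qa           # environment-specific manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: bankerise-qa
  syncPolicy:
    automated:
      prune: true               # remove resources deleted from Git
      selfHeal: true            # revert manual drift in the cluster
```

With this in place, "what is running" is always answerable from Git history, which is exactly what an auditable banking rollout needs.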
OpenShift & GitOps Platform Engineer (Consultancy)
For Qatar International Islamic Bank (QIIB) I worked on making their OpenShift platform resilient, auditable, and production-ready. The focus was multiple environments (preprod, staging) and a GitOps-driven workflow so that changes were traceable and repeatable.
What I did:
- OpenShift cluster management: Operated and hardened OpenShift clusters for different environments, aligning with banking-grade requirements.
- Argo CD: Introduced and configured Argo CD for GitOps so that application and config changes were driven from Git, with clear history and rollback.
- Sealed Secrets: Integrated Sealed Secrets so that sensitive data could be stored in Git in encrypted form and decrypted only inside the cluster, keeping secrets management secure and GitOps-friendly.
- Collaboration: Worked with the bank’s IT and security teams to align automation with their policies and audit needs.
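The Sealed Secrets flow is a short command sequence: encrypt with the cluster's public key, commit the sealed form. Secret names and values below are illustrative:

```shell
# Render a Secret locally (never applied directly), then seal it so only
# the in-cluster controller can decrypt it.
kubectl create secret generic db-creds \
  --from-literal=password='s3cret' \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > db-creds-sealed.yaml

# The sealed manifest is safe to store in Git; Argo CD applies it and the
# controller unseals it into a normal Secret inside the cluster.
git add db-creds-sealed.yaml && git commit -m "Add sealed DB credentials"
```

This keeps the GitOps invariant intact: everything, including secrets, lives in Git, but nothing sensitive is readable there.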
In regulated environments, “who changed what and when” matters. GitOps gives a single source of truth and a full audit trail from commit to cluster.
DockerIOs — Virtualized macOS CI Runners
The company was hitting a scaling wall: GitLab pipeline jobs for iOS builds were growing, and physical Mac Mini runners were expensive and hard to scale. I proposed and led a project to run fully virtualized macOS machines as CI runners by leveraging Docker-OSX (QEMU-based macOS virtualization in Docker).
High-spec iOS build jobs need macOS; buying and maintaining many Mac Minis was the bottleneck. We needed a way to scale runners without scaling hardware linearly.
What I did:
- Designed and implemented virtualized macOS runners using Docker-OSX (QEMU + Docker), so we could run macOS build agents as containers.
- Deployed on Proxmox: These “virtual” runners were in reality high-spec VMs on Proxmox, giving us many macOS build environments without a room full of Mac Minis.
- Integrated with GitLab: Wired the new runners into the existing GitLab CI setup so iOS pipelines could use them transparently.
- Result: Removed the bottleneck of physical macOS hardware for building iOS apps on demand; the team could run more concurrent iOS jobs and iterate faster.
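Launching one virtualized macOS guest with Docker-OSX (the sickcodes/docker-osx project) looks roughly like this; the flags follow the project's documented basics, and hostnames and runner details are illustrative:

```shell
# KVM passthrough plus the documented SSH port forward; resource sizing
# for CI workloads is tuned per the project's documentation.
docker run -it \
  --device /dev/kvm \
  -p 50922:10022 \
  sickcodes/docker-osx:latest

# Once macOS is up, SSH into the guest and register it as a GitLab runner:
ssh user@localhost -p 50922
# gitlab-runner register --url https://gitlab.example.com --executor shell ...
```

On Proxmox, each such container-wrapped VM becomes one more concurrent macOS build slot, without buying another Mac Mini.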
Mobile Delivery & Release Engineer (VCP)
At Injazat I worked in a multinational team as the bridge between development and delivery for mobile applications. My focus was Firebase, Bitrise, and release flows so that multiple flavours (Android and iOS) could be built, tested, and published in a controlled way.
What I did:
- Sync between dev and delivery: Aligned developers and release processes so that builds were reproducible and releases were predictable.
- Firebase: Configured and maintained Firebase projects for the apps (features, config, environments).
- Bitrise and Bitbucket: The client had chosen Bitrise for mobile CI; I set up and maintained pipelines for build, test, and production releases, including AAB and iOS artefacts, and kept everything versioned and traceable in Bitbucket.
- Bug reporting and quality: Integrated bug reporting and quality checks into the pipeline so that issues were caught before release and could be tracked back to commits or builds.
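A Bitrise workflow for one Android flavour can be sketched in bitrise.yml roughly like this; step versions and inputs are illustrative, not the client's actual configuration:

```yaml
# bitrise.yml — sketch of a release workflow for one flavour
format_version: "13"
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git
workflows:
  release:
    steps:
      - git-clone@8: {}
      - android-build@1:
          inputs:
            - variant: productionRelease   # one workflow per flavour
            - build_type: aab              # Play Store artefact
      - sign-apk@1: {}
      - deploy-to-bitrise-io@2: {}
```

Equivalent workflows handled the iOS flavours, so every artefact was built, signed, and archived by the pipeline rather than on a developer machine.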
Working across time zones and cultures reinforced how important clear documentation and automation are—when the pipeline is the source of truth, everyone stays aligned.
DevOps Engineer — Technology Governance Office (TGO)
At Proxym IT I joined the Technology Governance Office (TGO) and worked on DevOps, technology governance, and software architecture for a wide range of projects—banking, insurance, and others—across web and mobile platforms. My role was to bring order, security, and repeatability to how we built and delivered software at scale.
We managed over 1000 projects on a self-managed GitLab instance: pipelines, access, and deployment patterns had to be consistent and governable.
Key responsibilities and achievements:
- Technology governance: Defined and applied DevOps practices, CI/CD standards, and architecture guidelines so that TGO and delivery teams could align on quality and security.
- Artifact and license compliance: Led a project to monitor third-party licenses across downstream projects and avoid distributing commercial products that relied on incompatible OSS. I integrated and adapted the SonarQube License Check plugin, wiring it into GitLab pipelines so that JSON lock files and POMs were scanned as part of quality gates. That gave us clear visibility and helped prevent compliance issues with vendors and clients.
- SSDLC implementation: Standardized CI/CD across 1,000+ projects with reusable GitLab CI templates and components for Java, Node.js, and container-based pipelines—removing copy-paste and keeping delivery consistent. I enforced pre-commit Git hooks (e.g. Gitleaks) to block secrets and sensitive code before merge, and integrated SAST (SonarQube, Trivy, Hadolint) and DAST (OWASP ZAP, ArcherySec as ASOC). We centralized vulnerability management in DefectDojo to aggregate findings, handle false positives, and speed up client audits.
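Wiring Gitleaks in as a pre-commit hook is a few lines of configuration; the hook is distributed by the Gitleaks project itself, and the pinned version below is just an example:

```yaml
# .pre-commit-config.yaml — block committed secrets before they reach a merge
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4          # pin to a version your org has vetted
    hooks:
      - id: gitleaks
```

Catching secrets at commit time is far cheaper than rotating credentials after a leaked token reaches a shared repository.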
- WeHub — GitHub Actions and OVH: Took a legacy codebase and moved it to GitHub Actions with full CI/CD and deployment to OVH, reducing manual steps and aligning the project with modern automation.
- Diag IT (Viola / MTA3): Delivered full CI/CD on GitLab for Diag IT mobile apps: build, test, and publish for Android and iOS in an automated, repeatable way.
- Nexus and artifact management: Improved proxy usage and bandwidth, enforced cleanup and compaction policies (freeing 200GB+), and led a migration to a newer Nexus version by converting and reconciling blobs the hard way when tooling was limited.
- GitLab self-hosted and runners: Drove upgrades and a better runner strategy with the IT team: moved to Docker runners with distributed caches (MinIO) and built a dashboard to monitor 200+ daily pipelines, spot bottlenecks, and give the DevOps and governance teams clear visibility.
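The distributed-cache setup follows GitLab Runner's S3-compatible cache configuration; the endpoint, credentials, and bucket below are placeholders:

```toml
# config.toml excerpt — Docker runners sharing a cache on MinIO
[[runners]]
  name = "docker-runner-01"
  executor = "docker"
  [runners.cache]
    Type = "s3"
    Shared = true                         # cache shared across runners
    [runners.cache.s3]
      ServerAddress = "minio.internal:9000"
      AccessKey = "RUNNER_CACHE_KEY"
      SecretKey = "RUNNER_CACHE_SECRET"
      BucketName = "runner-cache"
      Insecure = true                     # plain HTTP inside the LAN
```

A shared cache means a dependency downloaded by one runner is reused by all of them, which is where much of the pipeline speed-up comes from.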
Mobile Platform for Circular Economy Resource Management
Diag IT is a mobile platform that helps companies manage the resource diagnosis of their products, equipment, and materials (PEMD) and quickly market reusable materials, promoting the circular economy. The platform includes web, Android, and iOS applications designed to optimize resource reuse and streamline diagnostic workflows.
I worked on this project while at Proxym IT; it followed me across both roles—DevOps in the TGO and SysOps in the IT team—so I was able to own the full lifecycle from pipelines to infrastructure.
I took ownership of legacy source code with a focus on understanding and improving it: modernizing, optimizing, and scaling the platform end to end.
My role and contributions:
- CI/CD automation: Automated CI/CD pipelines for web, Android, and iOS apps, enabling faster and safer releases. Pipelines ran on GitLab with clear quality gates and deployment stages.
- React Native upgrade: Collaborated with the team to upgrade the React Native version, ensuring app stability and future-proofing for both Android and iOS.
- Server management and optimization: Removed legacy implementations, cleaned up servers, handled disaster recovery, and mitigated bottlenecks. Used PostgreSQL with PgBouncer where relevant for connection pooling and performance.
- Observability: Enhanced app instrumentation using Loki and Grafana, improving monitoring and observability so the team could detect and debug issues quickly.
- Cloud migration: Architected a migration from legacy OVH deployment to AWS EKS, implementing Longhorn volume management for scalability and reliability. The move to EKS and Longhorn set the platform up for better resilience and growth.
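The PgBouncer setup mentioned above boils down to a small pgbouncer.ini; database names and pool sizes here are illustrative:

```ini
; pgbouncer.ini excerpt — transaction pooling in front of PostgreSQL
[databases]
diagit = host=127.0.0.1 port=5432 dbname=diagit

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction      ; reuse server connections per transaction
max_client_conn = 500        ; many app clients...
default_pool_size = 20       ; ...multiplexed onto few server connections
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Transaction pooling lets hundreds of short-lived app connections share a small, stable pool of real PostgreSQL connections, which removed one of the platform's bottlenecks.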
Network Research Intern
My early career started with a four-month research internship focused on network simulation for a new mode of cellular network implementation. Working in Germany, I used NS-3 (Network Simulator 3) to model and evaluate cellular setups, which gave me a solid foundation in network protocols, simulation workflows, and research-driven development.
This internship set the tone for the rest of my path: learning by doing, dealing with complex systems in a controlled environment, and delivering results on a timeline.
What I did:
- Designed and ran NS-3 simulation scenarios for cellular network behaviour and performance.
- Documented assumptions, parameters, and results for reproducibility and handover.
- Collaborated with the team on interpreting results and proposing improvements.