Abdullah Khan

Architecture // Delivery // Cloud Data

Senior data engineer focused on reliable platforms, sound architecture, and pragmatic delivery.

Senior Data Engineer based in Toronto with 5+ years delivering ETL, lakehouse, analytics, and data platform systems across banking, insurance, and startup products. Much of my earlier delivery work was in Python, and over the last 1.5 years at Quantexa I expanded into Scala, Quantexa delivery, and certification-backed implementation work.

My work has centered on picking the right implementation approach for each environment, from Python-first pipelines to Scala delivery on top of a licensed platform under regulatory constraints.

Based in Toronto, available for local and remote delivery.

Impact Snapshot

Delivery Range

Hands-On

Strong implementation background across ETL, orchestration, APIs, analytics pipelines, and client-facing delivery.

Quantexa + Scala

1.5 Years

Learned Scala and the platform in-role at Quantexa, became productive in delivery, and completed the Quantexa Data Engineer certification.

CI/CD + Deployments

Platform Fluent

Comfortable working with Docker, Kubernetes, Jenkins, ArgoCD, and Azure DevOps release workflows.

Architecture

Systems + Data

Comfortable across system design, data architecture, cloud delivery patterns, and software engineering tradeoffs.

Architecture Thinking

Architecture Snapshot

Enterprise Lakehouse Delivery Map

A clean ingestion-to-governance flow showing the kind of reusable enterprise delivery patterns I build for regulated teams.

[Diagram: simple enterprise data lakehouse architecture]

Architecture Snapshot

Quantexa Implementation Workflow

A compact view of how insurance and banking data can be prepared and delivered into Quantexa workflows when the platform is already in place.

[Diagram: simple Quantexa implementation workflow]

Featured Case Studies


Enterprise · National Bank of Belgium

IDF (Ingestion Delivery Framework)

Primary engineer for a centralized ingestion framework enabling the bank's cloud lakehouse teams to onboard sources faster with stronger governance.

  • Azure
  • Databricks
  • PySpark
  • SparkSQL
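As a hedged illustration of the kind of reusable onboarding pattern a framework like this centralizes, the sketch below models metadata-driven source registration with deterministic medallion-layer paths. All names, fields, and storage paths are hypothetical, chosen for the example, and not the bank's actual code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SourceConfig:
    """Onboarding metadata for one source system (illustrative fields only)."""
    name: str       # logical source name, e.g. "claims"
    fmt: str        # landing format, e.g. "csv" or "parquet"
    schedule: str   # cron-style trigger for the ingestion job
    pii: bool       # flags sources that need masking in the silver layer


def lake_path(layer: str, cfg: SourceConfig) -> str:
    """Build a deterministic medallion-layer path so every onboarded
    source lands in the same governed location convention."""
    if layer not in ("bronze", "silver", "gold"):
        raise ValueError(f"unknown layer: {layer}")
    # Hypothetical ADLS Gen2 container; real paths come from environment config.
    return f"abfss://lake@storage.dfs.core.windows.net/{layer}/{cfg.name}"


claims = SourceConfig(name="claims", fmt="csv", schedule="0 2 * * *", pii=True)
print(lake_path("bronze", claims))
```

The point of the pattern is that onboarding a new source becomes a config entry plus a review of governance flags, rather than a new bespoke pipeline.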

Enterprise · Warburg Pincus

D365 High-Throughput ETL Platform

Designed high-throughput ingestion for 100+ Dynamics 365 F&O tables into Synapse and Delta Lake under strict delivery windows.

  • Azure Synapse
  • PySpark
  • Azure Functions
  • Event Grid
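A hedged sketch of the event-driven piece of this stack: a blob-created event (in the shape Event Grid typically emits) is routed to a per-table queue ahead of a batched merge. The event subject, paths, and table names here are illustrative assumptions, not the production implementation.

```python
def table_from_event(event: dict) -> str:
    """Extract the D365 table name from the blob path carried in the
    event's `subject` field (hypothetical landing-zone layout)."""
    # e.g. "/blobServices/default/containers/landing/blobs/d365/SalesTable/part-0001.csv"
    parts = event["subject"].split("/")
    return parts[parts.index("d365") + 1]


def route(event: dict, queues: dict) -> str:
    """Queue the event under its table so a downstream job can merge
    files per table in batches instead of one merge per file."""
    table = table_from_event(event)
    queues.setdefault(table, []).append(event)
    return table


evt = {"subject": "/blobServices/default/containers/landing/blobs/"
                  "d365/SalesTable/part-0001.csv"}
queues: dict = {}
route(evt, queues)
```

Batching per table is what keeps 100+ tables inside tight delivery windows: the expensive Delta merges run once per table per window, not once per arriving file.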

Client Reviews

"Abdullah is an extremely responsive, thorough, and talented seller. I will work with him again."

Upwork client

"Abdullah is very committed and dedicated. He knows Python very well and delivers ahead of time in the most efficient way."

Fiverr client


FAQ

What kind of data engineering work do you specialize in?

I specialize in ETL/ELT, lakehouse design, system and data architecture, and enterprise-grade ingestion frameworks on Azure and Databricks.

How do Python and Scala fit into your experience?

Much of my earlier engineering delivery work was in Python across ETL, orchestration, APIs, and analytics. More recently at Quantexa, I added Scala where the role required it, became comfortable delivering with it, and completed the Quantexa Data Engineer certification.

Do you have hands-on experience with Quantexa implementation and delivery?

Yes. I have worked on Quantexa delivery across insurance claims, policy, and watchlist datasets for AML and investigative use cases, plus banking customer data for KYC-focused workflows. My role is in implementing and supporting Quantexa-based delivery, not building entity resolution systems from scratch.

Are you comfortable with deployment and CI/CD workflows?

Yes. I regularly work with deployment and platform teams and understand how Docker, Kubernetes, Jenkins, ArgoCD, and Azure DevOps workflows fit into delivery and release processes.

Do you work with startups or only enterprise clients?

I work with both. Enterprise experience helps with reliability and governance, while startup projects benefit from faster iteration and practical scope management.

How quickly can you ramp up on a new project?

Typical ramp-up is 1-2 weeks for architecture and data flow clarity, followed by phased delivery with measurable milestones and risk checkpoints.

How do you think about developer productivity?

I adopt newer engineering tools when they improve speed, quality, or feedback loops. The goal is better delivery and better decisions, not attachment to any single workflow.

Ready To Build

Need a reliability-first data engineer for your next delivery phase?

I work with enterprise teams and growth-stage startups across architecture, implementation, technical leadership, and practical AI-augmented execution.