Portfolio Company Careers

Discover opportunities across our network of values-driven companies!
Sovereign’s Capital

Senior Data Engineer, Integrity Data Platform



Data Science
Petaling Jaya, Selangor, Malaysia
Posted on Monday, July 1, 2024

Company Description

Life at Grab

At Grab, every Grabber is guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles - the 4Hs: Heart, Hunger, Honour and Humility. These principles guide and help us make decisions as we work to create economic empowerment for the people of Southeast Asia.

Job Description

Get to know our Team:

The Trust team is the custodian of integrity at Grab. We build cutting-edge solutions designed to provide robust fraud detection services over petabyte-scale datasets. Our platform leverages the latest advancements in machine learning and artificial intelligence to help businesses minimize the risk of fraud and maintain a secure environment. We're committed to building a diverse team of passionate and talented professionals who are dedicated to shaping the future of fraud detection technology, and to preventing risks such as account takeover, chargebacks, and fake orders with fully automated solutions.

Get to know the Role:

Data Engineers at Grab get to work on one of the largest and fastest-growing datasets of any company in Southeast Asia. We operate in a challenging, fast-paced, and ever-changing environment that will push you to grow and learn.

As a Senior Data Engineer for the Integrity Data Platform, you’ll be at the forefront of our day-to-day protection systems, finding ways to derive useful signals from petabytes of raw data in both offline batch and online streaming systems. This is an opportunity to explore one of the richest datasets in Southeast Asia and derive signals that can drive measurable impact.

The day-to-day activities:

  • Envision and build end-to-end data pipelines that generate invaluable signals used in both real-time ML models and rules

  • Work on large-scale big data systems, leveraging data processing frameworks like Spark and Flink to continuously enhance platform security

  • Ingest data from both streaming systems like Kafka and batch systems like Hadoop and build highly performant ETL jobs

  • Design and implement scripts, ETL jobs, data models, etc.

  • Collaborate closely with data scientists, analysts and machine learning engineers to create innovative solutions for fraud detection and prevention.

  • Coordinate with various stakeholders to understand the end-to-end business requirements

  • Participate in technical and product discussions, code reviews, and on-call support activities


The Must-Haves

  • Bachelor's degree in Analytics, Data Science, Mathematics, Computer Science, Information Systems, Computer Engineering, or a related technical field

  • At least 4 years of experience in Big Data applications

  • Ability to work in a fast-paced agile development environment

  • Experience with Big Data frameworks such as Hadoop, Spark, Flink, etc.

  • Strong knowledge of and fluency in SQL, preferably with an MPP OLAP database

  • Knowledge of statically typed programming languages such as Java, Scala, Go, or Rust

  • Ability to drive initiatives and work independently, while being a team player who can liaise with various stakeholders across the organization

  • Excellent written and verbal communication skills in English, and a strong willingness to communicate and coordinate with others from different cultural and language backgrounds

Good to have:

  • Experience with stream processing technologies such as Flink, Spark Streaming, or Kafka

  • Experience in handling large data sets (multiple PBs) and working with both structured and unstructured datasets

  • Knowledge of cloud systems like AWS, Azure, or Google Cloud Platform

  • Familiarity with tools in the Hadoop ecosystem, especially Presto and Spark

  • Deep understanding of databases and engineering best practices, including error handling and logging, system monitoring, building human-fault-tolerant pipelines, knowing how to scale up, continuous integration, database administration, data cleaning, and ensuring deterministic pipelines

  • Experience working in modern cloud-native environments like Kubernetes is also a plus

Additional Information

Our Commitment

We recognize that with these individual attributes come different workplace challenges, and we will work with Grabbers to address them in our journey towards creating inclusion at Grab for all Grabbers.