Portfolio Company Careers

Discover opportunities across our network of values-driven companies!
Sovereign’s Capital

Senior Data Engineer



Data Science
Petaling Jaya, Selangor, Malaysia
Posted on Wednesday, June 26, 2024

Company Description

Life at Grab

At Grab, every Grabber is guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles: the 4Hs of Heart, Hunger, Honour and Humility. These principles guide and help us make decisions as we work to create economic empowerment for the people of Southeast Asia.

Job Description

Get to know our Team:

The Mobility team is an established team responsible for the ride-hailing module in the Grab app. We make an impact by continuously improving the core transport booking experience. Our team is made up of talented and diverse engineers. If you are looking for an opportunity to make transport booking seamless and effortless for millions of users, then you should join our team!

We are working on high-throughput, real-time distributed systems that use sophisticated machine learning techniques to serve hundreds of millions of requests per day. Our mission is to offer best-in-class products and experiences to our passengers to increase adoption and engagement of our services.

We are a distributed team with members spread across Singapore, Malaysia, China, and Vietnam. We communicate in English, both spoken and written. Our team has direct end-user contact and a direct impact on Grab's bottom line.

Get to know the Role:

Data Engineers at Grab get to work on one of the largest and fastest-growing datasets of any company in Southeast Asia. We operate in a challenging, fast-paced and ever-changing environment that will push you to grow and learn. You will be involved in various areas of Grab’s Data Ecosystem, including reporting & analytics, data infrastructure, and various other data services that are integral parts of Grab’s overall technical stack.

Our team is now looking for experienced data engineers to design and enhance our data warehouse systems. The Data Engineer plays a key role within the data warehouse team in developing these systems, working closely with stakeholders from multiple lines of business (LOBs).

The day-to-day activities (Duties and Responsibilities):

  • Participate in all aspects of developing a data warehouse system: design, develop, test, deploy and support a scalable and flexible data warehouse system that can integrate with multiple LOBs

  • Liaise with Product, Business, Design and Engineering stakeholders to define and review technical specifications

  • Design and implement scripts, ETL jobs, data models, etc.

  • Provide horizontal, organization-wide visibility through metric measurements and insights across regions, functions and teams

  • Develop and uphold best practices with respect to change management, documentation and data protocols

  • Identify system and application metrics, develop dashboards, and set up alerts for metrics and thresholds

  • Participate in technical and product discussions, code reviews, and on-call support activities


The Must-Haves

  • Bachelor's degree in Analytics, Data Science, Mathematics, Computer Science, Information Systems, Computer Engineering, or a related technical field

  • At least 3-4 years of experience developing data warehouse and business intelligence solutions

  • Sound knowledge of data warehousing concepts, data modeling/architecture, and SQL

  • Ability to work in a fast-paced agile development environment

  • Knowledge of programming languages such as Java, Scala, Python, etc.

  • Understanding of performance, scalability and reliability concepts

  • Experience with Big Data frameworks such as Hadoop, Spark, etc.

  • Experience with developing data solutions on AWS

  • Ability to drive initiatives and work independently, while being a team player who can liaise with various stakeholders across the organization

  • Excellent written and verbal communication skills in English

Good to have:

  • Experience in handling large data sets (multiple PBs) and working with structured, unstructured and geographical datasets

  • Good experience in handling big data within a distributed system and knowledge of SQL in distributed OLAP environments

  • Knowledgeable about cloud systems like AWS, Azure, or Google Cloud Platform

  • Familiar with tools within the Hadoop ecosystem, especially Presto and Spark

  • Good experience with programming languages like Python, Go, Scala, Java, or scripting languages like Bash

  • Experience with stream processing technologies such as Kafka, Spark Streaming, etc.

  • Deep understanding of databases and engineering best practices, including handling and logging errors, monitoring the system, building human-fault-tolerant pipelines, understanding how to scale up, addressing continuous integration, knowledge of database administration, maintaining data cleaning, and ensuring a deterministic pipeline