Portfolio Company Careers

Discover opportunities across our network of values-driven companies!
Sovereign’s Capital

Senior Data Engineer



Data Science
Petaling Jaya, Selangor, Malaysia
Posted on Wednesday, June 26, 2024

Company Description

Grab is Southeast Asia’s leading superapp. We are dedicated to improving the lives of millions of users across the region by providing them everyday services such as deliveries, mobility, financial services, enterprise services and others. More than that, we provide the opportunity for them to have a better life. And that aspiration starts inside Grab because we believe in a seamless blend of work and home life, making every aspect of life better for all.

Guided by The Grab Way, which spells out our mission, how we believe we can achieve it, and our operating principles—the 4Hs: Heart, Hunger, Honour and Humility—we work to create economic empowerment for the people of Southeast Asia. With our unwavering commitment to our values, we believe that we're more than a service provider; we're agents of positive change.

Job Description

As a Senior Data Engineer, you will be responsible for architecting, building, and optimizing scalable data pipelines and systems to support the organization's data needs. You will collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, to ensure seamless data integration, processing, and delivery. Leveraging your expertise in big data technologies, you will drive the evolution of our data architecture, ensuring reliability, scalability, and performance.

Key Responsibilities:

  • Design, develop, and maintain robust, scalable data pipelines and ETL processes to extract, transform, and load data into our data warehouse.

  • Collaborate with stakeholders to understand data requirements and translate them into technical solutions, ensuring alignment with business objectives.

  • Architect data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.

  • Implement best practices for data quality, governance, and security to maintain the integrity and confidentiality of data assets.

  • Optimize performance and scalability of data infrastructure, leveraging cloud services and distributed computing technologies.

  • Evaluate and implement new tools, frameworks, and technologies to enhance the capabilities of our data platform.

  • Mentor junior members of the team, providing guidance on technical design, coding standards, and best practices.

  • Stay current with industry trends and advancements in data engineering, contributing to continuous improvement and innovation within the organization.


The Must-Haves

  • Bachelor's degree in Computer Science, Engineering, or related field.

  • 5+ years of experience in data engineering, with a proven track record of designing and implementing complex data solutions.

  • Proficiency in programming languages such as Python or Scala, with experience in data processing frameworks like Spark or Presto.

  • Hands-on experience with orchestration platforms such as Airflow.

  • Understanding of data formats such as Avro, Parquet, Delta, and ORC.

  • Strong understanding of relational and NoSQL databases, data modeling, and SQL query optimization.

  • Experience with stream processing technologies (e.g., Kafka, Flink) and real-time data processing pipelines is a plus.

  • Excellent problem-solving skills, with the ability to troubleshoot and debug complex data issues.

  • Effective communication skills, with the ability to collaborate with cross-functional teams and present technical concepts to non-technical stakeholders.

Additional Information

The Nice-to-Haves

  • Understanding of Go and relational databases such as PostgreSQL

  • Understanding of Python

  • Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud

  • Understanding of big data systems such as Hive, Spark, and Presto

  • Understanding of PySpark or Spark with Scala

  • Knowledge of Kafka and stream processing (e.g., Flink)

  • Familiarity with observability tools such as Splunk, Kibana, and/or Datadog

  • Understanding of AWS concepts and Terraform