Get to Know the Team
The Data Science (GrabMaps) team builds map intelligence that powers core Grab services like transport allocation, logistics, and pricing. You'll work on problems such as place search and recommendation, data curation, travel time estimation, traffic forecasting, routing, and positioning.
The team invests in deep research and scalable models, and you'll have room to explore new ideas that can shape how millions of users experience Grab's products.
Get to Know the Role
As an Applied Scientist for GrabMaps, you'll design, build, and ship machine learning and generative AI solutions that directly impact how Grab understands and uses map data. You'll work across the full lifecycle: framing problems with stakeholders, developing models (including LLMs and multi-modal models), and deploying them into production.
You'll focus on using large models (LLMs, vision, and multi-modal models) to improve search, recommendation, and content understanding around places and road networks.
The Critical Tasks You Will Perform
You will:
- Translate business problems in mapping, search, and recommendation into clear machine learning problems, define success metrics, and explain your approach and results to both technical and non-technical stakeholders.
- Own end-to-end delivery of small to medium-scope ML/LLM features or services, from data exploration and model design through training, evaluation, deployment, and post-launch monitoring.
- Develop and optimize deep learning models, including LLMs and generative and multi-modal models, to solve use cases such as POI understanding, relevance ranking, content generation, and map data quality.
- Fine-tune, evaluate, and adapt state-of-the-art LLMs (e.g., GPT, Llama, Qwen) and other foundation models using supervised fine-tuning and RL-based methods, including prompt and instruction design for downstream tasks.
- Architect and implement agentic AI workflows (for example, with LangChain, LlamaIndex, or function-calling APIs), including tool integration, workflow chaining, and multi-agent coordination for real-time or near real-time applications.
- Build and maintain scalable pipelines for data preprocessing, feature extraction, model training, fine-tuning, automated evaluation, and model versioning, working with ML engineers and software engineers to run them in production.
- Optimize model serving for latency, throughput, and cost using techniques such as model compression, quantization, GPU/TPU acceleration, and distributed inference, and integrate with serving frameworks like TorchServe, Triton, or Ray Serve.
- Review relevant research in search/recommendation, NLP/LLMs, and computer vision, run targeted experiments, and bring promising ideas into production prototypes or features.
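To give a flavor of the agentic-workflow work described above, here is a minimal, library-free sketch of a tool-calling loop. Everything in it (the tool names, the keyword router, the canned outputs) is an illustrative assumption, not Grab's actual stack; in practice an LLM with function-calling would do the routing and the tools would hit real routing and place-search services.

```python
# Illustrative sketch of an agentic tool-calling loop (hypothetical tools,
# not Grab's real services). A keyword rule stands in for an LLM router so
# the example stays self-contained.

def eta_tool(origin: str, dest: str) -> str:
    # Hypothetical travel-time tool; a real one would query a routing service.
    return f"ETA {origin}->{dest}: 12 min"

def poi_tool(query: str) -> str:
    # Hypothetical place-search tool; a real one would query a POI index.
    return f"Top POI for '{query}': (placeholder result)"

TOOLS = {"eta": eta_tool, "poi": poi_tool}

def route(query: str) -> str:
    """Dispatch a user query to a tool.

    An LLM with function-calling would normally choose the tool and fill
    its arguments; a keyword check stands in for that decision here.
    """
    if "eta" in query.lower() or "how long" in query.lower():
        return TOOLS["eta"]("A", "B")
    return TOOLS["poi"](query)
```

A production version of this loop would also chain tool outputs back into the model's context (workflow chaining) and coordinate multiple such agents, as the responsibilities above describe.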