Role Description
This role is on-site in Palo Alto, CA.
Nace AI is building the next generation of enterprise intelligence — advanced long-horizon reasoning models and autonomous agents designed to execute complex financial workflows with precision. Our flagship product, Agentic Accounting, enables financial audit, billing audit, and revenue leakage detection with speed and accuracy that traditional systems cannot match.
We are at a pivotal stage in our growth. While we are well funded, our ambitions are much larger: our goal is to become the foundational AI platform for intelligent financial operations across the enterprise.
The work we are doing has meaningful impact across industries, and every hire at Nace AI plays a critical role in shaping the company’s trajectory. This is a unique opportunity to join a high-conviction AI company at an early stage and directly influence its growth.
If building a world-class AI team from the ground up excites you, we’d love to talk.
Minimum Qualifications
Direct experience working with Large Language Models (LLMs) or Vision-Language Models (VLMs) in research or production settings
Strong research background in Natural Language Processing, Machine Learning, or related disciplines, with a focus on language modeling
Proven track record in solving complex problems in language understanding, generation, or multimodal AI using rigorous quantitative methodologies
Demonstrated ability to clearly communicate research findings to diverse technical audiences
Strong programming skills in Python and deep learning frameworks (PyTorch, JAX, or TensorFlow), with experience in distributed training and model optimization
Preferred Qualifications
PhD in Computer Science, Computational Linguistics, or a closely related field, with a focus on language models and adaptive learning systems
Proven research and engineering experience with LLMs/VLMs, particularly in meta-learning or parameter-efficient adaptation, as evidenced by grants, fellowships, patents, internships, or contributions to open-source projects
First-author publications on language models, meta-learning, hypernetworks, or adaptive AI in recognized peer-reviewed conferences (ACL, EMNLP, NeurIPS, ICML, ICLR) or journals
Kaggle experience is a plus.
Preferred Technical Experience
Research expertise in LLM reasoning, hypernetworks, multi-task learning, meta-learning, online continual learning, or the design of novel LLM adaptation methods
Nace.ai
https://nace.ai