This role is open to candidates based in LATAM, Africa, and Eastern Europe. Please note that as this role supports U.S.-based clients, candidates must be available to work during U.S. business hours aligned with the client’s time zone.
Our client is an AI-driven technology company building forecasting and attribution intelligence products powered by high-quality, analytics-ready data. Their teams work cross-functionally across data engineering, analytics, data science, and product to deliver reliable insights that support customer onboarding, reporting workflows, and advanced AI use cases in a fast-moving, execution-focused environment.
Fully remote | 9 AM - 5 PM EST
The Data Engineer will help build and maintain reliable, scalable data pipelines that support analytics, forecasting, and AI-driven products. This is a hands-on, execution-focused contract role centered on data quality, pipeline reliability, and collaboration across analytics, data science, and product teams.
The role operates within a modern analytics engineering stack using Python, dbt, and Dagster, with a strong emphasis on supporting customer onboarding and reporting workflows.
Build and maintain scalable, fault-tolerant ELT pipelines using Python
Orchestrate and monitor data workflows using Dagster
Troubleshoot pipeline failures, performance issues, and data inconsistencies
Monitor pipeline health using observability tools and metrics
Develop, optimize, and document dbt models following analytics engineering best practices
Model clean, analytics-ready datasets for BI, forecasting, and machine learning feature consumption
Contribute to refactoring and improvement of existing data workflows as product needs evolve
Implement and maintain data quality checks and testing strategies
Follow established team standards for SLAs, code quality, and deployments
Collaborate closely with data scientists to support forecasting and AI-driven use cases
Work cross-functionally with analytics and product teams to ensure data meets business and product requirements
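To make the responsibilities above concrete, the pattern of a pipeline step gated by data quality checks can be sketched in plain Python. This is an illustrative sketch only, not the client's code: all function and field names are invented, and in practice the transform would live in dbt and the failure would surface through Dagster's run monitoring.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool

def extract() -> list[dict]:
    # Stand-in for an extraction step (e.g. an API pull or warehouse read).
    return [
        {"order_id": 1, "amount": "19.99"},
        {"order_id": 2, "amount": "5.00"},
    ]

def transform(rows: list[dict]) -> list[dict]:
    # Cast raw string amounts to floats, producing an analytics-ready shape.
    return [{"order_id": r["order_id"], "amount": float(r["amount"])} for r in rows]

def quality_checks(rows: list[dict]) -> list[CheckResult]:
    # Minimal stand-ins for dbt-style tests: not-null, unique, accepted range.
    ids = [r["order_id"] for r in rows]
    return [
        CheckResult("order_id_not_null", all(i is not None for i in ids)),
        CheckResult("order_id_unique", len(ids) == len(set(ids))),
        CheckResult("amount_non_negative", all(r["amount"] >= 0 for r in rows)),
    ]

def run_pipeline() -> list[dict]:
    rows = transform(extract())
    failed = [c.name for c in quality_checks(rows) if not c.passed]
    if failed:
        # In an orchestrator this would fail the run and alert on-call.
        raise ValueError(f"quality checks failed: {failed}")
    return rows
```

The point of the pattern is that bad data fails loudly at the step boundary instead of propagating into downstream BI and forecasting models.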
3+ years of professional experience in data engineering or analytics engineering
Hands-on experience working with dbt (Core or Cloud)
Experience using Dagster or similar orchestration tools
Experience working with cloud data warehouses such as Snowflake, BigQuery, or Redshift
Experience collaborating with Product, Analytics, or Data Science teams
Ability to work independently and deliver results in a contract environment
Strong proficiency in Python, including libraries such as pandas, SQLAlchemy, or psycopg2
Advanced SQL skills, including CTEs, window functions, and query optimization
Familiarity with modern ELT tools such as Airbyte, Fivetran, Meltano, or dltHub
Strong troubleshooting skills for data pipelines, performance, and data quality issues
Ability to follow established standards for reliability, testing, and deployment
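As a rough illustration of the SQL bar described above (CTEs plus window functions), the following hypothetical query ranks each customer's days by revenue. The table and column names are invented for the example, and SQLite stands in for a cloud warehouse:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_id INTEGER, order_day TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, '2024-01-01', 10.0),
  (1, '2024-01-02', 30.0),
  (2, '2024-01-01', 20.0);
""")

# A CTE aggregates revenue per customer per day; a window function
# then ranks each customer's days by that revenue.
query = """
WITH daily AS (
    SELECT customer_id, order_day, SUM(amount) AS revenue
    FROM orders
    GROUP BY customer_id, order_day
)
SELECT customer_id,
       order_day,
       revenue,
       RANK() OVER (PARTITION BY customer_id ORDER BY revenue DESC) AS rnk
FROM daily
ORDER BY customer_id, rnk;
"""
rows = conn.execute(query).fetchall()
# rows[0] is customer 1's highest-revenue day: (1, '2024-01-02', 30.0, 1)
```

The same shape (aggregate in a CTE, rank or window over the aggregate) carries over directly to Snowflake, BigQuery, and Redshift.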
This contract role offers the opportunity to contribute directly to AI-driven products by building high-impact data infrastructure used for forecasting, attribution, and reporting. You’ll work within a modern analytics engineering stack, collaborate closely with technical teams, and gain hands-on exposure to real-world AI and analytics use cases in a fast-paced, product-driven environment.
Fill in the application form
Record a video showcasing your skill sets
Scale Army Careers