Build automated data quality, anomaly detection, and observability frameworks.
Implement proactive alerting, incident response, and root cause analysis for data issues.
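The kind of automated quality checks described above can be sketched in a few lines of Python. This is an illustrative example only, not Red Hat's implementation: the table schema, thresholds, and alert wiring are hypothetical stand-ins for whatever the real platform uses.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical batch of records, standing in for a table or stream window.
ROWS = [
    {"id": 1, "amount": 120.0, "updated_at": datetime.now(timezone.utc)},
    {"id": 2, "amount": None,  "updated_at": datetime.now(timezone.utc) - timedelta(hours=30)},
]

def check_freshness(rows, column, max_age):
    """Pass only if the newest row is no older than max_age."""
    newest = max(r[column] for r in rows)
    return datetime.now(timezone.utc) - newest <= max_age

def check_completeness(rows, column, min_ratio):
    """Pass only if the non-NULL ratio in the column meets min_ratio."""
    non_null = sum(1 for r in rows if r[column] is not None)
    return non_null / len(rows) >= min_ratio

alerts = []
if not check_freshness(ROWS, "updated_at", timedelta(hours=24)):
    alerts.append("freshness: table is stale")
if not check_completeness(ROWS, "amount", 0.99):
    alerts.append("completeness: too many NULL amounts")

# Each alert would be routed to the proactive alerting / incident-response path.
print(alerts)
```

In a production framework these checks would run on a schedule against the warehouse and emit alerts with enough context (table, check, threshold, observed value) to support root cause analysis.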
Build and Evolve Reliable Data Products
Develop and maintain high-quality, production-grade data products (code + data).
Enforce separation of source-aligned vs. aggregate data products to support domain ownership and governance.
Ensure data products are composable, joinable, and reusable across the organization.
Design AI-Ready, Agentic Data Systems
Build data products optimized for ML models and AI agents, including:
Feature consistency and reuse
Data versioning and lineage
Reproducibility for model training and inference
Enable agentic workflows (ADLC) by:
Defining data contracts and logic in machine-readable specs (MD/DSL)
Ensuring data products are self-describing and agent-consumable
Operationalize Data + AI Workloads
Partner with data scientists to productionize models and AI agents with strong reliability.
Deploy and operate data and AI services on MCP (Microservices, Containers, Platforms) using Kubernetes.
Ensure systems meet high availability, scalability, and low-latency requirements.
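The machine-readable data contracts mentioned above can be pictured as a small spec that both humans and agents validate records against. The sketch below is a hypothetical minimal form, assuming a contract for an "orders" data product; real contracts (e.g., expressed in MD/DSL as the posting notes) would carry far more metadata.

```python
# Hypothetical machine-readable contract for an "orders" data product.
CONTRACT = {
    "name": "orders",
    "version": "1.2.0",
    "fields": {
        "order_id":   {"type": str,   "required": True},
        "amount_usd": {"type": float, "required": True},
        "coupon":     {"type": str,   "required": False},
    },
}

def validate(record, contract):
    """Return a list of contract violations for one record."""
    errors = []
    for name, spec in contract["fields"].items():
        if name not in record or record[name] is None:
            if spec["required"]:
                errors.append(f"missing required field: {name}")
            continue
        if not isinstance(record[name], spec["type"]):
            errors.append(f"wrong type for {name}")
    return errors

# A wrongly typed amount is caught before the record enters the product.
print(validate({"order_id": "A-17", "amount_usd": "12.5"}, CONTRACT))
```

Because the contract is plain data, an AI agent can read it to discover field names and types, which is what makes a data product "self-describing and agent-consumable."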
Maintain rich metadata and catalog entries to support discoverability and reuse.
Promote best practices in reliability engineering across teams.
Drive adoption through well-documented, discoverable, and trusted data products.
Red Hat – Core Business Platforms | Data Platform & Data Products
Red Hat Core Business Platforms is seeking a Data Reliability Engineer (DRE) to help build and operate a reliable, AI-ready data platform that powers both business operations and advanced AI initiatives.
You will work across data pipelines, data products, and AI systems to embed reliability as a first-class concern, enabling both human users and AI agents to confidently consume and act on data.
2+ years of experience in Data Engineering, Software Engineering, SRE, or DRE.
Ability to work ON-SITE in Lowell, Massachusetts.
Strong SQL skills and proficiency in Python, Java, or Scala.
Experience with modern data platforms (Snowflake, Databricks, BigQuery, S3/MinIO).
Solid understanding of:
Data modeling (dimensional, data vault, domain-driven design)
Batch and streaming architectures
Hands-on experience with:
Data quality and observability frameworks
SLIs/SLOs and production operations
Experience with CI/CD, testing, and version control.
Familiarity with Docker, Kubernetes, and cloud-native systems.
Experience with AI/ML systems in production (feature stores, model reliability).
Familiarity with agentic frameworks or AI agents (e.g., LangChain, MCP integrations).
Experience with streaming technologies (Kafka, Spark Streaming).
Experience with data governance platforms (catalogs, lineage).
Background in SRE/DRE/AIRE operating models.
Master’s degree in Computer Science, Engineering, or related field.
About Red Hat
Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies.
Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies.
We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.
What You Will Bring
Strong communication skills and ability to drive a culture of reliability and ownership.
Bonus Points
The salary range for this position is $110,940.00 - $177,280.00.
Pay Transparency
Red Hat determines compensation based on several factors including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity.
Annual salary is one component of Red Hat’s compensation package.
This position may also be eligible for bonus, commission, and/or equity.
For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.
Benefits
● Comprehensive medical, dental, and vision coverage
● Flexible Spending Account - healthcare and dependent care
● Health Savings Account - high deductible medical plan
● Retirement 401(k) with employer match
● Paid time off and holidays
● Paid parental leave plans for all new parents
● Leave benefits including disability, paid family medical leave, and paid military leave
● Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!
Note: These
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer.
Contact
If you need assistance completing our online job application, email application-assistance@redhat.com.
Additional details
This role is focused on ensuring that data is trustworthy, observable, and resilient by design.
What You Will Do
Own Data Reliability End-to-End
Define and operate data SLIs/SLOs (freshness, completeness, accuracy, availability).
Continuously improve system reliability through post-incident reviews and systemic fixes.
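Operating data SLIs/SLOs like the ones above usually means measuring an indicator over a window and tracking the remaining error budget against the objective. The sketch below is a simplified illustration; the run log and the 99.5% target are invented for the example, not taken from the role.

```python
# Hypothetical pipeline-run log for the window: True = run delivered on time.
RUNS = [True] * 289 + [False] * 2  # 291 runs observed

SLO_TARGET = 0.995  # example availability/freshness objective

sli = sum(RUNS) / len(RUNS)                      # measured SLI for the window
allowed_failures = len(RUNS) * (1 - SLO_TARGET)  # total error budget, in runs
budget_left = allowed_failures - RUNS.count(False)

# A negative remaining budget means the SLO is breached for this window,
# which would trigger incident response and a post-incident review.
print(f"SLI={sli:.4f}, error budget remaining={budget_left:.2f} runs")
```

The same shape works for completeness or accuracy SLIs; only the definition of a "good" event changes.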
Governance, Security, and Trust
Classify and tag data assets to enforce responsible usage and compliance.
Apply masking, access controls, and row-level security (RLS) at the data product layer.
Enable InnerSource & Platform Adoption
Contribute to a high-velocity InnerSource model for data product development.
Actual offer will be based on your qualifications.
Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role.
Red Hatters are encouraged to bring their best ideas, no matter their title or tenure.