Job Summary

Are you passionate about shaping the future of AI by building infrastructure that ensures large language models and AI agents are safe, reliable, and aligned with human values? Red Hat's OpenShift AI team is seeking a senior ML engineer who combines deep technical expertise with a commitment to responsible AI innovation.

As a pivotal contributor to open source projects like Open Data Hub, KServe, TrustyAI, Kubeflow, and llama-stack, you'll be at the forefront of democratizing trustworthy AI infrastructure. These critical open source initiatives are transforming how organizations develop, deploy, and monitor machine learning models across hybrid cloud and edge environments. Your work will directly shape the next generation of MLOps platforms, making advanced AI technologies more accessible, secure, and ethically aligned.

Job Responsibilities

- Architect and implement comprehensive safety systems for LLM deployments, including content filtering, output validation, and alignment techniques
- Design and develop robust guardrailing frameworks that enforce model behavioral boundaries while maintaining performance and user experience
- Lead the development of monitoring systems to detect and mitigate potential model hallucinations, harmful outputs, and alignment drift in production
- Build and maintain evaluation frameworks for assessing model safety, including automated testing pipelines for toxicity, bias, and harmful behavior
- Develop prompt engineering systems and safety layers that ensure reliable and controlled LLM outputs across different use cases and deployment scenarios
- Implement fine-tuning and human preference alignment pipelines with a focus on maintaining model alignment and improving safety characteristics
- Design and deploy systems for LLM output validation, including fact-checking mechanisms and source attribution capabilities
- Lead technical initiatives around model interpretability and transparency, including debugging tools for understanding model decisions
- Collaborate with policy and safety teams to translate safety requirements into technical implementations and measurable metrics

Requirements

- 5+ years of ML engineering experience, with 3+ years specifically working with transformer-based models and LLMs
- Deep expertise in prompt engineering, instruction tuning, or human preference alignment techniques
- Strong background in implementing AI safety mechanisms and guardrails for production LLM systems
- Experience with LLM evaluation frameworks and safety metrics
- Proven track record of building production-grade systems for model monitoring and safety enforcement
- Strong programming skills in Python and experience with modern LLM frameworks (PyTorch, Transformers, etc.)
- Experience implementing content filtering and output validation systems
- Understanding of AI alignment principles and practical safety techniques

The following will be considered a plus:

- Experience with constitutional AI and alignment techniques
- Background in implementing human preference alignment pipelines and fine-tuning large language models
- Familiarity with LLM deployment platforms and serving infrastructures
- Experience with model interpretation techniques and debugging tools for LLMs
- Knowledge of AI safety research and current best practices
- Understanding of adversarial attacks and defense mechanisms for language models
- Experience with prompt injection prevention and input sanitization techniques
- Background in implementing automated testing systems for model safety
- Advanced degree in Computer Science, ML, or a related field with a focus on AI safety

About Red Hat

Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role.
Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Diversity, Equity & Inclusion at Red Hat

Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from diverse backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions of diversity that compose our global village.

Equal Opportunity Policy (EEO)

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com.
General inquiries, such as those regarding the status of a job application, will not receive a reply.