Staff Software Engineer, Machine Learning Operations
Date: Aug 28, 2025
Location: Chicago, IL, US, 60661-4555; Remote, IL, US
Company: Grainger Businesses
Work Location Type: Hybrid
Req Number: 323339
About Grainger:
W.W. Grainger, Inc., is a leading broad line distributor with operations primarily in North America, Japan and the United Kingdom. At Grainger, We Keep the World Working® by serving more than 4.5 million customers worldwide with products and solutions delivered through innovative technology and deep customer relationships. Known for its commitment to service and award-winning culture, the Company had 2024 revenue of $17.2 billion across its two business models. In the High-Touch Solutions segment, Grainger offers approximately 2 million maintenance, repair and operating (MRO) products and services, including technical support and inventory management. In the Endless Assortment segment, Zoro.com offers customers access to more than 14 million products, and MonotaRO.com offers more than 24 million products. For more information, visit www.grainger.com.
Compensation:
The anticipated base pay compensation range for this position is $121,500.00 to $202,500.00.
Rewards and Benefits:
With benefits starting on day one, our programs provide choice and flexibility to meet team members' individual needs, including:
- Medical, dental, vision, and life insurance plans with coverage starting on day one of employment and 6 free sessions each year with a licensed therapist to support your emotional wellbeing.
- 18 paid time off (PTO) days annually for full-time employees (accrual prorated based on employment start date) and 6 company holidays per year.
- 6% company contribution to a 401(k) Retirement Savings Plan each pay period, no employee contribution required.
- Employee discounts, tuition reimbursement, student loan refinancing and free access to financial counseling, education, and tools.
- Maternity support programs, nursing benefits, and up to 14 weeks paid leave for birth parents and up to 4 weeks paid leave for non-birth parents.
For additional information and details regarding Grainger’s benefits, please click on the link below:
https://experience100.ehr.com/grainger/Home/Tools-Resources/Key-Resources/New-Hire
The pay range provided above is not a guarantee of compensation. The range reflects the potential base pay for this role at the time of this posting based on the job grade for this position. Individual base pay compensation will depend, in part, on factors such as geographic work location and relevant experience and skills.
The anticipated compensation range described above is subject to change and the compensation ultimately paid may be higher or lower than the range described above.
Grainger reserves the right to amend, modify, or terminate its compensation and benefit programs in its sole discretion at any time, consistent with applicable law.
Position Details:
The Machine Learning Platform & Operations team enables machine learning scientists and engineers at Grainger to continuously develop, deploy, monitor, and refine machine learning models, and to improve the ML software development process. Our mission is to empower Grainger teams to effortlessly build, ship, and scale reliable machine learning, data science, and analytical solutions by proactively listening to our users, anticipating Grainger’s evolving needs, and delivering self-service, quality-first platforms that accelerate business outcomes. You will work with machine learning, data engineering, network, security, and platform engineering teams to build core components of a scalable, self-service machine learning platform that powers customer-facing applications. You will play an important part in developing the tools and services that form the backbone of Grainger’s AI-driven features, leveraging methods in Deep Learning, Natural Language Processing / Generative AI, Computer Vision, and beyond. This is an exciting opportunity to join a team fueling the next phase of Grainger Technology Group’s data- and AI-driven modernization.
Our team is organized around three focus areas:
- Machine Learning Operations & Infrastructure: Build and maintain core infrastructure components (e.g., Kubernetes clusters) and tooling that enable self-service development and deployment of a variety of applications, leveraging GitOps practices.
- Machine Learning Platform: Design and develop user-friendly software systems and interfaces supporting all stages of the machine learning development lifecycle.
- Machine Learning Effectiveness & Enablement: Guide, partner, and consult with machine learning, product, and business domain teams from across the organization to foster responsible, scalable, and efficient development of high-quality ML systems.
For this role, we seek an individual with deep experience administering and maintaining scalable cloud infrastructure components to continue driving the quality and reliability of our Machine Learning Operations & Infrastructure focus area. If you are passionate about improving system reliability and availability and are excited by the challenge of supporting high-scale machine learning systems, this is the role for you.
What you will do:
- Build self-service and automated components of the machine learning platform to enable the development, deployment, and monitoring of machine learning models.
- Design, monitor, and improve cloud infrastructure solutions that support applications executing at scale. Optimize infrastructure spend by conducting utilization reviews, forecasting capacity, and driving cost/performance trade‑offs for training and inference.
- Architect multi‑cluster/region topologies (e.g., with High Availability (HA), Disaster Recovery (DR), failover/federation, blue/green) for ML workloads and lead progressive delivery (canary, auto‑rollback) patterns in CI/CD.
- Ensure a rigorous deployment process using DevOps (GitOps) standards and mentor users in software development best practices. Evolve CI/CD from repo‑local workflows to reusable pipeline templates with quality/performance gates; standardize GitOps objects/guardrails (e.g., Argo CD Applications/Projects, policy‑as‑code).
- Define org‑wide observability standards (logs/metrics/traces schemas, retention) for ML system and model reliability; drive adoption across teams and integrate with enterprise tools (Prometheus/Grafana + Splunk/Datadog).
- Collaborate with the SRE team to define and drive SRE standards for ML systems by setting and reviewing SLOs/error budgets, partnering on org-wide reliability scorecards and improvement plans, and scaling blameless RCA rituals.
- Institute compatibility and deprecation/versioning policies for clusters and runtimes; integrate enterprise SSO (Okta/AD) and define RBAC scopes across clusters and pipelines.
- Own multi‑component roadmap initiatives that measurably move platform & reliability OKRs; communicate major changes and incidents to org‑wide forums and host cross‑team design sessions.
- Partner with teams across the business to enable reliable adoption of ML by hosting internal workshops, publishing playbooks/templates, and advising teams on adopting platform patterns safely.
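The SLO and error-budget work described above can be made concrete with a small calculation. A minimal sketch, assuming a plain availability SLO over a rolling window; the 99.9% target and the numbers below are illustrative, not Grainger's actual targets:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) implied by an availability SLO
    over the given window, e.g. 99.9% over 30 days -> ~43.2 minutes."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo: float, observed_downtime_min: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means the
    budget is blown and releases should slow down per policy."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - observed_downtime_min) / budget

# A 99.9% SLO over 30 days allows ~43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))    # 43.2
# 10 minutes of observed downtime leaves ~77% of that budget.
print(round(budget_remaining(0.999, 10.0), 2))  # 0.77
```

In practice these numbers are computed from SLI queries (e.g., Prometheus recording rules) rather than hand-entered, but the arithmetic behind an error-budget review is exactly this.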
What you will bring:
- Bachelor’s degree and 7+ years of relevant work experience, or equivalent staff-level impact in platform/infrastructure roles.
- Strong software engineering fundamentals and experience developing production-grade software; experience with Python, Go, or a similar language preferred.
- Experience leading org-wide platform initiatives (e.g., multi‑cluster K8s, CI/CD platform evolution, observability standards) and mentoring senior engineers.
- Strong working knowledge of cloud-based services as well as their capabilities and usage; AWS preferred.
- Expertise with IaC tools and patterns to provision, manage, and deploy applications to multiple environments (e.g., Terraform, Ansible, Helm).
- Deep expertise with GitOps practices and tools (Argo CD app‑of‑apps, RBAC, sync policies) as well as policy‑as‑code (OPA/Kyverno) for safe rollouts.
- Familiarity with application monitoring and observability tools and integration patterns (e.g., Prometheus/Grafana, Splunk, Datadog, ELK).
- Deep, hands‑on experience with containers and Kubernetes (cluster operations/upgrades, HA/DR patterns).
- Ability to work collaboratively and empathetically in a team environment.
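The Argo CD "app-of-apps" pattern named above amounts to a parent Application that points at a Git directory of child Application manifests, so new services are onboarded by committing a manifest rather than touching the cluster. A minimal illustration; the repo URL, project name, and paths are hypothetical placeholders, not Grainger's actual setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ml-platform-root           # hypothetical parent "app of apps"
  namespace: argocd
spec:
  project: ml-platform             # AppProject used for RBAC scoping
  source:
    repoURL: https://git.example.com/ml-platform/deployments.git  # placeholder
    targetRevision: main
    path: apps/                    # directory of child Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert manual drift back to Git state
```

Guardrails such as policy-as-code (OPA/Kyverno) and sync windows then apply uniformly to everything the parent manages.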
Bonus:
- Expertise in designing, analyzing, and troubleshooting large-scale distributed systems and/or working with accelerated compute (e.g., GPUs).
- Experience driving machine learning system reliability and awareness of associated requirements (e.g., model/feature drift telemetry, evaluation services, and model‑routing layers integrated with CI/CD).
- Experience building pragmatic Kubernetes extensions (e.g., small CRDs or admission webhooks), helping teams adopt OpenTelemetry to standardize traces, metrics, and logs, and leading safe multi-cluster Kubernetes upgrades with staged rollouts, thorough testing, and clean rollbacks.
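The admission-webhook extension point in the last bullet boils down to answering an AdmissionReview request with an allow/deny verdict. A minimal, dependency-free sketch of that decision logic; the "containers must set resource limits" rule is an illustrative example, not a stated Grainger policy, and a real webhook would serve this over HTTPS behind a ValidatingWebhookConfiguration:

```python
def review_pod(admission_review: dict) -> dict:
    """Build an AdmissionReview response that denies pods whose containers
    lack resource limits. Shapes follow the admission.k8s.io/v1 schema."""
    request = admission_review["request"]
    pod = request["object"]
    missing = [
        c["name"]
        for c in pod["spec"].get("containers", [])
        if not c.get("resources", {}).get("limits")
    ]
    allowed = not missing
    response = {"uid": request["uid"], "allowed": allowed}
    if not allowed:
        response["status"] = {
            "code": 403,
            "message": f"containers missing resource limits: {', '.join(missing)}",
        }
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Keeping the verdict logic as a pure function like this makes the webhook trivially unit-testable, which matters when a bug can block every deployment in the cluster.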
Don’t meet every single qualification? Studies show people are hesitant to apply if they don’t meet all requirements listed in a job posting. If you feel you don’t have all the desired experience, but it otherwise aligns with your background and you’re excited about this role, we encourage you to apply. You could be a great candidate for this or other roles on our team.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex (including pregnancy), national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or expression, protected veteran status or any other protected characteristic under federal, state, or local law. We are proud to be an equal opportunity workplace.
We are committed to fostering an inclusive, accessible work environment, which includes providing reasonable accommodations to individuals with disabilities during the application and hiring process as well as throughout the course of one’s employment. Should you need a reasonable accommodation during the application and selection process, including, but not limited to, use of our website or any part of the application, interview, or hiring process, please advise us so that we can provide appropriate assistance.