General Information

City
Cebu
State/Province
Central Visayas (Region VII)
Country
Philippines
Department
IM ENGINEERING
Date
Thursday, February 26, 2026
Working time
Full-time
Ref#
20038344
Job Level
Individual Contributor
Job Type
Experienced
Job Field
IM ENGINEERING
Seniority Level
Associate

Description & Requirements

About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we’ve expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today’s global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.


We are seeking a highly organized, analytical, and forward-thinking AI Test Lead/Coordinator to manage and execute testing activities for AI-enabled solutions across the enterprise. This role combines leadership in AI test strategy with hands-on coordination of AI testing workflows.

As AI becomes increasingly embedded into business systems, this role is responsible for ensuring that all AI components — machine learning models, predictive analytics, and generative AI features — are tested accurately, safely, and responsibly. The AI Test Lead/Coordinator will define test approaches, prepare and validate datasets, coordinate test cycles, and manage defect triage across multiple teams including Data Science, Engineering, QA, and Business stakeholders.

This is a newly established role designed to bring structure, governance, and quality assurance to AI testing efforts across high-impact projects.

KEY RESPONSIBILITIES

AI Testing Strategy & Governance

  • Own the end-to-end AI testing strategy, including methodology, standards, and responsible AI guidelines.
  • Define testing coverage for AI systems, including accuracy, robustness, bias, drift, explainability, and data quality.
  • Ensure consistent understanding and adoption of AI testing processes across all participating teams.

Test Cycle Management

  • Plan and execute all AI testing cycles, including environment readiness, test data preparation, scenario creation, and model validation activities.
  • Coordinate schedules and deliverables across Data Science, Engineering, QA, and Business teams to ensure timely completion.
  • Ensure appropriate tools and frameworks are available for AI model testing and monitoring.

Model & Defect Management

  • Establish and maintain model defect reporting and AI issue-tracking procedures.
  • Lead AI defect triage sessions across Data Science, Engineering, and business stakeholders.
  • Validate model outputs and investigate data anomalies, drift patterns, and unexpected behavior.

UAT & Business Collaboration

  • Lead UAT support for AI-driven features in close partnership with business SMEs.
  • Ensure readiness, alignment, and clear communication during UAT execution for AI workflows.
  • Translate model behavior and test results into business-friendly insights.

Vendor & Tool Oversight

  • Review and approve AI vendor testing plans, validation documentation, and timelines.
  • Evaluate AI testing tools and automation solutions for accuracy, explainability, and enterprise suitability.

Reporting & Communication

  • Track and report on AI testing progress, model quality metrics, risks, and compliance indicators.
  • Provide clear documentation and reporting for AI transparency, audit readiness, and responsible AI governance.
  • Communicate testing outcomes and model risks to project and leadership stakeholders.

QUALIFICATIONS

Required

  • Bachelor’s degree in Computer Science, Data Science, IT, Engineering, or related field.
  • 3–5 years of experience in software testing, QA, or data validation roles.
  • Foundational understanding of AI/ML concepts (accuracy, bias, drift, model lifecycle).
  • Experience coordinating testing efforts across multiple teams.
  • Strong analytical, documentation, and problem-solving skills.

Preferred

  • Experience with AI/ML platforms (Azure ML, Databricks, SageMaker, MLflow).
  • Exposure to model evaluation, Python scripting, or AI testing tools.
  • Familiarity with responsible AI practices, explainability techniques, or LLM validation.
  • ISTQB, Agile/Scrum certifications, or relevant AI certifications.

Skills

  • Excellent communication and stakeholder management.
  • Ability to explain complex AI behaviors to non-technical audiences.
  • Strong coordination skills and ability to manage multiple parallel test efforts.
  • Comfort working under pressure in fast-paced, evolving environments.

DELIVERABLES

  • AI Test Strategy and Validation Framework
  • AI Test Plans, Scenarios, and Data Checklists
  • Model Validation Reports and Explainability Summaries
  • AI Defect Logs and Resolution/Triage Reports
  • UAT Support Documentation
  • Weekly AI Testing Status and KPI Reports