Given the sensitive nature of media rights and content, how do you define the "human in the loop" at TelevisaUnivision?
Rodriguez: Governance and trust are critical. We have established strict decision-authority guardrails. The general rule is that there can be no autonomous decision-making on greenlighting content, talent contracting, pricing, ad inventory allocation, carriage terms, or regulatory compliance determinations. AI can recommend, summarize, or flag, but a human must make the final approval.
We also have data boundary guardrails. We prohibit AI access to non-public deal economics, carriage agreements, sports rights, M&A activity, and sensitive advertiser data, to prevent leakage and regulatory exposure. AI informs decisions, but it never replaces accountability.
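The two guardrail categories Rodriguez describes — a decision-authority gate and a data boundary — can be sketched as a simple policy check. This is a hypothetical illustration only; the category names, function, and data classes below are assumptions, not TelevisaUnivision's actual system.

```python
from dataclasses import dataclass, field

# Illustrative restricted categories (assumed names, per the interview).
RESTRICTED_DECISIONS = {
    "content_greenlight", "talent_contract", "pricing",
    "ad_inventory_allocation", "carriage_terms", "regulatory_compliance",
}
RESTRICTED_DATA = {
    "deal_economics", "carriage_agreements", "sports_rights",
    "mna_activity", "advertiser_data",
}

@dataclass
class AIRecommendation:
    decision_type: str
    summary: str
    data_sources: set = field(default_factory=set)

def gate(rec: AIRecommendation, human_approved: bool = False) -> str:
    """Apply both guardrails: data boundary first, then decision authority."""
    # Data boundary guardrail: AI may not touch non-public categories at all.
    leaked = rec.data_sources & RESTRICTED_DATA
    if leaked:
        raise PermissionError(f"AI used restricted data: {sorted(leaked)}")
    # Decision-authority guardrail: AI recommends; a human makes the call.
    if rec.decision_type in RESTRICTED_DECISIONS and not human_approved:
        return "PENDING_HUMAN_APPROVAL"
    return "APPROVED"
```

The key design point mirrors the quote: the AI output is never discarded, but in restricted decision categories it can only ever reach a "pending" state until a human explicitly signs off.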
You've argued that Legal shouldn't be the sole owner of AI governance. How does that work in practice?
Rodriguez: Governance cannot just live in the legal function, or we become a gate instead of an enabler. AI risk is operational, not just legal. By the time Legal sees a problem, the operational damage may already be done.
I believe Legal should shape the rules, but the business functions must own the execution. We need a shared accountability model, with ownership distributed across the enterprise. It works like this:
- Legal sets the boundaries, defining the risk appetite and regulatory interpretation
- HR drives the human adoption, managing training and policy enforcement
- Finance owns the internal controls, ensuring the reliability of AI outputs for reporting
- IT secures the perimeter, managing architecture and vendor security
We set the frame, but the business needs to run the systems.