Leading Sky's Digital Ethics Network: forecasting AI, privacy, and trust risks in Sky Live, a camera-first product in the living room, then turning them into design decisions teams could act on.
01 · The moment that made this necessary
Sky Live is a camera-first product operating in people's living rooms, processing video, audio, and biometric data to power real-time experiences. The ethical stakes were unusually high: trust, privacy, and responsible AI weren't optional considerations; they were foundational to whether customers adopted the product at all.
The trigger. Customer sentiment ahead of launch. A 2021 YouGov trust study placed Sky in the low 30s on protecting privacy for Face ID technology, behind Apple, Samsung, Google, and Netflix. Pre-launch quant research in 2022 ranked privacy among the top reasons prospective non-buyers gave for passing on Sky Live, with around one in four citing it explicitly. Privacy was already measurable as a brake on purchase intent, not a theoretical concern.
The intervention. Drawing on my Oxford coursework in ethics and AI, I joined Sky's Digital Ethics Network (DEN), an employee-led group sponsored by Lucy Thomas, Chief Data Officer, whose mission was to bring ethical thinking and design to the use of data and technology at Sky. I combined two existing tools, Omidyar's Ethical Explorer and Design Ethically's Layers of Effect, into a single Miro workshop board built to forecast ethical impact across Sky Live.
What would have been missed. Without this work, a set of initiatives would not have existed: the ethical vetting of partners onboarded into Sky Live (NEX, Stingray, and others using AI); the decision to adopt Google's ML Kit Pose Detection algorithm partly for its ethical track record; and a series of UX changes that improved how customers understood and controlled what Sky Live did with their data.
02 · Role
I was Lead UX Designer on New Products at the time, and the design lead in the Digital Ethics Network alongside Diana Spehar (Data Ethics Lead), Angelica Stephenson (Senior Tech Product Manager), and Mario Feegrade (Head of Product). Together we wrote the set of initiatives the group recommended. My role was distinct: the only designer in a group otherwise led by data and product leadership, holding the UX perspective in data-heavy conversations and making sure the initiatives translated into something design and product teams could actually ship.
I personally created the Miro workshop board, combining Ethical Explorer and Layers of Effect into a single cross-team tool, and I led the workshops across Sky Live teams. Within the five areas of concern surfaced, I owned the Privacy and Inclusivity threads end to end, from framing the risk to shaping the mitigations that landed in product. The format and the shared language for ethical risk-mapping did not exist at Sky Live before this work. Three asks came out of those workshops and were escalated to the Beyond TV directors' board, where they became active initiatives on Sky Live.
03 · What I built
The centrepiece was a hybrid workshop designed and facilitated on Miro, built around three named stages: lightning talks framing Cherry (Sky Live's internal codename) from multiple perspectives; a structured risk-mapping block using Ethical Explorer and Layers of Effect tailored to Sky Live; and a "What now?" stage that converted each named risk into actionable insight, a design mitigation, and an owner.
Lightning talks. Short framing talks giving every attendee the same picture of Cherry: what it is, what data it touches, what the customer perspective looks like, what the known constraints are.
Risk mapping. Teams ran each area of concern through Ethical Explorer to name the risk, then Layers of Effect to forecast first-, second-, and third-order consequences on customers, the business, and society.
"What now?" Every named risk came out with a committed action, a named owner, and the people who still needed to be involved; no risk left the workshop unassigned. Each risk's write-up captured four things.
The honest gap. What the team could not yet answer about the risk, the dataset, the behaviour, or the customer.
The committed action. A concrete design, product, or engineering step, not a statement of intent.
The approvers. The people who could approve or block the action, named explicitly so the team knew who to go to.
The cross-functional partners. The colleagues in legal, research, engineering, and propositions needed to make the mitigation real.
Privacy. A camera-first device in the home introduced real risks: surveillance, stalking, hacking. These needed explicit design mitigations, not just policy statements. I owned this thread from risk framing through to the "Your privacy protected" onboarding and hardware mute-button messaging.
Inclusivity. Face detection that relies on machine learning is fallible and risks excluding users. Children's rights, positive self-image, and non-violent values needed active consideration. I owned this thread through to the no-beautifying-filters statement in Video Booth.
The remaining threads, held across the DEN: the consent gap between what the service did and what customers understood it to do; Watch Together risks around inappropriate broadcasts and unsupervised children; and model bias from insufficiently diverse training data.
The toolkit, built on Ethical Explorer and Layers of Effect, became a Miro-based template for teams to adopt in their own projects. It served as a shared tool to spark discussion, align stakeholders, and surface the ethical impact of design decisions before they shipped. A workshop-outcomes deck captured the risks and initiatives for ongoing reference.
A series of talks across Sky teams covered how designers should approach AI responsibly: practical AI use cases, machine-learning concepts, model cards, and the UX challenges of AI-driven products, all pre-GenAI.
04 · How teams used it
There was no formal measurement framework in place at the time, something I'd change if I ran this again. The directional signals we did have: the workshop ran five times, mapped five areas of concern spanning design, product, and engineering, and two other design teams adopted the framework to measure the ethical impact of their own products.
The 26 attendees were senior by design: heads of department, heads of product management, lead interaction designers, senior hardware engineers, and lead software engineers from across Beyond TV, alongside responsible business, propositions, and data ethics leads. The room could approve action, not just discuss it.
The Sky Live onboarding was rewritten to explain the hardware privacy button more clearly. It traces back to the designing-for-trust risk surfaced in the workshop and the "Your privacy protected" statement it seeded.
The family-focused Video Booth app, built around funny filters, shipped with an explicit statement that no beautifying filters were used on faces. A direct response to the inclusivity risks around children's self-image surfaced in the workshop.
05 · What I'd do differently
Two things I'd change. First, the Digital Ethics Network was volunteer-led, run on top of day jobs. I'd make it an official project with dedicated time, especially for the engineering partners whose involvement mattered most and was hardest to secure under voluntary terms.
Second, I'd run the workshop at the start of Sky Live projects, not towards the end. By the time we were forecasting risks, some of the decisions that carried the most ethical weight (data collection, algorithm choice, partner selection) were already locked in. Running the workshop pre-commit would have changed which risks could still be prevented versus merely mitigated.
06 · Where this goes with AI
Designing responsibly for AI has been my focus since 2020, when I started bringing the UX implications of AI-driven products into conversations with fellow designers. The Sky work clarified what the gap looks like in practice: making a product genuinely ethical requires intervention at three layers at once, and most teams only look at one.
Algorithms and datasets. Engineering-led, but design has to be in the room for dataset choices (diversity, edge cases, fairness) because they shape every downstream experience.
Product shaping. How the product is shaped once risks and outcomes are understood. Product and design together. This is where the Ethical Explorer to Layers of Effect walkthrough does the most work.
Explanation and control. How we explain the system to customers and how we let them correct it. Design-led. The privacy-button onboarding, the no-beautifying-filters statement, and the mute-button messaging all live here.