I remember the company’s drive to become GDPR compliant. You would imagine it would be a fairly simple process: lawyers would clearly articulate the new things you had to start doing and the old things you had to stop doing, and help sort through the shades of grey where the new regulation wasn’t clear. Instead, it proved to be a painful, somewhat messy process for our Product team. We would often receive impractical requests, asks from partners that couldn’t be carried through to fruition, all on impossibly short deadlines. Considering the obstacles we faced, it is with some pride that I recognize what our teams were able to accomplish and that they ultimately completed the job. That said, the experience made it clear to all involved that there had to be a better way. So was born our initiative to build Digital Responsibility into the core of our offerings and to actively contribute to the conversation across the industry with our partners, clients and regulators.
Regulation is subject to change, but by creating and enforcing a strong policy agenda, your organization can proactively ensure that all product design is built with digital responsibility in mind.
Once our policy agenda was finalized, we created a number of internal practices to ensure our Products and Services are not only compliant, but in line with the high ethical standards we have set for ourselves and that our clients demand. There are three key components:
- Senior leadership review board: A collaboration of leaders across all functional areas of the company who together consider challenges and major policy decisions related to uses of data and technology. This ensures leadership commitment to fairness to people, respect and accountability in how data and technology are used in our Products and Services.
- Digital responsibility evaluation process: A formal evaluation of products and services during the design phase to ensure that what we build and deliver is, by design, ethical, accountable, safe and secure.
- Data source evaluation: A formal due diligence process on new data sources to ensure the data was ethically sourced and is in compliance with applicable law, and that we understand the permissions and prohibitions attached to it. This enables us to activate the data for clients in ways that are ethical and fair to people.
Following these practices means we can turn our good intentions into digitally responsible functionality delivered to our teams and clients… with one caveat. Up to this point, I’ve been talking about decisions made by people, either in how the data can or can’t be used, or in the rules applied in the software to determine an output from a series of inputs. However, with the ever-increasing use of software decisioning based on machine learning, we run the risk of machines learning bias from the data fed into them.
It should come as no surprise that the data fed into machine learning algorithms needs to be carefully curated. There is the apocryphal tale of the hotel chain that wanted to understand how room occupancy was affected by room pricing, so it fed daily occupancy and room price data into an ML platform. They were surprised to find that the algorithm recommended putting prices up to increase occupancy. On closer examination, they found that the days they were full fell during conference season, when they could charge extremely high rates for rooms and still get 100% occupancy. Once they added demand signals to the algorithm, it started making more sensible pricing recommendations.
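To make that failure mode concrete, here is a minimal sketch using entirely synthetic data (not the hotel chain’s actual figures; all feature names and numbers are illustrative) of how omitting a demand signal can flip the apparent relationship between price and occupancy:

```python
# Hypothetical illustration of the hotel-pricing story: synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_days = 365

# A latent demand signal (e.g. conference season) drives BOTH price and occupancy.
demand = rng.uniform(0, 1, n_days)
price = 100 + 150 * demand + rng.normal(0, 10, n_days)      # rates rise when demand is high
occupancy = np.clip(
    0.35 + 0.6 * demand - 0.003 * (price - 100) + rng.normal(0, 0.05, n_days),
    0, 1)

# Model 1: price only. The demand confounder is omitted, so higher prices
# appear to go hand in hand with fuller hotels.
m1 = LinearRegression().fit(price.reshape(-1, 1), occupancy)
print("price coefficient, price only:  ", m1.coef_[0])   # positive: "raise prices to fill rooms"

# Model 2: price plus the demand signal. With the confounder included,
# price recovers its expected negative effect on occupancy.
m2 = LinearRegression().fit(np.column_stack([price, demand]), occupancy)
print("price coefficient, with demand: ", m2.coef_[0])   # negative: higher prices reduce occupancy
```

The same spurious recommendation can come out of far more sophisticated models; the fix is the same as in the story, curating the inputs so the signals that actually drive the outcome are present.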
As one might expect, no analyst wants to present a recommendation to a client and have to answer the question “how did you arrive at that conclusion?” with “no clue, the machine told me.” This means our algorithms have to be explainable and accountable. A lot of work goes into understanding which inputs really influenced the output, and machine learning platform providers are now delivering explainable AI components to contextualize these narratives. These solutions can help point to datasets that may be reinforcing bias. Some explainable AI solutions also include “what-if” tools, so you can change attributes to see how they impact outcomes and how outcomes correlate with, for example, gender or ethnicity. Using these methods, such as counterfactual fairness, can help reduce machine learning bias and lead to a fairer and more ethical use of AI technology.
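A simplified sketch of that “what-if” idea, assuming a hypothetical model with a protected attribute among its inputs: flip the attribute while holding everything else fixed and count how often the decision changes. (Full counterfactual fairness also models how the attribute causally affects the other features; this sketch only flips the one column, and all data here is synthetic.)

```python
# Naive "what-if" check, loosely in the spirit of counterfactual fairness.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: a 0/1 protected attribute plus two legitimate signals.
protected = rng.integers(0, 2, n)
income = rng.normal(50, 15, n) + 5 * protected   # historical data is already skewed by group
history = rng.normal(0, 1, n)
label = (0.03 * income + history + rng.normal(0, 1, n) > 2).astype(int)

X = np.column_stack([protected, income, history])
model = LogisticRegression(max_iter=1000).fit(X, label)

# "What-if": same people, protected attribute flipped, everything else held fixed.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]

changed = model.predict(X) != model.predict(X_flipped)
print(f"decisions that change when only the protected attribute flips: {changed.mean():.1%}")
```

A nonzero rate here is a prompt for investigation, not a verdict: the next step is tracing which datasets and features let the protected attribute influence the outcome in the first place.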