When a company makes headlines for mismanaging user data, much of the discussion revolves around the legal implications: whether the company’s privacy policy and terms of service cover the data usage in question. The ethical implications of a breach rarely receive the same scrutiny.
Today, however, the General Data Protection Regulation (GDPR) and similar privacy laws, such as the upcoming California Consumer Privacy Act (CCPA), are driving companies to treat ethics as a competitive edge. More innovative players will differentiate themselves from the competition by organizing ethical review committees, ethics teams, and data ethics officers to formally consider the implications of algorithms and machine learning for customer trust and business outcomes.
Individuals in roles such as CIO (Chief Information Officer) and IT Director have a vast responsibility when it comes to enforcing ethical data use. These individuals must oversee appropriate data management while enabling their organizations to turn this information into business value.
Balance benefits and risks created through data use
Within an organization are teams tasked with turning data into something of value. CIOs must help these teams create something beneficial to society at large while mitigating the risk to the individuals whose data is being used.
For example, a healthcare research study can inform the ways new medicines and other clinical interventions are developed, how clinical practices are carried out, and, ultimately, improve patient outcomes.
As long as an ethics review committee determines that the research method will not place undue risk on the individuals whose data is used or shared, the study can create broader societal benefits without compromising sensitive information or otherwise adversely affecting those individuals.
When companies create artificial intelligence (AI)-powered tools, technology leaders have a responsibility to limit bias. If an AI platform is supposed to provide a benefit but ends up unlawfully preventing individuals from receiving a loan or insurance coverage, or from securing a job, it’s an example of technology built without ethics in mind.
Ethical data use is also required for internal purposes. A marketing agency must carefully consider how much personally identifiable information (PII) it needs to ingest in order to serve advertisements to the correct audiences. If technology leaders at such firms don’t carefully consider how the organization manages data, poor data practices can endanger the individuals whose data is used.
Before using customer data to generate business value, technology leaders must ensure their teams are using data in a way that doesn’t inadvertently discriminate against, stigmatize, or otherwise harm individuals. This kind of balancing act — one tied to concepts of utilitarianism — is the most relevant to data ethics. To create products or otherwise use data with ethics in mind, CIOs and their peers should consider the following practices.
Ethics checks and balances lower risks
First, technology leaders should perform privacy impact analyses. These analyses should determine the inherent risks the technology or its uses pose to individuals. They should also examine whether the product and the company’s data practices comply with laws and regulations, and whether the controls in place adequately reduce the risk.
They should ask themselves if they need certain pieces of data, what the team is trying to achieve, and how achieving that goal might put individuals at risk. Where the risk outweighs the benefit, leaders have an obligation to mitigate that risk.
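These questions can be captured in a simple review checklist. The sketch below is a hypothetical illustration, not part of any regulation or standard: the field names and the 1–5 benefit/risk scoring scale are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """One piece of data a team wants to collect (hypothetical model)."""
    name: str
    needed: bool   # is this element required to achieve the stated goal?
    benefit: int   # illustrative 1 (low) to 5 (high) benefit score
    risk: int      # illustrative 1 (low) to 5 (high) risk to individuals

def review(elements):
    """Flag elements to drop (not needed at all) and elements whose
    risk to individuals outweighs their benefit and so need mitigation."""
    drop = [e.name for e in elements if not e.needed]
    mitigate = [e.name for e in elements if e.needed and e.risk > e.benefit]
    return drop, mitigate

elements = [
    DataElement("email", needed=True, benefit=4, risk=2),
    DataElement("precise_location", needed=False, benefit=1, risk=5),
    DataElement("health_history", needed=True, benefit=3, risk=5),
]
drop, mitigate = review(elements)
print(drop)      # ['precise_location']
print(mitigate)  # ['health_history']
```

A real privacy impact assessment is far richer than a score comparison, but even this minimal structure forces teams to state, per data element, why it is collected and who bears the risk.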
For example, both the GDPR and Brazil’s General Data Protection Law (LGPD) include risk/benefit analysis requirements. Weaving these requirements into privacy impact assessments or privacy- or ethics-by-design guidelines is often the least burdensome approach for most organizations, because it lets them leverage existing processes.
To create safeguards against human error during product creation, CIOs must advocate for the creation of an ethics review committee. This committee should include outside advisors with ethics experience and expertise. It should institute sound governance and policies to assess whether an organization’s data practices adequately mitigate risk to individuals.
The goal of these practices is not only to keep companies compliant with privacy laws but also to make them consider what appropriate ethical behavior looks like and what values they want to uphold.
Ethical data use leads to sustainable innovation
When companies achieve sustained innovation, that technological advancement often translates into benefits for society. Innovation that doesn’t take ethics into account tends to stall. Facial recognition technology, for instance, has largely not undergone the requisite ethical impact analyses.
As a result, laws like the facial recognition restrictions recently enacted in San Francisco will continue to emerge, curtailing advancements that might have proceeded had the products been more ethically sound.
It is imperative that technology leaders understand the risks of their data use and mitigate those risks as much as possible. By performing impact assessments properly, following existing guidance, bringing in stakeholders, and having them weigh in about how to balance those risks, companies can greatly improve their chances of positively impacting society.
If CIOs fail to build ethics compliance into their teams’ processes, the likelihood of sustaining innovations and delivering long-term benefits to society is much lower.