
The Data Oath

Transparency & Explainability are central to Ethical AI. They are how we, as users, can verify what these systems are doing, how they are doing it, and how we are being treated relative to other people and groups. They let us understand the factors that went into our score, recommendation, or other outcome. Can the system tell us what data was used and with what weightings? This is how we keep intelligent systems, and the companies that develop them, accountable for our treatment and outcomes.
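
To make the idea concrete, here is a minimal sketch, in Python, of what "explainable by construction" could look like. The feature names, weights, and applicant values are hypothetical, invented purely for illustration; the point is that the system returns not just a score but also exactly which data it used and how each input was weighted.

```python
# Hypothetical linear scoring model, explainable by construction.
# Feature names, weights, and applicant data are invented for illustration.

WEIGHTS = {
    "payment_history": 0.35,
    "credit_utilization": 0.30,
    "account_age_years": 0.15,
    "recent_inquiries": -0.20,
}

def score_with_explanation(applicant: dict) -> dict:
    """Return a score plus the per-feature contributions behind it."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in WEIGHTS.items()
    }
    return {
        "score": sum(contributions.values()),
        "data_used": applicant,          # exactly which data points were read
        "weights": WEIGHTS,              # how each input was weighted
        "contributions": contributions,  # each feature's effect on the score
    }

if __name__ == "__main__":
    report = score_with_explanation({
        "payment_history": 0.9,      # inputs normalized to [0, 1]
        "credit_utilization": 0.4,
        "account_age_years": 0.7,
        "recent_inquiries": 0.2,
    })
    for key, value in report.items():
        print(f"{key}: {value}")
```

A real scoring system is far more complex than a handful of weighted features, but the principle scales: whatever the model, it should be able to surface the data and weightings behind each outcome it produces.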


We believe the legitimate scope for black-box systems, exempt from any Transparency & Explainability requirements, is narrow when the data being handled has real-life impacts on users. If a system deals with human data, its results should be explainable to users, regulators, and other stakeholders; otherwise, there is no verifiable way to ensure the proper treatment of all users and groups.


Most modern societies operate on foundations of Trust & Accountability. This applies between fellow citizens, between citizens and companies, and between citizens and government. Granted, the level of Trust & Accountability may be higher in some countries and lower in others, but we believe it is fair to say that all citizens expect Trust & Accountability as part of their social contract. Part of being a good citizen is expecting everyone else to behave as one, and expecting that offenders will be held to account.


Putting Trust & Accountability into more concrete terms: when something significant happens in our lives and an outside party influenced that event, we expect to be able to hold that party accountable if we feel the treatment was unfair, incorrect, or wrong. We expect an official body in our country (a court, department, agency, etc.) to hear both sides, weigh the evidence and arguments, and issue a verdict. Both sides are expected to be transparent and explain what they did and why. If one party refused to be transparent or to explain its actions, the official body could not rule in its favor.

It would be unthinkable for a company or individual to offer "It's too hard to explain to you" as justification for harming you and then be let off the hook. That would go against everything we stand for in modern society, and we would rightly demand change so that the offending party was held accountable. It is unethical for any company or individual to be able to impact another person's life with zero consequences for their actions.


But how does this apply to Transparency & Explainability in AI? Imagine the same scenario, with "fellow citizen" replaced by "intelligent system" and "company" by "AI company." Would you want a company to be able to deploy an intelligent system with real-life impacts on your life (access to credit, promotions, job interviews, college admission for your kids, etc.) that has virtually no way to tell you why a decision was made, or who is accountable when the outcome is negative or undesirable? And as a cherry on top: if you push to find out why, the company can tell you that an explainable system would make them less money.


So the question naturally arises, as we replace more people with intelligent systems: wouldn’t we want to be able to verify, audit, and understand how these systems are passing judgments on all of us? Wouldn’t we want these systems to operate in alignment with our societal standards? Do we really want to be governed by intelligent systems that are answerable to no one? We think not.

Transparent + Explainable Data
