The Data Oath

As intelligent systems replace more human workers across a variety of tasks, the questions we must ask are: Just because a task can be automated, should it be? Does automating the task improve, maintain, or degrade the experience of the user? Would we receive a higher quality of service or outcome if a human were still in charge? Most importantly, are users and society as a whole better off as a result of using the intelligent system?


We believe that intelligent systems should be used only when it can be shown that users and society benefit in measurable ways. If a system produces biased or negative outcomes for certain groups in society, it should be taken out of service until it can be improved. While this may seem strict, it asks only that what we create be better than a human would be in the same job.


As we automate more tasks, jobs, and human activities, we have to start asking questions like the ones listed above, including to what degree the choices we are making improve the lives of the users and society we are meant to serve. As a general rule, companies are organizations intended to provide value to the communities they serve in exchange for monetary compensation. This might seem like a strange statement given the way many companies operate, but that only shows how far we have allowed the ethical lines in business to be moved.


We are entering the first AI age, in which machines, not humans, already have a significant impact on our lives, determining numerous milestones for everyone in society: Do you get interviewed? Do you get paroled? Do you get credit for your first car? Are you approved to rent a house? Are your kids immediately eliminated from consideration by the college of their dreams? Are these really the choices we want made purely by a machine that has no conception of the stakes for users hoping for a positive outcome? To what extent should a machine be able to determine the future of an individual?

We have to be more vigilant about the ethical lines society has drawn and ensure they aren't moved further and further away from what we would want and expect from the technology we develop. We shouldn't turn a blind eye to the consequences imposed on users just because a technology is revolutionary in how it operates. We have to ensure that the technology serves all of us the same way we would want our parents, siblings, sons, and daughters to be served.


Imagine you had a daughter or son who had worked their entire life to excel in school, but upon applying to universities, they were unanimously rejected. They had straight A's, played a sport, volunteered, and did many other things that showed them to be an exceptional young individual, so you can't understand why they were rejected. However, after reaching out to all the schools looking for answers, you learn that the admission committees were retired a few years ago, and now everything is done by a new AI system used by all the schools. There is no appeal that could change the outcome, nor is there a human who can intercede on your behalf. After all, if there were, every parent of a rejected applicant would want special treatment.


In this situation, would you be happy that the schools saved money on admission committee salaries even though the system isn't completely ready? Probably not. While the admissions service has years to work out its issues, your kid is stuck in their academic advancement, and their future opportunities are being materially impacted. If you would be happy with the situation above, stop reading, watch the Hallmark channel, and then reread this section.


While Improving Quality of Life is the least technical of the Five Foundations, it is the most aspirational. We should consider the impacts of the systems we are designing so that we don't create a world for ourselves and our children that we wouldn't want to live in. All of us in the AI, technology, business, and governmental communities must reflect on what we are doing and how it will change how we live. We must ensure that our innovations lift more of us up and provide a more level playing field across races, genders, nationalities, orientations, and borders. So let's start with a commitment to making the lives of our users better than they were before Artificial Intelligence.

Data that Improves the Quality of Life