Stephanie Dinkins (2017)

Critic: Stephanie Dinkins

Artificial intelligence has already arrived, infiltrating our civic and personal lives, and quietly reshaping the ways we live, love, work, and interact. AI’s ever-growing computational potential, powered by wellsprings of near-limitless data, has resulted in significant turning points over the course of the technology’s development.

It’s reasonable to declare that humans and learning machines are on the precipice of a new epoch. (Skeptics of the claim are invited to remember that the iPhone has only been around since 2007.) The imminent wave of artificially intelligent cars, homes, medical interventions, and the like is set to alter human life all over again.

Rather than fear the impending AI revolution, we would be wiser to get involved—even those among us who can’t code, design, or even comprehend AI's prismatic complexities. The importance of transparency in the creation and use of algorithmic systems—particularly those employed to make life-altering decisions (like the length of jail sentences, or the depth and breadth of medical care)—cannot be overstated. We must be aware of the decisions artificially intelligent systems are making, understand how they are making them, and realistically anticipate the ramifications of those decisions.

That bias and discrimination can be and already are encoded in AI systems is no secret. One need only recall a scandalous episode from 2015, in which Google Photos tagged an image of two black friends as 'gorillas.' By most published accounts, the misclassification was unintentional. Deliberate or not, the incident points to a disturbing and systemic problem that persists today. Algorithms, like the ones behind that photo-tagging feature, were created by a largely homogeneous pool of programmers and trained on a limited dataset that did not represent or describe the diversity of the human family.

Read the full editorial on NEW INC STREAM.