Mario Gamper, VP of Strategic Design, BCG Digital Ventures, discusses why the time to build artificial ethics is now. 

What will happen to us once artificial intelligence becomes smarter than humans? Elon Musk has repeatedly warned that the singularity is the biggest current risk to humanity, and he is only the star tenor rising above the concerned hum of a much larger choir.

There is a danger, however, of focusing only on the singularity. It doesn’t take smarter-than-human AI to create severe ethical problems in human-computer interaction. We need to build “artificial ethics” (AE) now, even if they are as incomplete as our current attempts to build AI.

The most well-known examination of the human-to-AI relationship is still Isaac Asimov’s “Three Laws of Robotics.” In an early form of science-fiction prototyping, Asimov’s literary MVP explores whether a minimalist set of three ethical rules can make smart robots safe for humanity.

In today’s context, however, the laws—which state that “a robot may not injure” or “harm” a human—seem overly concerned with killer robots and prove difficult to apply directly to our lives.

Killer robots are not the problem

In reality, the challenges that may soon keep us up at night are much more mundane. They arise from the possibility of ethics-free AIs filtering everyday opportunities for each and every one of us.

To see just how quickly things can get ethically complicated, let’s look at something as innocent as e-commerce and take it just a few years forward. Imagine a marketing AI in 2021 that has access to data on a consumer’s spending behavior, behavioral triggers, and current emotional state. Let’s also assume this AI is able to create or select the content of your e-commerce interface. Without ethical guardrails, it would use relentless A/B testing to figure out how to steer each individual toward the result that maximizes profit.

We could easily imagine this AI offering a nontransparent subscription model to people who, for one reason or another, are unable to evaluate it carefully. Perhaps this is because they’re bad at math, or maybe it’s just because they partied too hard last night. The AI would continuously learn and optimize its algorithms toward what “works.” As it figures out the next best offer “by itself,” who would be responsible for any ethical shortcuts it takes?
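To make the scenario concrete, here is a minimal sketch, in Python, of what this kind of profit-only optimization can look like: a simple epsilon-greedy bandit that keeps serving whichever offer variant earns the most, with an optional guardrail hook as the only thing standing between it and the nontransparent subscription. The offer names and the `is_ethically_acceptable` check are hypothetical, invented purely for illustration.

```python
import random

# Hypothetical offer variants the marketing AI can show a user.
# "opaque_subscription" stands in for the nontransparent offer described above.
OFFERS = ["single_purchase", "clear_subscription", "opaque_subscription"]

def is_ethically_acceptable(offer: str) -> bool:
    """Guardrail hook: without it, the optimizer below is purely profit-driven."""
    return offer != "opaque_subscription"

class OfferBandit:
    """Epsilon-greedy A/B/n testing: serve offers, learn which one earns the most."""

    def __init__(self, offers, epsilon=0.1, use_guardrail=False):
        self.offers = [o for o in offers if not use_guardrail or is_ethically_acceptable(o)]
        self.epsilon = epsilon
        self.shows = {o: 0 for o in self.offers}
        self.revenue = {o: 0.0 for o in self.offers}

    def pick_offer(self) -> str:
        # Mostly exploit the best-earning offer so far, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.offers)
        return max(self.offers,
                   key=lambda o: self.revenue[o] / self.shows[o] if self.shows[o] else 0.0)

    def record_result(self, offer: str, revenue: float) -> None:
        self.shows[offer] += 1
        self.revenue[offer] += revenue

# The profit-only bandit will happily converge on the opaque subscription
# if that is what "works"; the guardrailed one never offers it at all.
profit_only = OfferBandit(OFFERS)
with_guardrail = OfferBandit(OFFERS, use_guardrail=True)
```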

Move forward another two to three years, and this marketing AI is no longer content with just controlling the e-commerce offer and interface. Why not manipulate the morning’s news selection in a way that makes you want to buy ice cream? Why not guide you toward a new romantic relationship, as long as it puts you on a path to booking a two-week vacation at a luxury resort?

Can selling happiness be unethical?

“Harm” and “injury” are difficult categories to apply when the outcome is sipping cocktails under a palm tree with a new lover. The ethical question is less about individual pain and more about the potential loss of autonomy and of interests that haven’t been tampered with.

These life-shaping AIs should be given input on questions like: Is it OK for an algorithm to deny someone an opportunity, even if they don’t notice it? Is it OK to drive someone toward a solution that’s not optimal for them, even if they are happy with the suboptimal outcome?

Currently, making sure an interface is not manipulative or outright unethical is a key task for interface designers and an important element of their education. Check out @darkpatterns for digital interfaces made by companies and designers that have taken their eyes off the ethical compass.

But as the power of guiding people’s choices moves from front-end to back-end, where will ethical responsibility sit? The answer is just about everywhere, from UI to engineering. Quite the cultural change for tech companies. Note how no one at Facebook seems keen on raising a hand to own the “responsibility” for your personal filter bubble. It just happened.

No one wants to slow down in the race to AI

Pulling the ethical handbrake in the middle of the AI drag race isn’t an intuitive idea. While Texas-based Lucid.ai started an industry-first “ethics panel” in 2015, it remained an outlier for quite some time. It took until October 2017 for Alphabet’s DeepMind to launch its “Ethics and Society” panel.

The realization that AI companies have serious ethical work to do finally seems to be catching on. But it’s not their job alone. The path toward safely scalable AI requires effort on two fronts:

  1. Responsibility: This doesn’t just concern the AI tech companies. All of us as digital product designers and engineers need to kick off a thorough discussion about artificial ethics. Until we can prove that AIs will figure out Kant’s Categorical Imperative by themselves, we carry the responsibility for the outcomes of the algorithms’ work.
  2. Regulation: If we want to fully unlock the benefits of AI for our lives, governments need to build international legal frameworks that create a stable and ethical playing field for AI companies. This is clearly an area we can’t afford to leave unregulated, even if the first attempts, like Article 22 of the EU’s GDPR, look like they make innovation more difficult.

We can’t afford to sleep on this. We need to have AE—long before AI wakes up.

Read more from Mario Gamper on why startups should build identities that offer both structure and flexibility in his recent article, “Building Bendable Brands.”