Lately, if you’re like me and enjoy following the AI narrative (even if just for grins & giggles), you’re inevitably sucked into philosophical wormholes that always seem to pop you out at the same place – a world where machines rule all.
Strangely, though, we rarely encounter future scenarios that follow a path we’re already on, where machines are but tools used to assist us. If we project this scene forward, some interesting questions to ask are, “What does that world look like, and who are its haves and have-nots? Are AI titans forming?”
AI, for all its hype and promise, is still very much in its infancy. Far from being able to get up, put on its clothes, and take your job, AI today is less a super scary robot and more a smart washing machine (yes, those exist). It can help us conserve resources and do specialized tasks more efficiently, like getting clothes clean using fewer resources, but it can’t do the higher-order thinking we take for granted, like abstract judgment and reasoning. However, that super smart washing machine (and all its other specialized variants) has an owner, and together they can wield tremendous influence. And antitrust laws (put in place over 100 years ago to prevent corporate behemoths from controlling entire markets) may be full of loopholes in the digital age.
Using a singularity argument where machines alone rule provides a convenient escape from a more complex debate about a future where various human and machine forces collide and collapse together. In this scenario, a select set of firms use walled garden data to feed their AI, and as such, seize unprecedented levels of control, influence, and power.
Here’s an example. We’re already seeing a massive consolidation of power and influence into AI titans like Google, Facebook, Apple, Microsoft, and Amazon (controlled by surprisingly few individuals); not pure machines, but formidable entities nonetheless, fueled by AI and directed by small pools of mighty people already circling their wagons around a plethora of data.
In the short run, we (the consumers) seem to benefit, getting innovative little features and conveniences such as travel guidance and digital yellow pages, but unbeknownst to most, we sacrifice gobs of data, and hence privacy, to get them. Each time we travel with GPS on, our whereabouts are tracked and stored. Each time we search, we leave preference footprints. Meanwhile, the behemoths rack up the data, building behavior and preference repositories on each of us.
So what’s the rub?
First, it’s our data. Thus, it would be nice to be able to view it, and if it’s wrong, correct it. The European Union recently passed a law, the General Data Protection Regulation (GDPR), that goes into effect in May 2018. Its intent is to give consumers more rights and transparency over their digital data. Consumers outside the EU would benefit from similar privacy protections.
Second, to some extent, without being cognizant of it, our choices are already being limited. For example, when you search in digital maps, comparison-shop online, or ask a voice assistant for restaurant recommendations, the top options returned may not be calculated objectively. Ranking algorithms already place higher emphasis on businesses that pay to play, and search conglomerates like Google rank their own interests (including businesses they have a stake in) higher.
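To make that concrete, here’s a toy sketch in Python of how a ranking score that blends relevance with ad spend and platform affiliation can push the best objective match below a paying, affiliated listing. The fields and weights are invented for illustration; no vendor’s actual ranker looks like this.

```python
# Toy sketch of a pay-to-play ranking score. Hypothetical weights and
# fields; real search rankers are vastly more complex and undisclosed.
from dataclasses import dataclass

@dataclass
class Listing:
    name: str
    relevance: float   # 0..1, how well the listing matches the query
    ad_spend: float    # dollars paid for promotion
    affiliated: bool   # owned by or partnered with the platform

def score(listing: Listing) -> float:
    # A flat boost for platform-owned businesses.
    boost = 0.2 if listing.affiliated else 0.0
    # Paid promotion and affiliation can outweigh raw relevance.
    return (0.6 * listing.relevance
            + 0.3 * min(listing.ad_spend / 1000, 1.0)
            + boost)

listings = [
    Listing("Best match, no ad budget", relevance=0.95, ad_spend=0, affiliated=False),
    Listing("Platform's own service", relevance=0.70, ad_spend=800, affiliated=True),
]
for l in sorted(listings, key=score, reverse=True):
    print(f"{l.name}: {score(l):.2f}")
```

With these made-up numbers, the platform’s own service (0.86) outranks the objectively better match (0.57), and the user never sees why.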
Each time we purchase something, we’re casting a vote. When we go through a buying cycle, we create implied demand, and when we purchase, we reinforce that the supply is meeting the demand we created. When this cycle is cornered, choice becomes an illusion. To illustrate, on June 27, 2017, the EU slapped Google with a record-breaking $2.7 billion fine, charging the tech titan with doctoring search results to give an “illegal advantage” to its own interests while harming its rivals.
Third, firms can and will use your data for their benefit, and not necessarily yours. Prior to the digital age, people stereotyped others by their physical choices, such as their house, car, job, shopping habits, and clothes. Although those choices still factor in today, we also project digital personas: where we surf, what we share and like on Facebook and Instagram, what devices and channels we use, how we interact online, and so forth. When these behaviors are crunched and codified, they become rich fuel for algorithms that can manipulate, discriminate, or even do harm, without the algorithms’ owners having any concern for side effects or aftereffects. Show a preference for fast cars and thrill-seeking vacations, and not only will you receive more of those offers, but you might also receive higher insurance premiums. Share enough medical history, and an insurer’s algorithm may score you at high risk for a chronic disease, even when there’s no medical diagnosis and no certainty you’ll ever develop that condition. That might make it very hard to get medical coverage.
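As a hypothetical illustration of how such scoring might work (the features and weights below are invented; no actual insurer’s model is implied), a handful of behavioral signals can be folded into a single risk number that triggers real consequences:

```python
# Hypothetical sketch: behavioral signals folded into a risk score.
# Feature names, values, and weights are invented for illustration.
profile = {
    "likes_fast_cars": 1.0,          # inferred from social media activity
    "thrill_vacation_searches": 0.8, # inferred from browsing history
    "late_night_activity": 0.4,
    "shared_medical_keywords": 0.6,  # inferred from posts, not a diagnosis
}

weights = {
    "likes_fast_cars": 0.30,
    "thrill_vacation_searches": 0.25,
    "late_night_activity": 0.15,
    "shared_medical_keywords": 0.30,
}

risk = sum(weights[k] * v for k, v in profile.items())
print(f"risk score: {risk:.2f}")  # 0.74 with these assumed numbers

if risk > 0.5:
    # The consequence follows the score, whether or not the
    # underlying inferences are actually correct.
    print("flag: higher premium tier")
```

The point isn’t the arithmetic; it’s that the consumer never sees the features, the weights, or the threshold, yet lives with the outcome.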
Admittedly, not all of the use cases lead to undesirable outcomes. In late 2016, American Banker ran an article on next-gen biometrics detailing how banks use consumers’ digital behavior signatures to detect fraud and protect consumers from its effects. And although consumers do initially benefit from such a service, what’s interesting (and concerning) is the nature of the behavior data fed to the fraud-detection algorithm: the angle at which the operator typically holds the smartphone, pressure levels on the touchscreen, and the cadence of keystrokes.
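Mechanically, that kind of behavioral biometric check might look something like the minimal sketch below, which measures how far a session’s behavior drifts from the account owner’s baseline. The signals come from the article; the units, baseline values, and threshold are assumptions for illustration, not any bank’s actual system.

```python
# Minimal sketch of behavioral-biometric anomaly detection, using the
# signals the article mentions: device hold angle, touch pressure, and
# keystroke cadence. All numbers here are illustrative assumptions.
import math

# Baseline built from the account owner's historical sessions.
baseline = {"hold_angle_deg": 35.0, "touch_pressure": 0.62, "keystroke_ms": 180.0}
scale    = {"hold_angle_deg": 8.0,  "touch_pressure": 0.10, "keystroke_ms": 40.0}

def anomaly_score(session: dict) -> float:
    # Normalized Euclidean distance from the owner's baseline behavior.
    return math.sqrt(sum(((session[k] - baseline[k]) / scale[k]) ** 2
                         for k in baseline))

current = {"hold_angle_deg": 62.0, "touch_pressure": 0.95, "keystroke_ms": 95.0}
score = anomaly_score(current)
print(f"anomaly score: {score:.2f}")

if score > 3.0:  # arbitrary cut-off for the sketch
    print("step-up authentication: possible imposter")
```

Notice that nothing in the data itself restricts its use to fraud detection; the same distance-from-baseline math could just as easily feed a very different prediction.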
Unquestionably, the bank’s primary goal is predicting whether an imposter is behind the device in question. Nonetheless, what’s stopping that same bank from using the data to predict a consumer’s likely mental state, such as the likelihood of inebriation, legal or otherwise? Moreover, whether that prediction is ultimately accurate is irrelevant to the immediate recommended action and its consequences. We have little protection from the effects of algorithmic false positives, and today, except for credit scores, few brands are held accountable for the accuracy of their model scoring.
Here’s a scenario. An algorithm decides, based on your smartphone behavior, that you’ve been drinking, flags you as too drunk to drive, and disables your car, forcing you to find another way home. That’s one thing, but think about this – that same data might also be available to prospective employers, who use it to forecast your job performance, scoring you lower than other candidates based on its dubious substance-use prediction.
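A quick back-of-envelope calculation shows why such false positives pile up. Assuming, purely for illustration, that impairment is rare at any given moment and the model is reasonably accurate, most of the people it flags would still be sober:

```python
# Base-rate math with assumed numbers: even a seemingly accurate
# "inebriation" classifier produces mostly false alarms when the
# behavior it targets is rare.
prevalence = 0.01           # assume 1% of sessions involve an impaired user
sensitivity = 0.90          # model catches 90% of true cases
false_positive_rate = 0.05  # and wrongly flags 5% of sober users

true_pos  = prevalence * sensitivity                 # 0.009
false_pos = (1 - prevalence) * false_positive_rate   # 0.0495
precision = true_pos / (true_pos + false_pos)

print(f"share of flags that are correct: {precision:.0%}")  # ~15%
# Roughly 5 of every 6 people flagged as "drunk" would be sober.
```

With these assumed numbers, only about 15% of flags are correct, yet the consequences (a disabled car, a lower candidate score) land on everyone flagged.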
Who owns and manages your digital behavior data? Are they subject to use restrictions? The answer: although the data describes your profile and your behavior, you don’t own it, and your rights are limited. And although some of the more inconsequential data is scattered about (such as name, address, date of birth, and so on), the deeper behavioral insights are amassed, stored, and crunched by the AI titans, with seemingly no limits, little transparency, and little insight into where it’s shipped and who else might eventually use it. They suggest we simply trust them.
Those who ignore history are doomed to repeat it
History is always an amazing teacher. In the 19th century, railroads consolidated into monopolies that controlled the fate of other expanding industries, such as iron, steel, and oil. They dominated the distribution infrastructure – just as today’s AI titans, in many respects, control the lifeblood of modern companies: their prospect and customer traffic. And iron, steel, and oil were no different; they, in turn, controlled the fate of the expanding industries that needed their materials.
Soon after their start, Google’s founders adopted a mantra: “Don’t be evil.” In October 2015, under the new parent company Alphabet, that changed to “Do the right thing.” Although the revised phrase still rings with the implication of justice, it raises the question of who benefits from that justice, and whether a disguised internal trust is forming.
Everyone knows that business, by its very nature, is profit driven. There’s nothing wrong with that, yet history teaches us that we need checks and balances to promote a level playing field for other competitors or potential entrants, and for consumers.
In his 1998 book “The Meaning of It All,” the famous physicist Richard Feynman tells a story of visiting a Buddhist temple, where a man offers him sage advice: “To every man is given the key to the gates of heaven. The same key opens the gates of hell.” Unpacked and applied to AI today:
- The phrase “every man” can mean an individual, an organization made of people, or humankind as a whole.
- Science, technology, data, and artificial intelligence are but tools. As history shows, humans use them for good and evil purposes.
- AI’s impact on the future isn’t pre-determined. Each of us can play a role in shaping how it turns out.
Let’s ensure we live in a world where many (not a select few) benefit from AI’s capacity to improve lives, and where those responsible for its development, evolution, and application are held to fair and ethical standards.
Can AI be the rising tide that lifts all boats?
The power and potential of artificial intelligence technologies are clear, yet our ability to control it and deploy it sustainably is not. Who should regulate and control it (and its fuel, our data) is an evolving and ongoing debate.
Used responsibly and applied democratically, we all stand to benefit from AI. Paradoxically, even as it renders some of our old jobs obsolete, it can retrain us for a new world where it and we play new, more rewarding roles – where living standards rise and mortality rates fall.
What’s our guarantee we’re marching toward that future?
Honestly, there are no guarantees – our world is devoid of certainty. However, we can influence likely outcomes by advocating for practical checks and balances. Call me a dreamer, but I envision a world where our privacy is valued and respected; where we better understand the value of our data and get a reasonable exchange when we share it; where we appreciate what happens when we release it and can hold accountable those who illegally mangle or pawn it; and a world where we have assurance that when we share data, others uphold their end of the agreement, and we have recourse if they don’t.
If you would like to continue contemplating some of the top ethical implications of AI’s evolving story, click on this link:
https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
Here’s my favorite quote from it:
“If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live.”
Peace