AI: Myth, Reality, the Hype Cycle, and Amara’s Law

By Ruth Seeley

If the buzz about artificial intelligence is starting to sound reminiscent of the late days of the dot-com boom, there’s an excellent explanation for the déjà vu. Whether you accept Gartner’s Hype Cycle as gospel or think of it as a metaphor, there’s no doubt AI has followed its classic pattern: from the technology trigger in 1956 through the peak of inflated expectations and the trough of disillusionment, and onto the slope of enlightenment.

During this phase, the potential benefits of AI are starting to be realized. Research is ongoing, and second- and third-generation product iterations are building confidence as they’re fine-tuned. We haven’t yet reached the plateau of productivity, in which not only the concept but also the criteria for assessing it gain mainstream adoption. But we’re close.

The development of AI has also followed Amara’s Law, which states that we tend to overestimate the effect of a technology in the short term but underestimate its long-term effects. The celebration of AI’s usefulness in fraud detection, autonomous vehicles, speech and image recognition, and natural language comprehension is deserved. But we’re also starting to look long and hard at how crucial it is to eliminate bias in AI and how to accomplish that.

At CES 2020, the Consumer Technology Association has scheduled a day-long AI track. While presenters will look at consumer-facing AI technology, they’ll also ponder AI’s global economic impact, consider how to integrate it across all industries, and examine its biases.

Here are three examples of bias that have been discovered in computer vision systems alone (a short sketch of how such gaps can be measured follows the list):

  • gender classification systems are more accurate for lighter-skinned males than they are for darker-skinned females;
  • women are under-represented in image search occupation results, which reinforces gender stereotypes; and
  • object-recognition systems have an error rate roughly 10% higher for very low-income households (less than US$50 per month) than for households earning more than US$3,500 per month.
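
As a rough, purely illustrative sketch of how gaps like the first one are surfaced, the Python snippet below scores a hypothetical gender classifier’s predictions separately for each demographic group using made-up records; per-group accuracy comparisons of this kind are what audits such as the Gender Shades study performed at scale.

    from collections import defaultdict

    # (group, true_label, predicted_label) records, invented for illustration only
    records = [
        ("lighter-skinned male",  "male",   "male"),
        ("lighter-skinned male",  "male",   "male"),
        ("darker-skinned female", "female", "female"),
        ("darker-skinned female", "female", "male"),    # misclassification
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        correct[group] += int(truth == prediction)

    # Report accuracy per group; a large gap between groups signals bias.
    for group in total:
        accuracy = correct[group] / total[group]
        print(f"{group}: accuracy = {accuracy:.0%} ({correct[group]}/{total[group]})")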

This focus on eliminating AI bias is crucial to its continued development and adoption, partly because it highlights the biggest myth about AI: that AI is AI is AI. The earliest concept, General AI, involved the creation of machines that would function as Humans 2.0: bigger, faster, stronger, and just as smart, if not smarter. But what we’ve seen to date is actually Narrow AI: algorithms trained and modified by machine learning to perform highly specific tasks as well as or better than humans can.

Simulating how humans deal with ambiguity and nuance in a particular situation (language usage, for instance, where a word has multiple meanings but its intended meaning can be determined by context) is something AI can do. But AI cannot currently be extended to solve problems it hasn’t been trained to solve. In other words, most AI still can’t pass the 1950 Turing Test consistently, although natural language processing has come a long way in a relatively short period of time.
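
As a toy illustration of that kind of context-based disambiguation, the Python sketch below uses a simplified, Lesk-style overlap count; the two-sense inventory for “bank” and its cue words are invented for the example, standing in for what real NLP systems learn from large corpora.

    # Toy word-sense disambiguation: pick the sense whose cue words overlap
    # most with the surrounding sentence (a simplified Lesk-style heuristic).
    SENSES = {
        "bank": {
            "financial institution": {"money", "deposit", "loan", "account", "savings"},
            "edge of a river":       {"river", "water", "shore", "fishing", "muddy"},
        }
    }

    def disambiguate(word, sentence):
        context = set(sentence.lower().split())
        # Choose the sense whose cue words appear most often in the context.
        return max(SENSES[word], key=lambda sense: len(SENSES[word][sense] & context))

    print(disambiguate("bank", "she opened a savings account at the bank"))   # financial institution
    print(disambiguate("bank", "they went fishing on the muddy river bank"))  # edge of a river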

The most pervasive myth about AI is that its algorithms can make sense of any and all data. The reality, however, is that the quality of data, including its sources, matters (and it probably matters more than its quantity). As IT Professional Magazine’s Seth Earley has said, “The most important input for an AI tool is data—not just any data, but the right data . . . relevant to the problem being solved and specific to a set of use cases and a domain of knowledge.”
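
As a minimal sketch of what taking “the right data” seriously can look like before any training happens, the Python snippet below runs a few basic checks (completeness, duplicates, label balance) on a hypothetical fraud-detection dataset; each of these quiet flaws degrades whatever a model subsequently learns.

    from collections import Counter

    # Hypothetical training records for a fraud-detection model; the checks
    # below are illustrative, not an exhaustive data-quality pipeline.
    rows = [
        {"text": "card charged twice at same store", "label": "fraud"},
        {"text": "card charged twice at same store", "label": "fraud"},       # duplicate
        {"text": "weekly grocery purchase",          "label": "legitimate"},
        {"text": None,                               "label": "legitimate"},  # incomplete record
    ]

    complete = [r for r in rows if all(r.values())]            # drop records with missing fields
    unique = {(r["text"], r["label"]) for r in complete}       # drop exact duplicates
    labels = Counter(label for _, label in unique)             # check label balance

    print(f"{len(rows)} rows -> {len(complete)} complete -> {len(unique)} unique")
    print("label balance:", dict(labels))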

Watson’s Jeopardy win is a classic example of what it will take to get to the plateau of productivity: three years of effort and a $25-million investment. And if a single human brain really does have more switches than all computers, routers, and internet connections combined, General AI is still a very long way off.

Whether that’s a good thing or a bad thing isn’t yet clear. But it does mean two other myths about AI are just that, myths: that AI will lead to huge job losses and that it will eliminate the need for humans.

Gartner and Deloitte both predict net job gains by 2020. Gartner forecasts that while 1.8 million U.S. jobs will be lost as a result of AI implementation, 2.3 million will be created. Unemployment statistics reinforce this conclusion: the U.S. unemployment rate was lower in 2019 than it had been in 50 years. Deloitte sees many of the gains coming from the self-employed, projecting 42 million “alternative” workers by 2020.

Ultimately, the technology and machines we’ve created aren’t really so different from us. Just as humans need continuous learning and nutritious fuel to thrive, so does AI. Gartner’s hopeful prediction is that an augmented workforce combining human and artificial intelligence will recover 6.2 billion hours of worker productivity. In a world where being “on” 24/7 has become the norm, AI may be key to finally working smarter, not harder.
