Lie #1: AI is too complicated for regular people to understand

What is commonly referred to as “artificial intelligence” is machine learning in a nice coat, and machine learning is fairly straightforward:

  1. Some data is formatted or “wrangled” for analysis.
  2. A portion of the data is fed into a machine learning model that is either supervised (the data is labeled) or unsupervised (the machine finds its own patterns). Training produces a formula for structuring the data and predicting markers of interest.
  3. The remaining data, which the machine hasn’t yet seen, is tested against that formula, sometimes in small or large chunks (called “folds”), sometimes remixed several times, to assess or score the effectiveness of the model. The approach to testing and validation is unique to each use case and requires its own exploration when developing and maintaining a machine learning model.
  4. A “tuning” process, with both human and machine elements, finds ways to improve the model, which are then applied back into it.
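The four steps above can be sketched in miniature. This is a purely illustrative toy, assuming made-up data and a trivial “nearest mean” model; none of the names or numbers come from a real pipeline or library:

```python
# 1. Wrangle: parse raw records into (numeric feature, label) pairs.
raw = ["0.9:low", "1.1:low", "1.0:low", "3.9:high", "4.2:high", "4.0:high",
       "1.2:low", "4.1:high"]
data = [(float(value), label)
        for value, label in (record.split(":") for record in raw)]

# 2. Train: fit a supervised model on part of the data. Here the "formula"
#    is simply the mean feature value per label; prediction picks whichever
#    label's mean is closest to a new point.
train, test = data[:6], data[6:]

def fit(rows):
    sums, counts = {}, {}
    for x, y in rows:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    return min(model, key=lambda label: abs(x - model[label]))

model = fit(train)

# 3. Evaluate: score the model on data it has never seen.
correct = sum(1 for x, y in test if predict(model, x) == y)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")

# 4. Tune: inspect the scores, adjust the model or the data, and repeat.
```

On this toy data the held-out points sit close to their label means, so the model scores well; real tuning is about what happens when it doesn’t.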


This is happening at enormous scale across a huge range of applications: recommending a movie, planning driving routes that avoid traffic, suggesting grammatical improvements, predicting the next words I will type, drafting full paragraphs in my tone from a few dot points. There are loads of fun and interesting technical and ethical layers in how models are sourced, built, tuned and managed, but fundamentally all machine learning has the same component structure: data, model, evaluation, iteration.
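The “folds” mentioned in step 3 can be sketched as k-fold cross-validation: each fold takes a turn as the held-out test set while the model trains on the rest, yielding one score per fold. This is an illustrative stand-in with a deliberately trivial model, not any particular library’s API:

```python
import random

def k_fold_scores(data, k, fit, predict):
    """Shuffle data into k folds; each fold takes a turn as the test set."""
    rows = list(data)
    random.Random(0).shuffle(rows)  # fixed seed so the split is repeatable
    fold_size = len(rows) // k
    scores = []
    for i in range(k):
        test = rows[i * fold_size:(i + 1) * fold_size]
        train = rows[:i * fold_size] + rows[(i + 1) * fold_size:]
        model = fit(train)
        correct = sum(1 for x, y in test if predict(model, x) == y)
        scores.append(correct / len(test))
    return scores

# Trivial stand-in model: always predict the most common training label.
def fit(rows):
    labels = [y for _, y in rows]
    return max(set(labels), key=labels.count)

def predict(model, x):
    return model

data = [(i, "even" if i % 2 == 0 else "odd") for i in range(20)]
scores = k_fold_scores(data, 4, fit, predict)
print([round(s, 2) for s in scores])
```

Averaging the per-fold scores gives a more honest estimate of how the model will behave on unseen data than a single train/test split.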

Lie #2: Black box algorithms are sexy

Explainability has been a core issue of the AI field since Turing outlined the concept of a stored-program machine (the foundation of modern computing) with the potential to moderate and improve its own program (artificial intelligence via machine learning). From the data science perspective, understanding how an algorithm intakes, predicts, scores, tunes, and iterates is critical for debugging, validating, and streamlining it. More importantly, this understanding supports communicating to users the strengths, flaws, and limitations of the model in whatever use case it is applied to.

But the AI marketplace has decided that all that technical gobbledegook makes the product less sexy, and that no one should care what’s behind the curtain: enter the black box algorithm, where you need not think about how it works. This both protects IP and competitive advantage, and reduces opportunity for the kind of scrutiny that would tell end users how effective or reliable an algorithm is, or, for example, whether the AI-magical shopping experience is a machine at all or actually a thousand people in India manually supervising it.

Lie #3: AI can be used to make decisions

The greatest failure of the AI discourse and marketplace is the misapplication of AI.

The potential and impact of machine learning as computational augmentation of human resources is extraordinary. Consider the original “computers”: mathematicians whose spectacular feats of analysis facilitated everything from the construction of cathedrals to space exploration. What could those minds achieve with access to Microsoft Excel, let alone predictive AI? There is no doubt the impact of this kind of computational power is world-changing.

The problem with using AI to make decisions occurs at every stage of machine learning. The data is always inherently flawed, data management is undervalued, and data wrangling is often sidelined to save time and cost. The mathematical principles of predictive modeling are not widely understood. And finally, the output of a model should be treated as advice, not instruction.

Even the most thoroughly user-tested algorithms fail regularly. Streaming TV recommenders will suggest that you continue to watch very similar content. Map routing will occasionally tell you to casually drive straight across a major road. The kid at Blockbuster or a local driver would have made a better recommendation. There are practical reasons for these errors, but the outcome is that we as users bring critical thinking, and context unavailable to the machine, into our decision making.

And that’s the real magic we miss if AI is misapplied to replace rather than augment intelligence.

By Kate Dodd. Kate is an independent strategist, speaker, facilitator and innovator. She is a force in co-creating the future of business through design thinking, data/AI/ML understanding, with an unabashedly millennial approach to effective-first strategy and hyper complexity.