Design for AI -- Making AI Practical for Engineers
Making AI practical means adopting a design-first approach to models & algorithms and considering how human users interact with AI.
AI solutions that ignore the human user are not truly scalable. Machine learning (ML) only keeps working as long as feedback is provided in the form of training labels, flagging mistakes and confirming correct predictions, so that models and algorithms continue to improve and learn in real time.
1. What does AI — specifically ML — truly need?
Machine learning (ML) aims to reproduce the human learning experience: a model learns from a set of training data, makes mistakes, and is corrected for the future. How that training data is curated and provided is something the AI community is beginning to focus on, since the data a model is exposed to can introduce bias into what it learns.
To generate the massive amounts of training data needed for modern ML models such as deep neural networks, the AI industry has leaned on remote part-time workers or commercial labeling centers to label or correct images, videos, and text strings, and to transcribe audio snippets. This manually intensive human side of ML needs to be factored into the launch of any ML product to ensure scalable performance, i.e. models that keep improving over time through continual human feedback.
For example, email spam filtering works well only as long as the user keeps giving feedback, marking emails as spam and correcting misclassifications.
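To make that feedback loop concrete, here is a minimal sketch using scikit-learn's incremental learning API (`partial_fit`): each user correction becomes a fresh training label that updates the spam classifier in place. The example emails and the `user_feedback` list are illustrative placeholders, not data from a real product.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)   # stateless text featurizer
classifier = SGDClassifier()                        # linear model that supports partial_fit

# Initial training on whatever labeled emails exist so far.
seed_emails = ["win a free prize now", "meeting moved to 3pm"]
seed_labels = [1, 0]                                # 1 = spam, 0 = not spam
classifier.partial_fit(vectorizer.transform(seed_emails), seed_labels, classes=[0, 1])

# Each time the user marks an email as spam (or corrects a misclassification),
# that correction becomes a new training label and the model is updated in place.
user_feedback = [
    ("limited time offer, click here", 1),
    ("your invoice for March is attached", 0),
]
for text, label in user_feedback:
    classifier.partial_fit(vectorizer.transform([text]), [label])
```

Without that ongoing stream of corrections, the classifier's performance stalls, which is exactly why the human side of the loop has to be designed in from the start.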
2. What does design have to do with AI / ML?
Given that human-generated training data is the key feedstock for well-performing ML models, designing how human feedback is collected is essential to improving prediction performance. A simple yet elegant example of a tool that collects human feedback for AI training is the CAPTCHA, widely used online to tell humans apart from bots and block automated spam.
3. Our AI Design Philosophy
At DataSeer, we’ve adopted a design-thinking approach that makes the human engineer part of the AI data digitization system: the tool prompts the user for real-time feedback on whether model predictions were right or wrong, and lets the user override the AI system to specify preferences and supply missing context.
Our philosophy is composed of the following beliefs:
- AI should be purposeful and functional, providing measurable business value
- AI should not be scary or threatening to engineers: automation is not a threat but a way to free us to spend time on more meaningful work
- Building trust is key to an effective AI tool, which means automating when we’re confident predictions are correct and handing control back to the user when we’re not (a minimal sketch of this routing follows below)
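The sketch below illustrates that trust principle under assumed names (`tag_symbol`, `predict_with_confidence`, `ask_user`, and the threshold value are hypothetical, not our production API): when the model's confidence clears a threshold the prediction is accepted automatically; otherwise the engineer is asked to confirm or override it.

```python
CONFIDENCE_THRESHOLD = 0.9  # illustrative cutoff; in practice this is tuned per task

def tag_symbol(model, drawing_region, ask_user):
    """Label one region of a drawing, deferring to the engineer when the model is unsure."""
    label, confidence = model.predict_with_confidence(drawing_region)
    if confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: automate and move on.
        return label, "automated"
    # Low confidence: show the model's best guess and let the engineer confirm or override.
    corrected = ask_user(drawing_region, suggested=label)
    return corrected, "user-reviewed"
```

Recording whether each result was automated or user-reviewed also gives a simple, honest measure of how much manual work the tool is actually saving.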
4. A story…
We started out in 2019 by identifying user personas and conducting user interviews with engineers to find out which problems were most pressing. What we heard was that users wanted to automate the repetitive, tedious, and error-prone manual processes involved in extracting and digitizing data from drawings. So we decided to design and build an AI tool to do exactly that.
However, once we rolled out a beta version of the tool in test trials, users told us the automated parts were fast but sometimes plain wrong. Why? It turns out that drawings often require additional context that the engineer has to apply in order to interpret them. This context can come from additional documentation, such as asset hierarchy structure manuals, or from subject matter expert (SME) domain knowledge built up over 25 years of practical field experience.
We went back to the drawing board and iterated on our AI design. This time, we built in human interaction points so users could, at the right moment, enter context and preferences specific to their project before the AI model makes its predictions.
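As a rough illustration of that redesign, the sketch below assumes a hypothetical `digitize_drawing` flow in which the engineer's project context (an asset-hierarchy mapping and a tag-naming preference) is collected first and then applied on top of the raw model output. None of these names reflect DataSeer's actual code; they simply show where user-supplied context enters the pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class ProjectContext:
    """Project-specific knowledge the engineer supplies before the AI runs."""
    asset_hierarchy: dict = field(default_factory=dict)  # e.g. {"P-101": "Pump / Unit 100"}
    tag_prefix: str = ""                                  # site-specific tag naming convention

def digitize_drawing(model, drawing, context: ProjectContext):
    """Run model predictions, then apply the user's context and preferences."""
    predictions = model.extract_tags(drawing)             # raw AI output (hypothetical API)
    results = []
    for tag in predictions:
        tag = context.tag_prefix + tag                     # apply the naming preference
        parent = context.asset_hierarchy.get(tag)          # attach known hierarchy info, if any
        results.append({"tag": tag, "parent": parent})
    return results
```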