AIs, predictions and judgment
This McKinsey piece summarizes some of Ajay Agrawal's thinking (and book) on the economics of artificial intelligence. It starts with the example of the microprocessor, an invention he frames as “reducing the cost of arithmetic.” He then presents the impact as lowering the cost of substitutes and raising the value of complements.
The third thing that happened as the cost of arithmetic fell was that it changed the value of other things—the value of arithmetic’s complements went up and the value of its substitutes went down. So, in the case of photography, the complements were the software and hardware used in digital cameras. The value of these increased because we used more of them, while the value of substitutes, the components of film-based cameras, went down because we started using less and less of them.
He then looks at AI and frames it around the reduction of the cost of prediction, first showing how AIs lower the value of our own predictions.
… The AI makes a lot of mistakes at first. But it learns from its mistakes and updates its model every time it incorrectly predicts an action the human will take. Its predictions start getting better and better until it becomes so good at predicting what a human would do that we don’t need the human to do it anymore. The AI can perform the action itself.
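The learn-from-mistakes loop described above is essentially online learning. As a minimal sketch (the perceptron update rule, the toy feature set, and the `human_action` rule standing in for "what the human would do" are all illustrative assumptions, not from the piece), here is a predictor that updates its weights only when it predicts the human's action incorrectly, and keeps improving until it matches the human on every case:

```python
# Toy dataset: integer feature pairs and the "human action" (1 or 0) for each.
# The labeling rule is a hypothetical stand-in for the human's behavior.
data = [([a, b], 1 if a + b >= 3 else 0) for a in range(3) for b in range(3)]

weights, bias = [0.0, 0.0], 0.0

def predict(x):
    """Predict the human's action from features x."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

total_mistakes = 0
for epoch in range(500):                 # repeated exposure to the same decisions
    mistakes = 0
    for x, y in data:
        y_hat = predict(x)
        if y_hat != y:                   # update the model only on a wrong prediction
            delta = y - y_hat
            weights = [w + delta * xi for w, xi in zip(weights, x)]
            bias += delta
            mistakes += 1
    total_mistakes += mistakes
    if mistakes == 0:                    # predictions now match the human everywhere
        break

print(all(predict(x) == y for x, y in data))
```

Because the toy data here is linearly separable, the perceptron convergence theorem guarantees the mistakes eventually stop; at that point, as the quote puts it, "we don't need the human to do it anymore."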
The very interesting twist comes here, where he mentions the trope that “data is the new oil” but instead presents judgment as the other complement that will gain in value.
But there are other complements to prediction that have been discussed a lot less frequently. One is human judgment. We use both prediction and judgment to make decisions. We’ve never really unbundled those aspects of decision making before—we usually think of human decision making as a single step. Now we’re unbundling decision making. The machine’s doing the prediction, making the distinct role of judgment in decision making clearer. So as the value of human prediction falls, the value of human judgment goes up because AI doesn’t do judgment—it can only make predictions and then hand them off to a human to use his or her judgment to determine what to do with those predictions. (emphasis mine)
This is essentially the same idea as advanced or “centaur” chess, where a combination of human and AI can outperform either one separately. We could also link this to the various discussions on ethics, trolley problems, and autonomous killer robots. The judgment angle above doesn’t automatically solve any of these issues, but it does provide another way of understanding the split of responsibilities we could envision between AIs and humans.
The author then presents five imperatives for businesses looking to harness AIs and predictions: “Develop a thesis on time to AI impact; Recognize that AI progress will likely be exponential; Trust the machines; Know what you want to predict; Manage the learning loop.” One last quote, from his fourth imperative:
The organizations that will benefit most from AI will be the ones that are able to most clearly and accurately specify their objectives. We’re going to see a lot of the currently fuzzy mission statements become much clearer. The companies that are able to sharpen their visions the most will reap the most benefits from AI. Due to the methods used to train AIs, AI effectiveness is directly tied to goal-specification clarity.