I’ve finished reading “The Alignment Problem” (ISBN: 9780393635829), by Brian Christian. As the subtitle suggests, it explores how the fuzzier aspects of human values intersect with the growing relevance of machine learning (ML); by ML, the author almost exclusively means neural networks. Overall, it was a good book. As with most, though, it missed a few things. It’s definitely worth a read for those interested in the potential social impact of artificial intelligence (AI).

The book is organized into three sections of three chapters each. I have a problem with the naming of the sections, but the chapter organization makes sense. The first section is “Prophecy”, which I feel would have been a better title for the last one. In opposition to that title, the first three chapters set the foundation of the discussion by defining and discussing representation, fairness, and transparency. Plenty of articles have covered the first two issues.

Representation, the first chapter, focuses on the need to represent a wide variety of people, describing the evolution of machine vision systems, including the inherent bias toward mainly white, male images in many of the primary databases used to train them. The chapter is well written but covers nothing new. Fairness, again, doesn’t tread new ground, describing parole systems and their problems. The point of this chapter is that a broken system, such as criminal justice, isn’t the best pattern to use when training an inference engine.

By far, the best of the first three chapters is the one on transparency. The lack of transparency in ML systems is going to be one of the biggest impediments to widespread adoption, and one of the main reasons governments will become involved. It’s important that this chapter is third, because the first two chapters show examples of why transparency is required.

AI company founders and developers like to talk about the “black box” as if nothing can be done about it. That is not true; providing insight simply takes a change in design. Companies know what they designed the nodes in their layers to analyze, and developers adjust weights for reasons grounded in training results. That information can be opened up and analyzed. It’s not just the datasets that need to be observed, but also the ML engines.
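To make the point concrete, here is a minimal sketch of my own (not from the book, and far simpler than a neural network): even a toy linear classifier trained by gradient descent exposes its learned weights, which can be logged and audited alongside the training data. All names are illustrative assumptions.

```python
# Toy illustration: a model whose learned parameters remain inspectable.

def train_linear(examples, labels, lr=0.1, epochs=200):
    """Train weights via stochastic gradient descent on squared error."""
    n = len(examples[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Toy data: the label depends only on the first feature.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
y = [1.0, 0.0, 1.0, 0.0]
weights = train_linear(X, y)

# "Opening up" the model: the weights themselves are the explanation.
for i, wi in enumerate(weights):
    print(f"feature {i}: weight {wi:+.2f}")
```

Real deep networks are vastly harder to read than two weights, which is exactly why transparency takes a deliberate change in design rather than being impossible.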

After the introductory chapters, the rest of the book relies on the form of ML that is well defined in the fourth chapter: reinforcement learning. For understanding, at a well-explained yet non-technical level, what reinforcement learning is and how it differs from supervised learning, this chapter is an excellent baseline. It’s probably my favorite chapter in the book.
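The core contrast can be sketched in a few lines (my own toy example, not the book’s): supervised learning fits known labels, while a reinforcement learner is never told the right answer and must discover, by trial and error, which action earns reward. Here a simple epsilon-greedy agent faces a two-armed bandit.

```python
import random

random.seed(0)

# A 2-armed bandit: the agent only sees a reward after each action it tries.
true_payout = [0.2, 0.8]          # hidden from the agent
estimates = [0.0, 0.0]            # the agent's running reward estimates
counts = [0, 0]

for step in range(1000):
    # epsilon-greedy: mostly exploit the best-looking arm, sometimes explore
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

print("learned payout estimates:", [round(e, 2) for e in estimates])
```

The agent ends up preferring the better arm without ever seeing a labeled example, which is the distinction the chapter draws between the two paradigms.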

Shaping, chapter 5, is hit and miss. The focus is on how we shape human behavior, and the chapter is weak on how that can be applied to ML. Yet, in the middle, it has a good description of sparsity and how we can deal with sparse information to make more efficient inferences. Curiosity, the last chapter in the middle section, is also a bit too focused on humans, with too little about how the knowledge can be adapted to AI. One problem I had was with a link between the two that says, “An image-labeling system like AlexNet might require hundreds of thousands of images, each of them labeled by humans. That’s quite clearly not how we acquire our own visual skills early in life.” The problem is that the author suddenly drops into supervised learning. Infants see lots of things; they then label some themselves while also getting reinforcement from other humans and from their own actions to adjust labels and add new ones. Human learning isn’t dependent solely on labels, but labels are “clearly” part of our learning.
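The sparsity problem that shaping addresses can be shown in a toy sketch of my own construction (not the book’s): on a one-dimensional track, a sparse reward says nothing until the goal is reached, while a shaped reward gives graded feedback on every step of progress.

```python
GOAL = 10  # target position on a 1-D track

def sparse_reward(pos):
    """Reward only at the goal: almost every step gives zero feedback."""
    return 1.0 if pos == GOAL else 0.0

def shaped_reward(pos, new_pos):
    """Potential-based shaping: also reward progress toward the goal."""
    progress = abs(GOAL - pos) - abs(GOAL - new_pos)
    return sparse_reward(new_pos) + progress

# A learner at position 3 moving to 4:
print(sparse_reward(4))      # 0.0 -> no signal from the sparse reward
print(shaped_reward(3, 4))   # 1.0 -> shaping rewards the step of progress
```

With only the sparse signal, a learner wandering randomly gets no feedback until it stumbles onto the goal; shaping is one way to make such sparse problems tractable.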

The last three chapters are on imitation, inference, and uncertainty. The imitation chapter was intriguing as it provided a nice explanation of both the strong points and the weaknesses of imitation learning.

Chapter 8, Inference, is poorly named only in that the author (or editor…) decided to use only one-word chapter titles. Neural networks are all about inference, so why would there be only one chapter with that title? The chapter is centered on inverse reinforcement learning, a type of inference. In “normal” reinforcement learning, a goal (a reward) is given and the system learns the actions needed to achieve it. Inverse reinforcement learning turns that around: given observed actions, can a mind or machine infer the goal behind them? As the book shows, that can have exciting impacts on the training of robotic movement and other areas where spelling out specific instructions for actions is too complex for sanity or completeness.
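The direction reversal can be illustrated with a toy of my own, far cruder than real inverse-RL algorithms: forward reinforcement learning starts from a known reward and finds actions, while the inverse problem starts from demonstrated behavior and tries to recover the reward. Here the “recovery” is just a naive heuristic that credits the state the demonstrator reliably reaches; the state names are made up for illustration.

```python
from collections import Counter

# Expert demonstrations: trajectories through states of a tiny world.
demos = [
    ["start", "hall", "kitchen", "coffee"],
    ["start", "hall", "coffee"],
    ["start", "kitchen", "coffee"],
]

# Crude reward recovery: the terminal state the expert most often
# reaches is inferred to be the goal (i.e., the rewarding state).
endings = Counter(traj[-1] for traj in demos)
inferred_goal, _ = endings.most_common(1)[0]
print("inferred goal state:", inferred_goal)   # -> coffee
```

Real inverse-RL methods infer a full reward function consistent with the demonstrations, but the shape of the problem is the same: behavior in, goal out.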

The final chapter is about uncertainty. Fittingly, it stays uncertain itself, linking to nothing specific that would impact ML. It repeats a number of the risks of rigid ML conclusions given that the real world is uncertain. Again, nothing particularly new, but it does round out the book into a parallel group of segments.

“The Alignment Problem” stays non-technical, but it’s a bit too long for a non-technical audience, primarily because of the amount of purely human, psychological information added to the book. Some may find that important and useful. Still, the book remains another good entry among those aiming to discuss the increasing importance of AI in business, government, and our lives.
