How explainable artificial intelligence can help people innovate


In the field of artificial intelligence (AI), researchers have created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at superhuman levels. Yet these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and can also tell researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

My field of research focuses on developing AI algorithms that can explain themselves in ways that humans can understand. If we succeed, I believe that AI will be able to uncover facts about the world that have not yet been discovered, teach us new things, and lead to new innovations.

One area of AI, called reinforcement learning, studies how computers can learn from their own experience. In reinforcement learning, an AI explores the world and receives positive or negative feedback based on its actions (a toy sketch of this trial-and-error loop appears at the end of this article). This approach has produced algorithms that have independently learned to play chess at superhuman levels and to prove mathematical theorems without human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik's Cube.

Through reinforcement learning, AI can learn on its own to solve problems that even humans struggle with. This has led me and many other researchers to think less about what AI can learn and more about what humans can learn from AI. A computer that can solve a Rubik's Cube should also be able to teach people how to solve it.

Unfortunately, superhuman AI minds are currently out of our reach. AI makes a terrible teacher; it is what the computing world calls a "black box." A black-box AI simply spits out solutions without giving reasons for them. Computer scientists have been trying to open this black box for decades, and recent research suggests that many AI algorithms actually think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify an animal.

The effort to open up the black box is called explainable AI. My research group at the University of South Carolina AI Institute is interested in developing explainable AI, and we work extensively with the Rubik's Cube. The Rubik's Cube is essentially a pathfinding problem: find a path from point A (a scrambled Rubik's Cube) to point B (a solved Rubik's Cube). Other pathfinding problems include navigation, theorem proving, and chemical synthesis (pathfinding is also sketched at the end of this article).

My lab has launched a website where anyone can watch how an AI algorithm solves the Rubik's Cube. However, it would be difficult to learn how to solve the cube from this website, because the computer cannot tell you the logic behind its solutions.

Solving a Rubik's Cube can instead be broken down into a few generalized steps; for example, the first step forms a cross, and the second step places the corner pieces. While the Rubik's Cube itself has more than 10^19 possible combinations, a generalized step-by-step guide is very easy to remember and applicable to many different scenarios.
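To make the trial-and-error loop of reinforcement learning described above concrete, here is a minimal sketch: tabular Q-learning on a hypothetical five-state corridor, where the agent is rewarded only for reaching the final state. The environment, the reward values, and the hyperparameters are illustrative assumptions, not the setup behind any of the systems mentioned in this article.

```python
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def best_action(state):
    """Greedy action; ties broken randomly so early episodes still move."""
    top = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == top])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action.
        action = random.choice(ACTIONS) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # the "feedback"
        # Nudge the value estimate toward reward + discounted future value.
        target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state

# After training, the greedy policy heads straight for the goal.
print([best_action(s) for s in range(N_STATES - 1)])  # expect [1, 1, 1, 1]
```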
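The pathfinding framing can be sketched just as simply. Rather than implement a full Rubik's Cube, the toy example below runs breadth-first search on a three-tile puzzle whose only moves swap adjacent tiles; the idea of searching from a scrambled state A to a solved state B is the same. The puzzle and its move set are assumptions made for illustration, not my lab's actual solver.

```python
from collections import deque

SOLVED = (1, 2, 3)

def moves(state):
    """Yield (move_name, next_state) pairs for each legal adjacent swap."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield f"swap{i}{i+1}", tuple(s)

def solve(start):
    """Return the shortest move sequence from start to SOLVED."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == SOLVED:
            return path
        for name, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))

print(solve((3, 1, 2)))  # ['swap01', 'swap12']
```

Note that the solver returns only the moves, not the reasoning behind them; that is exactly the black-box problem described above.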
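Finally, the contrast between the cube's enormous state space and a short, memorable recipe can be shown with a tiny data structure: a plan is an ordered list of named macro-steps, each expanding into many primitive face turns. The step names follow the common beginner's method, but the move sequences below are placeholders for illustration, not real solving algorithms.

```python
# A human-readable plan: a handful of named macro-steps, each standing in
# for a long sequence of primitive face turns. The move lists below are
# placeholders, not real cube algorithms.
MACRO_STEPS = {
    "form the cross":            ["F", "R", "U"],
    "place first-layer corners": ["R", "U", "R'", "U'"],
    "solve the middle layer":    ["U", "R", "U'", "R'"],
    "solve the last layer":      ["F", "R", "U", "R'", "U'", "F'"],
}

def expand(plan):
    """Flatten a named plan into the primitive moves a machine executes."""
    return [move for step in plan for move in MACRO_STEPS[step]]

plan = list(MACRO_STEPS)  # four memorable steps
print(len(plan), "steps ->", len(expand(plan)), "primitive moves")
```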

Hita Joseph