About This Course
Artificial Intelligence began in the 1950s, when the first computers became available. At that time, combinatorial problems (such as playing chess) were considered interesting because they required mental effort that programmers did not know how to describe, encode, or simulate. The main tools of symbolic AI are combinatorics and logic processing. Thanks to the rapid increase in computing power (Moore's Law), some of those problems have become solvable, and computers now regularly defeat humans (for example, in chess or in the game of Go). In this course, we will look back to the early days of AI to understand the kinds of problems that were being solved.
The next important step in AI was the development of so-called "machine learning" approaches, in which we do not encode a problem using logic. Instead, the data and the desired solutions are presented to the computer, and the system learns directly from a large data set. The first systems of this type were pattern recognition systems. Since the data is encoded automatically by the computer, no symbols are processed (as in logic), and we call this "subsymbolic learning".
In the 1990s and early 2000s, one of the most important problems in AI was bringing symbolic and subsymbolic systems together. This has become possible in recent years through the development of Large Language Models (LLMs), which are based on deep neural networks but can handle language and even logic questions in a way that is fundamentally different from earlier approaches. In this course, we will explore this convergence and its possible applications.
We also look at so-called graphical models, in which knowledge is stored in the nodes and edges of a graph, and at how probabilistic reasoning can be implemented on top of them. We are living in an era in which probabilistic reasoning and even formal mathematical methods could be incorporated into LLMs.
Given the importance of the field and the profound questions it raises, even for our own identity as human beings, we also examine the ethical issues being discussed today around AI and the future of work as we have known it.
The course provides students with the necessary background to decide whether to study AI and pursue a career in the field.
Course Staff
Prof. Dr. Raúl Rojas
Raúl Rojas is an Emeritus Professor of AI at Freie Universität Berlin. He is a two-time World Robotics Champion (2004 in Italy and 2005 in Japan), and his computer-driven autonomous vehicles have been on the streets of Berlin since 2007.
Dr. Rojas has led other high-tech projects: (a) reading devices for the blind, consisting of a video camera mounted on glasses that automatically reads text; (b) micro-robots the size of insects that imitate the dance of bees; (c) autonomous wheelchairs that move people around their homes, following verbal commands or controlled by brain waves; (d) humanoid robots for different activities; and (e) the classroom of the future, which Dr. Rojas developed to improve the teaching of mathematics.
In March 2015, Dr. Rojas received the "Professor of the Year" award, which is presented by the German Professors' Association (35,000 members). In addition to this recognition, Dr. Rojas has received awards for his academic career in several countries: three other awards in Germany, one in the United Kingdom, one in Austria, one in Sweden, two in the United States, one in Spain, and three in Mexico.