We are pleased to announce talks by:
Andrew Barto
Department of Computer Science
University of Massachusetts Amherst
In the computational reinforcement learning (RL) framework, the reward function determines the problem the learning agent is trying to solve. Properties of the reward function influence how easy or hard the problem is, and how well an agent may do in trying to solve it, but RL theory and algorithms are insensitive to the source of rewards (except perhaps requiring that reward magnitude be bounded). This is a great strength of the framework because of the generality it confers, but it is also a weakness because it defers key questions about the nature of reward functions. I describe a series of computational experiments recently carried out by Satinder Singh, Rick Lewis, and me that elucidate aspects of the relationship between ultimate goals (cf. reproductive success for an animal) and the primary rewards that drive learning. Among the lessons provided by these experiments are a clarification of the traditional notions of extrinsically and intrinsically motivated behavior, and the finding that the precise form of an optimal reward function need not bear a transparent relationship to an agent's ultimate goal.
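The point that the reward function, not the dynamics, defines the problem can be illustrated with a minimal sketch (my own illustration, not from the talk's experiments): the same tabular Q-learning agent on a small deterministic chain, with the reward function passed in as a swappable parameter. The environment, hyperparameters, and reward choices below are all illustrative assumptions.

```python
import random

# Illustrative sketch: tabular Q-learning on a 5-state chain where the
# reward function is an explicit, swappable parameter.

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Deterministic chain dynamics: move one step, clipped to [0, 4]."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, nxt == N_STATES - 1

def q_learn(reward_fn, episodes=500, seed=0):
    """Standard Q-learning; only reward_fn distinguishes the problems."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def greedy(s):
        best = max(q[(s, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

    for _ in range(episodes):
        s = 0
        for _ in range(50):                      # cap episode length
            a = rng.choice(ACTIONS) if rng.random() < EPS else greedy(s)
            s2, done = step(s, a)
            r = reward_fn(s, a, s2)
            target = r if done else r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (target - q[(s, a)])
            s = s2
            if done:
                break
    return q

# Same dynamics, two reward functions: two different problems.
reach_goal = lambda s, a, s2: 1.0 if s2 == N_STATES - 1 else 0.0  # prize at the far end
stay_home  = lambda s, a, s2: 1.0 if s2 == 0 else 0.0             # prize for hugging state 0

q_goal = q_learn(reach_goal)
q_home = q_learn(stay_home)
best_goal = max(ACTIONS, key=lambda a: q_goal[(1, a)])  # learned: head right
best_home = max(ACTIONS, key=lambda a: q_home[(1, a)])  # learned: head left
```

With identical dynamics and an identical learning algorithm, swapping the reward function reverses the learned greedy policy, which is the sense in which the reward function determines the problem being solved.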
Andrew Barto is Professor of Computer Science, University of Massachusetts, Amherst. He has been Chair of the UMass Department of Computer Science since 2007. He received his B.S. with distinction in mathematics from the University of Michigan in 1970, and his Ph.D. in Computer Science in 1975, also from the University of Michigan. He joined the Computer Science Department of the University of Massachusetts Amherst in 1977 as a Postdoctoral Research Associate, became an Associate Professor in 1982, and has been a Full Professor since 1991. He is Co-Director of the Autonomous Learning Laboratory and a core faculty member of the Neuroscience and Behavior Program of the University of Massachusetts. His research centers on learning in natural and artificial systems, and he has studied machine learning algorithms since 1977, contributing to the development of the computational theory and practice of reinforcement learning. He currently serves as an associate editor of Neural Computation, and as a member of the editorial boards of the Journal of Machine Learning Research, Adaptive Behavior, and Theoretical Computer Science-C: Natural Computing. Professor Barto is a Fellow of the American Association for the Advancement of Science, a Fellow and Senior Member of the IEEE, and a member of the American Association for Artificial Intelligence and the Society for Neuroscience. He received the 2004 IEEE Neural Networks Society Pioneer Award for contributions to the field of reinforcement learning. He has published over one hundred papers or chapters in journals, books, and conference and workshop proceedings. He is co-author with Richard Sutton of the book "Reinforcement Learning: An Introduction," MIT Press 1998.
Ron Brachman
Yahoo! Labs and Research Operations
Artificial Intelligence has played an important role in Web search and advertising, with substantial contributions coming from IR-influenced text processing and statistical machine learning. Given the immense value being extracted from these areas, Web companies have not really needed to focus on more structure- and meaning-oriented AI approaches like those from the knowledge representation tradition. But hidden among the successes of data-driven technologies on the Web are some interesting challenges that over the next decade could demand innovations from the knowledge representation and reasoning community.
My goal will be to give a very high-level sense of where some of those opportunities might come from. In addition to a quick look at the Web and KR, I will offer a few remarks about research opportunities outside of academia, based on my experience in industry and at DARPA.
Ron Brachman is Vice President of Yahoo! Labs and Research Operations and the creator of Yahoo!'s Academic Relations organization. His work on knowledge representation and reasoning (KR&R) is well-known and has been extremely influential in the history of artificial intelligence. Among other things, with Hector Levesque he has written an important KR&R textbook, and he was a founder of the International Conferences on the Principles of Knowledge Representation and Reasoning. His work was the basis for the entire area of Description Logics. Ron received his Bachelor's degree from Princeton University and Master's and Ph.D. degrees from Harvard University. He served as President of AAAI, and from 2002 to 2005 he was the Director of the Information Processing Technology Office at DARPA. Prior to that he created and managed world-class research teams at Bell Labs and AT&T Labs. He has won distinguished service awards from AAAI and IJCAI, and is a Fellow of AAAI, ACM, and IEEE.
Jonathan Chang
Facebook
Facebook's massive data provide opportunities to answer long-standing questions about hundreds of millions of users: Who are they? What do they do? What do they want to do? Answering such questions at scale requires leveraging advances in data infrastructure, machine learning, and data mining. In this talk I will present several approaches pursued by members of the Data Team at Facebook to answering these questions. I will describe how Facebook deals with large data sets, how we can learn and make predictions at scale, and how we can use our unique data to gain insights into the ethnic composition, political inclinations, geographic distribution, and sentiments of our user base.
Jonathan Chang is a member of the Data Science team at Facebook, where he explores Bayesian probabilistic modeling, topic modeling, data mining, and their application to large-scale systems. He earned his B.S. in Electrical and Computer Engineering from Caltech in 2003, and inches ever closer to finishing his Ph.D. at Princeton.