[Photo gallery of Buenos Aires landmarks: National Congress, Roque Saenz Peña Avenue (aka Diagonal Norte), Puerto Madero, the Obelisk (the most popular symbol of Buenos Aires), Plaza Congreso, Plaza Libertad, and the Palacio de Aguas Corrientes]

IJCAI-15 will be held in Buenos Aires, Argentina, from July 25th to July 31st, 2015. We look forward to seeing you there.

Prof. Michael Wooldridge, Professor at the University of Oxford, UK, is the IJCAI-15 Conference Chair.

Prof. Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, is the IJCAI-15 Program Chair.

IJCAI-15 Invited Speakers

Dr. Steve Chien is Head of the Artificial Intelligence Group and Senior Research Scientist at the Jet Propulsion Laboratory, California Institute of Technology, where he leads efforts in autonomous systems for space exploration. Dr. Chien received the 1995 Lew Allen Award for Excellence, JPL's highest award recognizing outstanding technical achievements by JPL personnel in the early years of their careers. He was awarded NASA medals in 1997, 2000, and 2005 for the development and deployment of Artificial Intelligence software for space missions, and he is a four-time honoree in the NASA Software of the Year competition (twice in 1999, and in 2005 and 2011). In 2011 he received the inaugural AIAA Intelligent Systems Award for his contributions to spacecraft autonomy. He led the deployment of AI flight software onboard the Earth Observing One and Mars Exploration Rovers missions. He has also led the deployment of AI scheduling software for ground-based planning of space missions, most recently ASPEN for scheduling science observations for the Rosetta Orbiter, an ESA-led mission to explore the comet Churyumov-Gerasimenko.

Talk Title: Using Constraint-based Search to Schedule Science Campaigns for the Rosetta Orbiter
Date: 7/31/2015, 14hs Room LB
Abstract: In August 2014, Rosetta (http://blogs.esa.int/rosetta/) entered orbit around the comet 67P/Churyumov-Gerasimenko. Rosetta, a European Space Agency mission, is the first to deploy a soft lander to a comet and to escort a comet for an extended period (over one year). But Rosetta is also a pathfinding space mission from the perspective of operations and Artificial Intelligence in its use of the ASPEN Artificial Intelligence planning and scheduling software for early- to mid-range science activity scheduling for the Rosetta Orbiter. In my talk I first briefly discuss comets and their importance in understanding the evolution of our solar system and life on Earth. Second, I describe elements of the multidisciplinary Rosetta science planning process, which incorporates diverse science, geometric, engineering, and resource constraints. Next, I describe the constraint-driven scheduling automation and how AI has much to offer not only in schedule generation, but also in constraint enforcement, problem and constraint analysis, and iterative schedule refinement. Finally, I discuss prospects for autonomous spacecraft for future comet and other space missions.
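The flavor of constraint-driven schedule generation described in the abstract can be conveyed with a toy sketch. This is a hypothetical illustration only, not the actual ASPEN system: the activity names, durations, and the single power-cap constraint below are all invented, and real mission scheduling handles far richer geometric, timing, and resource constraints plus iterative repair.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    duration: int   # time steps the activity occupies
    power: int      # resource demand while active

def schedule(activities, horizon, power_cap):
    """Greedy constraint-based placement: each activity gets the earliest
    start time at which the shared power resource is never exceeded."""
    usage = [0] * horizon            # power committed at each time step
    placed = {}
    for act in activities:
        for start in range(horizon - act.duration + 1):
            window = range(start, start + act.duration)
            if all(usage[t] + act.power <= power_cap for t in window):
                for t in window:
                    usage[t] += act.power
                placed[act.name] = start
                break
        else:
            placed[act.name] = None  # no feasible slot: candidate for repair
    return placed

acts = [Activity("imaging", 3, 5), Activity("spectra", 2, 4), Activity("downlink", 2, 3)]
print(schedule(acts, horizon=6, power_cap=8))
# → {'imaging': 0, 'spectra': 3, 'downlink': 0}
```

Note how "spectra" is pushed to start 3 because running it alongside "imaging" would exceed the power cap; unplaceable activities are flagged (`None`) rather than dropped, mirroring the abstract's point that constraint analysis and iterative refinement matter as much as first-pass generation.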

Dr. Julien Cornebise is a senior research scientist at Google DeepMind, which he joined in 2012. After two MSc degrees, in Computer Science and in Mathematical Statistics, he completed his PhD in Mathematical Statistics in 2009 at University Pierre et Marie Curie and ParisTech, on adaptive sequential Monte Carlo methods, for which he received the Savage Award in Theory and Methods from the International Society for Bayesian Analysis. He then held several research positions at SAMSI/Duke University, the University of British Columbia, and University College London, working on advanced computational statistics. Prior to joining DeepMind, Julien also acted as an applied mathematics and statistics consultant to various pharmaceutical companies.

Talk title: Towards General Artificial Intelligence
Date: 7/28/2015, 14hs Room LB
Abstract: Founded in 2011 in London, Google DeepMind is a unique environment for ambitious long-term research. This talk provides the latest insights into how its interdisciplinary team has made a number of high-profile breakthroughs towards general-purpose learning agents by combining the best techniques from deep learning, reinforcement learning, and systems neuroscience.

Dr. Evgeniy Gabrilovich is a senior staff research scientist at Google, where he works on knowledge discovery from the web. Prior to joining Google in 2012, he was a director of research and head of the natural language processing and information retrieval group at Yahoo! Research. Evgeniy is an ACM Distinguished Scientist and a recipient of the 2014 IJCAI-JAIR Best Paper Prize. He is also a recipient of the 2010 Karen Spärck Jones Award for his contributions to natural language processing and information retrieval. Evgeniy currently serves as a program co-chair for WSDM 2015.

Talk Title: In Knowledge We Trust
Date: 7/29/2015, 14hs Room R
Abstract: Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. These knowledge repositories enable new kinds of functionality in Web search and many other applications, and help democratize information access. Historically, many of these repositories have been built through manual curation, which is challenging to scale. This talk will discuss how to leverage existing bodies of knowledge to automatically acquire even more knowledge. To this end, we will describe Knowledge Vault, a research project at Google that automatically extracts large numbers of facts from Web pages and reasons about their correctness using probabilistic inference. We will also present a complementary approach that seeks specific missing facts by automatically constructing Web search queries. The talk will introduce the problem of knowledge fusion, which involves another dimension of uncertainty compared to the standard data fusion setting, namely, extraction errors in addition to factual errors in the sources. The Knowledge-Based Trust (KBT) algorithm we developed explicitly models both kinds of errors, and jointly estimates the correctness of facts and the trustworthiness of their sources. Finally, we will discuss the current limitations and open challenges in knowledge extraction from the Web.
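The joint estimation idea in the abstract (fact correctness and source trustworthiness reinforcing each other) can be illustrated with a toy fixed-point computation. This is a deliberately simplified sketch, not the actual KBT algorithm: the noisy-or combination, the uniform 0.5 prior, and the example sources are all invented for illustration, and the real algorithm additionally models extraction errors separately from factual errors in the sources.

```python
from math import prod

def joint_estimate(claims_by_source, iters=20):
    """Iterate a simple fixed point: a source's trust is the mean believed
    correctness of its claims, and a claim's belief is a noisy-or of the
    trusts of the sources asserting it (corroboration raises belief)."""
    facts = {f for fs in claims_by_source.values() for f in fs}
    belief = {f: 0.5 for f in facts}  # prior: maximally unsure
    for _ in range(iters):
        trust = {s: sum(belief[f] for f in fs) / len(fs)
                 for s, fs in claims_by_source.items()}
        belief = {f: 1.0 - prod(1.0 - trust[s]
                                for s, fs in claims_by_source.items() if f in fs)
                  for f in facts}
    return trust, belief

# Hypothetical example: sources A and B corroborate fact X; C asserts Y alone.
sources = {"A": {"X"}, "B": {"X"}, "C": {"Y"}}
trust, belief = joint_estimate(sources)
print(belief["X"] > belief["Y"])   # corroborated fact ends up more believed
```

Under this toy model the corroborated fact's belief climbs toward 1 across iterations while the uncorroborated one stays at its prior, and the sources asserting it inherit higher trust, which is the qualitative behavior the abstract describes.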

Born in the American Midwest, Christof Koch grew up in Holland, Germany, Canada, and Morocco. He studied Physics and Philosophy at the University of Tübingen in Germany and was awarded his Ph.D. in Biophysics. Following four years at MIT, Christof joined the California Institute of Technology as a Professor of Biology and Engineering. After a quarter of a century, Christof left academia to become the Chief Scientific Officer at the not-for-profit Allen Institute for Brain Science in Seattle. He is leading a ten-year, large-scale, high-throughput effort to build brain observatories to map, analyze, and understand the mouse and human cerebral cortex.
Christof has authored more than 300 scientific papers and articles, eight patents, and five books concerned with the way computers and neurons process information and with the neuronal and computational basis of visual recognition, perception, and attention. Together with his long-time collaborator Francis Crick, Christof pioneered the scientific study of consciousness. His latest book is Consciousness: Confessions of a Romantic Reductionist. He is a frequent public speaker and writes a regular column for Scientific American Mind. Christof lives in Seattle and loves dogs, climbing, biking, and long-distance running.

Talk title: Consciousness in Biological and Artificial Brains
Date: 7/28/2015, 14hs Room R
Abstract: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry, or see red. I will discuss the scientific progress that has been achieved over the past decades in characterizing the behavioral and neuronal correlates of consciousness, based on clinical case studies as well as laboratory experiments. I will introduce the Integrated Information Theory (IIT), which explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory explains many biological and medical facts about consciousness and its pathologies in humans, can be extrapolated to more difficult cases, such as fetuses, mice, or non-mammalian brains, and has been used to assess the presence of consciousness in individual patients in the clinic. IIT also explains why consciousness evolved by natural selection. The theory predicts that feed-forward networks, such as deep convolutional networks, are not conscious, even if they perform tasks that in humans would be associated with conscious experience. Furthermore, and in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if they were to run software faithfully simulating the human brain, would experience next to nothing. That is, while in the biological realm intelligence and consciousness are intimately related, contemporary developments in AI dissolve that link, giving rise to intelligence without consciousness.

Michael L. Littman is a Professor of Computer Science at Brown University. His research in machine learning examines algorithms for decision making under uncertainty. He has earned multiple awards for teaching, and his research has been recognized with three best-paper awards on the topics of meta-learning for computer crossword solving, complexity analysis of planning under uncertainty, and algorithms for efficient reinforcement learning. Littman has served on the editorial boards of the Journal of Machine Learning Research and the Journal of Artificial Intelligence Research. He was general chair of the International Conference on Machine Learning in 2013 and program chair of the Association for the Advancement of Artificial Intelligence Conference in 2013, and he is a Fellow of AAAI.

Talk Title: Programming agents via rewards
Date: 7/31/2015, 14hs Room R
Abstract: The reinforcement-learning (RL) paradigm splits the problem of creating intelligent agents into two main pieces: (1) define a reward function that encourages desirable behavior, and (2) allow the agent to search for behavior that optimizes this reward function. The majority of RL research focuses on the second problem: how can we build agents that learn to maximize reward? In this talk, I will focus on the problem of creating suitable reward functions via a variety of mechanisms such as behavioral examples, evolutionary optimization, formal specifications, and human feedback. The goal of this work is to make the power of RL agents more scalable and accessible.
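The two-piece split described in the abstract can be sketched concretely. In this hypothetical example (not from the talk), piece (1) is a one-line reward function on a tiny 4-state chain, and piece (2) is a generic solver, here value iteration, that finds the behavior optimizing whatever reward it is given; changing only the reward function would change the resulting behavior.

```python
def step(s, a, n):
    """Deterministic transition on an n-state chain: action 0 = left,
    action 1 = right; moving off either end stays in place."""
    return max(0, s - 1) if a == 0 else min(n - 1, s + 1)

def value_iteration(n_states, reward, gamma=0.9, iters=100):
    """Piece (2): solve for optimal behavior under the given reward.
    Returns the greedy policy (chosen action per state)."""
    V = [0.0] * n_states
    for _ in range(iters):
        V = [max(reward(s, a) + gamma * V[step(s, a, n_states)] for a in (0, 1))
             for s in range(n_states)]
    return [max((0, 1), key=lambda a: reward(s, a) + gamma * V[step(s, a, n_states)])
            for s in range(n_states)]

# Piece (1): a reward function encouraging desirable behavior,
# here +1 for entering (or remaining in) the rightmost state.
policy = value_iteration(4, lambda s, a: 1.0 if step(s, a, 4) == 3 else 0.0)
print(policy)  # → [1, 1, 1, 1]: every state prefers moving right
```

The talk's focus corresponds to authoring the lambda, which here is trivial but in realistic tasks is the hard part that mechanisms like demonstrations and human feedback aim to ease.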

Jon McCormack is a researcher in computing and an internationally acclaimed electronic media artist. He is currently an ARC Australian Research Fellow in the Faculty of Information Technology at Monash University in Melbourne.
With a background in art, mathematics and computer science, his research seeks to discover new kinds of creativity using computers. This research spans visualisation and virtual environments, evolutionary systems, machine intelligence, human-computer interaction, music composition and sound arts.
McCormack is the recipient of more than 15 international awards for both art and computing research, most recently the 2012 Eureka Prize for Innovation in Computer Science. His artworks have been widely exhibited at leading galleries, museums, and symposia, including the Museum of Modern Art (New York, USA), Tate Gallery (Liverpool, UK), ACM SIGGRAPH (USA), the Ars Electronica Museum (Austria), and the Australian Centre for the Moving Image (Australia). The book "Computers and Creativity" (Springer, 2012), edited by McCormack and Prof. Mark d'Inverno (Goldsmiths), surveys how human creativity is being radically changed by technology and has become a significant reference text for the field, described by Professor Luc Steels as "required reading for everyone involved in the creative arts and interested in the role of technology towards shaping its future."

Talk Title: Art is a System
Date: 7/30/2015, 14hs Room LB
Abstract: Most approaches to AI and the Arts are conceptualised as problems involving the production or classification of produced artefacts. This is unsurprising, as we naturally think of human artists creating artefacts as the main activity that exemplifies this societal role. One implicit assumption of this conceptualisation is that art making is simply a problem of production, i.e. it is fixated on the problem of generating appropriate output. In this talk I will offer a different view of Art and AI. Rather than focusing on the production of objects, we consider art as a system of exchanges, relationships, and interactions, and investigate what this means for AI approaches, past, present, and future. A systems view enables us to reimagine the role of AI in artistic practice, and more broadly in non-anthropocentric creativity. Current approaches focus on the automation of human creativity, which I would argue is both a technical and ethical cul-de-sac. A systems view, which can incorporate machines as artists, critics, provocateurs, assistants, or catalysts in an artistic ecosystem, allows us to imagine new roles for AI and the Arts and new kinds of art.

Manuela M. Veloso is the Herbert A. Simon University Professor in the Computer Science Department at Carnegie Mellon University, with courtesy appointments in the Robotics Institute and the Machine Learning, Electrical and Computer Engineering, and Mechanical Engineering Departments. Her research is in Artificial Intelligence and Robotics. She founded and directs the CORAL research laboratory, for the study of autonomous agents that Collaborate, Observe, Reason, Act, and Learn (www.cs.cmu.edu/~coral). Professor Veloso is an IEEE Fellow, AAAS Fellow, and AAAI Fellow, and a past President of AAAI and RoboCup. She was the Program Chair of IJCAI'07. She received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso and her students have worked with a variety of autonomous robots, including mobile service robots and soccer robots. See www.cs.cmu.edu/~mmv for further information, including publications.

Talk Title: Making Intelligent Mobile Service Robots a Reality
Date: 7/30/2015, 14hs Room R
Abstract: We pursue research on making autonomous task-focused mobile robots a reality in our environments. I will present the integrated perception, cognition, and actuation challenges of task-based autonomous robots. I will present symbiotic robot autonomy, in which robots are robustly autonomous in parts of their tasks, such as localization and navigation, and handle their limitations by proactively asking for help from humans, accessing the web for missing knowledge, and coordinating with other robots. We aim for such intelligent robots to coexist and interact with humans in a natural way. I will then present language-based human-robot interaction, in terms of the human use of complex commands and the teaching and management of new tasks. Our CoBot service robots have traveled more than 1,000 km in our multi-floor buildings, escorting visitors and transporting packages between locations. The work is jointly pursued with my research group; see www.cs.cmu.edu/~coral.

Pete Wurman is currently CTO of Kiva Systems, the Boston-based company that pioneered the use of mobile robotics in warehouses and distribution facilities. Pete joined Kiva in 2004 as a technical co-founder with Raffaello D'Andrea to help founder Mick Mountz bring his vision to life. By the time it was acquired by Amazon in 2012, Kiva had delivered several warehouse systems to Fortune 500 retailers with as many as 1,000 robots in a building. Prior to joining Kiva, Pete was an Associate Professor of Computer Science at North Carolina State University in Raleigh, where he was co-director of the E-commerce Program. Pete's teaching focus was e-commerce systems, and his research focused on electronic auctions (especially combinatorial auctions), multi-agent systems, and resource allocation. Pete earned his Ph.D. in Computer Science from the University of Michigan in 1999 and his B.S. in Mechanical Engineering from M.I.T. in 1987.

Talk Title: AI and Robotics: Tales from Kiva Systems
Date: 7/29/2015, 14hs Room LB
Abstract: Kiva Systems was founded in 2002 with the goal of using fleets of mobile robots to deliver shelves of inventory to pick operators in warehouses. Over the following decade, Kiva deployed many systems to Fortune 500 companies and was eventually acquired by Amazon.com. Many core AI concepts were instrumental in building the robust and flexible software system that controls our robots. In this talk I will give a brief history of Kiva and some insight into AI's influence on its design. I will also discuss the results of the recent Amazon Picking Challenge and the next big frontier of robotics in warehousing.
