[Photo gallery: the National Congress; Roque Sáenz Peña Avenue (aka Diagonal Norte); Puerto Madero; the Obelisk, the most popular symbol of Buenos Aires; Plaza Congreso; Plaza Libertad; Palacio de Aguas Corrientes]

IJCAI-15 will be held in Buenos Aires, Argentina, from July 25th to July 31st, 2015. We look forward to seeing you there.

Prof. Michael Wooldridge, University of Oxford, UK, is the IJCAI-15 Conference Chair.

Prof. Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, is the IJCAI-15 Program Chair.

Brochure Addendum

  1. Brochure pages 12 and 33: the IJCAI-JAIR Best Paper Prize is cancelled and no presentation will be given. Please update the text description and the schedule.
  2. Brochure page 34: the author of paper Main550 is blank; it should be "Michael Kim".
  3. Brochure page 36: the information for paper ML-ISC1 is not up to date; the paper title should be "Learning in the Model Space for Temporal Data" and the presenter should be "Peter Tino".
  4. Brochure page 37, session Main13 (Social Networks 1): paper Main834 should be presented before paper Main234.
  5. Brochure page 43, session Panel1: the name of the panel should be updated to "Panel: Who Speaks for AI?".
  6. Brochure page 44, session Main28 (Social Networks 2): paper Main1228 should be presented before paper Main665.
  7. Brochure page 49, paper Main347: there is an extra "{" in the paper title.
  8. Brochure page 52: the Student Reception (Thursday, 20:00 to 24:00 hrs) is missing.

Doctoral Consortium Schedule

Monday July 27 
in Room 448 in the New Building of Facultad de Ciencias Económicas 
 
8:45-9:15 Introductions 
9:15-10:30 - Poster advertisements (authors in alphabetical order from Aleksandrov to Ramdas)
10:30-11:00 - Coffee break 
11:00-11:45 - Posters 
11:45-12:45 - Talk by Gabriela Llaneza. She will present and discuss typical areas of concern in PhD writing. Lexical and syntactic choices will be dealt with in a game-like format to enhance audience participation. Students will receive a handout with pointers to tips and online resources.
12:45-1:45 - Lunch for the DC participants in the coffee break room on the 1st floor with the poster boards
1:45-2:30 - Poster advertisements (authors in alphabetical order from Roijers to Zhu plus Cornelio, Jahedpari, and Koitz)
2:30-3:30 - Career panel: Marie desJardins (University of Maryland, Baltimore County, USA), Maria Vanina Martinez (Universidad Nacional del Sur, Argentina), Pascal Poupart (University of Waterloo, Canada), Meinolf Sellman (IBM, USA), Jie Tang (Tsinghua University, China)
3:30-4:00 - Coffee break 
4:00-5:00 - Posters 
5:00-6:30 - Meet and Greet event (open to all attendees)  in Salón de Actos on the ground floor of the same building 
 
Tuesday July 28 
In the Catalinas Room at the Sheraton Hotel
 
9:40-10:40  - Talk by Toby Walsh, UNSW Australia and NICTA (open to all attendees) 
"Managing your Supervisor"
To get the most out of your PhD, you need to take charge of your relationship with your supervisor. In this interactive session, we will cover a number of topics: understanding your supervisor (as a first step to managing them), manipulating them, expectations, pitfalls, conflict, and your overall research career. Whilst many of the topics are focused on the PhD-supervisor relationship, much holds true for other similar relationships, like that between postdoc and supervisor. Please come and share your questions and problems.
 
 
POSTER: the maximum size is A0 in vertical (portrait) format (1189 mm high x 841 mm wide).
 
POSTER ADVERTISEMENT: be prepared to give a 2.5-minute presentation on the highlights of your work. If you prepare slides for it, use no more than 3 slides. Send a PDF (landscape orientation) before the DC to the email address listed on the website, or have it ready to be copied onto a single laptop that we will use for the presentations. The time limit will be enforced strictly.
 

ML Track Invited Sister Conference Presentations 1
15:10-16:30 Tuesday, Jul 28 - Room LB2

15:10 - Peter Tino
"Learning in the Model Space for Temporal Data"
I will first introduce the general concept of the emerging field of "learning in the model space". The talk will then focus on time series data. After reviewing some of the existing model based time series kernels, I will introduce a framework for building new kernels based on temporal filters inspired by a class of parametrized state space models known as "reservoir" models. I will briefly outline the key theoretical concepts of their analysis and design. The methodology will be demonstrated in a series of sequence classification tasks and in an incremental fault detection setting.
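
To make the general "learning in the model space" idea concrete, here is a minimal sketch (mine, not the talk's): each series is represented by the parameters of a simple fitted model, a toy AR(1) fit rather than the reservoir models discussed above, and a kernel is then computed between those fitted models instead of the raw sequences. All names and parameter choices are illustrative.

    import numpy as np

    def fit_ar1(series):
        # Fit x[t] ~ a * x[t-1] + b by least squares; (a, b) is the "model"
        # that stands in for the series in model space.
        x_prev, x_next = series[:-1], series[1:]
        design = np.column_stack([x_prev, np.ones_like(x_prev)])
        coeffs, *_ = np.linalg.lstsq(design, x_next, rcond=None)
        return coeffs

    def model_space_kernel(s1, s2, gamma=1.0):
        # RBF kernel computed between fitted models rather than raw series.
        m1, m2 = fit_ar1(s1), fit_ar1(s2)
        return np.exp(-gamma * np.sum((m1 - m2) ** 2))

    rng = np.random.default_rng(0)
    slow = np.cumsum(0.1 * rng.normal(size=200))   # smooth random walk
    fast = rng.normal(size=200)                    # white noise
    print(model_space_kernel(slow, slow))          # 1.0 (identical models)
    print(model_space_kernel(slow, fast))          # smaller: different dynamics

Such a model-space kernel can then be plugged into any standard kernel classifier for sequence classification.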

15:30 - Yan Liu
"Fast Spatio-temporal Analysis via Low Rank Tensor Learning"
Accurate and efficient analysis of large-scale multivariate spatio-temporal data is critical in sustainability, mobile applications, and big data in general. Existing models usually assume simple inter-dependence among variables, space, and time, and are computationally expensive. In this talk, I will discuss a unified low-rank tensor learning framework for multivariate spatio-temporal analysis, which can conveniently incorporate important properties of spatio-temporal data into modeling, such as spatial clustering and shared structure among variables. I will demonstrate how the general framework can be applied to cokriging and forecasting tasks, and present an efficient greedy algorithm in the batch setting as well as a parallel online update algorithm to solve the resulting optimization problem with convergence guarantees.

15:50 - Luc De Raedt
"Languages for Mining and Learning"
Applying machine learning and data mining to novel applications is cumbersome. This observation is the prime motivation for the interest in languages for learning and mining. In this talk, I shall provide a gentle introduction to three types of languages that support machine learning and data mining: inductive query languages, which extend database query languages with primitives for mining and learning; modelling languages, which allow one to declaratively specify and solve mining and learning problems; and programming languages that support the learning of functions and subroutines. I shall use an example of each type of language (the mining views for inductive querying, MiningZinc for modelling, and ProbLog for probabilistic programming) to introduce the underlying ideas and to put them into a common perspective. This then forms the basis for a short analysis of the state of the art.

16:10 - Stephen Muggleton
"Learning as Interpretation"
This talk describes a new approach to machine learning which involves meta-level interpretation of the examples in terms of additional logical symbols. Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting predicate invention and learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will overview theoretical and implementational advances in this new area, including the ability to learn Turing-computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. It will conclude by pointing to the many challenges which remain to be addressed within this new area.
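
As a very rough illustration of the meta-interpretive mechanism described above (my toy construction, not MIL itself): when a goal cannot be proved from the background facts alone, the learner instantiates a single hard-coded "chain" meta-rule P(X,Y) :- Q(X,Z), R(Z,Y) and keeps the resulting first-order clause. Real MIL systems work with full first-order logic and many meta-rules; all names below are made up.

    # Toy sketch only: ground facts, one "chain" meta-rule, no real unification.
    background = {
        ("parent", "ann", "bob"),
        ("parent", "bob", "cat"),
    }
    predicates = ["parent"]

    def prove(goal, facts, learned, depth=2):
        # Try the facts first; otherwise instantiate the chain meta-rule
        # P(X,Y) :- Q(X,Z), R(Z,Y) and record the invented clause.
        pred, a, b = goal
        if goal in facts:
            return True, learned
        if depth == 0:
            return False, learned
        for q in predicates:
            for r in predicates:
                for _, _, z in [f for f in facts if f[0] == q and f[1] == a]:
                    ok1, learned = prove((q, a, z), facts, learned, depth - 1)
                    ok2, learned = prove((r, z, b), facts, learned, depth - 1)
                    if ok1 and ok2:
                        clause = f"{pred}(X,Y) :- {q}(X,Z), {r}(Z,Y)"
                        return True, learned | {clause}
        return False, learned

    ok, program = prove(("grandparent", "ann", "cat"), background, set())
    print(ok, program)  # True {'grandparent(X,Y) :- parent(X,Z), parent(Z,Y)'}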



ML Track Invited Sister Conference Presentations 2
15:10-16:30 Thursday, Jul 30  - Room R3


15:10 - Huan Liu
"Employing Machine Learning to Help Verifying Research Hypotheses"
Social media offers a fresh lens for understanding human activities and behaviors. Social media data is massive, noisy, partial, multi-dimensional, and accompanied by co-existing social networks. In social media research, we collect data mainly by passive observation. Concomitant with this type of new data are many novel scientific hypotheses to be investigated, as well as unique challenges such as the lack of ground truth and the high cost of conducting large-scale user studies. In this work, we show that by taking advantage of the properties of social media data, we can verify some hypotheses with the help of machine learning. We will illustrate how to accomplish the task using real-world examples and challenging scientific hypotheses.

15:30 - Kristian Kersting
"The Democratization of Optimization"
Democratizing data does not mean dropping a huge spreadsheet on everyone's desk and saying "good luck"; it means making data mining, machine learning, and AI methods usable in such a way that people can easily instruct machines to have a "look" at the data and help them understand and act on it. A promising approach is the declarative "Model + Solver" paradigm that was and is behind many revolutions in computing in general: instead of outlining how a solution should be computed, we specify what the problem is using some modeling language and solve it using highly optimized solvers. Analyzing data, however, involves more than just the optimization of an objective function subject to constraints. Before optimization can take place, a large effort is needed not only to formulate the model but also to put it in the right form. We must often build models before we know what individuals are in the domain and, therefore, before we know what variables and constraints exist. Hence modeling should facilitate the formulation of abstract, general knowledge. This not only concerns the syntactic form of the model but also needs to take into account the abilities of the solvers; the efficiency with which the problem can be solved is to a large extent determined by the way the model is formalized. In this talk, I shall review our recent efforts on relational linear programming. It can reveal the rich logical structure underlying many AI and data mining problems, both at the formulation and the optimization level. Ultimately, it will make optimization several times easier and more powerful than current approaches and is a step towards achieving the grand challenge of automated programming as sketched by Jim Gray in his Turing Award Lecture.
Joint work with Martin Mladenov and Pavel Tokmakov, and based on previous joint work with Babak Ahmadi, Amir Globerson, Martin Grohe, Fabian Hadiji, Marion Neumann, Aziz Erkal Selman, and many more.
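
As a tiny, generic illustration of the declarative "Model + Solver" paradigm mentioned above (not of relational linear programming itself), the snippet below only states what a toy linear program is; an off-the-shelf solver does the rest. All numbers are made up.

    from scipy.optimize import linprog

    # Toy diet-style LP: minimize cost subject to minimum nutrient amounts.
    cost = [2.0, 3.5]                  # price per unit of foods A and B
    A_ub = [[-30.0, -20.0],            # -(protein per unit of A, B)
            [-10.0, -40.0]]            # -(fiber per unit of A, B)
    b_ub = [-60.0, -40.0]              # need at least 60 protein and 40 fiber
    result = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, result.fun)        # optimal quantities and total cost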

15:50 - Animashree Anandkumar
"Tensor Methods: A New Paradigm for Training Probabilistic Models and Feature Learning"
Tensors are rich structures for modeling complex higher-order relationships in data-rich domains such as social networks, computer vision, the internet of things, and so on. Tensor decomposition methods are embarrassingly parallel and scalable to enormous datasets. They are guaranteed to converge to the global optimum and yield consistent estimates of parameters for many probabilistic models such as topic models, community models, hidden Markov models, and so on. I will also demonstrate how tensor methods can yield rich discriminative features for classification tasks and can serve as an alternative method for training neural networks. For the first time, this yields a guaranteed method for training neural networks.
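
To give a concrete flavor of tensor decomposition, here is a minimal NumPy sketch of a plain CP decomposition fitted by alternating least squares; this is a generic illustration, not the guaranteed moment-based methods the talk covers, and the dimensions and test data are arbitrary.

    import numpy as np

    def cp_als(T, rank, n_iter=100, seed=0):
        # Rank-R CP decomposition of a 3-way tensor by alternating least squares:
        # T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r].
        rng = np.random.default_rng(seed)
        I, J, K = T.shape
        A = rng.normal(size=(I, rank))
        B = rng.normal(size=(J, rank))
        C = rng.normal(size=(K, rank))
        for _ in range(n_iter):
            A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
            B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
            C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
        return A, B, C

    # Sanity check on a synthetic rank-2 tensor.
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.normal(size=(d, 2)) for d in (5, 6, 7))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, rank=2)
    print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))  # small error

The same alternating-update idea scales to much larger tensors because each update factorizes over modes and parallelizes easily.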

16:10 - Elad Eban
"Discrete Chebyshev Classifiers"
In large-scale learning problems it is often easy to collect simple statistics of the data, but hard or impractical to store all the original data. A key question in this setting is how to construct classifiers based on such partial information. One traditional approach to the problem has been to use maximum entropy arguments to induce a complete distribution on variables from statistics. However, this approach essentially makes conditional independence assumptions about the distribution, and furthermore does not optimize prediction loss. Here we present a framework for discriminative learning given a set of statistics. Specifically, we address the case where all variables are discrete and we have access to various marginals. Our approach minimizes the worst-case hinge loss in this case, which upper bounds the generalization error. We show that for certain sets of statistics the problem is tractable, and in the general case can be approximated using MAP LP relaxations. Empirical results show that the method is competitive with other approaches that use the same input.
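
Schematically, the worst-case objective described above can be written as follows (my notation, not the paper's): letting \mathcal{P} be the set of joint distributions over the discrete variables that are consistent with the given marginal statistics, the classifier f_w is chosen to solve

    \min_{w} \; \max_{p \in \mathcal{P}} \; \mathbb{E}_{(x,y) \sim p} \big[ \max(0,\, 1 - y\, f_w(x)) \big]

i.e., the hinge loss is minimized against the least favorable distribution that still matches the observed statistics.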

 

 

The IJCAI-15 conference will be held at the Sheraton Convention Center (SCC), which is located close to the financial district, the main commercial attractions, and important cultural and entertainment centers. It is 30 kilometers from Ministro Pistarini International Airport (Ezeiza) and 7 kilometers from Jorge Newbery Domestic Airport.

The Sheraton Convention Center offers the largest event facilities in the city. Their fifteen meeting rooms totaling 6,500 square meters can accommodate up to 9,000 guests. Their venues are ideal for conferences, exhibitions, and small events. Venues include audiovisual equipment, videoconferencing services, and simultaneous translation. Plus, their specially trained staff provides excellent food and beverage service.

The Sheraton Hotel is on San Martín Street, near 9 de Julio Avenue and Libertador Avenue, two of the principal thoroughfares in Buenos Aires. It is 34 kilometers from Ezeiza International Airport (EZE) and 6.5 kilometers from Jorge Newbery Airport (AEP). On Wednesday there will be bus service from here to the banquet location. To board the bus, you must show your ticket.

The School of Law is on Figueroa Alcorta Avenue, a major thoroughfare that runs for over 7 km along the city's north side. It is 2.5 kilometers from the Sheraton Hotel.

The School of Economics is on Córdoba Avenue and is 3 kilometers from the Sheraton Hotel. On Monday there will be bus service between here and the School of Law for attending the Opening Ceremony.


The Sheraton also provides special rates for the IJCAI conference.

Alternative hotels are handled by IJCAI-15 tour operator: http://www.tipgrouptravel.com/ijcai/en/index.html

 


The winner of the 2015 IJCAI Computers and Thought Award is Ariel Procaccia, Assistant Professor at the Computer Science Department, Carnegie Mellon University. Professor Procaccia is recognized for his contributions to the fields of computational social choice and computational economics, and for efforts to make advanced fair division techniques more widely accessible.

Talk
AI and Economics for a Healthier, Safer, and Fairer World

Ariel Procaccia

Tuesday, July 28, 08:30 to 09:30 hrs

Abstract:
I will explore the broad and exciting interaction between AI and economics, which spans the spectrum from deep theory to deployed applications in healthcare, physical security, and dispute resolution. Specifically, I will talk about market design, focusing on kidney exchange algorithms that overcome uncertainty; game theory, with an emphasis on the problem of learning to play security games; and fair division, where I will argue that computational thinking gives rise to new notions of fairness, and show how these ideas are integrated into Spliddit.org, a not-for-profit website that offers provably fair solutions to everyday problems. On the way I will make a special effort to highlight new challenges for AI research.

 
