[Photo carousel: Buenos Aires - National Congress; Roque Saenz Peña Avenue (aka Diagonal Norte); Puerto Madero; the Obelisk, the most popular symbol of Buenos Aires; Plaza Congreso; Plaza Libertad; Palacio de Aguas Corrientes]

IJCAI-15 will be held in Buenos Aires, Argentina from July 25th to July 31st, 2015. We look forward to seeing you there.

Prof. Michael Wooldridge (University of Oxford, UK) is the IJCAI-15 Conference Chair.

Prof. Qiang Yang (Hong Kong University of Science and Technology, Hong Kong) is the IJCAI-15 Program Chair.

ML Track Invited Sister Conference Presentations

ML Track Invited Sister Conference Presentations 1
15:10-16:30 Tuesday, Jul 28 - Room LB2

15:10 - Peter Tino
"Learning in the Model Space for Temporal Data"
I will first introduce the general concept of the emerging field of "learning in the model space". The talk will then focus on time series data. After reviewing some of the existing model based time series kernels, I will introduce a framework for building new kernels based on temporal filters inspired by a class of parametrized state space models known as "reservoir" models. I will briefly outline the key theoretical concepts of their analysis and design. The methodology will be demonstrated in a series of sequence classification tasks and in an incremental fault detection setting.
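
A minimal sketch of the "learning in the model space" idea, assuming a fixed random reservoir shared across series (the names, sizes, and parameters below are illustrative, not from the talk): each series is represented by the weights of a ridge readout fitted on the reservoir states, and a kernel is defined on those weight vectors.

    import numpy as np

    rng = np.random.default_rng(0)

    # Shared random reservoir (echo-state style); all sizes are illustrative.
    N_RES, SPECTRAL_RADIUS = 50, 0.9
    W = rng.normal(size=(N_RES, N_RES))
    W *= SPECTRAL_RADIUS / max(abs(np.linalg.eigvals(W)))
    w_in = rng.normal(size=N_RES)

    def readout_weights(series, ridge=1e-2):
        """Represent a 1-d series by the weights of a ridge readout
        predicting x[t+1] from the reservoir state driven by x[t]."""
        states, h = [], np.zeros(N_RES)
        for x in series[:-1]:
            h = np.tanh(W @ h + w_in * x)
            states.append(h.copy())
        S, y = np.asarray(states), series[1:]
        # Closed-form ridge regression: (S'S + a I)^-1 S'y
        return np.linalg.solve(S.T @ S + ridge * np.eye(N_RES), S.T @ y)

    def model_space_kernel(s1, s2, gamma=1.0):
        """RBF kernel between the fitted readout weights of two series."""
        d = readout_weights(s1) - readout_weights(s2)
        return np.exp(-gamma * d @ d)

    t = np.linspace(0, 20, 300)
    print(model_space_kernel(np.sin(t), np.sin(t + 0.5)))      # similar dynamics
    print(model_space_kernel(np.sin(t), rng.normal(size=300))) # different dynamics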

15:30 - Yan Liu
"Fast Spatio-temporal Analysis via Low Rank Tensor Learning"
Accurate and efficient analysis of large-scale multivariate spatio-temporal data is critical in sustainability, mobile applications, and big data in general. Existing models usually assume simple inter-dependence among variables, space, and time, and are computationally expensive. In this talk, I will discuss a unified low-rank tensor learning framework for multivariate spatio-temporal analysis, which can conveniently incorporate important properties of spatio-temporal data into modeling, such as spatial clustering and shared structure among variables. I will demonstrate how the general framework can be applied to cokriging and forecasting tasks, and present an efficient greedy algorithm in the batch setting as well as a parallel online update algorithm to solve the resulting optimization problem with convergence guarantees.
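
A toy illustration of the low-rank idea in its matrix special case (my sketch, not the talk's greedy or online algorithm): fit a vector autoregression by least squares, then project the coefficient matrix to rank r via truncated SVD, coupling the variables through a shared low-dimensional structure.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic multivariate series driven by a genuinely low-rank VAR(1).
    P, RANK, T = 20, 3, 500
    A_true = rng.normal(size=(P, RANK)) @ rng.normal(size=(RANK, P)) * 0.1
    X = np.zeros((T, P))
    for t in range(1, T):
        X[t] = X[t - 1] @ A_true.T + rng.normal(scale=0.1, size=P)

    def var_lowrank(X, rank):
        """Least-squares VAR(1) fit followed by a hard rank-r projection
        (truncated SVD) of the coefficient matrix."""
        past, future = X[:-1], X[1:]
        A, *_ = np.linalg.lstsq(past, future, rcond=None)  # maps past -> future
        U, s, Vt = np.linalg.svd(A.T, full_matrices=False)
        return (U[:, :rank] * s[:rank]) @ Vt[:rank]        # rank-r coefficients

    A_hat = var_lowrank(X, RANK)
    print("relative error:", np.linalg.norm(A_hat - A_true) / np.linalg.norm(A_true))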

15:50 - Luc De Raedt
"Languages for Mining and Learning"
Applying machine learning and data mining to novel applications is cumbersome. This observation is the prime motivation for the interest in languages for learning and mining. In this talk, I shall provide a gentle introduction to three types of languages that support machine learning and data mining: inductive query languages, which extend database query languages with primitives for mining and learning; modelling languages, which allow one to declaratively specify and solve mining and learning problems; and programming languages, which support the learning of functions and subroutines. I shall use an example of each type of language (mining views for inductive querying, MiningZinc for modelling, and ProbLog for probabilistic programming) to introduce the underlying ideas and to put them into a common perspective. This then forms the basis for a short analysis of the state of the art.
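
As a flavour of the probabilistic-programming idea behind ProbLog (a hand-rolled Python sketch of the semantics, not the ProbLog system or its API): independent probabilistic facts define a distribution over possible worlds, and the probability of a query is the total mass of the worlds in which it is provable.

    from itertools import product

    # Probabilistic edges of a tiny graph: fact -> probability of being true.
    prob_edges = {("a", "b"): 0.8, ("b", "c"): 0.7, ("a", "c"): 0.2}

    def reachable(src, dst, edges):
        """Is dst reachable from src using the edges present in this world?"""
        frontier, seen = {src}, set()
        while frontier:
            node = frontier.pop()
            seen.add(node)
            frontier |= {v for (u, v) in edges if u == node and v not in seen}
        return dst in seen

    def query_prob(src, dst):
        """P(path(src, dst)) by brute-force enumeration of possible worlds.
        ProbLog itself uses knowledge compilation rather than enumeration."""
        facts, total = list(prob_edges), 0.0
        for world in product([True, False], repeat=len(facts)):
            p = 1.0
            for fact, present in zip(facts, world):
                p *= prob_edges[fact] if present else 1 - prob_edges[fact]
            if reachable(src, dst, [f for f, yes in zip(facts, world) if yes]):
                total += p
        return total

    print(query_prob("a", "c"))   # 0.2 + 0.8*0.7 - 0.2*0.8*0.7 = 0.648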

16:10 - Stephen Muggleton
"Learning as Interpretation"
This talk describes a new approach to Machine Learning which involves meta-level interpretation of the examples in terms of additional logical symbols. Meta-Interpretive Learning (MIL) is a recent Inductive Logic Programming technique aimed at supporting predicate invention and the learning of recursive definitions. A powerful and novel aspect of MIL is that when learning a predicate definition it automatically introduces sub-definitions, allowing decomposition into a hierarchy of reusable parts. MIL is based on an adapted version of a Prolog meta-interpreter. Normally such a meta-interpreter derives a proof by repeatedly fetching first-order Prolog clauses whose heads unify with a given goal. By contrast, a meta-interpretive learner additionally fetches higher-order meta-rules whose heads unify with the goal, and saves the resulting meta-substitutions to form a program. This talk will overview theoretical and implementational advances in this new area, including the ability to learn Turing-computable functions within a constrained subset of logic programs, the use of probabilistic representations within Bayesian meta-interpretive learning, and techniques for minimising the number of meta-rules employed. The talk will also summarise applications of MIL, including the learning of regular and context-free grammars, learning from visual representations with repeated patterns, learning string transformations for spreadsheet applications, learning and optimising recursive robot strategies, and learning tactics for proving correctness of programs. It will conclude by pointing to the many challenges which remain to be addressed within this new area.
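
A highly simplified sketch of the meta-interpretive idea (illustrative Python, far removed from the Prolog-based MIL systems): given one higher-order chain meta-rule, P(X,Y) :- Q(X,Z), R(Z,Y), the learner searches for meta-substitutions for Q and R that cover the positive examples and reject the negatives; the saved substitution is the learned clause.

    # Background facts for one predicate.
    parent = {("ann", "bob"), ("bob", "cal"), ("bob", "dee")}
    background = {"parent": parent}

    positives = {("ann", "cal"), ("ann", "dee")}   # grandparent examples
    negatives = {("bob", "ann"), ("cal", "dee")}

    def chain_extension(q, r):
        """Pairs (X, Y) derivable via the chain meta-rule P(X,Y) :- Q(X,Z), R(Z,Y)."""
        return {(x, y) for (x, z) in background[q]
                       for (z2, y) in background[r] if z == z2}

    def learn(target):
        """Enumerate meta-substitutions for (Q, R) and keep one that
        covers all positives and no negatives."""
        for q in background:
            for r in background:
                derived = chain_extension(q, r)
                if positives <= derived and not (negatives & derived):
                    return f"{target}(X,Y) :- {q}(X,Z), {r}(Z,Y)"
        return None

    print(learn("grandparent"))   # grandparent(X,Y) :- parent(X,Z), parent(Z,Y)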



ML Track Invited Sister Conference Presentations 2
15:10-16:30 Thursday, Jul 30 - Room R3


15:10 - Huan Liu
"Employing Machine Learning to Help Verifying Research Hypotheses"
Social media offers a fresh lens for understanding human activities and behaviors. Social media data is massive, noisy, partial, multi-dimensional, and accompanied by co-existing social networks. In social media research, we collect data mainly by passive observation. Concomitant with this type of new data are many novel scientific hypotheses to be investigated, as well as unique challenges such as the lack of ground truth and the high cost of conducting large-scale user studies. In this work, we show that by taking advantage of the properties of social media data, we can verify some hypotheses with the help of machine learning. We will illustrate how to accomplish the task using real-world examples and challenging scientific hypotheses.

15:30 - Kristian Kersting
"The Democratization of Optimization"
Democratizing data does not mean dropping a huge spreadsheet on everyone's desk and saying, "good luck"; it means making data mining, machine learning, and AI methods usable in such a way that people can easily instruct machines to have a "look" at the data and help them understand and act on it. A promising approach is the declarative "Model + Solver" paradigm that was and is behind many revolutions in computing in general: instead of outlining how a solution should be computed, we specify what the problem is using some modeling language and solve it using highly optimized solvers. Analyzing data, however, involves more than just the optimization of an objective function subject to constraints. Before optimization can take place, a large effort is needed not only to formulate the model but also to put it in the right form. We must often build models before we know what individuals are in the domain and, therefore, before we know what variables and constraints exist. Hence modeling should facilitate the formulation of abstract, general knowledge. This not only concerns the syntactic form of the model but also needs to take into account the abilities of the solvers; the efficiency with which the problem can be solved is to a large extent determined by the way the model is formalized. In this talk, I shall review our recent efforts on relational linear programming. It can reveal the rich logical structure underlying many AI and data mining problems, both at the formulation and the optimization level. Ultimately, it will make optimization several times easier and more powerful than current approaches and is a step towards achieving the grand challenge of automated programming as sketched by Jim Gray in his Turing Award Lecture.
Joint work with Martin Mladenov and Pavel Tokmakov, based on previous joint work with Babak Ahmadi, Amir Globerson, Martin Grohe, Fabian Hadiji, Marion Neumann, Aziz Erkal Selman, and many more.
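
A tiny instance of the declarative "Model + Solver" pattern (using scipy's generic LP solver; the model itself is made up for illustration): we state only the objective and constraints, and the solver decides how to compute the answer. Relational linear programming lifts exactly this kind of model to logically defined families of variables and constraints.

    from scipy.optimize import linprog

    # "What", not "how": maximize 3x + 2y subject to
    #   x + y <= 4,  x + 3y <= 6,  x, y >= 0.
    # linprog minimizes, so we negate the objective.
    res = linprog(
        c=[-3, -2],
        A_ub=[[1, 1], [1, 3]],
        b_ub=[4, 6],
        bounds=[(0, None), (0, None)],
    )
    print(res.x, -res.fun)   # optimal point (4, 0) and objective value 12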

15:50 - Animashree Anandkumar
"Tensor Methods: A New Paradigm for Training Probabilistic Models and Feature Learning"
Tensors are rich structures for modeling complex higher-order relationships in data-rich domains such as social networks, computer vision, the internet of things, and so on. Tensor decomposition methods are embarrassingly parallel and scalable to enormous datasets. They are guaranteed to converge to the global optimum and yield consistent estimates of parameters for many probabilistic models such as topic models, community models, hidden Markov models, and so on. I will also demonstrate how tensor methods can yield rich discriminative features for classification tasks and can serve as an alternative method for training neural networks. For the first time, this yields a guaranteed method for training neural networks.
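
A bare-bones sketch of the core computational primitive behind these guarantees (my illustration, not the talk's code): tensor power iteration with deflation, recovering the components of an orthogonally decomposable symmetric third-order tensor.

    import numpy as np

    rng = np.random.default_rng(2)

    # Build T = sum_k lam_k * a_k (x) a_k (x) a_k with orthonormal components a_k.
    D, K = 8, 3
    A = np.linalg.qr(rng.normal(size=(D, K)))[0]   # orthonormal columns
    lam = np.array([3.0, 2.0, 1.0])
    T = np.einsum("k,ik,jk,lk->ijl", lam, A, A, A)

    def power_iteration(T, iters=100):
        """Tensor power step: v <- T(I, v, v), normalized."""
        v = rng.normal(size=T.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            v = np.einsum("ijl,j,l->i", T, v, v)
            v /= np.linalg.norm(v)
        eig = np.einsum("ijl,i,j,l->", T, v, v, v)  # Rayleigh-quotient analogue
        return eig, v

    # Deflation: find a component, subtract it, repeat.
    for _ in range(K):
        eig, v = power_iteration(T)
        T = T - eig * np.einsum("i,j,l->ijl", v, v, v)
        print(round(eig, 3), np.max(np.abs(A.T @ v)).round(3))  # lam_k, |cos| to a_k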

16:10 - Elad Eban
"Discrete Chebyshev Classifiers"
In large-scale learning problems it is often easy to collect simple statistics of the data, but hard or impractical to store all the original data. A key question in this setting is how to construct classifiers based on such partial information. One traditional approach to the problem has been to use maximum entropy arguments to induce a complete distribution on variables from statistics. However, this approach essentially makes conditional independence assumptions about the distribution, and furthermore does not optimize prediction loss. Here we present a framework for discriminative learning given a set of statistics. Specifically, we address the case where all variables are discrete and we have access to various marginals. Our approach minimizes the worst case hinge loss in this case, which upper bounds the generalization error. We show that for certain sets of statistics the problem is tractable, and in the general case can be approximated using MAP LP relaxations. Empirical results show that the method is competitive with other approaches that use the same input.
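
A brute-force miniature of the minimax idea (illustrative only, with 0-1 loss instead of the paper's hinge loss, and with made-up marginals): with two binary features and a label, enumerate all deterministic classifiers, let an LP adversary pick the worst joint distribution consistent with the given marginals, and keep the classifier with the smallest worst-case error.

    from itertools import product
    from scipy.optimize import linprog
    import numpy as np

    points = list(product([0, 1], repeat=3))       # joints over (x1, x2, y)

    # The only statistics we are given: pairwise marginals P(x_i, y).
    marginals = {
        (0, (0, 0)): 0.35, (0, (0, 1)): 0.05, (0, (1, 0)): 0.10, (0, (1, 1)): 0.50,
        (1, (0, 0)): 0.30, (1, (0, 1)): 0.15, (1, (1, 0)): 0.15, (1, (1, 1)): 0.40,
    }

    def worst_case_error(clf):
        """LP adversary: maximize expected 0-1 loss over all joint
        distributions matching the given marginals."""
        loss = [1.0 if clf[(x1, x2)] != y else 0.0 for (x1, x2, y) in points]
        A_eq, b_eq = [[1.0] * len(points)], [1.0]  # probabilities sum to 1
        for (i, (xi, y)), m in marginals.items():
            A_eq.append([1.0 if (p[i], p[2]) == (xi, y) else 0.0 for p in points])
            b_eq.append(m)
        res = linprog(c=[-l for l in loss], A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * len(points))
        return -res.fun if res.success else np.inf

    # Minimax: best deterministic classifier f(x1, x2) -> y under the worst case.
    classifiers = [dict(zip(product([0, 1], repeat=2), labels))
                   for labels in product([0, 1], repeat=4)]
    best = min(classifiers, key=worst_case_error)
    print(best, worst_case_error(best))   # the minimax classifier and its error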
