HEPTECH AIME ML&VA on Clouds

Europe/Budapest
Mátyás Hall (Ground floor) (Hotel Mercure Budapest)

Mátyás Hall (Ground floor)

Hotel Mercure Budapest

Krisztina körút 41-43. 1013 Budapest Hungary
Description

CERN, MTA SZTAKI, MTA Wigner RCP and the universities BME and ELTE, together with the HEPTech Network, are organizing the next

 

Academia-Industry Matching Event
Machine Learning and Visual Analytics in the Clouds Workshop

 

The aim of this event is to bring together academic researchers and industry experts to share ideas and potential applications, and to foster collaborations in the newly emerging field of Machine Learning and Visual Analytics and related technologies.

 

The event is sponsored by:

[ACCELERATE logo]

 

ACCELERATE is a Horizon 2020 project supporting the long-term sustainability of large-scale research infrastructures (RIs) through the development of policies and of legal and administrative tools for more effective management and operation of RIs, with a particular focus on ERICs and on CERIC.

 

 

[EU flag]

 

ACCELERATE has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 731112.

 

 

Topics of the workshop include

  • Machine Learning

  • Artificial Intelligence

  • Big Data

  • Visual Analytics

  • Quality of Life

  • Computational Neuroscience

  • Computational Linguistics

  • Computational Physics

  • Cloud Computing Technology

  • Data Quality

  • Data Security

     

Confirmed Speakers

  • Jean-Marie Le Goff, CERN

  • András Pataricza, Budapest University of Technology

  • Róbert Kabai, Continental Ltd.

  • László Milán Molnár, Robert Bosch Ltd.

  • Balázs Szegedy, Alfréd Rényi Institute of Mathematics

  • Zoltán Lehóczky, Lombiq Technologies Ltd.

  • Vince Grolmusz, Eötvös Loránd University

  • Géza Németh, Hungarian CP for EU AI project

  • Gergő Orbán, Wigner RCP

  • Gábor Prószéky, MorphoLogic Ltd.

  • Gábor Vattay, Eötvös Loránd University

  • András Benczúr, SZTAKI

  • András Lőrincz, Eötvös Loránd University

  • Péter Antal, Budapest University of Technology

 

Call for contribution

Contributed talks and posters are warmly welcome. Abstracts are requested at registration.

A competition for young contributors will be organized, with prizes for the best talk and the best poster.

The maximum poster dimensions are 100 cm in width and 200 cm in height.

 

Decisions on submitted abstracts will be announced soon!

 

Dates 29-30 October 2018

Venue Hotel Mercure Buda, Budapest, Hungary

Attendance 80 seats are available. Registrations are accepted on a first-come, first-served basis.

Registration fee

100 € for regular participants.

The registration fee is waived for young participants with valid student ID.

 

Web https://indico.kfki.hu/e/aime18

Contact person szathmari.nora@wigner.hu

 

Participants
  • Adrienn Forró
  • Albin Márton Nagy
  • Aleksandar Belic
  • Alex Olar
  • Andor Magony
  • Andras Pataricza
  • Andras Telcs
  • Andrea Angeli
  • Andrew Gargett
  • András Benczúr
  • András Lukács
  • András Lőrincz
  • András Magyarkuti
  • Antal Jakovac
  • Antal Nikodemus
  • Attila Barta
  • Balázs Endre Szigeti
  • Balázs Szegedy
  • Bea Ehmann
  • Beata Tunde Szabo
  • Bence Bruncsics
  • Bence Golda
  • Benedek Farkas
  • Bernd Schlei
  • Boros Ábel
  • Daniel Berenyi
  • Daniel Dobos
  • Dezső Burján
  • Ellák Somfai
  • Fatma Abdelkhalek
  • Gergely Gabor Barnafoldi
  • Gergely Honti
  • Gergo Orban
  • Gyula Dörgő
  • Gábor Légrádi
  • Gábor Nagy
  • Gábor Prószéky
  • Gábor Stofa
  • Gábor Tamás
  • Gábor Vattay
  • Géza Németh
  • Jean-Marie Le Goff
  • John Isaacs
  • Jozsef Laczko
  • Julia Bergmann
  • Karin Rathsman
  • Katalin Biró
  • Kinga Faragó
  • Kovács Dávid
  • Lajos Berenyi
  • Lajos Budai
  • Lilla Lomoschitz
  • Lilla Zólyominé Botzheim
  • Lászlo Milán Molnár
  • László Békési
  • László Gábor Lovászy
  • Marcell Stippinger
  • Mariann Percze-Mravcsik
  • Marija Mitrovic Dankulov
  • Mark Jelasity
  • Mate Hegedűs
  • Márton Neogrády-Kiss
  • Máté Csőke
  • Nikita Moshkov
  • Nora Szathmari
  • Nóra Balogh
  • Petar Jovanovic
  • Peter Antal
  • Peter Levai
  • Péter Gerendás
  • Péter Hatvani
  • Péter Horváth
  • Robert-Zsolt Kabai
  • Réka Hollandi
  • Simone Montesano
  • Steve Welch
  • Tamas Balassa
  • Tamas Biro
  • Tamas Ruppert
  • Tivadar Danka
  • Viktor Jeges
  • Viktor Nagy
  • Vince Grolmusz
  • Zoltán Kolarovszki
  • Zoltán Lehóczky
  • Zsolt Illes
  • Zsolt Tabi
  • Ádám Nárai
  • Monday, October 29
    • 1
      Opening
      Speaker: Peter Levai (WIGNER RCP)
    • 2
      Supporting Decision Making through Interactive Visualisation

      Data from a variety of sources is often presented in an abstract way that makes it hard for decision makers or non-expert stakeholders to understand the full context of the situation or problem illustrated. This talk discusses different approaches that the research team has taken in the presentation of data in a number of application areas including oil and gas logistics, environmental management, urban sustainability, and historical building evacuation. The core focus of this research is to create a common language where stakeholders of different backgrounds and experience have access to the same data and the ability to explore that data, so they can more fully understand the context and consequences of decisions being made.

      Speaker: John Isaacs
    • 3
      Artificial Intelligence and Smart Interaction Research at BME TMIT Smartlabs
      Speaker: Géza Németh
    • 4
      Life beyond the pixels: image analysis and machine learning for single-cell analysis

      In this talk I will give an overview of the computational steps in the analysis of a single-cell-based high-content screen. First, I will present a novel microscopic image correction method designed to eliminate vignetting and uneven background effects which, left uncorrected, corrupt intensity-based measurements. I will discuss methods capable of identifying cellular phenotypes based on features extracted from the image using deep learning and other advanced machine learning algorithms. For cases where discrete cell-based decisions are not suitable, we propose a method that uses multi-parametric regression to analyze continuous biological phenomena. Finally, single-cell selection and isolation methods using machine learning will be discussed.

      Speaker: Peter Horvath
    • 5
      Getting Artificial Intelligence into industry

      Artificial Intelligence is in the headlines a lot these days. Encouragingly, despite the hype, it also seems that some progress is being made with completing the circuit from arcane research topics to real-world applications, at least for larger organisations geared up to better absorb any risks involved. However, what is not so clear is whether this is translating into widespread uptake within the day-to-day running of smaller commercial ventures. Our work at the Hartree Centre, within the Science and Technology Facilities Council, is a key component of a long-term vision of the UK government to promote the adoption of this technology at all levels of UK industry. This talk will describe our experiences as we add AI, Data Science and Big Data to our established reputation as one of Europe's premier industry-facing centres for High Performance Computing; particular focus will be given to our efforts to ensure we reach all levels of the commercial world, from the largest to the smallest players, and the likely benefits to all.

      Speaker: Andrew Gargett
    • 2:50 PM
      Coffee Break
    • 6
      Turning software into computer chips – Hastlayer

      Software is flexible; specialized hardware is extremely fast. So why not write software, then turn it into a computer chip? This is what Hastlayer (https://hastlayer.com) does by transforming .NET software into electronic circuits. The result is faster and uses less power, while you simply keep on writing software. You may not be able to tell just by looking at it, but behind some function calls embedded hardware is now actually doing the work! (You wonder how? Check out what FPGAs are!) In this demo-packed session we'll get an overview of what Hastlayer is, why it is useful for researchers like you, and how to write Hastlayer-compatible software.

      Speaker: Mr Zoltán Lehóczky (Lombiq Technologies Ltd.)
    • 7
      Online Machine Learning in Big Data Streams - Theory and Practice

      The area of online machine learning in big data streams covers algorithms that are (1) distributed and (2) work from data streams with only a limited possibility to store past data. The first requirement mostly concerns software architectures and efficient algorithms. The second one also imposes nontrivial theoretical restrictions on the modeling methods: in the data stream model, older data is no longer available to revise earlier suboptimal modeling decisions as fresh data arrives.

      In my presentation, I will provide an overview of distributed software architectures and libraries as well as machine learning models for online learning. I will highlight the most important ideas for classification, regression, recommendation, and unsupervised modeling from streaming data, and show how they are implemented in various distributed data stream processing systems. I will also explore the usability of online machine learning, especially for recommender systems and industrial IoT applications.

      Speaker: Andras Benczur (Institute for Computer Science and Control, Hungarian Academy of Sciences)
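      A hedged illustration of the single-pass constraint described in the abstract above (not of the distributed systems covered in the talk): a model can be updated incrementally from mini-batches that are then discarded, e.g. with scikit-learn's partial_fit. The data stream below is a synthetic stand-in.

      # Illustrative single-machine online learner: each mini-batch is seen once, then discarded.
      import numpy as np
      from sklearn.linear_model import SGDClassifier

      model = SGDClassifier()
      classes = np.array([0, 1])  # all classes must be declared up front for partial_fit

      def next_batch(n=256, d=20):
          # stand-in for one chunk of an unbounded data stream
          X = np.random.randn(n, d)
          y = (X[:, 0] + 0.1 * np.random.randn(n) > 0).astype(int)
          return X, y

      for _ in range(100):
          X_batch, y_batch = next_batch()
          model.partial_fit(X_batch, y_batch, classes=classes)  # revise the model, keep no raw data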
    • 8
      Federated Learning on the Edge

      Federated learning is aimed at implementing machine learning based on training data stored by many personal devices. The key idea is that data is never transferred to a central location, instead, machine learning models are trained locally and then aggregated centrally. Our research aims at reducing the burden on the central cloud component by using local communication on the Edge. Devices exchange information with each other and perform the aggregation step among themselves in a decentralized manner. Apart from the decentralized learning algorithm, we introduce additional techniques to reduce communication such as subsampling and compression. We demonstrate that our approach is comparable to the centralized version in terms of convergence speed as a function of the amount of information exchanged.

      Speaker: Mark Jelasity (University of Szeged)
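      A toy sketch of the decentralized averaging idea in the abstract above: devices make local updates on their own data and then average parameters pairwise, so raw data never leaves a device. This is illustrative only and omits the subsampling and compression techniques mentioned by the authors.

      # Toy gossip-style model averaging across devices (synthetic stand-in for local training).
      import numpy as np

      rng = np.random.default_rng(0)
      n_devices, dim = 8, 100
      models = [rng.standard_normal(dim) for _ in range(n_devices)]  # one local model per device

      def local_update(w):
          # stand-in for a local SGD step on the device's own (private) data
          return w - 0.01 * rng.standard_normal(dim)

      for _ in range(50):
          models = [local_update(w) for w in models]
          i, j = rng.choice(n_devices, size=2, replace=False)  # two neighbouring devices gossip
          avg = 0.5 * (models[i] + models[j])                   # exchange and average parameters
          models[i], models[j] = avg.copy(), avg.copy()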
    • 9
      From exploratory big data analysis towards run-time verification

      Dimensioning and validating large-scale, highly available computing and communication systems necessitate extensive benchmarking campaigns, which generate vast amounts of measurement data. Moreover, models derived from the evaluations of these campaigns should be scalable and portable, in the sense that the derived conclusions have to be applicable in a variety of deployment configurations of different sizes.
      Identification of outliers that indicate workload-induced failures is a core objective of this evaluation process. At operation time, the limited controllability of the external workload requires preventing performability failures by integrating monitoring logic and, when necessary, allocating further resources for the proper processing of the increased amount of input.
      By its very nature, the problem is the identification of a hybrid workload-performability model. Its first step is the classification of the different operation domains to distinguish between normal, overloaded, etc. states. The next step is to synthesize a qualitative system control model targeting the mitigation of overload problems.
      The presented approach relies on visual exploratory analysis of sample data, resulting in a set of hypotheses for the qualitative model. Subsequently, the model is validated by confirmatory analysis carried out with the help of hybrid checking automata over the entire large dataset. If the model turns out to be valid, the checking automata form the basis for monitoring and run-time verification, thus facilitating run-time resource control as a byproduct of the benchmark evaluation process.

      Speaker: Dezső Burján (Ericsson Hungary)
    • 4:50 PM
      Coffee Break
    • 10
      modAL: A modular active learning framework for Python

      With the recent explosion of available data, you can have millions of unlabelled examples, while obtaining labels is costly. Active learning is a machine learning technique which aims to find the potentially most informative instances in unlabeled datasets, allowing you to label them and improve classification performance.

      modAL is a new active learning framework for Python, designed with modularity, flexibility and extensibility in mind. The key components of any workflow are the machine learning algorithm you choose and the query strategy you apply to request labels for the most informative instances. With modAL, instead of choosing from a small set of built-in components, you have the freedom to seamlessly integrate scikit-learn or Keras models into your algorithm and easily tailor your custom query strategies, allowing the rapid development of active learning workflows with nearly complete freedom.

      modAL is fully open source and hosted on GitHub.

      Speaker: Dr Tivadar Danka (Hungarian Academy of Sciences)
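      A minimal sketch of the kind of active learning loop modAL supports, using a scikit-learn estimator with the default uncertainty-sampling query strategy; the pool, seed set and "oracle" labels below are synthetic stand-ins.

      # Minimal modAL active learning loop (data and oracle labels are synthetic stand-ins).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from modAL.models import ActiveLearner

      rng = np.random.default_rng(0)
      X_seed, y_seed = rng.random((10, 4)), rng.integers(0, 2, 10)  # small labelled seed set
      X_pool = rng.random((1000, 4))                                # large unlabelled pool

      learner = ActiveLearner(
          estimator=RandomForestClassifier(),
          X_training=X_seed, y_training=y_seed,  # default query strategy: uncertainty sampling
      )

      for _ in range(20):
          query_idx, query_instance = learner.query(X_pool)  # most informative instance
          y_new = rng.integers(0, 2, len(query_idx))          # stand-in for a human-provided label
          learner.teach(X_pool[query_idx], y_new)             # update the model with the new label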
    • 11
      Machine learning: learning from reverse engineering human intelligence and engineering artificial intelligence

      Biological and artificial agents commit errors. These errors are fundamentally different and reveal something about the types of computations these agents are performing. Immense advances in machine learning help us understand what underlies human behavior, and understanding human behavior provides insights into the challenges machine learning is faced with. In this talk I will present how our lab uses machine learning to tackle important problems in human cognition and how the insights obtained can inform engineers in the design of efficient algorithms.

      Speaker: Dr Gergo Orban (MTA Wigner RCP)
    • 12
      How can hidden knowledge in series of temporal events be extracted and utilized?

      Temporal events are inherent parts of every industrial, business or other generalized process. The operational characteristics of these often highly complex processes are well represented by the generated temporal events. However, extracting useful knowledge from these large datasets and building models using the extracted knowledge is by no means an easy task. Therefore, in our presentation, we would like to highlight and emphasize the application possibilities of some often neglected approaches for the analysis of large temporal event datasets. We present challenging problems from different fields of interest, including process safety and churn analysis, and show how the toolbox of data and process mining and predictive modeling can be utilised following the integrated information concept of Industry 4.0.

      In our presentation, first, the multi-temporal sequence-based representation of the event series is described and statistical metrics (frequency, probability, confidence, etc.) for the characterization of the datasets are introduced. Based on these simple statistical metrics, we present a Bayesian model for the prediction of the next occurring event in our process.

      Realizing the complexity of the event sequences, the applicability of advanced machine learning techniques like deep learning is investigated for the detection of the root cause of past events and the prediction of future sequences. The problem of root cause detection is formulated as a classification problem assuming a known set of root causes in our process, while the task of sequence prediction is formulated as a sequence to sequence (seq2seq) learning problem. In addition to the description of a recurrent neural network model of Long Short-Term Memory (LSTM) units for the solution of the above-mentioned tasks, we present how the multivariate data analysis techniques can be applied to extract knowledge from recurrent neural network models.

      Finally, we present how the task of churn analysis and sequence mining is connected and how the topic of event (sequence) analysis facilitates the prediction of customer churn.

      The applicability of the aforementioned tools is presented through the analysis of various industry-motivated problems like alarm management and customer churn prediction. Alarm management is the effective handling of industrial process alarms, where the extraction of useful knowledge from historical process data is a high-priority problem, while the problem of churn analysis is present in quality test sequence optimization, or in the prediction of customers who are likely to discontinue the use of a service.

      The tools for the analysis of the various datasets were implemented in Matlab/Python, while the deep learning neural networks were trained by Tensorflow/Keras. Therefore, in our presentation, we intend to share our application experiences regarding these interfaces.

      Speaker: Gyula Dörgő (MTA PE Lendület Complex Systems Monitoring Research Group)
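      A minimal Keras sketch of next-event prediction from windows of event IDs, in the spirit of the LSTM models mentioned above; the event vocabulary, data and hyperparameters are stand-ins, not the authors' setup.

      # Minimal next-event prediction with an LSTM over event-ID windows (synthetic data).
      import numpy as np
      import tensorflow as tf

      n_event_types, window = 50, 20
      X = np.random.randint(0, n_event_types, (1000, window))  # windows of past events
      y = np.random.randint(0, n_event_types, 1000)            # the event that followed each window

      model = tf.keras.Sequential([
          tf.keras.layers.Embedding(n_event_types, 32),
          tf.keras.layers.LSTM(64),
          tf.keras.layers.Dense(n_event_types, activation="softmax"),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
      model.fit(X, y, epochs=3, batch_size=64, verbose=0)
      next_event_probs = model.predict(X[:1])  # predicted distribution over the next event type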
    • 13
      NIPGboard: An AI tool for analysis, visualization and interaction

      The ACUMOS project of the Linux Foundation aims at integrating AI tools to support managers from business plan to monitoring and customer satisfaction estimation for building the next application. However, tools for close collaboration with human intelligence have not been included yet. I introduce our tool and demonstrate its capabilities for solving basic tasks with more or less help from the human supervisor. The tool alleviates data analysis, visualizes the results and enables interaction at different levels. This tool is for experts who don't want to deal with programming in any of the above steps.

      References:
      Declarative Description: The Meeting Point of Artificial Intelligence, Deep Neural Networks, and Human Intelligence
      Zoltán Á. Milacski, Kinga Bettina Faragó, Áron Fóthi, Viktor Varga, and Andras Lorincz
      IJCAI/ECAI 2018 Workshop on Explainable Artificial Intelligence (XAI), pp.: 97-103, Stockholm, 2018.

      LabelMovie: Semi-supervised machine annotation tool with quality assurance and crowd-sourcing options for videos
      Z Palotai, M Lang, A Sarkany, Z Toser, D Sonntag, T Toyama, A Lorincz
      Content-Based Multimedia Indexing (CBMI), 2014 12th International Workshop …

      Speaker: Kinga Faragó (Eötvös University)
  • Tuesday, October 30
    • 14
      International Pilot Projects for Digital Transformation of Hungary
      Speaker: Dr Antal Nikodémus (ITM)
    • 15
      Introduction of the Accelerate Project
      Speaker: Steve Welch (ESP Central)
    • 16
      Applied Artificial Intelligence For Assisted and Autonomous Driving

      There is an industry-wide shift happening in automotive technology. Traditional computer systems and software technology cannot keep up with the increasing complexity of the problems we need to solve. AI shows great potential to be a key technology for autonomous driving, yet there are many challenges in taking research results to a great real-world product. The complexity of the real world holds many challenges, like different weather conditions, geographical differences and various unlikely events that are underrepresented or even missing in current datasets.
      This talk will present a survey of both the power and the perils of contemporary AI for autonomous driving.

      Speaker: Robert-Zsolt Kabai (Continental)
    • 17
      The ESS Control System Machine Learning Project
      Speaker: Karin Rathsman (ESSS)
    • 18
      Quantum Computing, Quantum Software and AI

      In the last five years, quantum computers have migrated from intellectual curiosity to the realm of technological evolution. The first commercial computer (D-Wave) does not resemble the ideal quantum computer physicists and mathematicians dreamed of for decades. The most dramatic difference between a Turing-type machine and the new architecture is that the latter is not based on logical steps. For the quantum physics and mathematics community this state of affairs is unsatisfactory, and they are still pursuing an ideal quantum logic architecture. In my talk, I argue that the new architecture is here to stay and quantum software will adapt to the new technology. Artificial Intelligence applications are among the problems best suited to the new quantum architecture. In the emerging quantum era, we have to change our concept of what a computer is.

      Speaker: Gábor Vattay (Eötvös Loránd University, Department of Physics of Complex Systems)
    • 10:45 AM
      Coffee Break
    • 19
      The Mathematical Foundations of Artificial Intelligence - about the National Excellence Program
      Speaker: Balázs Szegedy (Alfréd Rényi Institute of Mathematics)
    • 20
      Understanding understanding

      The goal of this talk is to give the mathematical background of what happens when we understand a phenomenon, independently of whether it happens in a computer or in a human mind. We give a definition of "understanding" as a special representation of the input. We prove that such a representation exists, and demonstrate that with it all AI tasks (classification, regression, compression, generation) can be solved simply. We discuss some special cases, such as image processing and scientific theory making.

      Speaker: Prof. Antal Jakovác (ELTE, Dept. of Atomic Physics)
    • 21
      AI Methods in Human Language Technologies

      In the last two-three decades, researchers in human language technologies have tried to apply various statistical methods to understand what is encoded, and how, in large text corpora -- with limited success. The previous 5-6 years have basically changed both the basic research paradigm and the level of success in this research area. Continuous vector space models, neural networks, deep learning: these are some of the main terms widely used today in most data-intensive fields, including linguistic research. With the new paradigm, however, new questions have arisen. One of them concerns the possibility of mapping the new categories (vectors, dimensions, layers, etc.) onto linguistic concepts. Besides this, the presentation deals with questions like: how and why is it possible that rather simple word embedding models are able to grasp real-world relations without any other knowledge sources but pure texts?

      Speaker: Gábor Prószéky (MTA-PPKE Hungarian Language Technology Research Group)
    • 22
      From Applied Deep Learning to Artificial General Intelligence

      Since about 1950, renowned researchers have claimed from time to time that artificial intelligence (AI) will reach human intelligence in about 10 years. It hasn't happened. On the other hand, the evolution of computational power is exponential and the exponent of Moore's Law is large. Churchland's question -- is the brain more complex than clever? -- is still here. What are we missing?
      I argue that AI applications have found the solution: AI algorithms and AI architectures have reached the level of the mammalian brain and are about to surpass us. This is due to the knowledge collected by mankind, the crowdsourcing efforts for training deep networks, the huge variety of deep learning architectures, and finally, the need to avoid the cost of crowdsourcing. I will give a pragmatic definition of creativity and intelligence, tied to what I believe is the crux of the innovation of the mammalian brain versus other neural systems, show a very recent architecture that can serve this purpose, and give a demonstrative example of extending it towards goal-oriented systems.

      Speaker: Prof. András Lőrincz (Eötvös Loránd University)
    • 1:05 PM
      Lunch
    • 23
      Visualizing velocity field strengths in medical image data with hyper-surfaces in spacetime

      One-dimensional (1D) time sequences of spatial, three-dimensional (3D) simulation or image data may implicitly carry dynamical information of their embedded subregions. Continuous hyper-surfaces can be constructed for the full 3+1D data that enclose certain spacetime regions. Here, we demonstrate that such hypersurfaces may be viewed as 3D velocity vector fields, which explicitly characterize dynamically evolving 3D shapes contained in 4D. In particular, we consider the processing of a contrast fluid from tomographic reconstruction of a beating human heart.

      Speaker: Dr Bernd R. Schlei (GSI Helmholtzzentrum für Schwerionenforschung GmbH)
    • 24
      Graph search and knowledge extraction related to communities

      According to Daniel Keim: "Visual analytics combines automated analysis techniques with interactive visualisations for an effective understanding, reasoning and decision making on the basis of very large and complex datasets”.

      The effectiveness of visual analytics essentially depends on the seamless interplay between automated analysis and interactive visualisation. In particular, the latter should enable users to visually explore the outputs of automated analytics with a view to maximizing insight and assessing quality, and on the other hand provide support for user-driven analytics within interactive visualisation contexts.

      This talk will address important aspects of data representation and processing that need to be tackled for a successful combination of (hyper)graph-based interactive visualisation with data analysis techniques.

      Speaker: Jean-Marie Le Goff (CERN)
    • 25
      The Graph of Our Mind

      The fast development of diffusion MRI techniques has made possible the mapping of the connections of the human brain on a macroscopic level: the graph consists of several hundred anatomically identified vertices, corresponding to 1-1.5 cm^2 areas of the gray matter (called Regions of Interest, ROIs), and two such vertices are connected by an edge if axonal fiber tracts are discovered between them. We have examined the resulting graphs from numerous viewpoints, and mapped the individual variabilities, the sex differences, the conservative edges, and the axonal development of the human brain in these studies. In our talk we will concentrate on the phenomenon of the Consensus Connectome Dynamics, and will explore the consequences of this surprising phenomenon.

      Speaker: Prof. Vince Grolmusz (Eötvös University)
    • 26
      Network-based analysis of common genetic background of diseases

      Most common diseases are polygenic; therefore, multiple, even hundreds of, genes out of the overall 23,000 can be responsible for a disease. Comorbidities, i.e. the simultaneous appearance of diseases, such as among neurological disorders, are expected to have a common genetic background, which can be explored using network-based approaches.
      Novel network-based workflows for genetic studies provide a powerful approach to investigate shared genetic factors, allowing the combination of various sources about genes and genetic networks.
      We aimed to explore and identify the common genetic background of neurological comorbidities using different levels of results of the network-based analysis: at the variant, gene and gene-set levels. To identify gene sets, we either used pathway databases or disease-associated gene lists. To analyze the data alongside network-based techniques, we also used an approach based on regression: the polygenic risk score method uses the strength and probability of multiple gene-disease associations to create scores for each individual and each given disease.
      Using these techniques, we were able to explore and create novel disease networks based on their inferred shared genetic background.

      Speaker: Bence Bruncsics (Department of Measurement and Information Systems, Budapest University of Technology and Economics)
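      A minimal, hypothetical polygenic risk score calculation illustrating the scoring idea mentioned above: each individual's score is a weighted sum of risk-allele counts. In practice the per-variant weights come from GWAS summary statistics; here they are random stand-ins.

      # Toy polygenic risk score: weighted sum of risk-allele counts per individual.
      import numpy as np

      rng = np.random.default_rng(0)
      n_individuals, n_variants = 100, 500
      genotypes = rng.integers(0, 3, (n_individuals, n_variants))  # 0/1/2 risk-allele counts
      effect_sizes = 0.05 * rng.standard_normal(n_variants)        # hypothetical per-variant weights

      prs = genotypes @ effect_sizes            # one score per individual
      high_risk = prs > np.percentile(prs, 90)  # e.g. flag the top decile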
    • 4:15 PM
      Coffee Break
    • 27
      Failure root-cause analysis by data analytics: concept and a case study

      For automotive companies, continuous improvement of the manufacturing process is a must in order to achieve optimal product quality and cost. The traditional approach to this improvement process is Model-Based Engineering, where hypotheses and cause-effect chains are discovered purely by considering the laws of physics.
      At Robert Bosch, engineers and data scientists are working on a concept which utilizes data mining methods to create hypotheses of failure causes (grey-box modeling).

      In this presentation, we present the general approach of combining data analytics and classical engineering and demonstrate the feasibility of the concept through a pilot big data project on plastic encapsulation process optimization.

      In order to provide timely data for a wide range of reporting and predictive tasks, a cluster running the Apache Hadoop stack was set up at the Robert Bosch Engineering Center as storage and analysis infrastructure for the vast amount of production data. Data pipelines were developed to enable storing data from different sources in the Hadoop Distributed File System (HDFS), with the Manufacturing Execution System (MES) as the main data source.

      Among other sources, sensor data from the molding press tool were used for modeling, in order to discover the most probable failure cause for the primary defect phenomenon, called delamination. The modeling algorithms, feature importance measures and the physical meaning of the most important factors are also presented.

      Speaker: Dr László Milán Molnár
    • 28
      Towards using ML in critical applications

      Machine Learning provides highly efficient solutions for complex problems. However, the "black-box" or at best grey-box nature of the technology prohibits its use in many critical applications, which necessitate a thoroughgoing justification of the correctness of the results delivered.
      One rapidly evolving approach is xAI (eXplainable AI), targeting the simultaneous delivery of a result and of arguments for its integrity.
      An alternative solution is to reuse the rich repertoire of measures collected in the field of fault-tolerant computing. One of the core problems addressed here is building high-assurance solutions out of not entirely reliable services by using fault-detecting wrappers and redundancy schemes.
      The presentation gives an overview of the synergy of AI and fault-tolerance measures, with an outlook on the integration of future xAI-based solutions.

      Speaker: Andras Pataricza (Budapest University of Technology and Economics)
    • 29
      Federated, privacy-preserving learning of large-scale probabilistic graphical models

      Probabilistic graphical models are successfully applied in many challenging problems of artificial intelligence and machine learning: in data and knowledge fusion, in causal inference, in trustworthy decision support systems, and in explanation generation. First, I argue that their wide applicability stems from their transparent, multifaceted semantics. Second, I show that the same property makes them an ideal representation for federated and privacy-preserving extensions in these areas. Finally, I demonstrate the applicability of probabilistic graphical models in exploring dependency models in large-scale health datasets.

      Speaker: Dr Péter Antal (Budapest University of Technology and Economics)
    • 30
      ML and VA tools in support of software testing

      Software testing makes up a significant part of software development processes. This is especially true in the case of a complex IT system like the IP Multimedia Subsystem (IMS). Our case study describes machine learning and visual analytics approaches to support a non-functional performance test, the endurance test. This test checks whether the software can work continuously without performance degradation. The analysis of data generated by such tests is a challenge in itself due to its high complexity and contextual nature. An additional problem is that often only a fraction of the data is annotated. Our approach includes different one-class classification models providing inputs for visualizations. We have also studied the problem of the interpretability of results in the case of failed tests and tried to trace what may have gone wrong on the basis of information gained from the models.

      Speaker: András Lukács (Eötvös Loránd University)
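      A hedged sketch of the one-class idea referred to above, using scikit-learn's IsolationForest on synthetic per-interval test metrics; this is not the presented system, only an illustration of flagging anomalous intervals for visual inspection.

      # One-class anomaly detection over endurance-test metrics (synthetic stand-in data).
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(0)
      metrics = rng.standard_normal((5000, 10))          # per-interval KPI vectors from a long run
      detector = IsolationForest(contamination=0.01, random_state=0).fit(metrics)
      labels = detector.predict(metrics)                 # -1 marks intervals flagged as anomalous
      suspect_intervals = np.flatnonzero(labels == -1)   # candidates for visual inspection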
    • 31
      Best Poster Award, Closing
      Speaker: Andras Telcs