Software is flexible; specialized hardware is extremely fast. So why not write software, then turn it into a computer chip? This is what Hastlayer (https://hastlayer.com) does by transforming .NET software into electronic circuits. The result is faster and uses less power, while you simply keep on writing software. You may not be able to tell just by looking at it, but behind some function calls...
The area of online machine learning in big data streams covers algorithms that are (1) distributed and (2) work from data streams with only a limited possibility to store past data. The first requirement mostly concerns software architectures and efficient algorithms. The second one also imposes nontrivial theoretical restrictions on the modeling methods: In the data stream model, older data...
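The storage restriction described above can be made concrete with a small sketch (illustrative, not from the talk): an online SGD learner that sees each stream element exactly once and keeps only its current parameters, so memory stays constant regardless of stream length. The toy stream and learning rate are made up for the example.

```python
import random

def online_sgd(stream, lr=0.05):
    # Online linear regression: one gradient step per example,
    # then the example is discarded -- no past data is stored.
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y
        w -= lr * err * x   # gradient step on this single example
        b -= lr * err
    return w, b

random.seed(0)
# Simulated stream: y = 2x + 1 plus Gaussian noise, never materialized as a list.
stream = ((x, 2 * x + 1 + random.gauss(0, 0.1))
          for x in (random.uniform(-1, 1) for _ in range(5000)))
w, b = online_sgd(stream)
```

The generator is consumed lazily, which mirrors the data stream model: by the time a new example arrives, the old ones are gone.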
Federated learning is aimed at implementing machine learning based on training data stored by many personal devices. The key idea is that data is never transferred to a central location; instead, machine learning models are trained locally and then aggregated centrally. Our research aims at reducing the burden on the central cloud component by using local communication on the Edge. Devices...
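The train-locally, aggregate-centrally pattern can be sketched in a few lines (a hypothetical FedAvg-style illustration, not the talk's actual system; the "model" is reduced to a local mean so the aggregation step stays visible):

```python
def local_train(data):
    # Stand-in for real local training: each device fits its own model
    # on private data that never leaves the device.
    return sum(data) / len(data)

def federated_average(devices):
    # Only model parameters travel; the server averages them,
    # weighting each device by its local sample count.
    total = sum(len(d) for d in devices)
    return sum(local_train(d) * len(d) / total for d in devices)

devices = [[1.0, 2.0, 3.0], [10.0], [4.0, 6.0]]  # made-up private datasets
global_model = federated_average(devices)
```

With a mean as the local model and sample-count weighting, the aggregate equals the mean over all devices' data, even though no raw data was ever pooled.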
Dimensioning and validating large-scale, highly available computing and communication systems necessitate extensive benchmarking campaigns, which generate vast amounts of measurement data. Moreover, models derived from the evaluations of these campaigns should be scalable and portable in the sense that the derived conclusions have to be applicable in a variety of deployment configurations of...
With the recent explosion of available data, you can have millions of unlabeled examples while obtaining labels remains costly. Active learning is a machine learning technique that aims to find the potentially most informative instances in an unlabeled dataset, allowing you to label them and improve classification performance.
[modAL][1] is a new active learning framework for...
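The core query loop is simple enough to sketch in plain Python (an illustrative uncertainty-sampling example with a toy 1-D classifier and made-up data; modAL packages this pattern behind its learner API rather than exactly this code):

```python
import math

def predict_proba(centroids, x):
    # Toy two-class classifier: softmax over negative distances
    # to one centroid per class.
    scores = [math.exp(-abs(x - c)) for c in centroids]
    s = sum(scores)
    return [v / s for v in scores]

def most_uncertain(centroids, pool):
    # Uncertainty sampling: pick the unlabeled point whose top class
    # probability is lowest, i.e. the one nearest the decision boundary.
    return max(pool, key=lambda x: 1 - max(predict_proba(centroids, x)))

centroids = [0.0, 4.0]        # current model, decision boundary at 2.0
pool = [0.5, 1.9, 3.6]        # unlabeled pool
query = most_uncertain(centroids, pool)
```

The selected point would then be sent to a human annotator, and the model retrained with the new label before the next query round.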
Biological and artificial agents commit errors. These errors are fundamentally different and reveal something about the types of computations these agents are performing. Immense advances in machine learning help us understand what underlies human behavior, and understanding human behavior provides insights into the challenges machine learning is faced with. In this talk I will present how our lab...
Temporal events are an inherent part of every industrial, business, or general process. The operational characteristics of these often highly complex processes are well represented by the temporal events they generate. However, extracting useful knowledge from these large datasets, and building models from the extracted knowledge, is by no means an easy task. Therefore, in our...
The ACUMOS project of the Linux Foundation aims at integrating AI tools to support managers in building their next application, from the business plan to monitoring and customer satisfaction estimation. However, tools for close collaboration with human intelligence have not been included yet. I introduce our tool and demonstrate its capabilities for solving basic tasks with more or less help from the...
There is an industry-wide shift happening in automotive technology. Traditional computer systems and software technology cannot keep up with the increasing complexity of the problems we need to solve. AI shows great potential to be a key technology for autonomous driving, yet there are many challenges in taking research results to a great real-world product. The complexity of the real world...
In the last five years, quantum computers migrated from intellectual curiosity to the realm of technological evolution. The first commercial quantum computer (D-Wave) does not resemble the ideal quantum computer physicists and mathematicians dreamed of for decades. The most dramatic difference between a Turing-type machine and the new architecture is that the latter is not based on logical steps. While for the...
The goal of this talk is to give the mathematical background of what happens when we understand a phenomenon, independently of whether it happens in a computer or in a human mind. We give a definition of "understanding" as a special representation of the input. We prove that such a representation exists, and demonstrate that with it all AI tasks (classification, regression, compression, generation)...
In the last two to three decades, researchers in human language technologies have tried to apply various statistical methods to understand what is encoded in large text corpora, and how -- with limited success. The past five to six years have fundamentally changed both the basic research paradigm and the level of success in this area. Continuous vector space models, neural networks, deep...
Since about 1950, renowned researchers have claimed from time to time that artificial intelligence (AI) will reach human intelligence in about 10 years. It hasn't happened. On the other hand, the evolution of computational power is exponential and the exponent of Moore's Law is large. Churchland's question -- is the brain more complex than clever? -- is still here. What are we missing?
I argue...
One-dimensional (1D) time sequences of spatial, three-dimensional (3D) simulation or image data may implicitly carry dynamical information about their embedded subregions. Continuous hypersurfaces can be constructed for the full 3+1D data that enclose certain spacetime regions. Here, we demonstrate that such hypersurfaces may be viewed as 3D velocity vector fields, which explicitly characterize...
According to Daniel Keim: "Visual analytics combines automated analysis techniques with interactive visualisations for an effective understanding, reasoning and decision making on the basis of very large and complex datasets".
The effectiveness of visual analytics essentially depends on the seamless interplay between automated analysis and interactive visualisation. In particular, the latter...
The fast development of diffusion MRI techniques has made possible the mapping of the connections of the human brain on a macroscopic level: the graph consists of several hundred anatomically identified vertices, corresponding to 1-1.5 cm^2 areas of the gray matter (called Regions of Interest, ROIs), and two such vertices are connected by an edge if axonal fiber tracts are discovered between them....
Most common diseases are polygenic; therefore, many genes, even hundreds out of the overall 23,000, can be responsible for a disease. The simultaneous appearance of diseases (comorbidities), such as among neurological disorders, is expected to have a common genetic background, which can be explored using network-based approaches.
Novel network-based workflows for genetic studies provide a...
For automotive companies, continuous improvement of the manufacturing process is a must in order to achieve optimal product quality and cost. The traditional approach to this improvement process is Model Based Engineering, where hypotheses and cause-effect chains are discovered purely by considering the laws of physics.
At Robert Bosch, engineers and data scientists are working on a concept...
Machine Learning provides highly efficient solutions for complex problems. However, the "black-box", or at most grey-box, nature of the technology prohibits its use in many critical applications that necessitate a thoroughgoing justification of the correctness of the results delivered.
One rapidly evolving approach is xAI (eXplainable AI) targeting the simultaneous delivery of a result and...
Probabilistic graphical models are successfully applied in many challenging problems of artificial intelligence and machine learning: in data and knowledge fusion, in causal inference, in trustworthy decision support systems or explanation generation. First, I summarize that their wide applicability stems from their transparent, multifaceted semantics. Second, I show that the same property...
Software testing makes up a significant part of software development processes. This is especially true in the case of a complex IT system like IP Multimedia Subsystem (IMS). Our case study describes machine learning and visual analytics approaches to support a non-functional performance test, the endurance test. This test checks whether the software can continuously work without performance...
Creating electrical circuit elements from a single atom or molecule is one of the central challenges of current molecular electronics research. Nowadays, the conductance of a single molecule can be investigated with high mechanical stability using the mechanically controlled break junction (MCBJ) technique.
Among the large number of conductance traces generated by break junction measurements,...
A novel method for nuclei detection is proposed to process diverse microscopy images. Our method incorporates deep learning techniques such as automatic training data generation from the input test images (image style transfer learning), which allows the model to adapt to the test set prior to training, even with limited data. The proposed method was originally designed for the Kaggle Data Science...
Deep learning algorithms have become more and more popular for solving image processing tasks in the biomedical field. One such task is cell detection and segmentation in differential interference contrast (DIC) microscopy images of brain tissue. These algorithms require hundreds or thousands of images with ground-truth segmentations for training to be highly accurate, and we lack similar publicly...
This talk is an overview of a new collaboration between five institutes (Renyi, SZTAKI, PPKE, ELTE, SZTE) in the framework of the National Excellence Program. Since Hungary is traditionally strong in mathematics, an important goal of the program is to use this resource and to involve more mathematicians in the dynamically developing field of machine learning. Another goal is to explore new...