In recent years, quantum machine learning (QML) has emerged as a rapidly expanding field within quantum algorithms and applications. Current noisy quantum devices already enable small-scale experiments, while increasingly powerful classical hardware allows both the simulation of quantum algorithms and the execution of robust classical AI applications.
Recently, hybrid...
The dynamics of network stability is an important research area, whether the network in question is a neural network, a power grid, or a communication or social network. Such networks are usually large graphs, ranging from roughly 10,000 nodes (power grids) to a million nodes (the connectome). Solving the second-order Kuramoto equations that describe such systems optimally is...
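As a minimal sketch of the model in question (not from the talk itself), the second-order Kuramoto equations theta_i'' = -alpha*theta_i' + omega_i + K*sum_j A_ij*sin(theta_j - theta_i) on a graph with adjacency matrix A can be integrated naively as follows; the coupling K, damping alpha, natural frequencies omega, and the ring topology are illustrative choices:

```python
import numpy as np

def kuramoto_2nd_order(A, omega, K=1.0, alpha=0.5, dt=1e-3, steps=10_000):
    """Explicit-Euler integration of the second-order Kuramoto model
    theta_i'' = -alpha*theta_i' + omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    (illustrative only; production solvers use better integrators)."""
    n = len(omega)
    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 2 * np.pi, n)          # phases
    dtheta = np.zeros(n)                          # phase velocities
    for _ in range(steps):
        # coupling term: sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        dtheta += dt * (-alpha * dtheta + omega + K * coupling)
        theta += dt * dtheta
    # order parameter r in [0, 1] measures synchronization (1 = fully locked)
    return theta, np.abs(np.exp(1j * theta).mean())

# toy network: a ring of 100 oscillators with mildly heterogeneous frequencies
n = 100
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)
theta, r = kuramoto_2nd_order(A, omega=np.random.default_rng(1).normal(0, 0.1, n))
print(f"order parameter r = {r:.3f}")
```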
In quantum mechanics, the wave function describes the state of a physical system. In the non-relativistic case, the time evolution of the wave function is governed by the time-dependent Schrödinger equation. In 1982, D. Kosloff and R. Kosloff proposed a method [1] to solve the time-dependent Schrödinger equation efficiently using the Fourier transform. In 2020, Géza István Márk published a...
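A minimal sketch of the Fourier-transform idea behind such methods (the split-operator scheme), assuming a 1D Gaussian wave packet in a harmonic potential and units hbar = m = 1; the grid size, time step, and potential are illustrative:

```python
import numpy as np

# split-operator (Fourier) propagation of the 1D time-dependent
# Schrödinger equation, in illustrative units hbar = m = 1
n, L, dt, steps = 512, 40.0, 0.01, 500
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)           # angular wavenumber grid

V = 0.5 * x**2                                    # harmonic potential (example)
psi = np.exp(-(x + 5.0)**2).astype(complex)       # displaced Gaussian packet
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)       # normalize

expV = np.exp(-0.5j * dt * V)                     # half-step potential factor
expT = np.exp(-0.5j * dt * k**2)                  # full-step kinetic factor

for _ in range(steps):
    psi = expV * psi                              # half step in position space
    psi = np.fft.ifft(expT * np.fft.fft(psi))     # kinetic step in k-space
    psi = expV * psi                              # half step in position space

print("norm after propagation:", (np.abs(psi)**2).sum() * dx)
```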
The dawn of explicit APIs, and particularly the introduction of Vulkan®, transformed the way we interact with GPU hardware. Despite the steep learning curve, the success and fast evolution of the Vulkan API show that there is room for such a new programming model in the industry, and we expect this model to gain even wider adoption in the API landscape going forward. The goal of our...
We present a novel system capable of recording data from vehicle-mounted sensors. The system is very flexible: digital cameras, LiDAR devices, and a GPS receiver are used at the current stage of the project, but new sensors can easily be added to the setup.
A data visualization system that can cooperate with the recording system has also been completed; its most interesting...
The GigaBit Transceiver (GBT) and the low power GBT (lpGBT) link architecture and protocol have been developed by CERN for physics experiments at the Large Hadron Collider, serving as a radiation-hard optical link between the detectors and the data-processing FPGAs (https://gitlab.cern.ch/gbt-fpga/). This presentation shows the details of how to implement a large array of GBT/lpGBT links (up to 48 x...
Many engineering applications involve the global dynamical analysis of nonlinear systems to explore their fixed points, periodic orbits, or chaotic behavior. The Simple Cell Mapping (SCM) algorithm is a tool for global dynamical analysis relying on the discretization of the state space, resulting in a finite set of cells corresponding to the possible states of the system and a discrete mapping...
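A minimal 1D sketch of the SCM idea, under the assumption of a simple point map x -> f(x) (the map, domain, and cell count below are illustrative; real applications use ODE flows and higher-dimensional cell spaces):

```python
import numpy as np

def simple_cell_mapping(f, xmin, xmax, n_cells):
    """Build the cell-to-cell map for a 1D system x -> f(x).
    Cells whose image leaves [xmin, xmax) are mapped to an extra sink cell."""
    centers = xmin + (np.arange(n_cells) + 0.5) * (xmax - xmin) / n_cells
    fx = f(centers)
    image = np.floor((fx - xmin) / (xmax - xmin) * n_cells).astype(int)
    image[(fx < xmin) | (fx >= xmax)] = n_cells          # sink cell index
    return image

def find_periodic_cells(image):
    """Follow each cell until a previously visited cell repeats; the cells
    that close a cycle approximate fixed points and periodic orbits."""
    n = len(image)
    periodic = set()
    for start in range(n):
        seen, c = {}, start
        while c < n and c not in seen:
            seen[c] = len(seen)
            c = image[c]
        if c < n:                                        # cycle closed in-domain
            periodic.update(cell for cell, order in seen.items()
                            if order >= seen[c])
    return sorted(periodic)

# toy map with attracting fixed points at x = -1 and x = +1
image = simple_cell_mapping(lambda x: 1.5 * x - 0.5 * x**3, -2.0, 2.0, 400)
print("periodic cells:", find_periodic_cells(image))
```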
Extremal combinatorial structures bear fundamental relevance in coding theory and various other applications. Their study involves solving hard computational problems, many of which are good candidates to be solved on Ising-based quantum annealers due to their limited size. Compared to other common benchmark problem classes these are not based on pseudorandomness, and most primal solution...
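To illustrate the kind of encoding involved (a generic example, not one of the talk's problem classes), a combinatorial problem such as max-cut can be written as an Ising Hamiltonian H(s) = sum_ij J_ij s_i s_j over spins s_i in {-1, +1}, whose ground state an annealer searches for; here it is brute-forced on a toy 5-node ring:

```python
import itertools
import numpy as np

# Ising energy H(s) = s^T J s over spins s_i in {-1, +1};
# toy max-cut instance on a 5-node ring (J_ij = +1 on edges, no field h):
# every cut edge contributes -1, so the ground state maximizes the cut
n = 5
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = 1.0

def energy(s):
    return s @ J @ s

best = min(itertools.product([-1, 1], repeat=n),
           key=lambda s: energy(np.array(s)))
print("ground state:", best, "energy:", energy(np.array(best)))
```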
This study employed two numerical models: the CBwaves and SEOBNRE algorithms, based on the post-Newtonian and effective-one-body approaches for binary black holes evolving on eccentric orbits. A total of 20,000 new simulations were performed for non-spinning configurations, and 240,000 simulations for aligned- and non-aligned spin configurations on a common grid of parameter values over the...
We present the results of a novel type of numerical simulation that realizes a rotating universe with a shear-free, rigid-body rotation in a Gödel-like metric. We run cosmological simulations with various degrees of rotation and compare the results to the analytical expectations of the Einstein--de Sitter and the $\Lambda$CDM cosmologies. To achieve this, we use the StePS N-body code that is...
In many image processing problems, we need to process polygons, which usually involves rasterization. In many such problems we also need to compute reductions over images, such as the average intensity or other metrics. In some cases, a combination of the two computations is desired: we need to use the area of polygons to restrict which pixels contribute to a reduction operation. Doing...
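A minimal CPU sketch of the combined operation, assuming an even-odd-rule point-in-polygon test and a mean-intensity reduction (the image, the triangle, and the helper name are illustrative):

```python
import numpy as np

def point_in_polygon(px, py, poly):
    """Even-odd-rule ray casting for coordinate arrays px, py against a
    polygon given as an (n, 2) array of (x, y) vertices.
    For brevity, assumes the polygon has no exactly horizontal edges."""
    inside = np.zeros(px.shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        crosses = ((y1 > py) != (y2 > py)) & \
                  (px < (x2 - x1) * (py - y1) / (y2 - y1) + x1)
        inside ^= crosses
    return inside

# reduction restricted by polygon area: mean intensity inside a triangle
h = w = 256
img = np.random.rand(h, w).astype(np.float32)        # toy image
yy, xx = np.mgrid[0:h, 0:w]
triangle = np.array([[30.0, 40.0], [220.0, 60.0], [120.0, 200.0]])
mask = point_in_polygon(xx.astype(float), yy.astype(float), triangle)
print("mean intensity inside polygon:", img[mask].mean())
```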
The tensor core is a hardware unit in most modern GPUs, first introduced in the NVIDIA Volta architecture. Like the well-known CUDA core (Streaming Processor, SP), the tensor core is also a computing unit of the Streaming Multiprocessor (SM), but the input data to the tensor cores are sets of matrices rather than the single values processed by the CUDA cores. Each tensor core provides a...
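As a rough functional model (not actual device code), the operation tensor cores expose can be emulated in numpy as a fused matrix multiply-accumulate D = A·B + C on tiles with half-precision inputs and single-precision accumulation; the 16x16x16 tile shape mirrors the CUDA wmma fragment configuration:

```python
import numpy as np

# functional model of a tensor-core matrix multiply-accumulate on one tile:
# D = A @ B + C with fp16 inputs A, B and fp32 accumulator C / result D
A = np.random.rand(16, 16).astype(np.float16)
B = np.random.rand(16, 16).astype(np.float16)
C = np.zeros((16, 16), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C   # accumulate in fp32
print(D.shape, D.dtype)
```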
High-density EEG processing is a time-consuming task due to the large number of electrodes, high sampling rates and the computational cost of the pre-processing algorithms. Typical pre-processing steps include high-pass and low-pass filtering, line noise removal, detection and interpolation of bad channels, power spectral density calculation and time-frequency analysis. This talk will present...
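A minimal sketch of such a pipeline on synthetic data, using scipy.signal (all parameters, the variance-based bad-channel rule, and the naive neighbor interpolation are illustrative simplifications):

```python
import numpy as np
from scipy import signal

# toy multichannel EEG preprocessing (illustrative: 64 channels,
# 1 kHz sampling, 10 s of data, 50 Hz line noise, one noisy channel)
fs, n_ch, n_s = 1000, 64, 10_000
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_ch, n_s))
eeg += 5 * np.sin(2 * np.pi * 50 * np.arange(n_s) / fs)   # line noise
eeg[7] *= 25                                              # simulate a bad channel

# band-pass 1-100 Hz (high-pass and low-pass in one zero-phase step)
sos = signal.butter(4, [1, 100], btype="bandpass", fs=fs, output="sos")
eeg = signal.sosfiltfilt(sos, eeg, axis=1)

# line noise removal with a 50 Hz notch filter
b, a = signal.iirnotch(50, Q=30, fs=fs)
eeg = signal.filtfilt(b, a, eeg, axis=1)

# crude bad-channel detection by outlier variance, then interpolation
var = eeg.var(axis=1)
bad = np.where(var > var.mean() + 3 * var.std())[0]
for ch in bad:
    neighbors = [c for c in (ch - 1, ch + 1) if 0 <= c < n_ch and c not in bad]
    eeg[ch] = eeg[neighbors].mean(axis=0)                 # naive interpolation

# power spectral density via Welch's method
freqs, psd = signal.welch(eeg, fs=fs, nperseg=1024, axis=1)
print("bad channels:", bad, "| PSD shape:", psd.shape)
```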
The presentation gives a short overview of the newly established partnership programmes introduced by the EU and discusses their objectives. The first partnership was launched in September 2018: this is the EuroHPC Joint Undertaking, funded by the Commission, the Participating States (35 altogether, from within and outside the EU), and a few organizations of...
Large language models have changed the way we think about language and are fueling the next industrial revolution. However, during their evolution, the focus quickly shifted from language to data. In this talk, I will briefly summarise how this change impacted linguistics, linguists, and other representatives of related fields in the humanities. Theoretically, the number of word forms per...
Large Language Models (LLMs) have been with us for a few years now. Their generalization capabilities are outstanding due to their sheer size; however, they still lack the benefits of information processing grounded in multimodality. In this review, we explore how early forms of this grounding could be achieved by constructing Large World Models (LWMs). We formulate method-agnostic general...
GPUs are increasingly common in scientific high-performance computing; however, their benefits are not uniform across all areas of scientific computing. In certain fields, such as sonochemistry, where delay differential equations can arise, large amounts of data must be accessed based on the current state of the simulation in an unaligned and uncoalesced manner. This usually hinders the...
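A minimal scalar sketch of where the irregular access comes from: a delay differential equation integrator must read its stored history at t - tau, and when thousands of such systems run in parallel on a GPU, each thread's delayed read lands at a different address, which breaks coalescing. The equation x'(t) = -x(t - tau) and the Euler scheme below are illustrative:

```python
import numpy as np

# explicit Euler for the scalar delay differential equation x'(t) = -x(t - tau)
# with constant history x(t) = 1 for t <= 0; the integrator must look up the
# stored history at t - tau on every step
tau, dt, T = 1.0, 1e-3, 10.0
n_delay = int(round(tau / dt))
steps = int(round(T / dt))

hist = np.ones(n_delay + steps + 1)    # hist[n_delay + i] holds x(i * dt)
for i in range(steps):
    x_now = hist[n_delay + i]
    x_delayed = hist[i]                # history lookup at t - tau
    hist[n_delay + i + 1] = x_now + dt * (-x_delayed)

print("x(T) ≈", hist[-1])
```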
We consider generalization bounds for two types of neural structures: feedforward Rectified Linear Unit (ReLU) networks, and special types of neural Ordinary Differential Equations (ODEs) and State Space Models (SSMs). Calculating the Rademacher complexity of both models involves computationally expensive norm calculations; therefore, we propose techniques to compute them efficiently.
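As a rough illustration of the kind of norm computation involved (not the talk's proposed technique), many norm-based Rademacher bounds for feedforward networks scale with the product of per-layer spectral norms, each of which can be approximated by power iteration; the layer shapes below are illustrative:

```python
import numpy as np

def spectral_norm(W, iters=50):
    """Approximate ||W||_2 (largest singular value) by power iteration
    on W^T W, avoiding a full SVD."""
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

# toy 3-layer network mapping R^32 -> R^64 -> R^32 -> R
layers = [np.random.randn(64, 32), np.random.randn(32, 64), np.random.randn(1, 32)]
bound_factor = np.prod([spectral_norm(W) for W in layers])
print("product of layer spectral norms:", bound_factor)
```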
One of the most successful treatments in cancer therapy is proton therapy, in which radiation planning is a key element. Photon CT is commonly used for this purpose; however, it does not provide sufficiently accurate information about the range of protons. Therefore, proton CT imaging is more favorable for radiation planning. Due to the Coulomb scattering of protons, it is important to...
Hadron therapy is a form of cancer therapy in which we aim to destroy cancerous cells that are hard to reach with surgery. Since this approach differs from classical gamma radiation therapy, the tomography methods used there are not sufficient for hadron therapy. Proton Computed Tomography (pCT) has been developed to achieve more accurate results for this kind of...
Small square grids stored as bit matrices, representing occupied sites together with some neighborhood definition, such as the von Neumann, Moore, or hexagonal neighborhood, arise in various Monte Carlo simulations and connection games. High-speed testing of a very large number of such grids for a connection between the opposite edges of the grid under the given neighborhood is often required. Because of the...
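A minimal sketch of one such test, assuming the von Neumann neighborhood and one machine word per grid row (Moore or hexagonal neighborhoods would only change the shift pattern); the 4x4 grid below is illustrative:

```python
def connected_top_bottom(grid, width):
    """grid: list of ints, one per row; bit i of grid[r] = site (r, i) occupied.
    Bitwise flood fill: returns True if occupied sites connect the top edge
    to the bottom edge under the von Neumann (4-)neighborhood."""
    mask = (1 << width) - 1
    reach = [0] * len(grid)
    reach[0] = grid[0]                     # seed with the occupied top row
    changed = True
    while changed:                         # grow the reachable set to a fixpoint
        changed = False
        for r, occ in enumerate(grid):
            grow = reach[r]
            grow |= (reach[r] << 1) | (reach[r] >> 1)   # left/right neighbors
            if r > 0:
                grow |= reach[r - 1]                    # neighbor above
            if r + 1 < len(grid):
                grow |= reach[r + 1]                    # neighbor below
            grow &= occ & mask                          # stay on occupied sites
            if grow != reach[r]:
                reach[r] = grow
                changed = True
    return reach[-1] != 0

# 4x4 example: a winding occupied path connects the top and bottom edges
grid = [0b0001,
        0b0111,
        0b0100,
        0b1100]
print(connected_top_bottom(grid, 4))       # True
```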
Recent advances in laser technology and nanoplasmonics, combined with heavy-ion collision physics, lead to a new, previously unexplored road towards fusion energy production research. Here we explore recent advances in the theoretical and simulation side of the NAno-Plasmonic Laser Inertial confinement Fusion Experiment (NAPLIFE). We study how gold nanoparticle doping enhances medium absorption under laser...