News and Announcements
- June 16, 2025

Our article Reconsidering Sparse Sensing Techniques for Channel Sounding using Splicing has been accepted for publication in IEEE Transactions on Mobile Computing.
Multi-band splicing offers a promising solution to extend existing band-limited communication systems to support high-precision sensing applications. This technique involves performing narrow-band measurements at multiple center frequencies, which are then combined to effectively increase the bandwidth without changing the sampling rate. In this paper, we introduce a mmWave channel sounder based on multi-band splicing, leveraging the sparse nature of wireless channels through compressed sensing and sparse recovery techniques for channel reconstruction. We focus on three sparse recovery methods: the widely used grid-based orthogonal matching pursuit (OMP) algorithm as a baseline; our newly developed two-stage mmSplicer algorithm, which extends OMP with an additional stage to improve its performance for our application; and Net-SpaRSA, our adaptation of sparse reconstruction by separable approximation (SpaRSA) optimized for wireless applications.
(link to more information)
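For context on the OMP baseline mentioned above, the following is a minimal sketch of grid-based orthogonal matching pursuit for sparse recovery; the random dictionary and fixed sparsity are illustrative assumptions, and this is neither the mmSplicer nor the Net-SpaRSA algorithm from the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y ≈ A @ x.

    A: (m, n) dictionary whose columns are candidate multipath atoms.
    y: (m,) spliced measurement vector.
    k: assumed channel sparsity (number of dominant paths).
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1], dtype=A.dtype)
    for _ in range(k):
        # Greedily pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy example: recover a 3-sparse vector from a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 80, 200]] = [1.0, -0.5, 0.8]
x_hat = omp(A, A @ x_true, k=3)
```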
- June 15, 2025

Our article Rejuvenating IRS: AoI-based Low Overhead Reconfiguration Design has been accepted for publication in IEEE Transactions on Wireless Communications.
Intelligent reflective surface (IRS) technologies help mitigate undesirable effects in wireless links by steering the communication signal between transmitters and receivers. An IRS improves the communication link but inevitably introduces more communication overhead, especially in mobile scenarios, where the user's position must be estimated frequently to re-adjust the IRS elements. Such an operation requires balancing the amount of training versus data time slots to optimize the communication performance of the link. Studying this balance with the age of information (AoI) framework, we address the question of how often an IRS needs to be updated to achieve the lowest possible overhead and the maximum freshness of information. We derive the corresponding analytical solution for a mobile scenario, where the transmitter is static and the mobile user (MU) follows a random waypoint mobility model. We provide a closed-form expression for the average peak age of information (PAoI) as a metric to evaluate the impact of the IRS update frequency.
(link to more information)
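As background on the PAoI metric: peak AoI is the age of the information at the receiver just before the next update arrives. The sketch below computes the empirical average PAoI from update timestamps, a simple numerical counterpart to the closed-form expression derived in the paper.

```python
import numpy as np

def average_paoi(gen_times, recv_times):
    """Empirical average peak AoI from update timestamps.

    The i-th peak is the age observed just before update i+1 is received:
    recv_times[i+1] - gen_times[i] (assuming in-order delivery).
    """
    gen = np.asarray(gen_times)
    recv = np.asarray(recv_times)
    return (recv[1:] - gen[:-1]).mean()

# Example: updates generated every 10 ms, each delayed by 2 ms in the channel.
gen = np.arange(0.0, 0.1, 0.01)
print(average_paoi(gen, gen + 0.002))  # 0.012 s: inter-update gap + delay
```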
- June 13, 2025

Our article Incentive-based Platoon Formation: Optimizing the Personal Benefit for Drivers has been accepted for publication in IEEE Open Journal of Intelligent Transportation Systems.
In this paper, we propose a novel platoon formation algorithm that optimizes the personal benefit for drivers of individual passenger cars. For computing vehicle-to-platoon assignments, the algorithm utilizes a new metric that we propose to evaluate the personal benefits of various driving systems, including platooning. By combining fuel and travel time costs into a single monetary value, drivers can estimate overall trip costs according to a personal monetary value for time spent. This provides an intuitive way for drivers to understand and compare the benefits of driving systems such as human driving, adaptive cruise control (ACC), and platooning. Unlike previous similarity-based methods, our proposed algorithm forms platoons only when doing so is beneficial for the driver, rather than for the sake of platooning alone.
(link to more information)
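The combined cost metric can be illustrated with a small sketch; the linear form and the parameter names below are our illustrative assumptions, not the paper's exact model.

```python
def trip_cost(fuel_liters, fuel_price_per_liter, travel_time_h, value_of_time_per_h):
    """Monetary trip cost: fuel expenses plus the driver's valuation of time spent."""
    return fuel_liters * fuel_price_per_liter + travel_time_h * value_of_time_per_h

# Example: a driver compares driving solo against joining a platoon that
# saves fuel but adds some formation/detour time.
solo = trip_cost(6.0, 1.80, 1.00, 15.0)     # 25.80 (monetary units)
platoon = trip_cost(5.2, 1.80, 1.05, 15.0)  # 25.11 -> joining pays off
```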
- June 11, 2025

Osman Tugay Başaran presented our paper Next-Gen AI-on-RAN: AI-native, Interoperable, and GPU-Accelerated Testbed Towards 6G Open-RAN at the IEEE International Conference on Communications (ICC 2025), Montréal, Canada.
Our recent work addresses a critical challenge in the journey from 5G to AI-native 6G: how to meet stringent latency and performance demands in O-RAN networks while simultaneously executing AI/ML workloads. We leverage the power of NVIDIA Aerial RAN CoLab (ARC) and the Aerial SDK to offload complex L1 and L2 signal processing onto GPUs. This not only slashes computational overhead but also opens the door to near real-time inference, which is essential for latency-sensitive 6G applications. A novel aspect of our approach is co-locating training and inference on the same NVIDIA GPU-accelerated platform.
- June 10, 2025

Our article Robust Matroid Bandit Optimization: Near-Optimal Rates under Adversarial Contamination has been accepted for publication in Elsevier Theoretical Computer Science.
We study the matroid bandit optimization problem, a fundamental and broadly applicable framework for combinatorial multi-armed bandits where the action space is constrained by a matroid. In particular, we address the challenge of designing algorithms that remain effective under adversarial contamination of feedback rewards, which may severely degrade performance or even mislead existing methods. Our main contribution is an efficient and robust algorithm named ROMM, which builds upon the principle of optimistic matroid maximization and leverages robust statistical estimators to assess base arm quality in polynomial time.
(link to more information)
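To convey the flavor of such an approach, here is a hedged sketch that combines a trimmed-mean estimator with optimistic greedy selection on a uniform matroid (select the best k arms), where greedy selection is exactly optimal; this is an illustration only, not the ROMM algorithm itself.

```python
import numpy as np

def trimmed_mean(samples, trim_frac=0.1):
    """Robust base-arm estimate: drop the largest and smallest samples before
    averaging, limiting the influence of adversarially contaminated rewards."""
    s = np.sort(samples)
    t = int(len(s) * trim_frac)
    return s[t:len(s) - t].mean() if len(s) > 2 * t else s.mean()

def select_superarm(history, k, t):
    """Optimistic greedy selection on a uniform matroid of rank k: rank arms
    by trimmed mean plus an exploration bonus and take the top k."""
    ucb = [trimmed_mean(np.array(h)) + np.sqrt(2 * np.log(t) / len(h))
           for h in history]
    return np.argsort(ucb)[-k:]

# Example round: 5 base arms with 20 past samples each; pick the best 3.
rng = np.random.default_rng(1)
history = [list(rng.normal(mu, 1.0, 20)) for mu in [0.1, 0.5, 0.9, 0.3, 0.7]]
print(select_superarm(history, k=3, t=100))
```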
- June 08, 2025

Christos Laskos presented our paper Latency Analysis of SDR-based Experimental C-RAN / O-RAN Systems at the IEEE ICC Workshop on Tactile Internet with Human-in-the-Loop (TIHL 2025), Montréal, Canada.
In this paper, we present results from an extensive experimental evaluation of latency in commonly used SDR platforms, including the USRP B210, N210, N310, and X410. For this, we implemented a novel round trip time (RTT) measurement method at the SDR driver level for precise RTT analysis. Our findings highlight the impact of SDR hardware, the connection to the host computer, sampling rates, and protocol optimizations. While all tested SDRs meet the latency requirements for 4G/5G-based C-RAN implementations (less than 500 µs), some approach the 25 µs one-way O-RAN delay, and none currently satisfy the more stringent 9 µs requirement of WiFi. We identify key optimizations, such as reducing the maximum transmission unit (MTU) used in the fronthaul link layer and leveraging the data plane development kit (DPDK) for low-latency networking, that significantly improve SDR performance. Our results show that, in an optimal setup using a USRP X410, a 100 GbE fronthaul, DPDK, and an optimal MTU size, the worst-case RTT stays below 65 µs.
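The bookkeeping behind such an RTT measurement can be sketched as follows; the send/loopback hooks are hypothetical placeholders (the paper instruments the SDR driver directly), and only the timing logic is shown.

```python
import time
import numpy as np

def measure_rtt(send_burst, wait_for_loopback, n_rounds=1000):
    """Collect round-trip times: timestamp a burst at transmission, timestamp
    its loopback reception, and report worst-case and percentile statistics."""
    rtts = []
    for _ in range(n_rounds):
        t0 = time.monotonic_ns()
        send_burst()          # hypothetical driver-level transmit hook
        wait_for_loopback()   # hypothetical blocking receive hook
        rtts.append((time.monotonic_ns() - t0) / 1e3)  # ns -> µs
    rtts = np.array(rtts)
    return {"worst_us": rtts.max(),
            "p99_us": np.percentile(rtts, 99),
            "median_us": np.median(rtts)}

# Dummy hooks stand in for real SDR driver calls in this self-contained example.
print(measure_rtt(send_burst=lambda: None, wait_for_loopback=lambda: None))
```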
- June 03, 2025

Marie-Christin H. Oczko presented our paper Explainable LSTM-Based Cyclist Intention Prediction at Intersections.
In this paper, we propose a bidirectional, stacked LSTM intention prediction model utilizing real-world smartphone cycling traces. We show that even imprecise GPS data are sufficient to predict right turns and straight-going traces with a certainty of 90% at 45 m before the intersection center, and left turns at 28 m before it; as a result, even the intention of the fastest cyclist in the data set is recognized 4.19 s before reaching the center.
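For illustration, a minimal bidirectional, stacked LSTM classifier of the kind described above might look as follows; the layer sizes, feature set, and three-class output are our assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CyclistIntentionLSTM(nn.Module):
    """Bidirectional, stacked LSTM mapping a GPS trace segment to one of
    three intentions: left turn, right turn, or going straight."""
    def __init__(self, n_features=4, hidden=64, layers=2, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

# Example: a batch of 8 traces, 50 time steps, 4 features
# (e.g., latitude, longitude, speed, heading).
model = CyclistIntentionLSTM()
logits = model(torch.randn(8, 50, 4))    # shape (8, 3)
```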
Agon Memedi presented our poster Simulator for Reinforcement Learning-based Resource Management in Vehicular Edge Computing.
In this poster, we present an open-source, modular, lightweight, discrete-event simulation framework that integrates state-of-the-art tools for improved performance evaluation. By incorporating realistic mobility traces, it enables evaluating the performance and scalability of different RL-based task scheduling and resource allocation policies in diverse scenarios.
Both works were presented at the 16th IEEE Vehicular Networking Conference (VNC 2025), which was held in Porto, Portugal.
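As background on the discrete-event core of the simulator poster, the following generic event-loop sketch shows the mechanism such a framework builds on; it is not the actual simulator's API.

```python
import heapq

class EventLoop:
    """Minimal discrete-event core: events are (time, seq, callback) tuples
    executed in timestamp order."""
    def __init__(self):
        self.queue, self.now, self._seq = [], 0.0, 0

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.now + delay, self._seq, callback))
        self._seq += 1  # tie-breaker keeps simultaneous events in FIFO order

    def run(self, until):
        while self.queue and self.queue[0][0] <= until:
            self.now, _, callback = heapq.heappop(self.queue)
            callback()

# Example: a vehicle offloads a task at 1 ms; the edge server finishes 5 ms later.
loop = EventLoop()
loop.schedule(0.001, lambda: loop.schedule(
    0.005, lambda: print(f"task done at {loop.now:.3f} s")))
loop.run(until=1.0)
```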
- May 30, 2025
We are looking for talented students who want to pursue their PhD studies at the School of Electrical Engineering and Computer Science, TU Berlin, Germany. The positions are part of the Telecommunication Networks group headed by Prof. Falko Dressler.
Conventional wireless communication systems are based on radio frequency waves. While these are perfectly suited for classic telecommunication tasks, communicating nodes of nano- or micro-scale size, possibly interfacing with biological cells and/or operating in challenging environments (e.g., liquids), need to employ alternative communication paradigms and technologies such as molecular communication. In the IoBNT project, we will investigate and develop a communication system targeting precision medicine and microscale industrial applications. The IoBNT is tailored to coordinate monitoring and actuation in the human body through a communication platform that also connects nanodevices and external gateways. The IoBNT will integrate radio, ultrasonic, and molecular communication schemes in the context of 6G+ wireless networks. The candidate is expected to contribute to these research activities in the scope of the ongoing IoBNT and NaBoCom projects.
(link to more information)
- May 28, 2025

Our team member Jorge Torres Gómez co-chaired a special session on machine learning and IoBNT networks. The session took place at the IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN 2025), Barcelona, Spain, and explored the most recent work on applying machine learning methods to further advance potential IoBNT applications. Speakers targeted various applications, including monitoring biomarkers in human vessels, estimating the location of cancer cells, and plant monitoring. The authors elaborated on the training and deployment of deep learning architectures such as convolutional networks, feed-forward neural networks, and autoencoders. We look forward to your contributions to the next edition of the ICMLCN conference.
- May 28, 2025

Osman Tugay Başaran presented our paper XAI-Enhanced Bilateral Molecular Communication: Revealing Cancer Microenvironment Dynamics via Extracellular Tumor Vesicles at the IEEE International Conference on Machine Learning for Communication and Networking (ICMLCN 2025), which was held in Barcelona, Spain. In this study, we present a neural network model designed to accurately estimate intercellular distances within the tumor microenvironment by analyzing the dynamics of extracellular vesicles (EVs). Additionally, we integrate advanced explainable AI (XAI) methods to reveal critical biological insights and ensure the transparency of AI-driven predictions.
(link to more information)
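As a toy illustration of the estimate-then-explain pipeline described above, the sketch below pairs a small regressor with plain gradient saliency; the architecture, feature count, and saliency method are our assumptions, as the paper uses dedicated XAI methods.

```python
import torch
import torch.nn as nn

# Toy regressor mapping EV-dynamics features to an intercellular distance estimate.
model = nn.Sequential(nn.Linear(6, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 1))

# Hypothetical input: 6 features describing EV release/arrival dynamics.
x = torch.randn(1, 6, requires_grad=True)
distance = model(x).squeeze()

# Gradient saliency as a simple stand-in for the paper's XAI methods:
# |d(distance)/d(feature)| scores each input feature's influence.
distance.backward()
print(x.grad.abs().squeeze())
```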