Machine Learning researcher specializing in distributed training, such as split and federated learning, and in optimization for resource-constrained systems. Experienced in both academic and industrial research, with strong programming skills.
* Items are sorted by decreasing knowledge level.
NEWS
25/November/2025: Today I started my one-month secondment at the TU Delft Faculty of Technology, Policy and Management, hosted by Dr. Y. (Aaron) Ding, as part of the Marie Curie ENSURE 6G research program.
10/November/2025: AAAI 2026: Our paper "Data Heterogeneity and Forgotten Labels in Split Federated Learning" was accepted to the main technical track! Check the git-repo here
12/July/2025: Site is up! I am actively searching for a new role!
Topic: Decentralized and distributed Machine Learning (ML) for resource-constrained devices,
such as mobile and IoT devices. Throughout my PhD,
I have studied the challenges of Federated and Split Learning,
built frameworks that support such operations, and developed optimization algorithms that
improve system performance and training. In general, my interests are in building
systems for distributed ML and in optimizing ML training under such constraints.
Thesis Title: Support for Parallel Drone-based Task Execution at Multiple Edge Points thesis link (English version)
Experience
Research Intern at Telefónica Innovación Digital, Barcelona, Spain
July - December 2024
This work is part of my PhD journey, studying the effect of Catastrophic Forgetting in Parallel Split Learning under high data heterogeneity.
By the end of the internship we had written a paper (accepted at AAAI 2026 [C4]). This work has also been accepted by Telefónica's Patent Office.
This research visit was part of my PhD. We studied Parallel Split Learning from a more theoretical perspective. In detail, inspired by the parallel machine scheduling problem, we built a new model that fully describes the system. Furthermore, we formalized two optimization problems that minimize the training delay while considering key system parameters. The publications (INFOCOM 2024) [C2] and (TMC 2025) [J1] are the outcome of this visit.
Topic of the project: Deployment of DStellar on Outscale and performance analysis.
Automated cloud deployment using AWS and Ansible.
Also built and gathered results using Buildbot.
Learned to work in an Agile Scrum team.
List of publications
Conferences/Workshops
[C4]: Tirana, J., Tsigkari, D., Noguer S. D., & Kourtellis, N. (2026, January). Data Heterogeneity and Forgotten Labels in Split Federated Learning. In Proceedings of the AAAI Conference on Artificial Intelligence.
[C3]: Tirana, J., Lalis, S., & Chatzopoulos, D. (2025, March). Estimating the Training Time in Single- and Multi-Hop Split Federated Learning. In Proceedings of the 8th International Workshop on Edge Systems, Analytics and Networking (pp. 37-42).
[C2]: Tirana, J., Tsigkari, D., Iosifidis, G., & Chatzopoulos, D. (2024, May). Workflow optimization for parallel split learning. In IEEE INFOCOM 2024-IEEE Conference on Computer Communications (pp. 1331-1340). IEEE.
[C1]: Tirana, J., Pappas, C., Chatzopoulos, D., Lalis, S., & Vavalis, M. (2022, July). The role of compute nodes in privacy-aware decentralized AI. In Proceedings of the 6th International Workshop on Embedded and Mobile Deep Learning (pp. 19-24).
Journals
[J1]: Tirana, J., Tsigkari, D., Iosifidis, G., & Chatzopoulos, D. (2025). Minimization of the Training Makespan in Hybrid Federated Split Learning. IEEE Transactions on Mobile Computing, (01), 1-18.
Book Chapters
[B2]: Byabazaire, J., Tirana, J., Chouliaras, A., Koutsos, V., Aslanidis, T., Panagiotidis, I., & Chatzopoulos, D. (2025). Deep learning and the Internet of Things: Applications, challenges and opportunities. In Internet of Things A to Z: Technologies and Applications, Second Edition. In press. Wiley.
[B1]: Tirana, J., & Chatzopoulos, D. (2025). Split learning and synergetic inference: When IoT collaborates with the cloud-edge continuum. In Advances in the Internet of Things (pp. 203-227). CRC Press.
Dissertations
[D1]: Tirana, J., & Lalis, S. (2021). Support for Parallel Drone-based Task Execution at Multiple Edge Points. Master's Thesis, University of Thessaly (Electrical and Computer Engineering).
Research & Coding Project
Studying the impact of data heterogeneity in Split Learning.
Specifically, we conducted a systematic analysis with Deep Neural Networks (ResNet, VGG, MobileNet) using various datasets.
As a result, we identified the existence of catastrophic forgetting (CF) during training.
Finally, we proposed a new ML solution for tackling CF caused by non-IID data.
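For illustration, label-skewed (non-IID) client splits of the kind studied in such work are commonly generated with a Dirichlet distribution over per-class proportions. The sketch below is a generic example with made-up parameters, not the exact setup from the paper; smaller `alpha` means stronger heterogeneity.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet label skew.

    Smaller alpha -> more heterogeneous (non-IID) partitions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Fraction of class c that each client receives
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_idx[client].extend(part.tolist())
    return [np.array(ix) for ix in client_idx]
```

With `alpha=0.5` and a handful of clients, most clients end up dominated by a few classes, which is exactly the regime where catastrophic forgetting shows up.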
In this work, we propose SplitPipe, a Machine Learning as a Service (MLaaS) modular and extensible framework for collaborative and distributed training. SplitPipe processes high-level tasks (e.g., the description of the model to be trained) and orchestrates the training process based on a novel Split Learning (SL) protocol. Additionally, SplitPipe supports multi-hop SL-based training, which enhances data privacy and relaxes memory demands.
• Tools: C++ and LibTorch; devices: Raspberry Pi and Jetson
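As a rough sketch of the underlying SL idea (not SplitPipe's actual C++/LibTorch implementation): the client computes activations up to a cut layer, the server finishes the forward and backward passes, and the gradient of the "smashed" activations travels back to the client. Layer shapes and the learning rate below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative two-part model: the client owns W_c, the server owns W_s.
W_c = rng.normal(scale=0.1, size=(4, 8))   # client-side layer (up to the cut)
W_s = rng.normal(scale=0.1, size=(8, 1))   # server-side layer
LR = 0.05

def split_train_step(x, y):
    """One split-learning step: client forward, server forward + backward,
    gradient of the smashed data returned to the client."""
    global W_c, W_s
    smashed = np.maximum(x @ W_c, 0.0)       # client forward (ReLU cut layer)
    pred = smashed @ W_s                     # server forward
    loss = float(np.mean((pred - y) ** 2))   # server-side MSE loss
    g_pred = 2 * (pred - y) / y.size         # server backward
    g_Ws = smashed.T @ g_pred
    g_smashed = g_pred @ W_s.T               # sent back over the network
    g_cut = g_smashed * (smashed > 0)        # client backward through ReLU
    g_Wc = x.T @ g_cut
    W_s = W_s - LR * g_Ws                    # each side updates its own part
    W_c = W_c - LR * g_Wc
    return loss
```

Only the smashed activations and their gradient cross the client-server boundary; raw data and label-free client weights stay local, which is where the privacy argument for SL comes from.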
In this work, we consider a parallel SL system with multiple helper nodes. Specifically, we focus on orchestrating the workflow of this system, which is critical in highly heterogeneous systems. In particular, we formulate the joint problem of client-helper assignments and scheduling decisions to minimize the training makespan. We propose a solution method based on the decomposition of the problem by leveraging its inherent symmetry.
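To give a flavor of the underlying scheduling problem (this is the classic longest-processing-time-first heuristic for makespan minimization on parallel machines, not the decomposition method from the paper), a client-to-helper assignment can be sketched as:

```python
import heapq

def lpt_assign(task_times, n_helpers):
    """Greedy LPT assignment of clients to helpers.

    Assigns the longest remaining task to the least-loaded helper;
    a well-known heuristic for parallel-machine makespan minimization.
    """
    heap = [(0.0, h) for h in range(n_helpers)]  # (current load, helper id)
    heapq.heapify(heap)
    assignment = {}
    for client, t in sorted(enumerate(task_times), key=lambda p: -p[1]):
        load, helper = heapq.heappop(heap)
        assignment[client] = helper
        heapq.heappush(heap, (load + t, helper))
    makespan = max(load for load, _ in heap)
    return assignment, makespan
```

The real problem is harder than this toy: helpers are heterogeneous, and the schedule (ordering of forward/backward passes), not just the assignment, affects the makespan.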
Developed a distributed system consisting of a server in the cloud and multiple servers on edge nodes. Each edge node is located near a group of drones, with direct access to them. Edge nodes can process the generated data in parallel and independently of each other. The system offers users a shell interface through which one can initiate tasks on specific edge nodes and afterwards combine the results. The communication between the server and the edges happens without any user intervention. Also, I created an estimation model using metrics extracted from experimental testing.
Built multiple distributed computing systems during my Bachelor's and Master's studies. Some indicative examples: a distributed computing environment with transparent migration and load balancing, and a distributed system for Uniform Reliable Multicast communication with synchronous views.
• Tools: Java, Unix libraries for networking
Services
Artifact reviewer: EuroSys'23, CoNEXT'23
Main papers TPC: ACM WebConf'25, ACM IMC'25 (shadow)
Journal reviews: IEEE TNET/TMC/TGCN
Workshops TPC: EuroMLSys'25
Invited Talk at IBM, title: "Enabling on-device AI model training using cloud resources", 29th May '24, Dublin
Invited Talk at Qualcomm, title: "Design and Analysis of Distributed Protocols for Decentralized AI", 20th of Oct. '23, Cork
Teaching
Web Development Teaching Assistant UCD -- Ac. year: 2022-2023