Data and Model Privacy in Cloud Computing using Federated Learning

MADNI, HUSSAIN AHMAD
2025-07-01

Abstract

In recent years, Federated Learning (FL) methods have gained popularity and achieved impressive performance in collaborative machine learning. In this work, we discuss advances in the field of FL and the performance achieved by the proposed methods. As a solution to the Gradient Leakage (GL) problem, we propose two secure FL techniques to safeguard the deep learning model and data. First, we introduce blockchain-based Swarm Learning (SL), which allows participating clients to establish a secure communication network while training collaborative models. Second, we propose a strategy based on Fully Homomorphic Encryption (FHE) that allows FL clients to communicate securely by exchanging only encrypted model parameters. We then turn to heterogeneous data and models in collaborative model training. To this end, we present a deep learning model training strategy based on knowledge distillation and a client-confidence score, which distills knowledge from a valid model rather than from noisy client input. A symmetric loss is also employed to limit the detrimental impact of label noise, ultimately reducing the model's overfitting to noisy labels. Furthermore, we propose Multi-Domain Federated Learning (MDFL) to address data heterogeneity in collaborative training over datasets from multiple domains. This method employs two loss functions: one reinforces related latent features, while the other learns to predict class labels correctly. We leverage non-convolutional transformer models for training in collaborative learning, and Convolutional Neural Networks (CNNs) for the evaluation of the proposed approaches.
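
As a high-level illustration of the blockchain-based SL idea, the following minimal Python sketch shows a hash-chained ledger of model-update digests. It is an illustration only: the class names and payload fields are hypothetical, and a real SL deployment uses a full blockchain network with peer consensus rather than this toy single-node chain.

import hashlib
import json
import time

class Block:
    """One entry in a toy ledger of model updates (hypothetical structure)."""
    def __init__(self, index, payload, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.payload = payload          # e.g., a client's model-update digest
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        body = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "payload": self.payload, "prev_hash": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(body.encode()).hexdigest()

class SwarmLedger:
    """Append-only chain: tampering with any block breaks every later hash."""
    def __init__(self):
        self.chain = [Block(0, {"genesis": True}, "0" * 64)]

    def append_update(self, client_id, weights_digest):
        prev = self.chain[-1]
        block = Block(prev.index + 1,
                      {"client": client_id, "weights_sha256": weights_digest},
                      prev.hash)
        self.chain.append(block)
        return block

    def verify(self):
        return all(b.prev_hash == a.hash and b.hash == b.compute_hash()
                   for a, b in zip(self.chain, self.chain[1:]))

# Usage: each client registers a digest of its local update every round,
# so all participants can audit the shared training history.
ledger = SwarmLedger()
ledger.append_update("client-1", hashlib.sha256(b"round-1 weights").hexdigest())
assert ledger.verify()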
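The FHE-based strategy can be pictured as follows: each client encrypts its model parameters, the server aggregates ciphertexts without ever decrypting them, and only the clients (who hold the secret key) decrypt the aggregated result. The sketch below uses a deliberately insecure stand-in (plain addition behind an opaque wrapper) in place of a real FHE scheme; the names encrypt/decrypt/Ciphertext are hypothetical, and in practice a library implementing a scheme such as CKKS (e.g., Microsoft SEAL or TenSEAL) would supply the cryptography.

from dataclasses import dataclass
from typing import List

@dataclass
class Ciphertext:
    """Opaque stand-in for an FHE ciphertext (NOT real encryption)."""
    data: List[float]

    def __add__(self, other):
        # CKKS-style schemes support exactly this: ciphertext addition.
        return Ciphertext([a + b for a, b in zip(self.data, other.data)])

def encrypt(params):      # placeholder for fhe.encrypt(public_key, params)
    return Ciphertext(list(params))

def decrypt(ct):          # placeholder for fhe.decrypt(secret_key, ct)
    return ct.data

# Each client encrypts its local model parameters before upload.
client_params = [[0.10, 0.20], [0.30, 0.40], [0.50, 0.60]]
ciphertexts = [encrypt(p) for p in client_params]

# The server sums ciphertexts WITHOUT decrypting: it never sees plaintexts.
aggregate = ciphertexts[0]
for ct in ciphertexts[1:]:
    aggregate = aggregate + ct

# Clients decrypt and average the aggregated update.
avg = [x / len(client_params) for x in decrypt(aggregate)]
print(avg)  # [0.3, 0.4] -- federated averaging over encrypted updates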
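The knowledge-distillation component can be sketched as a temperature-scaled KL divergence between student and teacher outputs, scaled by a per-client confidence score so that noisy clients contribute less transferred knowledge. The function name and the exact weighting scheme below are assumptions for illustration; the thesis's actual formulation may differ.

import torch
import torch.nn.functional as F

def confidence_weighted_distillation(student_logits, teacher_logits,
                                     confidence, T=2.0):
    """KL distillation loss scaled by a client-confidence score in [0, 1].

    Low-confidence (noisy) clients are down-weighted, so knowledge flows
    mainly from the valid model. The T**2 factor is the standard
    temperature correction for distillation gradients.
    """
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    kl = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    return confidence * (T ** 2) * kl

# Example: distill from a trusted ("valid") model into a client model.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
loss = confidence_weighted_distillation(student_logits, teacher_logits,
                                        confidence=0.9)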
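The abstract does not name the symmetric loss; a widely used choice for noise-robust training is Symmetric Cross-Entropy (Wang et al., 2019), sketched here under that assumption. It adds a reverse cross-entropy term that bounds the penalty from mislabeled samples.

import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, targets, num_classes,
                            alpha=0.1, beta=1.0):
    """SCE = alpha * CE + beta * reverse CE (robust to noisy labels)."""
    ce = F.cross_entropy(logits, targets)
    pred = F.softmax(logits, dim=1).clamp(min=1e-7, max=1.0)
    # Clamp the one-hot labels so log() stays finite on the zero entries.
    one_hot = F.one_hot(targets, num_classes).float().clamp(min=1e-4, max=1.0)
    rce = (-pred * one_hot.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = symmetric_cross_entropy(logits, targets, num_classes=10)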
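Finally, the MDFL objective combines two losses: one that reinforces related latent features across domains and one that predicts class labels. The abstract does not specify either loss, so the sketch below pairs cross-entropy with a simple feature-alignment term (MSE toward a shared anchor) as one plausible instantiation; the function and variable names are hypothetical.

import torch
import torch.nn.functional as F

def mdfl_objective(features, logits, targets, domain_anchor, lam=0.5):
    """Hypothetical MDFL-style objective: classification + feature alignment.

    - cls_loss: learns to predict class labels correctly (cross-entropy).
    - align_loss: pulls a client's latent features toward a shared anchor
      so related features stay close across domains (one plausible choice;
      the thesis may use a different alignment loss).
    """
    cls_loss = F.cross_entropy(logits, targets)
    align_loss = F.mse_loss(features, domain_anchor.expand_as(features))
    return cls_loss + lam * align_loss

features = torch.randn(8, 64)          # latent features from a backbone
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
anchor = torch.zeros(1, 64)            # e.g., a global/domain prototype
loss = mdfl_objective(features, logits, targets, anchor)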
Keywords: Federated Learning; Data Heterogeneity; Model Heterogeneity; Data Privacy; Model Privacy
Data and Model Privacy in Cloud Computing using Federated Learning / Hussain Ahmad Madni, 2025 Jul 01. 37th cycle, Academic Year 2023/2024.
Files in this record:
File: phd_thesis_hussain_uniud.pdf (open access)
License: Creative Commons
Size: 16.65 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11390/1308667