<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.9.5">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2025-11-14T14:07:08+00:00</updated><id>/feed.xml</id><title type="html">c[_]</title><subtitle>Massimo Gallo has been a Principal Engineer at Huawei Technologies Co., Ltd. in Paris since 2019. He obtained his Ph.D. in Networks and Computer Science from Telecom ParisTech, Paris, France, in 2012, performing his graduate research at Orange Labs, France Telecom, Paris, France. He spent six years as a researcher at Bell Labs, Nokia, working on Information Centric Networking and high-speed packet processing. His work has been published in several top-tier international conferences (e.g., IEEE ICNP, USENIX ATC, ACM CoNEXT) and journals (e.g., IEEE/ACM Transactions on Networking) and has led to several patents. His main research interests are the performance evaluation, simulation, design, and experimentation of networked systems, with a particular focus on programmable networks, traffic generation, and network monitoring.</subtitle><author><name> </name><email>massimo.gallo@huawei.com</email></author><entry><title type="html">Invited to the PACMNet (CoNEXT) 2026 program committee</title><link href="/2025/09/24/CONEXT-PC.html" rel="alternate" type="text/html" title="Invited to the PACMNet (CoNEXT) 2026 program committee" /><published>2025-09-24T00:00:00+00:00</published><updated>2025-09-24T00:00:00+00:00</updated><id>/2025/09/24/CONEXT-PC</id><content type="html" xml:base="/2025/09/24/CONEXT-PC.html"><![CDATA[<p>Looking forward to reviewing exciting systems research papers.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Looking forward to reviewing exciting systems research papers.]]></summary></entry><entry><title type="html">Wang Chao’s MITUNE paper accepted at ICLR 2025</title><link href="/2025/01/27/ICLR.html" rel="alternate" type="text/html" title="Wang Chao’s MITUNE paper accepted at ICLR 2025" /><published>2025-01-27T00:00:00+00:00</published><updated>2025-01-27T00:00:00+00:00</updated><id>/2025/01/27/ICLR</id><content type="html" xml:base="/2025/01/27/ICLR.html"><![CDATA[<p>Wang Chao’s paper titled <em>“Information Theoretic Text-to-Image Alignment”</em> will be presented at ICLR 2025.</p>
<p>Congrats to Chao and the team.</p>
<p><em>Abstract:</em> Diffusion models for Text-to-Image (T2I) conditional generation have recently achieved tremendous success. Yet, aligning these models with users’ intentions still involves a laborious trial-and-error process, and this challenging alignment problem has attracted considerable attention from the research community. In this work, instead of relying on fine-grained linguistic analyses of prompts, human annotation, or auxiliary vision-language models, we use Mutual Information (MI) to guide model alignment. In brief, our method uses self-supervised fine-tuning and relies on a point-wise MI estimation between prompts and images to create a synthetic fine-tuning set for improving model alignment. Our analysis indicates that our method is superior to the state-of-the-art, yet it only requires the pre-trained denoising network of the T2I model itself to estimate MI, and a simple fine-tuning strategy that improves alignment while maintaining image quality.</p>
<p>And the paper pre-print here: <a href="https://arxiv.org/abs/2405.20759"> MITUNE paper </a></p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Wang Chao’s paper titled “Information Theoretic Text-to-Image Alignment” will be presented at ICLR 2025.]]></summary></entry><entry><title type="html">Invited to the USENIX ATC 2025 program committee</title><link href="/2024/10/07/USENIX-ATC-PC.html" rel="alternate" type="text/html" title="Invited to the USENIX ATC 2025 program committee" /><published>2024-10-07T00:00:00+00:00</published><updated>2024-10-07T00:00:00+00:00</updated><id>/2024/10/07/USENIX-ATC-PC</id><content type="html" xml:base="/2024/10/07/USENIX-ATC-PC.html"><![CDATA[<p>Looking forward to reviewing exciting systems research.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Looking forward to reviewing exciting systems research.]]></summary></entry><entry><title type="html">PhD Student Hiring at Huawei Paris, Fall 2024</title><link href="/2024/07/05/Hiring.html" rel="alternate" type="text/html" title="PhD Student Hiring at Huawei Paris, Fall 2024" /><published>2024-07-05T00:00:00+00:00</published><updated>2024-07-05T00:00:00+00:00</updated><id>/2024/07/05/Hiring</id><content type="html" xml:base="/2024/07/05/Hiring.html"><![CDATA[<p>My group has an opening for a PhD Student:</p>
<p>This fully funded PhD position at Huawei in Paris, in collaboration with EURECOM, offers an exciting opportunity to work on enhancing the efficiency of AI platforms through full observability. Generative Artificial Intelligence (GenAI) has emerged as a transformative technology, with tools like ChatGPT and DALL-E gaining widespread adoption and significantly impacting various industries. These technologies are built on foundation models driven by the Transformer architecture and trained on vast datasets, presenting unique challenges in scalability and power requirements.</p>
<p>This PhD project seeks to address these challenges within Cloud Native environments, which offer the flexibility needed to efficiently utilize expensive dedicated hardware infrastructure. The research will focus on developing observability systems for distributed GenAI inference and training. The successful candidate will explore several critical areas, including network monitoring to address both end-host and in-network challenges in the context of distributed GenAI models; (possibly transparent) monitoring of GenAI applications’ training/inference processes; and the integration of network- and application-layer monitoring to achieve a holistic system overview able to capture the complex interplay between GenAI applications and the underlying system infrastructure. Additionally, based on the enhanced observability offered by the proposed system, the research aims to develop methodologies to minimize the resource consumption of GenAI applications without compromising their performance.</p>
<p>Key questions that will form the basis of the research include identifying the challenges of network monitoring in the context of GenAI within Cloud Native environments, understanding the challenges of application monitoring in this context, exploring ways to integrate network and application layer monitoring for a comprehensive system view, and developing systems to reduce the resource consumption of GenAI applications.</p>
<p>Ideal candidates for this position will have a strong passion for complex and distributed systems, along with a Master’s degree in Computer Science, Networking, or a related field. Familiarity with Cloud Native platforms (e.g., Kubernetes, Docker), distributed LLM and AI model training/inference, and system programming (e.g., C, Rust, C++, P4) is a strong plus. If you are interested or want to know more, drop us an email.</p>
<p>Contacts:
Roberto Morabito - Eurecom
Gabriele Castellano - Huawei
Massimo Gallo - Huawei</p>
<p>To apply, send us an email with your CV, a motivation letter, and references (if any).</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[My group has an opening for a PhD Student:]]></summary></entry><entry><title type="html">PhD successfully defended by Dr. Raphael Azorin</title><link href="/2024/06/18/Raphael.html" rel="alternate" type="text/html" title="PhD successfully defended by Dr. Raphael Azorin" /><published>2024-06-18T00:00:00+00:00</published><updated>2024-06-18T00:00:00+00:00</updated><id>/2024/06/18/Raphael</id><content type="html" xml:base="/2024/06/18/Raphael.html"><![CDATA[<p>Raphael Azorin successfully defended his PhD thesis, entitled <a href="https://theses.hal.science/tel-04689917/"> “Traffic representations for network measurements” </a>. Raphael is now working at Ubisoft on in-game fraud detection.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Raphael Azorin successfully defended his PhD thesis, entitled “Traffic representations for network measurements”. Raphael is now working at Ubisoft on in-game fraud detection.]]></summary></entry><entry><title type="html">DUMBO accepted at CoNEXT 2024</title><link href="/2024/01/15/Conext.html" rel="alternate" type="text/html" title="DUMBO accepted at CoNEXT 2024" /><published>2024-01-15T00:00:00+00:00</published><updated>2024-01-15T00:00:00+00:00</updated><id>/2024/01/15/Conext</id><content type="html" xml:base="/2024/01/15/Conext.html"><![CDATA[<p>Our paper titled <em>“Taming the Elephants: Affordable Flow Length Prediction in the Data Plane”</em> will be presented at CoNEXT 2024.</p>
<p>Congrats to the team, especially Raphael, Andrea, and Gabriele.</p>
<p><em>Abstract:</em> Machine Learning (ML) shows promising potential for enhancing networking tasks. In particular, early flow size prediction would be beneficial for a wide range of use cases. However, implementing an ML-enabled system is a challenging task due to network devices’ limited resources. Previous works have demonstrated the feasibility of running simple ML models in the data plane, yet their integration in a practical end-to-end system is not trivial. Additional challenges in resource management and model maintenance need to be addressed to ensure that the network task(s) performance improvement justifies the system overhead. In this work, we propose DUMBO, a versatile end-to-end system to generate and exploit flow size hints at line rate. Our system seamlessly integrates and maintains a simple ML model that offers early coarse-grained flow size prediction in the data plane. We evaluate the proposed system on flow scheduling, per-flow packet inter-arrival time distribution, and flow size estimation using real traffic traces, and perform experiments using an FPGA prototype running on an AMD(R)-Xilinx(R) Alveo U280 SmartNIC. Our results show that DUMBO outperforms traditional state-of-the-art approaches by equipping network devices’ data planes with a lightweight ML model.</p>
<p>Check out the code here: <a href="https://github.com/cpt-harlock/DUMBO"> DUMBO GitHub</a>
And the paper here: <a href="https://gallomassimo.github.io/docs/2024Conext.pdf"> DUMBO paper </a></p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Our paper titled “Taming the Elephants: Affordable Flow Length Prediction in the Data Plane” will be presented at CoNEXT 2024. Congrats to the team, especially Raphael, Andrea, and Gabriele. Abstract: Machine Learning (ML) shows promising potential for enhancing networking tasks. In particular, early flow size prediction would be beneficial for a wide range of use cases. However, implementing an ML-enabled system is a challenging task due to network devices’ limited resources. Previous works have demonstrated the feasibility of running simple ML models in the data plane, yet their integration in a practical end-to-end system is not trivial. Additional challenges in resource management and model maintenance need to be addressed to ensure that the network task(s) performance improvement justifies the system overhead. In this work, we propose DUMBO, a versatile end-to-end system to generate and exploit flow size hints at line rate. Our system seamlessly integrates and maintains a simple ML model that offers early coarse-grained flow size prediction in the data plane. We evaluate the proposed system on flow scheduling, per-flow packet inter-arrival time distribution, and flow size estimation using real traffic traces, and perform experiments using an FPGA prototype running on an AMD(R)-Xilinx(R) Alveo U280 SmartNIC. Our results show that DUMBO outperforms traditional state-of-the-art approaches by equipping network devices’ data planes with a lightweight ML model. 
Check out the code here: DUMBO GitHub. And the paper here: DUMBO paper]]></summary></entry><entry><title type="html">Invited to the USENIX ATC 2024 program committee</title><link href="/2023/12/24/USENIX-ATC-PC.html" rel="alternate" type="text/html" title="Invited to the USENIX ATC 2024 program committee" /><published>2023-12-24T00:00:00+00:00</published><updated>2023-12-24T00:00:00+00:00</updated><id>/2023/12/24/USENIX-ATC-PC</id><content type="html" xml:base="/2023/12/24/USENIX-ATC-PC.html"><![CDATA[<p>Looking forward to reviewing exciting systems research.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Looking forward to reviewing exciting systems research.]]></summary></entry><entry><title type="html">Data Augmentation for Traffic Classification accepted at PAM 2024</title><link href="/2023/12/13/PAM.html" rel="alternate" type="text/html" title="Data Augmentation for Traffic Classification accepted at PAM 2024" /><published>2023-12-13T00:00:00+00:00</published><updated>2023-12-13T00:00:00+00:00</updated><id>/2023/12/13/PAM</id><content type="html" xml:base="/2023/12/13/PAM.html"><![CDATA[<p>Our abstract titled <em>“Data Augmentation for Traffic Classification”</em> will be presented at PAM 2024.</p>
<p>Congrats to Wang Chao and the other co-authors.</p>
<p><em>Abstract:</em> Data Augmentation (DA)—enriching training data by adding synthetic samples—is a technique widely adopted in the Computer Vision (CV) and Natural Language Processing (NLP) domains to improve models’ performance. Yet, DA has struggled to gain traction in networking contexts, particularly in Traffic Classification (TC) tasks. In this work, we fill this gap by benchmarking 18 augmentation functions applied to 3 TC datasets across a variety of conditions. Our results (i) show that DA can reap benefits previously unexplored, with (ii) augmentations acting on sequence order and masking being a better fit for TC, and (iii) provide hints about why augmentations have positive or negative effects based on a simple latent space analysis.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Our abstract titled “Data Augmentation for Traffic Classification” will be presented at PAM 2024. Congrats to Wang Chao and the other co-authors. Abstract: Data Augmentation (DA)—enriching training data by adding synthetic samples—is a technique widely adopted in the Computer Vision (CV) and Natural Language Processing (NLP) domains to improve models’ performance. Yet, DA has struggled to gain traction in networking contexts, particularly in Traffic Classification (TC) tasks. In this work, we fill this gap by benchmarking 18 augmentation functions applied to 3 TC datasets across a variety of conditions. 
Our results (i) show that DA can reap benefits previously unexplored, with (ii) augmentations acting on sequence order and masking being a better fit for TC, and (iii) provide hints about why augmentations have positive or negative effects based on a simple latent space analysis.]]></summary></entry><entry><title type="html">Invited to the IEEE/IFIP TMA 2024 program committee</title><link href="/2023/12/05/TMA-PC.html" rel="alternate" type="text/html" title="Invited to the IEEE/IFIP TMA 2024 program committee" /><published>2023-12-05T00:00:00+00:00</published><updated>2023-12-05T00:00:00+00:00</updated><id>/2023/12/05/TMA-PC</id><content type="html" xml:base="/2023/12/05/TMA-PC.html"><![CDATA[<p>Looking forward to reviewing exciting measurements research.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Looking forward to reviewing exciting measurements research.]]></summary></entry><entry><title type="html">SPADA accepted at CoNEXT 2023</title><link href="/2023/10/18/Conext.html" rel="alternate" type="text/html" title="SPADA accepted at CoNEXT 2023" /><published>2023-10-18T00:00:00+00:00</published><updated>2023-10-18T00:00:00+00:00</updated><id>/2023/10/18/Conext</id><content type="html" xml:base="/2023/10/18/Conext.html"><![CDATA[<p>Our paper titled <em>“SPADA: A Sparse Approximate Data Structure representation for data plane per-flow monitoring”</em> will be presented at CoNEXT 2023.</p>
<p>Congrats to the team, especially Andrea, Raphael, and Gabriele.</p>
<p><em>Abstract:</em> Accurate per-flow monitoring is critical for precise network diagnosis, performance analysis, and network operation and management in general. However, the limited amount of memory available on modern programmable devices and the large number of active flows force practitioners to monitor only the most relevant flows with approximate data structures, limiting their view of network traffic. We argue that, due to the skewed nature of network traffic, such data structures are, in practice, heavily underutilized, i.e., sparse, thus wasting a significant amount of memory.</p>
<p>This paper proposes a Sparse Approximate Data Structure (SPADA) representation that leverages sparsity to reduce the memory footprint of per-flow monitoring systems in the data plane while preserving their original accuracy. The SPADA representation can be integrated into a generic per-flow monitoring system and is suitable for several measurement use cases. We prototype SPADA in P4 for a commercial FPGA target and test our approach with a custom simulator that we make publicly available, on four real network traces over three different monitoring tasks. Our results show that SPADA achieves 2× to 11× memory footprint reduction with respect to the state-of-the-art while maintaining the same accuracy, or even improving it.</p>]]></content><author><name> </name><email>massimo.gallo@huawei.com</email></author><summary type="html"><![CDATA[Our paper titled “SPADA: A Sparse Approximate Data Structure representation for data plane per-flow monitoring” will be presented at CoNEXT 2023. Congrats to the team, especially Andrea, Raphael, and Gabriele. Abstract: Accurate per-flow monitoring is critical for precise network diagnosis, performance analysis, and network operation and management in general. However, the limited amount of memory available on modern programmable devices and the large number of active flows force practitioners to monitor only the most relevant flows with approximate data structures, limiting their view of network traffic. We argue that, due to the skewed nature of network traffic, such data structures are, in practice, heavily underutilized, i.e., sparse, thus wasting a significant amount of memory. This paper proposes a Sparse Approximate Data Structure (SPADA) representation that leverages sparsity to reduce the memory footprint of per-flow monitoring systems in the data plane while preserving their original accuracy. The SPADA representation can be integrated into a generic per-flow monitoring system and is suitable for several measurement use cases. 
We prototype SPADA in P4 for a commercial FPGA target and test our approach with a custom simulator that we make publicly available, on four real network traces over three different monitoring tasks. Our results show that SPADA achieves 2× to 11× memory footprint reduction with respect to the state-of-the-art while maintaining the same accuracy, or even improving it.]]></summary></entry></feed>