June 21, 2021, Taipei, Taiwan
Chaired by: Ibéria Medeiros, University of Lisboa
15H00 | USB-IDS-1: a Public Multilayer Dataset of Labeled Network Flows for IDS Evaluation
15H20 | SMS Goes Nuclear: Fortifying SMS-Based MFA in Online Account Ecosystem
15H40 | Statistical Approach For Cloud Security: Microsoft Office 365 audit logs case study
Nils Ole Tippenhauer, CISPA, Saarland University, Germany

Nils is a faculty member at the CISPA Helmholtz Center for Information Security in Germany. Before joining CISPA, he was an Assistant Professor in the Information Systems Technology and Design Pillar at the Singapore University of Technology and Design (SUTD). Nils earned his Dr. Sc. in Computer Science from ETH Zurich (Switzerland) in 2012, where he was part of the System Security group led by Prof. Srdjan Capkun.

Nils is interested in the information security aspects of practical systems. In particular, he currently works on the security of industrial control systems and the Industrial Internet of Things, for applications such as public infrastructure (e.g., public water systems and power grids). At SUTD, he was involved in the construction and operation of several practical testbeds in those areas (SWaT, WADI, EPIC). In addition, he has worked on physical-layer security aspects of wireless and embedded systems, for example secure ranging, distance measurement, and communication using wireless signals. Nils continues to work in those areas in the context of (I)IoT security, and has contributed to projects such as the DP3T Covid-19 contact tracing project, the National Science Experiment in Singapore, and the Secure Cyber-Physical Systems Week event at SUTD. He also co-organized the BATADAL competition for physical process attack detection algorithms.
Title: Process-aware Attack Detection in Cyber-Physical Systems - The Good, the Bad, and the Ugly
Abstract:
Cyber-physical systems (CPS) combine networking, distributed control, and physical process measurement and actuation. Examples of CPS include industrial control systems (ICS), autonomous vehicles, and smart home systems. Successful cyberattacks on such systems could have severe physical impact, causing devastating damage and loss of life. As industrial protocols and end hosts often lack basic cybersecurity features, complementary countermeasures are required. The (arguably) deterministic nature of processes in CPS has motivated researchers to propose a number of process-aware detection systems. In this talk, we discuss challenges and opportunities for such systems, and the need for benchmark datasets in that domain.
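The premise of the abstract, that the largely deterministic physics of a CPS process can be modeled and used to flag implausible sensor behavior, can be illustrated with a minimal sketch. This is illustrative only, not a method from the talk: a naive moving-average process model predicts each sensor reading, and readings whose residual exceeds a threshold raise an alarm.

```python
# Minimal sketch of process-aware anomaly detection via residual
# thresholding (illustrative only, not a system from the talk).
# A simple process model predicts the next sensor reading; a large
# deviation between prediction and measurement raises an alarm.

def detect_anomalies(readings, threshold=2.0, window=3):
    """Flag indices whose reading deviates from a moving-average
    prediction by more than `threshold`."""
    alarms = []
    for i in range(window, len(readings)):
        predicted = sum(readings[i - window:i]) / window  # naive process model
        residual = abs(readings[i] - predicted)
        if residual > threshold:
            alarms.append(i)
    return alarms

# A tank level rising steadily, with a sudden spoofed jump at index 6.
# The spoofed value also contaminates the prediction window, so the
# following reading is flagged too:
levels = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 25.0, 13.5]
print(detect_anomalies(levels))  # → [6, 7]
```

In a real deployment the moving average would be replaced by a physics-based or learned model of the process, which is exactly where the benchmark datasets mentioned in the abstract come in.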
Michael Kamp, Monash University, Australia

Michael Kamp is a research fellow at Monash University in the Data Science and AI department and the Monash Data Futures Institute. He was awarded his PhD in computer science by the University of Bonn in 2019 for his work on black-box parallelization for machine learning. He is an editorial board member of the Springer Machine Learning journal, and was program chair of several workshops on distributed learning (DMLE, PDFL) and cybersecurity (MLCS, DSN).

He has worked on federated learning in both research projects (EU projects LIFT and FERARI) and industry projects (for example, with Volkswagen and DHL), and has published on the subject at major conferences (NIPS, ICLR, KDD, ECML-PKDD, SDM) and workshops (NIPS, ICDM). His current work focuses on the theoretical explanation of deep learning, privacy-preserving federated learning of interpretable models, adversarial robustness, and uncertainty estimation in federated learning.
Title: Secure and Trustworthy Federated Learning
Abstract:
Machine learning has transformed many industries, being employed not only on large centralized datasets, but increasingly on data generated by a multitude of networked, complex devices such as mobile phones, autonomous vehicles, or industrial machines. However, data-privacy and security concerns often prevent the centralization of this data, most prominently in healthcare and autonomous driving. Federated learning makes it possible to train machine learning models in-situ, i.e., on the data-generating devices, without sharing data.

For federated learning to be applicable in critical applications, it must be secure and trustworthy. That is, we need theoretically sound guarantees on data privacy, model quality, and robustness against (adversarial) attacks on the model. To be truly trustworthy, many applications furthermore require interpretable or intelligible models, as well as explanations for the model's predictions. I will present the current state of trustworthy federated learning, as well as recent and ongoing work in this direction.
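As a concrete picture of the "train in-situ without sharing data" idea, here is a minimal federated-averaging (FedAvg) sketch. It is illustrative only; the function names and toy model are assumptions for the example, not material from the talk.

```python
# Minimal sketch of federated averaging (FedAvg); illustrative only.
# Each client fits a 1-D linear model y = w * x on its own data; only
# the weight w -- never the raw data -- reaches the server, which
# averages the client weights into a new global model.

def local_update(w, data, lr=0.02, epochs=5):
    """One client's in-situ training: gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """One communication round: clients train locally, server averages."""
    local_ws = [local_update(global_w, d) for d in client_datasets]
    return sum(local_ws) / len(local_ws)

# Two clients whose data both follow y = 2x; raw data never leaves a client.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(20):  # communication rounds
    w = fedavg_round(w, clients)
print(round(w, 2))  # converges toward w = 2.0
```

The security and privacy questions the abstract raises live precisely in this exchange: the shared weights can still leak information about client data, and a malicious client can bias the average, which is why the talk's guarantees on privacy and robustness matter.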