IT Security Lounge by CROSSING at TU Darmstadt is an intimate, invitation-only event intended to foster conversation, cooperation and technology transfer in cybersecurity research. It offers you a great platform to meet top scientists, esteemed guest speakers, and our award-winning cybersecurity start-ups. The talks throughout the day are focused on recent innovations, trends and future challenges in cybersecurity.
If you would like to request an invitation, please contact firstname.lastname@example.org.
|Wednesday, September 11|
|8:00 am - 9:00 am||Registration & Welcome|
|9:00 am - 9:15 am||Opening by Ahmad-Reza Sadeghi|
|9:15 am - 9:45 am||Johannes Buchmann, TU Darmstadt|
|9:45 am - 10:30 am||Fari Assaderaghi, NXP Semiconductors
Machine Learning and Trust • Abstract »
Machine Learning capabilities have grown tremendously over the past 10 years due to advances in deep learning algorithms, the acceleration of computational capabilities (both in the cloud and at the edge), and the availability of enormous amounts of data. This has brought about many beneficial applications, from speech recognition and machine translation to image and facial recognition. Countless other applications are being developed that will have a seminal impact on society as a whole. ML will transform all aspects of the economy, from healthcare (diagnostics, disease control, pharmaceuticals) to infrastructure (transport, logistics, energy), finance, advanced industries, and clearly consumer applications.
This realization has led not only to break-neck advances in computer science, software and algorithms, but also to a renaissance in hardware innovations that address both the processing and the storage of data. Vast engineering resources have been directed towards improving the performance and power efficiency of algorithms and their silicon counterparts, at both ends of the spectrum: from massive data centers consuming megawatts of power and computing hundreds of petaflops, to end-node IoT devices with milliwatt power consumption and a few MIPS of compute resources.
In this race for performance, until recently little attention was paid to the trust facet of Machine Learning; what attention there was focused only on improving the accuracy of ML models. However, with the broad reach of ML finally being appreciated, many trust aspects are gaining attention and a burgeoning research field is taking shape. The Trust Umbrella covers the security, privacy, interpretability, and fairness of ML.
ML and security intersect in two ways: ML for security and the security of ML. ML can be utilized to impact the security of platforms and applications. It can enhance security as an additional defense mechanism by enabling tasks such as anomaly detection, intrusion detection, and control-flow protection. On the other hand, it can become a tool of bad actors by making the exploitation of system vulnerabilities, such as side-channel attacks, easier and more automated. The security of ML itself is, however, the larger concern. Researchers have shown that ML can be exploited in both the training phase and the inference phase. In the training phase, data poisoning can lead to ML models with specific vulnerabilities and ‘back doors’. In the inference phase, adversarial inputs can intentionally cause ML to mispredict, compromising security and safety. Although recent research papers have demonstrated exploitations of ML vulnerabilities, as well as countermeasures against these attacks, the practical nature and real-world applicability of these investigations are in question. Therefore, threat models need to be much better defined, and systematic methods for evaluating the effectiveness of countermeasures need to be developed.
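The inference-phase attack can be sketched in a few lines. The toy example below (illustrative weights and inputs, not a real model) applies an FGSM-style perturbation to a linear classifier: the gradient of the score with respect to the input is simply the weight vector, so a small step of size eps against sign(w) on each feature is the worst-case bounded perturbation and flips the prediction.

```python
# Toy linear classifier: predict class 1 if w.x + b > 0 (illustrative values)
w = [2.0, -1.0, 0.5]
b = -0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

# Clean input, correctly classified as class 1 (score = 0.5)
x = [0.3, 0.2, 0.4]

# FGSM-style adversarial perturbation: for a linear model the score's
# gradient w.r.t. the input is w itself, so stepping each feature by
# -eps * sign(w_i) lowers the score as fast as possible per unit change.
eps = 0.3
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- small per-feature changes flip the prediction
```

The same mechanism scales to deep networks, where the gradient is obtained by backpropagation instead of being the weight vector directly.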
Under this Trust Umbrella, the other factor to consider is model confidentiality. Given the disproportionate amount of resources spent on training the models, and the enormous curated data sets that are essentially embedded in these models, model confidentiality is of prime importance. The motivation is not only the avoidance of IP loss, but also the realization that attacks against ML become more efficient if adversaries have “white-box” access to the models.
Another important factor is data confidentiality. Data used in both the training and the inference phase can be sensitive, particularly in Machine Learning as a Service (MLaaS) applications. Examples include health, financial, industrial, and governmental data. If private data is leaked, it can lead to regulatory compliance violations (e.g., GDPR), loss of customer trust, and damage to a company’s brand. Recently, several advances in privacy-preserving ML have been made that rely on cryptographic techniques, ranging from fully homomorphic encryption (FHE) to hybrids of HE with garbled circuits and multi-party computation. All these approaches carry significant computational overhead and are mostly limited to the inference phase at this point. The field is very nascent, and rapid advances in hardware and algorithms are being made.
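As a minimal sketch of the multi-party computation idea (toy field size, and the model weights assumed known to both compute parties — real protocols typically protect the model as well), a client’s input vector can be additively secret-shared so that a linear layer is evaluated without either party seeing the plaintext inputs:

```python
import random

# Toy two-party additive secret sharing over a prime field
P = 2**31 - 1  # field modulus (Mersenne prime, illustrative)

def share(value):
    """Split a secret into two random shares that sum to it mod P."""
    r = random.randrange(P)
    return r, (value - r) % P

# Client's private inference input and the model's linear-layer weights
x = [3, 1, 4]
w = [2, 7, 1]

# Each input element is split; party A gets one share, party B the other.
shares = [share(xi) for xi in x]

# Each party computes a weighted sum over its shares only -- each share
# is uniformly random, so neither party learns anything about x alone.
partial_a = sum(wi * sa for wi, (sa, _) in zip(w, shares)) % P
partial_b = sum(wi * sb for wi, (_, sb) in zip(w, shares)) % P

# Recombining the two partial results reveals only the layer's output.
result = (partial_a + partial_b) % P
print(result)  # 17 = 2*3 + 7*1 + 1*4
```

Real privacy-preserving inference frameworks extend this idea with multiplication triples, garbled circuits, or HE for the non-linear layers, which is where most of the computational overhead mentioned above comes from.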
Finally, a major disadvantage of current machine learning approaches is that insights about the data and the task the machine solves are hidden in increasingly complex models. If one focuses only on performance, the result is more and more opaque models: for example, the winning models of recent Kaggle.com competitions were mostly ensembles of models or very complex models such as boosted trees or deep neural networks. Interpretability is the degree to which a human can understand the cause of an ML decision. With more opaque and complex models, interpretability becomes extremely difficult. This is a major issue once we realize that ML can be used for many tasks where fairness and lack of bias are important, such as credit rating and policing, or where wrong decisions can have safety consequences. We are at the very early stages of evaluating ML models from this perspective and developing interpretable models.
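For contrast with such opaque models, the following sketch (hypothetical feature names and weights) shows why simple linear models remain interpretable: each feature’s contribution to a credit-style score is exactly weight × value, so every decision can be attributed feature by feature — precisely what ensembles and deep networks give up.

```python
# Hypothetical linear credit-scoring model (illustrative numbers only)
weights   = {"income": 0.8, "debt": -1.2, "age": 0.1}
applicant = {"income": 2.0, "debt": 0.5, "age": 3.0}

# Each feature's exact contribution to the final score
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Rank features by how strongly they pushed the decision either way
ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
print(ranked)  # income (+1.6) dominates, then debt (-0.6), then age (+0.3)
```

No such exact per-feature decomposition exists for a boosted-tree ensemble or a deep network, which is why post-hoc explanation methods are an active research area.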
|10:30 am - 11:00 am||Coffee break|
|11:00 am - 11:45 am||Rosario Cammarota, Intel
Enabling Data Scientists to incorporate Users’ Privacy in their Inference Models Seamlessly • Abstract »
Advances in users’ data privacy laws create pressures and pain points for both service users and service providers. When the service provider and the cloud infrastructure are the same entity – i.e., that entity owns the AI models in the service – or when a user owns both the AI model and the computation, cryptographic methods such as homomorphic encryption and multi-party computation can be used to preserve users’ data privacy. In this scenario, however, data scientists deploying trained models must be aware of cryptographic libraries and protocols, which is daunting.
In this talk, we cover Intel’s open-source graph compiler, which enables model optimizers to access privacy-preserving cryptographic primitives seamlessly and in an optimized way.
|11:45 am - 12:15 pm||Eric Bodden, Uni Paderborn
CogniCrypt: Effective Secure Integration of Cryptographic Software • Abstract »
The insecure integration of cryptographic components is one of the most prevalent sources of security vulnerabilities. As recent studies also show, more than 90% of software products, even those shipped by security vendors, fail to use cryptographic APIs correctly. In this talk I will present CROSSING’s flagship project CogniCrypt, which addresses this problem through a novel combination of highly configurable code generation and static application security testing (SAST). Using CogniCrypt, developers can easily generate provably secure crypto integrations, and can detect and fix integration mistakes in legacy software. In doing so, using world-leading SAST technology, CogniCrypt manages to report virtually only true errors (false positives under 5%), thereby providing developers an optimal signal-to-noise ratio.
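One instance of the misuse class such tools flag is deriving key material from a seedable, predictable PRNG instead of a cryptographically secure one. (CogniCrypt itself targets Java crypto APIs; this Python analogue is only illustrative of the pattern.)

```python
import random
import secrets

def make_key_insecure(nbytes=16):
    # BAD: random uses a Mersenne Twister -- its output becomes fully
    # predictable once an attacker observes enough values or guesses
    # the seed. A SAST rule would flag this use in a crypto context.
    return bytes(random.randrange(256) for _ in range(nbytes))

def make_key_secure(nbytes=16):
    # GOOD: secrets draws from the operating system's cryptographic RNG.
    return secrets.token_bytes(nbytes)

key = make_key_secure()
print(len(key))  # 16
```

Both functions return byte strings of the requested length; the difference is invisible at runtime, which is exactly why static analysis of the API usage, rather than testing, is needed to catch it.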
|12:15 pm – 1:15 pm||Lunch break|
|1:15 pm - 2:00 pm||Moti Yung, Google
Layers of Abstractions and Layers of Obstructions and the U2F • Abstract »
What makes the field of Computer Science and the Information Technology Industry so successful? I will argue that certain design principles (Modularity and Abstraction) are at the heart of this success which has caused our time to be called the “Era of the Information Revolution.”
I will argue that this very success inherently poses difficulties for security. I will then demonstrate how a revised methodology applies in security design, using the ideas employed in the initial design of what came to be known as Universal Second Factor authentication (U2F).
|2:00 pm - 2:30 pm||Thomas Walther, TU Darmstadt|
|2:30 pm - 3:00 pm||Coffee break|
|3:00 pm - 3:45 pm||Gerold Hübner, SAP
Aspects of Encryption in Cloud Computing and Privacy Regulation (GDPR) • Abstract »
Encryption, as a basic and fundamental security mechanism, is at least as important for cloud computing as it is in the on-premises world. In particular, the fact that customers shift some of the control over their data from their own IT department to a “provider” raises issues of trust and assurance. It is true that in cloud computing some aspects need a different perspective compared to the on-premises world; often new opportunities, but also new difficulties, arise that need to be taken into consideration. In general, cloud computing is more complex than the “old” on-premises world and can sometimes even be characterized as disruptive. New social, economic and, last but not least, regulatory aspects are coming up, and the classic approach to security and the topic of data protection are converging. Since the EU General Data Protection Regulation (GDPR) took effect in 2018, “processing” – a main area of cloud computing – is more tightly regulated than before, with some aspects relevant to encryption. This talk will discuss aspects of encryption related to the GDPR and processing, show where the difficulties lie in practice and where potential benefits can be seen, and address the questions of whether and where encryption is mandatory or at least beneficial.
|3:45 pm - 4:15 pm||Sebastian Faust, TU Darmstadt|
|4:15 pm - 5:00 pm||Fishbowl Discussion & Wrap-up|
|5:00 pm - 6:00 pm||Networking & Fingerfood|
The IT Security Lounge will take place in the elegant yet relaxed atmosphere of Georg-Christoph-Lichtenberg-Haus. Built in 1898, the house was developed into a mansion with Art Nouveau elements in 1910 and is now an example of this unique architectural style for which Darmstadt is famous.
The aim of the Collaborative Research Center CROSSING at TU Darmstadt is to provide cryptography-based security solutions enabling trust in new and next-generation computing environments. More than 65 scientists from cryptography, quantum physics, system security and software engineering work jointly in CROSSING, performing both basic and application-oriented research. CROSSING is based at TU Darmstadt and has been funded by the German Research Foundation (DFG) since 2014.
Daniela Fleckenstein, email@example.com