Publications

See also Google Scholar and DBLP.

2017

  • J. Pennekamp, M. Henze, and K. Wehrle, “A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead,” Pervasive and Mobile Computing, vol. 42, 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes ever more important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users’ privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications’ permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.

    @article{PHW17,
    author = {Pennekamp, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{A Survey on the Evolution of Privacy Enforcement on Smartphones and the Road Ahead}},
    journal = {Pervasive and Mobile Computing},
    volume = {42},
    month = {12},
    year = {2017},
    doi = {10.1016/j.pmcj.2017.09.005},
    abstract = {With the increasing proliferation of smartphones, enforcing privacy of smartphone users becomes ever more important. Nowadays, one of the major privacy challenges is the tremendous amount of permissions requested by applications, which can significantly invade users' privacy, often without their knowledge. In this paper, we provide a comprehensive review of approaches that can be used to report on applications' permission usage, tune permission access, contain sensitive information, and nudge users towards more privacy-conscious behavior. We discuss key shortcomings of privacy enforcement on smartphones so far and identify suitable actions for the future.},
    }

  • M. Henze, J. Pennekamp, D. Hellmanns, E. Mühmer, J. H. Ziegeldorf, A. Drichel, and K. Wehrle, “CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps,” in Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users’ contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.

    @inproceedings{HPH+17,
    author = {Henze, Martin and Pennekamp, Jan and Hellmanns, David and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and Drichel, Arthur and Wehrle, Klaus},
    title = {{CloudAnalyzer: Uncovering the Cloud Usage of Mobile Apps}},
    booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)},
    month = {11},
    year = {2017},
    doi = {10.1145/3144457.3144471},
    abstract = {Developers of smartphone apps increasingly rely on cloud services for ready-made functionalities, e.g., to track app usage, to store data, or to integrate social networks. At the same time, mobile apps have access to various private information, ranging from users' contact lists to their precise locations. As a result, app deployment models and data flows have become too complex and entangled for users to understand. We present CloudAnalyzer, a transparency technology that reveals the cloud usage of smartphone apps and hence provides users with the means to reclaim informational self-determination. We apply CloudAnalyzer to study the cloud exposure of 29 volunteers over the course of 19 days. In addition, we analyze the cloud usage of the 5000 most accessed mobile websites as well as 500 popular apps from five different countries. Our results reveal an excessive exposure to cloud services: 90 % of apps use cloud services and 36 % of apps used by volunteers solely communicate with cloud services. Given the information provided by CloudAnalyzer, users can critically review the cloud usage of their apps.},
    }

  • M. Henze, R. Inaba, I. B. Fink, and J. H. Ziegeldorf, “Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps,” in Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The increasing utilization of cloud services by mobile apps on smartphones leads to serious privacy concerns. While users can quantify the cloud usage of their apps, they often cannot relate to involved privacy risks. In this paper, we apply comparison-based privacy, a behavioral nudge, to the cloud usage of mobile apps. This enables users to compare their personal app-induced cloud exposure to that of their peers to discover potential privacy risks from deviation from normal usage behavior. Since cloud usage statistics are sensitive, we protect them with k-anonymity and differential privacy.

    @inproceedings{HIFZ17,
    author = {Henze, Martin and Inaba, Ritsuma and Fink, Ina Berenice and Ziegeldorf, Jan Henrik},
    title = {{Privacy-preserving Comparison of Cloud Exposure Induced by Mobile Apps}},
    booktitle = {Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services (MobiQuitous)},
    month = {11},
    year = {2017},
    doi = {10.1145/3144457.3144511},
    abstract = {The increasing utilization of cloud services by mobile apps on smartphones leads to serious privacy concerns. While users can quantify the cloud usage of their apps, they often cannot relate to involved privacy risks. In this paper, we apply comparison-based privacy, a behavioral nudge, to the cloud usage of mobile apps. This enables users to compare their personal app-induced cloud exposure to that of their peers to discover potential privacy risks from deviation from normal usage behavior. Since cloud usage statistics are sensitive, we protect them with k-anonymity and differential privacy.},
    }

  • M. Henze, J. Hiller, R. Hummen, R. Matzutt, K. Wehrle, and J. H. Ziegeldorf, “Network Security and Privacy for Cyber-Physical Systems,” in Security and Privacy in Cyber-Physical Systems: Foundations, Principles, and Applications, H. Song, G. A. Fink, and S. Jeschke, Eds., Wiley-IEEE Press, 2017.
    [BibTeX] [Abstract] [DOI]

    Cyber-physical systems (CPSs) are expected to collect, process, and exchange data that regularly contain sensitive information. CPSs may, for example, involve a person in the privacy of her home or convey business secrets in production plants. Hence, confidentiality, integrity, and authenticity are of utmost importance for secure and privacy-preserving CPSs. In this chapter, we present and discuss emerging security and privacy issues in CPSs and highlight challenges as well as opportunities for building and operating these systems in a secure and privacy-preserving manner. We focus on issues that are unique to CPSs, for example, resulting from the resource constraints of the involved devices and networks, the limited configurability of these devices, and the expected ubiquity of the data collection of CPSs. The covered issues impact the security and privacy of CPSs from local networks to Cloud-based environments.

    @incollection{HHH+17,
    author = {Henze, Martin and Hiller, Jens and Hummen, Ren{\'e} and Matzutt, Roman and Wehrle, Klaus and Ziegeldorf, Jan Henrik},
    title = {{Network Security and Privacy for Cyber-Physical Systems}},
    booktitle = {Security and Privacy in Cyber-Physical Systems: Foundations, Principles, and Applications},
    editor = {Song, Houbing and Fink, Glenn A. and Jeschke, Sabina},
    month = {11},
    year = {2017},
    publisher = {Wiley-IEEE Press},
    doi = {10.1002/9781119226079.ch2},
    abstract = {Cyber-physical systems (CPSs) are expected to collect, process, and exchange data that regularly contain sensitive information. CPSs may, for example, involve a person in the privacy of her home or convey business secrets in production plants. Hence, confidentiality, integrity, and authenticity are of utmost importance for secure and privacy-preserving CPSs. In this chapter, we present and discuss emerging security and privacy issues in CPSs and highlight challenges as well as opportunities for building and operating these systems in a secure and privacy-preserving manner. We focus on issues that are unique to CPSs, for example, resulting from the resource constraints of the involved devices and networks, the limited configurability of these devices, and the expected ubiquity of the data collection of CPSs. The covered issues impact the security and privacy of CPSs from local networks to Cloud-based environments.},
    }

  • A. Panchenko, A. Mitseva, M. Henze, F. Lanze, K. Wehrle, and T. Engel, “Analysis of Fingerprinting Techniques for Tor Hidden Services,” in Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing traffic patterns such as packet sizes, their order, and direction. Although it has been shown that no existing fingerprinting method scales in Tor when applied in realistic settings, the case of Tor hidden (onion) services has not yet been considered in such scenarios. Recent works claim the feasibility of the attack in the context of hidden services using limited datasets. In this work, we propose a novel two-phase approach for fingerprinting hidden services that does not rely on malicious Tor nodes. In our attack, the adversary merely needs to be on the link between the client and the first anonymization node. In the first phase, we detect a connection to a hidden service. Once a hidden service communication is detected, we determine the visited hidden service (phase two) within the hidden service universe. To estimate the scalability of our and other existing methods, we constructed the most extensive and realistic dataset of existing hidden services. Using this dataset, we show the feasibility of phase one of the attack and establish that phase two does not scale using existing classifiers. We present a comprehensive comparison of the performance and limits of the state-of-the-art website fingerprinting attacks with respect to Tor hidden services.

    @inproceedings{PMH+17,
    author = {Panchenko, Andriy and Mitseva, Asya and Henze, Martin and Lanze, Fabian and Wehrle, Klaus and Engel, Thomas},
    title = {{Analysis of Fingerprinting Techniques for Tor Hidden Services}},
    booktitle = {Proceedings of the 16th Workshop on Privacy in the Electronic Society (WPES)},
    month = {10},
    year = {2017},
    doi = {10.1145/3139550.3139564},
    abstract = {The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing traffic patterns such as packet sizes, their order, and direction. Although it has been shown that no existing fingerprinting method scales in Tor when applied in realistic settings, the case of Tor hidden (onion) services has not yet been considered in such scenarios. Recent works claim the feasibility of the attack in the context of hidden services using limited datasets.
    In this work, we propose a novel two-phase approach for fingerprinting hidden services that does not rely on malicious Tor nodes. In our attack, the adversary merely needs to be on the link between the client and the first anonymization node. In the first phase, we detect a connection to a hidden service. Once a hidden service communication is detected, we determine the visited hidden service (phase two) within the hidden service universe. To estimate the scalability of our and other existing methods, we constructed the most extensive and realistic dataset of existing hidden services. Using this dataset, we show the feasibility of phase one of the attack and establish that phase two does not scale using existing classifiers. We present a comprehensive comparison of the performance and limits of the state-of-the-art website fingerprinting attacks with respect to Tor hidden services.},
    }

  • M. Henze, B. Wolters, R. Matzutt, T. Zimmermann, and K. Wehrle, “Distributed Configuration, Authorization and Management in the Cloud-based Internet of Things,” in 2017 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Network-based deployments within the Internet of Things increasingly rely on the cloud-controlled federation of individual networks to configure, authorize, and manage devices across network borders. While this approach allows the convenient and reliable interconnection of networks, it raises severe security and safety concerns. These concerns range from a curious cloud provider accessing confidential data to a malicious cloud provider being able to physically control safety-critical devices. To overcome these concerns, we present D-CAM, which enables secure and distributed configuration, authorization, and management across network borders in the cloud-based Internet of Things. With D-CAM, we constrain the cloud to act as highly available and scalable storage for control messages. Consequently, we achieve reliable network control across network borders and strong security guarantees. Our evaluation confirms that D-CAM adds only a modest overhead and can scale to large networks.

    @inproceedings{HWM+17,
    author = {Henze, Martin and Wolters, Benedikt and Matzutt, Roman and Zimmermann, Torsten and Wehrle, Klaus},
    title = {{Distributed Configuration, Authorization and Management in the Cloud-based Internet of Things}},
    booktitle = {2017 IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom)},
    month = {08},
    year = {2017},
    doi = {10.1109/Trustcom/BigDataSE/ICESS.2017.236},
    abstract = {Network-based deployments within the Internet of Things increasingly rely on the cloud-controlled federation of individual networks to configure, authorize, and manage devices across network borders. While this approach allows the convenient and reliable interconnection of networks, it raises severe security and safety concerns. These concerns range from a curious cloud provider accessing confidential data to a malicious cloud provider being able to physically control safety-critical devices. To overcome these concerns, we present D-CAM, which enables secure and distributed configuration, authorization, and management across network borders in the cloud-based Internet of Things. With D-CAM, we constrain the cloud to act as highly available and scalable storage for control messages. Consequently, we achieve reliable network control across network borders and strong security guarantees. Our evaluation confirms that D-CAM adds only a modest overhead and can scale to large networks.},
    }

  • J. H. Ziegeldorf, J. Pennekamp, D. Hellmanns, F. Schwinger, I. Kunze, M. Henze, J. Hiller, R. Matzutt, and K. Wehrle, “BLOOM: BLoom filter based oblivious outsourced matchings,” BMC Medical Genomics, vol. 10, iss. Suppl 2, 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations. We propose Fhe-Bloom and Phe-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. Fhe-Bloom is fully secure in the semi-honest model while Phe-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance. We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while Phe-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries. Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, Fhe-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, Phe-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.

    @article{ZPH+17,
    author = {Ziegeldorf, Jan Henrik and Pennekamp, Jan and Hellmanns, David and Schwinger, Felix and Kunze, Ike and Henze, Martin and Hiller, Jens and Matzutt, Roman and Wehrle, Klaus},
    title = {{BLOOM: BLoom filter based oblivious outsourced matchings}},
    journal = {BMC Medical Genomics},
    volume = {10},
    number = {Suppl 2},
    month = {07},
    year = {2017},
    doi = {10.1186/s12920-017-0277-y},
    abstract = {Whole genome sequencing has become fast, accurate, and cheap, paving the way towards the large-scale collection and processing of human genome data. Unfortunately, this dawning genome era does not only promise tremendous advances in biomedical research but also causes unprecedented privacy risks for the many. Handling storage and processing of large genome datasets through cloud services greatly aggravates these concerns. Current research efforts thus investigate the use of strong cryptographic methods and protocols to implement privacy-preserving genomic computations.
    We propose Fhe-Bloom and Phe-Bloom, two efficient approaches for genetic disease testing using homomorphically encrypted Bloom filters. Both approaches allow the data owner to securely outsource storage and computation to an untrusted cloud. Fhe-Bloom is fully secure in the semi-honest model while Phe-Bloom slightly relaxes security guarantees in a trade-off for highly improved performance.
    We implement and evaluate both approaches on a large dataset of up to 50 patient genomes each with up to 1000000 variations (single nucleotide polymorphisms). For both implementations, overheads scale linearly in the number of patients and variations, while Phe-Bloom is faster by at least three orders of magnitude. For example, testing disease susceptibility of 50 patients with 100000 variations requires only a total of 308.31 s (σ=8.73 s) with our first approach and a mere 0.07 s (σ=0.00 s) with the second. We additionally discuss security guarantees of both approaches and their limitations as well as possible extensions towards more complex query types, e.g., fuzzy or range queries.
    Both approaches handle practical problem sizes efficiently and are easily parallelized to scale with the elastic resources available in the cloud. The fully homomorphic scheme, Fhe-Bloom, realizes a comprehensive outsourcing to the cloud, while the partially homomorphic scheme, Phe-Bloom, trades a slight relaxation of security guarantees against performance improvements by at least three orders of magnitude.},
    }

  • M. Henze, M. P. Sanford, and O. Hohlfeld, “Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape,” in 2017 Network Traffic Measurement and Analysis Conference (TMA), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    The ongoing adoption of cloud-based email services – mainly run by few operators – transforms the largely decentralized email infrastructure into a more centralized one. Yet, little empirical knowledge on this transition and its implications exists. To address this gap, we assess the prevalence and exposure of Internet users to cloud-based email in a measurement study. In a first step, we study the email infrastructure and detect SMTP servers running in the cloud by analyzing all 154M .com/.net/.org domains for cloud usage. Informed by this infrastructure assessment, we then study the prevalence of cloud-based SMTP services among actual email exchanges. Here, we analyze 31M exchanged emails, ranging from public email archives to the personal emails of 20 users. Our results show that as of today, 13% to 25% of received emails utilize cloud services and 30% to 70% of this cloud usage is invisible for users.

    @inproceedings{HSH17,
    author = {Henze, Martin and Sanford, Mary Peyton and Hohlfeld, Oliver},
    title = {{Veiled in Clouds? Assessing the Prevalence of Cloud Computing in the Email Landscape}},
    booktitle = {2017 Network Traffic Measurement and Analysis Conference (TMA)},
    month = {06},
    year = {2017},
    doi = {10.23919/TMA.2017.8002910},
    abstract = {The ongoing adoption of cloud-based email services - mainly run by few operators - transforms the largely decentralized email infrastructure into a more centralized one. Yet, little empirical knowledge on this transition and its implications exists. To address this gap, we assess the prevalence and exposure of Internet users to cloud-based email in a measurement study. In a first step, we study the email infrastructure and detect SMTP servers running in the cloud by analyzing all 154M .com/.net/.org domains for cloud usage. Informed by this infrastructure assessment, we then study the prevalence of cloud-based SMTP services among actual email exchanges. Here, we analyze 31M exchanged emails, ranging from public email archives to the personal emails of 20 users. Our results show that as of today, 13% to 25% of received emails utilize cloud services and 30% to 70% of this cloud usage is invisible for users.},
    }

  • M. Henze, R. Matzutt, J. Hiller, E. Mühmer, J. H. Ziegeldorf, J. van der Giet, and K. Wehrle, “Practical Data Compliance for Cloud Storage,” in 2017 IEEE International Conference on Cloud Engineering (IC2E), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Despite their increasing proliferation and technical variety, existing cloud storage technologies by design lack support for enforcing compliance with regulatory, organizational, or contractual data handling requirements. However, with legislation responding to rising privacy concerns, this becomes a crucial technical capability for cloud storage systems. In this paper, we introduce PRADA, a practical approach to enforce data compliance in key-value based cloud storage systems. To this end, PRADA introduces a transparent data handling layer which enables clients to specify data handling requirements and provides operators with the technical means to adhere to them. The evaluation of our prototype shows that the modest overheads for supporting data handling requirements in cloud storage systems are practical for real-world deployments.

    @inproceedings{HMH+17,
    author = {Henze, Martin and Matzutt, Roman and Hiller, Jens and M{\"u}hmer, Erik and Ziegeldorf, Jan Henrik and van der Giet, Johannes and Wehrle, Klaus},
    title = {{Practical Data Compliance for Cloud Storage}},
    booktitle = {2017 IEEE International Conference on Cloud Engineering (IC2E)},
    month = {04},
    year = {2017},
    doi = {10.1109/IC2E.2017.32},
    abstract = {Despite their increasing proliferation and technical variety, existing cloud storage technologies by design lack support for enforcing compliance with regulatory, organizational, or contractual data handling requirements. However, with legislation responding to rising privacy concerns, this becomes a crucial technical capability for cloud storage systems. In this paper, we introduce PRADA, a practical approach to enforce data compliance in key-value based cloud storage systems. To this end, PRADA introduces a transparent data handling layer which enables clients to specify data handling requirements and provides operators with the technical means to adhere to them. The evaluation of our prototype shows that the modest overheads for supporting data handling requirements in cloud storage systems are practical for real-world deployments.},
    }

  • J. H. Ziegeldorf, J. Metzke, J. Rüth, M. Henze, and K. Wehrle, “Privacy-Preserving HMM Forward Computation,” in Proceedings of the 7th ACM Conference on Data and Application Security and Privacy (CODASPY), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    In many areas such as bioinformatics, pattern recognition, and signal processing, Hidden Markov Models (HMMs) have become an indispensable statistical tool. A fundamental building block for these applications is the Forward algorithm which computes the likelihood to observe a given sequence of emissions for a given HMM. The classical Forward algorithm requires that one party holds both the model and observation sequences. However, we observe for many emerging applications and services that the models and observation sequences are held by different parties who are not able to share their information due to applicable data protection legislation or due to concerns over intellectual property and privacy. This renders the application of HMMs infeasible. In this paper, we show how to resolve this evident conflict of interests using secure two-party computation. Concretely, we propose Priward which enables two mutually untrusting parties to compute the Forward algorithm securely, i.e., without requiring either party to share her sensitive inputs with the other or any third party. The evaluation of our implementation of Priward shows that our solution is efficient, accurate, and outperforms related works by a factor of 4 to 126. To highlight the applicability of our approach in real-world deployments, we combine Priward with the widely used HMMER biosequence analysis framework and show how to analyze real genome sequences in a privacy-preserving manner.

    @inproceedings{ZMR+17,
    author = {Ziegeldorf, Jan Henrik and Metzke, Jan and R{\"u}th, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{Privacy-Preserving HMM Forward Computation}},
    booktitle = {Proceedings of the 7th ACM Conference on Data and Application Security and Privacy (CODASPY)},
    month = {03},
    year = {2017},
    doi = {10.1145/3029806.3029816},
    abstract = {In many areas such as bioinformatics, pattern recognition, and signal processing, Hidden Markov Models (HMMs) have become an indispensable statistical tool. A fundamental building block for these applications is the Forward algorithm which computes the likelihood to observe a given sequence of emissions for a given HMM. The classical Forward algorithm requires that one party holds both the model and observation sequences. However, we observe for many emerging applications and services that the models and observation sequences are held by different parties who are not able to share their information due to applicable data protection legislation or due to concerns over intellectual property and privacy. This renders the application of HMMs infeasible. In this paper, we show how to resolve this evident conflict of interests using secure two-party computation. Concretely, we propose Priward which enables two mutually untrusting parties to compute the Forward algorithm securely, i.e., without requiring either party to share her sensitive inputs with the other or any third party. The evaluation of our implementation of Priward shows that our solution is efficient, accurate, and outperforms related works by a factor of 4 to 126. To highlight the applicability of our approach in real-world deployments, we combine Priward with the widely used HMMER biosequence analysis framework and show how to analyze real genome sequences in a privacy-preserving manner.},
    }

  • J. H. Ziegeldorf, M. Henze, J. Bavendiek, and K. Wehrle, “TraceMixer: Privacy-Preserving Crowd-Sensing sans Trusted Third Party,” in 2017 Wireless On-demand Network Systems and Services Conference (WONS), 2017.
    [BibTeX] [Abstract] [PDF] [DOI]

    Crowd-sensing promises cheap and easy large scale data collection by tapping into the sensing and processing capabilities of smart phone users. However, the vast amount of fine-grained location data collected raises serious privacy concerns among potential contributors. In this paper, we argue that crowd-sensing has unique requirements w.r.t. privacy and data utility which renders existing protection mechanisms infeasible. We hence propose TraceMixer, a novel location privacy protection mechanism tailored to the special requirements in crowd-sensing. TraceMixer builds upon the well-studied concept of mix zones to provide trajectory privacy while achieving high spatial accuracy. First in this line of research, TraceMixer applies secure two-party computation technologies to realize a trustless architecture that does not require participants to share locations with anyone in clear. We evaluate TraceMixer on real-world datasets to show the feasibility of our approach in terms of privacy, utility, and performance. Finally, we demonstrate the applicability of TraceMixer in a real-world crowd-sensing campaign.

    @inproceedings{ZHBW17,
    author = {Ziegeldorf, Jan Henrik and Henze, Martin and Bavendiek, Jens and Wehrle, Klaus},
    title = {{TraceMixer: Privacy-Preserving Crowd-Sensing sans Trusted Third Party}},
    booktitle = {2017 Wireless On-demand Network Systems and Services Conference (WONS)},
    month = {02},
    year = {2017},
    doi = {10.1109/WONS.2017.7888771},
    abstract = {Crowd-sensing promises cheap and easy large scale data collection by tapping into the sensing and processing capabilities of smart phone users. However, the vast amount of fine-grained location data collected raises serious privacy concerns among potential contributors. In this paper, we argue that crowd-sensing has unique requirements w.r.t. privacy and data utility which renders existing protection mechanisms infeasible. We hence propose TraceMixer, a novel location privacy protection mechanism tailored to the special requirements in crowd-sensing. TraceMixer builds upon the well-studied concept of mix zones to provide trajectory privacy while achieving high spatial accuracy. First in this line of research, TraceMixer applies secure two-party computation technologies to realize a trustless architecture that does not require participants to share locations with anyone in clear. We evaluate TraceMixer on real-world datasets to show the feasibility of our approach in terms of privacy, utility, and performance. Finally, we demonstrate the applicability of TraceMixer in a real-world crowd-sensing campaign.},
    }

2016

  • M. Henze, D. Kerpen, J. Hiller, M. Eggert, D. Hellmanns, E. Mühmer, O. Renuli, H. Maier, C. Stüble, R. Häußling, and K. Wehrle, “Towards Transparent Information on Individual Cloud Service Usage,” in 2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Cloud computing allows developers of mobile apps to overcome limited computing, storage, and power resources of modern smartphones. Besides these huge advantages, the hidden utilization of cloud services by mobile apps leads to severe privacy concerns. To overcome these concerns and allow users and companies to properly assess the risks of hidden cloud usage, it is necessary to provide transparency over the cloud services utilized by smartphone apps. In this paper, we present our ongoing work on TRINICS to provide transparent information on individual cloud service usage. To this end, we analyze network traffic of smartphone apps with the goal to detect and uncover cloud usage. We present the resulting statistics on cloud usage to the user and put these numbers into context through anonymous comparison with users’ peer groups (i.e., users with similar sociodemographic background and interests). By doing so, we enable users to make an informed decision on suitable means for sufficient self data protection for their future use of apps and cloud services.

    @inproceedings{HKH+16,
    author = {Henze, Martin and Kerpen, Daniel and Hiller, Jens and Eggert, Michael and Hellmanns, David and M{\"u}hmer, Erik and Renuli, Oussama and Maier, Henning and St{\"u}ble, Christian and H{\"a}u{\ss}ling, Roger and Wehrle, Klaus},
    title = {{Towards Transparent Information on Individual Cloud Service Usage}},
    booktitle = {2016 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)},
    month = {12},
    year = {2016},
    doi = {10.1109/CloudCom.2016.0064},
    abstract = {Cloud computing allows developers of mobile apps to overcome limited computing, storage, and power resources of modern smartphones. Besides these huge advantages, the hidden utilization of cloud services by mobile apps leads to severe privacy concerns. To overcome these concerns and allow users and companies to properly assess the risks of hidden cloud usage, it is necessary to provide transparency over the cloud services utilized by smartphone apps. In this paper, we present our ongoing work on TRINICS to provide transparent information on individual cloud service usage. To this end, we analyze network traffic of smartphone apps with the goal to detect and uncover cloud usage. We present the resulting statistics on cloud usage to the user and put these numbers into context through anonymous comparison with users' peer groups (i.e., users with similar sociodemographic background and interests). By doing so, we enable users to make an informed decision on suitable means for sufficient self data protection for their future use of apps and cloud services.},
    }

  • A. Mitseva, A. Panchenko, F. Lanze, M. Henze, K. Wehrle, and T. Engel, “POSTER: Fingerprinting Tor Hidden Services,” in Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing patterns from the communication such as packet sizes, their order, and direction. Although a recent study has shown that no existing fingerprinting method scales in Tor when applied in realistic settings, this does not consider the case of Tor hidden services. In this work, we propose a two-phase fingerprinting approach applied in the scope of Tor hidden services and explore its scalability. We show that the success of the only previously proposed fingerprinting attack against hidden services strongly depends on the Tor version used; i.e., it may be applicable to less than 1.5% of connections to hidden services due to its requirement for control of the first anonymization node. In contrast, in our method, the attacker needs merely to be somewhere on the link between the client and the first anonymization node and the attack can be mounted for any connection to a hidden service.

    @inproceedings{MPL+16,
    author = {Mitseva, Asya and Panchenko, Andriy and Lanze, Fabian and Henze, Martin and Wehrle, Klaus and Engel, Thomas},
    title = {{POSTER: Fingerprinting Tor Hidden Services}},
    booktitle = {Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS)},
    month = {10},
    year = {2016},
    doi = {10.1145/2976749.2989054},
    abstract = {The website fingerprinting attack aims to infer the content of encrypted and anonymized connections by analyzing patterns from the communication such as packet sizes, their order, and direction. Although a recent study has shown that no existing fingerprinting method scales in Tor when applied in realistic settings, this does not consider the case of Tor hidden services. In this work, we propose a two-phase fingerprinting approach applied in the scope of Tor hidden services and explore its scalability. We show that the success of the only previously proposed fingerprinting attack against hidden services strongly depends on the Tor version used; i.e., it may be applicable to less than 1.5% of connections to hidden services due to its requirement for control of the first anonymization node. In contrast, in our method, the attacker needs merely to be somewhere on the link between the client and the first anonymization node and the attack can be mounted for any connection to a hidden service.},
    }

  • R. Matzutt, O. Hohlfeld, M. Henze, R. Rawiel, J. H. Ziegeldorf, and K. Wehrle, “POSTER: I Don’t Want That Content! On the Risks of Exploiting Bitcoin’s Blockchain as a Content Store,” in Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bitcoin has revolutionized digital currencies and its underlying blockchain has been successfully applied to other domains. To be verifiable by every participating peer, the blockchain maintains every transaction in a persistent, distributed, and tamper-proof log that every participant needs to replicate locally. While this constitutes the central innovation of blockchain technology and is thus a desired property, it can also be abused in ways that are harmful to the overall system. We show for Bitcoin that blockchains potentially provide multiple ways to store (malicious and illegal) content that, once stored, cannot be removed and is replicated by every participating user. We study the evolution of content storage in Bitcoin’s blockchain, classify the stored content, and highlight implications of allowing the storage of arbitrary data in globally replicated blockchains.

    @inproceedings{MHH+16,
    author = {Matzutt, Roman and Hohlfeld, Oliver and Henze, Martin and Rawiel, Robin and Ziegeldorf, Jan Henrik and Wehrle, Klaus},
    title = {{POSTER: I Don't Want That Content! On the Risks of Exploiting Bitcoin's Blockchain as a Content Store}},
    booktitle = {Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS)},
    month = {10},
    year = {2016},
    doi = {10.1145/2976749.2989059},
    abstract = {Bitcoin has revolutionized digital currencies and its underlying blockchain has been successfully applied to other domains. To be verifiable by every participating peer, the blockchain maintains every transaction in a persistent, distributed, and tamper-proof log that every participant needs to replicate locally. While this constitutes the central innovation of blockchain technology and is thus a desired property, it can also be abused in ways that are harmful to the overall system. We show for Bitcoin that blockchains potentially provide multiple ways to store (malicious and illegal) content that, once stored, cannot be removed and is replicated by every participating user. We study the evolution of content storage in Bitcoin's blockchain, classify the stored content, and highlight implications of allowing the storage of arbitrary data in globally replicated blockchains.},
    }

  • M. Henze, J. Hiller, S. Schmerling, J. H. Ziegeldorf, and K. Wehrle, “CPPL: Compact Privacy Policy Language,” in Proceedings of the 15th Workshop on Privacy in the Electronic Society (WPES), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Recent technology shifts such as cloud computing, the Internet of Things, and big data lead to a significant transfer of sensitive data out of trusted edge networks. To counter resulting privacy concerns, we must ensure that this sensitive data is not inadvertently forwarded to third parties, used for unintended purposes, or handled and stored in violation of legal requirements. Related work proposes to solve this challenge by annotating data with privacy policies before data leaves the control sphere of its owner. However, we find that existing privacy policy languages are either not flexible enough or require excessive processing, storage, or bandwidth resources which prevents their widespread deployment. To fill this gap, we propose CPPL, a Compact Privacy Policy Language which compresses privacy policies by taking advantage of flexibly specifiable domain knowledge. Our evaluation shows that CPPL reduces policy sizes by two orders of magnitude compared to related work and can check several thousand policies per second. This allows for individual per-data item policies in the context of cloud computing, the Internet of Things, and big data.

    @inproceedings{HHS+16,
    author = {Henze, Martin and Hiller, Jens and Schmerling, Sascha and Ziegeldorf, Jan Henrik and Wehrle, Klaus},
    title = {{CPPL: Compact Privacy Policy Language}},
    booktitle = {Proceedings of the 15th Workshop on Privacy in the Electronic Society (WPES)},
    month = {10},
    year = {2016},
    doi = {10.1145/2994620.2994627},
    abstract = {Recent technology shifts such as cloud computing, the Internet of Things, and big data lead to a significant transfer of sensitive data out of trusted edge networks. To counter resulting privacy concerns, we must ensure that this sensitive data is not inadvertently forwarded to third parties, used for unintended purposes, or handled and stored in violation of legal requirements. Related work proposes to solve this challenge by annotating data with privacy policies before data leaves the control sphere of its owner. However, we find that existing privacy policy languages are either not flexible enough or require excessive processing, storage, or bandwidth resources which prevents their widespread deployment. To fill this gap, we propose CPPL, a Compact Privacy Policy Language which compresses privacy policies by taking advantage of flexibly specifiable domain knowledge. Our evaluation shows that CPPL reduces policy sizes by two orders of magnitude compared to related work and can check several thousand policies per second. This allows for individual per-data item policies in the context of cloud computing, the Internet of Things, and big data.},
    }

  • M. Henze, J. Hiller, O. Hohlfeld, and K. Wehrle, “Moving Privacy-Sensitive Services from Public Clouds to Decentralized Private Clouds,” in 2016 IEEE International Conference on Cloud Engineering (IC2E) Workshops, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    Today’s public cloud services suffer from fundamental privacy issues, e.g., as demonstrated by the global surveillance disclosures. The lack of privacy in cloud computing stems from its inherent centrality. State-of-the-art approaches that increase privacy for cloud services either operate cloud-like services on users’ devices or encrypt data prior to upload to the cloud. However, these techniques jeopardize advantages of the cloud such as elasticity of processing resources. In contrast, we propose decentralized private clouds to allow users to protect their privacy and still benefit from the advantages of cloud computing. Our approach utilizes idle resources of friends and family to realize a trusted, decentralized system in which cloud services can be operated securely and in a privacy-preserving manner. We discuss our approach and substantiate its feasibility with initial experiments.

    @inproceedings{HHHW16,
    author = {Henze, Martin and Hiller, Jens and Hohlfeld, Oliver and Wehrle, Klaus},
    title = {{Moving Privacy-Sensitive Services from Public Clouds to Decentralized Private Clouds}},
    booktitle = {2016 IEEE International Conference on Cloud Engineering (IC2E) Workshops},
    month = {04},
    year = {2016},
    doi = {10.1109/IC2EW.2016.24},
    abstract = {Today's public cloud services suffer from fundamental privacy issues, e.g., as demonstrated by the global surveillance disclosures. The lack of privacy in cloud computing stems from its inherent centrality. State-of-the-art approaches that increase privacy for cloud services either operate cloud-like services on users' devices or encrypt data prior to upload to the cloud. However, these techniques jeopardize advantages of the cloud such as elasticity of processing resources. In contrast, we propose decentralized private clouds to allow users to protect their privacy and still benefit from the advantages of cloud computing. Our approach utilizes idle resources of friends and family to realize a trusted, decentralized system in which cloud services can be operated securely and in a privacy-preserving manner. We discuss our approach and substantiate its feasibility with initial experiments.},
    }

  • M. Henze, L. Hermerschmidt, D. Kerpen, R. Häußling, B. Rumpe, and K. Wehrle, “A Comprehensive Approach to Privacy in the Cloud-based Internet of Things,” Future Generation Computer Systems (FGCS), vol. 56, 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    In the near future, the Internet of Things is expected to penetrate all aspects of the physical world, including homes and urban spaces. In order to handle the massive amount of data that becomes collectible and to offer services on top of this data, the most convincing solution is the federation of the Internet of Things and cloud computing. Yet, the wide adoption of this promising vision, especially for application areas such as pervasive health care, assisted living, and smart cities, is hindered by severe privacy concerns of the individual users. Hence, user acceptance is a critical factor to turn this vision into reality. To address this critical factor and thus realize the cloud-based Internet of Things for a variety of different application areas, we present our comprehensive approach to privacy in this envisioned setting. We allow an individual user to enforce all her privacy requirements before any sensitive data is uploaded to the cloud, enable developers of cloud services to integrate privacy functionality already into the development process of cloud services, and offer users a transparent and adaptable interface for configuring their privacy requirements.

    @article{HHK+15,
    author = {Henze, Martin and Hermerschmidt, Lars and Kerpen, Daniel and H{\"a}u{\ss}ling, Roger and Rumpe, Bernhard and Wehrle, Klaus},
    journal = {Future Generation Computer Systems (FGCS)},
    volume = {56},
    title = {{A Comprehensive Approach to Privacy in the Cloud-based Internet of Things}},
    month = {03},
    year = {2016},
    doi = {10.1016/j.future.2015.09.016},
    abstract = {In the near future, the Internet of Things is expected to penetrate all aspects of the physical world, including homes and urban spaces. In order to handle the massive amount of data that becomes collectible and to offer services on top of this data, the most convincing solution is the federation of the Internet of Things and cloud computing. Yet, the wide adoption of this promising vision, especially for application areas such as pervasive health care, assisted living, and smart cities, is hindered by severe privacy concerns of the individual users. Hence, user acceptance is a critical factor to turn this vision into reality.
    To address this critical factor and thus realize the cloud-based Internet of Things for a variety of different application areas, we present our comprehensive approach to privacy in this envisioned setting. We allow an individual user to enforce all her privacy requirements before any sensitive data is uploaded to the cloud, enable developers of cloud services to integrate privacy functionality already into the development process of cloud services, and offer users a transparent and adaptable interface for configuring their privacy requirements.},
    }

  • A. Panchenko, F. Lanze, A. Zinnen, M. Henze, J. Pennekamp, K. Wehrle, and T. Engel, “Website Fingerprinting at Internet Scale,” in 23rd Annual Network and Distributed System Security Symposium (NDSS), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper – one of the weakest adversaries in the attacker model of anonymization networks such as Tor. In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being dramatically more computationally efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is orders of magnitude more computationally efficient and superior in terms of detection accuracy, for the first time we show that no existing method – including our own – scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.

    @inproceedings{PLZ+16,
    author = {Panchenko, Andriy and Lanze, Fabian and Zinnen, Andreas and Henze, Martin and Pennekamp, Jan and Wehrle, Klaus and Engel, Thomas},
    title = {{Website Fingerprinting at Internet Scale}},
    booktitle = {23rd Annual Network and Distributed System Security Symposium (NDSS)},
    month = {02},
    year = {2016},
    doi = {10.14722/ndss.2016.23477},
    abstract = {The website fingerprinting attack aims to identify the content (i.e., a webpage accessed by a client) of encrypted and anonymized connections by observing patterns of data flows such as packet size and direction. This attack can be performed by a local passive eavesdropper - one of the weakest adversaries in the attacker model of anonymization networks such as Tor.
    In this paper, we present a novel website fingerprinting attack. Based on a simple and comprehensible idea, our approach outperforms all state-of-the-art methods in terms of classification accuracy while being dramatically more computationally efficient. In order to evaluate the severity of the website fingerprinting attack in reality, we collected the most representative dataset that has ever been built, where we avoid simplified assumptions made in the related work regarding selection and type of webpages and the size of the universe. Using this data, we explore the practical limits of website fingerprinting at Internet scale. Although our novel approach is orders of magnitude more computationally efficient and superior in terms of detection accuracy, for the first time we show that no existing method - including our own - scales when applied in realistic settings. With our analysis, we explore neglected aspects of the attack and investigate the realistic probability of success for different strategies a real-world adversary may follow.},
    }

  • J. H. Ziegeldorf, R. Matzutt, M. Henze, F. Grossmann, and K. Wehrle, “Secure and Anonymous Decentralized Bitcoin Mixing,” Future Generation Computer Systems (FGCS), 2016.
    [BibTeX] [Abstract] [PDF] [DOI]

    The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoin’s most central element, the blockchain, a public ledger of all transactions. Thus, many regard Bitcoin’s central promise of financial privacy as broken. In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets orders of magnitude larger than those of related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations.

    @article{ZMH+16,
    author = {Ziegeldorf, Jan Henrik and Matzutt, Roman and Henze, Martin and Grossmann, Fred and Wehrle, Klaus},
    journal = {Future Generation Computer Systems (FGCS)},
    title = {{Secure and Anonymous Decentralized Bitcoin Mixing}},
    year = {2016},
    doi = {10.1016/j.future.2016.05.018},
    abstract = {The decentralized digital currency Bitcoin presents an anonymous alternative to the centralized banking system and indeed enjoys widespread and increasing adoption. Recent works, however, show how users can be reidentified and their payments linked based on Bitcoin's most central element, the blockchain, a public ledger of all transactions. Thus, many regard Bitcoin's central promise of financial privacy as broken.
    In this paper, we propose CoinParty, an efficient decentralized mixing service that allows users to reestablish their financial privacy in Bitcoin and related cryptocurrencies. CoinParty, through a novel combination of decryption mixnets with threshold signatures, takes a unique place in the design space of mixing services, combining the advantages of previously proposed centralized and decentralized mixing services in one system. Our prototype implementation of CoinParty scales to large numbers of users and achieves anonymity sets orders of magnitude larger than those of related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain. CoinParty can easily be deployed by any individual group of users, i.e., independent of any third parties, or provided as a commercial or voluntary service, e.g., as a community service by privacy-aware organizations.},
    }

2015

  • J. H. Ziegeldorf, J. Hiller, M. Henze, H. Wirtz, and K. Wehrle, “Bandwidth-optimized Secure Two-Party Computation of Minima,” in The 14th International Conference on Cryptology and Network Security (CANS), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Secure Two-Party Computation (STC) allows two mutually untrusting parties to securely evaluate a function on their private inputs. While tremendous progress has been made towards reducing processing overheads, STC still incurs significant communication overhead that is in fact prohibitive when no high-speed network connection is available, e.g., when applications are run over a cellular network. In this paper, we consider the fundamental problem of securely computing a minimum and its argument, which is a basic building block in a wide range of applications that have been proposed as STCs, e.g., Nearest Neighbor Search, Auctions, and Biometric Matchings. We first comprehensively analyze and compare the communication overhead of implementations of the three major STC concepts, i.e., Yao’s Garbled Circuits, the Goldreich-Micali-Wigderson protocol, and Homomorphic Encryption. We then propose an algorithm for securely computing minima in the semi-honest model that, compared to the current state of the art, reduces communication overheads by 18% to 98%. Lower communication overheads result in faster runtimes in constrained networks and lower direct costs for users.

    @inproceedings{ZHH+15,
    author = {Ziegeldorf, Jan Henrik and Hiller, Jens and Henze, Martin and Wirtz, Hanno and Wehrle, Klaus},
    title = {{Bandwidth-optimized Secure Two-Party Computation of Minima}},
    booktitle = {The 14th International Conference on Cryptology and Network Security (CANS)},
    month = {12},
    year = {2015},
    doi = {10.1007/978-3-319-26823-1_14},
    abstract = {Secure Two-Party Computation (STC) allows two mutually untrusting parties to securely evaluate a function on their private inputs. While tremendous progress has been made towards reducing processing overheads, STC still incurs significant communication overhead that is in fact prohibitive when no high-speed network connection is available, e.g., when applications are run over a cellular network. In this paper, we consider the fundamental problem of securely computing a minimum and its argument, which is a basic building block in a wide range of applications that have been proposed as STCs, e.g., Nearest Neighbor Search, Auctions, and Biometric Matchings. We first comprehensively analyze and compare the communication overhead of implementations of the three major STC concepts, i.e., Yao’s Garbled Circuits, the Goldreich-Micali-Wigderson protocol, and Homomorphic Encryption. We then propose an algorithm for securely computing minima in the semi-honest model that, compared to the current state of the art, reduces communication overheads by 18% to 98%. Lower communication overheads result in faster runtimes in constrained networks and lower direct costs for users.},
    }

  • J. H. Ziegeldorf, M. Henze, R. Hummen, and K. Wehrle, “Comparison-based Privacy: Nudging Privacy in Social Media,” in The 10th International Workshop on Data Privacy Management (DPM), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Social media continues to lead imprudent users into oversharing, exposing them to various privacy threats. Recent research thus focuses on nudging the user in the ‘right’ direction. In this paper, we propose Comparison-based Privacy (CbP), a design paradigm for privacy nudges that overcomes the limitations and challenges of existing approaches. CbP is based on the observation that comparison is a natural human behavior. With CbP, we transfer this observation to decision-making processes in the digital world by enabling the user to compare herself along privacy-relevant metrics to user-selected comparison groups. In doing so, our approach provides a framework for the integration of existing nudges under a self-adaptive, user-centric norm of privacy. Thus, we expect CbP not only to provide technical improvements, but also to increase user acceptance of privacy nudges. We also show how CbP can be implemented and present preliminary results.

    @inproceedings{ZHHW15,
    author = {Ziegeldorf, Jan Henrik and Henze, Martin and Hummen, Ren{\'e} and Wehrle, Klaus},
    title = {{Comparison-based Privacy: Nudging Privacy in Social Media}},
    booktitle = {The 10th International Workshop on Data Privacy Management (DPM)},
    month = {09},
    year = {2015},
    doi = {10.1007/978-3-319-29883-2_15},
    abstract = {Social media continues to lead imprudent users into oversharing, exposing them to various privacy threats. Recent research thus focuses on nudging the user in the 'right' direction. In this paper, we propose Comparison-based Privacy (CbP), a design paradigm for privacy nudges that overcomes the limitations and challenges of existing approaches. CbP is based on the observation that comparison is a natural human behavior. With CbP, we transfer this observation to decision-making processes in the digital world by enabling the user to compare herself along privacy-relevant metrics to user-selected comparison groups. In doing so, our approach provides a framework for the integration of existing nudges under a self-adaptive, user-centric norm of privacy. Thus, we expect CbP not only to provide technical improvements, but also to increase user acceptance of privacy nudges. We also show how CbP can be implemented and present preliminary results.},
    }

  • J. H. Ziegeldorf, J. Metzke, M. Henze, and K. Wehrle, “Choose Wisely: A Comparison of Secure Two-Party Computation Frameworks,” in 2015 IEEE Security and Privacy Workshops, 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Secure Two-Party Computation (STC), despite being a powerful tool for privacy engineers, is rarely used in practice for two reasons: i) STCs incur significant overheads and ii) developing efficient STCs requires expert knowledge. Recent works propose a variety of frameworks that address these problems. However, the varying assumptions, scenarios, and benchmarks in these works render results incomparable. It is thus hard, if not impossible, for an inexperienced developer of STCs to choose the best framework for her task. In this paper, we present a thorough quantitative performance analysis of recent STC frameworks. Our results reveal significant performance differences and we identify potential for optimizations as well as new research directions for STC. Complemented by a qualitative discussion of the frameworks’ usability, our results provide privacy engineers with a dependable information basis for choosing the STC framework that fits their application.

    @inproceedings{ZMHW15,
    author = {Ziegeldorf, Jan Henrik and Metzke, Jan and Henze, Martin and Wehrle, Klaus},
    title = {{Choose Wisely: A Comparison of Secure Two-Party Computation Frameworks}},
    booktitle = {2015 IEEE Security and Privacy Workshops},
    month = {05},
    year = {2015},
    doi = {10.1109/SPW.2015.9},
    abstract = {Secure Two-Party Computation (STC), despite being a powerful tool for privacy engineers, is rarely used in practice for two reasons: i) STCs incur significant overheads and ii) developing efficient STCs requires expert knowledge. Recent works propose a variety of frameworks that address these problems. However, the varying assumptions, scenarios, and benchmarks in these works render results incomparable. It is thus hard, if not impossible, for an inexperienced developer of STCs to choose the best framework for her task. In this paper, we present a thorough quantitative performance analysis of recent STC frameworks. Our results reveal significant performance differences and we identify potential for optimizations as well as new research directions for STC. Complemented by a qualitative discussion of the frameworks' usability, our results provide privacy engineers with a dependable information basis for choosing the STC framework that fits their application.},
    }

  • J. H. Ziegeldorf, F. Grossmann, M. Henze, N. Inden, and K. Wehrle, “CoinParty: Secure Multi-Party Mixing of Bitcoins,” in The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY), 2015.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin’s promise of anonymity is broken as recent work shows how Bitcoin’s blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user’s Bitcoins with other users’ coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is the first among related approaches to provide plausible deniability.

    @inproceedings{ZGH+15,
    author = {Ziegeldorf, Jan Henrik and Grossmann, Fred and Henze, Martin and Inden, Nicolas and Wehrle, Klaus},
    title = {{CoinParty: Secure Multi-Party Mixing of Bitcoins}},
    booktitle = {The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY)},
    month = {03},
    year = {2015},
    doi = {10.1145/2699026.2699100},
    abstract = {Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel, decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By the application of threshold signatures to Bitcoin mixing, CoinParty achieves anonymity by orders of magnitude higher than related work as we quantify by analyzing transactions in the actual Bitcoin blockchain and is first among related approaches to provide plausible deniability.},
    }

2014

  • M. Eggert, R. Häußling, M. Henze, L. Hermerschmidt, R. Hummen, D. Kerpen, A. Navarro Pérez, B. Rumpe, D. Thißen, and K. Wehrle, “SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators,” in Trusted Cloud Computing, H. Krcmar, R. Reussner, and B. Rumpe, Eds., Springer, 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Although Cloud Computing promises to lower IT costs and increase users’ productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the project SensorCloud investigates techniques to understand and compensate these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will considerably be increased for users who currently do not trust the Cloud.

    @incollection{EHH+14,
    author = {Eggert, Michael and H{\"a}u{\ss}ling, Roger and Henze, Martin and Hermerschmidt, Lars and Hummen, Ren{\'e} and Kerpen, Daniel and Navarro P{\'e}rez, Antonio and Rumpe, Bernhard and Thi{\ss}en, Dirk and Wehrle, Klaus},
    title = {{SensorCloud: Towards the Interdisciplinary Development of a Trustworthy Platform for Globally Interconnected Sensors and Actuators}},
    booktitle = {Trusted Cloud Computing},
    editor = {Krcmar, Helmut and Reussner, Ralf and Rumpe, Bernhard},
    month = {12},
    year = {2014},
    publisher = {Springer},
    doi = {10.1007/978-3-319-12718-7_13},
    abstract = {Although Cloud Computing promises to lower IT costs and increase users' productivity in everyday life, the unattractive aspect of this new technology is that the user no longer owns all the devices which process personal data. To lower scepticism, the project SensorCloud investigates techniques to understand and compensate these adoption barriers in a scenario consisting of cloud applications that utilize sensors and actuators placed in private places. This work provides an interdisciplinary overview of the social and technical core research challenges for the trustworthy integration of sensor and actuator devices with the Cloud Computing paradigm. Most importantly, these challenges include i) ease of development, ii) security and privacy, and iii) social dimensions of a cloud-based system which integrates into private life. When these challenges are tackled in the development of future cloud systems, the attractiveness of new use cases in a sensor-enabled world will considerably be increased for users who currently do not trust the Cloud.},
    }

  • M. Henze, R. Hummen, R. Matzutt, and K. Wehrle, “A Trust Point-based Security Architecture for Sensor Data in the Cloud,” in Trusted Cloud Computing, H. Krcmar, R. Reussner, and B. Rumpe, Eds., Springer, 2014.
    [BibTeX] [Abstract] [DOI]

    The SensorCloud project aims at enabling the use of elastic, on-demand resources of today’s Cloud offers for the storage and processing of sensed information about the physical world. Recent privacy concerns regarding the Cloud computing paradigm, however, constitute an adoption barrier that must be overcome to leverage the full potential of the envisioned scenario. To this end, a key goal of the SensorCloud project is to develop a security architecture that offers full access control to the data owner when outsourcing her sensed information to the Cloud. The central idea of this security architecture is the introduction of the trust point, a security-enhanced gateway at the border of the information sensing network. Based on a security analysis of the SensorCloud scenario, this chapter presents the design and implementation of the main components of our proposed security architecture. Our evaluation results confirm the feasibility of our proposed architecture with respect to the elastic, on-demand resources of today’s commodity Cloud offers.

    @incollection{HHMW14,
    author = {Henze, Martin and Hummen, Ren{\'e} and Matzutt, Roman and Wehrle, Klaus},
    title = {{A Trust Point-based Security Architecture for Sensor Data in the Cloud}},
    booktitle = {Trusted Cloud Computing},
    editor = {Krcmar, Helmut and Reussner, Ralf and Rumpe, Bernhard},
    month = {12},
    year = {2014},
    publisher = {Springer},
    doi = {10.1007/978-3-319-12718-7_6},
    abstract = {The SensorCloud project aims at enabling the use of elastic, on-demand resources of today's Cloud offers for the storage and processing of sensed information about the physical world. Recent privacy concerns regarding the Cloud computing paradigm, however, constitute an adoption barrier that must be overcome to leverage the full potential of the envisioned scenario. To this end, a key goal of the SensorCloud project is to develop a security architecture that offers full access control to the data owner when outsourcing her sensed information to the Cloud. The central idea of this security architecture is the introduction of the trust point, a security-enhanced gateway at the border of the information sensing network. Based on a security analysis of the SensorCloud scenario, this chapter presents the design and implementation of the main components of our proposed security architecture. Our evaluation results confirm the feasibility of our proposed architecture with respect to the elastic, on-demand resources of today's commodity Cloud offers.},
    }

  • M. Henze, S. Bereda, R. Hummen, and K. Wehrle, “SCSlib: Transparently Accessing Protected Sensor Data in the Cloud,” in The 6th International Symposium on Applications of Ad hoc and Sensor Networks (AASNET), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    As sensor networks get increasingly deployed in real-world scenarios such as home and industrial automation, there is a similarly growing demand in analyzing, consolidating, and storing the data collected by these networks. The dynamic, on-demand resources offered by today’s cloud computing environments promise to satisfy this demand. However, prevalent security concerns still hinder the integration of sensor networks and cloud computing. In this paper, we show how recent progress in standardization can provide the basis for protecting data from diverse sensor devices when outsourcing data processing and storage to the cloud. To this end, we present our Sensor Cloud Security Library (SCSlib) that enables cloud service developers to transparently access cryptographically protected sensor data in the cloud. SCSlib specifically allows domain specialists who are not security experts to build secure cloud services. Our evaluation proves the feasibility and applicability of SCSlib for commodity cloud computing environments.

    @inproceedings{HBHW14,
    author = {Henze, Martin and Bereda, Sebastian and Hummen, Ren{\'e} and Wehrle, Klaus},
    title = {{SCSlib: Transparently Accessing Protected Sensor Data in the Cloud}},
    booktitle = {The 6th International Symposium on Applications of Ad hoc and Sensor Networks (AASNET)},
    series = {Procedia Computer Science},
    volume = {37},
    month = {09},
    year = {2014},
    doi = {10.1016/j.procs.2014.08.055},
    abstract = {As sensor networks get increasingly deployed in real-world scenarios such as home and industrial automation, there is a similarly growing demand in analyzing, consolidating, and storing the data collected by these networks. The dynamic, on-demand resources offered by today's cloud computing environments promise to satisfy this demand. However, prevalent security concerns still hinder the integration of sensor networks and cloud computing. In this paper, we show how recent progress in standardization can provide the basis for protecting data from diverse sensor devices when outsourcing data processing and storage to the cloud. To this end, we present our Sensor Cloud Security Library (SCSlib) that enables cloud service developers to transparently access cryptographically protected sensor data in the cloud. SCSlib specifically allows domain specialists who are not security experts to build secure cloud services. Our evaluation proves the feasibility and applicability of SCSlib for commodity cloud computing environments.},
    }

  • M. Henze, L. Hermerschmidt, D. Kerpen, R. Häußling, B. Rumpe, and K. Wehrle, “User-driven Privacy Enforcement for Cloud-based Services in the Internet of Things,” in 2014 International Conference on Future Internet of Things and Cloud (FiCloud), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Internet of Things devices are envisioned to penetrate essentially all aspects of life, including homes and urban spaces, in use cases such as health care, assisted living, and smart cities. One often proposed solution for dealing with the massive amount of data collected by these devices and offering services on top of them is the federation of the Internet of Things and cloud computing. However, user acceptance of such systems is a critical factor that hinders the adoption of this promising approach due to severe privacy concerns. We present UPECSI, an approach for user-driven privacy enforcement for cloud-based services in the Internet of Things to address this critical factor. UPECSI enables enforcement of all privacy requirements of the user once her sensitive data leaves the border of her network, provides a novel approach for the integration of privacy functionality into the development process of cloud-based services, and offers the user an adaptable and transparent configuration of her privacy requirements. Hence, UPECSI demonstrates an approach for realizing user-accepted cloud services in the Internet of Things.

    @inproceedings{HHK+14,
    author = {Henze, Martin and Hermerschmidt, Lars and Kerpen, Daniel and H{\"a}u{\ss}ling, Roger and Rumpe, Bernhard and Wehrle, Klaus},
    title = {{User-driven Privacy Enforcement for Cloud-based Services in the Internet of Things}},
    booktitle = {2014 International Conference on Future Internet of Things and Cloud (FiCloud)},
    month = {08},
    year = {2014},
    doi = {10.1109/FiCloud.2014.38},
    abstract = {Internet of Things devices are envisioned to penetrate essentially all aspects of life, including homes and urban spaces, in use cases such as health care, assisted living, and smart cities. One often proposed solution for dealing with the massive amount of data collected by these devices and offering services on top of them is the federation of the Internet of Things and cloud computing. However, user acceptance of such systems is a critical factor that hinders the adoption of this promising approach due to severe privacy concerns. We present UPECSI, an approach for user-driven privacy enforcement for cloud-based services in the Internet of Things to address this critical factor. UPECSI enables enforcement of all privacy requirements of the user once her sensitive data leaves the border of her network, provides a novel approach for the integration of privacy functionality into the development process of cloud-based services, and offers the user an adaptable and transparent configuration of her privacy requirements. Hence, UPECSI demonstrates an approach for realizing user-accepted cloud services in the Internet of Things.},
    }

  • J. H. Ziegeldorf, N. Viol, M. Henze, and K. Wehrle, “POSTER: Privacy-preserving Indoor Localization,” in 7th ACM Conference on Security and Privacy in Wireless & Mobile Networks (WiSec), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Upcoming WiFi-based localization systems for indoor environments face a conflict of privacy interests: Server-side localization violates location privacy of the users, while localization on the user’s device forces the localization provider to disclose the details of the system, e.g., sophisticated classification models. We show how Secure Two-Party Computation can be used to reconcile privacy interests in a state-of-the-art localization system. Our approach provides strong privacy guarantees for all involved parties, while achieving room-level localization accuracy at reasonable overheads.

    @inproceedings{ZVHW14,
    author = {Ziegeldorf, Jan Henrik and Viol, Nicolai and Henze, Martin and Wehrle, Klaus},
    title = {{POSTER: Privacy-preserving Indoor Localization}},
    booktitle = {7th ACM Conference on Security and Privacy in Wireless \& Mobile Networks (WiSec)},
    month = {07},
    year = {2014},
    doi = {10.13140/2.1.2847.4886},
    abstract = {Upcoming WiFi-based localization systems for indoor environments face a conflict of privacy interests: Server-side localization violates location privacy of the users, while localization on the user's device forces the localization provider to disclose the details of the system, e.g., sophisticated classification models. We show how Secure Two-Party Computation can be used to reconcile privacy interests in a state-of-the-art localization system. Our approach provides strong privacy guarantees for all involved parties, while achieving room-level localization accuracy at reasonable overheads.},
    }

  • F. Schmidt, M. Henze, and K. Wehrle, “Piccett: Protocol-Independent Classification of Corrupted Error-Tolerant Traffic,” in 18th IEEE Symposium on Computers and Communications (ISCC), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Bit errors regularly occur in wireless communications. While many media streaming codecs in principle provide bit error tolerance and resilience, packet-based communication typically drops packets that are not transmitted perfectly. We present PICCETT, a method to heuristically identify which connections corrupted packets belong to, and to assign them to the correct applications instead of dropping them. PICCETT is a receiver-side classifier that requires no support from the sender or network, and no information about which communication protocols are used. We show that PICCETT can assign virtually all packets to the correct connections at bit error rates up to 7–10%, and prevents misassignments even during error bursts. PICCETT’s classification algorithm needs no prior offline training and both trains and classifies fast enough to easily keep up with IEEE 802.11 communication speeds.

    @inproceedings{SHW14,
    author = {Schmidt, Florian and Henze, Martin and Wehrle, Klaus},
    title = {{Piccett: Protocol-Independent Classification of Corrupted Error-Tolerant Traffic}},
    booktitle = {18th IEEE Symposium on Computers and Communications (ISCC)},
    month = {06},
    year = {2014},
    doi = {10.1109/ISCC.2014.6912582},
    abstract = {Bit errors regularly occur in wireless communications. While many media streaming codecs in principle provide bit error tolerance and resilience, packet-based communication typically drops packets that are not transmitted perfectly. We present PICCETT, a method to heuristically identify which connections corrupted packets belong to, and to assign them to the correct applications instead of dropping them. PICCETT is a receiver-side classifier that requires no support from the sender or network, and no information about which communication protocols are used. We show that PICCETT can assign virtually all packets to the correct connections at bit error rates up to 7–10%, and prevents misassignments even during error bursts. PICCETT's classification algorithm needs no prior offline training and both trains and classifies fast enough to easily keep up with IEEE 802.11 communication speeds.},
    }

  • I. Aktas, M. Henze, M. H. Alizai, K. Möllering, and K. Wehrle, “Graph-based Redundancy Removal Approach for Multiple Cross-Layer Interactions,” in 2014 Sixth International Conference on Communication Systems and Networks (COMSNETS), 2014.
    [BibTeX] [Abstract] [PDF] [DOI]

    Research has shown that the availability of cross-layer information from different protocol layers enables adaptivity advantages of applications and protocols which significantly enhance the system performance. However, the development of such cross-layer interactions typically residing in the OS is very difficult mainly due to limited interfaces. The development gets even more complex for multiple running cross-layer interactions which may be added by independent developers without coordination causing (i) redundancy in cross-layer interactions leading to a waste of memory and CPU time and (ii) conflicting cross-layer interactions. In this paper, we focus on the former problem and propose a graph-based redundancy removal algorithm that automatically detects and resolves such redundancies without any feedback from the developer. We demonstrate the applicability of our approach for the cross-layer architecture CRAWLER that utilizes module compositions to realize cross-layer interactions. Our evaluation shows that our approach effectively resolves redundancies at runtime.

    @inproceedings{AHA+14,
    author = {Aktas, Ismet and Henze, Martin and Alizai, Muhammad Hamad and M{\"o}llering, Kevin and Wehrle, Klaus},
    title = {{Graph-based Redundancy Removal Approach for Multiple Cross-Layer Interactions}},
    booktitle = {2014 Sixth International Conference on Communication Systems and Networks (COMSNETS)},
    month = {01},
    year = {2014},
    doi = {10.1109/COMSNETS.2014.6734899},
    abstract = {Research has shown that the availability of cross-layer information from different protocol layers enables adaptivity advantages of applications and protocols which significantly enhance the system performance. However, the development of such cross-layer interactions typically residing in the OS is very difficult mainly due to limited interfaces. The development gets even more complex for multiple running cross-layer interactions which may be added by independent developers without coordination causing (i) redundancy in cross-layer interactions leading to a waste of memory and CPU time and (ii) conflicting cross-layer interactions. In this paper, we focus on the former problem and propose a graph-based redundancy removal algorithm that automatically detects and resolves such redundancies without any feedback from the developer. We demonstrate the applicability of our approach for the cross-layer architecture CRAWLER that utilizes module compositions to realize cross-layer interactions. Our evaluation shows that our approach effectively resolves redundancies at runtime.},
    }

2013

  • M. Henze, M. Großfengels, M. Koprowski, and K. Wehrle, “Towards Data Handling Requirements-aware Cloud Computing,” in 2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group are those concerns regarding the handling of data. On the one hand, users and companies have requirements how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing. However, we observe that current cloud offers, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud and providers in the cloud stack – even if they were willing to meet these requirements – can thus not treat the data adequately. In this paper, we identify and discuss the challenges for enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.

    @inproceedings{HGKW13,
    author = {Henze, Martin and Gro{\ss}fengels, Marcel and Koprowski, Maik and Wehrle, Klaus},
    title = {{Towards Data Handling Requirements-aware Cloud Computing}},
    booktitle = {2013 IEEE International Conference on Cloud Computing Technology and Science (CloudCom)},
    month = {12},
    year = {2013},
    doi = {10.1109/CloudCom.2013.145},
    abstract = {The adoption of the cloud computing paradigm is hindered by severe security and privacy concerns which arise when outsourcing sensitive data to the cloud. One important group are those concerns regarding the handling of data. On the one hand, users and companies have requirements how their data should be treated. On the other hand, lawmakers impose requirements and obligations for specific types of data. These requirements have to be addressed in order to enable the affected users and companies to utilize cloud computing.
    However, we observe that current cloud offers, especially in an intercloud setting, fail to meet these requirements. Users have no way to specify their requirements for data handling in the cloud and providers in the cloud stack - even if they were willing to meet these requirements - can thus not treat the data adequately. In this paper, we identify and discuss the challenges for enabling data handling requirements awareness in the (inter-)cloud. To this end, we show how to extend a data storage service, AppScale, and Cassandra to follow data handling requirements. Thus, we make an important step towards data handling requirements-aware cloud computing.},
    }

  • M. Henze, R. Hummen, R. Matzutt, D. Catrein, and K. Wehrle, “Maintaining User Control While Storing and Processing Sensor Data in the Cloud,” International Journal of Grid and High Performance Computing (IJGHPC), vol. 5, iss. 4, 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    Clouds provide a platform for efficiently and flexibly aggregating, storing, and processing large amounts of data. Eventually, sensor networks will automatically collect such data. A particular challenge regarding sensor data in Clouds is the inherent sensitive nature of sensed information. For current Cloud platforms, the data owner loses control over her sensor data once it enters the Cloud. This imposes a major adoption barrier for bridging Cloud computing and sensor networks, which we address henceforth. After analyzing threats to sensor data in Clouds, the authors propose a Cloud architecture that enables end-to-end control over sensitive sensor data by the data owner. The authors introduce a well-defined entry point from the sensor network into the Cloud, which enforces end-to-end data protection, applies encryption and integrity protection, and grants data access. Additionally, the authors enforce strict isolation of services. The authors show the feasibility and scalability of their Cloud architecture using a prototype and measurements.

    @article{HHM+13,
    author = {Henze, Martin and Hummen, Ren{\'e} and Matzutt, Roman and Catrein, Daniel and Wehrle, Klaus},
    journal = {International Journal of Grid and High Performance Computing (IJGHPC)},
    title = {{Maintaining User Control While Storing and Processing Sensor Data in the Cloud}},
    month = {12},
    year = {2013},
    volume = {5},
    number = {4},
    doi = {10.4018/ijghpc.2013100107},
    abstract = {Clouds provide a platform for efficiently and flexibly aggregating, storing, and processing large amounts of data. Eventually, sensor networks will automatically collect such data. A particular challenge regarding sensor data in Clouds is the inherent sensitive nature of sensed information. For current Cloud platforms, the data owner loses control over her sensor data once it enters the Cloud. This imposes a major adoption barrier for bridging Cloud computing and sensor networks, which we address henceforth. After analyzing threats to sensor data in Clouds, the authors propose a Cloud architecture that enables end-to-end control over sensitive sensor data by the data owner. The authors introduce a well-defined entry point from the sensor network into the Cloud, which enforces end-to-end data protection, applies encryption and integrity protection, and grants data access. Additionally, the authors enforce strict isolation of services. The authors show the feasibility and scalability of their Cloud architecture using a prototype and measurements.},
    }

  • R. Hummen, J. Hiller, M. Henze, and K. Wehrle, “Slimfit – A HIP DEX Compression Layer for the IP-based Internet of Things,” in 1st International Workshop on Internet of Things Communications and Technologies (IoT), 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    The HIP Diet EXchange (DEX) is an end-to-end security protocol designed for constrained network environments in the IP-based Internet of Things (IoT). It is a variant of the IETF-standardized Host Identity Protocol (HIP) with a refined protocol design that targets performance improvements of the original HIP protocol. To stay compatible with existing protocol extensions, the HIP DEX specification thereby aims at preserving the general HIP architecture and protocol semantics. As a result, HIP DEX inherits the verbose HIP packet structure and currently does not consider the available potential to tailor the transmission overhead to constrained IoT environments. In this paper, we present Slimfit, a novel compression layer for HIP DEX. Most importantly, Slimfit i) preserves the HIP DEX security guarantees, ii) allows for stateless (de-)compression at the communication end-points or an on-path gateway, and iii) maintains the flexible packet structure of the original HIP protocol. Moreover, we show that Slimfit is also directly applicable to the original HIP protocol. Our evaluation results indicate a maximum compression ratio of 1.55 for Slimfit-compressed HIP DEX packets. Furthermore, Slimfit reduces HIP DEX packet fragmentation by 25 % and thus further decreases the transmission overhead for lossy network links. Finally, the compression of HIP DEX packets leads to a reduced processing time at the network layers below Slimfit. As a result, processing of Slimfit-compressed packets shows an overall performance gain at the HIP DEX peers.

    @inproceedings{HHHW13,
    author = {Hummen, Ren{\'e} and Hiller, Jens and Henze, Martin and Wehrle, Klaus},
    title = {{Slimfit - A HIP DEX Compression Layer for the IP-based Internet of Things}},
    booktitle = {1st International Workshop on Internet of Things Communications and Technologies (IoT)},
    month = {10},
    year = {2013},
    doi = {10.1109/WiMOB.2013.6673370},
    abstract = {The HIP Diet EXchange (DEX) is an end-to-end security protocol designed for constrained network environments in the IP-based Internet of Things (IoT). It is a variant of the IETF-standardized Host Identity Protocol (HIP) with a refined protocol design that targets performance improvements of the original HIP protocol. To stay compatible with existing protocol extensions, the HIP DEX specification thereby aims at preserving the general HIP architecture and protocol semantics. As a result, HIP DEX inherits the verbose HIP packet structure and currently does not consider the available potential to tailor the transmission overhead to constrained IoT environments. In this paper, we present Slimfit, a novel compression layer for HIP DEX. Most importantly, Slimfit i) preserves the HIP DEX security guarantees, ii) allows for stateless (de-)compression at the communication end-points or an on-path gateway, and iii) maintains the flexible packet structure of the original HIP protocol. Moreover, we show that Slimfit is also directly applicable to the original HIP protocol. Our evaluation results indicate a maximum compression ratio of 1.55 for Slimfit-compressed HIP DEX packets. Furthermore, Slimfit reduces HIP DEX packet fragmentation by 25 % and thus further decreases the transmission overhead for lossy network links. Finally, the compression of HIP DEX packets leads to a reduced processing time at the network layers below Slimfit. As a result, processing of Slimfit-compressed packets shows an overall performance gain at the HIP DEX peers.},
    }

  • M. Henze, R. Hummen, and K. Wehrle, “The Cloud Needs Cross-Layer Data Handling Annotations,” in 2013 IEEE Security and Privacy Workshops, 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    Nowadays, an ever-increasing number of service providers takes advantage of the cloud computing paradigm in order to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows to transparently scale back-end functionality such as computing and storage, the implied distributed sharing of resources has severe implications when sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the in-transparency of the involved backend providers of a cloud-based service and their dedicated data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud. Hence, they have no means to obey the underlying privacy regulations and contracts automatically. As the cloud computing paradigm further evolves towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer to the existing in-transparencies. In this paper, we discuss initial ideas on how to overcome these existing and dawning data handling in-transparencies and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels through to the cloud environment. This allows to signal privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly.

    @inproceedings{HHW13,
    author = {Henze, Martin and Hummen, Ren{\'e} and Wehrle, Klaus},
    booktitle = {2013 IEEE Security and Privacy Workshops},
    title = {{The Cloud Needs Cross-Layer Data Handling Annotations}},
    month = {05},
    year = {2013},
    doi = {10.1109/SPW.2013.31},
    abstract = {Nowadays, an ever-increasing number of service providers takes advantage of the cloud computing paradigm in order to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows to transparently scale back-end functionality such as computing and storage, the implied distributed sharing of resources has severe implications when sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the in-transparency of the involved backend providers of a cloud-based service and their dedicated data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud. Hence, they have no means to obey the underlying privacy regulations and contracts automatically. As the cloud computing paradigm further evolves towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer to the existing in-transparencies.
    In this paper, we discuss initial ideas on how to overcome these existing and dawning data handling in-transparencies and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels through to the cloud environment. This allows to signal privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly.},
    }

  • R. Hummen, J. Hiller, H. Wirtz, M. Henze, H. Shafagh, and K. Wehrle, “6LoWPAN Fragmentation Attacks and Mitigation Mechanisms,” in Proceedings of the Sixth ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2013.
    [BibTeX] [Abstract] [PDF] [DOI]

    6LoWPAN is an IPv6 adaptation layer that defines mechanisms to make IP connectivity viable for tightly resource-constrained devices that communicate over low power, lossy links such as IEEE 802.15.4. It is expected to be used in a variety of scenarios ranging from home automation to industrial control systems. To support the transmission of IPv6 packets exceeding the maximum frame size of the link layer, 6LoWPAN defines a packet fragmentation mechanism. However, the best effort semantics for fragment transmissions, the lack of authentication at the 6LoWPAN layer, and the scarce memory resources of the networked devices render the design of the fragmentation mechanism vulnerable. In this paper, we provide a detailed security analysis of the 6LoWPAN fragmentation mechanism. We identify two attacks at the 6LoWPAN design-level that enable an attacker to (selectively) prevent correct packet reassembly on a target node at considerably low cost. Specifically, an attacker can mount our identified attacks by only sending a single protocol-compliant 6LoWPAN fragment. To counter these attacks, we propose two complementary, lightweight defense mechanisms, the content chaining scheme and the split buffer approach. Our evaluation shows the practicality of the identified attacks as well as the effectiveness of our proposed defense mechanisms at modest trade-offs.

    @inproceedings{HHW+13,
    author = {Hummen, Ren{\'e} and Hiller, Jens and Wirtz, Hanno and Henze, Martin and Shafagh, Hossein and Wehrle, Klaus},
    title = {{6LoWPAN Fragmentation Attacks and Mitigation Mechanisms}},
    booktitle = {Proceedings of the Sixth ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec)},
    month = {04},
    year = {2013},
    doi = {10.1145/2462096.2462107},
    abstract = {6LoWPAN is an IPv6 adaptation layer that defines mechanisms to make IP connectivity viable for tightly resource-constrained devices that communicate over low power, lossy links such as IEEE 802.15.4. It is expected to be used in a variety of scenarios ranging from home automation to industrial control systems. To support the transmission of IPv6 packets exceeding the maximum frame size of the link layer, 6LoWPAN defines a packet fragmentation mechanism. However, the best effort semantics for fragment transmissions, the lack of authentication at the 6LoWPAN layer, and the scarce memory resources of the networked devices render the design of the fragmentation mechanism vulnerable.
    In this paper, we provide a detailed security analysis of the 6LoWPAN fragmentation mechanism. We identify two attacks at the 6LoWPAN design-level that enable an attacker to (selectively) prevent correct packet reassembly on a target node at considerably low cost. Specifically, an attacker can mount our identified attacks by only sending a single protocol-compliant 6LoWPAN fragment. To counter these attacks, we propose two complementary, lightweight defense mechanisms, the content chaining scheme and the split buffer approach. Our evaluation shows the practicality of the identified attacks as well as the effectiveness of our proposed defense mechanisms at modest trade-offs.},
    }

2012

  • R. Hummen, M. Henze, D. Catrein, and K. Wehrle, “A Cloud Design for User-controlled Storage and Processing of Sensor Data,” in 2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom), 2012.
    [BibTeX] [Abstract] [PDF] [DOI]

    Ubiquitous sensing environments such as sensor networks collect large amounts of data. This data volume is destined to grow even further with the vision of the Internet of Things. Cloud computing promises to elastically store and process such sensor data. As an additional benefit, storage and processing in the Cloud enables the efficient aggregation and analysis of information from different data sources. However, sensor data often contains privacy-relevant or otherwise sensitive information. For current Cloud platforms, the data owner loses control over her data once it enters the Cloud. This imposes adoption barriers due to legal or privacy concerns. Hence, a Cloud design is required that the data owner can trust to handle her sensitive data securely. In this paper, we analyze and define properties that a trusted Cloud design has to fulfill. Based on this analysis, we present the security architecture of SensorCloud. Our proposed security architecture enforces end-to-end data access control by the data owner reaching from the sensor network to the Cloud storage and processing subsystems as well as strict isolation up to the service-level. We evaluate the validity and feasibility of our Cloud design with an analysis of our early prototype. Our results show that our proposed security architecture is a promising extension of today’s Cloud offers.

    @inproceedings{HHCW12,
    author = {Hummen, Ren{\'e} and Henze, Martin and Catrein, Daniel and Wehrle, Klaus},
    booktitle = {2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom)},
    title = {{A Cloud Design for User-controlled Storage and Processing of Sensor Data}},
    month = {12},
    year = {2012},
    doi = {10.1109/CloudCom.2012.6427523},
    abstract = {Ubiquitous sensing environments such as sensor networks collect large amounts of data. This data volume is destined to grow even further with the vision of the Internet of Things. Cloud computing promises to elastically store and process such sensor data. As an additional benefit, storage and processing in the Cloud enables the efficient aggregation and analysis of information from different data sources. However, sensor data often contains privacy-relevant or otherwise sensitive information. For current Cloud platforms, the data owner loses control over her data once it enters the Cloud. This imposes adoption barriers due to legal or privacy concerns. Hence, a Cloud design is required that the data owner can trust to handle her sensitive data securely. In this paper, we analyze and define properties that a trusted Cloud design has to fulfill. Based on this analysis, we present the security architecture of SensorCloud. Our proposed security architecture enforces end-to-end data access control by the data owner reaching from the sensor network to the Cloud storage and processing subsystems as well as strict isolation up to the service-level. We evaluate the validity and feasibility of our Cloud design with an analysis of our early prototype. Our results show that our proposed security architecture is a promising extension of today's Cloud offers.},
    }