Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms.
Evolution and Optimum Seeking: The Sixth Generation.
Silicon physical random functions. In: Proceedings of the 9th ACM Conference on Computer and Communications Security.
Identification and authentication of integrated circuits. In: Concurrency and Computation: Practice & Experience (Computer Security).
Physical unclonable functions for device authentication and secret key generation. In: Proceedings of the 44th Annual Design Automation Conference.
Policy gradients with parameter-based exploration for control. In: ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I.
Proceedings of the 2008 IEEE/ACM International Conference on Computer-Aided Design.
Parameter-exploring policy gradients. In: Neural Networks (2010 Special Issue).
Modeling attacks on physical unclonable functions. In: Proceedings of the 17th ACM Conference on Computer and Communications Security.
Extracting secret keys from integrated circuits. In: IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
Lightweight and secure PUF key storage using limits of machine learning. In: CHES '11: Proceedings of the 13th International Conference on Cryptographic Hardware and Embedded Systems.
Physical Unclonable Functions (PUFs) are an emerging cryptographic and security primitive. They can potentially replace secret binary keys in vulnerable hardware systems and offer further security advantages. In this paper, we apply machine learning methods to the cryptanalysis of this new primitive. In particular, we investigate to what extent the security of circuit-based PUFs can be challenged by a machine learning technique named Policy Gradients with Parameter-based Exploration (PGPE). Our findings show that this technique has several important advantages for the cryptanalysis of Physical Unclonable Functions compared to other machine learning methods and to other policy gradient methods.
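To make the attack setting concrete, the following is a minimal, hypothetical sketch of a PGPE-style modeling attack. It is not the paper's implementation: it assumes the standard additive linear delay model of an Arbiter PUF as a stand-in for a "circuit-based PUF", and all sizes, learning rates, and names are illustrative. The attacker collects challenge-response pairs (CRPs), then uses PGPE with symmetric sampling (per Sehnke et al.'s parameter-exploring policy gradients) to search for delay parameters whose predicted responses match the observed ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16        # number of PUF stages (hypothetical small instance)
n_crps = 500  # challenge-response pairs available to the attacker

# Ground-truth PUF: additive delay model, response = sign(w . phi(c)),
# where phi is the usual parity feature transform of the challenge bits.
w_true = rng.normal(size=n + 1)

def features(challenges):
    c = 1 - 2 * challenges                        # map {0,1} -> {+1,-1}
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]  # suffix parities
    return np.hstack([phi, np.ones((len(challenges), 1))])  # bias term

challenges = rng.integers(0, 2, size=(n_crps, n))
X = features(challenges)
y = np.where(X @ w_true >= 0, 1.0, -1.0)

def accuracy(theta):
    # Fraction of CRPs the candidate model predicts correctly (the "reward").
    return np.mean(np.where(X @ theta >= 0, 1.0, -1.0) == y)

# PGPE: maintain a Gaussian search distribution (mu, sigma) over model
# parameters and ascend the sampled reward gradient via symmetric pairs.
mu = np.zeros(n + 1)
sigma = np.full(n + 1, 2.0)
alpha_mu, alpha_sigma, pairs = 0.5, 0.1, 10

for step in range(200):
    grad_mu = np.zeros_like(mu)
    grad_sigma = np.zeros_like(sigma)
    b = accuracy(mu)  # baseline reward at the distribution mean
    for _ in range(pairs):
        eps = rng.normal(size=mu.shape) * sigma
        r_plus, r_minus = accuracy(mu + eps), accuracy(mu - eps)
        grad_mu += (r_plus - r_minus) / 2.0 * eps
        grad_sigma += ((r_plus + r_minus) / 2.0 - b) * (eps**2 - sigma**2) / sigma
    mu += alpha_mu * grad_mu / pairs
    sigma += alpha_sigma * grad_sigma / pairs
    sigma = np.clip(sigma, 0.05, None)  # keep exploration from collapsing

print(f"model accuracy on collected CRPs: {accuracy(mu):.3f}")
```

Because the attack only needs a black-box reward (prediction accuracy), PGPE sidesteps the non-differentiable sign in the PUF model, which is one reason parameter-based exploration is a natural fit for this kind of cryptanalysis.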