Krishnan Pillaipakkamnatt is an associate professor in the computer science department at Hofstra University. He holds an undergraduate degree in computer engineering (1989) from Andhra University, India and a Ph.D. in computer science (1995) from Vanderbilt University, Nashville, TN. He joined Hofstra as an assistant professor in 1995.
Professor Pillaipakkamnatt's research interests lie in data mining, machine learning, and computational learning theory. More narrowly, he works on algorithms for cluster analysis and prediction in data streams, and on the extension of data stream mining algorithms to distributed environments. He is also interested in data mining algorithms that preserve an individual's privacy.
Professor Pillaipakkamnatt teaches a wide range of graduate and undergraduate courses. He has introduced a number of graduate and undergraduate courses on new and emerging areas of applied computing. He is especially interested in studying the challenges involved in teaching distance learning courses. He is a member of the ACM.
Vanderbilt University: Ph.D., Computer Science 1995
Indian Institute of Science: M.E., Computer Science 1990
Andhra University: B.E., Computer Science 1989
2013 Motivated by the semi-supervised model in the data mining literature, we propose a model for differentially-private learning in which private data is augmented by public data to achieve better accuracy. Our main result is a differentially private classifier with significantly improved accuracy compared to previous work. We experimentally demonstrate that such a classifier produces good prediction accuracies even in those situations where the amount of private data is fairly limited. This expands the range of useful applications of differential privacy since typical results in the differential privacy model require large private data sets to obtain good accuracy.
2010 The ability to store vast quantities of data and the emergence of high speed networking have led to intense interest in distributed data mining. However, privacy concerns, as well as regulations, often prevent the sharing of data between multiple parties. Privacy-preserving distributed data mining allows the cooperative computation of data mining ...
2009 In this paper, we study the problem of constructing private classifiers using decision trees, within the framework of differential privacy. We first construct privacy-preserving ID3 decision trees using differentially private sum queries. Our experiments show that for many data sets a reasonable privacy guarantee can only be obtained via this method at a steep cost of accuracy in predictions. We then present a differentially private decision tree ensemble algorithm using the random decision tree approach. We demonstrate experimentally that our approach yields good prediction accuracy even when the size of the datasets is small. We also present a differentially private algorithm for the situation in which new data is periodically appended to an existing database. Our experiments show that our differentially private random decision tree classifier handles data updates in a way that maintains the same level of privacy guarantee.
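The differentially private sum queries mentioned in this abstract are typically realized with the standard Laplace mechanism: a count has sensitivity 1, so adding Laplace noise with scale 1/ε yields an ε-differentially private answer. The sketch below illustrates that building block only; it is not the paper's implementation, and the function names are illustrative.

```python
import math
import random

def laplace_noise(scale):
    """Draw a sample from Laplace(0, scale) via the inverse CDF.

    With u uniform on (-0.5, 0.5), X = -scale * sgn(u) * ln(1 - 2|u|)
    is Laplace-distributed with the given scale.
    """
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """epsilon-differentially private count query.

    Adding or removing one record changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

A private ID3-style tree builder would issue such noisy counts in place of the exact class counts used to score candidate splits; the abstract's observation is that the accumulated noise across many queries is what makes the naive approach costly in accuracy.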
2008 The sum-of-squares algorithm (SS) was introduced by Csirik, Johnson, Kenyon, Shor, and Weber for online bin packing of integral-sized items into integral-sized bins. First, we show the results of experiments from two new variants of the SS algorithm. The first variant, which runs in time O(n√(B log B)), appears to have almost identical expected waste ...
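For readers unfamiliar with SS: the algorithm tracks, for each possible gap g, the number N(g) of open bins with that much free space, and places each arriving item so as to minimize the sum of N(g)². The following is a minimal straightforward implementation of that rule (not the faster variants studied in the paper); all names are illustrative.

```python
def sum_of_squares_pack(items, B):
    """Online Sum-of-Squares (SS) bin packing.

    Bins have integer capacity B; items have integer sizes in 1..B.
    N[g] counts open bins with remaining gap g (1 <= g < B); each item
    is placed so as to minimize sum over g of N[g]**2.  Full bins
    (gap 0) are not counted.  Returns the list of bin contents.
    """
    N = [0] * B                              # N[g], 1 <= g < B
    bins = []                                # bins[i] = item sizes in bin i
    by_gap = {g: [] for g in range(1, B)}    # open-bin indices keyed by gap

    for s in items:
        best_cost, best_gap = None, None
        # Candidate moves: open a new bin (None), or use an open bin
        # whose gap g can accommodate the item (g >= s).
        for g in [None] + [g for g in range(s, B) if N[g] > 0]:
            old_gap = B if g is None else g
            new_gap = old_gap - s
            # Change in sum of squares if we make this move.
            cost = 0
            if g is not None:
                cost += (N[g] - 1) ** 2 - N[g] ** 2
            if new_gap > 0:
                cost += (N[new_gap] + 1) ** 2 - N[new_gap] ** 2
            if best_cost is None or cost < best_cost:
                best_cost, best_gap = cost, g
        if best_gap is None:                 # open a new bin
            bins.append([s])
            idx = len(bins) - 1
        else:                                # reuse an open bin
            idx = by_gap[best_gap].pop()
            N[best_gap] -= 1
            bins[idx].append(s)
        new_gap = B - sum(bins[idx])
        if new_gap > 0:
            N[new_gap] += 1
            by_gap[new_gap].append(idx)
    return bins
```

This direct version spends O(B) time per item scanning the gap table; the variants discussed in the abstract are precisely about reducing that per-item cost.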
2007 We present a distributed privacy-preserving protocol for the clustering of data streams. The participants of the secure protocol learn cluster centers only on completion of the protocol. Our protocol does not reveal intermediate candidate cluster centers. It is also efficient in terms of communication. The protocol is based on a new memory-efficient clustering algorithm for data streams. Our experiments show that, on average, the accuracy of this algorithm is better than that of the well known k-means algorithm, and compares well with BIRCH, but has far smaller memory requirements.
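The constraint a stream clusterer must satisfy is one pass over the data with memory independent of the stream length. As a point of comparison only, here is a minimal single-pass online k-means update with O(k) memory; this is not the paper's memory-efficient algorithm or its secure protocol, and the function name is illustrative.

```python
def online_kmeans(stream, k):
    """One-pass online k-means: O(k) memory, one update per point.

    The first k points seed the centers; each later point moves its
    nearest center toward it with step size 1/count, so each center
    is the running mean of the points assigned to it.
    """
    centers, counts = [], []
    for x in stream:
        if len(centers) < k:                 # seed with the first k points
            centers.append(list(x))
            counts.append(1)
            continue
        # Nearest center by squared Euclidean distance.
        j = min(range(len(centers)),
                key=lambda i: sum((c - a) ** 2 for c, a in zip(centers[i], x)))
        counts[j] += 1
        eta = 1.0 / counts[j]                # shrinking step keeps a running mean
        centers[j] = [c + eta * (a - c) for c, a in zip(centers[j], x)]
    return centers
```

In the distributed setting of the abstract, the point is that parties run such a bounded-memory clusterer locally and then combine results through a secure protocol, so that only the final cluster centers, not intermediate candidates, are revealed.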