Dr. Swati Gupta is an Assistant Professor in the H. Milton Stewart School of Industrial and Systems Engineering at Georgia Tech.
Prior to her arrival at Georgia Tech, she spent two semesters as a Fellow at the Simons Institute, UC Berkeley, participating in programs on Bridging Continuous and Discrete Optimization and Real-Time Decision Making. She received her Ph.D. in operations research from the Operations Research Center at the Massachusetts Institute of Technology and a dual degree (B.Tech and M.Tech) in computer science and engineering from the Indian Institute of Technology, Delhi.
Gupta's research interests lie primarily in combinatorial, convex, and robust optimization, with applications in online learning and data-driven decision-making under partial information. Her work focuses on removing fundamental computational bottlenecks that arise in learning problems due to the combinatorial nature of the decisions, as well as drawing on machine learning to improve traditional optimization methods.
She has worked on providing optimized inventory routing decisions under uncertain demand, and on pricing items optimally while incorporating the effects of sales and promotions. She has collaborated with industrial research labs such as the IBM Research Lab in Zurich, Switzerland, and the Oracle Retail Data Science Group. Gupta is further interested in exploring strategic behavior of customers, fairness and bias in decisions, and unintended consequences of optimization.
Gupta was the Microsoft Research Fellow at the Simons Institute in Spring 2018, and she received the prestigious Simons-Berkeley Research Fellowship for the academic year 2017-18. Her collaborative work on systematically evaluating heuristics and predicting which heuristic or algorithm works best on unseen problem instances received special recognition in the INFORMS Computing Society Student Paper Competition in 2016. She was also a finalist in the INFORMS Service Science Student Paper Competition for her work on promotion optimization for retail items. Gupta received the Google Women in Engineering Award in India in 2011.
Areas of Expertise (4)
Data-driven Decision-making Under Partial Information
Online and Machine Learning
Combinatorial, Convex and Robust Optimization
Fairness and Bias in Decisions
Selected Accomplishments (2)
Simons-Berkeley Research Fellowship
For the Bridging Continuous and Discrete Optimization and Real-Time Decision Making programs, Fall 2017 - Spring 2018
Google Women in Engineering Award
India, 2011
Education (3)
Simons Institute, UC Berkeley: Research Fellow 2018
Real-Time Decision Making (Spring 2018); Bridging Continuous and Discrete Optimization (Fall 2017)
Massachusetts Institute of Technology: Ph.D., Operations Research 2017
Indian Institute of Technology, Delhi: B.Tech & M.Tech, Computer Science and Engineering 2011
Selected Media Appearances (1)
Georgia Tech Guest Post: Meet Your Newest Job Recruiter, the Algorithm
Atlanta Inno online
When you apply for a job, chances are your resume has been through numerous automated screening processes powered by hiring algorithms before it lands in a recruiter’s hands. These algorithms look at things like work history, job title progression and education to weed out resumes. There are pros and cons to this – employers are eager to harness the artificial intelligence and big data captured by the algorithms to speed up the hiring process. But depending on the data used, automated hiring decisions can be very biased.
Selected Articles (5)
John Silberholz, Bruce Golden, Swati Gupta, Xingyin Wang
2018 Metaheuristics are truly diverse in nature—under the overarching theme of performing operations to escape local optima, algorithms as different as ant colony optimization, tabu search, harmony search, and genetic algorithms have emerged. Due to the unique functionality of each type of metaheuristic, the computational comparison of metaheuristics is in many ways more difficult than other algorithmic comparisons. In this chapter, we discuss techniques for the meaningful computational comparison of metaheuristics. We discuss how to create and classify instances in a new testbed and how to make sure other researchers have access to these test instances for future metaheuristic comparisons. In addition, we discuss the disadvantages of large parameter sets and how to measure complicated parameter interactions in a metaheuristic’s parameter space. Finally, we explain how to compare metaheuristics in terms of both solution quality and runtime and how to compare parallel metaheuristics.
Swati Gupta, Michel Goemans, Patrick Jaillet
2016 In order to find Nash equilibria for two-player zero-sum games in which each player plays combinatorial objects like spanning trees, matchings, etc., we consider two online learning algorithms: the online mirror descent (OMD) algorithm and the multiplicative weights update (MWU) algorithm. The OMD algorithm requires the computation of a certain Bregman projection, which has closed-form solutions for simple convex sets like the Euclidean ball or the simplex. For general polyhedra, however, one often needs to exploit the general machinery of convex optimization. We give a novel primal-style algorithm for computing Bregman projections on the base polytopes of polymatroids. Next, in the case of the MWU algorithm, although its regret scales logarithmically in the number of pure strategies or experts, the algorithm takes time polynomial in that number; this especially becomes a problem when learning combinatorial objects, whose number can be exponential in the natural dimension. We give a general recipe to simulate the multiplicative weights update algorithm in time polynomial in their natural dimension, which is useful whenever there exists a polynomial-time generalized counting oracle (even if approximate) over these objects. Finally, using the combinatorial structure of symmetric Nash equilibria (SNE) when both players play bases of matroids, we show that these can be found with a single projection or convex minimization (without using online learning).
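For readers unfamiliar with the MWU algorithm discussed in this abstract, the basic update is simple to sketch. The following is a minimal illustrative sketch of multiplicative weights for the row player of a zero-sum game against a best-responding opponent, not the paper's combinatorial simulation; the function names and the matching-pennies example are illustrative assumptions, not from the paper.

```python
import math

def multiplicative_weights(payoff, rows, cols, eta=0.1, rounds=1000):
    """MWU for the row (loss-minimizing) player of a zero-sum game.

    payoff(i, j): row player's loss in [0, 1] when playing pure
    strategy i against column strategy j. Each round, the column
    player best-responds, and the row player exponentially
    down-weights strategies that incurred high loss.
    """
    weights = [1.0] * rows
    avg = [0.0] * rows
    for _ in range(rounds):
        total = sum(weights)
        mixed = [w / total for w in weights]
        # Column player best-responds to the current mixed strategy.
        j = max(range(cols),
                key=lambda c: sum(mixed[i] * payoff(i, c) for i in range(rows)))
        # Multiplicative update: scale each weight by exp(-eta * loss).
        for i in range(rows):
            weights[i] *= math.exp(-eta * payoff(i, j))
        # Accumulate the time-averaged mixed strategy.
        for i in range(rows):
            avg[i] += mixed[i] / rounds
    return avg  # approximates the row player's equilibrium strategy

# Matching pennies: the unique equilibrium is the uniform strategy (1/2, 1/2).
p = multiplicative_weights(lambda i, j: 1.0 if i == j else 0.0, 2, 2)
```

The abstract's point is visible here: the update touches every pure strategy each round, so when the strategies are combinatorial objects (e.g., all spanning trees of a graph), their number, and hence the running time, blows up, which motivates simulating MWU in the objects' natural dimension via counting oracles.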
Iain Dunning, Swati Gupta, John Silberholz
2015 Though empirical testing is broadly used to evaluate heuristics, there are shortcomings with how it is often applied in practice. In a systematic review of Max-Cut and quadratic unconstrained binary optimization (QUBO) heuristics papers, we found only 4% publish source code, only 14% compare heuristics with identical termination criteria, and most experiments are performed with an artificial, homogeneous set of problem instances. To address these limitations, we implement and release as open-source a code-base of 10 Max-Cut and 27 QUBO heuristics. We perform heuristic evaluation using cloud computing on a library of 3,296 instances. This large-scale evaluation provides insight into the types of problem instances for which each heuristic performs well or poorly. Because no single heuristic outperforms all others across all problem instances, we use machine learning to predict which heuristic will work best on a previously unseen problem instance, a key question facing practitioners.
Nishita Aggarwal, Naveen Garg, Swati Gupta
2011 We provide a polynomial-time 4/3-approximation algorithm for the TSP on metrics arising from the metric completion of cubic 3-edge-connected graphs.
Swati Gupta, Kristen LeFevre, Atul Prakash
2009 Incorrect policy configurations are a major cause of security failures in large-scale systems. Policy analyzers and testing tools can help with this, but often the tools are specific to one type of policy (e.g., firewalls). In contrast, the most insidious security problems often require understanding the interactions of policies across systems (e.g., firewalls, SSH, file systems, etc.). Currently, much of this analysis must be done manually. In this paper, we propose a common framework called SPAN (Security Policy Analyzer) to help analyze policies from heterogeneous systems. On the front-end, SPAN presents administrators with a simple, unified abstraction and a flexible query language. Internally, policies and queries are implemented compactly and efficiently using decision diagrams.