CS Theory Seminar

 

Organizers: Young-San Lin, Simina Brânzei, Elena Grigorescu

Our seminar will meet sporadically this semester. Please check the calendar for the exact details. To get announcements before the talks, please also join our mailing list. You might also wish to sign up for the reading group.


Schedule, Spring 2018

Fri Jan 12

Simina Brânzei

Purdue

Title: Universal Growth in Production Economies. Room TBD, 11am-12pm

Abstract: We study a basic model of an economy over time, in which players invest their money in different goods and use the bundles obtained for production. We show that a simple decentralized dynamic, where players update their bids proportionally to how useful the investments were, leads to growth of the economy in the long term (whenever growth is possible) but also creates unbounded inequality, i.e. very rich and very poor players emerge. We analyze several other phenomena, such as how the relation of a player with others influences its development and the Gini index of the system.

Joint work with Ruta Mehta and Noam Nisan.
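
For intuition, here is a toy Python sketch of a proportional-response-style dynamic in a linear market: each player splits a unit budget across goods, receives a share of each good proportional to its bid, and re-bids in proportion to how much utility each good delivered. This is an illustrative simplification, not the production model analyzed in the paper.

import numpy as np

# Toy proportional-response dynamic in a linear market (illustrative only).
# a[i][j] = value of good j to player i; b[i][j] = player i's bid on good j.
# Each round, goods are allocated in proportion to bids, and each player
# re-divides its unit budget in proportion to the utility each good delivered.

rng = np.random.default_rng(0)
n_players, n_goods, rounds = 4, 3, 200
a = rng.uniform(0.1, 1.0, size=(n_players, n_goods))   # valuations
b = np.full((n_players, n_goods), 1.0 / n_goods)       # initial bids (budget 1 each)

for _ in range(rounds):
    prices = b.sum(axis=0)                       # total bid on each good
    alloc = b / prices                           # fraction of each good won
    util = a * alloc                             # utility obtained from each good
    b = util / util.sum(axis=1, keepdims=True)   # re-bid proportionally to usefulness

print("long-run bids:\n", np.round(b, 3))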

Mon Jan 22

Samson Zhou

Purdue

Title: On the Computational Complexity of Minimal Cumulative Cost Graph Pebbling. Room TBD, 1-2pm

Abstract: We consider the computational complexity of finding a legal black pebbling of a DAG G = (V, E) with minimum cumulative cost. A black pebbling is a sequence P_0, ..., P_t \subseteq V of sets of nodes which must satisfy the following properties: P_0 = \emptyset (we start off with no pebbles on G), sinks(G) \subseteq \bigcup_{j <= t} P_j (every sink node is pebbled at some point), and parents(P_{i+1} \setminus P_i) \subseteq P_i (we can only place a new pebble on a node v if all of v's parents had a pebble during the previous round). The cumulative cost of a pebbling P_0, P_1, ..., P_t \subseteq V is cc(P) = |P_1| + ... + |P_t|. The cumulative pebbling cost is an especially important security metric for data-independent memory-hard functions, an important primitive for password hashing. Thus, an efficient (approximation) algorithm would be an invaluable tool for the cryptanalysis of password hash functions, as it would provide an automated way to establish tight bounds on the amortized space-time cost of computing the function. We show that such a tool is unlikely to exist in the most general case. In particular, we prove the following results.
• It is NP-hard to find a pebbling minimizing cumulative cost.
• The natural linear programming relaxation for the problem has integrality gap \tilde{O}(n), where n is the number of nodes in G. We conjecture that the problem is hard to approximate.
• A related problem, finding a minimum-size subset S \subseteq V such that depth(G - S) <= d, is also NP-hard. In fact, under the Unique Games Conjecture there is no (2 - \epsilon)-approximation algorithm.

Joint work with Jeremiah Blocki.
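
To make the definitions above concrete, the following Python sketch checks that a sequence of pebble sets is a legal black pebbling and computes its cumulative cost cc(P); finding a pebbling of minimum cost is the NP-hard part. The encoding (a dict of parent sets) is just one illustrative choice.

# Checker for the definitions in the abstract: verifies that a sequence of
# pebble sets P_0, ..., P_t is a legal black pebbling of a DAG and returns
# its cumulative cost cc(P) = |P_1| + ... + |P_t|.

def cumulative_cost(parents, sinks, pebbling):
    """parents: dict node -> set of parents; sinks: set of sink nodes;
    pebbling: list of sets P_0, ..., P_t."""
    assert pebbling[0] == set(), "P_0 must be empty"
    pebbled_ever = set()
    for prev, curr in zip(pebbling, pebbling[1:]):
        for v in curr - prev:                          # newly placed pebbles
            assert parents.get(v, set()) <= prev, \
                f"parents of {v} not pebbled in previous round"
        pebbled_ever |= curr
    assert sinks <= pebbled_ever, "every sink must be pebbled at some point"
    return sum(len(p) for p in pebbling[1:])

# Example: the path DAG 1 -> 2 -> 3, pebbled one node at a time.
parents = {2: {1}, 3: {2}}
P = [set(), {1}, {1, 2}, {2, 3}]
print(cumulative_cost(parents, sinks={3}, pebbling=P))  # 1 + 2 + 2 = 5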

Thu Jan 25

Daniel Lokshtanov

U of Bergen

Title: Coping with NP-hardness. Room LWSN 3102, 10:30-11:30; part of the CS Colloquium

Abstract: The vast majority of optimization problems arising in practical applications are NP-hard. Thus it is difficult to overstate the importance of understanding how to handle NP-hard problems algorithmically. When a problem is NP-hard, this means that (unless P = NP) there cannot exist an algorithm that solves all instances of the problem optimally using time polynomial in the size of the instance. However, this does not mean that algorithms stand powerless in the face of NP-hardness. Indeed, an NP-hard problem may still admit algorithms of the following kinds.
- An algorithm that solves all instances of the problem optimally in time O(1.001^n), or even O(2^sqrt(n)), where n is the size of the input instance. Such algorithms are studied in the field of exact exponential-time algorithms, and may still be useful even for moderate-size worst-case input instances.
- An algorithm that solves all instances of the problem within a factor 1.01 of optimality using time polynomial in the size of the instance. In many cases, being just 1% worse than the optimal solution is acceptable. Such algorithms are studied in the field of approximation algorithms.
- An algorithm that solves structurally simple instances of the problem optimally using time polynomial in the size of the instance. This is very useful if we know that the instances we actually want to solve are structurally simple. Algorithms of this kind are studied in the fields of restricted input and parameterized algorithms.
- An algorithm that works in polynomial time and reduces possibly large yet structurally simple instances to equivalent small instances. This can be very useful provided that the output instance is so small that a solution can be found by brute force (a sketch of one classical example follows this abstract). The performance of such algorithms is studied in the field of kernelization.
In this talk I will give a sample of each of the above fields, viewed through the lens of my contributions.
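
As a concrete taste of the kernelization paradigm mentioned last, here is a Python sketch of Buss's classical kernel for Vertex Cover; this is standard textbook material, not a result from the talk.

# Buss's kernel for VERTEX COVER. Given (G, k), any vertex of degree > k
# must be in every cover of size <= k, so put it in the cover and delete it.
# Once all degrees are <= k, a yes-instance has at most k^2 edges left,
# i.e., a small equivalent instance solvable by brute force.

def buss_kernel(edges, k):
    """edges: set of frozenset({u, v}). Returns (reduced_edges, k') or None
    if (G, k) is detected to be a no-instance."""
    edges = set(edges)
    while True:
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        high = [v for v, d in deg.items() if d > k]
        if not high:
            break
        v = high[0]
        edges = {e for e in edges if v not in e}   # v is forced into the cover
        k -= 1
        if k < 0:
            return None
    if len(edges) > k * k:                          # kernel size bound violated
        return None
    return edges, k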

BIO: Daniel Lokshtanov received his PhD in Computer Science (2009) from the University of Bergen. Lokshtanov spent two years (2010-2012) as a Simons Postdoctoral Fellow at the University of California, San Diego, and is now a professor at the Department of Informatics at the University of Bergen. His main research interests are in graph algorithms, parameterized algorithms and complexity. He is a recipient of the Meltzer prize, the Bergen Research Foundation young researcher grant, and of an ERC starting grant on parameterized algorithms. He is a co-author of the recent Parameterized Algorithms book (http://parameterized-algorithms.mimuw.edu.pl/).

Fri Feb 9

Karthik Chandrasekaran

UIUC

Title: Hypergraph k-cut in randomized polynomial time. Room LWSN 3102, 10:30-11:30

Abstract: In the hypergraph k-cut problem, the input is a hypergraph and the goal is to find a smallest subset of hyperedges whose removal ensures that the remaining hypergraph has at least k connected components. When k is part of the input, the hypergraph k-cut problem is at least as hard as the densest k-subgraph problem (Chekuri and Li, 2015). For constant k, the graph k-cut problem was known to be solvable efficiently (Goldschmidt and Hochbaum, 1994) while the complexity of the hypergraph k-cut problem was open. In this talk, I will present a randomized polynomial-time algorithm to solve the hypergraph k-cut problem.

Joint work with Chao Xu and Xilin Yu.
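
To make the objective concrete, here is a brute-force Python reference for hypergraph k-cut; it runs in exponential time and only serves to pin down the definition, unlike the randomized polynomial-time algorithm presented in the talk.

from itertools import combinations

# Brute-force reference for the hypergraph k-cut objective (tiny instances
# only). A hyperedge connects all of its vertices.

def components(vertices, hyperedges):
    """Number of connected components of the hypergraph."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in hyperedges:
        e = list(e)
        for u in e[1:]:
            parent[find(u)] = find(e[0])
    return len({find(v) for v in vertices})

def hypergraph_k_cut(vertices, hyperedges, k):
    """Smallest set of hyperedges whose removal leaves >= k components."""
    for size in range(len(hyperedges) + 1):
        for removed in combinations(range(len(hyperedges)), size):
            rest = [e for i, e in enumerate(hyperedges) if i not in removed]
            if components(vertices, rest) >= k:
                return [hyperedges[i] for i in removed]
    return None

# Example: a 4-cycle as a hypergraph; two hyperedges must go to get 2 parts.
V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]
print(hypergraph_k_cut(V, E, k=2))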

Wed March 7

Jing Chen

Stony Brook

Title: Algorand: A Secure and Efficient Distributed Ledger. Room LWSN 1142, 1pm. Colloquium talk

Abstract: A distributed ledger is a tamperproof sequence of data that can be publicly accessed and augmented by everyone, without being maintained by a centralized party. Distributed ledgers stand to revolutionize the way a modern society operates. They can secure all kinds of traditional transactions, such as payments, asset transfers and titles, in the exact order in which the transactions occur; and enable totally new transactions, such as cryptocurrencies and smart contracts. They can remove intermediaries and usher in a new paradigm for trust. As currently implemented, however, distributed ledgers scale poorly and cannot achieve their enormous potential. In this work we propose Algorand, an alternative, secure and efficient distributed ledger. Algorand is permissionless and works in a highly asynchronous environment. Unlike prior implementations of distributed ledgers based on “proof of work,” Algorand dispenses with “miners” and requires only a negligible amount of computation. Moreover, its transaction history “forks” only with negligible probability: that is, Algorand guarantees the finality of a transaction the moment the transaction enters the ledger.

Joint work with Silvio Micali.
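
For flavor only, here is a toy Python sketch of hash-based committee self-selection, showing how a participant can check its own eligibility with a single hash rather than mining work. This is not Algorand's actual sortition, which relies on verifiable random functions; the names and parameters here are invented for illustration.

import hashlib

# Toy stake-weighted self-selection: hash a public round seed with the
# user's key; the user is on the committee if the hash falls below a
# threshold proportional to its stake. One hash per user, no mining.
# Illustrative only -- NOT the protocol from the talk.

def selected(seed: bytes, user_key: bytes, stake: float,
             expected_committee: int, total_stake: float) -> bool:
    h = hashlib.sha256(seed + user_key).digest()
    draw = int.from_bytes(h, "big") / 2**256          # uniform in [0, 1)
    return draw < expected_committee * stake / total_stake

users = {b"alice": 30.0, b"bob": 10.0, b"carol": 60.0}
committee = [u for u, s in users.items()
             if selected(b"round-7-seed", u, s,
                         expected_committee=2, total_stake=100.0)]
print(committee)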

Bio: Jing Chen is an Assistant Professor in the Computer Science Department at Stony Brook University. She is also an Affiliated Assistant Professor in the Economics Department and an Affiliated Member of the Stony Brook Center for Game Theory. Her main research interests include game theory, distributed ledgers and algorithms. Jing received her Bachelor's and Master's degrees in Computer Science from Tsinghua University, and her PhD in Computer Science from MIT. Before joining Stony Brook, she did a one-year postdoc at the Institute for Advanced Study, Princeton. She received the NSF CAREER Award in 2016.

Mon Mar 18

Jason Hartline

Northwestern University

Title: Data Science and Mechanism Design. Room LWSN 3102, 1pm. Colloquium talk

Abstract: Computer systems have become the primary mediator of social and economic interactions. A defining aspect of such systems is that the participants have preferences over system outcomes and will manipulate their behavior to obtain outcomes they prefer. Such manipulation interferes with data-driven methods for designing and testing system improvements. A standard approach to resolve this interference is to infer preferences from behavioral data and employ the inferred preferences to evaluate novel system designs. In this talk I will describe a method for estimating and comparing the performance of novel systems directly from behavioral data from the original system. This approach skips the step of estimating preferences and is more accurate. Estimation accuracy can be further improved by augmenting the original system; its accuracy then compares favorably with ideal controlled experiments, a.k.a., A/B testing, which are often infeasible. A motivating example will be the paradigmatic problem of designing an auction for the sale of advertisements on an Internet search engine.
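
As a heavily simplified illustration of evaluating a counterfactual design directly from logged data, the Python sketch below replays recorded bids under alternative reserve prices in a second-price auction. It assumes truthful bids, which sidesteps exactly the strategic-manipulation issue the talk addresses; the data are invented.

# Replay logged bids under counterfactual reserve prices (truthful-bid toy).

def second_price_revenue(bids, reserve):
    """Revenue of a second-price auction with a reserve, assuming bids = values."""
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0
    if len(qualified) == 1:
        return reserve
    return max(reserve, qualified[1])      # winner pays max(reserve, 2nd bid)

logged_auctions = [[3.0, 1.2], [0.8, 0.5], [2.1, 2.0, 1.9]]
for r in (0.0, 1.0, 2.0):
    est = sum(second_price_revenue(bids, r) for bids in logged_auctions)
    print(f"reserve {r}: estimated total revenue {est:.2f}")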

Bio: Jason Hartline is an associate professor of computer science at Northwestern University. His research introduces design and analysis methodologies from computer science to understand and improve outcomes of economic systems. Optimal behavior and outcomes in complex environments are complex and, therefore, should not be expected; instead, the theory of approximation can show that simple and natural behaviors are approximately optimal in complex environments. This approach is applied to auction theory and mechanism design in his graduate textbook Mechanism Design and Approximation, which is under preparation (http://jasonhartline.com/MDnA/). Prof. Hartline received his Ph.D. in 2003 from the University of Washington under the supervision of Anna Karlin. He was a postdoctoral fellow at Carnegie Mellon University under the supervision of Avrim Blum, and subsequently a researcher at Microsoft Research in Silicon Valley. He joined Northwestern University in 2008.

Mon Apr 2

Parinaz Naghizadeh

Purdue EE

Title: Network games with applications in cyber-security. Room LWSN 3102, 1pm

Abstract: The outcomes of many social and economic interactions are affected by an underlying network of connections among their agents. For instance, networks play a central role in the provision of public goods (e.g. cyber security), where they determine the externalities among agents, and consequently, the aggregate level of public good in the network (e.g. network security). In this talk, we first study the design of incentive mechanisms, and in particular cyber-insurance contracts, for improving the provision of security over a network of agents with interdependent security. We discuss the challenges and opportunities that arise due to the agents' interdependence, and show that knowledge of the network structure plays a key role in the design of incentive mechanisms in these environments. Motivated by this finding, we develop a unified framework for characterizing the role of the underlying graph structure in determining outcomes of strategic interactions over networks. To this end, we establish a connection between the Nash equilibrium outcomes of network games with non-linear (linear) best-response functions and variational inequality (linear complementarity) problems. Using these connections, we identify sufficient (and necessary) conditions for existence and uniqueness of Nash equilibria in these games. We further derive a connection between agents' efforts and their centralities in the network, and illustrate how this finding can provide guidelines for targeted interventions that improve a network's state of cyber-security.
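
To illustrate the kind of equilibrium computation involved, here is a Python sketch of iterated best response in a game with linear best replies on a network; the max(0, ·) form of the update is what ties such equilibria to linear complementarity problems. The model and numbers are a standard textbook example, not taken from the speaker's papers.

import numpy as np

# Nash equilibrium of a network game with linear best responses, found by
# iterated best response: x_i = max(0, a_i - d * sum_j G_ij x_j).
# Illustrative model; d * spectral_radius(G) < 1 guarantees convergence here.

def best_response_equilibrium(G, a, d, iters=1000, tol=1e-10):
    x = np.zeros(len(a))
    for _ in range(iters):
        x_new = np.maximum(0.0, a - d * G @ x)   # each agent best-responds
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Three agents on a path 1 -- 2 -- 3; efforts are strategic substitutes.
G = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
a = np.array([1.0, 1.0, 1.0])
x = best_response_equilibrium(G, a, d=0.4)
print(np.round(x, 4))  # the central agent free-rides on its two neighbors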

Wed Apr 25

Vladimir Braverman

JHU

Title: Streaming Algorithms for Software Defined Networks, Cosmological Simulations, and Beyond. Room LWSN 3102, 1pm

Abstract: In this talk we will give a gentle introduction to streaming and sketching algorithms and demonstrate their applicability in cosmological N-body simulations, network monitoring for Software Defined Networks (SDN) and other applications. For SDN, we will present the UnivMon (short for Universal Monitoring) framework that can simultaneously achieve both generality and high fidelity across a broad spectrum of monitoring tasks. UnivMon builds on and extends recent theoretical advances in universal sketching, and we will describe the connection of universal sketches to the concentration of measure and Milman’s theorem. For cosmological simulations, we will describe efficient algorithms for the identification of “halos,” which are concentrations of mass in the output of the simulations. Traditional in-memory methods for these tasks do not scale to the datasets that are forbiddingly large in modern simulations. Our experiments show that our tool can scale to datasets with up to 10^12 particles, while using less than an hour of running time on a single Nvidia GTX GPU. Our methods are based on streaming algorithms for Heavy Hitters, i.e., the most popular items in a stream, and we will discuss our recent and nearly optimal algorithms for L2 heavy hitters in insertion-only streams. If time permits we will discuss other applications in healthcare, machine learning and statistical inference.
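
As a minimal taste of streaming heavy hitters, here is the classical Misra-Gries summary in Python, which finds all items occurring more than n/k times in a single pass using at most k-1 counters; the near-optimal L2 heavy hitter algorithms discussed in the talk are substantially more involved.

# Misra-Gries summary: one pass, at most k-1 counters. Every item with
# frequency > n/k is guaranteed to survive in the final counter set.

def misra_gries(stream, k):
    """Return candidate counts for all items occurring > len(stream)/k times."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:                       # no room: decrement all, drop zeros
            counters = {y: c - 1 for y, c in counters.items() if c > 1}
    return counters

stream = ["a", "b", "a", "c", "a", "a", "b", "d", "a"]
print(misra_gries(stream, k=3))    # "a" (5 of 9 occurrences) must survive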




Links to schedules for Fall 2017, Spring 2017, Fall 2016, Spring 2016, Fall 2015, Spring 2015, Fall 2014, Spring 2014, Fall 2013, Spring 2013, Fall 2012.