
Purdue CS researchers present seventeen papers at the USENIX Security Symposium

08-14-2023


 

This year, Purdue CS researchers presented seventeen papers at the 32nd USENIX Security Symposium.


Held in Anaheim, CA, from August 9-11, the USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.

 

Purdue CS Papers at USENIX Security 2023

 

Medusa Attack: Exploring Security Hazards of In-App QR Code Scanning

Xing Han, Yuheng Zhang, and Xue Zhang, University of Electronic Science and Technology of China and Shanghai Qi Zhi Institute; Zeyuan Chen, G.O.S.S.I.P; Mingzhe Wang, Xidian University; Yiwei Zhang, Purdue University; Siqi Ma, The University of New South Wales; Yu Yu, Shanghai Qi Zhi Institute and Shanghai Jiao Tong University; Elisa Bertino, Purdue University; Juanru Li, Shanghai Qi Zhi Institute and Shanghai Jiao Tong University

Smartphone users are abandoning standalone QR code scanners as many apps have integrated QR code scanning as a built-in functionality. With the support of embedded QR code scanning components, apps can read QR codes and immediately execute relevant activities, such as boarding a flight. Handling QR codes in such an automated manner is obviously user-friendly. However, this automation also creates an opportunity for attackers to exploit apps through malicious QR codes if the apps fail to properly check these codes.

In this paper, we systematize and contextualize attacks on mobile apps that use built-in QR code readers. We label these as MEDUSA attacks, which allow attackers to remotely exploit the in-app QR code scanning of a mobile app. Through a MEDUSA attack, remote attackers can invoke a specific type of app function, Remotely Accessible Handlers (RAHs), and perform tasks such as sending authentication tokens or making a payment. We conducted an empirical study on 800 very popular Android and iOS apps with billions of users in the two largest mobile ecosystems, the US and mainland China mobile markets, to investigate the prevalence and severity of MEDUSA-related security vulnerabilities. Based on our proposed vulnerability detection technique, we thoroughly examined the target apps and discovered that a wide range of them are affected. Of the 377 apps (out of 800) with in-app QR code scanning functionality, 123 contained 2,872 custom RAHs that were vulnerable to the MEDUSA attack. By constructing proof-of-concept exploits to test the severity, we confirmed 46 apps with critical or high-severity vulnerabilities, which allow attackers to access sensitive local resources or remotely modify user data.
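
The vulnerable pattern can be pictured as an in-app dispatcher that routes scanned payloads straight to remotely accessible handlers without origin checks or user confirmation. The sketch below is hypothetical and not taken from any of the studied apps; the handler names and payload format are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): an in-app QR scanner that routes
# scanned payloads straight to Remotely Accessible Handlers (RAHs) without
# validating the code's origin or asking for user confirmation -- the class of
# flaw a MEDUSA-style attack abuses.
import json

def send_payment(to: str, amount: float) -> None:
    print(f"[RAH] paying {amount} to {to}")               # placeholder for a real payment flow

def upload_auth_token(endpoint: str) -> None:
    print(f"[RAH] posting session token to {endpoint}")   # placeholder for a token hand-off

RAHS = {
    "pay":   lambda p: send_payment(p["to"], p["amount"]),
    "token": lambda p: upload_auth_token(p["endpoint"]),
}

def on_qr_scanned(raw: str) -> None:
    payload = json.loads(raw)                 # attacker-controlled content
    handler = RAHS.get(payload.get("action"))
    if handler:
        # Vulnerable pattern: no origin check and no user prompt, so a malicious
        # QR code fully controls which handler runs and with what arguments.
        handler(payload.get("params", {}))

# A crafted QR code encoding this JSON would silently trigger a payment:
on_qr_scanned('{"action": "pay", "params": {"to": "attacker", "amount": 100}}')
```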

 

That Person Moves Like A Car: Misclassification Attack Detection for Autonomous Systems Using Spatiotemporal Consistency

Yanmao Man, University of Arizona; Raymond Muller, Purdue University; Ming Li, University of Arizona; Z. Berkay Celik, Purdue University; Ryan Gerdes, Virginia Tech

Autonomous systems commonly rely on object detection and tracking (ODT) to perceive the environment and predict the trajectory of surrounding objects for planning purposes. An ODT's output contains object classes and tracks that are traditionally predicted independently. Recent studies have shown that ODT's output can be falsified by various perception attacks with well-crafted noise, but existing defenses are limited to specific noise injection methods and thus fail to generalize. In this work, we propose PercepGuard for the detection of misclassification attacks against perception modules regardless of attack methodologies. PercepGuard exploits the spatiotemporal properties of a detected object (inherent in the tracks) and cross-checks the consistency between the track and class predictions. To improve adversarial robustness against defense-aware (adaptive) attacks, we additionally consider context data (such as ego-vehicle velocity) for contextual consistency verification, which dramatically increases the attack difficulty. Evaluations with both real-world and simulated datasets produce an FPR of 5% and a TPR of 99% against adaptive attacks. A baseline comparison confirms the advantage of leveraging temporal features. Real-world experiments with displayed and projected adversarial patches show that PercepGuard detects 96% of the attacks on average.
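
The class-track consistency idea can be sketched in a few lines of Python; the speed thresholds, track format, and ego-speed handling below are illustrative assumptions rather than the paper's trained consistency model.

```python
# Minimal sketch of class-vs-track consistency checking (thresholds are
# illustrative assumptions, not PercepGuard's learned model).
import math

# Maximum plausible ground speed (m/s) per object class.
MAX_SPEED = {"pedestrian": 4.0, "bicycle": 12.0, "car": 60.0}

def track_speed(track, dt):
    """Average speed of a track given as [(x, y), ...] positions sampled every dt seconds."""
    dists = [math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)]
    return sum(dists) / (dt * (len(track) - 1))

def is_consistent(pred_class, track, dt, ego_speed=0.0):
    """Flag a misclassification when the track moves faster than the class allows.
    ego_speed compensates for the observer's own motion (contextual consistency)."""
    speed = abs(track_speed(track, dt) - ego_speed)
    return speed <= MAX_SPEED.get(pred_class, float("inf"))

# A detection labeled "pedestrian" but moving ~20 m/s is rejected:
track = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]        # 2 m per 0.1 s frame
print(is_consistent("pedestrian", track, dt=0.1))   # False -> possible attack
```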

 

Fuzz The Power: Dual-role State Guided Black-box Fuzzing for USB Power Delivery

Kyungtae Kim and Sungwoo Kim, Purdue University; Kevin R. B. Butler, University of Florida; Antonio Bianchi, Rick Kennell, and Dave (Jing) Tian, Purdue University

USB Power Delivery (USBPD) is a state-of-the-art charging protocol for advanced power supply. Thanks to the high power levels it can deliver, it has been widely adopted by consumer devices, such as smartphones and laptops, and has become the de facto USB charging standard in both the EU and North America. Due to the low-level nature of charging and the complexity of the protocol, USBPD is often implemented as proprietary firmware running on a dedicated microcontroller unit (MCU) with a USBPD physical layer. Bugs within these implementations can not only lead to safety issues, e.g., over-charging, but also cause security issues, such as allowing attackers to reflash USBPD firmware.

This paper proposes FUZZPD, the first black-box fuzzing technique with dual-role state guidance targeting off-the-shelf USBPD devices with closed-source USBPD firmware. FUZZPD only requires a physical USB Type-C connection to operate in a plug-n-fuzz fashion. To facilitate the black-box fuzzing of USBPD firmware, FUZZPD relies on a manually created dual-role state machine derived from the USBPD specification, which enables coverage of both states and state transitions driven by fuzzing inputs. FUZZPD further provides a multi-level mutation strategy, allowing for fine-grained state-aware fuzzing with intra- and inter-state mutations. We implement FUZZPD using a Chromebook as the fuzzing host and evaluate it against 12 USBPD mobile devices from 7 different vendors, 7 USB hubs from 7 different vendors, and 5 chargers from 5 different vendors. FUZZPD has found 15 unique bugs, 9 of which have been confirmed by the corresponding vendors. We additionally conduct a comparison between FUZZPD and multiple state-of-the-art black-box fuzzing techniques, demonstrating that FUZZPD achieves code coverage that is 40% to 3x higher than other solutions. We then compare FUZZPD with the USBPD compliance test suite from USB-IF and show that FUZZPD can find 7 more bugs with 2x higher code coverage. FUZZPD is the first step towards secure and trustworthy USB charging.
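
Dual-role state guidance can be sketched as a hand-built state machine that drives the device under test to a target state before applying intra- or inter-state mutations. The states, message names, paths, and transport stub below are heavily simplified stand-ins, not the paper's implementation.

```python
# Toy sketch of dual-role state-guided black-box fuzzing in the spirit of
# FUZZPD (message directions, timers, and GoodCRC handling are omitted).
import random

# Hand-built, heavily simplified excerpt of a USBPD source-role negotiation:
# state -> {message accepted in that state: next state}.
STATE_MACHINE = {
    "Src_Send_Capabilities": {"Request": "Src_Negotiate"},
    "Src_Negotiate":         {"Accept": "Src_Transition_Supply"},
    "Src_Transition_Supply": {"PS_RDY": "Src_Ready"},
    "Src_Ready":             {"DR_Swap": "Src_Send_Capabilities"},
}

def send_to_dut(msg):
    print("->", msg)          # stand-in for the physical USB Type-C link

def reach(path):
    """Replay the message sequence that drives the device under test to a state."""
    for msg in path:
        send_to_dut(msg)

def mutate(msg):
    if random.random() < 0.5:   # intra-state mutation: corrupt the expected message
        return msg + "_bad_field"
    # inter-state mutation: send a message that is only legal in another state
    return random.choice([m for msgs in STATE_MACHINE.values() for m in msgs])

# Drive the DUT to each state of interest, then mutate there.
PATHS = {
    "Src_Send_Capabilities": [],
    "Src_Negotiate":         ["Request"],
    "Src_Transition_Supply": ["Request", "Accept"],
    "Src_Ready":             ["Request", "Accept", "PS_RDY"],
}
for state, path in PATHS.items():
    reach(path)
    for _ in range(3):
        expected = next(iter(STATE_MACHINE[state]))
        send_to_dut(mutate(expected))
```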

 

PatchVerif: Discovering Faulty Patches in Robotic Vehicles

Hyungsub Kim, Muslum Ozgur Ozmen, Z. Berkay Celik, Antonio Bianchi, and Dongyan Xu, Purdue University

Modern software is continuously patched to fix bugs and security vulnerabilities. Patching is particularly important in robotic vehicles (RVs), in which safety and security bugs can cause severe physical damage. However, existing automated methods struggle to identify faulty patches in RVs due to their inability to systematically determine patch-introduced behavioral modifications, which affect how the RV interacts with the physical environment.

In this paper, we introduce PATCHVERIF, an automated patch analysis framework. PATCHVERIF’s goal is to evaluate whether a given patch introduces bugs in the patched RV control software. To this aim, PATCHVERIF uses a combination of static and dynamic analysis to measure how the analyzed patch affects the physical state of an RV. Specifically, PATCHVERIF uses a dedicated input mutation algorithm to generate RV inputs that maximize the behavioral differences (in the physical space) between the original code and the patched one. Using the collected information about patch-introduced behavioral modifications, PATCHVERIF employs support vector machines (SVMs) to infer whether a patch is faulty or correct.

We evaluated PATCHVERIF on two popular RV control software packages (ArduPilot and PX4), and it successfully identified faulty patches with an average precision and recall of 97.9% and 92.1%, respectively. Moreover, PATCHVERIF discovered 115 previously unknown bugs, 103 of which have been acknowledged and 51 of which have already been fixed.
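
For illustration, the final classification stage can be sketched as follows: hypothetical behavioral-difference features between unpatched and patched runs are fed to an SVM (via scikit-learn) that labels the patch as faulty or correct. The feature names and training data are placeholders, not PATCHVERIF's.

```python
# Illustrative sketch of classifying a patch from behavioral-difference
# features (feature design and training data are invented for this example).
import numpy as np
from sklearn.svm import SVC

# Each row: differences in physical state between original and patched runs,
# e.g. [position_error_delta, attitude_error_delta, mission_time_delta].
X_train = np.array([
    [0.02, 0.01, 0.1],    # benign patch: behavior barely changes
    [0.05, 0.00, 0.3],
    [2.40, 0.90, 12.0],   # faulty patch: large physical deviation
    [1.80, 1.20, 9.5],
])
y_train = np.array([0, 0, 1, 1])   # 0 = correct patch, 1 = faulty patch

clf = SVC(kernel="rbf").fit(X_train, y_train)

new_patch_features = np.array([[1.95, 0.85, 10.2]])
print("faulty" if clf.predict(new_patch_features)[0] == 1 else "correct")
```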

 

Discovering Adversarial Driving Maneuvers against Autonomous Vehicles

Ruoyu Song, Muslum Ozgur Ozmen, Hyungsub Kim, Raymond Muller, Z. Berkay Celik, and Antonio Bianchi, Purdue University

Over 33% of vehicles sold in 2021 had integrated autonomous driving (AD) systems. While many adversarial machine learning attacks have been studied against these systems, they all require an adversary to perform specific (and often unrealistic) actions, such as carefully modifying traffic signs or projecting malicious images, which may arouse suspicion if discovered. In this paper, we present Acero, a robustness-guided framework to discover adversarial maneuver attacks against autonomous vehicles (AVs). These maneuvers look innocent to an outside observer but force the victim vehicle to violate safety rules for AVs, causing physical consequences, e.g., crashing into pedestrians and other vehicles. To optimally find adversarial driving maneuvers, we formalize seven safety requirements for AD systems and use this formalization to guide our search. We also formalize seven physical constraints that ensure the adversary does not place themselves in danger or violate traffic laws while conducting the attack. Acero then leverages trajectory-similarity metrics to cluster successful attacks into unique groups, enabling AD developers to analyze the root cause of attacks and mitigate them. We evaluated Acero on two open-source AD software stacks, openpilot and Autoware, running on the CARLA simulator. Acero discovered 219 attacks against openpilot and 122 attacks against Autoware. 73.3% of these attacks cause the victim to collide with a third-party vehicle, pedestrian, or static object.
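
Robustness-guided search can be illustrated with a toy safety requirement (keep a minimum distance) and a random search over attacker maneuvers bounded by a physical constraint. The simulator stub, thresholds, and maneuver encoding below are assumptions, not Acero's formalization or optimizer.

```python
# Toy sketch of robustness-guided search for an adversarial maneuver.
import random

SAFE_DISTANCE = 2.0      # safety rule: victim must keep >= 2 m from obstacles
MAX_LATERAL_SHIFT = 1.0  # physical constraint on the attacker's own maneuver

def simulate(attacker_maneuver):
    """Stand-in for a CARLA rollout: returns the victim's minimum distance
    to any third-party object given the attacker's lateral-shift schedule."""
    return max(0.0, 3.0 - 1.5 * sum(abs(s) for s in attacker_maneuver))

def robustness(attacker_maneuver):
    # Positive: requirement satisfied; negative: safety violation found.
    return simulate(attacker_maneuver) - SAFE_DISTANCE

best, best_rob = None, float("inf")
for _ in range(1000):
    maneuver = [random.uniform(-MAX_LATERAL_SHIFT, MAX_LATERAL_SHIFT) for _ in range(5)]
    rob = robustness(maneuver)
    if rob < best_rob:
        best, best_rob = maneuver, rob

print("violation found" if best_rob < 0 else "no violation", round(best_rob, 2))
```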

 

NVLeak: Off-Chip Side-Channel Attacks via Non-Volatile Memory Systems

Zixuan Wang, UC San Diego; Mohammadkazem Taram, Purdue University and UC San Diego; Daniel Moghimi, UT Austin and UC San Diego; Steven Swanson, Dean Tullsen, and Jishen Zhao, UC San Diego

We study microarchitectural side-channel attacks and defenses on non-volatile RAM (NVRAM) DIMMs. We first reverse-engineer NVRAM as implemented by the Intel Optane DIMM and reveal several previously undocumented microarchitectural details: on-DIMM cache structures (NVCache) and wear-leveling policies. Based on these findings, we develop cross-core and cross-VM covert channels to establish the channel capacity of these shared hardware resources. Then, we devise NVCache-based side channels under the umbrella of NVLeak. We apply NVLeak to a series of attack case studies, including compromising the privacy of databases and key-value storage backed by NVRAM and spying on the execution path of code pages when NVRAM is used as volatile runtime memory. Our results show that side-channel attacks exploiting NVRAM are practical and defeat previously proposed defenses that focus only on on-chip hardware resources. To fill this gap in defense, we develop system-level mitigations based on cache partitioning to prevent side-channel leakage from NVCache.
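
At its core, such a channel reduces to distinguishing on-DIMM cache hits from misses by access latency. The fragment below is only a pseudocode-level sketch of the receiver's decoding logic; the threshold and the access stub are placeholders, and a working channel needs native code, cache-bypassing loads over Optane-backed memory, and the NVCache set mapping recovered by reverse engineering.

```python
# Pseudocode-level sketch of latency-threshold decoding for an NVCache-based
# covert channel (constants and the access() stub are illustrative only).
import time

THRESHOLD_NS = 400           # assumed hit/miss latency boundary

def access(buf, offset):
    return buf[offset]       # stand-in for an uncached load from NVRAM

def receive_bit(buf, probe_set):
    """Probe addresses mapping to one NVCache set; if the sender evicted them
    (to transmit '1'), the average access latency crosses the threshold."""
    start = time.perf_counter_ns()
    for off in probe_set:
        access(buf, off)
    avg_ns = (time.perf_counter_ns() - start) / len(probe_set)
    return 1 if avg_ns > THRESHOLD_NS else 0

buf = bytearray(1 << 20)
print(receive_bit(buf, probe_set=range(0, 1 << 20, 4096)))
```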

 

GLeeFuzz: Fuzzing WebGL Through Error Message Guided Mutation

Hui Peng, Purdue University; Zhihao Yao and Ardalan Amiri Sani, UC Irvine; Dave (Jing) Tian, Purdue University; Mathias Payer, EPFL

WebGL is a set of standardized JavaScript APIs for GPU accelerated graphics. Security of the WebGL interface is paramount because it exposes remote and unsandboxed access to the underlying graphics stack (including the native GL libraries and GPU drivers) in the host OS. Unfortunately, applying state-of-the-art fuzzing techniques to the WebGL interface for vulnerability discovery is challenging because of (1) its huge input state space, and (2) the infeasibility of collecting code coverage across concurrent processes, closed-source libraries, and device drivers in the kernel.

Our fuzzing technique, GLeeFuzz, guides input mutation by error messages instead of code coverage. Our key observation is that browsers emit meaningful error messages to aid developers in debugging their WebGL programs. Error messages indicate which part of the input fails (e.g., incomplete arguments, invalid arguments, or unsatisfied dependencies between API calls). Leveraging error messages as feedback, the fuzzer effectively expands coverage by focusing mutation on erroneous parts of the input. We analyze Chrome's WebGL implementation to identify the dependencies between error-emitting statements and rejected parts of the input, and use this information to guide input mutation. We evaluate our GLeeFuzz prototype on Chrome, Firefox, and Safari on diverse desktop and mobile OSes. We discovered 7 vulnerabilities: 4 in Chrome, 2 in Safari, and 1 in Firefox. The Chrome vulnerabilities allow a remote attacker to freeze the GPU and possibly execute remote code with the browser's privileges.
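
A minimal sketch of error-message-guided mutation is shown below. The target API and its error strings are made up for illustration, whereas GLeeFuzz derives the mapping from error-emitting statements to rejected input parts from Chrome's real WebGL implementation.

```python
# Minimal sketch of error-message-guided mutation against a fake API.
import random

def fake_webgl_call(args):
    """Stand-in for a WebGL API: rejects bad inputs with a descriptive message."""
    if len(args) < 3:
        return "INVALID_OPERATION: incomplete arguments"
    if args[1] < 0:
        return "INVALID_VALUE: argument 2 out of range"
    return "OK"

def mutate(args, error):
    args = list(args)
    if "incomplete" in error:            # grow the argument list
        args.append(random.randint(0, 10))
    elif "argument 2" in error:          # fix only the offending argument
        args[1] = random.randint(0, 10)
    else:                                # no feedback: mutate randomly
        args[random.randrange(len(args))] = random.randint(-10, 10)
    return args

args = [1, -5]
for _ in range(10):
    error = fake_webgl_call(args)
    if error == "OK":
        break
    args = mutate(args, error)
print(args, fake_webgl_call(args))
```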

 

AIRS: Explanation for Deep Reinforcement Learning based Security Applications

Jiahao Yu, Northwestern University; Wenbo Guo, Purdue University; Qi Qin, ShanghaiTech University; Gang Wang, University of Illinois at Urbana-Champaign; Ting Wang, The Pennsylvania State University; Xinyu Xing, Northwestern University

Recently, we have witnessed the success of deep reinforcement learning (DRL) in many security applications, ranging from malware mutation to selfish blockchain mining. Like other machine learning methods, however, DRL suffers from a lack of explainability, which has limited its broad adoption because users have difficulty establishing trust in DRL models' decisions. Over the past years, different methods have been proposed to explain DRL models, but unfortunately they are often not suitable for security applications, as they fall short in explanation fidelity, efficiency, and the capability to support model debugging.

In this work, we propose AIRS, a general framework to explain deep reinforcement learning-based security applications. Unlike previous works that pinpoint features important to the agent's current action, our explanation is at the step level. It models the relationship between the final reward and the key steps that a DRL agent takes, and thus outputs the steps that are most critical towards the final reward the agent has gathered. Using four representative security-critical applications, we evaluate AIRS from the perspectives of explainability, fidelity, stability, and efficiency. We show that AIRS can outperform alternative explainable DRL methods. We also showcase AIRS's utility, demonstrating that our explanation could facilitate the DRL model's failure offset, help users establish trust in a model decision, and even assist the identification of inappropriate reward designs.
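
As a loose illustration of step-level attribution (and not AIRS's actual algorithm), one can fit a simple model from per-step features to the final episode reward and rank the steps that most influence it. The data, features, and linear model below are invented for this example.

```python
# Generic step-attribution sketch: which steps drive the final reward?
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_episodes, n_steps = 200, 8

# Toy episodes: X[i, t] = 1 if the agent took the "key" action at step t;
# the final reward secretly depends on steps 2 and 5.
X = rng.integers(0, 2, size=(n_episodes, n_steps)).astype(float)
final_reward = 2.0 * X[:, 2] + 1.5 * X[:, 5] + rng.normal(0, 0.1, n_episodes)

model = Ridge().fit(X, final_reward)
critical_steps = np.argsort(-np.abs(model.coef_))[:2]
print("steps most critical to the final reward:", critical_steps)
```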

 

Intender: Fuzzing Intent-Based Networking with Intent-State Transition Guidance

Jiwon Kim, Purdue University; Benjamin E. Ujcich, Georgetown University; Dave (Jing) Tian, Purdue University

Intent-based networking (IBN) abstracts network configuration complexity away from network operators by focusing on what operators want the network to do rather than how such configuration should be implemented. While such abstraction eases network management challenges, little attention to date has focused on IBN's new security concerns, which can adversely impact an entire network's correct operation. To demonstrate the prevalence of such security concerns, we systematize IBN's security challenges by studying existing bug reports from a representative IBN implementation within the ONOS network operating system. We find that 61% of IBN-related bugs are semantic bugs that are challenging, if not impossible, to detect efficiently with state-of-the-art vulnerability discovery tools.

To tackle existing limitations, we present Intender, the first semantically aware fuzzing framework for IBN. Intender leverages network topology information and intent-operation dependencies (IOD) to efficiently generate testing inputs. Intender introduces a new feedback mechanism, intent-state transition guidance (ISTG), which traces the history of transitions in intent states. We evaluate Intender using ONOS and find 12 bugs, 11 of which were CVE-assigned security-critical vulnerabilities affecting network-wide control plane integrity and availability. Compared to the state-of-the-art fuzzing tools AFL, Jazzer, Zest, and PAZZ, Intender generates up to 78.7× more valid fuzzing inputs, achieves up to 2.2× better coverage, and detects up to 82.6× more unique errors. Intender with IOD reduces redundant operations by 73.02% and spends 10.74% more time on valid operations. Intender with ISTG leads to 1.8× more intent-state transitions compared to code-coverage guidance.
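
The ISTG feedback idea, keeping inputs that exercise previously unseen intent-state transitions, can be sketched as follows. The intent states and the controller stub are simplifications and assumptions, not ONOS internals.

```python
# Simplified sketch of intent-state transition guidance (ISTG).
import random

INTENT_STATES = ["INSTALL_REQ", "COMPILING", "INSTALLING", "INSTALLED", "FAILED", "WITHDRAWN"]

def submit_intent(ops):
    """Stand-in for driving the controller: returns the intent states observed
    for a sequence of intent operations (deterministic per input here)."""
    random.seed(hash(ops) % 10_000)
    return ["INSTALL_REQ"] + random.sample(INTENT_STATES[1:], k=2)

seen_transitions, corpus = set(), [("add", "h1", "h2")]
for _ in range(200):
    # Mutate a seed by appending another intent operation to the sequence.
    ops = random.choice(corpus) + (random.choice(["add", "withdraw", "purge"]),)
    states = submit_intent(ops)
    transitions = set(zip(states, states[1:]))
    if transitions - seen_transitions:      # new transition => interesting input
        seen_transitions |= transitions
        corpus.append(ops)

print(f"{len(seen_transitions)} unique intent-state transitions covered")
```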

 

Hard-label Black-box Universal Adversarial Patch Attack

Guanhong Tao, Shengwei An, Siyuan Cheng, Guangyu Shen, and Xiangyu Zhang, Purdue University

Deep learning models are widely used in many applications. Despite their impressive performance, the security of these models has raised serious concerns. The universal adversarial patch attack is one such security problem, in which an attacker can generate a patch trigger for pre-trained models using gradient information. Whenever the trigger is pasted onto an input, the model misclassifies it to a target label. Existing attacks require access to the model's gradient or its output confidence. In this paper, we propose a novel attack method, HardBeat, that generates universal adversarial patches with access only to the predicted label. It utilizes historical data points during the search for an optimal patch trigger and performs a focused/directed search through a novel importance-aware gradient approximation to explore the neighborhood of the current trigger. The evaluation is conducted on four popular image datasets with eight models and two online commercial services. The experimental results show that HardBeat is significantly more effective than eight baseline attacks, producing more than twice as many high-ASR (attack success rate, >90%) patch triggers on local models and achieving 17.5% higher ASR on online services. Three existing advanced defense techniques fail to defend against HardBeat.
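
The hard-label setting can be illustrated with a plain random-search patch attack that scores candidates only by how many predicted labels flip; this does not reproduce HardBeat's importance-aware gradient approximation or its use of historical data points, and the model, dataset, and patch placement below are stand-ins.

```python
# Hard-label universal patch search sketch: only predicted labels are observable.
import numpy as np

rng = np.random.default_rng(0)

def predict(images):
    """Stand-in for a hard-label model: returns class ids only."""
    return (images.mean(axis=(1, 2, 3)) > 0.55).astype(int)   # toy decision rule

def apply_patch(images, patch, x=0, y=0):
    out = images.copy()
    out[:, y:y + patch.shape[0], x:x + patch.shape[1], :] = patch
    return out

images = rng.random((64, 32, 32, 3))          # surrogate "dataset"
target = 1
best_patch = rng.random((8, 8, 3))
best_asr = (predict(apply_patch(images, best_patch)) == target).mean()

for _ in range(300):
    cand = np.clip(best_patch + rng.normal(0, 0.1, best_patch.shape), 0, 1)
    asr = (predict(apply_patch(images, cand)) == target).mean()
    if asr > best_asr:                        # keep the patch only if ASR improves
        best_patch, best_asr = cand, asr

print(f"attack success rate: {best_asr:.2f}")
```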

 

PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis

Zhuo Zhang, Guanhong Tao, Guangyu Shen, Shengwei An, Qiuling Xu, Yingqi Liu, and Yapeng Ye, Purdue University; Yaoxuan Wu, University of California, Los Angeles; Xiangyu Zhang, Purdue University

Deep Learning (DL) models are increasingly used in many cyber-security applications and achieve superior performance compared to traditional solutions. In this paper, we study backdoor vulnerabilities in naturally trained models used in binary analysis. These backdoors are not injected by attackers but are rather products of defects in datasets and/or training processes. The attacker can exploit these vulnerabilities by injecting a small fixed input pattern (e.g., an instruction), called a backdoor trigger, into the input (e.g., a binary code snippet for a malware detection DL model) such that misclassification is induced (e.g., the malware evades detection). We focus on transformer models used in binary analysis. Given a model, we leverage a trigger inversion technique particularly designed for these models to derive trigger instructions that can induce misclassification. During the attack, we utilize a novel trigger injection technique to insert the trigger instruction(s) into the input binary code snippet. The injection ensures that the code snippet's original program semantics are preserved and that the trigger becomes an integral part of such semantics and hence cannot be easily eliminated. We evaluate our prototype PELICAN on 5 binary analysis tasks and 15 models. The results show that PELICAN can effectively induce misclassification on all the evaluated models in both white-box and black-box scenarios. Our case studies demonstrate that PELICAN can exploit the backdoor vulnerabilities of two closed-source commercial tools.
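
As a much-simplified illustration of semantics-preserving trigger injection, the sketch below inserts a register-neutral instruction pair into a toy instruction list. PELICAN's actual injection is more sophisticated: it entangles the trigger with the snippet's semantics so it cannot simply be stripped. The snippet and trigger here are hypothetical.

```python
# Naive illustration of inserting trigger instructions without changing what
# the snippet computes (a weaker property than PELICAN's entangled injection).
snippet = [
    "push ebp",
    "mov  ebp, esp",
    "mov  eax, [ebp+8]",
    "add  eax, 1",
    "pop  ebp",
    "ret",
]

# Hypothetical trigger derived by trigger inversion: a pair of instructions
# whose combined effect on architectural state is nil.
TRIGGER = ["push ecx", "pop  ecx"]

def inject(code, trigger, position):
    """Insert the trigger at an instruction boundary; the register it touches
    is restored immediately, so the snippet computes the same result."""
    return code[:position] + trigger + code[position:]

for line in inject(snippet, TRIGGER, position=2):
    print(line)
```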

 

ZBCAN: A Zero-Byte CAN Defense System

Khaled Serag, Rohit Bhatia, Akram Faqih, and Muslum Ozgur Ozmen, Purdue University; Vireshwar Kumar, Indian Institute of Technology, Delhi; Z. Berkay Celik and Dongyan Xu, Purdue University

Controller Area Network (CAN) is a widely used network protocol. In addition to being the main communication medium for vehicles, it is also used in factories, medical equipment, elevators, and avionics. Unfortunately, CAN was designed without any security features. Consequently, it has come under scrutiny by the research community, which has exposed its security weaknesses. Recent works have shown that a single compromised ECU on a CAN bus can launch a multitude of attacks, ranging from message injection to bus flooding to attacks exploiting CAN's error-handling mechanism. Although several works have attempted to secure CAN, we argue that none of their approaches could be widely adopted for reasons inherent in their design. In this work, we introduce ZBCAN, a defense system that uses zero bytes of the CAN frame to secure against the most common CAN attacks, including message injection, impersonation, flooding, and error-handling attacks, without using encryption or MACs, while taking into consideration performance metrics such as delay, busload, and data rate.

 

ARGUS: A Framework for Staged Static Taint Analysis of GitHub Workflows and Actions

Siddharth Muralee, Purdue University; Igibek Koishybayev, Aleksandr Nahapetyan, Greg Tystahl, and Brad Reaves, North Carolina State University; Antonio Bianchi, Purdue University; William Enck and Alexandros Kapravelos, North Carolina State University; Aravind Machiry, Purdue University

Millions of software projects leverage automated workflows, like GitHub Actions, for performing common build and deploy tasks. While GitHub Actions have greatly improved the software build process for developers, they pose significant risks to the software supply chain by adding more dependencies and code complexity that may introduce security bugs. This paper presents ARGUS, the first static taint analysis system for identifying code injection vulnerabilities in GitHub Actions. We used ARGUS to perform a large-scale evaluation on 2,778,483 Workflows referencing 31,725 Actions and discovered critical code injection vulnerabilities in 4,307 Workflows and 80 Actions. We also directly compared ARGUS to two existing pattern-based GitHub Actions vulnerability scanners, demonstrating that our system exhibits a marked improvement in terms of vulnerability detection, with a discovery rate more than seven times (7x) higher than the state-of-the-art approaches. These results demonstrate that command injection vulnerabilities in the GitHub Actions ecosystem are not only pervasive but also require taint analysis to be detected.
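
The vulnerability class in question typically looks like a workflow `run:` step that interpolates attacker-controllable event fields (issue titles, comment bodies, and the like) directly into a shell command. The snippet below is only a pattern-level illustration of that sink; ARGUS itself performs staged static taint analysis across Workflows and Actions rather than regex matching.

```python
# Pattern-level check for attacker-controlled expressions inside run steps
# (illustrative only; the workflow below is invented for this example).
import re

WORKFLOW = """
name: triage
on: issues
jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - name: greet
        run: echo "New issue: ${{ github.event.issue.title }}"
"""

# Event fields an external user controls (issue titles, comment bodies, ...).
TAINTED = re.compile(r"\$\{\{\s*github\.event\.(issue|comment|pull_request)\.[\w.]+\s*\}\}")

for lineno, line in enumerate(WORKFLOW.splitlines(), 1):
    if "run:" in line and TAINTED.search(line):
        print(f"line {lineno}: attacker-controlled expression inside a run step")
```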

 

Extracting Protocol Format as State Machine via Controlled Static Loop Analysis

Qingkai Shi, Xiangzhe Xu, and Xiangyu Zhang, Purdue University

Reverse engineering of protocol message formats is critical for many security applications. Mainstream techniques use dynamic analysis and inherit its low-coverage problem: the inferred message formats only reflect the features of their inputs. To achieve high coverage, we choose to use static analysis to infer message formats from the implementation of protocol parsers. In this work, we focus on a class of extremely challenging protocols whose formats are described via constraint-enhanced regular expressions and parsed using finite-state machines. Such state machines are often implemented as complicated parsing loops, which are inherently difficult to analyze via conventional static analysis. Our new technique extracts a state machine by regarding each loop iteration as a state and the dependency between loop iterations as state transitions. To achieve high, i.e., path-sensitive, precision while avoiding path explosion, the analysis is controlled to merge as many paths as possible based on carefully designed rules. The evaluation results show that we can infer a state machine and, thus, the message formats, in five minutes with over 90% precision and recall, far better than the state of the art. We also applied the state machines to enhance protocol fuzzers, which are improved by 20% to 230% in terms of coverage and detect ten more zero-days compared to baselines.
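
The core observation, that each class of parsing-loop iteration acts as a state and inter-iteration dependencies act as transitions, can be seen on a toy parser. The protocol and the recovered machine below are invented for brevity and are not one of the paper's subjects.

```python
# Toy illustration: a parsing loop whose iterations correspond to FSM states,
# so the message format can be read back as state transitions.

def parse(msg: bytes):
    state, fields = "TYPE", {}
    for byte in msg:
        if state == "TYPE":                      # iteration class 1
            fields["type"], state = byte, "LEN"
        elif state == "LEN":                     # iteration class 2
            fields["len"], state = byte, "PAYLOAD"
        elif state == "PAYLOAD":                 # iteration class 3 (self-loop)
            fields.setdefault("payload", bytearray()).append(byte)
            if len(fields["payload"]) == fields["len"]:
                state = "DONE"
    return fields

# The state machine a static analysis would recover from the loop above,
# i.e. the message format: TYPE . LEN . PAYLOAD{LEN}
EXTRACTED_FSM = {
    "TYPE":    {"any byte": "LEN"},
    "LEN":     {"any byte": "PAYLOAD"},
    "PAYLOAD": {"byte while count < len": "PAYLOAD", "count == len": "DONE"},
}

print(parse(bytes([1, 3, 10, 20, 30])))
```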

 

Fine-grained Poisoning Attack to Local Differential Privacy Protocols for Mean and Variance Estimation

Xiaoguang Li, Xidian University and Purdue University; Ninghui Li and Wenhai Sun, Purdue University; Neil Zhenqiang Gong, Duke University; Hui Li, Xidian University

Although local differential privacy (LDP) protects individual users' data from inference by an untrusted data curator, recent studies show that an attacker can launch a data poisoning attack from the user side to inject carefully-crafted bogus data into the LDP protocols in order to maximally skew the final estimate by the data curator. In this work, we further advance this knowledge by proposing a new fine-grained attack, which allows the attacker to fine-tune and simultaneously manipulate mean and variance estimations that are popular analytical tasks for many real-world applications. To accomplish this goal, the attack leverages the characteristics of LDP to inject fake data into the output domain of the local LDP instance. We call our attack the output poisoning attack (OPA). We observe a security-privacy consistency where a small privacy loss enhances the security of LDP, which contradicts the known security-privacy trade-off from prior work. We further study the consistency and reveal a more holistic view of the threat landscape of data poisoning attacks on LDP. We comprehensively evaluate our attack against a baseline attack that intuitively provides false input to LDP. The experimental results show that OPA outperforms the baseline on three real-world datasets. We also propose a novel defense method that can recover the result accuracy from polluted data collection and offer insight into the secure LDP design.
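
For concreteness, output poisoning can be sketched against a one-bit LDP mean estimator; Duchi et al.'s mechanism is used here purely as an illustrative LDP instance, and OPA's fine-grained, simultaneous manipulation of mean and variance is more involved than this skew-the-mean example.

```python
# Sketch of output poisoning against an LDP mean estimator for values in [-1, 1].
import math, random

EPS = 1.0
B = (math.exp(EPS) + 1) / (math.exp(EPS) - 1)

def perturb(x):
    """Honest user: report +/-B with a bias chosen so the report is unbiased for x."""
    p = 0.5 + x * (math.exp(EPS) - 1) / (2 * (math.exp(EPS) + 1))
    return B if random.random() < p else -B

random.seed(1)
honest = [random.uniform(-0.2, 0.2) for _ in range(10_000)]
reports = [perturb(x) for x in honest]

# Output poisoning: fake users skip the mechanism and inject +B directly into
# the output domain, which maximally pulls the estimated mean upward.
reports += [B] * 500

print("true mean   :", round(sum(honest) / len(honest), 3))
print("skewed est. :", round(sum(reports) / len(reports), 3))
```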

 

Your Exploit is Mine: Instantly Synthesizing Counterattack Smart Contract

Zhuo Zhang, Purdue University; Zhiqiang Lin and Marcelo Morales, Ohio State University; Xiangyu Zhang and Kaiyuan Zhang, Purdue University

Smart contracts are susceptible to exploitation due to their unique nature. Despite efforts to identify vulnerabilities using fuzzing, symbolic execution, formal verification, and manual auditing, exploitable vulnerabilities still exist and have led to billions of dollars in monetary losses. To address this issue, it is critical that runtime defenses are in place to minimize exploitation risk. In this paper, we present STING, a novel runtime defense mechanism against smart contract exploits. The key idea is to instantly synthesize counterattack smart contracts from attacking transactions and leverage the power of Maximal Extractable Value (MEV) to front-run attackers. Our evaluation with 62 real-world recent exploits demonstrates its effectiveness, successfully countering 54 of the exploits (i.e., intercepting all the funds stolen by the attacker). In comparison, a general front-runner defense could only handle 12 exploits. Our results provide a clear proof-of-concept that STING is a viable defense mechanism against smart contract exploits and has the potential to significantly reduce the risk of exploitation in the smart contract ecosystem.

LocIn: Inferring Semantic Location from Spatial Maps in Mixed Reality

Habiba Farrukh, Reham Mohamed, Aniket Nare, Antonio Bianchi, and Z. Berkay Celik, Purdue University

Mixed reality (MR) devices capture 3D spatial maps of users' surroundings to integrate virtual content into their physical environment. Existing permission models implemented in popular MR platforms allow all MR apps to access these 3D spatial maps without explicit permission. Unmonitored access of MR apps to these 3D spatial maps poses serious privacy threats to users, as these maps capture detailed geometric and semantic characteristics of users' environments. In this paper, we present LocIn, a new location inference attack that exploits these detailed characteristics embedded in 3D spatial maps to infer a user's indoor location type. LocIn develops a multi-task approach to train an end-to-end encoder-decoder network that extracts a spatial feature representation for capturing contextual patterns of the user's environment. LocIn leverages this representation to detect 3D objects and surfaces and integrates them into a classification network with a novel unified optimization function to predict the user's indoor location. We demonstrate the LocIn attack on spatial maps collected from three popular MR devices. We show that LocIn infers a user's location type with 84.1% accuracy on average.
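
A schematic of the multi-task setup, a shared encoder feeding an object/surface head and a location-type head, is sketched below in PyTorch. The layer sizes, input encoding, and loss weighting are placeholders rather than LocIn's encoder-decoder architecture or its unified optimization function.

```python
# Schematic multi-task model: shared encoder, two task heads.
import torch
import torch.nn as nn

class SpatialMapNet(nn.Module):
    def __init__(self, n_points=1024, n_object_classes=10, n_location_types=6):
        super().__init__()
        # Shared encoder over a flattened 3D spatial-map representation.
        self.encoder = nn.Sequential(nn.Linear(n_points * 3, 256), nn.ReLU(),
                                     nn.Linear(256, 128), nn.ReLU())
        # Task 1: which objects/surfaces are present (multi-label).
        self.object_head = nn.Linear(128, n_object_classes)
        # Task 2: the semantic location type inferred from the same features.
        self.location_head = nn.Linear(128, n_location_types)

    def forward(self, points):                      # points: (batch, n_points, 3)
        z = self.encoder(points.flatten(1))
        return self.object_head(z), self.location_head(z)

model = SpatialMapNet()
points = torch.rand(4, 1024, 3)
obj_logits, loc_logits = model(points)
loss = (nn.functional.binary_cross_entropy_with_logits(obj_logits, torch.zeros_like(obj_logits))
        + nn.functional.cross_entropy(loc_logits, torch.zeros(4, dtype=torch.long)))
loss.backward()
print(loc_logits.argmax(dim=1))   # predicted indoor location type per sample
```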

 

About the Department of Computer Science at Purdue University

Founded in 1962, the Department of Computer Science was created to be an innovative base of knowledge in the emerging field of computing as the first degree-awarding program in the United States. The department continues to advance the computer science industry through research. U.S. News & World Report ranks Purdue CS #20 in graduate and #16 in undergraduate programs, 7th in cybersecurity, 10th in software engineering, 13th in programming languages, data analytics, and computer systems, and 19th in artificial intelligence. Graduates of the program are able to solve complex and challenging problems in many fields. Our consistent success in an ever-changing landscape is reflected in record undergraduate enrollment, increased faculty hiring, innovative research projects, and the creation of new academic programs. The increasing centrality of computer science in academic disciplines and society, and new research activities centered around data science, artificial intelligence, programming languages, theoretical computer science, machine learning, and cybersecurity, are the future focus of the department. cs.purdue.edu
