Available Research Projects

  • Using Deep Neural Networks to develop in silico neuronal
    models

  • Faculty Member
    Roy Ben-Shalom

    Description

    The goal of this project is to predict the biophysical properties of a neuron based on its electrophysiological response to stimuli. We built a deep convolutional neural network to predict the free parameters of a neuron model given its voltage response to a set of stimuli. Trainees in this project will have the opportunity to work on various stages of the machine learning process, from data generation and analysis to neural network training.

    For this project we are looking for students with strong programming skills and an interest in machine learning, optimization, statistics, and/or neuroscience to help us improve the algorithm, generate training data, and increase the accuracy of our models. https://www.biorxiv.org/content/10.1101/727974v1
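
    As an illustration of the approach described above, here is a minimal sketch (an assumption for orientation, not the lab’s actual model) of a 1-D convolutional network that regresses free parameters from a simulated voltage trace; the layer sizes, trace length, and parameter count are placeholders.

      # Minimal sketch: a 1-D CNN mapping a voltage trace to model parameters.
      import torch
      import torch.nn as nn

      class ParamRegressor(nn.Module):
          def __init__(self, n_params: int = 10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv1d(1, 16, kernel_size=11, stride=2), nn.ReLU(),
                  nn.Conv1d(16, 32, kernel_size=11, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool1d(32),
              )
              self.head = nn.Sequential(
                  nn.Flatten(), nn.Linear(32 * 32, 128), nn.ReLU(),
                  nn.Linear(128, n_params),
              )

          def forward(self, v):                 # v: (batch, 1, trace_len)
              return self.head(self.features(v))

      model = ParamRegressor()
      voltage = torch.randn(8, 1, 4000)          # stand-in for simulated responses
      predicted_params = model(voltage)          # (8, n_params)
      loss = nn.functional.mse_loss(predicted_params, torch.randn(8, 10))
      loss.backward()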

    Requirements
    Background in deep learning and an interest in learning neuroscience.

    To apply, please email rbenshalom@ucdavis.edu.

  • Video-based quantification of dexterous finger movement kinematics using computer vision and deep learning techniques

  • Faculty Member
    Wilsaan Joiner and Karen Moxon

    Description
    This project will apply computer vision and deep learning techniques to analyze the dexterous finger movements of nonhuman primates (rhesus macaque monkeys). The subjects are recorded while performing a task that involves retrieving food rewards from variously oriented shallow wells (i.e., the Brinkman Board task). The MS student is expected to assist in streamlining the analysis of the videos and in applying DeepLabCut, a deep learning toolset that allows markerless tracking of multiple locations across video frames. The information obtained from movement tracking will then be used to quantify several features of finger movement (separation, extension, and preshaping) in order to provide behavioral measures that are sensitive to injury (e.g., spinal cord contusion) and treatments. Importantly, this will provide critical information for evaluating the effectiveness of novel interventions for clinical conditions that affect the motor system.
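
    For orientation, a minimal sketch of the kind of post-tracking kinematic measure this project would compute (illustrative only; it assumes fingertip coordinates have already been exported from a tracker such as DeepLabCut):

      # Minimal sketch: per-frame finger separation and peak grip aperture
      # from tracked digit-tip coordinates (synthetic data used here).
      import numpy as np

      def finger_separation(thumb_xy: np.ndarray, index_xy: np.ndarray) -> np.ndarray:
          """Euclidean distance between two tracked digit tips, per frame.
          Both inputs have shape (n_frames, 2)."""
          return np.linalg.norm(thumb_xy - index_xy, axis=1)

      def peak_aperture(separation: np.ndarray, reach_onset: int, grasp_end: int) -> float:
          """Maximum grip aperture during the reach-to-grasp window
          (a common preshaping measure)."""
          return float(separation[reach_onset:grasp_end].max())

      # synthetic coordinates standing in for tracked keypoints
      n_frames = 300
      thumb = np.cumsum(np.random.randn(n_frames, 2), axis=0)
      index = thumb + np.random.randn(n_frames, 2) * 5.0
      sep = finger_separation(thumb, index)
      print("peak aperture (px):", peak_aperture(sep, 50, 200))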

    Requirements

    Applicants should have expertise in machine learning, deep learning, and computer vision concepts, as well as ample experience with common programming languages such as C++, Python, and MATLAB.

    To apply, please email your CV and interest statement to: wmjoiner@ucdavis.edu

  • Build a human-in-the-loop psychophysics speech synthesis simulator for a brain-computer interface to restore speech

  • Faculty Member
    Sergey Stavisky, David Brandman, Lee Miller

    Description
    The UC Davis Neuroprosthetics Lab (PIs: Sergey Stavisky and David Brandman) and the Auditory Neuroengineering and Speech Recognition Lab (PI: Lee Miller) offer a collaborative Master’s project opportunity related to developing a brain-computer interface (BCI). We aim to help people who have lost their ability to speak by directly translating neural signals into a synthesized voice.

    A speech synthesis BCI will introduce a latency between a user’s desire to speak and the computer’s synthesized interpretation of their neural signals. The goal of this project is to quantify the effects of speech latency and errors on speech production. The project will begin by building a low-latency audio feedback platform. Next, this platform will be used to measure people’s ability to speak under different latency and alteration conditions. This work addresses a key design question for our speech BCI by modeling how much delay the user can tolerate between when they try to speak and when the system outputs voice.
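
    One way such a low-latency, adjustable-delay feedback loop could be prototyped is sketched below (a hedged example using the python-sounddevice callback API; the sample rate, delay, and block size are arbitrary placeholders, not the labs’ platform):

      # Minimal sketch: play the microphone signal back with a configurable delay.
      import numpy as np
      import sounddevice as sd

      FS = 44100                                  # sample rate (Hz)
      DELAY_MS = 150                              # artificial feedback latency to test
      delay_samples = int(FS * DELAY_MS / 1000)
      buffer = np.zeros((delay_samples, 1), dtype="float32")   # circular delay line
      write_idx = 0

      def callback(indata, outdata, frames, time, status):
          global write_idx
          for i in range(frames):
              outdata[i, 0] = buffer[write_idx, 0]    # emit the sample from DELAY_MS ago
              buffer[write_idx, 0] = indata[i, 0]     # overwrite it with the new sample
              write_idx = (write_idx + 1) % delay_samples

      with sd.Stream(samplerate=FS, channels=1, blocksize=256, callback=callback):
          input("Speaking now; press Enter to stop the delayed feedback...")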

    Requirements
    Candidates will need to be comfortable with programming, particularly for implementing a closed-loop experiment system. Preferred (but by no means required) skills include signal processing, speech-language pathology, neuroscience, audiology, and audio electronics.

    If interested, please send an email (CC’ing all three PIs) and include your CV and a brief explanation of why you’re interested in one of these project(s).

  • Develop better metrics of speech synthesis quality for a brain-computer interface to restore speech

  • Faculty Member
    Sergey Stavisky, David Brandman, Lee Miller

    Description
    The UC Davis Neuroprosthetics Lab (PIs: Sergey Stavisky and David Brandman) and the Auditory Neuroengineering and Speech Recognition Lab (PI: Lee Miller) offer a collaborative Master’s project opportunity related to developing a brain-computer interface (BCI). We aim to help people who have lost their ability to speak by directly translating neural signals into a synthesized voice.

    The current approaches to measuring speech BCI output quality are crude (essentially correlating the true and synthesized audio spectrograms) and don’t take into account how intelligible the output is. Without good metrics, it’s difficult to measure progress in the field. It’s also difficult to choose meaningful loss functions for training machine learning algorithms to reconstruct speech from neural activity. We anticipate that this project will start with a literature review of existing speech quality/intelligibility metrics from adjacent fields (e.g., speech-language pathology, stroke neurology, acoustic engineering) and then move on to implementing and adapting them for use with speech BCIs.
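
    For reference, the crude baseline mentioned above can be written down in a few lines (an illustrative sketch only; window sizes and the synthetic signals are placeholders):

      # Minimal sketch: Pearson correlation between log-magnitude spectrograms,
      # a weak proxy for intelligibility -- exactly the limitation described above.
      import numpy as np
      from scipy.signal import spectrogram

      def spectrogram_correlation(true_audio, synth_audio, fs=16000):
          _, _, S_true = spectrogram(true_audio, fs=fs, nperseg=256)
          _, _, S_synth = spectrogram(synth_audio, fs=fs, nperseg=256)
          n = min(S_true.shape[1], S_synth.shape[1])        # align frame counts
          a = np.log(S_true[:, :n] + 1e-10).ravel()
          b = np.log(S_synth[:, :n] + 1e-10).ravel()
          return np.corrcoef(a, b)[0, 1]

      fs = 16000
      t = np.arange(fs) / fs
      clean = np.sin(2 * np.pi * 220 * t)                    # stand-in "true" audio
      noisy = clean + 0.3 * np.random.randn(fs)              # stand-in "synthesized" audio
      print("spectrogram correlation:", spectrogram_correlation(clean, noisy, fs))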


    Requirements
    Candidates will need to be comfortable with programming, particularly for data analysis. Preferred (but by no means required) skills include signal processing, speech-language pathology, neuroscience, audiology, and audio electronics.

    If interested, please send an email (CC’ing all three PIs) and include your CV and a brief explanation of why you’re interested in one of these project(s).

  • Portable Sensor System to Assess the Health Conditions of Individuals Working in Harsh Environments

  • Faculty Member
    Cristina Davis

    Description
    This project aims to design, prototype, and test an integrated sensor platform that will record physiological data (e.g., heart rate, oxygen saturation, physical activity levels, skin temperature, and galvanic skin response) from athletes and individuals who work in harsh environments. The envisioned lightweight device will consist of several commercially available sensors and a microcontroller for physiological data acquisition and integration. A standalone, portable, and small single-board computer (e.g., Raspberry Pi or alternative) will complement the device by analyzing the extracted data with prebuilt machine learning models. The system will report data via Bluetooth to a Wi-Fi connection hub.
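
    A hedged sketch of the single-board-computer side of such a system (the serial port, packet format, and model file below are hypothetical placeholders, not the project’s design):

      # Minimal sketch: read comma-separated physiological samples from the
      # microcontroller over serial and score them with a prebuilt model.
      import joblib
      import numpy as np
      import serial    # pyserial

      model = joblib.load("fatigue_model.joblib")                        # hypothetical prebuilt model
      link = serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1)   # hypothetical port

      while True:
          line = link.readline().decode(errors="ignore").strip()
          if not line:
              continue
          try:
              # assumed packet: heart_rate, spo2, activity, skin_temp, gsr
              features = np.array([float(v) for v in line.split(",")]).reshape(1, -1)
          except ValueError:
              continue                                   # skip malformed packets
          status = model.predict(features)[0]
          print("health status:", status)                # in practice: report via Bluetooth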

    Requirements
    -Applicants from a computer science background should have solid knowledge of data structures and algorithms
    -Applicants from an electrical engineering background should know microcontroller programming and circuit design
    -Willingness to adapt to several programming languages
    -Teamwork may be required

    To apply, please email your CV and interest statement to:
    biomems.ucdavis@gmail.com.   

  • Optimal Traffic Control with Deep Reinforcement Learning-based Traffic Signal Controllers and Autonomous Vehicles

  • Faculty Member
    Chen-Nee Chuah

    Description
    Deep reinforcement learning (DRL) is a promising machine learning approach that combines artificial neural networks with reinforcement learning algorithms. DRL models have been applied to different control domains, including intelligent transportation systems. We have seen very promising results for DRL-based traffic signal controllers (TSCs) on city-level traffic flow in terms of travel delay and air pollution. In the context of autonomous vehicles (AVs), DRL can be applied to control optimization, path planning, and navigation. However, it remains an open question how these DRL-TSCs and DRL-AVs can coexist and collaborate effectively. Since AVs are great tools for traffic platooning, it will be interesting to quantify the performance of DRL-TSCs in mixed traffic (with a combination of autonomous and human-driven vehicles).
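
    For readers unfamiliar with DRL-based TSCs, a minimal sketch of one DQN-style update is shown below (the state, action space, and reward are placeholder assumptions, not the group’s implementation):

      # Minimal sketch: one DQN update for a traffic-signal agent whose state is
      # a vector of lane queue lengths and whose action is a signal phase.
      import torch
      import torch.nn as nn

      n_lanes, n_phases, gamma = 8, 4, 0.99
      q_net = nn.Sequential(nn.Linear(n_lanes, 64), nn.ReLU(), nn.Linear(64, n_phases))
      target_net = nn.Sequential(nn.Linear(n_lanes, 64), nn.ReLU(), nn.Linear(64, n_phases))
      target_net.load_state_dict(q_net.state_dict())
      optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

      # one minibatch of transitions (synthetic placeholders for a replay buffer)
      state = torch.randn(32, n_lanes)
      action = torch.randint(0, n_phases, (32,))
      reward = -torch.rand(32)                  # e.g., negative cumulative delay
      next_state = torch.randn(32, n_lanes)

      q_sa = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)
      with torch.no_grad():
          target = reward + gamma * target_net(next_state).max(dim=1).values
      loss = nn.functional.mse_loss(q_sa, target)
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()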

    Requirements
    Expertise in Python programming and machine learning libraries (NumPy, TensorFlow, Matplotlib, Pandas), ability to conduct research on intelligent systems, and knowledge of deep reinforcement learning concepts.

    If interested, please email your resume/CV to chuah@ucdavis.edu with [DRL-TSC with AV] in the subject title.

  • Security of Deep Reinforcement Learning-based Traffic Signal Controllers (TSC)

  • Faculty Member
    Chen-Nee Chuah

    Description
    The next generation of TSCs is expected to communicate with the traffic environment and learn how to behave under different traffic conditions. To this end, we have shown that traffic signals controlled with deep reinforcement learning (DRL) are effective in terms of traffic flow and air quality. However, adversarial attacks may target such edge controllers, and their impact on learning-based TSCs could have serious consequences beyond traffic congestion, such as life-threatening traffic accidents. Initial results of this project show that learning-based TSCs are vulnerable to adversarial attacks. This project extends the study further and seeks novel solutions for DRL-TSCs on a city-level San Francisco downtown network and under different learning configurations, such as different state, action, and reward definitions.
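
    As an illustration of the threat model, below is a minimal FGSM-style perturbation of an observed traffic state (a toy sketch with placeholder networks and states, not the project’s attack):

      # Minimal sketch: perturb the observed queue-length state so the chosen
      # signal phase may change, using the gradient of the Q-value.
      import torch
      import torch.nn as nn

      q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
      state = torch.randn(1, 8, requires_grad=True)          # observed queue lengths
      clean_action = q_net(state).argmax(dim=1).item()

      # push the state in the direction that lowers the Q-value of the chosen action
      loss = q_net(state)[0, clean_action]
      loss.backward()
      epsilon = 0.1                                          # attack budget
      adv_state = (state - epsilon * state.grad.sign()).detach()
      adv_action = q_net(adv_state).argmax(dim=1).item()
      print("action changed by the perturbation:", clean_action != adv_action)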

    Requirements
    Expertise in Python programming and machine learning libraries (NumPy, TensorFlow, Matplotlib, Pandas), ability to conduct research on intelligent systems, and knowledge of deep reinforcement learning concepts and the security of machine learning.

    If interested, please email your resume/CV to chuah@ucdavis.edu with [Security of DRL-TSC] in the subject title.

  • Inferring and Predicting Neural Activities for Neuroengineering

  • Faculty Member
    Zhaodan Kong

    Description
    Current brain-machine interface (BMI) technology is focused on the needs of controlling motor prostheses. In the future, however, we will see more and more devices that directly interface with human cognitive systems. The goal of this MS project is to use deep learning (DL) approaches to infer and predict neural activity pertaining to high-level cognitive processes, such as decision making and trust. Specifically, given data collected by either invasive devices, such as electrode arrays, or non-invasive devices, such as EEG, the MS student will explore different DL methods and compare their performance in predicting neural activity, such as spike counts, and behaviors, such as trust vs. distrust. The knowledge generated from this project can later be used in the treatment of neurological and psychiatric disorders as well as in augmenting cognitive functions.
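
    A minimal sketch of one such DL method is shown below (an LSTM predicting next-bin spike counts; the shapes, bin counts, and loss are illustrative assumptions, not the project’s model):

      # Minimal sketch: predict next-bin spike counts for a small electrode
      # array from a window of preceding neural activity.
      import torch
      import torch.nn as nn

      n_channels, window = 32, 50                # electrodes, time bins of history

      class SpikePredictor(nn.Module):
          def __init__(self, hidden=128):
              super().__init__()
              self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
              self.readout = nn.Linear(hidden, n_channels)

          def forward(self, x):                  # x: (batch, window, n_channels)
              out, _ = self.lstm(x)
              return self.readout(out[:, -1])    # predicted log-rates for the next bin

      model = SpikePredictor()
      history = torch.poisson(torch.full((16, window, n_channels), 3.0))   # fake spike counts
      target = torch.poisson(torch.full((16, n_channels), 3.0))
      pred = model(history)
      loss = nn.functional.poisson_nll_loss(pred, target, log_input=True)
      loss.backward()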

    Requirements
    -Experience with deep learning, particularly CNNs, RNNs, and LSTMs
    -Willingness to learn basic neuroscience
    -Willingness to work with neuroscientists and biomedical engineers

    If you are interested, please send an email to zdkong@ucdavis.edu with your resume or CV.

  • ResilientDB: Global Scale Resilient Blockchain Fabric

  • Faculty Member
    Mohammad Sadoghi

    Description
    Sadoghi’s research group focuses on all facets of building secure, massive-scale data management systems. We aim to pioneer a next-generation resilient data platform at scale: a distributed ledger centered around a democratic and decentralized computational model, named ResilientDB Blockchain Fabric.

    At the heart of blockchain lies the problem of consensus, which is at the forefront of our research and development in ResilientDB. Currently, we are investigating many exciting directions such as speculative consensus, concurrent & parallel consensus, hardware-accelerated consensus (e.g., SGX or RDMA), view-change-less consensus, reconfigurable consensus, hybrid consensus (e.g., BFT + PoS + PoW), and a wide array of sharding and cross-chain protocols.

    To learn more, we invite you to review the ResilientDB Blog, Wiki, Codebase, Hands-on Tutorial, Publications, and Roadmap. We are seeking creative students who aim to be independent thinkers with controversial ideas. Funding may be available for exceptional students upon demonstration of solid progress.

    Requirements
    -Strong C/C++ skills are a must
    -Experience with operating systems, distributed systems, database transactions, concurrency controls, multi-threaded programming, and synchronization would be terrific

    To join us at ExpoLab, please email your resume to Prof. Sadoghi, msadoghi@ucdavis.edu.

  • SSL-Pathology: Semi-supervised Learning in Pathology Detection of Alzheimer's Disease

  • Faculty Member
    Chen-Nee Chuah

    Description
    While supervised learning (SL) techniques such as convolutional neural networks achieve promising results on medical images, procuring a sufficiently large dataset with annotations is labor-intensive, especially for gigapixel pathology images. To circumvent the need for large labeled datasets, semi-supervised learning (SSL) is a potential approach. Amyloid-beta plaques are hallmarks of Alzheimer's disease. A supervised detection model has been established to classify three types of plaques; however, it relies on more than 50,000 annotated images for training. In this project, we will apply SSL to this problem and explore the upper bound of SSL in relieving the reliance on a large labeled dataset.
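
    For orientation, one common SSL strategy, pseudo-labeling, is sketched below (illustrative only; the project may pursue other SSL methods, and the classifier, patch size, and threshold here are placeholders):

      # Minimal sketch: confident predictions on unlabeled plaque patches are
      # added to the training loss alongside the labeled data.
      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 3))   # placeholder classifier, 3 plaque types
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
      threshold = 0.95                                                 # confidence cutoff

      labeled_x, labeled_y = torch.randn(16, 3, 64, 64), torch.randint(0, 3, (16,))
      unlabeled_x = torch.randn(64, 3, 64, 64)

      sup_loss = nn.functional.cross_entropy(model(labeled_x), labeled_y)

      with torch.no_grad():
          probs = torch.softmax(model(unlabeled_x), dim=1)
          conf, pseudo_y = probs.max(dim=1)
          keep = conf > threshold                 # keep only confident pseudo-labels
      unsup_loss = (nn.functional.cross_entropy(model(unlabeled_x[keep]), pseudo_y[keep])
                    if keep.any() else torch.tensor(0.0))

      loss = sup_loss + 1.0 * unsup_loss
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()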

    Requirements
    Expertise in machine learning concepts, Docker, and Python programming, including scikit-learn, Pandas, and PyTorch/TensorFlow.

    If interested, please email your resume/CV to chuah@ucdavis.edu with [SSL] in the subject title.

  • CeDP:  Computational Efficiency of Deep Learning in Digital Pathology

  • Faculty Member
    Chen-Nee Chuah

    Description
    While supervised learning (SL) techniques such as convolutional neural networks achieve promising results on pathology images, their computational cost remains high due to the gigapixel resolution of pathology images. To make deep learning more practical in digital pathology, it is necessary to comprehensively study the tradeoff between performance and complexity. In this project, we will study how to deploy efficient deep learning models on edge devices for pathology image analysis and how to remove unnecessary computation from recent state-of-the-art deep learning networks. We will also benchmark the complexity of different models on our pathology datasets.
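
    A minimal sketch of the kind of complexity benchmarking involved (illustrative; the model choice and patch size are placeholders):

      # Minimal sketch: measure parameter count and CPU inference latency for a
      # candidate patch classifier.
      import time
      import torch
      import torchvision

      model = torchvision.models.resnet18(num_classes=3).eval()
      n_params = sum(p.numel() for p in model.parameters())

      patch = torch.randn(1, 3, 224, 224)          # one pathology image patch
      with torch.no_grad():
          for _ in range(5):                       # warm-up runs
              model(patch)
          start = time.perf_counter()
          for _ in range(20):
              model(patch)
          latency_ms = (time.perf_counter() - start) / 20 * 1000

      print(f"params: {n_params / 1e6:.1f}M, CPU latency: {latency_ms:.1f} ms/patch")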

    Requirements
    Expertise in machine learning concepts, Docker, and Python programming, including scikit-learn, Pandas, and PyTorch/TensorFlow.

    If interested, please email your resume/CV to chuah@ucdavis.edu with [CeDP] in the subject title.

  • Efficient private aggregate queries on sensitive databases with perfect security

  • Faculty Members
    Chen-Nee Chuah and Zubair Shafiq

    Postdoc Mentor
    Syed Mahbub Hafiz

    Description
    Private information retrieval (PIR) is a powerful cryptographic primitive that solves the ubiquitous problem of safeguarding the privacy of users’ (reading) access patterns to remote, untrusted databases hosted in the cloud. This project plans to extend state-of-the-art (vector-matrix model-based) information-theoretic private information retrieval (IT-PIR) schemes that allow expressive (SQL-like) queries. This novel extension aims to build perfectly secure systems that enable aggregate queries (e.g., COUNT and SUM) without compromising efficiency. With this new feature added to PIR systems, a clinician could obliviously ask a cloud-managed, sensitive hospital database how many patients were admitted with a given symptom/disease in any particular year. This project will employ GPU-enabled parallel computing to evaluate the efficiency of the proposed IT-PIR methodologies.
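
    For intuition, the classic two-server information-theoretic PIR idea behind the vector-matrix model can be sketched in a few lines (a toy over GF(2), not the project’s scheme): each server sees only a uniformly random query vector, yet the client recovers the record it wants.

      # Minimal sketch: two-server XOR-based IT-PIR over a bit-matrix database.
      import numpy as np

      n_records, record_bits = 8, 16
      database = np.random.randint(0, 2, size=(n_records, record_bits), dtype=np.uint8)
      wanted = 5                                      # index the client is hiding

      # client: split the standard basis vector e_wanted into two random shares
      share_a = np.random.randint(0, 2, size=n_records, dtype=np.uint8)
      share_b = share_a.copy()
      share_b[wanted] ^= 1                            # share_a XOR share_b = e_wanted

      # each server computes the vector-matrix product of its share with the database
      reply_a = share_a @ database % 2
      reply_b = share_b @ database % 2

      # client XORs the replies to recover the wanted record
      recovered = (reply_a + reply_b) % 2
      assert np.array_equal(recovered, database[wanted])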

    Requirements
    Programming knowledge: C++ (including STL), Nvidia CUDA C++, Python, Shell scripting. Research interests: privacy-preserving systems, practical/applied cryptography.

    If interested, please send your resume/cv to Dr. Hafiz at shafiz@ucdavis.edu.

  • Composability of secure multi-party computation (MPC)-based privacy-preserving machine learning for the next-generation edge computing

  • Faculty Members
    Chen-Nee Chuah and Zubair Shafiq

    Postdoc Mentor
    Syed Mahbub Hafiz

    Description
    Next-generation (beyond-5G) network architectures demand effective distribution of privacy-preserving machine learning (PPML)-based application workloads (e.g., smart health) across heterogeneous edge nodes of the network that are closer to the user device. Secure multi-party computation (MPC)-based PPML techniques are promising candidates for meeting such composability requirements. This project explores novel mechanisms for efficiently distributing MPC-driven PPML workloads across edge nodes with heterogeneous compute and communication resources without compromising the perfect privacy guarantee. For instance, the participating parties (nodes) will perform varying amounts of work relative to each other, unlike in the conventional MPC paradigm.
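
    For intuition, a toy example of additive secret sharing, the building block behind many MPC-based PPML protocols, is sketched below (over the reals for readability; real protocols work over finite rings and handle non-linear layers with dedicated sub-protocols):

      # Minimal sketch: each edge node applies the same public linear layer to
      # its share of the private input; only recombining the partial results
      # reveals the plaintext output.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=4)                     # private feature vector
      W = rng.normal(size=(3, 4))                # public (non-sensitive) weights

      # secret-share the input: x = share_0 + share_1, each share alone is random
      share_0 = rng.normal(size=4)
      share_1 = x - share_0

      # each party computes on its share locally (linear layers commute with sharing)
      partial_0 = W @ share_0
      partial_1 = W @ share_1

      assert np.allclose(partial_0 + partial_1, W @ x)   # recombination gives the true output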

    Requirements
    Programming knowledge: C++/C, Nvidia CUDA C/C++, Python, Tensorflow/PyTorch, Shell scripting. Research interests: privacy-preserving machine learning, secure multi-party computation, distributed learning, AI-enabled edge computing.

    If interested, please send your resume/cv to Dr. Hafiz at shafiz@ucdavis.edu.

  • Augmented Reality Quadcopter Game Control
  • Faculty Member
    Nelson Max

    Description
    Professor Nelson Max is leading a team to develop a quadcopter-based augmented reality video game, in which the players pilot quadcopters “first-person”, viewing an AR game environment through a head-mounted display. The team is seeking a student to continue development of the quadcopter control system using the Robot Operating System (ROS). The student will be responsible for improving the existing control algorithm and interfacing the control algorithm to the Unity game engine to coordinate the real and virtual game experiences. The student will collaborate with other team members responsible for game design and quadcopter localization.
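
    A minimal sketch of what such a ROS control node might look like (the topic name, gains, and setpoint source below are assumptions, not the team’s code):

      #!/usr/bin/env python
      # Minimal sketch: publish velocity commands steering the quadcopter toward
      # a target position supplied by the game engine.
      import rospy
      from geometry_msgs.msg import Twist

      def run():
          rospy.init_node("ar_quad_controller")
          cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)   # hypothetical topic
          rate = rospy.Rate(20)                                         # 20 Hz control loop
          target = (1.0, 2.0, 1.5)                                      # placeholder setpoint from Unity
          position = [0.0, 0.0, 0.0]                                    # would come from localization
          kp = 0.8                                                      # proportional gain

          while not rospy.is_shutdown():
              cmd = Twist()
              cmd.linear.x = kp * (target[0] - position[0])
              cmd.linear.y = kp * (target[1] - position[1])
              cmd.linear.z = kp * (target[2] - position[2])
              cmd_pub.publish(cmd)
              rate.sleep()

      if __name__ == "__main__":
          run()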

    Requirements
    Required
    ♦   Python programming experience
    ♦   C++ programming experience
    ♦   Familiarity with Linux operating systems (Ubuntu)
    ♦   Basic familiarity with version control (Git)
    ♦   Strong skills in troubleshooting Linux software
    Preferred
    ♦   Familiarity with Robot Operating System (ROS)
    ♦   Familiarity with ArduPilot and/or PX4 autopilot firmware, MAVLink communication protocol
    ♦   Experience piloting consumer drones
    ♦   Basic familiarity with computer networking

  • Gunrock, Parallel Graph Analytics on GPUs
  • Faculty Member
    John Owens

    Description
    John Owens’ research group focuses on GPU computing and has a large project on parallel graph analytics called Gunrock. We have a large need for application development on Gunrock, writing interesting graph applications that use our framework (we have a long list of these from our funding agency). We would hope to train you in GPU computing and in using our framework. This could potentially lead to MS thesis opportunities but also could be a shorter project with an option of switching to another group if interested. We need talented students who can learn quickly and work independently. Funding may be available.

    Requirements
    ♦   Strong C/C++ skills are a must
    ♦   Experience with parallel computing would be terrific, but not required

  • Multiplayer Augmented Reality Quadcopter Game System
  • Faculty Member
    Nelson Max

    Description
    Dr. Max is looking for more master’s students to help with our multiplayer augmented reality quadcopter game system. The system includes, for each game player, a 3DR Solo quadcopter with a mounted GoPro 4 Black video camera, a computer with an NVIDIA GTX 1070 GPU, Oculus Rift VR goggles, Oculus Touch hand-held controllers for flying the drone, and wireless communication links. Using markers in the environment, as seen by the video cameras, the computers determine the position of each quadcopter, and use the inertial sensors and a quadcopter physics simulation to extrapolate to future frames to decrease VR latency. The games are written in Unity. The quadcopter positions are communicated to the master computer and used in the game physics calculations. The master computer receives the user control signals and either sends them directly to the quadcopter or modifies them according to the game physics and to avoid collisions. This centralized master server also contains the game logic, such as scoring.
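
    The latency-hiding extrapolation mentioned above can be illustrated with a constant-velocity prediction (a toy sketch of the math, not the team’s implementation):

      # Minimal sketch: extrapolate a quadcopter's tracked position to the time
      # the frame will actually be displayed.
      import numpy as np

      def extrapolate_position(p_prev, p_curr, t_prev, t_curr, t_display):
          """Linearly extrapolate position to the display time of a future frame."""
          v = (np.asarray(p_curr) - np.asarray(p_prev)) / (t_curr - t_prev)
          return np.asarray(p_curr) + v * (t_display - t_curr)

      # two marker-based position fixes 1/30 s apart, rendered ~50 ms later
      p = extrapolate_position([0.0, 0.0, 1.0], [0.05, 0.0, 1.0],
                               t_prev=0.000, t_curr=0.033, t_display=0.083)
      print(p)   # position predicted at display time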

    The video camera is fixed on the drone with a wide-angle lens, so that the part of the image appropriate to the user’s head position and orientation can be selected. The computer graphics (CG) augmented elements are added in stereo onto the real video background, also accounting for the user’s head motion. Thus the game players feel as if they were looking through the windows of a real aircraft at the actual environment in which they are flying. We are using the Oculus Rift software development environment, which allows the video input and computer graphics elements to be supplied on separate layers, with different update rates and motion extrapolation parameters. Using the known quadcopter positions, the images of the other quadcopters in the video background can be covered up with stereo CG models, so that they also appear in 3D.

    Our initial game was a pong-like paddle-ball game, with a paddle at each quadcopter and a virtual ball, which we hope to replace with a third quadcopter. There are game displays showing top-down and side views, either on the cockpit dashboard or in a heads-up display on its window, and sound effects when the ball is hit by a paddle or hits the walls, floor, or ceiling of the game space. Our second game was a maze racing game, where two players start at opposite corners of a two-level maze-like track and attempt to overtake each other.

    We are now developing a shooting game, where each player has a gun to shoot opponents, and the controlling computer decides when an opponent has been hit, adding appropriate graphics like fire. The projectiles are shown in stereo CG. When a player’s quadcopter has been disabled, the computer will take control of its flight and bring it to a safe landing. We are also evolving the paddle-ball game into a 3D soccer game, with goal areas on two opposite walls, which will light up when there is a goal.

    Aspects of the system development that could lead to Master’s projects include:

    The computer vision system for Simultaneous Localization and Mapping (SLAM)
    The control of the quadcopter, including anticipating and preventing collisions
    Creating new games, for example, 3D billiards

    Requirements
    ♦   Continuing or admitted Master’s student in the graduate program in Computer Science