Available Research Projects

  • VitalCue
  • Faculty Member
    Dipak Ghosal

    Description
    Modern smartphones capture a myriad of kinetic (movement-based) biometric data such as step count, distance, speed, step length, and walking asymmetry, as well as behavioral data such as the number and duration of phone calls made and received, the amount of screen time, and the number of texts sent and received. All of this biometric data is readily accessible. We know that movement is directly affected by physical parameters including a person’s weight and musculoskeletal, respiratory, and cardiovascular health. Additionally, both movement and behavior are influenced by mental acuity, and their collective deterioration results in frailty. While a single set of biometric measurements is unlikely to be useful for assessing an individual patient, changes in the trend over a longer measurement period may be very clinically significant.
    We propose analyzing the kinetic and behavioral data already captured by smartphones to establish baseline activity levels for patients with chronic medical conditions such as end-stage renal disease requiring either hemodialysis or peritoneal dialysis, and end-stage liver disease. We will then test whether deviations from baseline activity identified by machine learning algorithms correspond to clinically significant changes in patients’ health.
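
    The deviation-from-baseline idea can be sketched with a toy example. This is illustrative only (the project will develop more sophisticated ML models); it computes a trailing baseline over daily step counts and flags days whose z-score against that baseline exceeds a threshold:

```python
import numpy as np

def baseline_deviation(series, window=14, threshold=3.0):
    """Flag days that deviate from a rolling baseline.

    A toy stand-in for the project's anomaly detection: the baseline is a
    trailing mean/std over `window` days, and a day is flagged when its
    z-score against that baseline exceeds `threshold`.
    """
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        base = series[t - window:t]
        mu, sigma = base.mean(), base.std()
        if sigma > 0 and abs(series[t] - mu) / sigma > threshold:
            flags[t] = True
    return flags

# Synthetic daily step counts: a stable baseline, then a sharp decline.
rng = np.random.default_rng(0)
steps = np.concatenate([rng.normal(8000, 300, 30), rng.normal(3000, 300, 5)])
flags = baseline_deviation(steps)
print(flags.nonzero()[0])  # indices of days flagged as deviating
```

    In practice the project's models would handle multiple modalities jointly and distinguish clinically meaningful deviations from benign ones, which a per-signal threshold cannot do.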

    Requirements
            - The candidate will use their expertise to make scientific contributions to one or more research studies and will contribute to research and development in the use of kinetic and behavioral data for continuity of care. Under the general direction of the Principal Investigators, the candidate will collaborate with other research personnel as part of a project that seeks to collect, consolidate, and analyze research participants’ kinetic and behavioral data from their smartphones.
            - The candidate will review the academic literature on machine learning (ML) and AI techniques for prediction and anomaly detection in multi-modal time series data.
            - The candidate will be actively engaged in preparing data for, and validating, AI models designed for this project, and will work with collaborators in the laboratory as well as from other departments at UC Davis.
            - The candidate will draft tables and figures for presentations, scientific reports, publications, and proposals, and will participate in drafting and revising manuscripts as needed.
            - The candidate will assist in deploying and maintaining the app that will collect kinetic and behavioral data from the research participants' smartphones.

    Basic qualifications (required at time of application)
    •    A minimum of a Bachelor's degree in Computer Science.
    •    Skilled in ML and AI, with knowledge of ML/AI techniques for time series data.
    •    Fluent in PyTorch and other ML/AI tools.
    •    Fluent with DevOps and MLOps.
    •    Familiarity with literature review, data collection, and data consolidation.

    Additional qualifications (required at time of start)
    •    Evidence of prior ML/AI research and development experience.
    •    Evidence of professional competence and activity.
    •    Evidence of strong organizational skills and attention to detail.
    •    Evidence of excellence in communication skills with proficiency in both written and verbal English.
    •    Evidence of ability to work both independently and as part of a team.
    •    Familiarity with organizing data for the training and validation of AI models.
    •    Familiarity with research on time series data.

    How to Apply: Please email the following to dghosal@ucdavis.edu

    1.    Subject line of the email must include the text “Application for VitalCue” (without the quotes)
    2.    Include a CV, a transcript of coursework, and a 1-page (max) write-up on prior experience with relevant ML projects

  • Early Wildfire Detection Using Multimodal Imaging
  • Faculty Member
    Zhaodan Kong

    Description
    The goal of this project is to develop and implement deep learning algorithms for early wildfire detection using multimodal imaging (RGB + thermal infrared). With the expansion of the wildland–urban interface (WUI) and the impacts of climate change, wildfires have become one of the most pressing natural hazards in the United States, particularly in California. Early detection is critical for enabling rapid response, targeted firefighting efforts, and timely evacuation protocols that reduce risks to lives, property, and ecosystems.

    Our lab is developing an engineering solution that leverages multiple uncrewed aerial systems (UAS) equipped with both RGB and thermal cameras to provide fast and reliable fire detection and tracking. The focus of this MS project is on multimodal perception: developing algorithms that integrate and fuse RGB and thermal imagery for improved early wildfire detection. The student will design, implement, and evaluate machine learning models (e.g., CNNs, object detection architectures, multimodal fusion methods) using large-scale RGB + thermal datasets. If time permits, the algorithms may also be validated in field settings with UAS flights over prescribed burns or other controlled fire scenarios.
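
    As a rough illustration of one multimodal fusion strategy (early fusion), aligned RGB and thermal frames can be concatenated channel-wise before a shared CNN backbone. This is a sketch only, not the lab's architecture; the toy classifier head stands in for a real detector such as a YOLO-style model:

```python
import torch
import torch.nn as nn

class EarlyFusionDetector(nn.Module):
    """Minimal early-fusion backbone: RGB (3 ch) and thermal (1 ch) frames
    are concatenated channel-wise and passed through a small CNN. A sketch
    only; a real system would use a proper detection head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, rgb, thermal):
        # Assumes the thermal frame is already registered/aligned to the RGB frame.
        x = torch.cat([rgb, thermal], dim=1)  # (B, 4, H, W)
        return self.head(self.features(x).flatten(1))

model = EarlyFusionDetector()
rgb = torch.randn(2, 3, 64, 64)       # batch of 2 RGB frames
thermal = torch.randn(2, 1, 64, 64)   # matching thermal frames
logits = model(rgb, thermal)
print(logits.shape)  # torch.Size([2, 2])
```

    Mid- and late-fusion variants (separate per-modality backbones fused at the feature or decision level) are natural alternatives a student could compare against this baseline.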

    In addition, for students interested in applied implementation, there is an opportunity to explore edge computing aspects of wildfire detection, such as model compression, quantization, and deployment on resource-constrained UAS platforms.

    This project can be scoped as either an MS project or an MS thesis, depending on the student’s interest and goals.

    Requirements

    Technical background:
    - Strong foundation in machine learning and deep learning (e.g., PyTorch, TensorFlow)
    - Experience in computer vision and object detection (e.g., YOLO, Faster R-CNN, DETR)
    - Familiarity with multimodal data fusion (RGB + thermal): preprocessing, sensor alignment, training/evaluating detection models
    - Experience with data cleaning, annotation, augmentation, and large-scale dataset management
    - Exposure to model optimization, deployment, or integration into pipelines is a plus

    Programming skills:
    - Proficiency in Python
    - Experience with large datasets, GPU/Colab environments, Docker, and Git/version control

    Experimentation and reproducibility:
    - Ability to establish baselines, track experiments, and manage results (e.g., MLflow, Weights & Biases)
    - Commitment to reproducible research with clean, documented code

    Preferred (nice to have):
    - Background in multimodal learning, sensor fusion, or remote sensing
    - Experience with UAS/drone platforms and applied research in wildfire detection or disaster response

    What You Will Gain
    - Hands-on experience applying state-of-the-art multimodal deep learning methods to a high-impact real-world problem
    - Opportunities to collaborate with an interdisciplinary team working on UAVs, sensing, and AI for disaster response
    - Optional experience with edge AI deployment and real-time systems for students interested in embedded ML
    - Potential contributions to field experiments, open datasets, and publishable research outcomes

    How to Apply: Please email the following to zdkong@ucdavis.edu:

    - A brief statement of interest (1–2 paragraphs) describing your background and motivation
    - Your CV/resume
    - Unofficial transcript

  • Designing GPU-Efficient Algorithms for LLM Inference
  • Faculty Member
    John Owens

    Description
    The boom of LLM inference serving has exacerbated inefficiencies in GPU utilization. The self-attention layer of Transformers becomes heavily memory bound during auto-regressive inference (one token at a time). This computational pattern leaves the GPU idle much of the time, waiting for large activation tensors (the KV cache) to load from memory.

    The purpose of this project is to help research and productize new linear algebra techniques that compress these large activations. Our goal is to accelerate inference request latency and save memory space, while preserving model accuracy on industry standard benchmarks. We are designing custom GPU kernels that achieve a speedup by operating on these compressed activation tensors. This is all wrapped inside of a drop-in PyTorch layer, with no extra training required. The work itself is a co-design of the full DL software stack, from PyTorch frameworks all the way down to GPU assembly code. Be prepared to get your hands dirty!
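
    To see why these activations dominate, consider the KV cache size for a hypothetical 7B-scale configuration (32 layers, 32 KV heads, head dimension 128, fp16; all numbers are assumptions for illustration). Every generated token must stream this entire cache from memory:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Bytes held by the KV cache: 2 tensors (K and V) per layer, each of
    shape (batch, n_kv_heads, seq_len, head_dim)."""
    return 2 * n_layers * batch * n_kv_heads * seq_len * head_dim * dtype_bytes

# Hypothetical 7B-like config, fp16, one sequence of 4096 tokens:
size = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                      seq_len=4096, batch=1)
print(size / 2**30, "GiB")  # 2.0 GiB streamed per generated token
```

    Compressing these tensors shrinks both the memory footprint and the bytes streamed per token, which is the latency lever this project targets.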

    What you can expect:
    - Learn about industry challenges and constraints around LLM inference serving.
    - Contribute to building out our inference framework support.
    - Run and manage hyper-parameter sweeps across language benchmarks and models to validate speed and quality. 
    - Learn state-of-the-art techniques for writing GEMM GPU kernels.
    - Help with profiling and debugging performance issues, iterating on GPU kernel design.

    Requirements
    - Fundamentals of machine learning, deep learning, and linear algebra.
    - Basic understanding of modern GPU architectures.
    - Some familiarity with writing and profiling GPU kernels (CUDA/ROCm C++).
    - Experience using PyTorch for inference on large datasets.
    - (Preferred) Knowledge of the Transformer architecture and the uses of various types of attention mechanisms.
    - (Preferred) Experience working with heavily templated C++ codebases.


    How to Apply: Send an email to Cameron Shinn at ctshinn@ucdavis.edu with "MS Project" in the subject line. Include (1) a statement of interest, (2) a brief summary of your qualifications and (3) an attached copy of your CV or resume.

  • Quantitative Biology computer labs conversion
  • Faculty Member
    Mark Goldman

    Description
    The undergraduate quantitative biology courses MAT/BIS 27A (Linear Algebra with Applications to Biology), MAT/BIS 27B (Differential Equations with Applications to Biology), and MAT/BIS 107 (Probability and Stochastic Processes with Applications to Biology) are part of a nation-leading effort to redesign biological science and bioengineering training to tightly integrate computational and mathematical modeling into core post-calculus mathematics courses. A set of weekly computational laboratories for these three courses was originally written in MATLAB but needs to be converted to Python, reflecting the switch by many engineering majors to Python as the new standard programming language. The project is to perform this conversion in a manner that makes the resulting Python code approachable to a student who starts the course with no prior programming experience. The project will additionally benefit graduate students interested in biotechnology and bioinformatics, as the project team will learn classic problems and modeling approaches in quantitative and computational biology.
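
    To give a flavor of the conversion task, here is a hypothetical MATLAB lab snippet (logistic population growth, a classic quantitative-biology model) alongside a beginner-friendly NumPy translation; the actual lab code will differ:

```python
# Hypothetical MATLAB original:
#   N(1) = 10; r = 0.5; K = 1000; dt = 0.1;
#   for t = 1:999, N(t+1) = N(t) + dt*r*N(t)*(1 - N(t)/K); end
# Python/NumPy translation, written for students with no prior programming
# experience (Euler steps for the logistic equation dN/dt = r*N*(1 - N/K)):
import numpy as np

r, K, dt = 0.5, 1000.0, 0.1   # growth rate, carrying capacity, time step
N = np.zeros(1000)            # population at each time step
N[0] = 10.0                   # note: Python indexes from 0, MATLAB from 1
for t in range(999):
    N[t + 1] = N[t] + dt * r * N[t] * (1 - N[t] / K)

print(round(N[-1]))           # 1000: population approaches carrying capacity K
```

    Beyond mechanical translation, the conversion must also address idiom differences (0- vs. 1-based indexing, plotting libraries, vectorization) in a way a first-time programmer can follow.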

    Requirements
    Prior coursework in linear algebra, differential equations, and probability.

    How to Apply: Please contact Prof. Mark Goldman (msgoldman@ucdavis.edu)

  • New Algorithm for Explainable AI
  • Faculty Member
    Xin Liu

    Description
    The project aims to develop a new post-hoc explainability algorithm for deep learning models, specifically for computer vision applications. The idea revolves around enhancing the existing and widely used method of "Integrated Gradients". 
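
    For reference, Integrated Gradients attributes f(x) − f(baseline) to input features by integrating the model's gradient along a straight-line path from a baseline to the input. A minimal sketch on a toy function with an analytic gradient (a real application would obtain gradients via autograd on a vision model):

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=200):
    """Integrated Gradients: attribute f(x) - f(baseline) to input features
    by integrating gradients along the straight-line path from baseline to x,
    approximated here with a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy model f(x) = sum(x**2), whose gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_fn, x, baseline)
print(attr)        # ≈ [1. 4. 9.]
print(attr.sum())  # ≈ f(x) - f(baseline) = 14 (the completeness axiom)
```

    The completeness check in the last line (attributions summing to the output difference) is a useful sanity test for any enhancement the project develops on top of the base method.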

    Requirements
    Applicants should have expertise in machine learning, deep learning, and basic computer vision concepts, as well as ample experience with Python and PyTorch.

    How to Apply: Fill out the Google form: https://forms.gle/yv9u1mhFq8XJ5xkc7

  • Deep Networks to Decode Brain Signals in Listeners with Cochlear Implants
  • Faculty Member
    Lee Miller

    Description
    The Speech Neuroengineering and Cybernetics Lab (PI: Lee Miller) in collaboration with the Cognitive Neurolinguistics Lab (PI: David Corina) and Dr. Doron Sagiv (Otolaryngology / Head & Neck Surgery) has a research project investigating speech perception in listeners with cochlear implants, including children born deaf as well as very old adults.

    Cochlear implants are the most widespread and successful neural prosthesis ever, with over a million users worldwide. They can restore speech perception even in children born deaf. We use EEG to study how children’s brains change and improve as they learn to hear with an implant. One of the greatest challenges for clinicians and researchers, however, is that the implant itself adds electrical noise, thereby obscuring the brain signals. The overall goal of this project is to develop a deep neural network to separate neural signals from cochlear implant noise. This will have a profound effect on the design and clinical evaluation of the implants. This project will prepare candidates for careers in domains such as medical devices, consumer tech, data science, brain-computer interfaces, and many more.
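
    One plausible shape for such a network is a 1-D convolutional denoising autoencoder that maps implant-contaminated EEG epochs to clean estimates. This is a sketch under assumed sizes (32 channels, 256-sample epochs), not the lab's actual model:

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Sketch of the separation idea: a 1-D conv autoencoder trained to map
    implant-contaminated EEG epochs to clean EEG. Channel counts, kernel
    sizes, and depths are placeholders."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(128, 64, kernel_size=8, stride=2, padding=3), nn.ReLU(),
            nn.ConvTranspose1d(64, channels, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.decoder(self.encoder(x))

model = DenoisingAutoencoder()
noisy = torch.randn(4, 32, 256)   # 4 epochs, 32 EEG channels, 256 samples each
clean_est = model(noisy)
# In training, the target would be paired clean (implant-free) EEG:
loss = nn.functional.mse_loss(clean_est, noisy)
print(clean_est.shape)  # torch.Size([4, 32, 256])
```

    The hard part of the project is not the architecture but obtaining training targets and validating that the recovered signals are genuinely neural, which is where blind source separation and data augmentation experience would help.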

    Requirements
    - Candidates will need to be comfortable with training and testing deep neural networks (e.g., with PyTorch) and with basic signal processing.
    - The expected deliverable will be a first-author, peer-reviewed journal publication or conference proceeding, so a demonstrated ability to write well is also required. 
    - Specific skills may include any of the following (helpful but not required): Time series analysis, machine learning, data augmentation, blind source separation, and working with autoencoders.

    How to Apply: If interested, please send an email to leemiller@ucdavis.edu with subject line “MS Project…” including a brief explanation of why you’re interested in this project, your resume, unofficial transcript, graduation timeline, details on programming abilities, time available (hours/week now, over summer, and through next year), and one example of a report that you have written on your own (for a class or project).

  • Prediction-Oriented Methods in Handling Missing Data
  • Faculty Member
    Norm Matloff, Emeritus

    Description
    Many, probably most, real-world datasets contain missing values. A wealth of methods has been developed to deal with this problem, but few if any are prediction-oriented, and not many are suitable for machine learning. Here we will develop novel approaches to this problem.

    A software package will be developed, and a paper written that explores the efficiency of the methods. The software will be written in R, with Python interfaces.
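
    For context, one simple prediction-oriented tactic from the existing literature is to impute each column and append missingness indicators, letting a downstream model learn from the missingness pattern itself; the project aims to go beyond such baselines. A minimal Python sketch (the project's package itself will be written in R):

```python
import numpy as np

def impute_with_indicators(X):
    """Mean-impute each column of X (NaN marks missing) and append a 0/1
    missingness indicator per column, so a downstream predictive model can
    exploit the missingness pattern. Illustrative only."""
    X = np.asarray(X, dtype=float)
    mask = np.isnan(X)
    means = np.nanmean(X, axis=0)          # per-column means over observed values
    X_imp = np.where(mask, means, X)       # fill gaps with the column mean
    return np.hstack([X_imp, mask.astype(float)])

X = np.array([[1.0, np.nan],
              [2.0, 5.0],
              [np.nan, 7.0]])
Z = impute_with_indicators(X)
print(Z)  # gaps filled with column means 1.5 and 6.0; last 2 columns flag missingness
```

    A prediction-oriented method would be judged on downstream predictive accuracy rather than on how faithfully it recovers the missing values, which is a different criterion than most imputation literature uses.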

    Requirements
    Some background in linear models and machine learning methods. Previous exposure to R is nice but not required; strong coding skills, including debugging, are a must.

    How to Apply: Please contact nsmatloff@ucdavis.edu.

  • Machine Learning Assisted Gamification for Education
  • Faculty Member
    Setareh Rafatirad

    Description
    This project aims at developing a machine learning assisted gamification framework to promote equity and inclusion in education.

    Requirements
    Applicants need to have experience with Python programming and basic machine learning experience.

    How to apply: Please contact Prof. Setareh Rafatirad (srafatirad@ucdavis.edu) and send your resume, transcript, and the reason for choosing this research topic.

  • Python programming for physics modeling
  • Faculty Member
    Emilie Roncali

    Description
    The Roncali lab in the Department of Biomedical Engineering at UC Davis (https://roncalilab.engineering.ucdavis.edu/) is looking for a highly motivated graduate student (M.Sc.) with an interest in medical physics and AI programming. Our lab develops physics simulations and AI-based models (e.g., GANs) that need to be refactored in Python, specifically for GPU computing.

    Requirements
    A strong background in computer science and advanced programming skills (Python) are required. The candidate should be familiar with MATLAB and C++. The student will work closely with the postdoctoral researchers in the lab to translate their code into Python, implementing good programming practices.

    How to Apply: Qualified candidates can apply by sending their CV and a short statement of research interests to Dr. Emilie Roncali (eroncali@ucdavis.edu).

  • Scientific software development for DNA nanotechnology
  • Faculty Member
    David Doty

    Description
    The project will involve:

    1. scientific software development, for instance on scadnano (https://github.com/UC-Davis-molecular-computing/scadnano#readme) for structural DNA nanotech design, and nuad (https://github.com/UC-Davis-molecular-computing/nuad#readme) for DNA sequence design.

    2. (optional for MS project) Algorithmic and modeling research in support of DNA sequence design.

    3. (optional for MS project) Collaborations with partner institutions on wet-lab experiments in nucleic acid strand displacement and self-assembly to tune the modeling and design software.

    Requirements
    Background in computer science/computer engineering/software engineering (either through a formal degree, or experience with programming)

    How to apply:
    Contact David Doty at doty@ucdavis.edu and indicate your background with software development.

  • Analysis and Visualization of Unstructured Climate Data
  • Faculty Member
    Paul Ullrich

    Description
    Professor Paul Ullrich is leading a team to develop tools for analysis and visualization of climate data, particularly global, unstructured climate datasets defined in spherical geometry. These tools are widely employed throughout the climate science community, including at the U.S. Department of Energy, the National Oceanic and Atmospheric Administration, and the National Center for Atmospheric Research. Interested students can work with Prof. Ullrich and his team to develop new visualization or analysis capabilities in C++ or Python. Our core software repositories for this project can be found at:
    https://github.com/SEATStandards/ncvis
    https://github.com/UXARRAY/uxarray 

    Requirements
    - If interested in visualization of climate data, experience with C++
    - If interested in analysis of climate data, experience with Python
    - Familiarity with Linux operating systems
    - Basic familiarity with version control (Git)

    How to apply:
    If interested, email Prof. Paul Ullrich (paullrich@ucdavis.edu)

  • Augmented Reality Quadcopter Project
  • Faculty Member
    Nelson Max 

    Description
    Emeritus Professor Nelson Max is looking for more Master’s students to help with a multiplayer augmented reality quadcopter game system. The system includes, for each game player, a Solo 3DR quadcopter with a mounted ZED2 stereo video camera, a computer with an NVIDIA GTX 1070 GPU, Oculus Rift or Quest VR goggles, Oculus Touch handheld controllers for flying the drone, and wireless communication links. The computers perform Simultaneous Localization and Mapping (SLAM) on feature points in the environment, as seen by the video cameras, combined with data from inertial sensors on the quadcopters, to compute their 3D positions. The quadcopter positions are communicated to the master computer and are used in the game physics calculations. The master computer receives the user flight control signals and either sends them directly to the quadcopters or modifies them according to the game physics and to avoid collisions. The games are written in Unity.

    The video camera has a wide angle lens so that part of the video image can be displayed on the goggles, appropriate to the user's head position and orientation. The computer graphics augmented reality elements are added in stereo onto the real video background, also accounting for the user head motion. Thus the game players feel as if they were looking through the windows of a real aircraft at the actual environment in which they are flying.

    Our initial game was a pong-like paddleball game, with a paddle at each quadcopter, and a virtual ball, which we hope to replace with a third quadcopter. There are game displays showing top down and side views, either on the cockpit dashboard or in a heads-up display on its window, and sound effects when the ball is hit by the paddle, or hits the walls, floor, or ceiling of the game space. Our second game was a maze racing game, where two players start at opposite corners of a 3D two-level maze like track, and attempt to overtake each other.

    Aspects of the system development which could lead to Master’s projects are:
    - The computer vision system for Simultaneous Localization and Mapping (SLAM)
    - The control of the quadcopter, including anticipating and preventing collisions
    - Integrating the different components into a playable system
    - Creating new games using the system

    How to apply:
    If interested, email Prof. Max at max@cs.ucdavis.edu.