Computer Science Welcomes Four New Faculty Members
The Department of Computer Science is excited to welcome four new faculty members in the 2021-22 academic year. Assistant professors Isaac Kim, Slobodan Mitrović and Jiawei Zhang and associate professor Hamed Pirsiavash bring new expertise in quantum computing, neural networks, graph data and computer vision, respectively, as well as experience in both academia and industry.
Learn more about each below:
Isaac Kim, Assistant Professor
Kim’s research focuses on building powerful, efficient and reliable quantum computers. To simulate and study quantum mechanical systems, such as cations and electrons orbiting an atom, researchers need a computer that thinks along the same lines.
“In order to describe these systems reliably, we need to reproduce that quantum entanglement, but on a classical computer, doing that is very hard and you need to write an enormous amount of numbers,” he said. “If you use a quantum computer, you’re not writing down all these numbers explicitly in your memory—you’re simply creating a toy version of this physical system in your quantum computer.”
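For a rough sense of the bookkeeping Kim is describing, note that writing down a general entangled state of n qubits on a classical computer takes on the order of 2^n numbers. The short sketch below is an illustration for this article, not code from Kim’s group; it simply tallies how fast that storage requirement grows.

```python
# Illustration only: a general n-qubit quantum state needs 2**n complex
# amplitudes to describe classically.
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

for n in (10, 30, 50, 100):
    amplitudes = 2 ** n
    gigabytes = amplitudes * BYTES_PER_AMPLITUDE / 1e9
    print(f"{n:>3} qubits: {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
```

At 50 qubits the tally already calls for tens of petabytes of memory, which is why the “toy version of this physical system” on a quantum computer is so appealing.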
One of the biggest challenges with current quantum computers is reliability. Quantum computations can require millions or billions of steps to solve a problem, so Kim works from both a hardware and software perspective to make sure everything works correctly and runs smoothly. Having a reliable quantum computer allows researchers to perform calculations that aren’t currently possible, which can help them better understand the universe.
Kim got into quantum computing as an undergraduate researcher because of both the challenge and that potential.
“I thought it was very cool that you could do a kind of computation that is totally different from what the current computers can do, and moreover, that you can do something useful for society with this technology,” he said.
He received his B.S. in math and physics at the Massachusetts Institute of Technology (MIT), his Ph.D. in physics at Caltech and was a postdoctoral scholar at IBM, Stanford University and the Perimeter Institute in Canada. Before joining UC Davis, Kim worked as a quantum architect at PsiQuantum and a lecturer at the University of Sydney. He feels his experience and connections in both industry and academia will uniquely position his group for fruitful collaborations.
“[Being in industry] really made me think about the bottlenecks in building a quantum computer and gave me a much better understanding of what we need to do to do something useful,” he said.
Slobodan Mitrović, Assistant Professor
Mitrović’s group focuses on the fundamental problem of developing new algorithms that help large, graph-like networks of machines work together to process data and complete tasks.
For most of computing history, calculations were done on a single machine whose hard disk held all the data. Modern computing is vastly different. A simple operation like a Google search or a social media message happens in a fraction of a second, but it involves several machines working together: a laptop sends a request to a machine that routes it to a large data center, sometimes the size of an airplane, which performs the computation and returns the results.
“These networks are huge, and if you want to solve some problems efficiently, you need fast algorithms,” he said. “Otherwise, it can run for months or years, and it’s not very practical.”
The problem is that many of the algorithms computer scientists use for these systems are still based on the assumption of “one machine per computation.” Though they are efficient in some specific scenarios, there is still a big gap between what the technology can do and the algorithmic infrastructure that runs on it, something Mitrović found out first-hand as a graduate student collaborating with Google Research.
“It seemed that everything we know dates back to 30-40 years ago,” he said. “Once I started thinking about how to implement some of the basic algorithms when you have many machines at once, it felt like there was something quite new to explore.”
He realized the need for new graph algorithms to match these modern computing techniques and keep up with big data. Developing breakthroughs, insights and approaches to close this technology gap is a formidable task, but Mitrović thinks it is crucial to advance computing. His long-term goal is to design efficient, all-purpose algorithms that help these graph-like systems of devices process data on a massive scale.
“You can think of this as some sort of competition—there’s an improvement in technology which requires an improvement in our algorithmic thinking,” he said. “New algorithms make computation faster, but then even newer technology emerges, and then we seek to improve our algorithmic understanding.”
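To make the many-machines picture concrete, here is a toy sketch (an illustration for this article, not one of Mitrović’s algorithms): each simulated machine holds only a slice of a graph’s edge list, computes node degrees for its slice, and a final merge step combines the partial results.

```python
# Toy sketch: split a graph's edges across "machines", compute partial node
# degrees locally, then merge. Real systems distribute this across data centers.
from collections import Counter

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)]  # a small stand-in graph

def local_degrees(edge_slice):
    """The work one machine does independently on its slice of the edges."""
    counts = Counter()
    for u, v in edge_slice:
        counts[u] += 1
        counts[v] += 1
    return counts

machines = [edges[0::3], edges[1::3], edges[2::3]]   # three simulated machines
partials = [local_degrees(part) for part in machines]

total = Counter()                                    # merge step
for partial in partials:
    total.update(partial)
print(dict(total))  # {0: 2, 1: 2, 2: 3, 3: 2, 4: 1}
```

Counting degrees splits up easily; the harder problems Mitrović alludes to are ones where the pieces of the graph interact and purely local work is not enough.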
Mitrović received his M.S. and Ph.D. at the Swiss Federal Institute of Technology – Lausanne and worked as a postdoctoral scholar at MIT before joining UC Davis in 2022. He’s excited to start building his research group, work with colleagues and connect with industry to make a difference.
Hamed Pirsiavash, Associate Professor
Pirsiavash studies computer vision and machine learning algorithms. These algorithms, which can recognize what’s happening in an image or video, are a critical part of modern AI systems like self-driving cars, which need to make quick, accurate and safe decisions based on data from their surroundings.
Pirsiavash’s group focuses on training these algorithms through unsupervised learning, in which the algorithm learns from data that hasn’t been labeled by a human to say, for example, “this is a cat.” One approach he’s exploring is multi-modal learning, or using one type of data to supervise another.
“Instead of annotating an image as a ‘cat’ to use in learning, you can extract frames of a video along with the soundtrack, or use images along with the surrounding text as the label,” he explained.
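One common way to turn that idea into a training signal is a contrastive objective that rewards matching pairs across modalities, for example video frames and their soundtrack. The sketch below assumes PyTorch and is illustrative only, not Pirsiavash’s specific model.

```python
# Illustrative multi-modal contrastive loss: matching frame/audio pairs are
# pulled together, mismatched pairs are pushed apart. No human labels needed.
import torch
import torch.nn.functional as F

def contrastive_loss(frame_emb, audio_emb, temperature=0.07):
    frame_emb = F.normalize(frame_emb, dim=1)
    audio_emb = F.normalize(audio_emb, dim=1)
    logits = frame_emb @ audio_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(len(frame_emb))            # i-th frame matches i-th clip
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with a batch of 8 random 128-dimensional embeddings per modality.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```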
His group also works on domain adaptation: making sure an algorithm that’s tested in one place can be deployed in another. Pirsiavash uses the example of a household assistant robot that can learn what everything is in its owner’s house despite not knowing the specific shapes and layout ahead of time.
“That kind of adaptation could be done in an unsupervised way, which means I can sell you a robot and you can put it in your house and just let it explore for a while,” he explained. “It’s going to collect some unlabeled data, and then the robot is going to modify its adaptive algorithms so that it works well in your house.”
Pirsiavash also focuses on making these algorithms robust and resistant to attacks. Attackers can design malicious patches that jam a car’s camera and intentionally blind it to pedestrians, or do something equally dangerous. By developing such attacks themselves, the group can test their systems, find the weak points and fix them.
“If you want to deploy AI, you have to make sure that that is robust, and that somebody cannot easily fool the model,” he explained.
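For a flavor of what developing attacks can look like, here is a minimal sketch of the classic fast gradient sign method, which nudges each pixel in the direction that most increases a model’s error. It assumes PyTorch and is an illustration, not the group’s actual attack.

```python
# Illustrative adversarial attack (fast gradient sign method): perturb images
# slightly so a model misreads them, then use such examples to harden the model.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)   # how wrong is the model now?
    loss.backward()
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0, 1).detach()         # keep pixel values valid
```

Patch attacks like the ones Pirsiavash mentions confine the perturbation to a small region of the image, but the test-then-fix workflow is the same.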
Pirsiavash began computer vision research as an undergraduate, tackling the difficult but important challenge of getting a computer to recognize handwritten Farsi text.
“Farsi is like cursive by definition,” he said. “The letters are connected to each other in shape, so it’s difficult to recognize those written words.”
He continued his research and earned his M.S. at Sharif University of Technology and his Ph.D. at UC Irvine. He worked as a postdoctoral scholar at MIT and an assistant professor at the University of Maryland – Baltimore County before returning to California and joining UC Davis. He looks forward to making connections in the department and with companies in Silicon Valley.
Jiawei Zhang, Assistant Professor
Zhang develops deep learning models and neural networks that can learn from graph data, which describes networks of sometimes millions of connected nodes. These datasets are crucial for studying social networks, molecular interactions and neuron behavior in the brain, but they’re more challenging for machine learning models to learn from.
Conventional deep learning models can work through a data set by labeling individual nodes, but graph data adds a new layer of complexity because the model also has to understand how those nodes are connected. With the emergence of big data and the prevalence of graph data in modern research, Zhang thinks it’s important to build tools that can learn from it.
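A small sketch shows where that extra complexity comes from: in a graph neural network layer, a node’s updated features mix in its neighbors’ features, so the connections themselves shape what the model learns. The code below is a generic, simplified graph convolution, an illustration for this article rather than Zhang’s own model.

```python
# Simplified graph convolution: aggregate each node's neighborhood, then
# apply a learned transformation. Illustration only.
import numpy as np

def gcn_layer(adjacency, features, weights):
    a_hat = adjacency + np.eye(len(adjacency))   # include each node's own features
    degree = a_hat.sum(axis=1, keepdims=True)
    aggregated = (a_hat / degree) @ features     # average over the neighborhood
    return np.maximum(aggregated @ weights, 0)   # linear transform + ReLU

# Tiny example: 4 nodes in a path graph, 3 input features, 2 output features.
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
print(gcn_layer(adjacency, np.random.rand(4, 3), np.random.rand(3, 2)).shape)  # (4, 2)
```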
Zhang learned how useful graph data could be while studying online social networks as a graduate student. At the same time, he realized that there weren’t deep learning models that could adequately understand these graphs, so he focused on combining the fields.
“We have such diverse graph data in the real world, but right now, deep learning models have some limitations when applied to graph data,” he said.
To date, his group has built neural networks to find and flag potential fake news articles on social media and study brain imaging data. He wants to expand his research to develop neural networks for online recommendation systems and molecular mining. His long-term goal is to develop a general model for graph-based neural networks that can be adapted depending on the application.
“Our target is to propose a base model which can deal with all kinds of graph data and be useful for very diverse application problems,” he said.
Zhang received his B.S. at Nanjing University, his Ph.D. at the University of Illinois – Chicago and began his career at Florida State University. His research is supported by several medium-sized grants from the National Science Foundation. Since joining UC Davis this fall, he’s been excited to start his research group and collaborate with his colleagues and industry.
“The focus on research is real at UC Davis,” he said. “When I gave a talk about my recent work, they proposed so many different ideas for applications or collaborations, and that’s one of the main reasons I decided to come here.”