Photo: Reeta Asmai/UC Davis.

Aditya Thakur Receives 2020 Facebook Probability and Programming Research Award

For the second year in a row, CS assistant professor Aditya Thakur is the winner of a Facebook Probability and Programming Research Award. The award, established in 2019, seeks proposals from the worldwide computer science community that address problems at the intersection of machine learning, programming languages, statistics and software engineering. Thakur’s proposal was one of 19 selected.

The Probability and Programming Award marks the third award Thakur has received from Facebook since 2018. He won the same award last year with associate professor Cindy Rubio González and won Facebook’s Testing and Verification Award in 2018.

“It is a great recognition of the work we have done at UC Davis and we are grateful to have won multiple of these awards recently,” said Thakur.

Thakur’s proposal, titled “Provable Polytope Patching of Deep Neural Networks,” expands on the group’s research on verification of deep neural networks.

Deep neural networks (DNNs) are a type of machine learning algorithm modeled after the human brain, composed of a vast, layered web of “neurons” that are trained on data to make complex, probability-based decisions. DNNs can contain thousands of these neurons, which depend on millions of “weights.” These systems are becoming increasingly popular for image processing, natural language processing, controlling autonomous vehicles and supporting decision-making for credit, risk assessment and insurance.
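To illustrate the structure described above, here is a minimal sketch of a toy feedforward network in Python with NumPy. It is purely illustrative and not taken from Thakur’s project; the layer sizes, random weights and ReLU activation are assumptions chosen only to show how layers of neurons and their weights combine to produce an output.

    import numpy as np

    def relu(x):
        # Rectified linear activation, a common choice for DNN "neurons"
        return np.maximum(0.0, x)

    # A toy two-layer network: 4 inputs -> 3 hidden neurons -> 2 outputs.
    # Real image-processing DNNs have thousands of neurons and millions of weights.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # first-layer weights and biases
    W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)   # second-layer weights and biases

    def forward(x):
        hidden = relu(W1 @ x + b1)    # layer of "neurons"
        logits = W2 @ hidden + b2     # output scores
        return logits

    x = np.array([0.5, -1.0, 2.0, 0.0])
    print(forward(x))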

While the group’s previous work has focused on finding bugs in DNNs, this proposal takes it a step further—how do you efficiently fix a bug in a DNN while providing guarantees about the fix, including whether it was effective and whether the change was minimal?

The concept is simple, but the problem is complex. Currently, the only recourse researchers have is to retrain the DNN and hope that the behavior is corrected. This is computationally expensive, as retraining takes weeks and uses hundreds of graphics processing units (GPUs). Furthermore, researchers often have only the trained DNN and not the original training data, which could be lost, proprietary or private. Retraining also provides no guarantee that the DNN’s undesired behavior is corrected, and it might introduce new bugs.

The proposal aims to compute minimal changes (“patches”) to the weights of the neurons that provably correct the DNN’s behavior. Preliminary results show that the approach is effective and efficient. In one example, it was able to correct state-of-the-art image-processing DNNs in a matter of minutes.
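As a rough illustration of the weight-patching idea, the sketch below finds the smallest change to a toy network’s final-layer weights so that a misbehaving input produces the desired output, without any retraining. This is only a simplified, assumed example of a minimum-norm correction for a single input; the actual proposal computes provably correct, minimal patches over whole regions (polytopes) of inputs, which this toy code does not attempt.

    import numpy as np

    def minimal_final_layer_patch(W, hidden, desired_out, bias):
        # Smallest (minimum Frobenius norm) change to W so that
        # (W + delta) @ hidden + bias equals desired_out exactly.
        error = desired_out - (W @ hidden + bias)
        delta = np.outer(error, hidden) / (hidden @ hidden)
        return W + delta

    # Toy final layer: 3 hidden activations -> 2 output scores.
    rng = np.random.default_rng(1)
    W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)
    hidden = np.array([0.7, 0.0, 1.3])      # activations for a misbehaving input
    desired = np.array([0.0, 1.0])          # output we want for that input

    W2_patched = minimal_final_layer_patch(W2, hidden, desired, b2)
    print(W2_patched @ hidden + b2)         # matches `desired` up to rounding
    print(np.linalg.norm(W2_patched - W2))  # size of the patch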

“We give guarantees that these patches to the neural networks are correct and minimal,” he said. “People have worked on automatically repairing traditional programs, but there is less work on repairing deep neural networks.”

Like last year, Thakur will give a talk at a Facebook-hosted workshop this fall as part of his award. As he works on the project, Thakur is thrilled to have his team of undergraduate and graduate researchers behind him. He especially credits undergraduate researcher Matthew Sotoudeh, who already has an impressive research track record, for his work on verification of neural networks and his contributions to the proposal. The proposal builds upon their work published in the NeurIPS 2019 Workshop on Safety and Robustness in Decision Making.

“We haven’t seen many people focus on this problem, so it is great to get feedback that this is a relevant problem with real-world applications,” he said.

Learn more about the award.
