Vibhav Gogate

Associate Professor - Computer Science
 
972-883-4245
ECS3406

Professional Preparation

Ph.D. - Information and Computer Science
University of California, Irvine - 2009
M.S. - Computer Science
University of Maine, Orono - 2002
B.S. - Computer Engineering
University of Mumbai - 1999

Honors and Awards
  • Co-winner of the UAI Approximate Inference Challenge, 2010 (won four of the six categories entered).
  • Thesis nominated by the University of California, Irvine, for the ACM Doctoral Dissertation Award, 2009.
  • Co-winner of the Probabilistic Inference Evaluation, 2008.
  • Joseph Fischer Memorial Fellowship Award for Outstanding Academic Achievement in Computer Science at the University of California, Irvine, 2004.
  • Graduate Fellowship, University of California, Irvine, 2002-2003.
Research Interests

Machine learning, artificial intelligence, data mining, and big data.

Publications

Deepak Venugopal and Vibhav Gogate, GiSS: Combining SampleSearch and Importance Sampling for Inference in Mixed Probabilistic and Deterministic Graphical Models, In 27th AAAI Conference on Artificial Intelligence (AAAI), 2013.
David Smith and Vibhav Gogate, The Inclusion-Exclusion Rule and its Application to the Junction Tree Algorithm, In 23rd International Joint Conference on Artificial Intelligence (IJCAI), 2013.
Vibhav Gogate and Pedro Domingos, Structured Message Passing, In 29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
Deepak Venugopal and Vibhav Gogate, Dynamic Blocking and Collapsing for Gibbs Sampling, In 29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
Somdeb Sarkhel and Vibhav Gogate, Lifting WALKSAT-based Local Search Algorithms for MAP Inference, In AAAI-13 Workshop on Statistical Relational Artificial Intelligence, 2013.
Vibhav Gogate and Rina Dechter, Importance Sampling-based Estimation over AND/OR Search Spaces for Graphical Models, Artificial Intelligence Journal, to appear, 2012.
Vibhav Gogate, Abhay Jha and Deepak Venugopal, Advances in Lifted Importance Sampling, In 26th AAAI Conference on Artificial Intelligence (AAAI), 2012.
Deepak Venugopal and Vibhav Gogate, On Lifting the Gibbs Sampling Algorithm, In 26th Annual Conference on Neural Information Processing Systems (NIPS), 2012.

Additional Information

Personal Statement
Vibhav Gogate is an Assistant Professor in the Computer Science Department at the University of Texas at Dallas. He received his Ph.D. from the University of California, Irvine, in 2009 and then spent two years as a postdoctoral researcher at the University of Washington. His research interests are in artificial intelligence, machine learning, and data mining. His current focus is on probabilistic graphical models, their first-order logic-based extensions such as Markov logic, and probabilistic programming. He has published over 25 papers in top-tier conferences and journals such as AAAI, UAI, NIPS, AISTATS, AIJ, and JAIR. He is a co-winner of the last two probabilistic inference competitions: the 2010 UAI Approximate Inference Challenge and the 2012 PASCAL Probabilistic Inference Competition.

News Articles

Computer Scientist Gets CAREER Award for Artificial Intelligence Work
UT Dallas assistant professor of computer science Dr. Vibhav Gogate has earned a National Science Foundation Faculty Early Career Development (CAREER) Award for his work to improve a type of computer algorithm used in artificial intelligence and machine learning.

Gogate’s award, which will run for five years, will support his work to develop new scalable approaches for learning and inference in Markov logic networks (MLNs).
“MLNs are used in many artificial intelligence sub-fields, such as computer vision, robotics, natural language processing and computational biology,” Gogate said. “Algorithms developed in this proposal can be immediately leveraged in these domains. We intend to use MLNs to solve much larger and harder reasoning problems than is possible today.”