News & Events

21 News items, Awards, Events or Talks found.

  •  NEWS    MERL researchers presenting four papers and organizing the VLAR-SMART-101 Workshop at ICCV 2023
    Date: October 2, 2023 - October 6, 2023
    Where: Paris, France
    MERL Contacts: Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Suhas Lohit; Tim K. Marks; Pedro Miraldo; Kuan-Chuan Peng; Ye Wang
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • MERL researchers are presenting four papers and organizing the VLAR-SMART-101 workshop at the ICCV 2023 conference, which will be held in Paris, France, from October 2 to 6. ICCV is one of the most prestigious and competitive international conferences in computer vision. Details are provided below.

      1. Conference paper: “Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis,” by Nithin Gopalakrishnan Nair, Anoop Cherian, Suhas Lohit, Ye Wang, Toshiaki Koike-Akino, Vishal Patel, and Tim K. Marks

      Conditional generative models typically demand large annotated training sets to achieve high-quality synthesis. As a result, there has been significant interest in plug-and-play generation, i.e., using a pre-defined model to guide the generative process. In this paper, we introduce Steered Diffusion, a generalized framework for fine-grained photorealistic zero-shot conditional image generation using a diffusion model trained for unconditional generation. The key idea is to steer the image generation of the diffusion model during inference by designing a loss using a pre-trained inverse model that characterizes the conditional task. Our model shows clear qualitative and quantitative improvements over state-of-the-art diffusion-based plug-and-play models, while adding negligible computational cost.

      2. Conference paper: "BANSAC: A dynamic BAyesian Network for adaptive SAmple Consensus," by Valter Piedade and Pedro Miraldo

      We derive a dynamic Bayesian network that updates individual data points' inlier scores while iterating RANSAC. At each iteration, we apply weighted sampling using the updated scores. Our method works with or without prior data point scorings. In addition, we use the updated inlier/outlier scoring to derive a new stopping criterion for the RANSAC loop. Our method outperforms the baselines in accuracy while requiring less computation time.

      3. Conference paper: "Robust Frame-to-Frame Camera Rotation Estimation in Crowded Scenes," by Fabien Delattre, David Dirnfeld, Phat Nguyen, Stephen Scarano, Michael J. Jones, Pedro Miraldo, and Erik Learned-Miller

      We present a novel approach to estimating camera rotation in crowded, real-world scenes captured using a handheld monocular video camera. Our method uses a novel generalization of the Hough transform on SO(3) to efficiently find the camera rotation most compatible with the optical flow. Because this setting is not well covered by existing datasets, we provide a new dataset and benchmark, with high-accuracy and rigorously annotated ground truth on 17 video sequences. Our method is almost 40 percent more accurate than the next best method.

      4. Workshop paper: "Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection," by Manish Sharma*, Moitreya Chatterjee*, Kuan-Chuan Peng, Suhas Lohit, and Michael Jones

      While state-of-the-art object detection methods for RGB images have reached some level of maturity, the same is not true for infrared (IR) images. The primary bottleneck in bridging this gap is the lack of sufficient labeled training data for IR images. To address this issue, we present TensorFact, a novel tensor decomposition method that splits the convolution kernels of a CNN into low-rank factor matrices with fewer parameters. This compressed network is first pre-trained on RGB images and then augmented with only a few parameters. The augmented network is then trained on IR images while the weights trained on RGB are kept frozen, which prevents over-fitting and allows the network to generalize better. Experiments show that our method outperforms the state of the art.

      5. Workshop: “Vision-and-Language Algorithmic Reasoning (VLAR) Workshop and SMART-101 Challenge” by Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Tim K. Marks, Ram Ramrakhya, Honglu Zhou, Kevin A. Smith, Joanna Matthiesen, and Joshua B. Tenenbaum

      MERL researchers, along with researchers from MIT, Georgia Tech, Math Kangaroo USA, and Rutgers University, are jointly organizing a workshop on vision-and-language algorithmic reasoning at ICCV 2023 and conducting a challenge based on the SMART-101 puzzles described in the paper "Are Deep Neural Networks SMARTer than Second Graders?". A focus of this workshop is to bring together outstanding faculty and researchers working at the intersection of vision, language, and cognition to share their perspectives on recent breakthroughs in large language models and artificial general intelligence, and to showcase cutting-edge research that could inspire the audience to search for the missing pieces in our quest to solve the puzzle of artificial intelligence.

      Workshop link: https://wvlar.github.io/iccv23/
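      The score-guided sampling at the heart of BANSAC (paper 2 above) can be illustrated with a small line-fitting sketch: maintain a per-point inlier score and draw RANSAC's minimal sets in proportion to those scores. The code below is a simplified illustration, not the paper's dynamic Bayesian network; the exponential-smoothing score update and the function name are assumptions made for demonstration only.

```python
import numpy as np

def weighted_ransac_line(points, iters=200, thresh=0.05, seed=0):
    """Fit a 2D line with RANSAC, sampling points in proportion to
    per-point inlier scores that are updated every iteration.
    Simplified illustration of score-guided sampling (in the spirit
    of BANSAC); the score-update rule here is a stand-in, not the
    paper's Bayesian network."""
    rng = np.random.default_rng(seed)
    n = len(points)
    scores = np.full(n, 0.5)              # uninformative initial scores
    best_inliers, best_model = np.zeros(n, dtype=bool), None
    for _ in range(iters):
        p = scores / scores.sum()         # weighted minimal-set sampling
        i, j = rng.choice(n, size=2, replace=False, p=p)
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2           # line: a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = np.hypot(a, b)
        if norm < 1e-12:                  # degenerate sample, skip
            continue
        dist = np.abs(points @ np.array([a, b]) + c) / norm
        inliers = dist < thresh
        # illustrative update: nudge each score toward the inlier evidence
        scores = 0.9 * scores + 0.1 * inliers
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            best_model = (a / norm, b / norm, c / norm)
    return best_model, best_inliers
```

      Once a minimal set drawn from the true structure is found, its inliers' scores rise, concentrating subsequent sampling on promising points; per the abstract, BANSAC additionally derives a stopping criterion from the evolving scores.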
  •  NEWS    MERL Researchers Present Thirteen Papers at the 2023 IEEE International Conference on Robotics and Automation (ICRA)
    Date: May 29, 2023 - June 2, 2023
    Where: 2023 IEEE International Conference on Robotics and Automation (ICRA)
    MERL Contacts: Anoop Cherian; Radu Corcodel; Siddarth Jain; Devesh K. Jha; Toshiaki Koike-Akino; Tim K. Marks; Daniel N. Nikovski; Arvind Raghunathan; Diego Romeres
    Research Areas: Computer Vision, Machine Learning, Optimization, Robotics
    Brief
    • MERL researchers will present thirteen papers, including eight main conference papers and five workshop papers, at the 2023 IEEE International Conference on Robotics and Automation (ICRA) to be held in London, UK, from May 29 to June 2. ICRA is one of the largest and most prestigious conferences in the robotics community. The papers cover a broad set of topics in robotics, including estimation, manipulation, vision-based object recognition and segmentation, tactile estimation and tool manipulation, robotic food handling, robot skill learning, and model-based reinforcement learning.

      In addition to the paper presentations, MERL robotics researchers will also host an exhibition booth and look forward to discussing our research with visitors.
  •  NEWS    MERL presenting 8 papers at ICASSP 2022
    Date: May 22, 2022 - May 27, 2022
    Where: Singapore
    MERL Contacts: Anoop Cherian; Chiori Hori; Toshiaki Koike-Akino; Jonathan Le Roux; Tim K. Marks; Philip V. Orlik; Kuan-Chuan Peng; Pu (Perry) Wang; Gordon Wichern
    Research Areas: Artificial Intelligence, Computer Vision, Signal Processing, Speech & Audio
    Brief
    • MERL researchers are presenting 8 papers at the IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), which is being held in Singapore from May 22-27, 2022. A week of virtual presentations also took place earlier this month.

      Topics to be presented include recent advances in speech recognition, audio processing, scene understanding, computational sensing, and classification.

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 2000 participants each year.
  •  NEWS    MERL work on scene-aware interaction featured in IEEE Spectrum
    Date: March 1, 2022
    MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Brief
    • MERL's research on scene-aware interaction was recently featured in an IEEE Spectrum article. The article, titled "At Last, A Self-Driving Car That Can Explain Itself" and authored by MERL Senior Principal Research Scientist Chiori Hori and MERL Director Anthony Vetro, gives an overview of MERL's efforts towards developing a system that can analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.

      Scene-Aware Interaction for car navigation, one target application that the article focuses on, will provide drivers with intuitive route guidance. Scene-Aware Interaction technology is expected to have wide applicability, including human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. MERL's Scene-Aware Interaction Technology had previously been featured in a Mitsubishi Electric Corporation Press Release.

      IEEE Spectrum is the flagship magazine and website of the IEEE, the world’s largest professional organization devoted to engineering and the applied sciences. IEEE Spectrum has a circulation of over 400,000 engineers worldwide, making it one of the leading science and engineering magazines.
  •  NEWS    MERL's Scene-Aware Interaction Technology Featured in Mitsubishi Electric Corporation Press Release
    Date: July 22, 2020
    Where: Tokyo, Japan
    MERL Contacts: Anoop Cherian; Chiori Hori; Jonathan Le Roux; Tim K. Marks; Anthony Vetro
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning, Speech & Audio
    Brief
    • Mitsubishi Electric Corporation announced that the company has developed what it believes to be the world’s first technology capable of highly natural and intuitive interaction with humans based on a scene-aware capability to translate multimodal sensing information into natural language.

      The novel technology, Scene-Aware Interaction, incorporates Mitsubishi Electric’s proprietary Maisart® compact AI technology to analyze multimodal sensing information for highly natural and intuitive interaction with humans through context-dependent generation of natural language. The technology recognizes contextual objects and events based on multimodal sensing information, such as images and video captured with cameras, audio information recorded with microphones, and localization information measured with LiDAR.

      Scene-Aware Interaction for car navigation, one target application, will provide drivers with intuitive route guidance. The technology is also expected to have applicability to human-machine interfaces for in-vehicle infotainment, interaction with service robots in building and factory automation systems, systems that monitor the health and well-being of people, surveillance systems that interpret complex scenes for humans and encourage social distancing, support for touchless operation of equipment in public areas, and much more. The technology is based on recent research by MERL's Speech & Audio and Computer Vision groups.
  •  NEWS    MERL researchers presenting four papers and organizing two workshops at CVPR 2020 conference
    Date: June 14, 2020 - June 19, 2020
    MERL Contacts: Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Tim K. Marks; Kuan-Chuan Peng; Ye Wang
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • MERL researchers are presenting four papers (two oral papers and two posters) and organizing two workshops at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020).

      CVPR 2020 Orals with MERL authors:
      1. "Dynamic Multiscale Graph Neural Networks for 3D Skeleton Based Human Motion Prediction," by Maosen Li, Siheng Chen, Yangheng Zhao, Ya Zhang, Yanfeng Wang, Qi Tian
      2. "Collaborative Motion Prediction via Neural Motion Message Passing," by Yue Hu, Siheng Chen, Ya Zhang, Xiao Gu

      CVPR 2020 Posters with MERL authors:
      3. "LUVLi Face Alignment: Estimating Landmarks’ Location, Uncertainty, and Visibility Likelihood," by Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Ye Wang, Michael Jones, Anoop Cherian, Toshiaki Koike-Akino, Xiaoming Liu, Chen Feng
      4. "MotionNet: Joint Perception and Motion Prediction for Autonomous Driving Based on Bird’s Eye View Maps," by Pengxiang Wu, Siheng Chen, Dimitris N. Metaxas

      CVPR 2020 Workshops co-organized by MERL researchers:
      1. Fair, Data-Efficient and Trusted Computer Vision
      2. Deep Declarative Networks
  •  AWARD    MERL Researchers win Best Paper Award at ICCV 2019 Workshop on Statistical Deep Learning in Computer Vision
    Date: October 27, 2019
    Awarded to: Abhinav Kumar, Tim K. Marks, Wenxuan Mou, Chen Feng, Xiaoming Liu
    MERL Contact: Tim K. Marks
    Research Areas: Artificial Intelligence, Computer Vision, Machine Learning
    Brief
    • MERL researcher Tim Marks, former MERL interns Abhinav Kumar and Wenxuan Mou, and MERL consultants Professor Chen Feng (NYU) and Professor Xiaoming Liu (MSU) received the Best Oral Paper Award at the IEEE/CVF International Conference on Computer Vision (ICCV) 2019 Workshop on Statistical Deep Learning in Computer Vision (SDL-CV) held in Seoul, Korea. Their paper, entitled "UGLLI Face Alignment: Estimating Uncertainty with Gaussian Log-Likelihood Loss," describes a method which, given an image of a face, estimates not only the locations of facial landmarks but also the uncertainty of each landmark location estimate.
  •  NEWS    MERL presenting 16 papers at ICASSP 2019
    Date: May 12, 2019 - May 17, 2019
    Where: Brighton, UK
    MERL Contacts: Petros T. Boufounos; Anoop Cherian; Chiori Hori; Toshiaki Koike-Akino; Jonathan Le Roux; Dehong Liu; Hassan Mansour; Tim K. Marks; Philip V. Orlik; Anthony Vetro; Pu (Perry) Wang; Gordon Wichern
    Research Areas: Computational Sensing, Computer Vision, Machine Learning, Signal Processing, Speech & Audio
    Brief
    • MERL researchers will be presenting 16 papers at the IEEE International Conference on Acoustics, Speech & Signal Processing (ICASSP), which is being held in Brighton, UK from May 12-17, 2019. Topics to be presented include recent advances in speech recognition, audio processing, scene understanding, computational sensing, and parameter estimation. MERL is also a sponsor of the conference and will be participating in the student career luncheon; please join us at the lunch to learn about our internship program and career opportunities.

      ICASSP is the flagship conference of the IEEE Signal Processing Society, and the world's largest and most comprehensive technical conference focused on research advances and the latest technological developments in signal and information processing. The event attracts more than 2000 participants each year.
  •  NEWS    Tim Marks to give invited Keynote talk at AMFG 2017 Workshop, at ICCV 2017
    Date: October 28, 2017
    Where: Venice, Italy
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL Senior Principal Research Scientist Tim K. Marks will give an invited keynote talk at the 2017 IEEE Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2017). The workshop will take place on October 28, 2017, at the International Conference on Computer Vision (ICCV 2017) in Venice, Italy.
  •  EVENT    Tim Marks to give lunch talk at Face and Gesture 2017 conference
    Date: Thursday, June 1, 2017
    Location: IEEE Conference on Automatic Face and Gesture Recognition (FG 2017), Washington, DC
    Speaker: Tim K. Marks
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL Senior Principal Research Scientist Tim K. Marks will give the invited lunch talk on Thursday, June 1, at the IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017). The talk is entitled "Robust Real-Time 3D Head Pose and 2D Face Alignment."
  •  NEWS    MERL Researcher Tim Marks presents an invited talk at MIT Lincoln Laboratory
    Date: April 27, 2017
    Where: Lincoln Laboratory, Massachusetts Institute of Technology
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL researcher Tim K. Marks presented an invited talk as part of the MIT Lincoln Laboratory CORE Seminar Series on Biometrics. The talk was entitled "Robust Real-Time 2D Face Alignment and 3D Head Pose Estimation."

      Abstract: Head pose estimation and facial landmark localization are key technologies, with widespread application areas including biometrics and human-computer interfaces. This talk describes two different robust real-time face-processing methods, each using a different modality of input image. The first part of the talk describes our system for 3D head pose estimation and facial landmark localization using a commodity depth sensor. The method is based on a novel 3D Triangular Surface Patch (TSP) descriptor, which is viewpoint-invariant as well as robust to noise and to variations in the data resolution. This descriptor, combined with fast nearest-neighbor lookup and a joint voting scheme, enables our system to handle arbitrary head pose and significant occlusions. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Both our 3D head pose and 2D face alignment methods outperform previously published results on standard datasets. If permitted, I plan to end the talk with a live demonstration.
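      The cascaded Mixture of Invariant eXperts described in this abstract can be sketched at toy scale: each stage gates the current estimate to one of several specialized linear regressors and adds that expert's predicted correction. The class below is purely illustrative; the nearest-center gating, least-squares experts, and all names are assumptions for demonstration, not the paper's formulation.

```python
import numpy as np

class ToyMixtureCascade:
    """Toy cascade of mixtures of regression experts, loosely in the
    spirit of the MIX cascade: at each stage a gate assigns the current
    estimate to one of K experts, and that expert's linear regressor
    refines the estimate. Illustrative only."""

    def __init__(self, n_stages=3, n_experts=2):
        self.n_stages, self.n_experts = n_stages, n_experts
        self.stages = []  # per stage: (gate centers, list of regressors)

    def fit(self, feats, targets, init):
        est = init.copy()
        for _ in range(self.n_stages):
            residual = targets - est
            # split the samples into expert-specific regions of estimate space
            order = np.argsort(est[:, 0])
            splits = np.array_split(order, self.n_experts)
            centers, experts = [], []
            for idx in splits:
                centers.append(est[idx].mean(axis=0))
                # least-squares linear regressor from features to residual
                A = np.hstack([feats[idx], np.ones((len(idx), 1))])
                Wb, *_ = np.linalg.lstsq(A, residual[idx], rcond=None)
                experts.append(Wb)
            self.stages.append((np.array(centers), experts))
            est = est + self._apply(feats, est, self.stages[-1])
        return self

    def _apply(self, feats, est, stage):
        centers, experts = stage
        d = ((est[:, None, :] - centers[None]) ** 2).sum(-1)
        k = d.argmin(1)  # nearest-center gating of each sample
        A = np.hstack([feats, np.ones((len(feats), 1))])
        out = np.empty_like(est)
        for j, Wb in enumerate(experts):
            m = k == j
            out[m] = A[m] @ Wb
        return out

    def predict(self, feats, init):
        est = init.copy()
        for stage in self.stages:
            est = est + self._apply(feats, est, stage)
        return est
```

      The cascade structure means later stages only need to model the residual left by earlier ones, and specializing experts by region of the estimate space is what lets each individual regressor stay simple.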
  •  NEWS    MERL researcher Tim Marks presents invited talk at University of Utah
    Date: April 10, 2017
    Where: University of Utah School of Computing
    MERL Contact: Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL researcher Tim K. Marks presented an invited talk at the University of Utah School of Computing, entitled "Action Detection from Video and Robust Real-Time 2D Face Alignment."

      Abstract: The first part of the talk describes our multi-stream bi-directional recurrent neural network for action detection from video. In addition to a two-stream convolutional neural network (CNN) on full-frame appearance (images) and motion (optical flow), our system trains two additional streams on appearance and motion that have been cropped to a bounding box from a person tracker. To model long-term temporal dynamics within and between actions, the multi-stream CNN is followed by a bi-directional Long Short-Term Memory (LSTM) layer. Our method outperforms the previous state of the art on two action detection datasets: the MPII Cooking 2 Dataset, and a new MERL Shopping Dataset that we have made available to the community. The second part of the talk describes our method for face alignment, which is the localization of a set of facial landmark points in a 2D image or video of a face. Face alignment is particularly challenging when there are large variations in pose (in-plane and out-of-plane rotations) and facial expression. To address this issue, we propose a cascade in which each stage consists of a Mixture of Invariant eXperts (MIX), where each expert learns a regression model that is specialized to a different subset of the joint space of pose and expressions. We also present a method to include deformation constraints within the discriminative alignment framework, which makes the algorithm more robust. Our face alignment system outperforms previously published results on standard datasets. The talk will end with a live demo of our face alignment system.
  •  NEWS    MERL presents three papers at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
    Date: June 27, 2016 - June 30, 2016
    Where: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV
    MERL Contacts: Michael J. Jones; Tim K. Marks
    Research Area: Machine Learning
    Brief
    • MERL researchers in the Computer Vision group presented three papers at the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), which had a paper acceptance rate of 29.9%.
  •  NEWS    The International Journal of Robotics Research: publication by Yuichi Taguchi, Tim K. Marks, C. Oncel Tuzel, Ming-Yu Liu and others
    Date: May 8, 2012
    Where: The International Journal of Robotics Research
    MERL Contact: Tim K. Marks
    Research Area: Computer Vision
    Brief
    • The article "Fast Object Localization and Pose Estimation in Heavy Clutter for Robotic Bin Picking" by Liu, M.-Y., Tuzel, O., Veeraraghavan, A., Taguchi, Y., Marks, T.K. and Chellappa, R. was published in The International Journal of Robotics Research.
  •  NEWS    ICCV 2011: publication by Michael J. Jones, Tim K. Marks and others
    Date: November 6, 2011
    Where: IEEE International Conference on Computer Vision (ICCV)
    MERL Contacts: Tim K. Marks; Michael J. Jones
    Brief
    • The paper "Fully Automatic Pose-Invariant Face Recognition via 3D Pose Normalization" by Asthana, A., Marks, T.K., Jones, M.J., Tieu, K.H. and Rohith, M. was presented at the IEEE International Conference on Computer Vision (ICCV).
  •  NEWS    IROS 2011: publication by Yuichi Taguchi, John R. Hershey and Tim K. Marks
    Date: September 25, 2011
    Where: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
    MERL Contact: Tim K. Marks
    Brief
    • The paper "Entropy-Based Motion Selection for Touch-Based Registration Using Rao-Blackwellized Particle Filtering" by Taguchi, Y., Marks, T.K. and Hershey, J.R. was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
  •  NEWS    BMVC 2011: publication by Michael J. Jones, Tim K. Marks and others
    Date: August 29, 2011
    Where: British Machine Vision Conference (BMVC)
    MERL Contacts: Michael J. Jones; Tim K. Marks
    Brief
    • The paper "Pose Normalization via Learned 2D Warping for Fully Automatic Face Recognition" by Asthana, A., Jones, M.J., Marks, T.K., Tieu, K.H. and Goecke, R. was presented at the British Machine Vision Conference (BMVC).
  •  NEWS    ECCV 2010: 5 publications by Yuichi Taguchi, Srikumar Ramalingam, Amit K. Agrawal, C. Oncel Tuzel and Tim K. Marks
    Date: September 5, 2010
    Where: European Conference on Computer Vision (ECCV)
    MERL Contact: Tim K. Marks
    Research Area: Computer Vision
    Brief
    • The following papers were presented at the European Conference on Computer Vision (ECCV):

      1. "Image Invariants for Smooth Reflective Surfaces," by Sankaranarayanan, A.C., Veeraraghavan, A., Tuzel, O. and Agrawal, A.
      2. "Analytical Forward Projection for Axial Non-Central Dioptric & Catadioptric Cameras," by Agrawal, A., Taguchi, Y. and Ramalingam, S.
      3. "P2Pi: A Minimal Solution for Registration of 3D Points to 3D Planes," by Ramalingam, S., Taguchi, Y., Marks, T.K. and Tuzel, O.
      4. "Fast Approximate Nearest Neighbor Methods for Non-Euclidean Manifolds with Applications to Human Activity Analysis in Videos," by Chaudhry, R. and Ivanov, Y.
      5. "Flexible Voxels for Motion-Aware Videography," by Gupta, M., Agrawal, A., Veeraraghavan, A. and Narasimhan, S.G.
  •  NEWS    CVPR 2010: 8 publications by C. Oncel Tuzel, Tim K. Marks, Yuichi Taguchi, Srikumar Ramalingam, Michael J. Jones and Amit K. Agrawal
    Date: June 13, 2010
    Where: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
    MERL Contacts: Michael J. Jones; Tim K. Marks
    Brief
    • The following papers were presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR):

      1. "Optimal Coded Sampling for Temporal Super-Resolution," by Agrawal, A.K., Gupta, M., Veeraraghavan, A.N. and Narasimhan, S.G.
      2. "Breaking the Interactive Bottleneck in Multi-class Classification with Active Selection and Binary Feedback," by Joshi, A.J., Porikli, F.M. and Papanikolopoulos, N.
      3. "Axial Light Field for Curved Mirrors: Reflect Your Perspective, Widen Your View," by Taguchi, Y., Agrawal, A.K., Ramalingam, S. and Veeraraghavan, A.N.
      4. "Morphable Reflectance Fields for Enhancing Face Recognition," by Kumar, R., Jones, M.J. and Marks, T.K.
      5. "Increasing Depth Resolution of Electron Microscopy of Neural Circuits using Sparse Tomographic Reconstruction," by Veeraraghavan, A., Genkin, A.V., Vitaladevuni, S., Scheffer, L., Xu, S., Hess, H., Fetter, R., Cantoni, M., Knott, G. and Chklovskii, D.
      6. "Specular Surface Reconstruction from Sparse Reflection Correspondences," by Sankaranarayanan, A., Veeraraghavan, A.N., Tuzel, C.O. and Agrawal, A.K.
      7. "Fast Directional Chamfer Matching," by Liu, M.-Y., Tuzel, C.O., Veeraraghavan, A.N. and Chellappa, R.
      8. "Robust RVM regression using sparse outlier model," by Mitra, K., Veeraraghavan, A. and Chellappa, R.
  •  NEWS    ICRA 2010: 3 publications by Yuichi Taguchi, Amit K. Agrawal, C. Oncel Tuzel, Tim K. Marks and others
    Date: May 3, 2010
    Where: IEEE International Conference on Robotics and Automation (ICRA)
    MERL Contact: Tim K. Marks
    Research Area: Computer Vision
    Brief
    • The following papers were presented at the IEEE International Conference on Robotics and Automation (ICRA):

      1. "Pose Estimation in Heavy Clutter Using a Multi-Flash Camera," by Liu, M.-Y., Tuzel, C.O., Veeraraghavan, A.N., Chellappa, R., Agrawal, A.K. and Okuda, H.
      2. "Rao-Blackwellized Particle Filtering for Probing-based 6-DOF Localization in Robotic Assembly," by Taguchi, Y., Marks, T.K. and Okuda, H.
      3. "Multi-Class Batch-mode Active Learning for Image Classification," by Joshi, A.J., Porikli, F. and Papanikolopoulos, N.
  •  NEWS    IEEE Transactions on Pattern Analysis and Machine Intelligence: publication by Tim K. Marks and others
    Date: February 1, 2010
    Where: IEEE Transactions on Pattern Analysis and Machine Intelligence
    MERL Contact: Tim K. Marks
    Research Area: Computer Vision
    Brief
    • The article "Tracking Motion, Deformation and Texture Using Conditionally Gaussian Processes" by Marks, T.K., Hershey, J.R. and Movellan, J.R. was published in IEEE Transactions on Pattern Analysis and Machine Intelligence.