News & Events



  •  EVENT   MERL to celebrate 25 years of innovation
    Date: Thursday, June 2, 2016
    MERL Contacts: Elizabeth Phillips; Anthony Vetro
    Location: Norton's Woods Conference Center at American Academy of Arts & Sciences, Cambridge, MA
    Research Areas: Algorithms, Data Analytics, Electronics & Communications, Computer Vision, Mechatronics, Multimedia
    Brief
    • A celebration event to mark MERL's 25th anniversary will be held on Thursday, June 2 at the Norton's Woods Conference Center at the American Academy of Arts & Sciences in Cambridge, MA. This event will feature keynote talks, panel sessions, and a research showcase. The event itself is invitation-only, but videos and other highlights will be made available online. Further details about the program can be obtained at the link below.
  •  NEWS   MERL researcher Oncel Tuzel gives keynote talk at 2015 International Symposium on Visual Computing
    Date: December 14, 2015 - December 16, 2015
    Where: Las Vegas, NV, USA
    Research Areas: Computer Vision, Machine Learning, Decision Optimization
    Brief
    • MERL researcher Oncel Tuzel gave a keynote talk at the 2015 International Symposium on Visual Computing (ISVC) in Las Vegas on Dec. 16, 2015. The talk, titled "Machine vision for robotic bin-picking: Sensors and algorithms", reviewed MERL's research on applying 2D and 3D sensing and machine learning to the problem of general pose estimation.

      The talk abstract was: For over four years at MERL, we have worked on the robot "bin-picking" problem: using a 2D or 3D camera to look into a bin of parts and determine the pose (3D rotation and translation) of a good candidate to pick up. We have solved the problem several different ways with several different sensors; I will briefly describe the sensors and the algorithms. In the first half of the talk, I will describe the Multi-Flash camera, a 2D camera with 8 flashes, and explain how this inexpensive camera design is used to extract robust geometric features, namely depth edges and specular edges, from the parts in a cluttered bin. I will present two pose estimation algorithms, (1) fast directional chamfer matching, a sub-linear-time line-matching algorithm, and (2) specular line reconstruction, for fast and robust pose estimation of parts with different surface characteristics. In the second half of the talk, I will present a voting-based pose estimation algorithm applicable to 3D sensors. We represent three-dimensional objects using a set of oriented point pair features: surface points with normals and boundary points with directions. I will describe a max-margin learning framework to identify discriminative features on the surface of the objects. The algorithm selects and ranks features according to their importance for the specified task, which leads to improved accuracy and reduced computational cost.
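      The voting-based approach in the second half of the abstract builds on oriented point pair features. As a minimal illustrative sketch (not MERL's implementation; function names and discretization steps here are assumptions), a surface point pair feature and the quantized hash key used for voting could be computed as:

```python
import math

def angle(u, v):
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def point_pair_feature(p1, n1, p2, n2):
    """4D feature of an oriented point pair:
    (pair distance, angle(n1, d), angle(n2, d), angle(n1, n2))."""
    d = tuple(b - a for a, b in zip(p1, p2))
    return (math.sqrt(sum(c * c for c in d)),
            angle(n1, d), angle(n2, d), angle(n1, n2))

def quantize(f, dist_step=0.05, angle_step=math.radians(12)):
    """Discretize a feature into a hash key; model pairs are stored in a
    table under this key, and matching scene pairs cast pose votes."""
    return (int(f[0] / dist_step),) + tuple(int(a / angle_step) for a in f[1:])
```

      In a full pipeline, every model point pair is hashed by its quantized feature offline; at runtime, scene pairs look up matching model pairs and vote for candidate poses, and the highest-scoring pose clusters are kept. The step sizes above are illustrative, not values from the talk.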
  •  NEWS   MERL presented 3 papers at the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
    Date: December 15, 2015
    Where: 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP)
    MERL Contacts: Andrew Knyazev; Hassan Mansour; Dong Tian
    Research Areas: Algorithms, Multimedia, Computer Vision, Machine Learning, Speech & Audio, Electronics & Communications, Signal Processing, Wireless Communications, Digital Video
    Brief
    • MERL researcher Andrew Knyazev gave three talks at the 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP). The papers were published in the IEEE conference proceedings.
  •  NEWS   Teng-Yok Lee co-chairs Large Data Analysis and Visualization workshop
    Date: October 25, 2015
    Where: Large Data Analysis and Visualization (LDAV)
    MERL Contact: Teng-Yok Lee
    Research Area: Computer Vision
    Brief
    • Teng-Yok Lee served as the poster co-chair for the Large Data Analysis and Visualization (LDAV) workshop at IEEE VIS 2015 in Chicago, Oct. 25-30. IEEE VIS drew over 2,000 attendees and comprises three highly competitive main subconferences: SciVis, InfoVis, and Visual Analytics Science and Technology (VAST).
  •  AWARD   Fujisankei Newspaper Gold and Bronze Medal Advertisement Award
    Date: September 30, 2015
    Awarded to: Mitsubishi Electric Corp.
    MERL Contact: Yuichi Taguchi
    Research Area: Computer Vision
    Brief
    • Mitsubishi Electric Corp. (MELCO) advertisements based on 3D reconstruction received a Gold medal and a Bronze medal in the Fujisankei Newspaper Advertisement Awards. The winning advertisements carried the taglines "Will I fit?", "He'll fit just fine.", and "Oops, did you think in 3D?"
  •  EVENT   Celebrating "Women in Science at MERL" luncheon
    Date & Time: Tuesday, August 4, 2015; 12:00
    MERL Contacts: Elizabeth Phillips; Jinyun Zhang
    Location: Mitsubishi Electric Research Laboratories
    Research Areas: Algorithms, Electronics & Communications, Data Analytics, Multimedia, Mechatronics, Computer Vision
    Brief
    • To celebrate "Women in Science at MERL," a luncheon was held on August 4. Eleven female interns and three female researchers took part, joined by female members of the HQ staff, the interns' hosts and managers, and MERL executives. The interns introduced their research projects and their positive experiences at MERL, the researchers shared their own career development stories, and the group closed by discussing how to be successful in the field of science. Every participant was inspired to continue contributing to the future of science.
  •  NEWS   Scene interpretation results from SA group members lead benchmark competition
    Date: July 13, 2015 - July 17, 2015
    MERL Contact: Jay Thornton
    Research Areas: Computer Vision, Machine Learning
    Brief
    • SA group members (M. Liu, S. Lin (intern), S. Ramalingam, O. Tuzel) presented the paper “Layered Interpretation of Street View Images” at the Robotics: Science and Systems Conference in Rome, July 13-17. The results they reported now lead the benchmark competition sponsored by Daimler. [Note that at that URL, ref. 2 comes from a collaboration with Daimler and uses an FPGA for high speed, whereas the MERL result is obtained with a desktop computer and GPU.]
  •  NEWS   3D reconstruction on Tokyo TV
    Date: February 20, 2015
    MERL Contact: Yuichi Taguchi
    Research Area: Computer Vision
  •  NEWS   R&D 100 Award for MELFA-3D Vision system
    Date: July 11, 2014
    Where: R&D Magazine
    MERL Contacts: Yuichi Taguchi; Jay Thornton
    Research Area: Computer Vision
    Brief
    • A team with members from MERL, ATC, and Meiden received an R&D 100 award for its work on Mitsubishi Electric's MELFA-3D Vision system for industrial robot arms. This system completely automates bin picking, the task of picking up parts that are randomly placed in a bin and aligning their poses for assembly processes.
  •  NEWS   MERL's high-speed optimization algorithms showcased at Mitsubishi Electric Corporation annual R&D Open House
    Date: February 13, 2014
    MERL Contact: Matthew Brand
    Research Areas: Algorithms, Mechatronics, Computer Vision
    Brief
    • Mitsubishi Electric Corporation announced its development of advanced optimization algorithms and high-speed calculation methods aimed at optimizing the performance of three practical systems: laser-processing machines that cut sheet metal at high speed along the shortest possible trajectories, lunar probes that minimize fuel consumption, and particle-beam therapies for prompt medical treatment.
  •  TALK   Embedded Vision R&D at Texas Instruments
    Date & Time: Friday, October 4, 2013; 12:00 PM
    Speaker: Dr. Goksel Dedeoglu, Texas Instruments
    Research Area: Computer Vision
    Brief
    • There are growing needs to accelerate computer vision algorithms on embedded processors for wide-ranging equipment including mobile phones, network cameras, robots, and automotive safety systems. In our Vision R&D group, we conduct various projects to understand how the vision requirements can be best addressed on Digital Signal Processors (DSP), where the compute bottlenecks are, and how we should evolve our hardware & software architectures to meet our customers' future needs. Towards this end, we build prototypes wherein we design and optimize embedded software for real-world application performance and robustness. In this talk, I will provide examples of vision problems that we have recently tackled.
  •  NEWS   International Conference on 3DTV-Conference: publication by Ming-Yu Liu and others
    Date: June 29, 2013
    Where: International Conference on 3DTV-Conference
    Research Area: Computer Vision
    Brief
    • The paper "Model-Based Vehicle Pose Estimation and Tracking in Videos Using Random Forests" by Hodlmoser, M., Micusik, B., Pollefeys, M., Liu, M-Y. and Kampel, M. was presented at the International Conference on 3DTV-Conference.
  •  EVENT   CCD/PROCAMS 2013 - IEEE 2nd Workshop on Computational Cameras and Displays
    Date & Time: Friday, June 28, 2013; 9:00 AM - 5:00 PM
    Location: Portland, Oregon
    Research Area: Computer Vision
    Brief
    • Amit Agrawal is a co-organizer of the CCD/PROCAMS 2013 Workshop on Computational Cameras and Displays.
  •  NEWS   CVPR 2013: 3 publications by Yuichi Taguchi, Srikumar Ramalingam, C. Oncel Tuzel, Amit K. Agrawal and Ming-Yu Liu
    Date: June 23, 2013
    Where: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
    MERL Contact: Yuichi Taguchi
    Research Area: Computer Vision
    Brief
    • The papers "Single Image Calibration of Multi-Axial Imaging Systems" by Agrawal, A. and Ramalingam, S.; "Joint Geodesic Upsampling of Depth Images" by Liu, M-Y., Tuzel, O. and Taguchi, Y.; and "Manhattan Junction Catalogue for Spatial Reasoning of Indoor Scenes" by Ramalingam, S., Pillai, J.K., Jain, A. and Taguchi, Y. were presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
  •  TALK   Holistic Models for Visual Perception in Autonomous Systems
    Date & Time: Thursday, May 23, 2013; 12:00 PM
    Speaker: Prof. Raquel Urtasun, TTI-Chicago
    Research Area: Computer Vision
    Brief
    • The development of autonomous systems that can effectively assist people with everyday tasks is one of the grand challenges in modern computer science. Notable examples are personal robotics for the elderly and people with disabilities, as well as autonomous driving systems which can help decrease fatalities caused by traffic accidents. To achieve full autonomy, multiple perception tasks must be solved: Autonomous systems should sense the environment, recognize the 3D world and interact with it. While most approaches have tackled individual perceptual components in isolation, I believe that the next generation of perceptual systems should reason jointly about multiple tasks.

      In this talk I'll argue that there are four key aspects towards developing such holistic models: (i) learning, (ii) inference, (iii) representation, and (iv) data. I'll describe efficient Markov random field learning and inference algorithms that exploit both the structure of the problem as well as parallel computation to achieve computational and memory efficiency. I'll demonstrate the effectiveness of our models on a wide variety of examples, and show representations and inference strategies that allow us to achieve state-of-the-art performance and result in several orders of magnitude speed-ups in a variety of challenging tasks, including 3D reconstruction, 3D layout parsing, object detection, semantic segmentation and free text exploitation for holistic visual recognition.
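      For a flavor of the structured inference the abstract refers to, here is a minimal sketch, not the speaker's code, of exact max-product (Viterbi-style) MAP inference on a chain-structured MRF with cost (negative log-potential) tables; the names and cost convention are illustrative:

```python
def chain_map(unary, pairwise):
    """Exact MAP inference (max-product / Viterbi) on a chain MRF.
    unary: list of {label: cost} dicts, one per node.
    pairwise: function (prev_label, cur_label) -> cost.
    Returns the minimum-cost label sequence."""
    best = [dict(unary[0])]  # best[t][l]: min cost of a labeling of nodes 0..t ending in l
    back = [{}]              # back[t][l]: argmin predecessor label
    for t in range(1, len(unary)):
        cur, bp = {}, {}
        for l in unary[t]:
            prev, cost = min(((p, best[t - 1][p] + pairwise(p, l))
                              for p in best[t - 1]), key=lambda x: x[1])
            cur[l] = cost + unary[t][l]
            bp[l] = prev
        best.append(cur)
        back.append(bp)
    lab = min(best[-1], key=best[-1].get)   # best final label
    seq = [lab]
    for t in range(len(unary) - 1, 0, -1):  # follow backpointers
        lab = back[t][lab]
        seq.append(lab)
    return seq[::-1]
```

      With a smoothness cost such as a Potts penalty for label changes, this recovers the globally optimal labeling of a chain in time linear in its length; the grid- and scene-structured models discussed in the talk generally require approximate or decomposition-based inference instead.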
  •  NEWS   ICRA 2013: publication by Yuichi Taguchi, Srikumar Ramalingam and others
    Date: May 14, 2013
    Where: IEEE International Conference on Robotics & Automation (ICRA)
    MERL Contact: Yuichi Taguchi
    Research Area: Computer Vision
    Brief
    • The paper "Point-Plane SLAM for Hand-Held 3D Sensors" by Taguchi, Y., Jian, Y-D, Ramalingam, S. and Feng, C. was presented at the IEEE International Conference on Robotics & Automation (ICRA).
  •  NEWS   IEEE Transactions on Pattern Analysis and Machine Intelligence: publication by MERL researchers and others
    Date: April 1, 2013
    Where: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Research Area: Computer Vision
    Brief
    • The article "Support Vector Shape: A Classifier Based Shape Representation" by Nguyen, H. V. and Porikli, F. was published in IEEE Transactions on Pattern Analysis and Machine Intelligence.
  •  NEWS   IEEE Transactions on Pattern Analysis and Machine Intelligence: publication by MERL researchers and others
    Date: February 14, 2013
    Where: IEEE Transactions on Pattern Analysis and Machine Intelligence
    Research Area: Computer Vision
    Brief
    • The article "Nonlinear Camera Response Functions and Image Deblurring: Theoretical Analysis and Practice" by Tai, Y-W, Chen, X., Kim, S., Kim, S.J., Li, F., Yang, J., Yu, J., Matsushita, Y. and Brown, M.S. was published in IEEE Transactions on Pattern Analysis and Machine Intelligence.
  •  NEWS   ICMLA 2012: publication by MERL researchers and others
    Date: December 12, 2012
    Where: International Conference on Machine Learning and Applications (ICMLA)
    Research Areas: Computer Vision, Machine Learning
    Brief
    • The paper "Compressive Clustering of High-Dimensional Data" by Ruta, A. and Porikli, F. was presented at the International Conference on Machine Learning and Applications (ICMLA).
  •  TALK   Sensitive Manipulation
    Date & Time: Thursday, November 15, 2012; 12:00 PM
    Speaker: Dr. Eduardo Torres-Jara, Worcester Polytechnic Institute
    MERL Host: Jay Thornton
    Research Area: Computer Vision
    Brief
    • This talk presents an alternative approach to robotic manipulation. In this approach, manipulation is mainly guided by tactile feedback as opposed to vision. The motivation behind this approach stems from the fact that manipulating an object necessarily implies coming into contact with it. As a result, directly sensing physical contact seems more important than vision to control the interaction of the object and the robot. In this work, the traditional approach of a highly precise arm guided by a vision system is replaced by one that uses a low mechanical impedance arm with dense tactile sensing and exploration capabilities.

      The robots OBRERO and GoBot have been built to implement this approach. We have developed a novel tactile sensing technology and mounted our sensors on the robots' hands. These sensors are biologically inspired and present adequate features for manipulation. The success of this approach is shown by picking up objects in a poorly modeled environment. This task, simple for humans, has been a challenge for robots. The robot can deal with new, unmodeled objects. Specifically, OBRERO can gently contact, explore, lift, and place an object in a different location. It can also detect basic slippage and external forces acting on an object while it is held. These tasks can be performed successfully with very light objects, without fixtures, and on slippery surfaces. Similarly, GoBot is capable of manipulating small objects such as the stones in the game Go. Both OBRERO and GoBot perform all of their manipulations using tactile feedback.