Computational Sensing

Utilizing computation to improve sensing capabilities.

Our research in the area of computational sensing focuses on signal acquisition and acquisition design, on signal modeling and reconstruction algorithms (including inverse problems), and on array signal processing techniques.

  • Researchers

  • Awards

    •  AWARD    MERL’s Paper on Wi-Fi Sensing Earns Top 3% Paper Recognition at ICASSP 2023, Selected as a Best Student Paper Award Finalist
      Date: June 9, 2023
      Awarded to: Cristian J. Vaca-Rubio, Pu Wang, Toshiaki Koike-Akino, Ye Wang, Petros Boufounos and Petar Popovski
      MERL Contacts: Petros T. Boufounos; Toshiaki Koike-Akino; Pu (Perry) Wang; Ye Wang
      Research Areas: Artificial Intelligence, Communications, Computational Sensing, Dynamical Systems, Machine Learning, Signal Processing
      Brief
      • A MERL paper on Wi-Fi sensing was recognized as a Top 3% Paper among all 2709 accepted papers at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023). Co-authored by Cristian Vaca-Rubio and Petar Popovski from Aalborg University, Denmark, and MERL researchers Pu Wang, Toshiaki Koike-Akino, Ye Wang, and Petros Boufounos, the paper "MmWave Wi-Fi Trajectory Estimation with Continuous-Time Neural Dynamic Learning" was also a Best Student Paper Award finalist.

        Performed during Cristian’s stay at MERL first as a visiting Marie Skłodowska-Curie Fellow and then as a full-time intern in 2022, this work capitalizes on standards-compliant Wi-Fi signals to perform indoor localization and sensing. The paper uses a neural dynamic learning framework to address technical issues such as low sampling rate and irregular sampling intervals.

        ICASSP, a flagship conference of the IEEE Signal Processing Society (SPS), was hosted on the Greek island of Rhodes from June 04 to June 10, 2023. ICASSP 2023 marked the largest ICASSP in history, boasting over 4000 participants and 6128 submitted papers, out of which 2709 were accepted.
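To build intuition for why a continuous-time formulation helps with the low and irregular sampling rates mentioned above, here is a generic toy sketch (with an invented linear vector field standing in for a learned neural dynamic model; this is not the paper's method): a continuous-time state model can be integrated between arbitrary measurement timestamps, so no fixed sampling grid is assumed.

```python
import numpy as np

# Stand-in for a learned neural vector field f(state): a damped
# linear oscillator, purely for illustration.
def dynamics(state):
    A = np.array([[0.0, 1.0], [-1.0, -0.1]])
    return A @ state

def integrate(state, t0, t1, dt=0.01):
    """Euler-integrate the dynamics from t0 to t1 (t1 > t0)."""
    t = t0
    while t < t1:
        h = min(dt, t1 - t)
        state = state + h * dynamics(state)
        t += h
    return state

# Irregular observation times, as produced by opportunistic Wi-Fi sensing.
times = [0.0, 0.13, 0.45, 0.48, 1.02]
state = np.array([1.0, 0.0])
trajectory = [state]
for t0, t1 in zip(times[:-1], times[1:]):
    state = integrate(state, t0, t1)
    trajectory.append(state)
print(len(trajectory))  # one state estimate per observation time
```

Because the integrator accepts any interval length, gaps of 0.03 s or 0.54 s between Wi-Fi measurements are handled identically.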
    •  AWARD    Joshua Rapp wins Best Dissertation Award from the IEEE Signal Processing Society
      Date: December 20, 2021
      Awarded to: Joshua Rapp
      MERL Contact: Joshua Rapp
      Research Areas: Computational Sensing, Signal Processing
      Brief
      • Joshua Rapp has won the 2021 Best PhD Dissertation Award from the IEEE Signal Processing Society.
        The award recognizes a PhD thesis on a signal processing subject completed within the past three years, both for the relevance of its contributions and for its potential to stimulate further research in the field.

        Dr. Rapp completed his PhD at Boston University in 2020 with a thesis entitled "Probabilistic Modeling for Single-Photon Lidar." The dissertation tackles challenges of the acquisition and processing of 3D depth maps reconstructed from time-of-flight data captured one photon at a time.
        The award will be presented at the 2022 IEEE International Conference on Image Processing (ICIP) in France.
    •  AWARD    Petros Boufounos Elevated to IEEE Fellow
      Date: January 1, 2022
      Awarded to: Petros T. Boufounos
      MERL Contact: Petros T. Boufounos
      Research Areas: Computational Sensing, Signal Processing
      Brief
      • MERL’s Petros Boufounos has been elevated to IEEE Fellow, effective January 2022, for “contributions to compressed sensing.”

        IEEE Fellow is the highest grade of membership of the IEEE. It honors members with an outstanding record of technical achievements, contributing importantly to the advancement or application of engineering, science and technology, and bringing significant value to society. Each year, following a rigorous evaluation procedure, the IEEE Fellow Committee recommends a select group of recipients for elevation to IEEE Fellow. Less than 0.1% of voting members are selected annually for this member grade elevation.

    See All Awards for Computational Sensing
  • News & Events

    •  NEWS    MERL Researchers to Present 2 Conference and 11 Workshop Papers at NeurIPS 2024
      Date: December 10, 2024 - December 15, 2024
      Where: Advances in Neural Information Processing Systems (NeurIPS)
      MERL Contacts: Petros T. Boufounos; Matthew Brand; Ankush Chakrabarty; Anoop Cherian; François Germain; Toshiaki Koike-Akino; Christopher R. Laughman; Jonathan Le Roux; Jing Liu; Suhas Lohit; Tim K. Marks; Yoshiki Masuyama; Kieran Parsons; Kuan-Chuan Peng; Diego Romeres; Pu (Perry) Wang; Ye Wang; Gordon Wichern
      Research Areas: Artificial Intelligence, Communications, Computational Sensing, Computer Vision, Control, Data Analytics, Dynamical Systems, Machine Learning, Multi-Physical Modeling, Optimization, Robotics, Signal Processing, Speech & Audio, Human-Computer Interaction, Information Security
      Brief
      • MERL researchers will attend and present the following papers at the 2024 Advances in Neural Information Processing Systems (NeurIPS) Conference and Workshops.

        1. "RETR: Multi-View Radar Detection Transformer for Indoor Perception" by Ryoma Yataka (Mitsubishi Electric), Adriano Cardace (Bologna University), Perry Wang (Mitsubishi Electric Research Laboratories), Petros Boufounos (Mitsubishi Electric Research Laboratories), Ryuhei Takahashi (Mitsubishi Electric). Main Conference. https://neurips.cc/virtual/2024/poster/95530

        2. "Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads" by Anoop Cherian (Mitsubishi Electric Research Laboratories), Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories), Suhas Lohit (Mitsubishi Electric Research Laboratories), Joanna Matthiesen (Math Kangaroo USA), Kevin Smith (Massachusetts Institute of Technology), Josh Tenenbaum (Massachusetts Institute of Technology). Main Conference, Datasets and Benchmarks track. https://neurips.cc/virtual/2024/poster/97639

        3. "Probabilistic Forecasting for Building Energy Systems: Are Time-Series Foundation Models The Answer?" by Young-Jin Park (Massachusetts Institute of Technology), Jing Liu (Mitsubishi Electric Research Laboratories), François G Germain (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Gordon Wichern (Mitsubishi Electric Research Laboratories), Navid Azizan (Massachusetts Institute of Technology), Christopher R. Laughman (Mitsubishi Electric Research Laboratories), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories). Time Series in the Age of Large Models Workshop.

        4. "Forget to Flourish: Leveraging Model-Unlearning on Pretrained Language Models for Privacy Leakage" by Md Rafi Ur Rashid (Penn State University), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Shagufta Mehnaz (Penn State University), Ye Wang (Mitsubishi Electric Research Laboratories). Workshop on Red Teaming GenAI: What Can We Learn from Adversaries?

        5. "Spatially-Aware Losses for Enhanced Neural Acoustic Fields" by Christopher Ick (New York University), Gordon Wichern (Mitsubishi Electric Research Laboratories), Yoshiki Masuyama (Mitsubishi Electric Research Laboratories), François G Germain (Mitsubishi Electric Research Laboratories), Jonathan Le Roux (Mitsubishi Electric Research Laboratories). Audio Imagination Workshop.

        6. "FV-NeRV: Neural Compression for Free Viewpoint Videos" by Sorachi Kato (Osaka University), Takuya Fujihashi (Osaka University), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Takashi Watanabe (Osaka University). Machine Learning and Compression Workshop.

        7. "GPT Sonography: Hand Gesture Decoding from Forearm Ultrasound Images via VLM" by Keshav Bimbraw (Worcester Polytechnic Institute), Ye Wang (Mitsubishi Electric Research Laboratories), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories). AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond Workshop.

        8. "Smoothed Embeddings for Robust Language Models" by Hase Ryo (Mitsubishi Electric), Md Rafi Ur Rashid (Penn State University), Ashley Lewis (Ohio State University), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Kieran Parsons (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories). Safe Generative AI Workshop.

        9. "Slaying the HyDRA: Parameter-Efficient Hyper Networks with Low-Displacement Rank Adaptation" by Xiangyu Chen (University of Kansas), Ye Wang (Mitsubishi Electric Research Laboratories), Matthew Brand (Mitsubishi Electric Research Laboratories), Pu Wang (Mitsubishi Electric Research Laboratories), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories). Workshop on Adaptive Foundation Models.

        10. "Preference-based Multi-Objective Bayesian Optimization with Gradients" by Joshua Hang Sai Ip (University of California Berkeley), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories), Ali Mesbah (University of California Berkeley), Diego Romeres (Mitsubishi Electric Research Laboratories). Workshop on Bayesian Decision-Making and Uncertainty. Lightning talk spotlight.

        11. "TR-BEACON: Shedding Light on Efficient Behavior Discovery in High-Dimensions with Trust-Region-based Bayesian Novelty Search" by Wei-Ting Tang (Ohio State University), Ankush Chakrabarty (Mitsubishi Electric Research Laboratories), Joel A. Paulson (Ohio State University). Workshop on Bayesian Decision-Making and Uncertainty.

        12. "MEL-PETs Joint-Context Attack for the NeurIPS 2024 LLM Privacy Challenge Red Team Track" by Ye Wang (Mitsubishi Electric Research Laboratories), Tsunato Nakai (Mitsubishi Electric), Jing Liu (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Kento Oonishi (Mitsubishi Electric), Takuya Higashi (Mitsubishi Electric). LLM Privacy Challenge. Special Award for Practical Attack.

        13. "MEL-PETs Defense for the NeurIPS 2024 LLM Privacy Challenge Blue Team Track" by Jing Liu (Mitsubishi Electric Research Laboratories), Ye Wang (Mitsubishi Electric Research Laboratories), Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories), Tsunato Nakai (Mitsubishi Electric), Kento Oonishi (Mitsubishi Electric), Takuya Higashi (Mitsubishi Electric). LLM Privacy Challenge. Won 3rd Place Award.

        MERL members also contributed to the organization of the Multimodal Algorithmic Reasoning (MAR) Workshop (https://marworkshop.github.io/neurips24/). Organizers: Anoop Cherian (Mitsubishi Electric Research Laboratories), Kuan-Chuan Peng (Mitsubishi Electric Research Laboratories), Suhas Lohit (Mitsubishi Electric Research Laboratories), Honglu Zhou (Salesforce Research), Kevin Smith (Massachusetts Institute of Technology), Tim K. Marks (Mitsubishi Electric Research Laboratories), Juan Carlos Niebles (Salesforce AI Research), Petar Veličković (Google DeepMind).
    •  NEWS    MERL Papers and Workshops at CVPR 2024
      Date: June 17, 2024 - June 21, 2024
      Where: Seattle, WA
      MERL Contacts: Petros T. Boufounos; Moitreya Chatterjee; Anoop Cherian; Michael J. Jones; Toshiaki Koike-Akino; Jonathan Le Roux; Suhas Lohit; Tim K. Marks; Pedro Miraldo; Jing Liu; Kuan-Chuan Peng; Pu (Perry) Wang; Ye Wang; Matthew Brand
      Research Areas: Artificial Intelligence, Computational Sensing, Computer Vision, Machine Learning, Speech & Audio
      Brief
      • MERL researchers are presenting five conference papers and three workshop papers, and are co-organizing two workshops, at the CVPR 2024 conference, which will be held in Seattle, June 17-21. CVPR is one of the most prestigious and competitive international conferences in computer vision. Details of MERL contributions are provided below.

        CVPR Conference Papers:

        1. "TI2V-Zero: Zero-Shot Image Conditioning for Text-to-Video Diffusion Models" by H. Ni, B. Egger, S. Lohit, A. Cherian, Y. Wang, T. Koike-Akino, S. X. Huang, and T. K. Marks

        This work enables a pretrained text-to-video (T2V) diffusion model to be additionally conditioned on an input image (first video frame), yielding a text+image to video (TI2V) model. Other than using the pretrained T2V model, our method requires no ("zero") training or fine-tuning. The paper uses a "repeat-and-slide" method and diffusion resampling to synthesize videos from a given starting image and text describing the video content.

        Paper: https://www.merl.com/publications/TR2024-059
        Project page: https://merl.com/research/highlights/TI2V-Zero

        2. "Long-Tailed Anomaly Detection with Learnable Class Names" by C.-H. Ho, K.-C. Peng, and N. Vasconcelos

        This work aims to identify defects across various classes without relying on hard-coded class names. We introduce the concept of long-tailed anomaly detection, addressing challenges like class imbalance and dataset variability. Our proposed method combines reconstruction and semantic modules, learning pseudo-class names and utilizing a variational autoencoder for feature synthesis to improve performance in long-tailed datasets, outperforming existing methods in experiments.

        Paper: https://www.merl.com/publications/TR2024-040

        3. "Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling" by X. Liu, Y-W. Tai, C-K. Tang, P. Miraldo, S. Lohit, and M. Chatterjee

        This work presents a new strategy for rendering dynamic scenes from novel viewpoints. Our approach stratifies the scene into regions according to the extent of motion in each region, which is determined automatically. Regions with higher motion are permitted a denser spatio-temporal sampling strategy for more faithful rendering of the scene. Additionally, to the best of our knowledge, ours is the first work to enable tracking of objects in the scene from novel views, based on the preferences of a user provided by a click.

        Paper: https://www.merl.com/publications/TR2024-042

        4. "SIRA: Scalable Inter-frame Relation and Association for Radar Perception" by R. Yataka, P. Wang, P. T. Boufounos, and R. Takahashi

        Overcoming the limitations on radar feature extraction such as low spatial resolution, multipath reflection, and motion blurs, this paper proposes SIRA (Scalable Inter-frame Relation and Association) for scalable radar perception with two designs: 1) extended temporal relation, generalizing the existing temporal relation layer from two frames to multiple inter-frames with temporally regrouped window attention for scalability; and 2) motion consistency track with a pseudo-tracklet generated from observational data for better object association.

        Paper: https://www.merl.com/publications/TR2024-041

        5. "RILA: Reflective and Imaginative Language Agent for Zero-Shot Semantic Audio-Visual Navigation" by Z. Yang, J. Liu, P. Chen, A. Cherian, T. K. Marks, J. Le Roux, and C. Gan

        We leverage Large Language Models (LLMs) for zero-shot semantic audio-visual navigation. Specifically, by employing multi-modal models to process sensory data, we instruct an LLM-based planner to actively explore the environment, adaptively evaluating and dismissing inaccurate perceptual descriptions.

        Paper: https://www.merl.com/publications/TR2024-043

        CVPR Workshop Papers:

        1. "CoLa-SDF: Controllable Latent StyleSDF for Disentangled 3D Face Generation" by R. Dey, B. Egger, V. Boddeti, Y. Wang, and T. K. Marks

        This paper proposes a new method for generating 3D faces and rendering them to images by combining the controllability of nonlinear 3DMMs with the high fidelity of implicit 3D GANs. Inspired by StyleSDF, our model uses a similar architecture but enforces the latent space to match the interpretable and physical parameters of the nonlinear 3D morphable model MOST-GAN.

        Paper: https://www.merl.com/publications/TR2024-045

        2. “Tracklet-based Explainable Video Anomaly Localization” by A. Singh, M. J. Jones, and E. Learned-Miller

        This paper describes a new method for localizing anomalous activity in video of a scene given sample videos of normal activity from the same scene. The method is based on detecting and tracking objects in the scene and estimating high-level attributes of the objects such as their location, size, short-term trajectory and object class. These high-level attributes can then be used to detect unusual activity as well as to provide a human-understandable explanation for what is unusual about the activity.

        Paper: https://www.merl.com/publications/TR2024-057
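The attribute-based idea described above can be illustrated with a toy numpy sketch (invented attributes and a simple nearest-neighbor score; this is not the paper's actual model): each tracklet becomes a vector of high-level attributes, a test tracklet is scored by its distance to the nearest normal exemplar, and the attribute with the largest gap provides the human-understandable explanation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Attribute vectors [x, y, size, speed] extracted from tracklets in
# normal training video of the scene (synthetic stand-in data).
normal = np.column_stack([
    rng.uniform(0, 1, 200),       # x location
    rng.uniform(0, 1, 200),       # y location
    rng.normal(1.0, 0.1, 200),    # object size
    rng.normal(1.0, 0.2, 200),    # short-term speed
])

def anomaly_score(track):
    # Distance to the nearest normal exemplar; large => unusual.
    return np.min(np.linalg.norm(normal - track, axis=1))

usual = np.array([0.5, 0.5, 1.0, 1.0])
speeding = np.array([0.5, 0.5, 1.0, 5.0])  # far faster than any normal track
print(anomaly_score(usual) < anomaly_score(speeding))
```

Comparing the anomalous track to its nearest normal neighbor attribute-by-attribute (here, the speed coordinate) yields the explanation of what is unusual.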

        3. "SuperLoRA: Parameter-Efficient Unified Adaptation for Large Vision Models" by X. Chen, J. Liu, Y. Wang, P. Wang, M. Brand, G. Wang, and T. Koike-Akino

        This paper proposes a generalized framework called SuperLoRA that unifies and extends different variants of low-rank adaptation (LoRA). Introducing new options with grouping, folding, shuffling, projection, and tensor decomposition, SuperLoRA offers high flexibility and demonstrates superior performance, with up to a 10-fold gain in parameter efficiency for transfer learning tasks.

        Paper: https://www.merl.com/publications/TR2024-062

        MERL co-organized workshops:

        1. "Multimodal Algorithmic Reasoning Workshop" by A. Cherian, K-C. Peng, S. Lohit, M. Chatterjee, H. Zhou, K. Smith, T. K. Marks, J. Matthiesen, and J. Tenenbaum

        Workshop link: https://marworkshop.github.io/cvpr24/index.html

        2. "The 5th Workshop on Fair, Data-Efficient, and Trusted Computer Vision" by K-C. Peng, et al.

        Workshop link: https://fadetrcv.github.io/2024/
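For background on the SuperLoRA description above, the plain low-rank adaptation (LoRA) mechanism it generalizes can be sketched in a few lines of numpy (illustrative only; SuperLoRA's grouping, folding, shuffling, projection, and tensor-decomposition options are not shown):

```python
import numpy as np

# LoRA: a frozen weight W is adapted by a rank-r update B @ A, so only
# r * (d_in + d_out) parameters are trained instead of d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def adapted_forward(x, scale=1.0):
    # Effective weight is W + scale * (B @ A), applied without forming it.
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0, the adapter is a no-op and the frozen model is recovered.
assert np.allclose(adapted_forward(x), W @ x)
full, lora = d_in * d_out, r * (d_in + d_out)
print(f"trainable params: {lora} vs {full}")
```

With these toy dimensions the adapter trains 512 parameters instead of 4096; SuperLoRA's additional structural options push this parameter efficiency further.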

    See All News & Events for Computational Sensing
  • Research Highlights

  • Internships

    • ST0116: Internship - Deep Learning for Radar Perception

      The Computational Sensing team at MERL is seeking a highly motivated intern to conduct fundamental research in radar perception. Expertise in deep learning-based object detection, pose estimation, segmentation, multiple object tracking (MOT), and representation learning on radar data is required. Previous hands-on experience with open indoor and outdoor radar datasets is a plus. Familiarity with basic radar concepts and MERL's recent work in radar perception is an asset. The intern will work closely with MERL researchers to develop novel algorithms, design experiments with MERL's in-house testbed, and prepare results for patents and publication. The internship is expected to last 3 months, with a preferred start date after June 2025.

      Required Specific Experience

      • Solid understanding of state-of-the-art perception frameworks including transformer-based (e.g., DETR) and diffusion-based (e.g., DiffusionDet) methods.
      • Hands-on experience with open large-scale radar datasets such as MMVR, HIBER, RADIATE, and K-Radar.
      • Proficiency in Python and experience with job scheduling on GPU clusters using tools like Slurm.
      • Proven publication records in top-tier venues such as CVPR, ICCV, ECCV, NeurIPS.
      • Knowledge of basic radar concepts such as FMCW, MIMO, (micro-) Doppler signature, radar point clouds, heatmaps, and raw ADC waveforms.
      • Familiarity with MERL's recent radar perception research such as TempoRadar, SIRA, MMVR, and RETR.
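As quick background on the FMCW concept listed above (a standard textbook relation, not specific to any MERL system), the mapping from a measured beat frequency to target range can be sketched as:

```python
# An FMCW chirp of bandwidth B swept over duration T, mixed with its
# delayed echo, yields a beat frequency f_b = S * 2R / c, where
# S = B / T is the chirp slope. Inverting gives R = c * f_b / (2 * S).
c = 3e8        # speed of light, m/s
B = 4e9        # chirp bandwidth: 4 GHz (e.g., the 77-81 GHz automotive band)
T = 40e-6      # chirp duration: 40 us
S = B / T      # chirp slope, Hz/s

def beat_to_range(f_beat_hz):
    return c * f_beat_hz / (2 * S)

# A target at 15 m produces a beat frequency of S * 2R / c = 10 MHz.
print(beat_to_range(10e6))  # → 15.0 (meters)
```

The same slope S also governs range resolution (c / (2B)), which is why wide millimeter-wave bandwidths matter for the indoor perception tasks above.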

    • ST0081: Internship - Optical Sensing for Airflow Reconstruction

      The Computational Sensing team at MERL is seeking motivated and qualified individuals to develop algorithms for background-oriented schlieren (BOS) tomography. The project goal is to use both analytical and learning-based architectures, coupled with physics-informed machine learning, to reconstruct 3D air flows in an indoor setting from BOS measurements. Ideal candidates should be Ph.D. students with a solid background and publication record in any of the following or related areas: imaging inverse problems, large-scale optimization, differentiable scene rendering, learning-based modeling for imaging, and physics-informed neural networks. Preferred skills include experience with schlieren tomography, inverse rendering, neural scene representation, and computational imaging hardware. Publication of the results produced during our internships is expected. The duration of the internships is anticipated to be 3-6 months. Start date is flexible.

      Required Specific Experience

      • Experience with differentiable/physics-based rendering.

    • ST0126: Internship - Particle-Efficient Interacting Particle Systems for Inverse Problems

      The Computational Sensing Team at MERL is seeking an intern to work with MERL researchers on algorithms based on interacting particle systems for solving inverse problems. The focus of the project is particle-efficiency and applicability to non-log-concave posterior distributions (which may result from nonlinear forward operators). The project includes algorithm design, (finite-particle) convergence analysis, and/or empirical evaluation for challenging inverse problems such as full waveform inversion. The ideal candidate would be a PhD student with a solid background in applied probability, nonconvex optimization, or Bayesian sampling. Programming skills in Python or MATLAB are required. The duration is anticipated to be at least 3 months with a flexible start date.
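One well-known member of the interacting-particle family mentioned above is Stein variational gradient descent (SVGD). The toy numpy sketch below (a generic 1D illustration, not necessarily the algorithm the project will study) shows particles coupled through an RBF kernel: a driving term pulls them toward high-posterior regions while a repulsive term prevents collapse.

```python
import numpy as np

# Target posterior: N(mu, sigma^2), a deliberately simple stand-in for
# the non-log-concave posteriors arising from nonlinear forward operators.
mu, sigma = 2.0, 0.5

def grad_log_p(x):
    # Score function d/dx log N(x; mu, sigma^2).
    return -(x - mu) / sigma**2

def svgd_step(x, step=0.1, h=0.5):
    diff = x[:, None] - x[None, :]        # pairwise differences x_i - x_j
    k = np.exp(-diff**2 / (2 * h))        # RBF kernel matrix k(x_i, x_j)
    grad_k = diff / h * k                 # d k(x_j, x_i) / d x_j (repulsion)
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=1)) / len(x)
    return x + step * phi

rng = np.random.default_rng(0)
x = rng.standard_normal(30)               # 30 particles, initialized N(0, 1)
for _ in range(300):
    x = svgd_step(x)
# The particle mean approaches the posterior mean mu, while the kernel
# repulsion keeps the ensemble spread out rather than collapsing.
print(round(float(x.mean()), 2))
```

Particle efficiency, the project focus, asks how well such an ensemble covers the posterior when the number of particles is small relative to the problem dimension.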


    See All Internships for Computational Sensing
  • Openings


    See All Openings at MERL
  • Recent Publications

    •  Yataka, R., Cardace, A., Wang, P., Boufounos, P.T., Takahashi, R., "RETR: Multi-View Radar Detection Transformer for Indoor Perception", Advances in Neural Information Processing Systems (NeurIPS), November 2024.
      BibTeX TR2024-159 PDF Software
      • @inproceedings{Yataka2024nov3,
      • author = {Yataka, Ryoma and Cardace, Adriano and Wang, Pu and Boufounos, Petros T. and Takahashi, Ryuhei},
      • title = {RETR: Multi-View Radar Detection Transformer for Indoor Perception},
      • booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
      • year = 2024,
      • month = nov,
      • url = {https://www.merl.com/publications/TR2024-159}
      • }
    •  Sholokhov, A., Nabi, S., Rapp, J., Brunton, S., Kutz, N., Boufounos, P.T., Mansour, H., "Single-pixel imaging of spatio-temporal flows using differentiable latent dynamics", IEEE Transactions on Computational Imaging, October 2024.
      BibTeX TR2024-151 PDF
      • @article{Sholokhov2024oct,
      • author = {Sholokhov, Aleksei and Nabi, Saleh and Rapp, Joshua and Brunton, Steven and Kutz, Nathan and Boufounos, Petros T. and Mansour, Hassan},
      • title = {Single-pixel imaging of spatio-temporal flows using differentiable latent dynamics},
      • journal = {IEEE Transactions on Computational Imaging},
      • year = 2024,
      • month = oct,
      • url = {https://www.merl.com/publications/TR2024-151}
      • }
    •  Jin, S., Wang, P., Boufounos, P.T., Orlik, P.V., Takahashi, R., Roy, S., "Spatial-Domain Mutual Interference Mitigation for MIMO-FMCW Automotive Radar", IEEE Transactions on Vehicular Technology, DOI: 10.1109/TVT.2024.3467917, September 2024.
      BibTeX TR2024-148 PDF
      • @article{Jin2024sep,
      • author = {Jin, Sian and Wang, Pu and Boufounos, Petros T. and Orlik, Philip V. and Takahashi, Ryuhei and Roy, Sumit},
      • title = {Spatial-Domain Mutual Interference Mitigation for MIMO-FMCW Automotive Radar},
      • journal = {IEEE Transactions on Vehicular Technology},
      • year = 2024,
      • month = sep,
      • doi = {10.1109/TVT.2024.3467917},
      • issn = {1939-9359},
      • url = {https://www.merl.com/publications/TR2024-148}
      • }
    •  Rahman, M., Yataka, R., Kato, S., Wang, P., Li, P., Cardace, A., Boufounos, P.T., "MMVR: Millimeter-wave Multi-View Radar Dataset and Benchmark for Indoor Perception", European Conference on Computer Vision (ECCV), DOI: 10.1007/978-3-031-72986-7_18, September 2024, pp. 306–322.
      BibTeX TR2024-117 PDF Data
      • @inproceedings{Rahman2024sep,
      • author = {Rahman, Mahbub and Yataka, Ryoma and Kato, Sorachi and Wang, Pu and Li, Peizhao and Cardace, Adriano and Boufounos, Petros T.},
      • title = {MMVR: Millimeter-wave Multi-View Radar Dataset and Benchmark for Indoor Perception},
      • booktitle = {European Conference on Computer Vision (ECCV)},
      • year = 2024,
      • pages = {306--322},
      • month = sep,
      • publisher = {Springer},
      • doi = {10.1007/978-3-031-72986-7_18},
      • url = {https://www.merl.com/publications/TR2024-117}
      • }
    •  Shastri, S., Ma, Y., Boufounos, P.T., Mansour, H., "Deep Calibration and Operator Learning for Ground Penetrating Radar Imaging", European Signal Processing Conference (EUSIPCO), August 2024.
      BibTeX TR2024-128 PDF
      • @inproceedings{Shastri2024aug,
      • author = {Shastri, Saurav and Ma, Yanting and Boufounos, Petros T. and Mansour, Hassan},
      • title = {Deep Calibration and Operator Learning for Ground Penetrating Radar Imaging},
      • booktitle = {European Signal Processing Conference (EUSIPCO)},
      • year = 2024,
      • month = aug,
      • url = {https://www.merl.com/publications/TR2024-128}
      • }
    •  Zhang, X., Mao, W., Mowlavi, S., Benosman, M., Basar, T., "Controlgym: Large-Scale Control Environments for Benchmarking Reinforcement Learning Algorithms", Learning for Dynamics & Control Conference (L4DC), July 2024, pp. 181-196.
      BibTeX TR2024-098 PDF
      • @inproceedings{Zhang2024jul2,
      • author = {Zhang, Xiangyuan and Mao, Weichao and Mowlavi, Saviz and Benosman, Mouhacine and Basar, Tamer},
      • title = {Controlgym: Large-Scale Control Environments for Benchmarking Reinforcement Learning Algorithms},
      • booktitle = {Learning for Dynamics & Control Conference (L4DC)},
      • year = 2024,
      • pages = {181--196},
      • month = jul,
      • publisher = {PMLR},
      • url = {https://www.merl.com/publications/TR2024-098}
      • }
    •  Yataka, R., Wang, P., Boufounos, P.T., Takahashi, R., "SIRA: Scalable Inter-frame Relation and Association for Radar Perception", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2024, pp. 15024-15034.
      BibTeX TR2024-041 PDF Video
      • @inproceedings{Yataka2024jun,
      • author = {Yataka, Ryoma and Wang, Pu and Boufounos, Petros T. and Takahashi, Ryuhei},
      • title = {SIRA: Scalable Inter-frame Relation and Association for Radar Perception},
      • booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      • year = 2024,
      • pages = {15024--15034},
      • month = jun,
      • url = {https://www.merl.com/publications/TR2024-041}
      • }
    •  Vaca-Rubio, C., Wang, P., Koike-Akino, T., Wang, Y., Boufounos, P.T., Popovski, P., "Object Trajectory Estimation with Continuous-Time Neural Dynamic Learning of Millimeter-Wave Wi-Fi", IEEE Journal of Selected Topics in Signal Processing, DOI: 10.1109/JSTSP.2024.3388930, April 2024.
      BibTeX TR2024-044 PDF
      • @article{Vaca-Rubio2024apr,
      • author = {Vaca-Rubio, Cristian and Wang, Pu and Koike-Akino, Toshiaki and Wang, Ye and Boufounos, Petros T. and Popovski, Petar},
      • title = {Object Trajectory Estimation with Continuous-Time Neural Dynamic Learning of Millimeter-Wave Wi-Fi},
      • journal = {IEEE Journal of Selected Topics in Signal Processing},
      • year = 2024,
      • month = apr,
      • doi = {10.1109/JSTSP.2024.3388930},
      • issn = {1941-0484},
      • url = {https://www.merl.com/publications/TR2024-044}
      • }
    See All Publications for Computational Sensing
  • Videos

  • Software & Data Downloads