TARS work accepted at ECCV, one of the top three computer vision venues

Nikolas Lamb, Dr. Sean Banerjee, Dr. Natasha Banerjee

Nikolas Lamb, a PhD student in the Terascale All-sensing Research Studio (TARS), will present his accepted paper at the 2022 European Conference on Computer Vision (ECCV), one of the top three ranked venues for computer vision research.

Lamb is advised in his research on repairing damaged objects by Dr. Natasha Banerjee and Dr. Sean Banerjee, associate professors in the Department of Computer Science and co-directors of TARS. Lamb’s paper will be published in the conference proceedings. It is Clarkson’s first paper to be published at ECCV, a venue dominated by researchers from major technology companies such as Amazon, Google, Meta/Facebook, Microsoft, Adobe, and Apple, and from leading research institutions such as Harvard, Stanford, MIT, Columbia, Yale, Princeton, UC Berkeley, Carnegie Mellon, Oxford, Cambridge, and the Max Planck Institute, to name a few.

Given how quickly knowledge advances in computing, conferences are the norm for the immediate dissemination of results; as such, they are peer-reviewed and carry the same status that journals do in other fields. ECCV is globally recognized as one of the top three ranked venues for computer vision and is held only once every two years, making it one of the most competitive venues in which to publish computer vision research.

As reported on Google Scholar, ECCV has an h5-index of 186 (the h5-index is the largest number h such that h articles published in the last five years have at least h citations each) and ranks third among computer vision conferences by h5-index, the other two being the Conference on Computer Vision and Pattern Recognition (CVPR) and the International Conference on Computer Vision (ICCV). The conference also demonstrates the pervasive scientific impact of computer vision: it is currently the 40th-ranked publication venue (conference or journal) overall by h5-index, and 15th in Engineering and Computer Science. Lamb is one of a small number of attendees to receive an ECCV student grant, which covers registration and travel to the conference, held October 23-27, 2022.

In July 2022, Lamb presented MendNet, a then-state-of-the-art method for repairing damaged objects that uses deep neural networks to represent the structure of damaged, complete, and repaired objects, at the Symposium on Geometry Processing. A few months later, Lamb’s ECCV paper contributed a new algorithm, DeepMend, which overcomes the limitations of his earlier work by relating a mathematical representation of the occupancy of damaged and repaired objects to the occupancy of the complete object and the fracture surface, allowing a compact representation of shape via deep networks and establishing a new state of the art.
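For readers unfamiliar with the term, an occupancy function maps each 3D point to the probability that the point lies inside a shape. The sketch below is a minimal illustration of the general idea of combining two learned occupancy fields, one for an underlying complete shape and one for a break region, to describe a fractured shape and its repair part. The network architecture, latent codes, and the particular way the fields are combined here are illustrative assumptions, not the published DeepMend formulation.

```python
import torch
import torch.nn as nn

class OccupancyMLP(nn.Module):
    """Illustrative network: maps a 3D query point plus a latent shape code
    to an occupancy probability in [0, 1]."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, points, code):
        # points: (N, 3); code: (latent_dim,) broadcast to every query point
        code = code.unsqueeze(0).expand(points.shape[0], -1)
        return self.net(torch.cat([points, code], dim=-1)).squeeze(-1)

# Hypothetical fields: one for the complete object, one for the break region.
complete_occ = OccupancyMLP()
break_occ = OccupancyMLP()

def fractured_and_restoration(points, complete_code, break_code):
    """Combine the two fields: the fractured shape is taken as the part of the
    complete object outside the break region, and the restoration (repair) part
    as the complete object inside the break region. The choice of which side of
    the break is kept is an illustrative convention."""
    c = complete_occ(points, complete_code)
    b = break_occ(points, break_code)
    fractured = c * (1.0 - b)   # complete AND NOT break
    restoration = c * b         # complete AND break
    return fractured, restoration

# Example query: occupancy of 1,000 random points in the unit cube.
pts = torch.rand(1000, 3)
frac, rest = fractured_and_restoration(pts, torch.randn(64), torch.randn(64))
```

Because the fractured and restoration fields are derived from shared complete-shape and break fields, the two parts fit together by construction, which is what makes this style of decomposition attractive for repair.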

Lamb’s rapid, and ongoing, publication of new state-of-the-art algorithms is in keeping with the accelerating pace of computer science research. As Alexei Efros, winner of the Association for Computing Machinery (ACM) Prize in Computing and professor of computer science at the University of California, Berkeley, has said, “The half-life of computer knowledge is quite short. In machine learning, it’s about three months.”

Lamb’s research puts the repair of damaged objects in the hands of the average consumer, bringing us one step closer to the sustainable use of objects. It also bridges the gap between materials science and computer science research by using artificial intelligence to characterize the geometry of the damaged object, enabling repair in the wild. By using deep learning to hypothesize what a repair part should look like, Lamb’s work also contributes to the restoration of cultural heritage objects and items of personal significance, for example, a treasured piece of pottery.

TARS, of which Lamb is a member, conducts research on human-aware next-generation artificial intelligence and robotics systems. Research at TARS covers areas such as computer vision, computer graphics, human-computer interaction, robotics, virtual reality, and computational manufacturing. TARS supports the research of 15 graduate students and nearly 20 undergraduate students each semester. TARS houses one of the largest high-performance computing facilities at Clarkson, with over 275,000 CUDA cores and over 4,800 Tensor cores spread across more than 50 GPUs, and 1 petabyte of storage (almost full!). TARS is home to the Gazebo, a massively dense multi-modal, multi-viewpoint motion capture facility for imaging multi-person interactions that contains 192 high-speed 226-FPS cameras, 16 Microsoft Azure Kinect RGB-D sensors, 12 Sierra-Olympic Viento-G thermal cameras, and 16 surface electromyography (sEMG) sensors, and to the Cube, a one- and two-person 3D imaging facility containing 4 high-speed cameras, 4 RGB-D sensors, and 5 thermal cameras. The team thanks the Office of Information Technology for providing access to the ACRES GPU node with 4 V100s containing 20,480 CUDA cores and 2,560 Tensor cores.
