Project Highlights

My research interests are in the areas of computer vision, visual and multimedia technology, and robotics.
Across these projects, my students and I emphasize two common themes: formulating sound theories that use the physical, geometrical, and semantic properties involved in perceptual and control processes to create intelligent machines, and demonstrating working systems based on these theories.
My current projects include basic research and system development in computer vision (motion, stereo and object recognition), recognition of facial expressions, virtual(ized) reality, content-based video and image retrieval, VLSI-based computational sensors, medical robotics, and an autonomous helicopter.
My approach to vision is to transform the physical, geometrical, optical, and statistical processes that underlie vision into mathematical and computational models.
This approach yields algorithms that are far more powerful and revealing than traditional ad hoc methods based solely on heuristic knowledge.
In the first example, a user records a video of a scene or an object, either by moving the camera or by moving the object, and a three-dimensional model of the scene or object is then created from that video.
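The model-from-video pipeline relies on multi-view geometry. Below is a minimal sketch of one standard building block, linear (DLT) triangulation of a point from two views with known camera matrices; it is illustrative only, not the project's actual algorithm:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its two
    projections x1, x2, given 3x4 camera matrices P1 and P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two hypothetical cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
print(np.allclose(X_hat, X_true, atol=1e-6))  # True
```

A full system would repeat this over many frames and feature tracks, with camera poses themselves estimated from the video.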
The second example, the multi-baseline stereo method, is a new stereo theory that uses multi-image fusion to create a dense depth map of a natural scene.
Based on this theory, a video-rate stereo machine has been developed that can produce a 200×200 depth image at 30 frames/sec, aligned with an intensity image.
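The multi-baseline idea can be shown with a 1D toy example (an illustrative sketch under simplifying assumptions, not the stereo machine's implementation): SSD matching errors from several baselines are summed over candidate shifts per unit baseline (proportional to inverse depth), which sharpens the minimum and removes the ambiguity a single camera pair can suffer:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random(200)                 # 1D "scene" texture
true_invdepth = 3                       # pixels of shift per unit baseline
baselines = [1, 2, 3]                   # camera offsets from the reference
ref = scene
views = [np.roll(scene, b * true_invdepth) for b in baselines]

x, win = 100, np.arange(-5, 6)          # pixel of interest + matching window
sums = []
for d in range(0, 8):                   # candidate inverse-depth values
    err = 0.0
    for b, img in zip(baselines, views):
        # SSD between the reference window and the shifted window in view b
        err += np.sum((img[x + b * d + win] - ref[x + win]) ** 2)
    sums.append(err)
best = int(np.argmin(sums))
print(best)  # 3
```

Because each baseline predicts a different image shift for the same inverse depth, summing the errors leaves only the true value with a deep minimum.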
In contrast to conventional video, virtualized reality delays the selection of the viewing angle until viewing time, using techniques from computer vision and computer graphics.
Searches within a large data set or lengthy video would take a user through vast amounts of material irrelevant to the search topic.
The Informedia Digital Video Library, funded by NSF, ARPA, and NASA, is developing intelligent, automatic mechanisms to populate the video library and allow for a full-content knowledge-based search, retrieval and presentation of video.
The first successful example was an ultrafast range sensor that can produce approximately 1,000 frames of range images per second, an improvement of two orders of magnitude over the state of the art.
The work is based on biomechanics-based surgical simulations and on less invasive, more accurate vision-based techniques for determining the position of the patient's anatomy during robotic surgery.
Vision-based Autonomous Helicopter

An unmanned helicopter can take maximum advantage of the high maneuverability of helicopters in dangerous support tasks, such as search and rescue and firefighting, since it does not place a human pilot in danger.
The CMU Vision-Guided Helicopter Project (with Dr. Omead Amidi) has been developing the basic technologies for an unmanned autonomous helicopter, including robust control methods; vision algorithms for real-time object detection and tracking; integration of GPS, motion sensors, and vision output for robust positioning; and high-speed real-time hardware.
After testing various control algorithms and real-time vision algorithms with an electric helicopter on an indoor test stand, we developed a computer-controlled helicopter (4 m long) that carries two CCD cameras, GPS, gyros, and accelerometers, together with a multiprocessor computing system.
Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos.
From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Computer vision tasks include methods for acquiring, processing, analyzing and understanding digital images, and extraction of high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions. Understanding in this context means the transformation of visual images (the input of the retina) into descriptions of the world that can interface with other thought processes and elicit appropriate action.
This image understanding can be seen as the disentangling of symbolic information from image data using models constructed with the aid of geometry, physics, statistics, and learning theory. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images.
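As a toy illustration of that acquire-process-analyze-decide chain (the image, threshold, and decision rule here are all made up for demonstration):

```python
import numpy as np

# Acquire: a synthetic grayscale "image", dark background with a bright blob.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:30, 35:50] = 220

# Process: threshold into a binary foreground map.
mask = img > 128

# Analyze: measure the foreground region numerically.
area = int(mask.sum())
ys, xs = np.nonzero(mask)
centroid = (float(ys.mean()), float(xs.mean()))

# Understand: a symbolic decision derived from the numbers.
decision = "object present" if area > 50 else "no object"
print(area, centroid, decision)  # 150 (24.5, 42.0) object present
```

Real systems replace each stage with far richer models, but the shape of the pipeline, pixels in, symbols out, is the same.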
'Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding.'
The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.
The field was originally meant to mimic the human visual system, as a stepping stone to endowing robots with intelligent behavior. In 1966, it was believed that this could be achieved through a summer project, by attaching a camera to a computer and having it 'describe what it saw'. What distinguished computer vision from the prevalent field of digital image processing at that time was a desire to extract three-dimensional structure from images with the goal of achieving full scene understanding.
Studies in the 1970s formed the early foundations for many of the computer vision algorithms that exist today, including extraction of edges from images, labeling of lines, non-polyhedral and polyhedral modeling, representation of objects as interconnections of smaller structures, optical flow, and motion estimation. The next decade saw studies based on more rigorous mathematical analysis and quantitative aspects of computer vision.
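Edge extraction, the first item on that list, is classically done with small gradient kernels such as the 3×3 Sobel operator; a minimal sketch:

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (kx * patch).sum()   # horizontal gradient
            gy[i, j] = (ky * patch).sum()   # vertical gradient
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((10, 10)); img[:, 5:] = 1.0
edges = sobel_edges(img)
print(edges.max())  # 4.0, the strongest responses straddle the step
```

Flat regions give zero response, so thresholding the magnitude map yields the edge pixels.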
Later work at the intersection of computer graphics and computer vision included image-based rendering, image morphing, view interpolation, panoramic image stitching, and early light-field rendering. Recent work has seen a resurgence of feature-based methods, used in conjunction with machine learning techniques and complex optimization frameworks. Areas of artificial intelligence deal with autonomous planning or deliberation for robotic systems to navigate through an environment.
Some strands of computer vision research are closely related to the study of biological vision, just as many strands of AI research are closely tied to research into human consciousness and the use of stored knowledge to interpret, integrate, and utilize visual information.
On the other hand, it appears to be necessary for research groups, scientific journals, conferences and companies to present or market themselves as belonging specifically to one of these fields and, hence, various characterizations which distinguish each of the fields from the others have been presented.
Modern military concepts, such as 'battlefield awareness', imply that various sensors, including image sensors, provide a rich set of information about a combat scene which can be used to support strategic decisions.
The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity.
Several specialized tasks based on recognition exist. Several other tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produced the images.
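A minimal motion-estimation sketch in the spirit of differential (Lucas-Kanade-style) methods, assuming a single global translation between two synthetic frames; real systems estimate a velocity per window or per pixel:

```python
import numpy as np

def global_flow(I0, I1):
    """Estimate one global (dx, dy) between two frames by solving the
    optical-flow normal equations over the whole image."""
    Ix = np.gradient(I0, axis=1)        # spatial gradients of frame 0
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                        # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# A smooth synthetic frame and a copy shifted ~0.5 px to the right.
y, x = np.mgrid[0:64, 0:64].astype(float)
I0 = np.sin(0.2 * x) + np.cos(0.15 * y)
I1 = np.sin(0.2 * (x - 0.5)) + np.cos(0.15 * y)
dx, dy = global_flow(I0, I1)
print(dx, dy)  # approximately (0.5, 0.0)
```

The least-squares solve works because, for small shifts, brightness constancy linearizes to Ix·dx + Iy·dy ≈ -It.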
By first analysing the image data in terms of the local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.
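That two-step scheme, analyse local structure first and then control the filtering, can be sketched as follows (the gradient threshold and the 3×3 mean are arbitrary illustrative choices):

```python
import numpy as np

def edge_aware_smooth(img, grad_thresh=0.5):
    """(1) Analyse local structure via gradient magnitude; (2) average each
    pixel with its 3x3 neighbourhood only where the region looks flat."""
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if grad[i, j] < grad_thresh:          # flat region: smooth noise
                out[i, j] = img[i-1:i+2, j-1:j+2].mean()
            # near an edge: leave the pixel untouched
    return out

rng = np.random.default_rng(1)
step = np.zeros((32, 32)); step[:, 16:] = 10.0    # a strong vertical edge
noisy = step + 0.1 * rng.standard_normal(step.shape)
smoothed = edge_aware_smooth(noisy, grad_thresh=2.0)

# Noise shrinks in the flat areas while the step edge stays sharp.
print(np.abs(smoothed - step)[:, :10].mean() < np.abs(noisy - step)[:, :10].mean())
```

A plain box filter would blur the step; gating the smoothing on the analysis step is exactly what buys the better noise/edge trade-off described above.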
Some systems are stand-alone applications which solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc.
While inference refers to the process of deriving new, not explicitly represented facts from currently known facts, control refers to the process that selects which of the many inference, search, and matching techniques should be applied at a particular stage of processing.
Inference and control requirements for IUS are: search and hypothesis activation, matching and hypothesis testing, generation and use of expectations, change and focus of attention, certainty and strength of belief, and inference and goal satisfaction. There are many kinds of computer vision systems; nevertheless, all of them contain these basic elements: a power source, at least one image acquisition device (i.e., a camera), and a processor.
While traditional broadcast and consumer video systems operate at a rate of 30 frames per second, advances in digital signal processing and consumer graphics hardware have made high-speed image acquisition, processing, and display possible for real-time systems on the order of hundreds to thousands of frames per second.
When combined with a high-speed projector, fast image acquisition allows 3D measurement and feature tracking to be realised. Egocentric vision systems are composed of a wearable camera that automatically takes pictures from a first-person perspective.
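Such 3D measurement rests on triangulation, where depth is inversely proportional to disparity: Z = f·B/d for focal length f in pixels, baseline B, and disparity d in pixels. A one-line sketch with made-up camera numbers:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulated depth in metres: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 10 cm baseline, 16 px disparity.
print(depth_from_disparity(800.0, 0.1, 16.0))  # 5.0 metres
```

The same relation explains why high-speed acquisition matters: the disparity of a fast-moving feature must be re-measured every frame to keep the depth estimate current.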
New computer vision challenge wants to teach robots to see in 3D
The ImageNet Challenge, which has boosted the development of image-recognition algorithms, will be replaced by a new competition next year that aims to help robots see the world in all its depth.
Since 2010, researchers have trained image recognition algorithms on the ImageNet database, a go-to set of more than 14 million images hand-labelled with information about the objects they depict.
In 2015, a team from Microsoft built a system that was over 95 per cent accurate, surpassing human performance for the first time in the challenge’s history.
Although the details of this competition have yet to be decided, it will tackle a problem computer vision has yet to master: making systems that can classify objects in the real world, not just in 2D images, and describe them using natural language.
Building a large database of images complete with 3D information would allow robots to be trained to recognise objects around them and map out the best route to get somewhere.
The existing ImageNet database consists of images collected from across the internet and then labelled by hand, but these lack the depth information needed to understand a 3D scene.
The database for the new competition could consist of digital models that simulate real-world environments or 360-degree photos that include depth information, says Berg.
On 17 September 2021
Computer Vision with MATLAB for Object Detection and Tracking
Computer vision uses images and video to detect, classify, and track.
Computer Vision System Design Deep Learning and 3D Vision
Computer Vision Inspection
100% inspection computer vision system integrated in our Roll to Roll machines. Fully developed by AIS Vision System & In2 Printing Solutions.
Complex Machine Vision System for Quality Control
In this highly complex machine vision system, Industrial Vision Systems Ltd have designed and built a machine with three high-resolution digital...
OpenCV Face Detection with Raspberry Pi - Robotics with Python p.7
Next, we're going to touch on using OpenCV with the Raspberry Pi's camera, giving our robot the gift of sight. There are many steps involved in this process, so there's a lot to cover...
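For context on the mechanism: OpenCV's classic face detector (a Haar cascade) evaluates rectangle-contrast features in constant time using an integral image. A minimal numpy sketch of that idea, not the full detector:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero pad row/column, so any rectangle sum
    costs just four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] from four corners of the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    return rect_sum(ii, r, c, h, w // 2) - rect_sum(ii, r, c + w // 2, h, w // 2)

img = np.zeros((8, 8)); img[:, :4] = 1.0      # bright left, dark right
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 8, 8))               # 32.0, total brightness
print(haar_two_rect(ii, 0, 0, 8, 8))          # 32.0, strong vertical contrast
```

A cascade applies thousands of such features at every window position and scale; the constant-time rectangle sums are what make that affordable on hardware like the Raspberry Pi.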
COMPUTER VISION CONTROLLED ROBOT ARM
The focus of this project is to implement computer vision using a webcam and use it within robotics; it includes making a working robot using electronic device spare parts, such as parts of old...
Mobile Robots Localization Using Computer Vision and Sphere Markers
This project provides a resource for localization by external vision that can be easily applied to mobile robots. A marker...
Robotic Arm finding a target and avoiding obstacles (Computer Vision)
This project was developed during my studies as my final exam work. In my diploma work, I developed a semi-automated robotic system composed of two cameras attached to a metal construction,...
How we teach computers to understand pictures | Fei Fei Li
When a very young child looks at a picture, she can identify simple elements: "cat," "book," "chair." Now, computers are getting smart enough to do that too. What's next? In a thrilling talk,...