Visual Cloud Computing Methods Could Help First Responders in Disaster Scenarios
Algorithms developed by MU researchers could provide critical data for quick decisions
June 23rd, 2016
COLUMBIA, Mo. – In natural or man-made disasters, the ability to process massive amounts of visual electronic data quickly and efficiently could mean the difference between life and death for survivors. Visual data captured by numerous security cameras, personal mobile devices and aerial video can guide first responders and law enforcement. That data can be critical in terms of knowing where to send emergency personnel and resources, tracking suspects in man-made disasters, or detecting hazardous materials. Recently, a group of computer science researchers from the University of Missouri developed a visual cloud computing architecture that streamlines that processing.
“In disaster scenarios, the amount of visual data generated can create a bottleneck in the network,” said Prasad Calyam, assistant professor of computer science in the MU College of Engineering. “This abundance of visual data, especially high-resolution video streams, is difficult to process even under normal circumstances. In a disaster situation, the computing and networking resources needed to process it may be scarce or even unavailable. We are working to develop the most efficient way to process data and study how to quickly present visual information to first responders and law enforcement.”
The research team, including Kannappan Palaniappan and Ye Duan, associate professors in the Department of Computer Science, developed a framework for disaster incident data computation that links the system to mobile devices in a mobile cloud. Algorithms designed by the team help determine what information needs to be processed by the cloud and what information can be processed on local devices, such as laptops and smartphones. This spreads the processing over multiple devices and helps responders receive the information faster.
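The placement decision described above can be illustrated with a simple cost model. The sketch below is our illustration, not the team's actual algorithm: it assumes a hypothetical scheduler that compares the estimated time to process a task on a local device against the time to upload the data and process it in the faster core cloud, and picks whichever finishes sooner.

```python
# Illustrative sketch (not the authors' algorithm): a cost-based scheduler
# that decides whether a visual-processing task runs on a local edge device
# or is offloaded to the core cloud. All names and parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    data_mb: float        # size of the visual data to move if offloaded
    compute_units: float  # relative processing cost of the task

def place_task(task, local_capacity, uplink_mbps, cloud_speedup=10.0):
    """Return 'local' or 'cloud' by comparing estimated completion times.

    Assumed model: local time is compute cost divided by device capacity;
    cloud time adds the upload delay but benefits from faster processing.
    """
    local_time = task.compute_units / local_capacity
    transfer_time = task.data_mb * 8 / uplink_mbps  # seconds to upload
    cloud_time = transfer_time + task.compute_units / (local_capacity * cloud_speedup)
    return "local" if local_time <= cloud_time else "cloud"

# A light task on bulky data stays at the edge; heavy analytics go to the cloud.
print(place_task(Task("motion-summary", data_mb=20, compute_units=5),
                 local_capacity=1.0, uplink_mbps=10))      # local
print(place_task(Task("3d-reconstruction", data_mb=40, compute_units=500),
                 local_capacity=1.0, uplink_mbps=10))      # cloud
```

Under this toy model, a congested uplink (as in a disaster zone) naturally pushes more work onto local devices, which matches the motivation the researchers describe.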
“Often, we see many of the same images from overlapping cameras,” Palaniappan said. “Responders generally do not need to see two separate pictures but rather the distinctive parts. The mosaic stitching we helped define happens at the periphery of the network to limit the amount of data that needs to be sent to the cloud. This is a natural way of compressing visual data without losing information. Clever algorithms help determine what types of visual processing to perform in the edge or fog of the network, and what data and computation should be done in the core cloud.”
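The idea of keeping only the distinctive parts of overlapping views can be sketched in a few lines. The toy example below is our illustration, not the paper's method: it assumes edge devices split each camera frame into tiles, fingerprint each tile, and forward only tiles whose content has not already been seen from another camera.

```python
# Illustrative sketch (not the paper's algorithm): deduplicate overlapping
# camera views at the network edge by fingerprinting frame tiles and keeping
# only the first occurrence of each tile's content.

import hashlib

def tile_fingerprints(frame, tile=2):
    """Yield (row, col, digest) for each tile-by-tile block of a 2D frame."""
    h, w = len(frame), len(frame[0])
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            block = bytes(frame[i][j]
                          for i in range(r, min(r + tile, h))
                          for j in range(c, min(c + tile, w)))
            yield r, c, hashlib.sha1(block).hexdigest()

def distinctive_tiles(frames, tile=2):
    """Return (camera, row, col) tiles worth uploading: first occurrence only."""
    seen, keep = set(), []
    for cam, frame in enumerate(frames):
        for r, c, digest in tile_fingerprints(frame, tile):
            if digest not in seen:
                seen.add(digest)
                keep.append((cam, r, c))
    return keep

# Two cameras with identical views: the second contributes no new tiles,
# so only camera 0's tiles are forwarded to the cloud.
view = [[10, 10, 20, 20],
        [10, 10, 20, 20]]
result = distinctive_tiles([view, view])
print(result)  # only tiles from camera 0
```

A real system would use robust visual features rather than exact hashes so that tiles seen from slightly different angles still match, but the bandwidth-saving principle is the same: redundant pixels never leave the edge.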
“Incident-supporting visual cloud computing utilizing software-defined networking” was recently published in the journal IEEE Transactions on Circuits and Systems for Video Technology in a special issue on cloud computing for mobile devices. Guna Seetharaman of the U.S. Naval Research Laboratory also contributed to the study. Funding for the project came from a combination of ongoing grants from the National Science Foundation, Air Force Research Laboratory and the U.S. National Academies Jefferson Science Fellowship. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
Editor’s Note: For more on the story, please see: “Computer science collaboration leads to improvements in data transmissions in disasters.”