Colorado State University Professor Receives Defense Grant to Make Computers Translate Pictures into Words

Note to Reporters: A photo of Bruce Draper is available with the news release at

A Colorado State University computer scientist will spend the next two years teaching computers to analyze pictures and describe them in words, a capability that could eventually help the U.S. military with remote surveillance.

Associate Professor Bruce Draper and his team received a $625,000 grant from the Defense Advanced Research Projects Agency, or DARPA, to teach computers to “learn” from what they see and spit out physical descriptions that can be shared quickly and remotely. The idea is to help computers deliver critical, real-time information without human involvement, which would be ideal for military surveillance, Draper said.

“Right now, if the military wants to monitor a village in Afghanistan or Iraq, they have to have remote cameras with someone watching them,” Draper said. “This could someday allow them to use cameras they don’t have to watch. They could receive e-mailed text descriptions about what’s going on at these sites.”

Draper said there are plenty of non-military applications as well. For example, such a system could help designers plan new playgrounds by describing where children play and which swing sets or jungle gyms they use most.

Other Colorado State professors working with Draper on the grant are Ross Beveridge, also a computer science professor, and Michael Kirby and Chris Peterson, professors in the Department of Mathematics.

“If I put a camera in one spot, it’s going to have to adapt to wherever I put it. Where we work with the math department is in trying to understand the patterns of what you see,” Draper said. “We’re trying to have the camera learn from that based purely on its experience – we never tell it what the answer is.”

The Colorado State team is one of 12 across the country that received DARPA grants to tackle the same problem using different technical methods.

Draper’s team will develop the technology using thousands of short, mundane video clips of activities like people playing catch with a football.

“How you get from a video to a description of what’s in that video is very complex. It’s easy to get the image into the computer, but to get the computer to understand what’s in that image – that’s the trick,” Draper said. “DARPA has funded 12 teams with different approaches to try to solve that problem.”

Draper’s DARPA funding could grow to $1.5 million over the next five years.