January 13, 2015: Recent Yale grad David Luan is interested in perception: how computers and other machines “see” things and report back to us what they see. After internships at iRobot and Microsoft, Luan immersed himself in the world of robotics in San Francisco among other makers and hackers, many of whom were building robots and “smart” devices that were equipped with cameras. But no one had yet discovered how to build systems that could make use of all the visual data these cameras were capturing. He and his friend Sanchit Arora, who graduated from the University of Pennsylvania in 2013 with a Master’s in Robotics, set out to create that system—never realizing its far-reaching implications.
“We wanted to create a drop-in API [application programming interface] where any developer can make use of the visual data coming through those cameras without needing to know any computer vision or machine learning,” Luan says. They launched the API in 2013 and caused a buzz on Hacker News, a tech news site created by Paul Graham, cofounder of Silicon Valley accelerator Y Combinator.
The response was immediate. Days after the launch, developers were using their code and suggesting new uses: counting cars in London, distinguishing birds from squirrels at bird feeders, determining the position of animals as seen from a drone. Within weeks, Luan was being contacted by larger companies that saw even broader applications for their code: augmented reality, analyzing social media for brands, online advertising, automated security.
“Their use cases were far more diverse than we had initially conceived,” Luan says. “Companies in many industries have large quantities of visual data that they need to be able to analyze and ask questions on top of. So that’s the larger problem we set out to address.”
Luan and Arora were accepted into the YEI Fellowship at the Yale Entrepreneurial Institute in 2013, spending 10 weeks over the summer developing their startup Dextro with the goal of analyzing images online. Soon, they widened the scope to include analyzing video—another technological feat. Since leaving YEI, Dextro has set up shop in New York City, raised $1.56 million, including $100,000 from the YEI Innovation Fund, and has a staff of five working to perfect their latest video-content-analyzing API.
The system they’ve built allows developers to pick a set of categories such as “automotive,” “cats” and “Best Buy,” and run those queries against a video or video dataset. Dextro’s API then reports how relevant each category is to a particular video.
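In outline, a developer supplies a list of category labels, submits them with a video, and gets back a relevance score per category. A minimal sketch of what working with such an API might look like — the field names, score range, and helper functions here are illustrative assumptions, not Dextro’s documented interface:

```python
import json

def build_query(video_url, categories):
    """Build a JSON request body pairing a video with the categories
    to score. (Hypothetical field names, for illustration only.)"""
    return json.dumps({"video_url": video_url, "categories": categories})

def rank_results(scores):
    """Given per-category relevance scores returned by the service,
    sort them highest-first so the dominant categories surface."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: query a video for three categories, then rank a
# (made-up) response the service might send back.
query = build_query("https://example.com/clip.mp4",
                    ["automotive", "cats", "Best Buy"])
response = {"automotive": 0.91, "Best Buy": 0.40, "cats": 0.02}
ranked = rank_results(response)  # most relevant category first
```

A brand-monitoring developer, for instance, could run such queries over a crawl of public videos and keep only those where their brand’s category scores above a threshold.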
When news broke of Dextro’s video-analyzing API, they were voted to the front page of Hacker News again. The startup was also recently profiled on the tech site GigaOm, which noted the growing excitement around computer vision platforms.
“Our customers are developers,” Luan says. “People can put this as a processing engine to crawl the web and search videos.” Companies can use Dextro’s system to discover where their products or brands appear in videos from YouTube to Vimeo, among many other use cases. The startup’s ultimate goal, he says, is “to give all developers [access to] the capabilities that Google and Facebook are rumored to have.”
CONTACT: Brita Belli, Communications Officer, Yale Entrepreneurial Institute, firstname.lastname@example.org