AI-powered cloud video platform
Video is clearly our favorite medium for consuming content: the average American spends six hours a day watching it, and video will occupy 82% of web traffic by 2021. With great smartphone cameras and the rise of video conferencing, we are now collectively recording massive amounts of video.
Yet, most of us don’t share the video we record with others. Why not?
We believe that the main constraint is the difficulty of video editing. Great content is hidden deep in troves of recorded video. Finding the right moments and editing share-worthy clips is time-consuming, expensive, and typically requires professionals.
We have created a machine learning-powered cloud video platform that anyone can use: if you can write a text document, you can search, edit, and share video with Reduct. Video hosted on our platform is searchable down to the millisecond at which something was said. When you find an interesting snippet, sharing it requires only selecting some text. You can even edit video using just the text of what was said: video editing becomes as easy as word processing. The platform is cloud-based, works on your smartphone, and is at least 10x faster than traditional tools.
Our product is currently in use, and we achieved profitability within the beachhead market of Design and UX Researchers before raising our pre-seed round. Customers include Cruise Automation, Spotify, Target, Dropbox, and IDEO. In addition to radically speeding up existing workflows, our product is enabling customers to create workflows that “would have been impossible” without our tools.
The cloud video market is large and growing in new ways. Internal enterprise video is growing at 20% CAGR and will be a $40B market in 2022. Video marketing is exploding: 89% of B2B marketers now use video, and digital video is expected to capture $135B of annual advertising spend next year. With mobile video traffic expected to grow 9x by 2021, we expect new markets to be created once editing constraints are significantly lowered.
The Reduct team is led by Prabhas Pokharel and Robert Ochshorn, who met in high school. After a CS undergrad at Harvard, Prabhas led product teams, started UNICEF’s first Innovations Lab, and attended the Stanford d.school. Robert did CS at Cornell, was a research assistant at Harvard and MIT, and conducted media-interface research at the Alan Kay-initiated CDG Labs. His work in speech recognition, machine learning interfaces, and language processing has been presented at Google Brain, Apple, and Stanford.