Visual Search is a feature in the Amazon App driven by computer vision technologies. I directed the redesign of the app to include a region of interest, tags, redesigned alerts, and photo upload. The feature has launched in the Amazon mobile app for iOS.
Product management had originally planned incremental revisions to our camera search feature for 2016. After reviewing our competitors and our usage statistics, I proposed a wholesale design revision of the app instead. Some of these improvements already existed on the product management roadmap, but they were to be executed one at a time, without a coherent design strategy. Had we taken that path, the app would have remained built around an interface strategy that was outdated and reflected neither our understanding of the customer nor advances in the underlying computer vision technology.
My plan was accepted, and after nine months of design and development, the redesigned app went live in October 2016.
During the redesign, my role was to direct the overall design, support the designer as he developed interaction design solutions, and coordinate with the engineers, scientists, front-end developers, and Seattle-based design teams impacted by our work.
We tested several approaches to starting the app, running three days of lab sessions with 12 subjects. Two approaches performed very well. One of them was a snap approach: the user takes a photo and is taken directly to results.
Because we used machine learning to drive the search results, we had no text to describe them. The engineering team worked with us to create tags by finding text searches that delivered similar result sets; that text became the platform for tagging.
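The idea of deriving tags from text searches with similar result sets can be sketched as a set-overlap heuristic. This is an illustrative reconstruction, not the team's actual algorithm; the function and field names, the Jaccard score, and the thresholds are all assumptions.

```python
def suggest_tags(image_results, text_query_results, top_n=5, min_overlap=0.3):
    """Rank candidate text queries by how much their result sets
    overlap the image-search result set (hypothetical sketch).

    image_results: list of product IDs returned by visual search
    text_query_results: dict mapping a text query -> list of product IDs
    """
    image_set = set(image_results)
    scores = {}
    for query, results in text_query_results.items():
        result_set = set(results)
        # Jaccard similarity between the two result sets
        overlap = len(image_set & result_set) / len(image_set | result_set)
        if overlap >= min_overlap:
            scores[query] = overlap
    # Highest-overlap queries become the displayed tags
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]
```

A query whose results closely match the visual-search results surfaces as a tag, while unrelated queries fall below the threshold and are dropped.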
To be successful in my role leading the Mobile Innovation Team, I needed a deep understanding of machine learning. That understanding let me shape the design of the computer vision feature to leverage results from several services: machine learning, visual search suggestions, category recognition, text recognition, and more. Each set of results engages the interface differently. Delivering a consistently designed UX to the customer despite these different kinds of results is a hidden challenge of working with computer vision, and one we met successfully.
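One common way to deliver a consistent UX over heterogeneous services is to normalize every service's response into a single card shape before it reaches the interface. The sketch below is a hypothetical illustration of that pattern; the result kinds and field names are invented, not Amazon's API.

```python
from dataclasses import dataclass

@dataclass
class ResultCard:
    """One uniform shape the UI renders, whatever service produced it."""
    title: str
    source: str

def normalize(raw: dict) -> ResultCard:
    """Map a raw service response to a ResultCard (hypothetical fields)."""
    kind = raw["kind"]
    if kind == "ml_match":
        return ResultCard(title=raw["label"], source="machine learning")
    if kind == "category":
        return ResultCard(title=raw["category_name"], source="category recognition")
    if kind == "text":
        return ResultCard(title=raw["detected_text"], source="text recognition")
    raise ValueError(f"unknown result kind: {kind}")
```

With an adapter like this, the interface renders one component, and each service's quirks stay behind the normalization layer.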