About the AI Explorer

Overview

In 2016, the Harvard Art Museums' department of Digital Infrastructure and Emerging Technology (DIET) began using artificial intelligence to describe the museums' collection. Since then, DIET has built a research dataset of 28,364,745 machine-generated descriptions and tags covering more than 100,000 artworks. Ranging from feature recognition to face analysis that predicts gender, age, and emotion, the data reveals how computers interpret paintings, photographs, and sculptures. This website lets users explore that data by searching for artworks by machine-generated keyword and by viewing the aggregated annotations for individual pieces.

Why?

The Harvard Art Museums is using computer vision to categorize, tag, describe, and annotate its collection. Because the computer has no context or formal training in art history, it views and annotates the collection as if walking into an art museum for the first time. The perspective offered by AI is therefore closer to that of the general public than to that of experts. Currently, the Harvard Art Museums' search interface relies on descriptions written by art historians; adding AI-generated annotations to the database makes the collection more accessible to non-specialists.

How?

The Harvard Art Museums has collected machine-generated data on artworks from five AI and computer-vision services: Amazon Rekognition, Clarifai, Imagga, Google Vision, and Microsoft Cognitive Services. For each art piece, these services provide interpretations, known as "annotations," that include generated tags and captions as well as object, face, and text recognition. When a user searches for a keyword, the site matches it against the machine-generated tags and returns the artworks that carry that tag. From there, the user can open an individual piece to see and compare the annotations from the five services, as sketched below.
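
The keyword search can be pictured as an inverted index from machine-generated tags to artwork identifiers. The following Python sketch is illustrative only: the record fields, sample data, and function names are assumptions for explanation, not the site's actual code or data.

```python
from collections import defaultdict

# Hypothetical annotation records: each links an artwork to one
# machine-generated tag produced by one of the five services.
ANNOTATIONS = [
    {"artwork_id": 101, "service": "Clarifai",           "tag": "portrait"},
    {"artwork_id": 101, "service": "Amazon Rekognition", "tag": "painting"},
    {"artwork_id": 202, "service": "Google Vision",      "tag": "portrait"},
    {"artwork_id": 303, "service": "Imagga",             "tag": "sculpture"},
]


def build_tag_index(annotations):
    """Map each lowercased tag to the set of artwork IDs that carry it."""
    index = defaultdict(set)
    for ann in annotations:
        index[ann["tag"].lower()].add(ann["artwork_id"])
    return index


def search_by_keyword(index, keyword):
    """Return the artworks whose machine-generated tags match the user's keyword."""
    return sorted(index.get(keyword.strip().lower(), set()))


def annotations_for_artwork(annotations, artwork_id):
    """Gather every service's annotations for one piece so they can be compared."""
    return [a for a in annotations if a["artwork_id"] == artwork_id]


if __name__ == "__main__":
    index = build_tag_index(ANNOTATIONS)
    for artwork_id in search_by_keyword(index, "portrait"):
        print(artwork_id, annotations_for_artwork(ANNOTATIONS, artwork_id))
```

In this toy example, searching for "portrait" would surface artworks 101 and 202, and opening either one would list the tags each service assigned to it.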

Start exploring