Human Generated Data

Title

Untitled (family wearing Native American costumes)

Date

c. 1950

People

Artist: Lucian and Mary Brown, American

Classification

Photographs

Credit Line

Harvard Art Museums/Fogg Museum, Transfer from the Carpenter Center for the Visual Arts, American Professional Photographers Collection, 4.2002.17790

Machine Generated Data

Tags

Amazon
created on 2022-02-26

Human 99.8
Person 99.8
Person 99.6
Person 99.5
Person 99.3
Chair 98.8
Furniture 98.8
Person 93.7
Person 93
Clothing 90.4
Shoe 90.4
Footwear 90.4
Apparel 90.4
Person 86.4
Person 86.1
People 74.6
Meal 73.8
Food 73.8
Art 68.2
Female 65.7
Figurine 65.2
Crowd 62.9
Dish 62.9
Toy 62.8
Shorts 59.6
Table 56.9
Shoe 56.7
Shoe 54.1
Person 53.6
Person 43.5

Imagga
created on 2022-02-26

dancer 32.5
person 29.1
performer 26.7
musical instrument 26.6
wind instrument 23.3
people 22.3
adult 21.7
brass 20.9
man 18.2
entertainer 17.8
black 17.4
male 17
body 16.8
lifestyle 16.6
sport 14.3
exercise 13.6
human 13.5
professional 13.3
men 12.9
dance 12.1
stage 12
active 11.8
lady 11.4
group 11.3
attractive 11.2
portrait 11
fitness 10.8
fashion 10.6
sexy 10.4
trombone 10.4
legs 10.4
motion 10.3
anatomy 9.7
player 9.5
art 9.5
elegant 9.4
percussion instrument 9.3
action 9.3
business 9.1
one 9
women 8.7
athlete 8.7
jump 8.6
youth 8.5
music 8.5
casual 8.5
elegance 8.4
studio 8.4
health 8.3
musician 8.3
holding 8.3
fun 8.2
style 8.2
happy 8.1
dress 8.1
cool 8
indoors 7.9
smile 7.8
color 7.8
model 7.8
modern 7.7
performance 7.7
silhouette 7.4
platform 7.4
competition 7.3
pose 7.2
team 7.2
medical 7.1
teacher 7

Google
created on 2022-02-26

Microsoft
created on 2022-02-26

text 98.7
cartoon 95.2
window 94.3
clothing 92
person 89.5
drawing 83.4
man 77.3
old 68.4
black and white 64.4
posing 39.3

Face analysis

AWS Rekognition

Age 28-38
Gender Male, 99%
Happy 91.2%
Calm 4.5%
Surprised 1.5%
Sad 0.9%
Disgusted 0.6%
Confused 0.6%
Angry 0.6%
Fear 0.1%

AWS Rekognition

Age 42-50
Gender Female, 76.2%
Calm 68.5%
Happy 25.2%
Sad 3.8%
Confused 0.9%
Disgusted 0.6%
Angry 0.4%
Surprised 0.4%
Fear 0.2%

AWS Rekognition

Age 23-31
Gender Male, 96.7%
Happy 77.3%
Calm 15.2%
Sad 5%
Angry 1%
Surprised 0.8%
Fear 0.3%
Disgusted 0.3%
Confused 0.2%

AWS Rekognition

Age 6-14
Gender Male, 98.2%
Calm 86.3%
Sad 9.4%
Happy 2.4%
Confused 0.8%
Disgusted 0.7%
Angry 0.2%
Surprised 0.2%
Fear 0.1%

AWS Rekognition

Age 33-41
Gender Female, 99.3%
Happy 93.9%
Calm 5.5%
Surprised 0.4%
Sad 0.1%
Disgusted 0.1%
Fear 0%
Angry 0%
Confused 0%

AWS Rekognition

Age 49-57
Gender Female, 88.9%
Calm 97%
Sad 1.2%
Happy 0.9%
Confused 0.5%
Disgusted 0.1%
Surprised 0.1%
Fear 0.1%
Angry 0.1%

AWS Rekognition

Age 28-38
Gender Male, 99.7%
Happy 66%
Calm 25.5%
Surprised 1.9%
Sad 1.7%
Fear 1.5%
Confused 1.5%
Disgusted 1.1%
Angry 0.8%

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Possible
Headwear Unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Possible

Feature analysis

Amazon

Person 99.8%
Shoe 90.4%

Captions

Microsoft

a group of people posing for a photo in front of a window 78.6%
a group of people posing for a photo 78.5%
a group of people standing next to a window 78.4%

Text analysis

Amazon

KODAK---ITW

Google

MJI7--YT
-
3.3A°2
MJI7--YT 3.3A°2 - - XAGON
XAGON