Human Generated Data

Title

Untitled (women in long dresses and crowns dancing in a line on stage)

Date

1946

People

Artist: Jack Gould, American

Classification

Photographs

Credit Line

Harvard Art Museums/Fogg Museum, Transfer from the Carpenter Center for the Visual Arts, American Professional Photographers Collection, 4.2002.14303

Machine Generated Data

Tags

Amazon
created on 2022-01-29

Human 97.5
Person 95.9
Person 95.2
Person 93.7
Dance Pose 92.5
Leisure Activities 92.5
Person 90.5
Person 90.2
Clothing 83.5
Apparel 83.5
Dance 82.1
Person 79.3
Person 67.9
Crowd 66.6
Photography 63.5
Photo 63.5
Person 62.9
Person 61.9
Flower 59.6
Plant 59.6
Blossom 59.6
Portrait 58.1
Face 58.1
Stage 55.5

Clarifai
created on 2023-10-29

people 99.7
group 98.6
many 97.2
wear 94.2
adult 93.5
man 92.7
child 91.9
dress 91.7
group together 91.4
leader 89.2
woman 88.6
administration 87.8
crowd 83.2
outfit 81.9
wedding 79.7
boy 78
veil 77.5
art 74.1
ceremony 73.7
outerwear 66.6

Imagga
created on 2022-01-29

boutique 41.5
hall 29.2
crowd 28.8
hanger 28.2
coat hanger 26.7
business 22.5
group 21.8
outfit 21.5
people 20.6
clothing 20.5
person 18.4
businessman 17.6
team 16.1
support 16.1
teamwork 15.8
silhouette 15.7
businesswoman 14.5
clothes 14
male 13.5
man 13.4
audience 12.7
men 12
travel 12
women 11.9
design 11.8
store 11.3
scene 11.2
occupation 11
work 11
black 10.3
shop 9.9
interior 9.7
job 9.7
flag 9.7
mortarboard 9.7
boss 9.6
gown 9.5
presentation 9.3
adult 9.1
fashion 9
dress 9
room 9
cheering 8.8
stadium 8.8
vibrant 8.7
symbol 8.7
icon 8.7
water 8.7
patriotic 8.6
nation 8.5
lights 8.3
garment 8.3
river 8
hanging 7.9
supporters 7.9
closet 7.9
urban 7.9
bright 7.9
president 7.8
suit 7.8
nighttime 7.8
speech 7.8
corporate 7.7
modern 7.7
motion 7.7
leader 7.7
cap 7.7
dark 7.5
human 7.5
row 7.4
landscape 7.4
vivid 7.4
success 7.2
snow 7.1
day 7.1
academic gown 7.1
indoors 7

Google
created on 2022-01-29

Microsoft
created on 2022-01-29

text 91.3
dress 84
standing 81.9
group 76.4
white 73.1
posing 66.2
clothing 62.9
old 49.4
lined 39.1
line 37.2
clothes 23.1
several 10.6

Color Analysis

Face analysis

Amazon

Google

AWS Rekognition

Age 28-38
Gender Female, 83.1%
Happy 54.9%
Surprised 42.1%
Calm 0.8%
Sad 0.6%
Angry 0.5%
Fear 0.4%
Disgusted 0.4%
Confused 0.2%

AWS Rekognition

Age 30-40
Gender Female, 83.9%
Sad 71.9%
Calm 14.5%
Happy 4.5%
Fear 3.9%
Surprised 2.5%
Confused 1.1%
Disgusted 0.8%
Angry 0.7%

AWS Rekognition

Age 13-21
Gender Female, 95.1%
Calm 32.4%
Fear 29.6%
Sad 17.2%
Disgusted 7.6%
Surprised 5.7%
Angry 3.2%
Confused 2.7%
Happy 1.6%

AWS Rekognition

Age 38-46
Gender Female, 63.6%
Happy 84.2%
Surprised 7.4%
Sad 2.6%
Calm 1.8%
Fear 1.2%
Disgusted 1.1%
Confused 1%
Angry 0.6%

AWS Rekognition

Age 50-58
Gender Male, 98.7%
Sad 51.9%
Calm 35.1%
Happy 5.1%
Disgusted 2.6%
Angry 2.1%
Confused 1.5%
Fear 0.8%
Surprised 0.8%

AWS Rekognition

Age 25-35
Gender Female, 58.8%
Fear 93.7%
Calm 3%
Surprised 1%
Angry 0.7%
Happy 0.6%
Confused 0.4%
Disgusted 0.3%
Sad 0.3%

AWS Rekognition

Age 22-30
Gender Female, 77.8%
Angry 42.7%
Fear 39.8%
Surprised 7%
Disgusted 2.8%
Calm 2.7%
Happy 2.7%
Sad 1.2%
Confused 1.1%

AWS Rekognition

Age 23-31
Gender Male, 86.1%
Calm 98.8%
Confused 0.3%
Sad 0.3%
Surprised 0.2%
Disgusted 0.1%
Angry 0.1%
Fear 0.1%
Happy 0%

AWS Rekognition

Age 21-29
Gender Male, 84.1%
Calm 70.1%
Sad 23.5%
Happy 3.5%
Angry 1.1%
Confused 0.6%
Disgusted 0.6%
Surprised 0.3%
Fear 0.2%

AWS Rekognition

Age 25-35
Gender Female, 80.7%
Sad 80.5%
Calm 12.8%
Confused 3.7%
Happy 0.9%
Disgusted 0.8%
Surprised 0.5%
Angry 0.4%
Fear 0.3%

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very likely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very likely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Feature analysis

Amazon

Person
Person 95.9%
Person 95.2%
Person 93.7%
Person 90.5%
Person 90.2%
Person 79.3%
Person 67.9%
Person 62.9%
Person 61.9%

Categories

Text analysis

Amazon

113
M. 113
M.