Human Generated Data

Title

Untitled (bridesmaid combing bride's hair in vanity mirror while young girl watches)

Date

1945-1955

People

Artist: Martin Schweig, American 20th century

Classification

Photographs

Credit Line

Harvard Art Museums/Fogg Museum, Transfer from the Carpenter Center for the Visual Arts, American Professional Photographers Collection, 4.2002.9167

Machine Generated Data

Tags

Amazon
created on 2022-01-23

Person 99
Human 99
Person 98
Person 97.9
Person 97.5
Clothing 95.1
Apparel 95.1
Home Decor 73.7
Restaurant 73.7
Meal 72.4
Food 72.4
Furniture 71.1
Icing 64.2
Dessert 64.2
Cake 64.2
Cream 64.2
Creme 64.2
Person 63.6
Poster 62
Advertisement 62
Chair 59.7
Overcoat 58
Coat 58
Couch 57.9
Cafeteria 57.5
Dish 56.9
Collage 56.2
Room 56.1
Indoors 56.1
Cafe 56
Hat 55.1
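
Label/score pairs in this form are what Amazon Rekognition's DetectLabels API returns. A minimal sketch of such a call, assuming boto3 is configured with AWS credentials and that "photo.jpg" is a local copy of the image (both are illustrative assumptions, not part of this record):

```python
import boto3

# Rekognition client; region and credentials come from standard AWS config.
client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# DetectLabels returns label names with confidence scores on a 0-100 scale,
# matching the "Tag score" pairs listed above.
response = client.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=50,
    MinConfidence=55,  # the lowest score in the list above is 55.1
)

for label in response["Labels"]:
    print(f'{label["Name"]} {label["Confidence"]:.1f}')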

Clarifai
created on 2023-10-26

people 99.9
wear 98.9
woman 98.9
group 98.8
adult 98.7
man 96.5
wedding 94.1
actress 93.8
dress 93.5
three 93.5
administration 93.3
group together 92.3
two 91.3
outfit 90.6
child 89.2
monochrome 88.8
actor 87.4
dancing 87.1
veil 86.9
several 85.1
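
Clarifai concepts like these come from its general image-recognition model. A hedged sketch using the clarifai-grpc client, assuming a personal access token placeholder "YOUR_PAT" and the public "general-image-recognition" model (both assumptions; Clarifai scores are 0-1 and are scaled to percentages here to match the list above):

```python
from clarifai_grpc.channel.clarifai_channel import ClarifaiChannel
from clarifai_grpc.grpc.api import resources_pb2, service_pb2, service_pb2_grpc

stub = service_pb2_grpc.V2Stub(ClarifaiChannel.get_grpc_channel())
metadata = (("authorization", "Key YOUR_PAT"),)  # placeholder token

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

request = service_pb2.PostModelOutputsRequest(
    # Public app hosting Clarifai's general model (assumed here).
    user_app_id=resources_pb2.UserAppIDSet(user_id="clarifai", app_id="main"),
    model_id="general-image-recognition",
    inputs=[resources_pb2.Input(
        data=resources_pb2.Data(image=resources_pb2.Image(base64=image_bytes)))],
)
response = stub.PostModelOutputs(request, metadata=metadata)

for concept in response.outputs[0].data.concepts:
    print(concept.name, round(concept.value * 100, 1))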

Imagga
created on 2022-01-23

man 29.5
passenger 25.8
people 24.5
adult 19.4
male 17
person 16.2
vehicle 14.7
men 13.7
couple 13.1
human 12.7
wagon 12.3
work 11.8
black 10.9
car 10.9
clothing 10.7
love 10.2
inside 10.1
fashion 9.8
old 9.7
driver 9.7
portrait 9.7
wheeled vehicle 9.7
worker 9.6
shop 9.3
occupation 9.2
business 9.1
transportation 9
looking 8.8
sitting 8.6
room 8.4
happy 8.1
religion 8.1
uniform 7.8
travel 7.7
industry 7.7
senior 7.5
city 7.5
transport 7.3
dress 7.2
women 7.1
mask 7.1
surgeon 7
architecture 7
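
Imagga exposes tagging through a REST endpoint. A minimal sketch with the requests library, assuming placeholder API credentials and a local "photo.jpg" (both assumptions):

```python
import requests

# Imagga v2 tagging endpoint; key and secret are placeholders.
auth = ("YOUR_API_KEY", "YOUR_API_SECRET")

with open("photo.jpg", "rb") as f:
    response = requests.post(
        "https://api.imagga.com/v2/tags",
        auth=auth,
        files={"image": f},
    )

# Each result carries an English tag name and a 0-100 confidence,
# matching the pairs listed above.
for item in response.json()["result"]["tags"]:
    print(item["tag"]["en"], round(item["confidence"], 1))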

Google
created on 2022-01-23

Microsoft
created on 2022-01-23

text 97.7
person 95.5
black and white 93.2
clothing 91.1
dress 73.8
woman 64.4
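
Tags in this form are what Azure's Computer Vision tagging operation returns. A sketch using the azure-cognitiveservices-vision-computervision SDK, assuming a placeholder endpoint and key (Azure confidences are 0-1 and are scaled to percentages here to match the list above):

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from msrest.authentication import CognitiveServicesCredentials

# Endpoint and key are placeholders, not part of this record.
client = ComputerVisionClient(
    "https://YOUR_RESOURCE.cognitiveservices.azure.com/",
    CognitiveServicesCredentials("YOUR_KEY"),
)

with open("photo.jpg", "rb") as f:
    result = client.tag_image_in_stream(f)

for tag in result.tags:
    print(tag.name, round(tag.confidence * 100, 1))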

Color Analysis

Face analysis

Amazon

Google

AWS Rekognition

Age 23-33
Gender Female, 66.5%
Calm 93.8%
Surprised 3.9%
Disgusted 0.8%
Confused 0.6%
Happy 0.4%
Angry 0.2%
Sad 0.2%
Fear 0.1%

AWS Rekognition

Age 29-39
Gender Female, 65.5%
Surprised 82.4%
Calm 14.4%
Fear 0.9%
Sad 0.7%
Happy 0.6%
Confused 0.4%
Disgusted 0.3%
Angry 0.3%

AWS Rekognition

Age 23-33
Gender Female, 83.3%
Calm 58.8%
Sad 18%
Happy 15%
Confused 3%
Fear 1.9%
Surprised 1.3%
Disgusted 1.2%
Angry 0.9%
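
The three face blocks above follow the shape of Amazon Rekognition's DetectFaces output: one age range, one gender estimate, and a set of emotion confidences per detected face. A minimal sketch, again assuming configured AWS credentials and a local "photo.jpg":

```python
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] requests age range, gender, and emotions,
# the same fields shown in the face blocks above.
response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    print(f'Age {age["Low"]}-{age["High"]}')
    print(f'Gender {face["Gender"]["Value"]}, {face["Gender"]["Confidence"]:.1f}%')
    for emotion in sorted(face["Emotions"], key=lambda e: -e["Confidence"]):
        print(f'{emotion["Type"].title()} {emotion["Confidence"]:.1f}%')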

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely
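
Unlike Rekognition, Google Vision reports face attributes as likelihood buckets (VERY_UNLIKELY through VERY_LIKELY) rather than numeric scores, which is why the rows above read "Very unlikely". A sketch with the google-cloud-vision client, assuming configured Google credentials and a local "photo.jpg":

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)

# Each annotation carries one likelihood enum per attribute.
likelihood = vision.Likelihood
for face in response.face_annotations:
    print("Surprise", likelihood(face.surprise_likelihood).name)
    print("Anger", likelihood(face.anger_likelihood).name)
    print("Sorrow", likelihood(face.sorrow_likelihood).name)
    print("Joy", likelihood(face.joy_likelihood).name)
    print("Headwear", likelihood(face.headwear_likelihood).name)
    print("Blurred", likelihood(face.blurred_likelihood).name)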

Feature analysis

Amazon

Person 99%

Text analysis

Amazon

3
start n P 3
YT3RAS
M
M_M7
M_M7 YT3RAS ACHA
n
start
ACHA
P
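
Fragmentary strings like these are typical of Rekognition's DetectText run against a studio print, where it mostly picks up edge and proof markings rather than prose. A minimal sketch, assuming configured AWS credentials and a local "photo.jpg":

```python
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# DetectText returns both LINE and WORD detections, which is why the
# list above repeats the same fragments at different granularities.
response = client.detect_text(Image={"Bytes": image_bytes})

for detection in response["TextDetections"]:
    print(detection["Type"], detection["DetectedText"],
          round(detection["Confidence"], 1))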

Google

MH7 YT3RA2 0MA
MH7
YT3RA2
0MA
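
The Google results come from Vision's text detection, which returns one full-text annotation followed by per-word entries, hence the combined line above the individual fragments. A sketch under the same assumptions as the face-detection example:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# The first annotation is the full detected text; the rest are words.
response = client.text_detection(image=image)

for annotation in response.text_annotations:
    print(annotation.description)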