Human Generated Data

Title

Untitled (people in audience watching concert with Jello displays on ends of stage)

Date

1940

People

Artist: Durette Studio, American 20th century

Classification

Photographs

Credit Line

Harvard Art Museums/Fogg Museum, Transfer from the Carpenter Center for the Visual Arts, American Professional Photographers Collection, 4.2002.4213

Machine Generated Data

Tags (label and confidence score, %)

Amazon
created on 2019-06-01

Person 95.9
Human 95.9
Lamp 95.6
Lighting 87.5
Chandelier 85.9
Indoors 81.7
Room 81.7
Person 70.4
Person 66
Person 48.2
Person 45.7
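
These label/confidence pairs have the shape of output from Amazon Rekognition's DetectLabels API; the repeated Person rows are likely individual detected instances, which Rekognition reports with per-instance confidences. A minimal sketch of how such tags could be produced with boto3 (the image path is a placeholder; AWS credentials and region are assumed to be configured in the environment):

    import boto3

    rekognition = boto3.client("rekognition")

    # Read the photograph and request labels; the list above includes
    # labels down to roughly 45%, so a low minimum confidence is used.
    with open("photo.jpg", "rb") as f:  # placeholder path
        response = rekognition.detect_labels(
            Image={"Bytes": f.read()},
            MinConfidence=40,
        )

    for label in response["Labels"]:
        print(f"{label['Name']} {label['Confidence']:.1f}")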

Clarifai
created on 2019-06-01

indoors 94.8
inside 93.4
room 92.3
empty 88.7
contemporary 87.9
people 87.1
desktop 86.2
design 85.9
modern 85.4
family 85
window 84.3
man 84.2
architecture 84
light 83.9
no person 81.1
business 79.8
office 79.7
wall 79
house 78.8
furniture 77.1
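
These concepts match the output format of Clarifai's general image-recognition model. A rough sketch against Clarifai's v2 REST predict endpoint; the model ID, API key, image path, and exact payload shape are all assumptions based on the public v2 API, not confirmed by this record:

    import base64
    import requests

    with open("photo.jpg", "rb") as f:  # placeholder path
        image_b64 = base64.b64encode(f.read()).decode()

    # "general-image-recognition" and the key are placeholders for a
    # real model ID and credential.
    response = requests.post(
        "https://api.clarifai.com/v2/models/general-image-recognition/outputs",
        headers={"Authorization": "Key <api_key>"},
        json={"inputs": [{"data": {"image": {"base64": image_b64}}}]},
    )

    for concept in response.json()["outputs"][0]["data"]["concepts"]:
        print(f"{concept['name']} {concept['value'] * 100:.1f}")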

Imagga
created on 2019-06-01

art 30.6
drawing 29.8
sketch 27.4
design 26.5
pattern 25.3
clean 17.6
graphic 17.5
decoration 17.4
retro 17.2
frame 17.2
backgrounds 17.1
set 16.1
backdrop 15.7
transparent 15.3
shape 14.9
vintage 14.9
element 14.9
splash 14.8
old 14.7
wallpaper 14.6
representation 14.5
icon 14.3
digital 13.8
color 13.4
technology 12.6
style 12.6
decorative 12.5
artistic 12.2
card 12.1
line 12
grunge 11.9
floral 11.9
ornate 11.9
texture 11.8
creative 11.5
curve 11.4
wave 11.4
modern 11.2
ornament 11.2
cold 11.2
motion 11.1
web 11
glass 11
leaf 10.9
light 10.7
liquid 10.5
paper 10.4
water 10
drop 10
silhouette 9.9
lines 9.9
tracing 9.8
decor 9.7
business 9.7
antique 9.6
symbol 9.4
industry 9.4
architecture 9.4
holiday 9.3
drink 9.2
border 9.1
futuristic 9
snow 9
new 8.9
graphics 8.9
water faucet 8.8
boutique 8.8
curl 8.6
space 8.5
bubble 8.5
black 8.4
flowing 8.4
organic 8.4
sign 8.3
plant 8.3
template 8.3
map 8.1
cartoon 8
close 8
text 7.9
clear 7.9
drawn 7.8
splashing 7.7
flower 7.7
winter 7.7
frozen 7.7
ripple 7.6
menu 7.6
trendy 7.5
simple 7.5
greeting 7.4
elements 7.4
purity 7.4
artwork 7.3
stylish 7.2
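
These tags have the shape of Imagga's v2 tagging output. A hedged sketch against the documented /v2/tags endpoint; the API key, secret, and image path are placeholders:

    import requests

    # Imagga authenticates with HTTP basic auth using an API key/secret pair.
    response = requests.post(
        "https://api.imagga.com/v2/tags",
        auth=("<api_key>", "<api_secret>"),
        files={"image": open("photo.jpg", "rb")},  # placeholder path
    )

    for tag in response.json()["result"]["tags"]:
        print(f"{tag['tag']['en']} {tag['confidence']:.1f}")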

Google
created on 2019-06-01

White 97.9
Photograph 96.8
Snapshot 82.5
Black-and-white 74.4
Room 71.4
Photography 67.8
Monochrome 60.1
Style 51
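
These labels correspond to Google Cloud Vision label detection. A minimal sketch with the official Python client (assumes GOOGLE_APPLICATION_CREDENTIALS is set and a recent google-cloud-vision version; the client reports scores on a 0-1 scale, scaled here to match the percentages above):

    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("photo.jpg", "rb") as f:  # placeholder path
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description} {label.score * 100:.1f}")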

Microsoft
created on 2019-06-01

indoor 88.5
white 72.7
black and white 67.1
house 53.2
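
These tags match Azure Computer Vision image tagging. A sketch with the azure-cognitiveservices-vision-computervision client; the endpoint, subscription key, and image path are placeholders:

    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from msrest.authentication import CognitiveServicesCredentials

    client = ComputerVisionClient(
        "https://<your-resource>.cognitiveservices.azure.com/",
        CognitiveServicesCredentials("<subscription_key>"),
    )

    with open("photo.jpg", "rb") as f:  # placeholder path
        result = client.tag_image_in_stream(f)

    # Confidences are 0-1; scale to match the percentages above.
    for tag in result.tags:
        print(f"{tag.name} {tag.confidence * 100:.1f}")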

Face analysis

Amazon

AWS Rekognition

Age 14-25
Gender Female, 50.5%
Sad 49.9%
Happy 49.5%
Surprised 49.5%
Calm 49.9%
Disgusted 49.5%
Confused 49.6%
Angry 49.6%

AWS Rekognition

Age 20-38
Gender Female, 50.5%
Happy 49.9%
Disgusted 49.6%
Angry 49.5%
Calm 49.8%
Surprised 49.6%
Confused 49.5%
Sad 49.5%

AWS Rekognition

Age 19-36
Gender Female, 50.5%
Surprised 49.5%
Confused 49.5%
Disgusted 49.5%
Happy 50.1%
Sad 49.5%
Calm 49.8%
Angry 49.5%

AWS Rekognition

Age 30-47
Gender Female, 50.5%
Angry 49.5%
Happy 49.9%
Sad 49.5%
Disgusted 49.5%
Confused 49.5%
Calm 49.9%
Surprised 49.6%

AWS Rekognition

Age 20-38
Gender Female, 50.4%
Angry 49.6%
Happy 49.8%
Confused 49.6%
Calm 49.8%
Surprised 49.6%
Sad 49.6%
Disgusted 49.6%

AWS Rekognition

Age 19-36
Gender Female, 50.3%
Disgusted 49.5%
Surprised 49.6%
Angry 49.7%
Confused 49.7%
Sad 49.7%
Calm 49.8%
Happy 49.5%

AWS Rekognition

Age 26-43
Gender Female, 50.5%
Confused 49.6%
Calm 50.1%
Sad 49.5%
Surprised 49.5%
Angry 49.5%
Disgusted 49.6%
Happy 49.7%

AWS Rekognition

Age 26-43
Gender Female, 50.4%
Disgusted 49.7%
Happy 49.6%
Surprised 49.6%
Sad 49.6%
Angry 49.6%
Confused 49.6%
Calm 49.8%

AWS Rekognition

Age 26-43
Gender Female, 50.4%
Sad 50%
Angry 49.5%
Happy 49.5%
Confused 49.8%
Calm 49.6%
Surprised 49.6%
Disgusted 49.5%

AWS Rekognition

Age 23-38
Gender Female, 50.5%
Disgusted 49.5%
Calm 49.8%
Angry 49.5%
Sad 49.5%
Happy 50.1%
Surprised 49.5%
Confused 49.5%
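
Each block above reads like one entry from Rekognition's DetectFaces output; the near-uniform emotion scores, all hovering around 50%, suggest the model had no confident emotion estimate for these faces. A minimal boto3 sketch of how such per-face readouts could be produced (image path is a placeholder):

    import boto3

    rekognition = boto3.client("rekognition")

    with open("photo.jpg", "rb") as f:  # placeholder path
        response = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # required for age range, gender, and emotions
        )

    for face in response["FaceDetails"]:
        age, gender = face["AgeRange"], face["Gender"]
        print(f"Age {age['Low']}-{age['High']}")
        print(f"Gender {gender['Value']}, {gender['Confidence']:.1f}%")
        for emotion in face["Emotions"]:
            print(f"{emotion['Type'].title()} {emotion['Confidence']:.1f}%")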

Feature analysis

Amazon

Person 95.9%

Categories

Imagga

interior objects 97.5%
text visuals 2.2%
