Human Generated Data

Title

Saint Anne Teaching the Virgin to Read

Date

17th-18th century

People

Artist: Antonio Gherardi, Italian, 1638 - 1702

Classification

Paintings

Credit Line

Harvard Art Museums/Fogg Museum, Gift of Mrs. Ropes Cabot, 1957.125

Machine Generated Data

Tags

Amazon
created on 2020-04-24

Person 99.1
Human 99.1
Person 98.7
Person 98.4
Art 95.6
Painting 92.7
Person 78.2
Face 58
Portrait 58
Photography 58
Photo 58
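
The Amazon tags above are labels and confidence scores returned by the Rekognition image-labeling service. A minimal sketch of how such labels might be retrieved with the boto3 Python SDK, assuming configured AWS credentials and a hypothetical local image file of the painting:

import boto3

# Hypothetical file name; any local JPEG/PNG of the painting would do.
rekognition = boto3.client("rekognition")
with open("1957.125.jpg", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_labels(
    Image={"Bytes": image_bytes},
    MaxLabels=20,        # cap the number of labels returned
    MinConfidence=50.0,  # drop low-confidence labels
)

# Prints lines in the same "Label confidence" style as the list above.
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))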

Clarifai
created on 2020-04-24

people 99.9
group 99.1
portrait 98.9
art 98.7
adult 98.6
woman 97.5
child 97
three 96.2
man 94.8
baby 94.8
two 94.2
offspring 93.9
family 92.9
painting 92.8
affection 92.7
wear 92.7
son 92.5
four 89.4
furniture 87
sibling 86.8

Imagga
created on 2020-04-24

kin 44
portrait 27.2
people 26.8
child 26.8
adult 26.5
love 25.2
couple 24.4
attractive 23.8
mother 23.3
man 22.9
sexy 21.7
male 20.8
fashion 20.3
model 20.2
happy 19.4
face 19.2
passenger 18.1
lifestyle 18.1
parent 16.7
person 16.7
family 16
smiling 15.2
women 15
happiness 14.9
black 14.5
youth 14.5
cute 14.3
hair 14.3
body 13.6
human 13.5
fun 13.5
together 13.1
smile 12.8
sensuality 12.7
pretty 12.6
brunette 12.2
lady 12.2
sensual 11.8
hug 11.6
skin 11.1
two 11
elegance 10.9
joy 10.9
romantic 10.7
girlfriend 10.6
loving 10.5
relationship 10.3
brother 10.3
sibling 10.3
dark 10
daughter 9.9
cheerful 9.7
one 9.7
outdoors 9.7
boy 9.6
passion 9.4
hand 9.1
romance 8.9
handsome 8.9
style 8.9
posing 8.9
kid 8.9
father 8.7
boyfriend 8.7
sitting 8.6
wife 8.5
friends 8.4
dress 8.1
husband 8.1
hugging 7.8
erotic 7.7
sexual 7.7
expression 7.7
studio 7.6
holding 7.4
20s 7.3
looking 7.2

Google
created on 2020-04-24

Microsoft
created on 2020-04-24

painting 99.2
text 99.1
drawing 97
human face 96.6
sketch 94.9
person 94.8
baby 84.3
old 72.1
smile 67.4
clothing 66.7
woman 60.9
posing 37.7
picture frame 11.6

Color Analysis

Face analysis

AWS Rekognition

Age 32-48
Gender Female, 90.8%
Angry 1.9%
Fear 0.9%
Confused 1.7%
Sad 1.1%
Surprised 2%
Disgusted 2.1%
Happy 18.6%
Calm 71.6%

AWS Rekognition

Age 0-4
Gender Female, 98.1%
Disgusted 0%
Surprised 0%
Calm 99.6%
Happy 0.1%
Fear 0%
Confused 0%
Angry 0%
Sad 0.1%

AWS Rekognition

Age 20-32
Gender Male, 60.1%
Disgusted 0.5%
Confused 1.4%
Calm 35.7%
Angry 12.5%
Fear 0.6%
Surprised 0.3%
Happy 6.4%
Sad 42.7%

AWS Rekognition

Age 21-33
Gender Female, 89%
Angry 1.7%
Fear 0.1%
Calm 91.4%
Sad 5%
Happy 0.1%
Confused 0.6%
Surprised 0.2%
Disgusted 0.9%

AWS Rekognition

Age 2-8
Gender Female, 93.3%
Angry 12.7%
Happy 1.7%
Fear 23.4%
Calm 5.3%
Surprised 2.1%
Disgusted 4.4%
Sad 47.2%
Confused 3.2%
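
The five AWS Rekognition face entries above (age range, gender, and per-emotion percentages) match the per-face output of Rekognition's DetectFaces operation when all facial attributes are requested. A minimal sketch, again assuming boto3, AWS credentials, and a hypothetical image file:

import boto3

rekognition = boto3.client("rekognition")
with open("1957.125.jpg", "rb") as f:  # hypothetical file name
    image_bytes = f.read()

response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],  # include age range, gender, and emotions
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    gender = face["Gender"]
    print(f"Age {age['Low']}-{age['High']}")
    print(f"Gender {gender['Value']}, {gender['Confidence']:.1f}%")
    for emotion in face["Emotions"]:
        # Emotion types come back uppercase, e.g. CALM, HAPPY, SAD.
        print(f"{emotion['Type'].capitalize()} {emotion['Confidence']:.1f}%")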

Microsoft Cognitive Services

Age 57
Gender Female

Microsoft Cognitive Services

Age 3
Gender Female

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Possible
Headwear Very unlikely
Blurred Very unlikely

Google Vision

Surprise Very unlikely
Anger Very unlikely
Sorrow Very unlikely
Joy Very unlikely
Headwear Very unlikely
Blurred Very unlikely
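
The Google Vision entries above report face attributes as likelihood buckets (Very unlikely through Very likely) rather than percentages. A minimal sketch of reading those likelihoods with the google-cloud-vision Python client, assuming the same hypothetical image file and a configured Google Cloud project:

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("1957.125.jpg", "rb") as f:  # hypothetical file name
    content = f.read()

response = client.face_detection(image=vision.Image(content=content))

for face in response.face_annotations:
    # Likelihood values are enum members such as VERY_UNLIKELY or POSSIBLE.
    print("Surprise", vision.Likelihood(face.surprise_likelihood).name)
    print("Anger", vision.Likelihood(face.anger_likelihood).name)
    print("Sorrow", vision.Likelihood(face.sorrow_likelihood).name)
    print("Joy", vision.Likelihood(face.joy_likelihood).name)
    print("Headwear", vision.Likelihood(face.headwear_likelihood).name)
    print("Blurred", vision.Likelihood(face.blurred_likelihood).name)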

Feature analysis

Amazon

Person 99.1%
Painting 92.7%

Categories

Imagga

people portraits 69.3%
paintings art 29.9%