<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="nfaraj, claunay" >
<link rel="icon" type="image/png" href="data/P5_mini_logo.png" />
<link rel="shortcut icon" href="data/P5_mini_logo.png" type="image/png" />
<title>Claire Launay</title>
<!-- Bootstrap Core CSS -->
<link href="vendor/bootstrap/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="vendor/font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href='https://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,600italic,700italic,800italic,400,300,600,700,800' rel='stylesheet' type='text/css'>
<link href='https://fonts.googleapis.com/css?family=Merriweather:400,300,300italic,400italic,700,700italic,900,900italic' rel='stylesheet' type='text/css'>
<!-- Plugin CSS -->
<link href="vendor/magnific-popup/magnific-popup.css" rel="stylesheet">
<!-- Theme CSS -->
<link href="css/creative.min.css" rel="stylesheet">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body id="page-top" class="bg-secondary">
<nav id="mainNav" class="navbar navbar-default navbar-fixed-top">
<div class="container-fluid">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header">
<button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span> Menu <i class="fa fa-bars"></i>
</button>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-left">
<li>
<a href="index.html">Home</a>
</li>
<li>
<a href="index.html#research">Research</a>
</li>
<li>
<a href="index.html#teaching">Teaching</a>
</li>
<li>
<a href="index.html#contact">Contact</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
<header>
<div class="header-content">
<div class="header-content-inner">
<h1 id="homeHeading">Claire Launay</h1>
<h3>Postdoctoral researcher - Albert Einstein College of Medicine</h3>
</div>
</div>
</header>
<!-- /.row -->
<section id="measuring-uncertainty-in-human-visual-segmentation">
<div class="container">
<div class="row">
<div class="col-lg-12">
<h3>Measuring uncertainty in human visual segmentation</h3>
<p align="justify">
Jonathan Vacher, Claire Launay, Pascal Mamassian, Ruben Coen-Cagli</p>
<p align="justify">
<center><img src="data/visualSeg/expe.png" width="750" height="auto" style="margin: 0px;" alt="Static segmentation experiment"></center>
</p>
<h3>Abstract</h3>
<p align="justify">
Segmenting visual stimuli into distinct groups of features and visual objects is central to visual function.
Classical psychophysical methods have helped uncover many rules of human perceptual segmentation, and recent progress
in machine learning has produced successful algorithms. Yet, the computational logic of human segmentation remains unclear,
partially because we lack well-controlled paradigms to measure perceptual segmentation maps and compare models quantitatively.
Here we propose a new, integrated approach: given an image, we measure multiple pixel-based same–different judgments and perform
model–based reconstruction of the underlying segmentation map. The reconstruction is robust to several experimental manipulations
and captures the variability of individual participants. We demonstrate the validity of the approach on human segmentation of
natural images and composite textures. We show that image uncertainty affects measured human variability, and it influences how
participants weigh different visual features. Because any putative segmentation algorithm can be inserted to perform the reconstruction,
our paradigm affords quantitative tests of theories of perception as well as new benchmarks for segmentation algorithms.
</p>
<p align="justify">
The arXiv preprint can be found <a href="https://arxiv.org/abs/2301.07807" target="_blank">here</a>.
</p>
<p align="justify">
If you have any questions, feel free to send us an email; we will be happy to answer them.
</p>
</div>
</div>
</div>
</section>
<!-- jQuery -->
<script src="vendor/jquery/jquery.min.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="vendor/bootstrap/js/bootstrap.min.js"></script>
<!-- Plugin JavaScript -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js"></script>
<script src="vendor/scrollreveal/scrollreveal.min.js"></script>
<script src="vendor/magnific-popup/jquery.magnific-popup.min.js"></script>
<!-- Theme JavaScript -->
<script src="js/creative.min.js"></script>
</body>
</html>