<!DOCTYPE html>
<html>
<head>
<title>Double Y: Tropical storm damage detection</title>
<link rel="icon" type="image/png" href="static/images/EY Satelite Image.png">
<link href="https://fonts.googleapis.com/css?family=Google+Sans|Noto+Sans|Castoro" rel="stylesheet">
<!-- Meta tags for social media banners; these should be filled in appropriately as they are your "business card" -->
<meta name="description" content="Double Y: Tropical storm damage detection">
<meta property="og:title" content="Double Y: Tropical storm damage detection"/>
<meta property="og:description" content="AUTOMATING COASTAL VULNERABILITY ASSESSMENT"/>
<meta property="og:url" content="https://github.com/Double-Y-EY-Challenge-2024"/>
<link rel="stylesheet" href="static/css/bulma.min.css">
<link rel="stylesheet" href="static/css/bulma-carousel.min.css">
<link rel="stylesheet" href="static/css/bulma-slider.min.css">
<link rel="stylesheet" href="static/css/fontawesome.all.min.css">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="stylesheet" href="static/css/index.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://documentcloud.adobe.com/view-sdk/main.js"></script>
<script defer src="static/js/fontawesome.all.min.js"></script>
<script src="static/js/bulma-carousel.min.js"></script>
<script src="static/js/bulma-slider.min.js"></script>
<script src="static/js/index.js"></script>
</head>
<body>
<section class="hero">
<div class="hero-body">
<div class="container is-max-desktop">
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">Automating Coastal Vulnerability Assessment: AI-Driven Geospatial Analysis via Building Damage Detection</h1>
<div class="is-size-5 publication-authors">
<!-- Paper authors -->
<span class="author-block">
<a href="https://www.linkedin.com/in/wongyijie/" target="_blank">Yi Jie WONG</a>,</span>
<span class="author-block">
<a href="https://www.linkedin.com/in/yinloonkhor/" target="_blank">Yin Loon KHOR</a>,</span>
<span class="author-block">
<a href="https://www.linkedin.com/in/ziweiliu2023/" target="_blank">Ziwei LIU</a>
</span>
</div>
<div class="is-size-5 publication-authors">
<span class="author-block"><strong>Group Name:</strong> Double-Y | <strong>Public
Leaderboard:</strong> 8th out of 11,000 registrants <br>
<a href="https://challenge.ey.com/challenges/tropical-cyclone-damage-assessment-lrrno2xm">EY Open
Science Data Challenge Program 2024</a>
</span>
<!-- <span class="eql-cntrb"><small><br><sup>*</sup>Indicates Equal Contribution</small></span> -->
</div>
<div class="column has-text-centered">
<div class="publication-links">
<!-- PDF -->
<span class="link-block">
<a href="https://doi.org/10.36227/techrxiv.172963135.56918790/v1" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Paper</span>
</a>
</span>
<!-- PDF -->
<span class="link-block">
<a href="static/pdfs/Team Double Y - Approach Document.pdf" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fas fa-file-pdf"></i>
</span>
<span>Technical Report</span>
</a>
</span>
<!-- Github link -->
<span class="link-block">
<a href="https://github.com/Double-Y-EY-Challenge-2024/EY-challenge-2024" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Source Code</span>
</a>
</span>
<!-- Best model -->
<span class="link-block">
<a href="https://github.com/Double-Y-EY-Challenge-2024/EY-challenge-2024/blob/main/best-trained-model.pt"
target="_blank" class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-github"></i>
</span>
<span>Best Model</span>
</a>
</span>
</div>
</div>
</div>
</div>
</div>
</div>
</section>
<!-- Teaser GIF -->
<section class="hero teaser">
<div class="container is-max-desktop">
<div class="hero-body">
<img src="static/images/heatmaps/movie.gif" alt="Inference samples predicted by our trained model.">
<h2 class="subtitle">
Geospatial analysis.
</h2>
</div>
</div>
</section>
<!-- End teaser GIF -->
<!-- Summary -->
<section class="section hero is-light">
<div class="container is-max-desktop">
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3">Summary</h2>
<div class="content has-text-justified">
<p> Coastal regions are extremely vulnerable to storms and tropical cyclones, which have caused
significant economic losses and numerous fatalities. This underscores the urgent need for action
to protect the sustainability and resilience of coastal communities. In this paper, we present
an AI-driven geospatial analysis pipeline that automates coastal disaster assessment by detecting
building damage from satellite imagery. First, we propose an effective and scalable pipeline
for training an artificial intelligence (AI) model to detect damaged buildings using a limited dataset.
Specifically, we use Microsoft’s Building Footprint dataset as pretraining data, allowing our AI model
to quickly adapt to the Puerto Rico landscape. Subsequently, we fine-tune the model in a
carefully engineered sequence using manually annotated and self-annotated data. After training,
we use our AI model to generate geospatial heatmaps of damaged building counts and damage ratios,
which are useful for assessing storm damage and coastal vulnerability. Our approach placed us in the top 5%
of the public leaderboard, earning us a shortlisting for the global semi-final rounds.
</p>
</div>
</div>
</div>
</div>
</section>
<!-- End paper abstract -->
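The geospatial heatmaps mentioned in the summary aggregate per-building detections into grid cells. A minimal sketch of that aggregation, assuming detections arrive as (longitude, latitude, is_damaged) triples; the cell size and data layout here are our own illustrative choices, not the team's exact implementation:

```python
from collections import defaultdict

def damage_heatmap(detections, cell_size=0.01):
    """Bin building detections into grid cells and compute, per cell,
    the damaged-building count and the damage ratio (damaged / total).
    detections: iterable of (lon, lat, is_damaged) triples -- an assumed format."""
    cells = defaultdict(lambda: [0, 0])  # cell -> [damaged, total]
    for lon, lat, is_damaged in detections:
        cell = (int(lon // cell_size), int(lat // cell_size))
        cells[cell][1] += 1
        if is_damaged:
            cells[cell][0] += 1
    return {c: {"damaged": d, "ratio": d / t} for c, (d, t) in cells.items()}
```

Each cell's damage ratio (damaged buildings divided by all detected buildings) is what the heatmaps above visualise.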
<!-- Competition Overview -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Competition Overview</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p><strong>1. Objective:</strong> The goal of the challenge is to develop a machine learning model that identifies and detects
“damaged” and “un-damaged” coastal infrastructure (residential and commercial buildings) impacted by
natural calamities such as hurricanes and cyclones. Participants are given pre- and post-cyclone satellite images
of a site impacted by Hurricane Maria in 2017 and must build a machine learning model to detect four different objects
in a satellite image of a cyclone-impacted area:</p>
<ul>
<li>Undamaged residential buildings</li>
<li>Damaged residential buildings</li>
<li>Undamaged commercial buildings</li>
<li>Damaged commercial buildings</li>
</ul>
<p><strong>2. Mandatory Dataset:</strong></p>
<ul>
<li>High-resolution panchromatic satellite images before and after a tropical cyclone: Maxar GeoEye-1
(optical)</li>
</ul>
<p><strong>3. Optional Dataset (that we used):</strong></p>
<ul>
<li><a href="https://planetarycomputer.microsoft.com/dataset/ms-buildings">Microsoft Building Footprint dataset</a>,
with over 999 million buildings detected from Bing Maps imagery (including Maxar and Airbus
imagery) collected between 2014 and 2021.</li>
</ul>
</div>
</div>
<br><br>
</div>
</div>
<!-- Competition Overview -->
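The four target classes above map directly onto a detector's dataset configuration. A hypothetical Ultralytics-style data.yaml for this task might look like the following; the paths and class ordering are illustrative assumptions, not the team's actual file:

```yaml
# Hypothetical dataset config for the four-class detection task.
path: datasets/cyclone-damage   # illustrative root; not the team's layout
train: images/train
val: images/val
names:
  0: undamaged-residential
  1: damaged-residential
  2: undamaged-commercial
  3: damaged-commercial
```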
<!-- Key Challenges -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Key Challenges</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>
<strong>1. Dataset Collection:</strong> Manually annotating all four classes in the provided high-resolution
satellite dataset from Maxar's GeoEye-1 mission, covering a 327 sq. km area of San Juan, Puerto Rico, is a
time-consuming task. With the competition lasting only one month, this task poses significant
challenges in terms of time and energy allocation.
<p>
<strong>2. Class Imbalance:</strong> The dataset contains four unique classes. However, our analysis
indicates that damaged buildings are significantly underrepresented compared to undamaged ones. Moreover,
residential buildings are more prevalent than commercial ones. This imbalance may introduce bias into the model,
causing it to favor the majority class.
</p>
<p>
<strong>3. Out-of-Distribution Data:</strong> We noticed that the competition’s validation dataset comprises only
buildings from rural settings. However, the training dataset consists of a mixture of images from rural settings,
industrial zones, and urban areas. Our empirical study reveals that mixing images from non-rural settings can have
a severe impact on model learning.
</p>
</div>
</div>
<br><br>
</div>
</div>
<!-- Key Challenges -->
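One common mitigation for the class imbalance described above is to drop training images that contain only the majority class. A minimal sketch over YOLO-format label lines; the class-id assignment (0 = undamaged residential) is our own illustrative assumption:

```python
def keep_for_training(label_lines, majority_class=0):
    """Return True if an image's YOLO label lines contain at least one
    object of a non-majority class; images with only majority-class
    objects (or no objects at all) are dropped to reduce imbalance.
    Each label line is 'class_id xc yc w h' in YOLO format."""
    classes = {int(line.split()[0]) for line in label_lines if line.strip()}
    return bool(classes - {majority_class})
```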
<!-- Key elements and Assumptions -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Key Elements and Assumptions</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>Before delving into the proposed methodology, we introduce the key elements and assumptions as shown here:
</p>
<table class="table is-bordered is-hoverable">
<thead>
<tr>
<th></th>
<th>Key Element</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Target Region</td>
<td>Puerto Rico</td>
</tr>
<tr>
<td>2</td>
<td>Object Detection Model</td>
<td>YOLOv8n</td>
</tr>
<tr>
<td>3</td>
<td>Microsoft BF dataset</td>
<td>Only the Puerto Rico region</td>
</tr>
<tr>
<td>4</td>
<td>Puerto Rico dataset</td>
<td>5,690 unique images</td>
</tr>
<tr>
<td>5</td>
<td>Non-experts</td>
<td>Annotators with limited expertise in the given task</td>
</tr>
<tr>
<td>6</td>
<td>Experts</td>
<td>Annotators with expertise in the given task</td>
</tr>
<tr>
<td>7</td>
<td>Crowdsourced dataset</td>
<td>Dataset annotated by non-experts (200 unique images)</td>
</tr>
<tr>
<td>8</td>
<td>Expert dataset</td>
<td>Dataset annotated by experts (28 unique images)</td>
</tr>
</tbody>
</table>
<p>Our assumptions:</p>
<ol type="1" style="padding-left: 0;">
<li style="margin-bottom: 5px;">When labelling multiple versions of the provided post-disaster dataset,
we observed that not all annotated data aligns with the expected outcomes in the EY validation images:
some of our annotated datasets yield high mAP, while others yield low mAP.</li>
<li style="margin-bottom: 5px;">Consequently, we designate the datasets that perform exceptionally well as the
“expert dataset,” annotated by “expert annotators.” Conversely, the datasets that do not yield results
as good as the expert dataset are referred to as the “crowd-sourced dataset.”</li>
<li style="margin-bottom: 5px;">We assume expert annotators can effectively differentiate damaged/undamaged
commercial and residential buildings. Logically, the expert dataset is high in quality but lower in
quantity.</li>
<li style="margin-bottom: 5px;">Meanwhile, the crowd-sourced dataset is a high-quantity dataset with lower-quality
annotations. We assume it would be labelled by volunteer annotators in a real-life scenario, rather than by
experts.</li>
<li style="margin-bottom: 5px;">We assume all buildings in the Microsoft BF dataset are undamaged residential buildings
(since the majority of buildings are residential). The exact class is not important, since the dataset is only used for
pretraining.</li>
</div>
</div>
</div>
</div>
<!-- Key elements and Assumptions -->
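Assumption 5 above treats every Microsoft BF footprint as an undamaged residential building for pretraining. Converting a footprint polygon into a YOLO-format box label could be sketched as follows; the pixel-coordinate polygon input and the class id are illustrative assumptions, not the team's exact conversion code:

```python
def polygon_to_yolo_box(polygon, img_w, img_h, class_id=0):
    """Convert a building-footprint polygon (pixel coordinates) into a
    YOLO label line: 'class_id xc yc w h' with values normalised to [0, 1].
    class_id=0 follows the assumption that all footprints are
    undamaged residential buildings."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"
```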
<!-- Methodology Overview -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h2 class="title is-3" style="white-space: nowrap;">Methodology Overview</h2>
<div class="content has-text-justified">
<div style="text-align: center;">
<img src="static/images/Team Double Y - Methodology.jpg" alt="PrepareData" width="820">
<p class="caption" style="width: 100%; text-align: center;"><b>Figure 1. Overview of the proposed methodology.</b><br>
Workflow illustrating the complete process from data acquisition to model training.</p>
</div>
</div>
<br>
</div>
</div>
<!-- End methodology overview-->
<!-- Key Highlights of our Pipeline -->
<!--
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Key Highlights</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>
<strong>1. Dataset Collection:</strong> We remove images that only feature undamaged residential buildings
(the majority class) to ensure the training dataset does not skew towards one class over another.
</p>
<p>
<strong>2. Class Imbalanced:</strong> We removed images that only feature undamaged residential buildings
(the majority class) to ensure the training dataset does not skew towards one class over another.
</p>
<p>
<strong>3. Out-of-Distribution data:</strong> We removed images featuring industrial zones and urban areas,
as the validation dataset primarily consists of rural settings.
</p>
</div>
</div>
<br><br>
</div>
</div>
-->
<!-- Key Highlights of our Pipeline -->
<!-- Model -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Model</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>
The goal of Phase 1 is to identify and detect “damaged” and “undamaged” coastal infrastructure, which is an
object detection task. To tackle this challenge, our team opted for Ultralytics <strong>YOLOv8</strong>,
one of the state-of-the-art (SOTA) object detection models renowned for its speed and accuracy. Despite the availability
of competitors like YOLOv9, we prefer Ultralytics YOLOv8 for its user-friendliness and well-documented workflows
that streamline training and deployment. We chose the smallest YOLOv8 variant, <strong>YOLOv8n</strong>, since it is
<strong>unwise to use a larger model when dealing with a limited dataset</strong>, as it may lead to overfitting.
Given more time, we would explore other YOLOv8 variants and other SOTA models with a bigger dataset.
Meanwhile, our empirical study revealed that the main factor influencing detection accuracy is
the quantity and quality of the annotated dataset. Hence, we argue that the main focus of the challenge
should be data annotation. We provide details on how we built our training dataset in the next section.
</p>
</div>
</div>
<br><br>
</div>
</div>
<!-- Model -->
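Training YOLOv8n with the Ultralytics API is brief. A minimal sketch, assuming the ultralytics package is installed and a dataset config file exists; the file name and hyperparameter values here are illustrative assumptions, not the team's exact settings:

```python
def train_yolov8n(data_yaml="damage.yaml", epochs=100, imgsz=512):
    """Fine-tune the smallest YOLOv8 variant on a damage-detection dataset.
    The import is deferred so this sketch can be read (and imported)
    without ultralytics installed; data_yaml/epochs/imgsz are placeholders."""
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # nano variant: least prone to overfitting
    model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
    return model
```

Running the trained model over post-storm tiles then yields the per-building detections that feed the heatmaps.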
<!-- Submission Experiments Table -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Submission Experiment</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>We conducted a comprehensive series of experiments, submitting a total of 30 entries. Here are selected
highlights:</p>
<table class="table is-bordered is-hoverable">
<thead>
<tr>
<th>Setup</th>
<th>Pretraining</th>
<th>Crowdsourced Dataset</th>
<th>Expert Dataset</th>
<th>MLOps</th>
<th>mAP</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>✓</td>
<td></td>
<td></td>
<td></td>
<td>0.10</td>
</tr>
<tr>
<td>B</td>
<td>✓</td>
<td>✓</td>
<td></td>
<td></td>
<td>0.44</td>
</tr>
<tr>
<td>C</td>
<td>✓</td>
<td></td>
<td>✓</td>
<td></td>
<td>0.39</td>
</tr>
<tr>
<td>D</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>0.50</td>
</tr>
<tr>
<td>E</td>
<td></td>
<td>✓</td>
<td>✓</td>
<td></td>
<td>0.24</td>
</tr>
<tr>
<td>F</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td>✓</td>
<td><b>0.51</b></td>
</tr>
</tbody>
</table>
<!--
<p style="font-size: 12px;">Note: Class Ratio (Undamaged Residential Building : Damaged Residential Building :
Undamaged Commercial Building : Damaged Commercial Building)</p>
-->
<p>Details of each setup are as follows:</p>
<ul style="list-style-type: none; padding-left: 0;">
<li style="margin-bottom: 5px;"><strong>Setup A:</strong> 0.10 - We pretrained a YOLOv8n model using the
Puerto Rico dataset. Surprisingly, we achieved an mAP of 0.10 on the EY validation dataset without any
manual annotation on our side!</li>
<li style="margin-bottom: 5px;"><strong>Setup B:</strong> 0.44 - When fine-tuning the pretrained model on
the crowd-sourced dataset, we achieved an mAP of 0.44, which exceeds the completion threshold for this
challenge (mAP 0.40).</li>
<li style="margin-bottom: 5px;"><strong>Setup C:</strong> 0.39 - When fine-tuning the pretrained model
directly on the expert dataset, we achieved an mAP of 0.39, despite the dataset containing only 28
unique images (84 after augmentation). This shows that data quality is equally important as, if not
more important than, data quantity.</li>
<li style="margin-bottom: 5px;"><strong>Setup D:</strong> 0.50 - We initially fine-tuned the pretrained
model using the large-scale crowd-sourced dataset to quickly warm it up. Subsequently, we fine-tuned the
model on the expert dataset, which has more accurate labels. With this approach, we achieved an mAP of
0.50.</li>
<li style="margin-bottom: 5px;"><strong>Setup E:</strong> 0.24 - We demonstrate that without pretraining,
performance is unsatisfactory even when both the crowd-sourced and expert datasets are utilised,
achieving only an mAP of 0.24.</li>
<li style="margin-bottom: 5px;"><strong>Setup F:</strong> 0.51 - Finally, we demonstrate that by employing
the proposed MLOps cycle, we can enhance the model’s mAP to 0.51. Notably, the sole human intervention
in this MLOps cycle involves verifying the self-labelled data using the baseline model from Setup E.</li>
</div>
</div>
<br><br>
</div>
</div>
<!-- End of Submission Experiments Table -->
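The MLOps cycle in Setup F hinges on self-labelling: the current model predicts labels on unlabelled tiles, a human verifies them, and the verified tiles join the training set. The selection step could be sketched like this; the confidence threshold and prediction format are our own assumptions, not the team's documented pipeline:

```python
def select_for_review(predictions, conf_threshold=0.5):
    """Keep model predictions confident enough to be worth human review,
    so the reviewer only has to verify plausible pseudo-labels.
    predictions: list of (class_id, confidence) pairs for one image."""
    return [(cls, conf) for cls, conf in predictions if conf >= conf_threshold]
```

Images whose pseudo-labels survive review are appended to the training set, and the model is retrained to close the loop.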
<!-- Conclusion -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Key Takeaways</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>
<strong>1. Dataset quality is what you need:</strong> There are two observations from our study. Firstly,
data quality is as important as data quantity. Secondly, having annotators with expertise in building damage
assessment is crucial for producing the high-quality 'expert dataset.' In contrast, non-experts tend to
produce a lower-quality dataset, which we refer to as a 'crowdsourced dataset.' However, a high-quality dataset
tends to be smaller in size because it takes time to carefully annotate the data. Conversely, a high-quantity
dataset tends to have lower quality due to a lack of expertise and attention. This mirrors the real-world
quality-quantity tradeoff. Fortunately, we found that we can combine the strengths of both datasets, as
demonstrated in Setup D of our ablation study in Table I. This involves fine-tuning the pretrained model on
the crowdsourced dataset first, followed by fine-tuning on the expert dataset.
</p>
<p>
<strong>2. Start with a small model:</strong> We recommend starting with a smaller model. It is unwise to use a
larger model when dealing with a limited dataset, as it may lead to overfitting. Our empirical study supports
this hypothesis, as we failed to achieve a high mAP score using bigger YOLOv8 versions. Given more time,
we would explore bigger YOLOv8 versions and other state-of-the-art (SOTA) models with a larger dataset.
</p>
</div>
</div>
<br><br>
</div>
</div>
<!-- Conclusion -->
<!-- Logo Acknowledgment -->
<div class="columns is-centered has-text-centered">
<div class="column is-four-fifths">
<h4 class="title is-3" style="white-space: nowrap;">Technological Stack</h4>
<div class="content has-text-justified">
<div style="text-align: justify;">
<p>
<a href="https://github.com/ultralytics/ultralytics" target="_blank"><img
src="static/icons/ultralyticsyolo-logo.svg" alt="ultralytics" style="width: 200px;"></a>
<a href="https://roboflow.com/" target="_blank"><img
src="static/icons/roboflow-logo.png" alt="roboflow" style="width: 200px;"></a>
<a href="https://pytorch.org/" target="_blank"><img src="static/icons/pytorch-logo.svg" alt="pytorch"
style="width: 210px;"></a>
<a href="https://jupyter.org/" target="_blank"><img src="static/icons/jupyter-logo.png" alt="jupyter"
style="width: 200px;"></a>
<a href="https://www.python.org/" target="_blank"><img src="static/icons/python-logo.svg" alt="python"
style="width: 200px;"></a>
</p>
</div>
</div>
<br><br>
</div>
</div>
<!-- Logo Acknowledgment -->
<footer class="footer">
<div class="container">
<div class="columns is-centered">
<div class="column is-8">
<div class="content">
<p>
Special thanks to <a href="https://ey-groupie2024wg.github.io/">EY Groupie-WG</a> for their well-documented
methodology report. Please feel free to visit their report to see their proposed approach as well!
</p>
<p>
This page was built using the <a href="https://github.com/eliahuhorwitz/Academic-project-page-template"
target="_blank">Academic Project Page Template</a>, which was adapted from the <a
href="https://nerfies.github.io" target="_blank">Nerfies</a> project page.
You are free to borrow the source code of this website; we just ask that you link back to this page in the footer.
</p>
<p>
This website is licensed under a <a rel="license"
href="http://creativecommons.org/licenses/by-sa/4.0/" target="_blank">Creative
Commons Attribution-ShareAlike 4.0 International License</a>.
</p>
</div>
</div>
</div>
</div>
</footer>
<!-- Default Statcounter code for EY project website -->
<!--
<script type="text/javascript">
var sc_project = 12976265;
var sc_invisible = 1;
var sc_security = "c70be6f1";
</script>
<script type="text/javascript" src="https://www.statcounter.com/counter/counter.js" async></script>
<noscript>
<div class="statcounter"><a title="Web Analytics" href="https://statcounter.com/" target="_blank"><img
class="statcounter" src="https://c.statcounter.com/12976265/0/c70be6f1/1/" alt="Web Analytics"
referrerPolicy="no-referrer-when-downgrade"></a></div>
</noscript>
-->
<!-- End of Statcounter Code -->
</body>
</html>