
Commit 69dc8b8 ("new update")

1 parent: be9ca6e

File tree: 8 files changed, +202/-219 lines


doc/pub/week14/html/week14-bs.html

Lines changed: 12 additions & 38 deletions
@@ -198,7 +198,6 @@
 None,
 'pennylane-implementations'),
 ('Iris Dataset', 2, None, 'iris-dataset'),
-('Classical functions', 2, None, 'classical-functions'),
 ('Plans for next week', 2, None, 'plans-for-next-week')]}
 end of tocinfo -->
 
@@ -309,7 +308,6 @@
 <!-- navigation toc: --> <li><a href="#discussion-of-implementation" style="font-size: 80%;">Discussion of Implementation</a></li>
 <!-- navigation toc: --> <li><a href="#pennylane-implementations" style="font-size: 80%;">PennyLane implementations</a></li>
 <!-- navigation toc: --> <li><a href="#iris-dataset" style="font-size: 80%;">Iris Dataset</a></li>
-<!-- navigation toc: --> <li><a href="#classical-functions" style="font-size: 80%;">Classical functions</a></li>
 <!-- navigation toc: --> <li><a href="#plans-for-next-week" style="font-size: 80%;">Plans for next week</a></li>
 
 </ul>
@@ -1523,7 +1521,7 @@ <h2 id="quantum-feature-map" class="anchor">Quantum feature map </h2>
 
 <p>A quantum feature map
 \( \boldsymbol{x}\mapsto\vert \phi(\boldsymbol{x})\rangle \) combined with the kernel
-\( k(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
+\( K(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
 defines a QSVM kernel. The intuition is that the quantum device is
 implicitly computing a rich similarity measure via its
 high-dimensional state space.
@@ -1543,7 +1541,7 @@ <h2 id="examples-of-feature-map-circuits" class="anchor">Examples of Feature Map
 <div class="panel panel-default">
 <div class="panel-body">
 <!-- subsequent paragraphs come in larger fonts, so start with a paragraph -->
-<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}Y(x_i) \) to the $i$th qubit. For example:</p>
+<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}_Y(x_i) \) to the $i$th qubit. For example:</p>
 $$
 U(\boldsymbol{x})=\bigotimes{i=1}^n R_Y(x_i) = R_Y(x_1)\otimes \cdots \otimes R_Y(x_n).
 $$
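As a quick sanity check of the angle-embedding map above: for a few qubits the product state \( \vert \phi(\boldsymbol{x})\rangle=\bigotimes_{i=1}^n R_Y(x_i)\vert 0\rangle \) can be simulated directly with NumPy. The sketch below is illustrative and not part of the commit; all function names are made up.

```python
import numpy as np

def ry_state(theta):
    # R_Y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])

def angle_embed(x):
    # |phi(x)> = R_Y(x_1)|0> (x) ... (x) R_Y(x_n)|0>
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, ry_state(xi))
    return state

def quantum_kernel(x1, x2):
    # induced kernel K(x, x') = |<phi(x)|phi(x')>|^2
    return float(abs(angle_embed(x1) @ angle_embed(x2)) ** 2)
```

For this product-state embedding the kernel factorizes as \( \prod_i \cos^2\bigl((x_i-x_i')/2\bigr) \), so identical inputs give kernel value 1.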
@@ -1654,21 +1652,21 @@ <h2 id="quantum-support-vector-machine-theory" class="anchor">Quantum Support Ve
 <!-- !split -->
 <h2 id="qsvm-formulation-via-quantum-kernels" class="anchor">QSVM Formulation via Quantum Kernels </h2>
 
-<p>Given training data \( (\boldsymbol{x}i,y_i) \), we choose a quantum feature
-map \( U(\boldsymbol{x}) \) and define the kernel \( K{ij} =
+<p>Given training data \( (\boldsymbol{x}_i,y_i) \), we choose a quantum feature
+map \( U(\boldsymbol{x}) \) and define the kernel \( K_{ij} =
 \vert \langle\phi(\boldsymbol{x}i)\vert \phi(\boldsymbol{x}j)\rangle\vert ^2 \). Then we solve
 the binary type of SVM problems discussed last week, with \( x_i^T x_j \) replaced
-by \( K{ij} \). That is:
+by \( K_{ij} \). That is:
 </p>
 $$
-\max{\alpha} \; \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}j),
+\max{\lambda} \; \sum_i \lambda_i - \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}_j),
 $$
 
-<p>subject to \( \sum_i \alpha_i y_i=0 \), \( 0\le \alpha_i\le C \). After
-solving for \( \alpha_i \), the decision function is
+<p>subject to \( \sum_i \lambda_i y_i=0 \), \( 0\le \lambda_i\le C \). After
+solving for \( \lambda_i \), the decision function is
 </p>
 $$
-f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \alpha_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
+f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \lambda_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
 $$
 
 <p>All kernel values \( K(\boldsymbol{x}_i,\boldsymbol{x}_j) \) are estimated by the
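The decision function in this hunk is purely classical once the kernel values are known. A minimal sketch, assuming the angle-embedding kernel and hand-picked multipliers: in practice the \( \lambda_i \) would come from a QP solver, and all names and data here are illustrative, not from the lecture.

```python
import numpy as np

def angle_embed(x):
    # product-state R_Y angle embedding, as in the feature-map section
    state = np.array([1.0])
    for xi in x:
        state = np.kron(state, np.array([np.cos(xi / 2.0), np.sin(xi / 2.0)]))
    return state

def K(x1, x2):
    # K(x, x') = |<phi(x)|phi(x')>|^2
    return float(abs(angle_embed(x1) @ angle_embed(x2)) ** 2)

def decision(x, X_train, y, lam, b=0.0):
    # f(x) = sign(sum_i lambda_i y_i K(x_i, x) + b)
    s = sum(l * yi * K(xi, x) for l, yi, xi in zip(lam, y, X_train))
    return np.sign(s + b)

# toy data: one training point per class, multipliers chosen by hand
X_train = [np.array([0.0, 0.0]), np.array([np.pi, np.pi])]
y = [1, -1]
lam = [1.0, 1.0]
```

Points near \( (0,0) \) then classify as \( +1 \) and points near \( (\pi,\pi) \) as \( -1 \), since the kernel decays with distance in each coordinate.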
@@ -1796,9 +1794,7 @@ <h2 id="actual-implementations" class="anchor">Actual implementations </h2>
 <p>In practice on current hardware, QSVM speed is dominated by circuit
 execution time. However, QSVM experiments are valuable to test
 expressivity: for some data, a simple quantum feature map yields
-perfect classification where classical kernels fail. For example, the
-<b>Swiss roll</b> or concentric circle datasets are often used as quantum
-kernel benchmarks.
+perfect classification where classical kernels fail.
 </p>
 
 <p>Finally, it is worth noting that both the variational quantum
@@ -1952,7 +1948,7 @@ <h2 id="defining-quantum-feature-maps-in-pennylane" class="anchor">Defining Quan
 <h2 id="computing-quantum-kernel-matrices" class="anchor">Computing Quantum Kernel Matrices </h2>
 
 <p>To train an SVM, we need the kernel matrix
-\( K_{ij}=k(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
+\( K_{ij}=K(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
 qml.kernels.kernel$\_$matrix, which takes two datasets and a kernel
 function. We must supply a function <b>kernel(x1,x2)</b> that returns the
 overlap of states.
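The function referred to here is PennyLane's qml.kernels.kernel_matrix (the `$\_$` in the HTML is an escaping artifact). Its calling convention is easy to emulate; below is a NumPy stand-in of that interface with an illustrative overlap kernel, not the PennyLane implementation.

```python
import numpy as np

def kernel_matrix(X1, X2, kernel):
    # same calling convention as qml.kernels.kernel_matrix:
    # entry (i, j) is kernel(X1[i], X2[j])
    return np.array([[kernel(a, b) for b in X2] for a in X1])

def overlap_kernel(x1, x2):
    # |<phi(x1)|phi(x2)>|^2 for the R_Y angle embedding
    s1, s2 = np.array([1.0]), np.array([1.0])
    for a, b in zip(x1, x2):
        s1 = np.kron(s1, np.array([np.cos(a / 2.0), np.sin(a / 2.0)]))
        s2 = np.kron(s2, np.array([np.cos(b / 2.0), np.sin(b / 2.0)]))
    return float(abs(s1 @ s2) ** 2)

X = np.array([[0.1, 0.4], [0.9, 1.3], [2.0, 0.2]])
Kmat = kernel_matrix(X, X, overlap_kernel)
```

For a dataset paired with itself the matrix is symmetric with unit diagonal, a cheap sanity check before handing it to an SVM with a precomputed kernel.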
@@ -2288,32 +2284,10 @@ <h2 id="iris-dataset" class="anchor">Iris Dataset </h2>
 </div>
 
 
-<!-- !split -->
-<h2 id="classical-functions" class="anchor">Classical functions </h2>
-
-<p>Finally we define the classical functions \( \phi_i(\vec{x}) = x_i \) and
-\( \phi_{i,j}(\vec{x}) = (\pi - x_i)( \pi- x_j) \).
-</p>
-
-<p>If we write this ansatz for 2 qubits and \( S \leq 2 \) we see how it
-simplifies:
-</p>
-$$
-U_{\Phi(x)} = \exp \left(i \left(x_1 Z_1 + x_2 Z_2 + (\pi - x_1)( \pi- x_2) Z_1 Z_2 \right) \right).
-$$
-
-<p>We won't get into details to much here, why we would take this ansatz.
-It is simply an ansatz that is simple enough an leads to good results.
-</p>
-
-<p>Finally we can define a depth of these circuits. Depth 2 means we repeat
-this ansatz two times. Which means our feature map becomes
-\( U_{\Phi(x)} \otimes H^{\otimes n} \otimes U_{\Phi(x)} \otimes H^{\otimes n} \).
-</p>
-
 <!-- !split -->
 <h2 id="plans-for-next-week" class="anchor">Plans for next week </h2>
 <ol>
+<li> Summary of quantum support vector machines</li>
 <li> Quantum neural networks</li>
 </ol>
 <!-- ------------------- end of main content --------------- -->

doc/pub/week14/html/week14-reveal.html

Lines changed: 12 additions & 39 deletions
@@ -1481,7 +1481,7 @@ <h2 id="quantum-feature-map">Quantum feature map </h2>
 
 <p>A quantum feature map
 \( \boldsymbol{x}\mapsto\vert \phi(\boldsymbol{x})\rangle \) combined with the kernel
-\( k(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
+\( K(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
 defines a QSVM kernel. The intuition is that the quantum device is
 implicitly computing a rich similarity measure via its
 high-dimensional state space.

@@ -1502,7 +1502,7 @@ <h2 id="examples-of-feature-map-circuits">Examples of Feature Map Circuits </h2>
 <div class="alert alert-block alert-block alert-text-normal">
 <b>Angle (or rotation) embedding:</b>
 <p>
-<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}Y(x_i) \) to the $i$th qubit. For example:</p>
+<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}_Y(x_i) \) to the $i$th qubit. For example:</p>
 <p>&nbsp;<br>
 $$
 U(\boldsymbol{x})=\bigotimes{i=1}^n R_Y(x_i) = R_Y(x_1)\otimes \cdots \otimes R_Y(x_n).

@@ -1616,24 +1616,24 @@ <h2 id="quantum-support-vector-machine-theory">Quantum Support Vector Machine Th
 <section>
 <h2 id="qsvm-formulation-via-quantum-kernels">QSVM Formulation via Quantum Kernels </h2>
 
-<p>Given training data \( (\boldsymbol{x}i,y_i) \), we choose a quantum feature
-map \( U(\boldsymbol{x}) \) and define the kernel \( K{ij} =
+<p>Given training data \( (\boldsymbol{x}_i,y_i) \), we choose a quantum feature
+map \( U(\boldsymbol{x}) \) and define the kernel \( K_{ij} =
 \vert \langle\phi(\boldsymbol{x}i)\vert \phi(\boldsymbol{x}j)\rangle\vert ^2 \). Then we solve
 the binary type of SVM problems discussed last week, with \( x_i^T x_j \) replaced
-by \( K{ij} \). That is:
+by \( K_{ij} \). That is:
 </p>
 <p>&nbsp;<br>
 $$
-\max{\alpha} \; \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}j),
+\max{\lambda} \; \sum_i \lambda_i - \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}_j),
 $$
 <p>&nbsp;<br>
 
-<p>subject to \( \sum_i \alpha_i y_i=0 \), \( 0\le \alpha_i\le C \). After
-solving for \( \alpha_i \), the decision function is
+<p>subject to \( \sum_i \lambda_i y_i=0 \), \( 0\le \lambda_i\le C \). After
+solving for \( \lambda_i \), the decision function is
 </p>
 <p>&nbsp;<br>
 $$
-f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \alpha_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
+f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \lambda_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
 $$
 <p>&nbsp;<br>
 

@@ -1769,9 +1769,7 @@ <h2 id="actual-implementations">Actual implementations </h2>
 <p>In practice on current hardware, QSVM speed is dominated by circuit
 execution time. However, QSVM experiments are valuable to test
 expressivity: for some data, a simple quantum feature map yields
-perfect classification where classical kernels fail. For example, the
-<b>Swiss roll</b> or concentric circle datasets are often used as quantum
-kernel benchmarks.
+perfect classification where classical kernels fail.
 </p>
 
 <p>Finally, it is worth noting that both the variational quantum

@@ -1929,7 +1927,7 @@ <h2 id="defining-quantum-feature-maps-in-pennylane">Defining Quantum Feature Map
 <h2 id="computing-quantum-kernel-matrices">Computing Quantum Kernel Matrices </h2>
 
 <p>To train an SVM, we need the kernel matrix
-\( K_{ij}=k(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
+\( K_{ij}=K(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
 qml.kernels.kernel$\_$matrix, which takes two datasets and a kernel
 function. We must supply a function <b>kernel(x1,x2)</b> that returns the
 overlap of states.

@@ -2272,35 +2270,10 @@ <h2 id="iris-dataset">Iris Dataset </h2>
 </div>
 </section>
 
-<section>
-<h2 id="classical-functions">Classical functions </h2>
-
-<p>Finally we define the classical functions \( \phi_i(\vec{x}) = x_i \) and
-\( \phi_{i,j}(\vec{x}) = (\pi - x_i)( \pi- x_j) \).
-</p>
-
-<p>If we write this ansatz for 2 qubits and \( S \leq 2 \) we see how it
-simplifies:
-</p>
-<p>&nbsp;<br>
-$$
-U_{\Phi(x)} = \exp \left(i \left(x_1 Z_1 + x_2 Z_2 + (\pi - x_1)( \pi- x_2) Z_1 Z_2 \right) \right).
-$$
-<p>&nbsp;<br>
-
-<p>We won't get into details to much here, why we would take this ansatz.
-It is simply an ansatz that is simple enough an leads to good results.
-</p>
-
-<p>Finally we can define a depth of these circuits. Depth 2 means we repeat
-this ansatz two times. Which means our feature map becomes
-\( U_{\Phi(x)} \otimes H^{\otimes n} \otimes U_{\Phi(x)} \otimes H^{\otimes n} \).
-</p>
-</section>
-
 <section>
 <h2 id="plans-for-next-week">Plans for next week </h2>
 <ol>
+<p><li> Summary of quantum support vector machines</li>
 <p><li> Quantum neural networks</li>
 </ol>
 </section>

doc/pub/week14/html/week14-solarized.html

Lines changed: 12 additions & 37 deletions
@@ -225,7 +225,6 @@
 None,
 'pennylane-implementations'),
 ('Iris Dataset', 2, None, 'iris-dataset'),
-('Classical functions', 2, None, 'classical-functions'),
 ('Plans for next week', 2, None, 'plans-for-next-week')]}
 end of tocinfo -->
 

@@ -1430,7 +1429,7 @@ <h2 id="quantum-feature-map">Quantum feature map </h2>
 
 <p>A quantum feature map
 \( \boldsymbol{x}\mapsto\vert \phi(\boldsymbol{x})\rangle \) combined with the kernel
-\( k(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
+\( K(\boldsymbol{x},\boldsymbol{x}&#8217;)=\vert \langle\phi(\boldsymbol{x})\vert \phi(\boldsymbol{x}&#8217;)\rangle\vert ^2 \)
 defines a QSVM kernel. The intuition is that the quantum device is
 implicitly computing a rich similarity measure via its
 high-dimensional state space.

@@ -1450,7 +1449,7 @@ <h2 id="examples-of-feature-map-circuits">Examples of Feature Map Circuits </h2>
 <div class="alert alert-block alert-block alert-text-normal">
 <b>Angle (or rotation) embedding:</b>
 <p>
-<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}Y(x_i) \) to the $i$th qubit. For example:</p>
+<p>For an \( n \)-dimensional feature vector \( \boldsymbol{x}=(x_1,\dots,x_n) \), apply single-qubit rotations \( \mathrm{R}_X(x_i) \) or \( \mathrm{R}_Y(x_i) \) to the $i$th qubit. For example:</p>
 $$
 U(\boldsymbol{x})=\bigotimes{i=1}^n R_Y(x_i) = R_Y(x_1)\otimes \cdots \otimes R_Y(x_n).
 $$

@@ -1558,21 +1557,21 @@ <h2 id="quantum-support-vector-machine-theory">Quantum Support Vector Machine Th
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="qsvm-formulation-via-quantum-kernels">QSVM Formulation via Quantum Kernels </h2>
 
-<p>Given training data \( (\boldsymbol{x}i,y_i) \), we choose a quantum feature
-map \( U(\boldsymbol{x}) \) and define the kernel \( K{ij} =
+<p>Given training data \( (\boldsymbol{x}_i,y_i) \), we choose a quantum feature
+map \( U(\boldsymbol{x}) \) and define the kernel \( K_{ij} =
 \vert \langle\phi(\boldsymbol{x}i)\vert \phi(\boldsymbol{x}j)\rangle\vert ^2 \). Then we solve
 the binary type of SVM problems discussed last week, with \( x_i^T x_j \) replaced
-by \( K{ij} \). That is:
+by \( K_{ij} \). That is:
 </p>
 $$
-\max{\alpha} \; \sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}j),
+\max{\lambda} \; \sum_i \lambda_i - \frac{1}{2}\sum_{i,j} \lambda_i \lambda_j y_i y_j \,K(\boldsymbol{x}_i,\boldsymbol{x}_j),
 $$
 
-<p>subject to \( \sum_i \alpha_i y_i=0 \), \( 0\le \alpha_i\le C \). After
-solving for \( \alpha_i \), the decision function is
+<p>subject to \( \sum_i \lambda_i y_i=0 \), \( 0\le \lambda_i\le C \). After
+solving for \( \lambda_i \), the decision function is
 </p>
 $$
-f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \alpha_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
+f(\boldsymbol{x})=\operatorname{sign}\Bigl(\sum{i} \lambda_i y_i \,K(\boldsymbol{x}_i,\boldsymbol{x}) + b\Bigr).
 $$
 
 <p>All kernel values \( K(\boldsymbol{x}_i,\boldsymbol{x}_j) \) are estimated by the

@@ -1699,9 +1698,7 @@ <h2 id="actual-implementations">Actual implementations </h2>
 <p>In practice on current hardware, QSVM speed is dominated by circuit
 execution time. However, QSVM experiments are valuable to test
 expressivity: for some data, a simple quantum feature map yields
-perfect classification where classical kernels fail. For example, the
-<b>Swiss roll</b> or concentric circle datasets are often used as quantum
-kernel benchmarks.
+perfect classification where classical kernels fail.
 </p>
 
 <p>Finally, it is worth noting that both the variational quantum

@@ -1855,7 +1852,7 @@ <h2 id="defining-quantum-feature-maps-in-pennylane">Defining Quantum Feature Map
 <h2 id="computing-quantum-kernel-matrices">Computing Quantum Kernel Matrices </h2>
 
 <p>To train an SVM, we need the kernel matrix
-\( K_{ij}=k(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
+\( K_{ij}=K(\boldsymbol{x}_i,\boldsymbol{x}_j) \). PennyLane provides
 qml.kernels.kernel$\_$matrix, which takes two datasets and a kernel
 function. We must supply a function <b>kernel(x1,x2)</b> that returns the
 overlap of states.

@@ -2191,32 +2188,10 @@ <h2 id="iris-dataset">Iris Dataset </h2>
 </div>
 
 
-<!-- !split --><br><br><br><br><br><br><br><br><br><br>
-<h2 id="classical-functions">Classical functions </h2>
-
-<p>Finally we define the classical functions \( \phi_i(\vec{x}) = x_i \) and
-\( \phi_{i,j}(\vec{x}) = (\pi - x_i)( \pi- x_j) \).
-</p>
-
-<p>If we write this ansatz for 2 qubits and \( S \leq 2 \) we see how it
-simplifies:
-</p>
-$$
-U_{\Phi(x)} = \exp \left(i \left(x_1 Z_1 + x_2 Z_2 + (\pi - x_1)( \pi- x_2) Z_1 Z_2 \right) \right).
-$$
-
-<p>We won't get into details to much here, why we would take this ansatz.
-It is simply an ansatz that is simple enough an leads to good results.
-</p>
-
-<p>Finally we can define a depth of these circuits. Depth 2 means we repeat
-this ansatz two times. Which means our feature map becomes
-\( U_{\Phi(x)} \otimes H^{\otimes n} \otimes U_{\Phi(x)} \otimes H^{\otimes n} \).
-</p>
-
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="plans-for-next-week">Plans for next week </h2>
 <ol>
+<li> Summary of quantum support vector machines</li>
 <li> Quantum neural networks</li>
 </ol>
 <!-- ------------------- end of main content --------------- -->
