New tutorial/example for qGAN #339
@@ -0,0 +1,41 @@
{
mu = sigma =
why do you
There are many typos. Please replace them all: hardardman --> Hadamard, intialise --> initialize.
Lint issues. Please keep the number of characters in a line under 80. For example:
# Discretize the remaining samples so the continous distribution can be
# approximated by a discrete distribution
What's the purpose of this line? It uses the list of bin values [0.5, 1.5, 2.5, 3.5] for num_qubits = 2. Why not use [0., 1.0, 2.0, 3.0, 4.0] with # Convert the decimal into binary tensor
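The discretization step these comments refer to can be sketched with plain NumPy. The `mu`/`sigma` values and `num_qubits` below are illustrative placeholders, not values taken from the notebook:

```python
import numpy as np

num_qubits = 2  # illustrative; matches the tutorial's 2-qubit example
rng = np.random.default_rng(0)

# Draw continuous samples from a lognormal distribution (placeholder
# mu/sigma), then discretize so the continuous distribution can be
# approximated by a discrete one over the integers 0 .. 2**num_qubits - 1.
samples = rng.lognormal(mean=1.0, sigma=1.0, size=1000)
discrete = np.clip(np.round(samples), 0, 2**num_qubits - 1).astype(int)

assert discrete.min() >= 0 and discrete.max() <= 2**num_qubits - 1
```

Clipping plus rounding maps every continuous sample onto an integer the 2-qubit register can represent, which is one simple way to answer the bin-edge question above.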
Also, we can clean up the last for-loop code. noise = tfq.convert_to_tensor(
Why does the result have 3 qubits even though num_qubits = 2 and you mentioned "2-qubit example"?

Use the format theta = sympy.symbols(f'a0:{num_qubits}')
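The suggested `sympy.symbols` range format expands to individually named symbols in one call; a minimal check (assuming SymPy is installed, and using an illustrative `num_qubits`):

```python
import sympy

num_qubits = 3  # illustrative

# 'a0:3' expands to the symbols a0, a1, a2 in a single call,
# avoiding a manual loop over symbol names.
theta = sympy.symbols(f'a0:{num_qubits}')
print([s.name for s in theta])  # → ['a0', 'a1', 'a2']
```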
Lint please.
inputs = tf.keras.Input(shape=(),
Simplify it:
parameterized_circuit = cirq.Circuit([cirq.ry(
Lint please. Keep lines < 80 characters.
# Entangle all the qubits by applying CZ in a circular fashion
# except when there are only two qubits and then just apply one CZ
Simplify it.
entangle_circuit = cirq.Circuit(
    [cirq.CZ(q1, q2) for q1, q2 in zip(qubits[0:-1], qubits[1:])])
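The `zip(qubits[:-1], qubits[1:])` pattern in this suggestion pairs each qubit with its successor; to make the entanglement circular (as the quoted comment describes), a wrap-around pair is appended. A plain-Python sketch of which pairs would receive a CZ, with string labels standing in for cirq qubit objects:

```python
def cz_pairs(qubits):
    """Return the qubit pairs a circular CZ entangler acts on.

    Nearest neighbours are paired along a line; the wrap-around
    (last, first) pair is added only when there are more than two
    qubits, matching "when there are only two qubits, just apply
    one CZ".
    """
    pairs = list(zip(qubits[:-1], qubits[1:]))
    if len(qubits) > 2:
        pairs.append((qubits[-1], qubits[0]))
    return pairs

print(cz_pairs(['q0', 'q1']))        # → [('q0', 'q1')]
print(cz_pairs(['q0', 'q1', 'q2']))  # → [('q0', 'q1'), ('q1', 'q2'), ('q2', 'q0')]
```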
Lint please. # Add this circuit layer to the network with an output on measurements on
Simplify it. Also, saying "Important to have repetition = 1" is not helpful at all. Could you describe the real reason?
observables = [(cirq.Z(q)+1)/2 for q in qubits]

This is not a good place to have this function.
Also, please DO NOT use the python keyword. Also, could you use

Hmm, you have an input layer with 2 input nodes, 2 hidden layers with 50 and 20 nodes respectively, and an output layer with 1 output node.

Please remove all the commented lines if you don't use them anymore.
As you can see,

Lint please / use tf.math.reduce_mean, sum -> loss, and please describe the definition of discriminator_gen_outputs in the new text block.

Again, discriminator_real_outputs and discriminator_gen_outputs are used before their assignments. Please describe them in a text block.
Also, I don't understand why you require the following 4 tensors here.
zeros = tf.zeros(sum_tensor.shape)
ones = tf.ones(sum_tensor.shape)
twos = tf.ones(sum_tensor.shape) * 2
threes = tf.ones(sum_tensor.shape) * 3
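If those four tensors are only used to count how often each outcome 0–3 appears, a single bincount does the same job. A NumPy sketch on illustrative data (TensorFlow's `tf.math.bincount` is the analogous call):

```python
import numpy as np

# Decimal-valued generator outputs for a 2-qubit example (illustrative data).
generated = np.array([0, 1, 1, 3, 2, 1, 0, 3])

# Count occurrences of each outcome 0..3 without building separate
# zeros/ones/twos/threes tensors to compare against.
counts = np.bincount(generated, minlength=4)
print(counts)  # → [2 3 1 2]
```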
Please don't create this tensor inside this function, just create and use directly in the main training loop.
Simplify the code.
sum_tensor = tf.reduce_sum(tf.map_fn(
    lambda t: t * 2 ** tf.range(tf.cast(
        gan_model.generated_data.shape[1], dtype=tf.int64)),
    tf.cast(tf.reverse(tensor=gan_model.generated_data, axis=[1]),
            dtype=tf.int64)), axis=1)
-->
# Convert binary to decimal
def binary_to_decimal(discrete_data):
    return tf.cast(tf.math.reduce_sum(tf.bitwise.left_shift(
        tf.cast(discrete_data, dtype=tf.int64),
        tf.range(num_qubits)), axis=1), dtype=tf.float32)

I don't know if I am generating the correct format of data. I have tried a whole mix of options, from averaging over a number of repetitions to the frequency of occurrence of each number, but I am not sure what is correct. This version seemed to make the most sense to me, as the discriminator will see a sample which is a set of outputs of a circuit with the generator weights, and so should be able to deduce the distribution from that. I thought that averaging might lose information about the distribution.

I need to change this to reflect the fact that I removed the layer with 50 nodes. Though I'm not sure if I should add it back, as removing it didn't seem to make a difference.

I need to add an explanation of how this all works. However, it currently doesn't work, so I cannot write it yet.

If it was learning correctly, these graphs would show it tending towards a lognormal distribution. However, as you can see, it isn't learning correctly. I have tried fiddling around with the learning rate etc. but I can't seem to fix it. Also, I don't think the entropy calculation is working correctly, but it isn't necessary - I might just remove it.
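The suggested binary-to-decimal conversion can be sanity-checked with a NumPy equivalent of the shift-and-sum (least-significant bit first, as in the `tf.bitwise.left_shift` suggestion; `num_qubits` here is illustrative):

```python
import numpy as np

num_qubits = 2  # illustrative

def binary_to_decimal(discrete_data):
    """Convert rows of bits (least-significant bit first) to decimals
    by left-shifting each bit into place and summing, mirroring the
    tf.bitwise.left_shift version proposed in the review."""
    bits = np.asarray(discrete_data, dtype=np.int64)
    return (bits << np.arange(num_qubits)).sum(axis=1).astype(np.float32)

samples = [[0, 0], [1, 0], [0, 1], [1, 1]]
print(binary_to_decimal(samples))  # → [0. 1. 2. 3.]
```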
"nbformat": 4, | ||
"nbformat_minor": 0, | ||
"metadata": { | ||
"colab": { | ||
"name": "Untitled0.ipynb", | ||
"provenance": [], | ||
"authorship_tag": "ABX9TyO6fSnFkAXNT6HWNq7ayKg2", | ||
"include_colab_link": true | ||
}, | ||
"kernelspec": { | ||
"name": "python3", | ||
"display_name": "Python 3" | ||
} | ||
}, | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": { | ||
"id": "view-in-github", | ||
"colab_type": "text" | ||
}, | ||
"source": [ | ||
"<a href=\"https://colab.research.google.com/github/oliverob/quantum/blob/master/docs/tutorials/qGAN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"metadata": { | ||
"id": "KeNu6kUJsmxj", | ||
"colab_type": "code", | ||
"colab": {} | ||
}, | ||
"source": [ | ||
"" | ||
], | ||
"execution_count": null, | ||
"outputs": [] | ||
} | ||
] | ||
} |
some minor typos: arbitrary n-qubit. "A qGAN is a quantum version of ..."

Well, you said ... is related to the probability of the state j, but it is the probability of the state j.