Unify gate memory layout for lgpu and ltensor #959

Merged
77 commits, merged Nov 8, 2024

Changes from all commits (77 commits)
4f37e20
initial commit
multiphaseCFD Oct 8, 2024
dad7de8
Auto update version from '0.39.0-dev40' to '0.39.0-dev41'
ringo-but-quantum Oct 8, 2024
3d30f21
Merge branch 'master' into applyControlledMatrix_LGPU
multiphaseCFD Oct 16, 2024
dd0307a
Auto update version from '0.39.0-dev45' to '0.39.0-dev46'
ringo-but-quantum Oct 16, 2024
d81874b
fix applyControlledMatrix
multiphaseCFD Oct 16, 2024
ed2df45
update GlobalPhase and CGlobalPhase gate support
multiphaseCFD Oct 16, 2024
e06cd31
fix applyControlledMatrix
multiphaseCFD Oct 16, 2024
ad6a179
update unit tests
multiphaseCFD Oct 17, 2024
c4b6d85
make format
multiphaseCFD Oct 17, 2024
3ce16e6
update GlobalPhase support with `scaleC_CUDA`
multiphaseCFD Oct 17, 2024
008c4f3
remove global_phase_diagonal
multiphaseCFD Oct 17, 2024
015b50a
remove unused lines
multiphaseCFD Oct 17, 2024
02eb290
initial commit
multiphaseCFD Oct 17, 2024
dd2dbab
Auto update version from '0.39.0-dev45' to '0.39.0-dev46'
ringo-but-quantum Oct 17, 2024
a77fef3
add py unit tests for globalphase gate
multiphaseCFD Oct 17, 2024
dffb78f
raise error for mpi-cGlobalPhase
multiphaseCFD Oct 17, 2024
2641cfc
make format
multiphaseCFD Oct 17, 2024
56b26e2
add changelog
multiphaseCFD Oct 17, 2024
38b762e
Trigger CIs
multiphaseCFD Oct 17, 2024
a0abe2e
quick fix for the setup.py
multiphaseCFD Oct 18, 2024
2f65169
add C++ unit tests
multiphaseCFD Oct 18, 2024
d303e87
Auto update version from '0.39.0-dev46' to '0.39.0-dev47'
ringo-but-quantum Oct 18, 2024
95662ec
add more C++ unit tests
multiphaseCFD Oct 18, 2024
c035b51
update the frontend
multiphaseCFD Oct 21, 2024
faad0e9
Auto update version from '0.39.0-dev47' to '0.39.0-dev48'
ringo-but-quantum Oct 21, 2024
148b8df
update changelog
multiphaseCFD Oct 21, 2024
a87d77b
Merge branch 'master' into applyControlledMatrix_LGPU
multiphaseCFD Oct 21, 2024
7ea85d8
Auto update version from '0.39.0-dev47' to '0.39.0-dev48'
ringo-but-quantum Oct 21, 2024
388efae
Trigger CIs
multiphaseCFD Oct 21, 2024
5974b47
Merge branch 'master' into update_gphase_lgpu
multiphaseCFD Oct 21, 2024
fbae962
Auto update version from '0.39.0-dev47' to '0.39.0-dev48'
ringo-but-quantum Oct 21, 2024
7ec070e
update py CIs
multiphaseCFD Oct 21, 2024
c1a141a
add more C++ unit tests
multiphaseCFD Oct 22, 2024
b54b8ca
add checks for ctrls and tgts
multiphaseCFD Oct 22, 2024
b7ec1da
tidy up naming
multiphaseCFD Oct 22, 2024
2970625
Merge branch 'update_gphase_lgpu' into applyControlledMatrix_LGPU
multiphaseCFD Oct 22, 2024
4e69bb3
update
multiphaseCFD Oct 22, 2024
454e4e0
quick fix
multiphaseCFD Oct 22, 2024
add8525
Merge branch 'master' into applyControlledMatrix_LGPU
multiphaseCFD Oct 23, 2024
9b56685
Auto update version from '0.39.0-dev48' to '0.39.0-dev49'
ringo-but-quantum Oct 23, 2024
b0300d6
update
multiphaseCFD Oct 23, 2024
7e4a34b
quick fix
multiphaseCFD Oct 23, 2024
8c07c01
make format
multiphaseCFD Oct 23, 2024
e7b2ec7
update lgpu and ltensor installation instructions
multiphaseCFD Oct 23, 2024
3e87117
initial commit
multiphaseCFD Oct 23, 2024
42ab62f
Auto update version from '0.39.0-dev48' to '0.39.0-dev49'
ringo-but-quantum Oct 23, 2024
f9e6a3e
revert some changes
multiphaseCFD Oct 23, 2024
42e3db3
Merge branch 'master' into unify_gate_memory_layout_lgpu_ltensor
multiphaseCFD Oct 25, 2024
a510f3a
Auto update version from '0.39.0-dev49' to '0.39.0-dev50'
ringo-but-quantum Oct 25, 2024
f2306cc
apply Joseph's comments
multiphaseCFD Oct 25, 2024
e6fe638
Auto update version from '0.39.0-dev49' to '0.39.0-dev51'
ringo-but-quantum Oct 25, 2024
3da92b2
apply afredo's comments
multiphaseCFD Oct 25, 2024
c3bd58a
add todos
multiphaseCFD Oct 25, 2024
e17ab64
Merge branch 'master' into applyControlledMatrix_LGPU
multiphaseCFD Oct 25, 2024
fb8d102
Auto update version from '0.39.0-dev50' to '0.39.0-dev51'
ringo-but-quantum Oct 25, 2024
eb7e171
update pybind and C++ tests
multiphaseCFD Oct 28, 2024
e3faa62
Auto update version from '0.39.0-dev51' to '0.39.0-dev52'
ringo-but-quantum Oct 28, 2024
fb58bd1
Merge branch 'master' into applyControlledMatrix_LGPU
multiphaseCFD Oct 29, 2024
4e0d219
Auto update version from '0.39.0-dev51' to '0.39.0-dev52'
ringo-but-quantum Oct 29, 2024
8b9f972
update py unit tests to avoid lq compilations
multiphaseCFD Oct 29, 2024
bc59f43
apply some Ali's suggestions
multiphaseCFD Oct 29, 2024
a26f711
Merge branch 'applyControlledMatrix_LGPU' into unify_gate_memory_layo…
multiphaseCFD Oct 29, 2024
fd474f1
rebase fix
multiphaseCFD Oct 29, 2024
a02a236
move wire reversion to `applyDeviceGeneralGate_`
multiphaseCFD Oct 29, 2024
ce084c2
update c++ unit tests
multiphaseCFD Oct 29, 2024
c347b27
update c++ unit tests
multiphaseCFD Oct 29, 2024
c03e4ef
revert changes in cmake
multiphaseCFD Oct 29, 2024
40c41b1
disable adjoint tests for N-controlled gates
multiphaseCFD Oct 29, 2024
f049317
Merge branch 'applyControlledMatrix_LGPU' into unify_gate_memory_layo…
multiphaseCFD Oct 29, 2024
1dd8295
update MPI backend
multiphaseCFD Oct 29, 2024
52d6d2f
Merge branch 'master' into unify_gate_memory_layout_lgpu_ltensor
multiphaseCFD Nov 7, 2024
498528f
Auto update version from '0.40.0-dev3' to '0.40.0-dev4'
ringo-but-quantum Nov 7, 2024
4ff25cd
make format
multiphaseCFD Nov 7, 2024
6e100c3
update generators for excitation gates
multiphaseCFD Nov 7, 2024
8cb35b0
add changelog
multiphaseCFD Nov 7, 2024
b8ac6bf
update generator tests
multiphaseCFD Nov 7, 2024
832eb8e
make format
multiphaseCFD Nov 7, 2024
3 changes: 3 additions & 0 deletions .github/CHANGELOG.md
@@ -9,6 +9,9 @@

### Improvements

* Unify excitation gates memory layout to row-major for both LGPU and LT.
[(#959)](https://github.com/PennyLaneAI/pennylane-lightning/pull/959)

* Update the `lightning.kokkos` CUDA backend for compatibility with Catalyst.
[(#942)](https://github.com/PennyLaneAI/pennylane-lightning/pull/942)

2 changes: 1 addition & 1 deletion pennylane_lightning/core/_version.py
@@ -16,4 +16,4 @@
Version number (major.minor.patch[-label])
"""

__version__ = "0.40.0-dev3"
__version__ = "0.40.0-dev4"
@@ -399,39 +399,18 @@ class StateVectorCudaMPI final
applyParametricPauliGate({opName}, ctrls, tgts, params.front(),
adjoint);
} else if (opName == "Rot" || opName == "CRot") {
if (adjoint) {
auto rot_matrix =
cuGates::getRot<CFP_t>(params[2], params[1], params[0]);
applyDeviceMatrixGate(rot_matrix.data(), ctrls, tgts, true);
} else {
auto rot_matrix =
cuGates::getRot<CFP_t>(params[0], params[1], params[2]);
applyDeviceMatrixGate(rot_matrix.data(), ctrls, tgts, false);
}
auto rot_matrix =
adjoint
? cuGates::getRot<CFP_t>(params[2], params[1], params[0])
: cuGates::getRot<CFP_t>(params[0], params[1], params[2]);
applyDeviceMatrixGate(rot_matrix.data(), ctrls, tgts, adjoint);
} else if (opName == "Matrix") {
DataBuffer<CFP_t, int> d_matrix{
gate_matrix.size(), BaseType::getDataBuffer().getDevTag(),
true};
d_matrix.CopyHostDataToGpu(gate_matrix.data(), d_matrix.getLength(),
false);
// ensure wire indexing correctly preserved for tensor-observables
const std::vector<std::size_t> ctrls_local{ctrls.rbegin(),
ctrls.rend()};
const std::vector<std::size_t> tgts_local{tgts.rbegin(),
tgts.rend()};
applyDeviceMatrixGate(d_matrix.getData(), ctrls_local, tgts_local,
adjoint);
applyDeviceMatrixGate(gate_matrix.data(), ctrls, tgts, adjoint);
} else if (par_gates_.find(opName) != par_gates_.end()) {
par_gates_.at(opName)(wires, adjoint, params);
} else { // No offloadable function call; defer to matrix passing
auto &&par =
(params.empty()) ? std::vector<Precision>{0.0} : params;
// ensure wire indexing correctly preserved for tensor-observables
const std::vector<std::size_t> ctrls_local{ctrls.rbegin(),
ctrls.rend()};
const std::vector<std::size_t> tgts_local{tgts.rbegin(),
tgts.rend()};

if (!gate_cache_.gateExists(opName, par[0]) &&
gate_matrix.empty()) {
std::string message = "Currently unsupported gate: " + opName;
@@ -440,8 +419,8 @@ class StateVectorCudaMPI final
gate_cache_.add_gate(opName, par[0], gate_matrix);
}
applyDeviceMatrixGate(
gate_cache_.get_gate_device_ptr(opName, par[0]), ctrls_local,
tgts_local, adjoint);
gate_cache_.get_gate_device_ptr(opName, par[0]), ctrls, tgts,
adjoint);
}
}

@@ -1826,9 +1805,8 @@ class StateVectorCudaMPI final
* @param tgts Target qubits.
* @param use_adjoint Use adjoint of given gate.
*/
void applyCuSVDeviceMatrixGate(const CFP_t *matrix,
const std::vector<int> &ctrls,
const std::vector<int> &tgts,
void applyCuSVDeviceMatrixGate(const CFP_t *matrix, std::vector<int> &ctrls,
std::vector<int> &tgts,
bool use_adjoint = false) {
void *extraWorkspace = nullptr;
std::size_t extraWorkspaceSizeInBytes = 0;
@@ -1846,6 +1824,9 @@
compute_type = CUSTATEVEC_COMPUTE_32F;
}

std::reverse(tgts.begin(), tgts.end());
std::reverse(ctrls.begin(), ctrls.end());

// check the size of external workspace
PL_CUSTATEVEC_IS_SUCCESS(custatevecApplyMatrixGetWorkspaceSize(
/* custatevecHandle_t */ handle_.get(),
@@ -314,29 +314,12 @@ class StateVectorCudaManaged
applyDeviceMatrixGate_(rot_matrix.data(), ctrls, tgts, false);
}
} else if (opName == "Matrix") {
DataBuffer<CFP_t, int> d_matrix{
gate_matrix.size(), BaseType::getDataBuffer().getDevTag(),
true};
d_matrix.CopyHostDataToGpu(gate_matrix.data(), d_matrix.getLength(),
false);
// ensure wire indexing correctly preserved for tensor-observables
const std::vector<std::size_t> ctrls_local{ctrls.rbegin(),
ctrls.rend()};
const std::vector<std::size_t> tgts_local{tgts.rbegin(),
tgts.rend()};
applyDeviceMatrixGate_(d_matrix.getData(), ctrls_local, tgts_local,
adjoint);
applyDeviceMatrixGate_(gate_matrix.data(), ctrls, tgts, adjoint);
} else if (par_gates_.find(opName) != par_gates_.end()) {
par_gates_.at(opName)(wires, adjoint, params);
} else { // No offloadable function call; defer to matrix passing
auto &&par =
(params.empty()) ? std::vector<Precision>{0.0} : params;
// ensure wire indexing correctly preserved for tensor-observables
const std::vector<std::size_t> ctrls_local{ctrls.rbegin(),
ctrls.rend()};
const std::vector<std::size_t> tgts_local{tgts.rbegin(),
tgts.rend()};

if (!gate_cache_.gateExists(opName, par[0]) &&
gate_matrix.empty()) {
std::string message = "Currently unsupported gate: " + opName +
@@ -346,8 +329,8 @@ class StateVectorCudaManaged
gate_cache_.add_gate(opName, par[0], gate_matrix);
}
applyDeviceMatrixGate_(
gate_cache_.get_gate_device_ptr(opName, par[0]), ctrls_local,
tgts_local, adjoint);
gate_cache_.get_gate_device_ptr(opName, par[0]), ctrls, tgts,
adjoint);
}
}

@@ -432,9 +415,6 @@ class StateVectorCudaManaged

gate_cache_.add_gate(opName, par[0], matrix_cu);
}
std::reverse(ctrlsInt.begin(), ctrlsInt.end());
std::reverse(tgtsInt.begin(), tgtsInt.end());
std::reverse(ctrls_valuesInt.begin(), ctrls_valuesInt.end());
applyDeviceGeneralGate_(
gate_cache_.get_gate_device_ptr(opName, par[0]), ctrlsInt,
tgtsInt, ctrls_valuesInt, adjoint);
@@ -474,10 +454,6 @@ class StateVectorCudaManaged
auto ctrls_valuesInt =
Pennylane::Util::cast_vector<bool, int>(controlled_values);

std::reverse(ctrlsInt.begin(), ctrlsInt.end());
std::reverse(tgtsInt.begin(), tgtsInt.end());
std::reverse(ctrls_valuesInt.begin(), ctrls_valuesInt.end());

applyDeviceGeneralGate_(d_matrix.getData(), ctrlsInt, tgtsInt,
ctrls_valuesInt, inverse);
}
@@ -1620,10 +1596,9 @@ class StateVectorCudaManaged
* @param ctrls_values Control values.
* @param use_adjoint Use adjoint of given gate. Defaults to false.
*/
void applyDeviceGeneralGate_(const CFP_t *matrix,
const std::vector<int> &ctrls,
const std::vector<int> &tgts,
const std::vector<int> &ctrls_values,
void applyDeviceGeneralGate_(const CFP_t *matrix, std::vector<int> &ctrls,
std::vector<int> &tgts,
std::vector<int> &ctrls_values,
bool use_adjoint = false) {
void *extraWorkspace = nullptr;
std::size_t extraWorkspaceSizeInBytes = 0;
@@ -1641,6 +1616,10 @@
compute_type = CUSTATEVEC_COMPUTE_32F;
}

std::reverse(tgts.begin(), tgts.end());
std::reverse(ctrls.begin(), ctrls.end());
std::reverse(ctrls_values.begin(), ctrls_values.end());

// check the size of external workspace
PL_CUSTATEVEC_IS_SUCCESS(custatevecApplyMatrixGetWorkspaceSize(
/* custatevecHandle_t */ handle_.get(),
@@ -793,14 +793,10 @@ TEST_CASE("Generators::applyGeneratorControlledPhaseShift",
}

TEST_CASE("Generators::applyGeneratorSingleExcitation", "[GateGenerators]") {
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0, -1.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 1.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
16, {0.0, 0.0});
matrix[6] = {0.0, -1.0};
matrix[9] = {0.0, 1.0};
std::mt19937 re{1337U};

for (std::size_t num_qubits = 2; num_qubits <= 5; num_qubits++) {
@@ -875,14 +871,12 @@ TEST_CASE("Generators::applyGeneratorSingleExcitation", "[GateGenerators]") {

TEST_CASE("Generators::applyGeneratorSingleExcitationMinus",
"[GateGenerators]") {
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{1.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0,-1.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 1.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}, {1.0, 0.0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
16, {0.0, 0.0});
matrix[0] = {1.0, 0.0};
matrix[6] = {0.0, -1.0};
matrix[9] = {0.0, 1.0};
matrix[15] = {1.0, 0.0};
std::mt19937 re{1337U};

for (std::size_t num_qubits = 2; num_qubits <= 5; num_qubits++) {
@@ -957,14 +951,12 @@ TEST_CASE("Generators::applyGeneratorSingleExcitationMinus",

TEST_CASE("Generators::applyGeneratorSingleExcitationPlus",
"[GateGenerators]") {
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{-1.0, 0.0},{0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0,-1.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 1.0}, {0.0, 0.0}, {0.0, 0.0},
{0.0, 0.0}, {0.0, 0.0}, {0.0, 0.0}, {-1.0, 0.0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
16, {0.0, 0.0});
matrix[0] = {-1.0, 0.0};
matrix[6] = {0.0, -1.0};
matrix[9] = {0.0, 1.0};
matrix[15] = {-1.0, 0.0};
std::mt19937 re{1337U};

for (std::size_t num_qubits = 2; num_qubits <= 5; num_qubits++) {
@@ -1058,26 +1050,10 @@ TEST_CASE("Generators::applyGeneratorDoubleExcitation_GPU",
*/
// clang-format on

std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, -1.0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 1.0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
256, {0.0, 0.0});
matrix[60] = {0.0, -1.0};
matrix[195] = {0.0, 1.0};
std::mt19937 re{1337U};

for (std::size_t num_qubits = 4; num_qubits <= 8; num_qubits++) {
@@ -1167,26 +1143,16 @@ TEST_CASE("Generators::applyGeneratorDoubleExcitation_GPU",

TEST_CASE("Generators::applyGeneratorDoubleExcitationMinus_GPU",
"[GateGenerators]") {
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, -1.0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 1.0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{1.0, 0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
256, {0.0, 0.0});
matrix[60] = {0.0, -1.0};
matrix[195] = {0.0, 1.0};
for (std::size_t i = 0; i < 16; i++) {
if (i != 3 && i != 12) {
const size_t idx = i * 17;
matrix[idx] = {1.0, 0.0};
}
}
std::mt19937 re{1337U};

for (std::size_t num_qubits = 4; num_qubits <= 8; num_qubits++) {
@@ -1276,26 +1242,16 @@ TEST_CASE("Generators::applyGeneratorDoubleExcitationMinus_GPU",

TEST_CASE("Generators::applyGeneratorDoubleExcitationPlus_GPU",
"[GateGenerators]") {
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix{
// clang-format off
{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, -1.0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 1.0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0},{0, 0},
{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{0, 0},{-1.0, 0}
// clang-format on
};
std::vector<typename StateVectorCudaManaged<double>::CFP_t> matrix(
256, {0.0, 0.0});
matrix[60] = {0.0, -1.0};
matrix[195] = {0.0, 1.0};
for (std::size_t i = 0; i < 16; i++) {
if (i != 3 && i != 12) {
const size_t idx = i * 17;
matrix[idx] = {-1.0, 0.0};
}
}
std::mt19937 re{1337U};

for (std::size_t num_qubits = 4; num_qubits <= 8; num_qubits++) {