Describe the feature
Operators are a general way to perform operations based on some input states. For instance, currently we do:
```python
es = H.eigenstate(...)
vel = es.velocity()
```
The velocity is actually an operator, and we might wish to apply other operators to the eigenstate.
Would it make more sense to turn operators into objects that act on the states themselves? It adds a layer of abstraction, but also enables more automatic handling, and I think it will pay off down the road (for far more complicated operators).
I think we could potentially save some work here.
For instance:
```python
import sisl.physics.operators as si_op

op = si_op.anticommutator(si_op.SpinX, si_op.Velocity) / 2
# a second state could optionally be passed, e.g. op.diagonal(es1, es2)
diag_elements = op.diagonal(es)
matrix_form = op.matrix(es)
```
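As a concrete (purely hypothetical) reference for what such composition could amount to internally, here is a minimal sketch working directly on NumPy coefficient arrays; the names `Operator`, `anticommutator`, `diagonal` and `matrix` are assumptions of this sketch, not existing sisl API:

```python
# Hypothetical sketch only -- none of these names exist in sisl.
# An operator wraps a matrix M and evaluates <bra| M |ket> quantities for
# state coefficient arrays of shape (nstates, nbasis).
import numpy as np


class Operator:
    def __init__(self, M):
        self.M = np.asarray(M)

    def __truediv__(self, scalar):
        return Operator(self.M / scalar)

    def matrix(self, bra, ket=None):
        # full matrix of elements <bra_i| M |ket_j> (ket defaults to bra)
        ket = bra if ket is None else ket
        return np.conj(bra) @ self.M @ ket.T

    def diagonal(self, bra, ket=None):
        # only the elements <bra_i| M |ket_i>
        ket = bra if ket is None else ket
        return np.einsum("ij,jk,ik->i", np.conj(bra), self.M, ket)


def anticommutator(A, B):
    # {A, B} = A @ B + B @ A as a new Operator
    return Operator(A.M @ B.M + B.M @ A.M)
```

With `SpinX` and `Velocity` available as such matrix-backed operators, `anticommutator(SpinX, Velocity) / 2` composes naturally, and `diagonal`/`matrix` cover both the expectation-value and full-matrix use cases.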
This would also make many of the current methods more generic and simpler to use.
This also brings up the question of how we should deal with bra and ket objects.
Should we do:
```python
si_op.bra(es)
# or
es.bra
# or?
```
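To make the two spellings concrete, here is a tiny sketch (hypothetical names only) of what each implies structurally: a module-level function versus a property on the state class, both returning the same lightweight wrapper:

```python
# Hypothetical sketch, not sisl API.
import numpy as np


class Bra:
    """Lightweight view of a state's (conjugated) coefficients as a bra."""

    def __init__(self, state):
        self.state = np.conj(np.asarray(state))


# option 1: module-level function, used as si_op.bra(es)
def bra(es):
    return Bra(es.state)


# option 2: property on the state class, used as es.bra
class StateLike:
    def __init__(self, state):
        self.state = np.asarray(state)

    @property
    def bra(self):
        return Bra(self.state)
```

The free function keeps the state classes untouched, while the property keeps everything discoverable on the state object itself.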
Having these, one could compute the PDOS (unexpanded in energy) with something like this:
```python
op = si_op.SpinIdentity  # automatic handling of spinors (1 or 2)
PDOS_charge = es.bra * op.ket(es)
PDOS_x = es.bra * si_op.SpinX @ es.ket
PDOS_y = es.bra * si_op.SpinY @ es.ket
```
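For reference, what these expressions would have to compute under the hood is the usual contraction of the spinor components with the Pauli matrices (assuming an orthogonal basis here; a non-orthogonal basis would additionally involve the overlap matrix). A NumPy sketch with assumed shapes, not sisl API:

```python
# Hypothetical sketch: orbital-resolved spin moments ("PDOS" unexpanded in
# energy) for spinor states of shape (nstates, norbitals, 2); orthogonal basis.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)


def per_orbital(state, sigma=None):
    # <psi| sigma |psi> resolved per state and orbital; sigma=None -> identity
    if sigma is None:
        return np.einsum("ios,ios->io", np.conj(state), state).real
    return np.einsum("ios,st,iot->io", np.conj(state), sigma, state).real


rng = np.random.default_rng(0)
state = rng.normal(size=(4, 10, 2)) + 1j * rng.normal(size=(4, 10, 2))

PDOS_charge = per_orbital(state)           # es.bra * op.ket(es)
PDOS_x = per_orbital(state, sigma_x)       # es.bra * si_op.SpinX @ es.ket
PDOS_y = per_orbital(state, sigma_y)       # es.bra * si_op.SpinY @ es.ket
```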
So how do we want this to look?
What we need to decide is how the operator should deal with these points:
- name of the diagonal method (`diagonal` seems obvious)
- name of the full matrix method (diagonal + off-diagonal elements), `matrix`
- name of the full matrix form that is similar to PDOS, i.e. `state.conj() * state`
How should `bra`, `ket` and matrix operations look? It might seem weird to reuse `@` and `*`.
Should we aim for the code to look and feel like bra-ket notation, or what should we make it look like?
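As one concrete reference point for that question, here is a toy sketch (hypothetical, not sisl API) of what overloading `*` and `@` could make the bra-ket style look like:

```python
# Hypothetical sketch of bra-ket-flavoured overloading; not sisl API.
import numpy as np


class Ket:
    def __init__(self, state):
        self.state = np.asarray(state)  # shape (nstates, nbasis)


class Bra:
    def __init__(self, state):
        self.state = np.conj(np.asarray(state))

    def __mul__(self, ket):
        # <bra_i|ket_i>: per-state overlap
        return np.einsum("ij,ij->i", self.state, ket.state)


class Operator:
    def __init__(self, M):
        self.M = np.asarray(M)

    def __matmul__(self, ket):
        # M |ket>: apply the operator, returning a new ket
        return Ket(ket.state @ self.M.T)


n, m = 3, 5
rng = np.random.default_rng(1)
psi = rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m))

# reads roughly like <psi| M |psi>
expectation = Bra(psi) * (Operator(np.eye(m)) @ Ket(psi))
```

One caveat worth noting: `*` and `@` have equal precedence in Python and associate left to right, so an un-parenthesized `es.bra * si_op.SpinX @ es.ket` would first evaluate `es.bra * si_op.SpinX`; the bra class would then also have to know how to absorb an operator.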