Implement Pareto optimization #610

Open · wants to merge 6 commits into base: features/movingboundary-condensing-heat-exchanger
8 changes: 5 additions & 3 deletions docs/tutorials/pygmo_optimization.rst
@@ -96,7 +96,7 @@ simulation. Furthermore, you have to define methods
- to get component or connection parameters of the plant :code:`get_param`,
- to run a new simulation for every new input from PyGMO :code:`solve_model`
and
- to return the objective value :code:`get_objective`.
- to return the objective values :code:`get_objectives`.

First, we set up the class with the TESPy network.

@@ -109,9 +109,11 @@ First, we set up the class with the TESPy network.


Next, we add the methods :code:`get_param`, :code:`solve_model` and
:code:`get_objective`. On top of that, we add a setter working similarly as the
:code:`get_objectives`. On top of that, we add a setter working similarly as the
getter. The objective is to maximize thermal efficiency as defined in the
equation below.
equation below. The :code:`get_objectives` method calls a :code:`get_objective`
method and collects all objective values. This is useful if you are
implementing Pareto (multi-objective) optimization problems.

.. math::

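To make the tutorial change above concrete, here is a minimal sketch of the getter pair the text describes. :code:`SamplePlant`, :code:`calculate_efficiency` and the returned value are illustrative assumptions; only the :code:`get_objectives`/:code:`get_objective` pattern is taken from this pull request.

.. code-block:: python

    class SamplePlant:
        """Illustrative stand-in for a TESPy model wrapper (assumed names)."""

        def calculate_efficiency(self):
            # placeholder for post-processing of the solved network
            return 0.35

        def get_objective(self, objective=None):
            # pygmo minimizes, so a quantity to maximize is returned inverted
            if objective == "efficiency":
                return 1 / self.calculate_efficiency()
            msg = f"Objective '{objective}' is not implemented."
            raise NotImplementedError(msg)

        def get_objectives(self, objective_list):
            # collect one value per requested objective (pattern from this PR)
            return [self.get_objective(obj) for obj in objective_list]
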
1 change: 1 addition & 0 deletions docs/whats_new.rst
@@ -3,6 +3,7 @@ What's New

Discover notable new features and improvements in each release

.. include:: whats_new/v0-8-0.rst
.. include:: whats_new/v0-7-9.rst
.. include:: whats_new/v0-7-8-002.rst
.. include:: whats_new/v0-7-8-001.rst
24 changes: 24 additions & 0 deletions docs/whats_new/v0-8-0.rst
@@ -0,0 +1,24 @@
v0.8.0 - Under development
++++++++++++++++++++++++++

API Changes
###########
- The :code:`OptimizationProblem` implements the following changes:
- The dtype for :code:`objective` has been changed to list. With a single
element list, a single-objective optimization is carried out. With lists
containing more than one element, a multi-objective optimization is carried out.
- The function argument :code:`gen` of the :code:`run` method has been
renamed to :code:`evo`. The same applies to the :code:`individuals`
dataframe.
Your TESPy model now needs to provide a :code:`get_objectives` method
instead of :code:`get_objective`, and it must return a list of values.
- The intermediate printing during optimization has been removed.

New Features
############
- Modify the :code:`OptimizationProblem` class to allow multi-objective
optimization (`PR #610 <https://github.com/oemof/tespy/pull/610>`__).

Contributors
############
- Francesco Witte (`@fwitte <https://github.com/fwitte>`__)
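
To illustrate the API changes listed above, here is a hedged before/after sketch of user code. The objects :code:`plant`, :code:`variables`, :code:`constraints`, :code:`algo` and :code:`pop` are assumed to be set up as in the existing tutorial and are not defined here.

.. code-block:: python

    # before (v0.7.x style): a single objective passed as a string and
    # "generations" terminology
    # optimize = OptimizationProblem(
    #     plant, variables, constraints, objective="efficiency"
    # )
    # optimize.run(algo, pop, num_ind, num_gen)

    # after (v0.8.0 style): the objective(s) are passed as a list and the
    # run method counts "evolutions"
    optimize = OptimizationProblem(
        plant, variables, constraints, objective=["efficiency"]
    )
    optimize.run(algo, pop, num_ind, num_evo)

    # the model wrapped by OptimizationProblem must now provide
    # get_objectives, returning one value per entry of the objective list
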
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -24,7 +24,7 @@ exclude = ["docs/_build"]

[project]
name = "tespy"
version = "0.7.9.dev0"
version = "0.8.0.dev0"
description = "Thermal Engineering Systems in Python (TESPy)"
readme = "README.rst"
authors = [
2 changes: 1 addition & 1 deletion src/tespy/__init__.py
@@ -3,7 +3,7 @@
import os

__datapath__ = os.path.join(importlib.resources.files("tespy"), "data")
__version__ = '0.7.9.dev0 - Newton\'s Nature'
__version__ = '0.8.0.dev0'

# tespy data and connections import
from . import connections # noqa: F401
105 changes: 33 additions & 72 deletions src/tespy/tools/optimization.py
@@ -34,9 +34,9 @@ class OptimizationProblem:
constraints : dict
Dictionary containing the constraints for the model.

objective : str
Name of the objective. :code:`objective` is passed to the
:code:`get_objective` method of your tespy model instance.
objective : list
Name of the objective(s). :code:`objective` is passed to the
:code:`get_objectives` method of your tespy model instance.

Note
----
@@ -53,7 +53,7 @@
documentation.
"""

def __init__(self, model, variables={}, constraints={}, objective="objective"):
def __init__(self, model, variables={}, constraints={}, objective=[]):
if pg is None:
msg = (
"For this function of TESPy pygmo has to be installed. Either"
@@ -71,11 +71,14 @@ def __init__(self, model, variables={}, constraints={}, objective="objective"):
# merge the passed values into the default dictionary structure
self.variables = merge_dicts(variables, default_variables)
self.constraints = merge_dicts(constraints, default_constraints)
self.objective = objective
self.variable_list = []
self.constraint_list = []

self.objective_list = [objective]
if not isinstance(objective, list):
msg = "The objective(s) must be passed as a list."
raise TypeError(msg)

self.objective_list = objective
self.nobj = len(self.objective_list)

self.bounds = [[], []]
@@ -183,7 +186,7 @@ def fitness(self, x):
i += 1

self.model.solve_model(**self.input_dict)
f1 = [self.model.get_objective(self.objective)]
f1 = self.model.get_objectives(self.objective_list)

cu = self.collect_constraints("upper")
cl = self.collect_constraints("lower")
@@ -203,35 +206,25 @@ def get_bounds(self):
"""Return bounds of decision variables."""
return self.bounds

def _process_generation_data(self, gen, pop):
"""Process the data of the individuals within one generation.
def _process_generation_data(self, evo, pop):
"""Process the data of the individuals within one evolution.

Parameters
----------
gen : int
Generation number.
evo : int
Evolution number.

pop : pygmo.population
PyGMO population object.
"""
individual = 0
for x in pop.get_x():
self.individuals.loc[(gen, individual), self.variable_list] = x
individual += 1

individual = 0
for objective in pop.get_f():
for individual, (x, obj) in enumerate(zip(pop.get_x(), pop.get_f())):
self.individuals.loc[(evo, individual), self.variable_list] = x
self.individuals.loc[
(gen, individual),
(evo, individual),
self.objective_list + self.constraint_list
] = objective
individual += 1
] = obj

self.individuals['valid'] = (
self.individuals[self.constraint_list] < 0
).all(axis='columns')

def run(self, algo, pop, num_ind, num_gen):
def run(self, algo, pop, num_ind, num_evo):
"""Run the optimization algorithm.

Parameters
@@ -245,60 +238,28 @@ def run(self, algo, pop, num_ind, num_gen):
num_ind : int
Number of individuals.

num_gen : int
Number of generations.
num_evo : int
Number of evolutions.
"""

self.individuals = pd.DataFrame(
index=range(num_gen * num_ind)
)
self.individuals = pd.DataFrame(index=range(num_evo * num_ind))

self.individuals["gen"] = [
gen for gen in range(num_gen) for ind in range(num_ind)
self.individuals["evo"] = [
evo for evo in range(num_evo) for _ in range(num_ind)
]
self.individuals["ind"] = [
ind for gen in range(num_gen) for ind in range(num_ind)
ind for _ in range(num_evo) for ind in range(num_ind)
]

self.individuals.set_index(["gen", "ind"], inplace=True)

# replace prints with logging
gen = 0
for gen in range(num_gen - 1):
self._process_generation_data(gen, pop)

print('Evolution: {}'.format(gen))
for i in range(len(self.objective_list)):
print(
self.objective_list[i] + ': {}'.format(
round(pop.champion_f[i], 4)
)
)
for i in range(len(self.variable_list)):
print(
self.variable_list[i] + ': {}'.format(
round(pop.champion_x[i], 4)
)
)
pop = algo.evolve(pop)

if num_gen > 1:
gen += 1
self.individuals.set_index(["evo", "ind"], inplace=True)

self._process_generation_data(gen, pop)
evo = 0
for evo in range(num_evo - 1):
self._process_generation_data(evo, pop)
pop = algo.evolve(pop)

print('Final evolution: {}'.format(gen))
for i in range(len(self.objective_list)):
print(
self.objective_list[i] + ': {}'.format(
round(pop.champion_f[i], 4)
)
)
for i in range(len(self.variable_list)):
print(
self.variable_list[i] + ': {}'.format(
round(pop.champion_x[i], 4)
)
)
if num_evo > 1:
evo += 1

self._process_generation_data(evo, pop)
return pop
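
The following sketch shows how the revised :code:`run` method might be driven end to end. It assumes a :code:`plant` model plus :code:`variables` and :code:`constraints` dictionaries as in the tutorial script below; the choice of :code:`pg.ihs` mirrors the current tutorial and is not mandated by this pull request.

.. code-block:: python

    import pygmo as pg

    from tespy.tools.optimization import OptimizationProblem

    # plant, variables and constraints are assumed to exist, see the tutorial
    optimize = OptimizationProblem(
        plant, variables, constraints, objective=["efficiency"]
    )

    num_ind = 10   # individuals per population
    num_evo = 15   # number of evolutions (formerly "generations"), assumed value

    algo = pg.algorithm(pg.ihs())
    pop = pg.population(pg.problem(optimize), size=num_ind)
    pop = optimize.run(algo, pop, num_ind, num_evo)

    # per-individual results are collected in a dataframe indexed by (evo, ind)
    print(optimize.individuals.head())

For objective lists with more than one entry, a multi-objective algorithm such as :code:`pg.nsga2` would be the natural replacement for :code:`pg.ihs` here.
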
17 changes: 16 additions & 1 deletion tutorial/advanced/optimization_example.py
@@ -196,6 +196,21 @@ def solve_model(self, **kwargs):
self.nw.lin_dep = True
self.nw.solve("design", init_only=True, init_path=self.stable)

def get_objectives(self, objective_list):
"""Get the objective values

Parameters
----------
objective_list : list
Names of the objectives

Returns
-------
list
Values of the objectives
"""
return [self.get_objective(obj) for obj in objective_list]

def get_objective(self, objective=None):
"""
Get the current objective function evaluation.
@@ -242,7 +257,7 @@ def get_objective(self, objective=None):
}

optimize = OptimizationProblem(
plant, variables, constraints, objective="efficiency"
plant, variables, constraints, objective=["efficiency"]
)
# %%[sec_4]
num_ind = 10