Planner Interfaces¶
Optimization algorithms in Olympus can be accessed via Planner objects. These expose both higher- and lower-level methods that give you more or less control over the optimization. The most important methods for any Planner in Olympus are:

- optimize
- recommend
- ask and tell

Here we show how these three interfaces can be used to optimize a surface of choice. Let's start by instantiating a specific optimizer and emulator.
[1]:
from olympus import Planner, Emulator, Campaign
planner = Planner('Hyperopt', goal='maximize')
emulator = Emulator(dataset='hplc', model='BayesNeuralNet')
[INFO] Loading emulator using a BayesNeuralNet model for the dataset hplc...
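Any other optimization algorithm available in Olympus can be selected in the same way, and the goal argument switches between maximization and minimization. A minimal sketch (assuming, for instance, that the RandomSearch planner is available in your installation):

# select a different algorithm and optimize toward a minimum instead
# (assumes the 'RandomSearch' planner is available in your installation)
random_planner = Planner('RandomSearch', goal='minimize')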
Optimize¶
[2]:
# optimise the surface for 5 iterations
campaign = planner.optimize(emulator, num_iter=5, verbose=True)
[INFO] Optimize iteration 1
[INFO] Obtaining parameters from planner...
[INFO] Obtaining measurement from emulator...
[INFO] Optimize iteration 2
[INFO] Obtaining parameters from planner...
[INFO] Obtaining measurement from emulator...
[INFO] Optimize iteration 3
[INFO] Obtaining parameters from planner...
[INFO] Obtaining measurement from emulator...
[INFO] Optimize iteration 4
[INFO] Obtaining parameters from planner...
[INFO] Obtaining measurement from emulator...
[INFO] Optimize iteration 5
[INFO] Obtaining parameters from planner...
[INFO] Obtaining measurement from emulator...
[3]:
# print the results
for p, v in zip(campaign.params, campaign.values):
    print(p, v)
[3.99634412e-02 5.92366755e-02 5.56513608e-01 1.32164115e+00
1.28585018e+02 2.29232642e+00] [794.28760118]
[5.21297934e-02 4.11513637e-02 8.28823539e-01 2.12991777e+00
1.34636851e+02 4.56346874e+00] [190.70192684]
[6.88022674e-02 2.31711378e-02 7.86579842e-01 1.45240663e+00
1.06799456e+02 3.21836802e+00] [25.05118119]
[3.57485194e-02 1.01848064e-02 4.79433112e-01 1.59496475e+00
9.11006486e+01 8.93052795e+00] [837.64479039]
[3.19394342e-02 1.03955562e-02 2.87069883e-01 2.24478218e+00
1.22440051e+02 2.82028857e+00] [2000.172279]
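Since the goal here is maximization, the best observation can be pulled out of the campaign afterwards. A minimal sketch, assuming numpy is available and that campaign.params and campaign.values can be indexed like arrays:

import numpy as np

# index of the observation with the largest measured value
best_idx = int(np.argmax(np.asarray(campaign.values).flatten()))
print('Best parameters:', campaign.params[best_idx])
print('Best value:', campaign.values[best_idx])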
Recommend¶
As you can see, the optimize method is very convenient, but it does not give much control over what happens at each step of the optimization. The recommend method instead runs a single iteration of the optimization, so that you have access to the parameters and measurements at every iteration. As an example, we perform the same 5-step optimization as above, but this time using recommend.
[4]:
max_iter = 5
# instantiate a Campaign object, which stores the results of the optimization
campaign = Campaign()
# tell the planner what is the optimization domain
planner.set_param_space(emulator.param_space)
for i in range(max_iter):
    print(f"Iter {i+1}\n------")
    # ask the planner for a new set of parameters
    params = planner.recommend(observations=campaign.observations)
    print('Parameters:', params)
    # evaluate the merit of the new parameters
    values = emulator.run(params.to_array(), return_paramvector=True)
    print('Values:', values[0])
    # store parameter and measurement pair in campaign
    campaign.add_observation(params, values)
    print()
Iter 1
------
Parameters: ParamVector(sample_loop = 0.04214239677999049, additional_volume = 0.053410186023081355, tubing_volume = 0.6886106912572801, sample_flow = 1.2313387694567246, push_speed = 120.33877426478782, wait_time = 9.831562416147625)
Values: ParamVector(peak_area = 182.92490526330087)
Iter 2
------
Parameters: ParamVector(sample_loop = 0.043293140897302164, additional_volume = 0.048965823324975226, tubing_volume = 0.686463996645267, sample_flow = 1.8937519623656645, push_speed = 98.51430725969507, wait_time = 5.601818498433116)
Values: ParamVector(peak_area = 215.42974384808343)
Iter 3
------
Parameters: ParamVector(sample_loop = 0.07643185535530703, additional_volume = 0.05667404894922595, tubing_volume = 0.6650598532796427, sample_flow = 1.916183318424112, push_speed = 122.7209899443508, wait_time = 1.510488953562232)
Values: ParamVector(peak_area = 324.8829338472122)
Iter 4
------
Parameters: ParamVector(sample_loop = 0.03882589187468909, additional_volume = 0.012301890870887077, tubing_volume = 0.278553853395321, sample_flow = 2.2053614339750007, push_speed = 104.96898650189364, wait_time = 9.562302646869888)
Values: ParamVector(peak_area = 2072.8976459933033)
Iter 5
------
Parameters: ParamVector(sample_loop = 0.009822229434326219, additional_volume = 0.029049327494845626, tubing_volume = 0.8460893640743111, sample_flow = 1.3473238976680675, push_speed = 123.70914753868718, wait_time = 4.0203055504668015)
Values: ParamVector(peak_area = 0.0)
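Because recommend returns control after every iteration, the loop above can be extended with custom logic, for example stopping early once the measured merit reaches a target. A minimal sketch of that idea, where the threshold of 1000 is an arbitrary, hypothetical choice and the measurement is read back via to_array(), as done for the parameters above:

target_peak_area = 1000.0  # hypothetical early-stopping threshold

campaign = Campaign()
planner.set_param_space(emulator.param_space)
for i in range(max_iter):
    params = planner.recommend(observations=campaign.observations)
    values = emulator.run(params.to_array(), return_paramvector=True)
    campaign.add_observation(params, values)
    # stop as soon as the target merit is reached
    if values[0].to_array()[0] >= target_peak_area:
        print(f'Target reached after {i+1} iterations')
        break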
Ask and Tell¶
Finally, one can use the ask and tell methods instead of recommend. These let you control what the planner knows before it suggests the next experiment to run. Note, however, that not all planners use the history of the optimization when choosing which experiment to propose next.
[5]:
max_iter = 5
# instantiate a Campaign object, which stores the results of the optimization
campaign = Campaign()
# tell the planner what is the optimization domain
planner.set_param_space(emulator.param_space)
for i in range(max_iter):
    print(f"Iter {i+1}\n------")
    # tell the planner about the optimization history
    planner.tell(campaign.observations)
    # ask the planner for a new set of parameters
    params = planner.ask()
    print('Parameters:', params)
    # evaluate the merit of the new parameters
    values = emulator.run(params.to_array(), return_paramvector=True)
    print('Values:', values[0])
    # store parameter and measurement pair in campaign
    campaign.add_observation(params, values)
    print()
Iter 1
------
Parameters: ParamVector(sample_loop = 0.03549386694005813, additional_volume = 0.03330348781561569, tubing_volume = 0.20541452615744327, sample_flow = 0.9918564339279039, push_speed = 85.3836698503327, wait_time = 7.732563361348338)
Values: ParamVector(peak_area = 0.0)
Iter 2
------
Parameters: ParamVector(sample_loop = 0.07948685024663711, additional_volume = 0.01405398186830723, tubing_volume = 0.4008361777353059, sample_flow = 1.213248706166157, push_speed = 122.85990779892668, wait_time = 1.9588104184592865)
Values: ParamVector(peak_area = 1087.4971033499935)
Iter 3
------
Parameters: ParamVector(sample_loop = 0.07459429555877291, additional_volume = 0.04632047521192829, tubing_volume = 0.8636585456822277, sample_flow = 2.4340302954514517, push_speed = 127.34080433592416, wait_time = 9.348306043102033)
Values: ParamVector(peak_area = 0.0)
Iter 4
------
Parameters: ParamVector(sample_loop = 0.04469537573702059, additional_volume = 0.016678012840125878, tubing_volume = 0.6607858069330779, sample_flow = 1.8523761717851137, push_speed = 138.98260252526268, wait_time = 9.501125947667138)
Values: ParamVector(peak_area = 330.6013244397631)
Iter 5
------
Parameters: ParamVector(sample_loop = 0.05279966827346898, additional_volume = 0.04876100437059912, tubing_volume = 0.7023206540114464, sample_flow = 1.0172256976081377, push_speed = 138.30474028379786, wait_time = 1.4750453547630402)
Values: ParamVector(peak_area = 70.43434451522799)
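One way to exploit this control is to keep the full optimization history in one campaign while telling the planner only a curated subset of it. A minimal sketch of the idea, using a second Campaign to hold whatever observations you choose to share:

# keep two campaigns: the full record, and the history shown to the planner
full_campaign = Campaign()
visible_campaign = Campaign()
planner.set_param_space(emulator.param_space)

for i in range(max_iter):
    # the planner only sees observations added to visible_campaign
    planner.tell(visible_campaign.observations)
    params = planner.ask()
    values = emulator.run(params.to_array(), return_paramvector=True)
    full_campaign.add_observation(params, values)
    # here every observation is shared; filtering before this call is
    # where you would control what the planner knows
    visible_campaign.add_observation(params, values)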