
This work sought a more efficient method of locating the optimum of a second order response function. To this end, the principles of optimal designs of experiments were invoked. The resulting method circumvents the noticeable pitfall of response surface methodology (RSM): the step length is obtained by taking the derivative of the response function rather than by intuition or trial and error, as is the case in RSM. A numerical illustration shows that the method obtains the desired optimizer in just one move, which compares favourably with other known methods, such as the Newton-Raphson method, which require more than one iteration to reach the optimizer.

The problem of locating the optimum of a second order response function has traditionally been addressed by a method known as response surface methodology (RSM). RSM is a collection of mathematical and statistical techniques useful for analyzing problems where several independent variables influence a dependent variable or response. The main objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating requirements are satisfied. Suppose, for example, that a response depends on two variables such as reaction time, $x_1$, and reaction temperature, $x_2$. The observed response can be represented as a function of the two independent variables as

$$y = f(x_1, x_2) + \epsilon \quad (1)$$

where $\epsilon$ is the random error term, while the expected response function is

$$\eta = E(y) = f(x_1, x_2) \quad (2)$$

When the mathematical form of Equation (2) is not known, the expected response function can be approximated within the experimental region by a first order or a second order response function.


The sequential steps involved in the optimal designs algorithm are given below:

Initialization: Let the second order response function, $f(x)$, be defined as

$$f(x) = a_0 + a_1 x_1 + a_2 x_2 + b_1 x_1^2 + b_2 x_1 x_2 + b_3 x_2^2$$

Select N support points such that

$$3r \le N \le 4r, \quad \text{or equivalently} \quad 6 \le N \le 8$$

where $r = 2$ is the number of partitioned groups. Choosing $N$ arbitrarily within this range, form an initial design matrix

$$X = \begin{bmatrix} 1 & x_{11} & x_{12} \\ 1 & x_{21} & x_{22} \\ \vdots & \vdots & \vdots \\ 1 & x_{N1} & x_{N2} \end{bmatrix}$$

Step 1: Compute the optimal starting point,

$$x_1^* = \sum_{m=1}^{N} u_m^* x_m^T, \quad u_m^* > 0, \quad \sum_{m=1}^{N} u_m^* = 1$$

where

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \quad a_m = x_m x_m^T, \quad m = 1, 2, \cdots, N$$
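In code, Step 1 reduces to a few vector operations. A minimal NumPy sketch (the function name is ours; the example design matrix is the one used in the numerical illustration later in the paper):

```python
import numpy as np

def optimal_starting_point(X):
    """Step 1: weight each support point x_m (a row of X, including the
    leading 1) by the reciprocal of a_m = x_m x_m^T, normalize the
    weights to sum to one, and take the convex combination."""
    a = np.sum(X * X, axis=1)           # a_m = x_m x_m^T (squared norms)
    u = (1.0 / a) / np.sum(1.0 / a)     # u_m* = a_m^{-1} / sum a_m^{-1}
    return u, u @ X                     # (weights, optimal starting point)

# example: the six support points of the numerical illustration
X = np.array([[1, -2, 1.5], [1, -1, 1], [1, -0.5, 0.5],
              [1, 0, 0], [1, 0.5, -0.5], [1, 2, -1]], dtype=float)
u, x1_star = optimal_starting_point(X)   # x1_star ~ (1, -0.0928, 0.1257)
```

The first component of `x1_star` is the intercept coordinate of 1 carried by every support point; only the last two components are used as the starting design point.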

Step 2: Partition $X$ into $r = 2$ groups and calculate

1) $M_i = X_i^T X_i, \ i = 1, 2$

2) $M_i^{-1}$
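Step 2 is two matrix products and two inversions. A minimal NumPy sketch, assuming the equal split into $r = 2$ groups used throughout:

```python
import numpy as np

def partition_and_information(X):
    """Step 2: split the design matrix into r = 2 equal groups and
    return (X1, X2), their information matrices M_i = X_i^T X_i,
    and the inverses M_i^{-1}."""
    X1, X2 = np.array_split(X, 2)
    M1, M2 = X1.T @ X1, X2.T @ X2
    return (X1, X2), (M1, M2), (np.linalg.inv(M1), np.linalg.inv(M2))

# example: the 6-point design from the numerical illustration below
X = np.array([[1, -2, 1.5], [1, -1, 1], [1, -0.5, 0.5],
              [1, 0, 0], [1, 0.5, -0.5], [1, 2, -1]], dtype=float)
(X1, X2), (M1, M2), (M1inv, M2inv) = partition_and_information(X)
```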

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables, $X_{1I}$ and $X_{2I}$

2) The interaction vector of the response parameters,

$$g = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

3) The interaction vectors for the groups,

$$I_i = M_i^{-1} X_i^T X_{iI} g$$

4) The matrices of mean square error for the groups,

$$\bar{M}_i = M_i^{-1} + I_i I_i^T$$

5) The Hessian matrices, $H_i$ (the matrices of coefficients of the convex combinations of the $\bar{M}_i$), and the normalized Hessian matrices, $H_i^*$

6) The average information matrix, $M(\xi_N)$
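The six items above can be combined into one NumPy routine; a sketch (the names are ours, and the diagonal construction of $H_i$ with the normalization $\sum H_i^* H_i^{*T} = I$ follows the numerical illustration later in the paper):

```python
import numpy as np

def average_information(X1, X2, g):
    """Step 3: build the interaction matrices X_iI, interaction vectors
    I_i, mean-square-error matrices Mbar_i, normalized convex-combination
    weights H_i*, and finally the average information matrix M(xi_N)."""
    def inter(Xi):  # 1) squares and cross-product of the variable columns
        return np.column_stack((Xi[:, 1]**2, Xi[:, 1] * Xi[:, 2], Xi[:, 2]**2))
    M1, M2 = X1.T @ X1, X2.T @ X2
    M1inv, M2inv = np.linalg.inv(M1), np.linalg.inv(M2)
    I1 = M1inv @ X1.T @ inter(X1) @ g        # 3) I_i = M_i^{-1} X_i^T X_iI g
    I2 = M2inv @ X2.T @ inter(X2) @ g
    Mbar1 = M1inv + np.outer(I1, I1)         # 4) Mbar_i = M_i^{-1} + I_i I_i^T
    Mbar2 = M2inv + np.outer(I2, I2)
    h1 = np.diag(Mbar1) / (np.diag(Mbar1) + np.diag(Mbar2))  # 5) H_1
    h2 = 1.0 - h1                                            #    H_2 = I - H_1
    n = np.sqrt(h1**2 + h2**2)               # normalize: sum H_i* H_i*^T = I
    H1s, H2s = np.diag(h1 / n), np.diag(h2 / n)
    return H1s @ M1 @ H1s.T + H2s @ M2 @ H2s.T   # 6) M(xi_N)

# example: the two groups and g = (b1, b2, b3) of the illustration
X1 = np.array([[1, -2, 1.5], [1, -1, 1], [1, -0.5, 0.5]], dtype=float)
X2 = np.array([[1, 0, 0], [1, 0.5, -0.5], [1, 2, -1]], dtype=float)
M_avg = average_information(X1, X2, np.array([1.0, 0.0, 1.0]))
```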

Step 4: Obtain the response vector, $z$, and the direction vector, $d$. Normalize $d$ to obtain $d^*$.

Step 5: Make a move to the point

$$x_2^* = x_1^* - \rho_1 d^*$$

for a minimization problem, or

$$x_2^* = x_1^* + \rho_1 d^*$$

for a maximization problem, where $\rho_1$ is the step length obtained from

$$\frac{df(x_2^*)}{d\rho_1} = 0$$

Step 6: Termination criterion. Is $|f(x_2^*) - f(x_1^*)| < \varepsilon$, where $\varepsilon = 0.0001$?

1) Yes. Stop and set $x_2^* = x_{\min}$ or $x_{\max}$, as the case may be.

2) No. Replace $x_1^*$ by $x_2^*$ and return to Step 5. If $\rho_2 \cong 0$, then implement Step 6(1).
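Since the response function is quadratic, solving $df(x_2^*)/d\rho_1 = 0$ is an exact line search. A sketch of Steps 5-6 for the minimization case (the helper is ours; it recovers the quadratic in $\rho$ from three function samples instead of symbolic differentiation):

```python
import numpy as np

def step_length(f, x, d):
    """Fit g(rho) = f(x - rho*d) = a*rho^2 + b*rho + c from samples at
    rho = -1, 0, 1, then solve dg/drho = 2*a*rho + b = 0 exactly."""
    g0, g1, gm1 = f(x), f(x - d), f(x + d)
    a = (g1 + gm1) / 2.0 - g0       # curvature coefficient
    b = (g1 - gm1) / 2.0            # slope coefficient
    rho = -b / (2.0 * a)
    return rho, x - rho * d         # step length and the new point x_2*

# example: the first move of the numerical illustration
f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2
x1_star = np.array([-0.0928, 0.1257])
d_star = np.array([0.5669, 0.8238])
rho1, x2_star = step_length(f, x1_star, d_star)   # rho1 ~ -2.1636
```

A second call from `x2_star` returns a step length of approximately zero, which triggers the termination rule in Step 6(2).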

In this section, we give a numerical illustration of the optimal designs technique for locating the optimum of a second order response function.

Example: minimize $f(x) = (x_1 - 1)^2 + (x_2 - 2)^2$ by the optimal designs technique.

Solution

Initialization: Given the response function, $f(x) = (x_1 - 1)^2 + (x_2 - 2)^2$, select $N$ support points such that

$$3r \le N \le 4r, \quad \text{or equivalently} \quad 6 \le N \le 8$$

where $r = 2$ is the number of partitioned groups. Choosing $N$ arbitrarily within this range, form an initial design matrix

$$X = \begin{bmatrix} 1 & -2 & 1.5 \\ 1 & -1 & 1 \\ 1 & -0.5 & 0.5 \\ 1 & 0 & 0 \\ 1 & 0.5 & -0.5 \\ 1 & 2 & -1 \end{bmatrix}$$

Step 1: Compute the optimal starting point,

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T, \quad u_m^* > 0, \quad \sum_{m=1}^{6} u_m^* = 1$$

where

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \quad a_m = x_m x_m^T, \quad m = 1, 2, \cdots, 6$$

$$a_1 = x_1 x_1^T = \begin{bmatrix} 1 & -2 & 1.5 \end{bmatrix} \begin{bmatrix} 1 \\ -2 \\ 1.5 \end{bmatrix} = 7.25, \quad a_1^{-1} = 0.1379$$

$$a_2 = x_2 x_2^T = \begin{bmatrix} 1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} = 3, \quad a_2^{-1} = 0.3333$$

$$a_3 = x_3 x_3^T = \begin{bmatrix} 1 & -0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 1 \\ -0.5 \\ 0.5 \end{bmatrix} = 1.5, \quad a_3^{-1} = 0.6667$$

$$a_4 = x_4 x_4^T = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 1, \quad a_4^{-1} = 1.0000$$

$$a_5 = x_5 x_5^T = \begin{bmatrix} 1 & 0.5 & -0.5 \end{bmatrix} \begin{bmatrix} 1 \\ 0.5 \\ -0.5 \end{bmatrix} = 1.5, \quad a_5^{-1} = 0.6667$$

$$a_6 = x_6 x_6^T = \begin{bmatrix} 1 & 2 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = 6, \quad a_6^{-1} = 0.1667$$

$$\sum_{m=1}^{6} a_m^{-1} = 2.9713$$

Since

$$u_m^* = \frac{a_m^{-1}}{\sum a_m^{-1}}, \quad m = 1, 2, \cdots, 6$$

then

$$u_1^* = \frac{0.1379}{2.9713} = 0.0464, \quad u_2^* = \frac{0.3333}{2.9713} = 0.1122, \quad u_3^* = \frac{0.6667}{2.9713} = 0.2244,$$

$$u_4^* = \frac{1.0000}{2.9713} = 0.3366, \quad u_5^* = \frac{0.6667}{2.9713} = 0.2244, \quad u_6^* = \frac{0.1667}{2.9713} = 0.0561$$

Hence, the optimal starting point is

$$x_1^* = \sum_{m=1}^{6} u_m^* x_m^T = 0.0464 \begin{bmatrix} 1 \\ -2 \\ 1.5 \end{bmatrix} + 0.1122 \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix} + 0.2244 \begin{bmatrix} 1 \\ -0.5 \\ 0.5 \end{bmatrix} + 0.3366 \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 0.2244 \begin{bmatrix} 1 \\ 0.5 \\ -0.5 \end{bmatrix} + 0.0561 \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1.0001 \\ -0.0928 \\ 0.1257 \end{bmatrix}$$

That is, dropping the leading intercept component,

$$x_1^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix}$$

Step 2: Partitioning $X$ into 2 groups with equal numbers of support points, we obtain the design matrices

$$X_1 = \begin{bmatrix} 1 & -2 & 1.5 \\ 1 & -1 & 1 \\ 1 & -0.5 & 0.5 \end{bmatrix} \quad \text{and} \quad X_2 = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0.5 & -0.5 \\ 1 & 2 & -1 \end{bmatrix}$$

The respective information matrices are

$$M_1 = X_1^T X_1 = \begin{bmatrix} 3 & -3.5 & 3 \\ -3.5 & 5.25 & -4.25 \\ 3 & -4.25 & 3.5 \end{bmatrix} \quad \text{and} \quad M_2 = X_2^T X_2 = \begin{bmatrix} 3 & 2.5 & -1.5 \\ 2.5 & 4.25 & -2.25 \\ -1.5 & -2.25 & 1.25 \end{bmatrix}$$

and their inverses are

$$M_1^{-1} = \begin{bmatrix} 5 & -8 & -14 \\ -8 & 24 & 36 \\ -14 & 36 & 56 \end{bmatrix} \quad \text{and} \quad M_2^{-1} = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 6 & 12 \\ 3 & 12 & 26 \end{bmatrix}$$

Step 3: Calculate the following:

1) The matrices of the interaction effect of the variables for the groups:

$$X_{1I} = \begin{bmatrix} x_{11}^2 & x_{11}x_{12} & x_{12}^2 \\ x_{21}^2 & x_{21}x_{22} & x_{22}^2 \\ x_{31}^2 & x_{31}x_{32} & x_{32}^2 \end{bmatrix} = \begin{bmatrix} 4 & -3 & 2.25 \\ 1 & -1 & 1 \\ 0.25 & -0.25 & 0.25 \end{bmatrix}$$

$$X_{2I} = \begin{bmatrix} x_{41}^2 & x_{41}x_{42} & x_{42}^2 \\ x_{51}^2 & x_{51}x_{52} & x_{52}^2 \\ x_{61}^2 & x_{61}x_{62} & x_{62}^2 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0.25 & -0.25 & 0.25 \\ 4 & -2 & 1 \end{bmatrix}$$

2) The interaction vector of the response parameters; expanding $f(x) = (x_1 - 1)^2 + (x_2 - 2)^2 = 5 - 2x_1 - 4x_2 + x_1^2 + x_2^2$ gives $b_1 = 1$, $b_2 = 0$ and $b_3 = 1$, so

$$g = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}$$

3) The interaction vectors for the groups:

$$I_1 = M_1^{-1} X_1^T X_{1I} g = \begin{bmatrix} -1 \\ -5.5 \\ -2.5 \end{bmatrix}, \quad I_2 = M_2^{-1} X_2^T X_{2I} g = \begin{bmatrix} 0 \\ 4 \\ 3 \end{bmatrix}$$

4) The matrices of mean square error for the groups:

$$\bar{M}_1 = M_1^{-1} + I_1 I_1^T = \begin{bmatrix} 6 & -2.5 & -11.5 \\ -2.5 & 54.25 & 49.75 \\ -11.5 & 49.75 & 62.25 \end{bmatrix}$$

$$\bar{M}_2 = M_2^{-1} + I_2 I_2^T = \begin{bmatrix} 1 & 1 & 3 \\ 1 & 22 & 24 \\ 3 & 24 & 35 \end{bmatrix}$$

5) The matrices of coefficients of the convex combinations of the matrices of mean square error:

$$H_1 = \mathrm{diag}\left\{ \frac{6}{6+1}, \frac{54.25}{54.25+22}, \frac{62.25}{62.25+35} \right\} = \mathrm{diag}\{0.8571, 0.7115, 0.6401\}$$

$$H_2 = I - H_1 = \mathrm{diag}\{0.1429, 0.2885, 0.3599\}$$

and by normalizing $H_i$ such that $\sum H_i^* H_i^{*T} = I$, we have

$$H_1^* = \mathrm{diag}\left\{ \frac{h_{11}}{\sqrt{\sum_i h_{i1}^2}}, \frac{h_{12}}{\sqrt{\sum_i h_{i2}^2}}, \frac{h_{13}}{\sqrt{\sum_i h_{i3}^2}} \right\} = \mathrm{diag}\left\{ \frac{0.8571}{\sqrt{0.8571^2 + 0.1429^2}}, \frac{0.7115}{\sqrt{0.7115^2 + 0.2885^2}}, \frac{0.6401}{\sqrt{0.6401^2 + 0.3599^2}} \right\} = \mathrm{diag}\{0.9864, 0.9267, 0.8717\}$$

$$H_2^* = \mathrm{diag}\left\{ \frac{0.1429}{\sqrt{0.8571^2 + 0.1429^2}}, \frac{0.2885}{\sqrt{0.7115^2 + 0.2885^2}}, \frac{0.3599}{\sqrt{0.6401^2 + 0.3599^2}} \right\} = \mathrm{diag}\{0.1645, 0.3758, 0.4901\}$$

6) The average information matrix:

$$M(\xi_N) = \sum H_i^* M_i H_i^{*T} = \begin{bmatrix} 3.0001 & -3.0448 & 2.4586 \\ -3.0448 & 5.1088 & -3.8476 \\ 2.4586 & -3.8476 & 2.9598 \end{bmatrix}$$

Step 4: Obtain the response vector

$$z = \begin{bmatrix} z_0 \\ z_1 \\ z_2 \end{bmatrix}$$

where

$$z_0 = f(-3.0448, 2.4586) = 16.5707$$

$$z_1 = f(5.1088, -3.8476) = 11.0346$$

$$z_2 = f(-3.8476, 2.9598) = 24.4204$$

and hence the direction vector

$$d = \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} = M^{-1}(\xi_N) z = \begin{bmatrix} -86.2355 \\ 521.3935 \\ 757.6757 \end{bmatrix}$$

and by normalizing $d$ such that $d^{*T} d^* = 1$, we have

$$d^* = \begin{bmatrix} d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} \dfrac{521.3935}{\sqrt{521.3935^2 + 757.6757^2}} \\ \dfrac{757.6757}{\sqrt{521.3935^2 + 757.6757^2}} \end{bmatrix} = \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix}$$
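Step 4 can be checked quickly in NumPy from the rounded values above. Small discrepancies in $d$ are rounding artifacts ($M(\xi_N)$ is nearly singular), but the normalized direction $d^*$ is stable:

```python
import numpy as np

# average information matrix and response vector as printed above
M = np.array([[ 3.0001, -3.0448,  2.4586],
              [-3.0448,  5.1088, -3.8476],
              [ 2.4586, -3.8476,  2.9598]])
z = np.array([16.5707, 11.0346, 24.4204])

d = np.linalg.solve(M, z)                 # direction vector, d = M^{-1} z
d_star = d[1:] / np.linalg.norm(d[1:])    # drop d_0 and normalize to unit length
```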

Step 5: Obtain the step length, $\rho_1$, from

$$x_2^* = x_1^* - \rho_1 d^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix} - \rho_1 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} -0.0928 - 0.5669\rho_1 \\ 0.1257 - 0.8238\rho_1 \end{bmatrix}$$

That is,

$$f(x_2^*) = f(-0.0928 - 0.5669\rho_1, \ 0.1257 - 0.8238\rho_1) = 4.7072 + 4.3271\rho_1 + \rho_1^2$$

and

$$\frac{df(x_2^*)}{d\rho_1} = 4.3271 + 2\rho_1 = 0$$

Hence,

$$\rho_1 = -2.1636$$

and by making a move to the next point, we have

$$x_2^* = \begin{bmatrix} -0.0928 \\ 0.1257 \end{bmatrix} + 2.1636 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix}$$

Step 6: Since $|f(x_2^*) - f(x_1^*)| = |0.0263 - 4.7072| = 4.6809 > \varepsilon$, we make another move, replacing $x_1^*$ by $x_2^*$.

The new step length, $\rho_2$, is obtained as follows:

$$x_3^* = x_2^* - \rho_2 d^* = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix} - \rho_2 \begin{bmatrix} 0.5669 \\ 0.8238 \end{bmatrix} = \begin{bmatrix} 1.1337 - 0.5669\rho_2 \\ 1.9081 - 0.8238\rho_2 \end{bmatrix}$$

That is,

$$f(x_3^*) = f(1.1337 - 0.5669\rho_2, \ 1.9081 - 0.8238\rho_2) = 0.0263 - 0.0002\rho_2 + \rho_2^2$$

$$\frac{df(x_3^*)}{d\rho_2} = 2\rho_2 - 0.0002 = 0$$

which gives

$$\rho_2 \approx 0$$

Since the new step length, $\rho_2$, is approximately zero, an optimizer has been located at the first move, and hence

$$x_2^* = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix} \quad \text{and} \quad f(x_2^*) = 0.0263$$

By using the optimal designs technique, we have been able to locate the optimum of a second order response function in just one move. This method circumvents the noticeable pitfall of RSM by obtaining the step length from the derivative of the response function rather than by intuition or trial and error, as is the case in RSM. The numerical illustration, which gives $x_2^* = \begin{bmatrix} 1.1337 \\ 1.9081 \end{bmatrix}$ and $f(x_2^*) = 0.0263$ in just one move, compares favourably with other known methods such as the Newton-Raphson method, which requires more than one iteration to reach the optimizer.

Etukudo, I. (2017) Optimal Designs Technique for Locating the Optimum of a Second Order Response Function. American Journal of Operations Research, 7, 263-271. https://doi.org/10.4236/ajor.2017.75018