Social cognitive optimization

Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm developed in 2002. It is based on social cognitive theory, and its search dynamics arise from the individual learning of a set of agents, each with its own memory, combined with their social learning from the knowledge points in a social sharing library. It has been applied to continuous optimization, integer programming, and combinatorial optimization problems, and has been incorporated into the NLPSolver extension of Calc in Apache OpenOffice.

Algorithm

Let $f(x)$ be a global optimization problem, where $x$ is a state in the problem space $S$. In SCO, each state is called a knowledge point, and the function $f$ is called the goodness function.
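As a concrete, purely illustrative instance (not one prescribed by the SCO literature), the goodness function could be the two-dimensional sphere function on the box $S = [-5, 5]^2$, where lower values mean higher quality when minimizing:

```python
# Illustrative goodness function: the 2-D sphere function on S = [-5, 5]^2.
# When minimizing, a smaller f(x) means a higher-quality knowledge point.
def f(x):
    return x[0] ** 2 + x[1] ** 2

print(f([3.0, -4.0]))  # 25.0
```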

In SCO, a population of $N_{c}$ cognitive agents solves the problem in parallel, together with a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of $N_{L}$ knowledge points. The algorithm runs for $T$ iterative learning cycles. Running as a Markov chain process, the system behavior in the $t$th cycle depends only on the system status in the $(t-1)$th cycle. The process flow is as follows:

• [1. Initialization]: Initialize the private knowledge point $x_{i}$ in the memory of each agent $i$, and all knowledge points in the social sharing library $X$, normally at random in the problem space $S$.
• [2. Learning cycle]: At each cycle $t$ $(t=1,\ldots ,T)$:
  • [2.1. Observational learning]: For each agent $i$ $(i=1,\ldots ,N_{c})$:
    • [2.1.1. Model selection]: Find a high-quality model point $x_{M}$ in $X(t)$, normally realized using tournament selection, which returns the best knowledge point from $\tau _{B}$ randomly selected points.
    • [2.1.2. Quality evaluation]: Compare the private knowledge point $x_{i}(t)$ and the model point $x_{M}$, and return the one with higher quality as the base point $x_{Base}$ and the other as the reference point $x_{Ref}$.
    • [2.1.3. Learning]: Combine $x_{Base}$ and $x_{Ref}$ to generate a new knowledge point $x_{i}(t+1)$. Normally $x_{i}(t+1)$ lies around $x_{Base}$, at a distance related to the distance between $x_{Ref}$ and $x_{Base}$; a boundary-handling mechanism should be incorporated here to ensure that $x_{i}(t+1)\in S$.
    • [2.1.4. Knowledge sharing]: Share a knowledge point, normally $x_{i}(t+1)$, with the social sharing library $X$.
    • [2.1.5. Individual update]: Update the private knowledge of agent $i$, normally by replacing $x_{i}(t)$ with $x_{i}(t+1)$; Monte Carlo-style acceptance rules may also be considered.
  • [2.2. Library maintenance]: The social sharing library uses all knowledge points submitted by agents to update $X(t)$ into $X(t+1)$. A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worst among $\tau _{W}$ points randomly selected from $X(t)$.
• [3. Termination]: Return the best knowledge point found by the agents.
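The steps above can be sketched in Python. The sphere function, the parameter values, and the particular learning rule (sampling each coordinate of the new point uniformly within a radius given by that coordinate's base-to-reference distance) are assumptions for illustration; the exact combination operator varies between SCO descriptions.

```python
import random

# Illustrative problem: minimize the sphere function on S = [-5, 5]^DIM.
DIM = 2
LOW, HIGH = -5.0, 5.0

def goodness(x):
    # Lower is better here, so "higher quality" means smaller f(x).
    return sum(v * v for v in x)

def random_point():
    return [random.uniform(LOW, HIGH) for _ in range(DIM)]

def tournament_best(library, k):
    # 2.1.1: best of k knowledge points sampled at random from the library.
    return min(random.sample(library, k), key=goodness)

def tournament_worst_index(library, k):
    # 2.2: index of the worst among k randomly selected library points.
    idxs = random.sample(range(len(library)), k)
    return max(idxs, key=lambda i: goodness(library[i]))

def learn(base, ref):
    # 2.1.3: new point near `base`; step size per coordinate is bounded by
    # its distance from `ref` (one plausible rule, assumed here).
    new = []
    for b, r in zip(base, ref):
        d = abs(b - r)
        v = b + random.uniform(-d, d)
        new.append(min(max(v, LOW), HIGH))  # boundary handling: clamp into S
    return new

def sco(n_agents=3, n_lib=20, cycles=200, tau_b=2, tau_w=2, seed=0):
    random.seed(seed)
    agents = [random_point() for _ in range(n_agents)]   # private memories
    library = [random_point() for _ in range(n_lib)]     # social sharing library
    for _ in range(cycles):
        shared = []
        for i in range(n_agents):
            model = tournament_best(library, tau_b)      # 2.1.1 model selection
            if goodness(agents[i]) <= goodness(model):   # 2.1.2 quality evaluation
                base, ref = agents[i], model
            else:
                base, ref = model, agents[i]
            new_point = learn(base, ref)                 # 2.1.3 learning
            shared.append(new_point)                     # 2.1.4 knowledge sharing
            agents[i] = new_point                        # 2.1.5 individual update
        for p in shared:                                 # 2.2 library maintenance
            library[tournament_worst_index(library, tau_w)] = p
    return min(agents + library, key=goodness)           # 3. termination

best = sco()
print(goodness(best))
```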

SCO has three main parameters: the number of agents $N_{c}$, the size of the social sharing library $N_{L}$, and the number of learning cycles $T$. Including the initialization process, the total number of knowledge points generated is $N_{L}+N_{c}(T+1)$, which depends little on $N_{L}$ when $T$ is large.
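A quick check of this budget (the parameter values below are arbitrary examples):

```python
# Total knowledge points generated: N_L (library init) + N_c (agent init)
# + N_c per cycle over T cycles = N_L + N_c * (T + 1).
def total_points(n_c, n_l, t):
    return n_l + n_c * (t + 1)

print(total_points(10, 50, 1000))  # 10060: dominated by N_c * T for large T
```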

Compared with traditional swarm algorithms, e.g. particle swarm optimization, SCO can achieve high-quality solutions with a small $N_{c}$, even with $N_{c}=1$. Nevertheless, smaller $N_{c}$ and $N_{L}$ might lead to premature convergence, and some variants were proposed to guarantee global convergence. One can also build a hybrid optimization method combining SCO with other optimizers. For example, SCO was hybridized with differential evolution to obtain better results than either individual algorithm on a common set of benchmark problems.