# Social cognitive optimization

Social cognitive optimization (SCO) is a population-based metaheuristic optimization algorithm which was developed in 2002.[1] The algorithm is based on social cognitive theory, and its key process combines the individual learning of a set of agents, each with its own memory, with their social learning from the knowledge points in a social sharing library. It has been used for solving continuous optimization,[2][3] integer programming,[4] and combinatorial optimization problems. It has been incorporated into the NLPSolver extension of Calc in Apache OpenOffice.

## Algorithm

Let ${\displaystyle f(x)}$ be a global optimization problem, where ${\displaystyle x}$ is a state in the problem space ${\displaystyle S}$. In SCO, each state is called a knowledge point, and the function ${\displaystyle f}$ is called the goodness function.

In SCO, a population of ${\displaystyle N_{c}}$ cognitive agents solves the problem in parallel, supported by a social sharing library. Each agent holds a private memory containing one knowledge point, and the social sharing library contains a set of ${\displaystyle N_{L}}$ knowledge points. The algorithm runs in T iterative learning cycles. Running as a Markov chain process, the system behavior in the tth cycle depends only on the system status in the (t − 1)th cycle. The process flow is as follows:

• [1. Initialization]: Initialize the private knowledge point ${\displaystyle x_{i}}$ in the memory of each agent ${\displaystyle i}$, and all knowledge points in the social sharing library ${\displaystyle X}$, normally at random in the problem space ${\displaystyle S}$.
• [2. Learning cycle]: At each cycle ${\displaystyle t}$ ${\displaystyle (t=1,\ldots ,T)}$
• [2.1. Observational learning] For each agent ${\displaystyle i}$ ${\displaystyle (i=1,\ldots ,N_{c})}$
• [2.1.1. Model selection]: Find a high-quality model point ${\displaystyle x_{M}}$ in ${\displaystyle X(t)}$, normally realized using tournament selection, which returns the best knowledge point among ${\displaystyle \tau _{B}}$ randomly selected points.
• [2.1.2. Quality evaluation]: Compare the private knowledge point ${\displaystyle x_{i}(t)}$ and the model point ${\displaystyle x_{M}}$; return the one with higher quality as the base point ${\displaystyle x_{Base}}$, and the other as the reference point ${\displaystyle x_{Ref}}$.
• [2.1.3. Learning]: Combine ${\displaystyle x_{Base}}$ and ${\displaystyle x_{Ref}}$ to generate a new knowledge point ${\displaystyle x_{i}(t+1)}$. Normally ${\displaystyle x_{i}(t+1)}$ lies around ${\displaystyle x_{Base}}$, at a distance from ${\displaystyle x_{Base}}$ related to the distance between ${\displaystyle x_{Ref}}$ and ${\displaystyle x_{Base}}$, and a boundary-handling mechanism should be incorporated here to ensure that ${\displaystyle x_{i}(t+1)\in S}$.
• [2.1.4. Knowledge sharing]: Share a knowledge point, normally ${\displaystyle x_{i}(t+1)}$, to the social sharing library ${\displaystyle X}$.
• [2.1.5. Individual update]: Update the private knowledge of agent ${\displaystyle i}$, normally replacing ${\displaystyle x_{i}(t)}$ with ${\displaystyle x_{i}(t+1)}$; Monte Carlo-style acceptance rules might also be considered.
• [2.2. Library maintenance]: The social sharing library uses all knowledge points submitted by the agents to update ${\displaystyle X(t)}$ into ${\displaystyle X(t+1)}$. A simple way is one-by-one tournament selection: for each knowledge point submitted by an agent, replace the worst among ${\displaystyle \tau _{W}}$ points randomly selected from ${\displaystyle X(t)}$.
• [3. Termination]: Return the best knowledge point found by the agents.
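The process flow above can be sketched in Python. The concrete learning operator used here (sampling each coordinate uniformly around ${\displaystyle x_{Base}}$ at a scale set by its distance to ${\displaystyle x_{Ref}}$), the clamping boundary handling, and the default parameter values are illustrative assumptions rather than the exact operators of the original paper:

```python
import random

def sco_minimize(f, bounds, n_agents=5, lib_size=40, cycles=200,
                 tau_b=2, tau_w=2, seed=None):
    """Minimize goodness function f over a box, following the SCO flow.

    bounds: list of (low, high) pairs, one per dimension.
    """
    rng = random.Random(seed)

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):  # boundary handling: clamp back into the problem space S
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # 1. Initialization: private points and the social sharing library
    agents = [rand_point() for _ in range(n_agents)]
    library = [rand_point() for _ in range(lib_size)]
    best = min(agents + library, key=f)

    for _ in range(cycles):                      # 2. learning cycles
        submitted = []
        for i in range(n_agents):                # 2.1 observational learning
            # 2.1.1 model selection: tournament over tau_B library points
            model = min(rng.sample(library, tau_b), key=f)
            # 2.1.2 quality evaluation: the better point becomes the base
            if f(agents[i]) <= f(model):
                base, ref = agents[i], model
            else:
                base, ref = model, agents[i]
            # 2.1.3 learning: sample around the base, scaled per dimension
            # by the base-reference distance (an assumed operator)
            new = clip([b + rng.uniform(-1.0, 1.0) * (b - r)
                        for b, r in zip(base, ref)])
            submitted.append(new)                # 2.1.4 knowledge sharing
            agents[i] = new                      # 2.1.5 individual update
            if f(new) < f(best):
                best = new
        # 2.2 library maintenance: each submitted point replaces the worst
        # of tau_W randomly chosen library points
        for p in submitted:
            idxs = rng.sample(range(lib_size), tau_w)
            worst = max(idxs, key=lambda j: f(library[j]))
            library[worst] = p
    return best                                  # 3. termination
```

For example, `sco_minimize(lambda x: sum(v * v for v in x), [(-5.0, 5.0)] * 2, seed=1)` drives the 2-D sphere function close to its minimum at the origin.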

SCO has three main parameters: the number of agents ${\displaystyle N_{c}}$, the size of the social sharing library ${\displaystyle N_{L}}$, and the number of learning cycles ${\displaystyle T}$. Including the initialization process, the total number of knowledge points generated is ${\displaystyle N_{L}+N_{c}\cdot (T+1)}$, which depends only weakly on ${\displaystyle N_{L}}$ when ${\displaystyle T}$ is large.
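The count follows directly from the process flow: the library contributes ${\displaystyle N_{L}}$ points at initialization, and each of the ${\displaystyle N_{c}}$ agents generates one point at initialization plus one per cycle. A minimal check (the function name is hypothetical):

```python
def total_points(n_l, n_c, t):
    # N_L initial library points, plus each of the N_c agents generating
    # one point at initialization and one point in each of the T cycles.
    return n_l + n_c * (t + 1)

# e.g. a library of 40 points, 5 agents, 200 cycles:
print(total_points(40, 5, 200))  # 40 + 5 * 201 = 1045
```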

Compared to traditional swarm algorithms, e.g. particle swarm optimization, SCO can achieve high-quality solutions even when ${\displaystyle N_{c}}$ is small, down to ${\displaystyle N_{c}=1}$. Nevertheless, smaller ${\displaystyle N_{c}}$ and ${\displaystyle N_{L}}$ might lead to premature convergence. Some variants[5] were proposed to guarantee global convergence. SCO can also be combined with other optimizers into a hybrid method; for example, it was hybridized with differential evolution to obtain better results than either algorithm alone on a common set of benchmark problems.[6]

## References

1. ^ Xie, Xiao-Feng; Zhang, Wen-Jun; Yang, Zhi-Lian (2002). Social cognitive optimization for nonlinear programming problems. International Conference on Machine Learning and Cybernetics (ICMLC), Beijing, China: 779-783.
2. ^ Xie, Xiao-Feng; Zhang, Wen-Jun (2004). Solving engineering design problems by social cognitive optimization. Genetic and Evolutionary Computation Conference (GECCO), Seattle, WA, USA: 261-262.
3. ^ Xu, Gang-Gang; Han, Luo-Cheng; Yu, Ming-Long; Zhang, Ai-Lan (2011). Reactive power optimization based on improved social cognitive optimization algorithm. International Conference on Mechatronic Science, Electric Engineering and Computer (MEC), Jilin, China: 97-100.
4. ^ Fan, Caixia (2010). Solving integer programming based on maximum entropy social cognitive optimization algorithm. International Conference on Information Technology and Scientific Management (ICITSM), Tianjing, China: 795-798.
5. ^ Sun, Jia-ze; Wang, Shu-yan; Chen, Hao (2014). A guaranteed global convergence social cognitive optimizer. Mathematical Problems in Engineering: Art. No. 534162.
6. ^ Xie, Xiao-Feng; Liu, J.; Wang, Zun-Jing (2014). "A cooperative group optimization system". Soft Computing. 18 (3): 469–495. doi:10.1007/s00500-013-1069-8.