Would a Search Space Look Somewhat Like This?
In machine learning (ML) we often hear the term search space. It comes up in descriptions of almost any ML algorithm: mention genetic algorithms, artificial neural networks, or other bio-inspired models, and the term will somehow turn up in the discussion to haunt the student. A novice in machine learning may find it difficult to understand what the term means, let alone to grasp the idea it actually refers to.
So what does the term search space actually refer to? In the context of mathematical optimization, it refers to the region that contains all possible solutions to a problem. As such, it is also expected to contain the optimal solution, along with solutions that may be considered sub-optimal or near-optimal. Even this explanation is still a little vague, so let us try to develop a more concrete notion of a search space with the help of an example.
Suppose that you are given a mathematical model, and you know that it is supposed to solve a well-known regression problem. For instance, the problem may be to predict the prices of different types of clothes as accurately as possible. The model that is supposed to find the prices is a function of a few variables, such as the colour, the texture, and the material of the clothes. You can obtain the values of these variables from a relevant data set. The model also has a few coefficients; depending on the model there can be two, three, or many more. In numerical optimization you are normally expected to find the values of these coefficients.
In the context of numerical optimization, the search space corresponds to all the possible combinations of values of these coefficients. For instance, if your model has six coefficients and they are expected to take real values, then the search space is the six-dimensional real space.
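To make this concrete, here is a minimal sketch in Python. The pricing model, the feature encoding, and the coefficient values are all hypothetical; the point is simply that one particular choice of the six coefficients is one point in that six-dimensional search space.

```python
import numpy as np

def price_model(features, coefficients):
    """A hypothetical linear pricing model: price = w . x + b.

    features:     vector of 5 numeric attributes (e.g. encoded colour,
                  texture, material, ...)
    coefficients: vector of 5 weights plus 1 bias term, length 6
    """
    w, b = coefficients[:5], coefficients[5]
    return np.dot(w, features) + b

# One candidate solution, i.e. one point in the 6-dimensional search space.
candidate = np.array([0.4, -1.2, 0.7, 2.0, 0.1, 5.0])
x = np.array([1.0, 0.0, 3.5, 2.0, 1.0])  # one clothing item's features
print(price_model(x, candidate))
```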
It would help a lot if we had a way to visualize this six-dimensional space. That, however, is not easy: we cannot visualize a space beyond three dimensions. So, for ease of understanding, we can restrict our search space to two unknown coefficients. That is to say, we assume that our model has two coefficients and that we have to find their values.
Visualization now becomes easier, since our search space is the two-dimensional real plane. We can reserve the third dimension for a so-called error term. The error term tells us how well the model performs as we tweak the values of the coefficients a little here and there. It may, for instance, be the error between the output of our model and some known reference to which we are trying to map our model.
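As a sketch, assuming a toy two-coefficient linear model and some made-up reference data, we can tabulate this error term over a grid of coefficient pairs. Each grid cell is one point of the two-dimensional search space, and its error is the height of the surface above that point:

```python
import numpy as np

# Made-up reference data that we are trying to fit: y = 2*x + 1 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)

# A grid over the two unknown coefficients: slope a and intercept b.
a_values = np.linspace(-1.0, 5.0, 200)
b_values = np.linspace(-2.0, 4.0, 200)
A, B = np.meshgrid(a_values, b_values)

# Mean squared error at every grid point; broadcasting gives predictions
# of shape (200, 200, 50), one prediction per (a, b, data point) triple.
predictions = A[..., None] * x + B[..., None]
error_surface = ((predictions - y) ** 2).mean(axis=-1)

# Plotting error_surface over (A, B) would show the landscape described
# below: here a single valley whose bottom lies near a = 2, b = 1.
```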
Now the search space starts to look more like a mountain range, with peaks and valleys here and there. This way of thinking makes the search space much easier to conceive. As we choose different combinations of coefficient values, we may land close to a peak or close to a valley, depending on what we have chosen.
Normally, the goal in numerical optimization is to hit the bottom of the lowest valley. This is specifically the case if our objective is error minimization, as mentioned above. Most numerical optimization algorithms are designed with this view of the search space in mind. Indeed, the goal of such an algorithm is to traverse the whole search space efficiently and, if at all possible, to reach a globally optimal solution.
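To illustrate what it means for an algorithm to traverse this surface, here is a minimal gradient-descent sketch on the same hypothetical two-coefficient fitting problem. The learning rate and starting point are arbitrary choices, and the walk downhill reaches the global minimum here only because this particular error surface is convex; on a genuine mountain range with many valleys, such a local method could just as well settle into a sub-optimal one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)

a, b = -1.0, 3.0        # arbitrary starting point in the search space
learning_rate = 0.5     # arbitrary step size

for step in range(500):
    residual = (a * x + b) - y            # prediction error per data point
    grad_a = 2.0 * (residual * x).mean()  # d(MSE)/da
    grad_b = 2.0 * residual.mean()        # d(MSE)/db
    a -= learning_rate * grad_a           # step downhill on the surface
    b -= learning_rate * grad_b

print(f"found a = {a:.3f}, b = {b:.3f}")  # close to the true 2 and 1
```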