Answer:
Check out A Practical Guide to SVM Classification for some pointers, particularly page 5.
Grid search over exponentially growing sequences of $C$ and $\gamma$ is a practical method to identify good parameters (for example, $C = 2^{-5}, 2^{-3}, \ldots, 2^{15}$ and $\gamma = 2^{-15}, 2^{-13}, \ldots, 2^{3}$).
Remember to normalize your data first, and if you can, gather more data, because from the looks of it your problem may be heavily underdetermined.
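The advice above can be sketched as follows, assuming scikit-learn; the toy dataset and the (coarsened) grid resolution are illustrative, not prescriptive:

```python
# Sketch: normalize the features, then grid-search C and gamma over
# exponentially growing sequences with cross-validation.
# The dataset and grid spacing here are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),  # normalize the data first
    ("svm", SVC(kernel="rbf")),
])

# Exponentially growing sequences, coarser than the guide's full range
# (C = 2^-5 ... 2^15, gamma = 2^-15 ... 2^3) to keep the sketch fast.
param_grid = {
    "svm__C": 2.0 ** np.arange(-5, 16, 4),
    "svm__gamma": 2.0 ** np.arange(-15, 4, 4),
}

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

In practice you would refine the grid around the best coarse cell in a second pass.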
Check out section 2.3.2 of this paper by Chapelle and Zien. They have a nice heuristic to select a good search range for the bandwidth $\sigma$ of the RBF kernel and $C$ for the SVM. I quote:
To determine good values of the remaining free parameters (e.g. by CV), it is important to search on the right scale. We therefore fix default values for $\sigma$ and $C$ that have the right order of magnitude. In a $c$-class problem we use the $1/c$ quantile of the pairwise distances of all data points as a default for $\sigma$. The default for $C$ is the inverse of the empirical variance $s^2$ in feature space, which can be calculated by $s^2 = \frac{1}{n}\sum_i K_{ii} - \frac{1}{n^2}\sum_{i,j} K_{ij}$ from a kernel matrix $K$.
Afterwards, they use multiples of these default values as the search range in a grid search with cross-validation. That has always worked very well for me.
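A minimal sketch of the quoted heuristic, assuming an RBF kernel $K_{ij} = \exp(-\lVert x_i - x_j\rVert^2 / (2\sigma^2))$; the toy data and the two-class setting are assumptions for illustration:

```python
# Sketch: default sigma from a quantile of the pairwise distances,
# default C from the inverse of the empirical variance in feature space.
# Toy data and class count are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
n_classes = 2  # hypothetical c-class problem

# Pairwise Euclidean distances between all data points.
diff = X[:, None, :] - X[None, :, :]
dists = np.sqrt((diff ** 2).sum(axis=-1))
upper = dists[np.triu_indices(len(X), k=1)]

# Default sigma: the 1/c quantile of the pairwise distances.
sigma = np.quantile(upper, 1.0 / n_classes)

# Kernel matrix for that sigma, then the empirical variance in feature
# space: s^2 = mean of the diagonal minus mean of all entries of K.
K = np.exp(-dists ** 2 / (2 * sigma ** 2))
s2 = K.trace() / len(X) - K.mean()
C = 1.0 / s2  # default C: inverse of the empirical variance

print(sigma, C)
```

The grid search then tries multiples of these two defaults rather than an absolute range, so the search is automatically on the right scale for the data at hand.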
Of course, as @ciri said, normalizing the data etc. is always a good idea.