Recently I have been working intensively on SBL, and I have decided to finish the working version for producing results by the end of July. At that time I will also send out a version for testing.
 
The most recent developments in SBL include:
 
1. Attribute Editor.
 
Previously, attributes could only be switched off in the input file. Now attributes can be switched on and off from the interface.
 
2. Cross-validation support for the model-space search.
 
3. Support for ASA.
 
4. Support for MultiSimplex.
 
5. FinishLearning support for weighted methods.
 
This means that the learning of weighted methods can be terminated at any moment from the interface, a test can then be run with the best weights found up to the point of interruption, or one can move on to the next cross-validation partition. In short, there is no need to wait for the minimization to converge.
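The idea behind FinishLearning can be illustrated with a minimal Python sketch (my own illustration, not the SBL code; the random-perturbation search, `evaluate`, and `stop_requested` are hypothetical names): the loop always keeps the best weights found so far, so it can be interrupted at any point and still return a usable model.

```python
import random

def optimize_weights(evaluate, n_attrs, max_iters=1000,
                     stop_requested=lambda: False):
    """Sketch of an interruptible weight search: the best weights found so
    far are always available, so learning can be stopped at any moment and
    a test run immediately (the FinishLearning idea)."""
    best_w = [1.0] * n_attrs           # start from uniform weights
    best_score = evaluate(best_w)
    for _ in range(max_iters):
        if stop_requested():           # e.g. the user pressed Stop in the GUI
            break
        # perturb the current best weights, keeping them non-negative
        w = [max(0.0, bw + random.gauss(0.0, 0.1)) for bw in best_w]
        score = evaluate(w)
        if score > best_score:
            best_w, best_score = w, score
    return best_w, best_score          # valid even if stopped early
```

Because the best-so-far pair is updated in place, interrupting after any iteration leaves a consistent result, which is exactly what allows testing before the minimization has converged.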
 
* The models available in the model-space search are:
 
k-NN, Metric-Optimization, k-Optimization, Weighted-NN Adding,
Weighted-NN Dropping, Weighted-NN Simplex, Weighted-NN MultiSimplex,
Weighted-NN ASA, Attribute Selection BFS1, BFS2, BFS3, Ranking,
k-opt-Ranking.
 
This version of the model-space search is the simplest one, and I will not add any new extensions to it before finishing my Ph.D. thesis. In fact, I will not implement any new methods at all before the thesis is done.
 
Only the probabilistic measures and committees remain to be done. The interaction matrix, model sequences, and arbitrary committees of models will come after the Ph.D.
 
In August I plan to leave for a well-deserved vacation, but I will take my computer along and write 3 pages of the thesis per day. No new development work.



Hi,
 
I have learned how to easily get 82.8% classification accuracy on hayashi, either with the weights search procedure or through minimization. However, while the results of the weights search procedure are reproducible (there is a separate test set and no stochastic factor), with minimization one should repeat the runs a couple of times to get the average and the variance of the model. I have performed the minimization tests only once, but with the local simplex method I obtained much higher training accuracy than with the global ASA method. The result on the test set was the same in both cases, 82.8%. I am sure that committees and weight averaging would lead to results comparable with FSM. The difference in the minimization setup was that the ASA weights were varied in the range 0-1, whereas the simplex weights were varied in the range 0-10.
 
In the thesis it is crucial to perform numerical experiments on the influence of the weight search range on the final result, as well as on how the speed of convergence depends on that range. I have discovered that sometimes a wider range increases the speed of convergence.
 
The trick to getting the excellent 82.8% result on hayashi is the proper choice of k=1, Minkowski exponent=0.6, and standardization.
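That recipe (standardization plus 1-NN under a fractional Minkowski exponent) can be sketched in a few lines of plain Python. This is only an illustration under my own assumptions, not the SBL implementation; note that for p < 1 the Minkowski formula is no longer a true metric, but it still works as a dissimilarity.

```python
def standardize(X):
    """Column-wise standardization: zero mean, unit variance per attribute."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    stds = [(sum((row[j] - means[j]) ** 2 for row in X) / n) ** 0.5 or 1.0
            for j in range(d)]
    return [[(row[j] - means[j]) / stds[j] for j in range(d)] for row in X]

def minkowski(a, b, p=0.6):
    """Minkowski dissimilarity with exponent p (fractional p allowed)."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def predict_1nn(X_train, y_train, x, p=0.6):
    """Plain 1-NN (k=1) under the Minkowski dissimilarity."""
    dists = [minkowski(t, x, p) for t in X_train]
    return y_train[dists.index(min(dists))]
```

Standardization matters here because without it attributes on large scales dominate the distance regardless of the exponent.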
 
I think, and my claim is supported by results of other authors published in the literature, that weighting attributes through L1O rather than CV may lead to overfitting. It is essential in the thesis to perform the appropriate numerical experiments and to provide a CV learning feature in SBL (so far, learning is in all cases done through L1O). It is essential to do the appropriate numerical experiments for the other methods as well.
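For clarity, the two evaluation schemes being compared can be sketched as follows (illustrative code under my own assumptions, not SBL's; `predict` stands for any classifier trained on the given data):

```python
def leave_one_out(X, y, predict):
    """L1O: train on all points but one, test on the single held-out point."""
    hits = 0
    for i in range(len(X)):
        X_tr = X[:i] + X[i + 1:]
        y_tr = y[:i] + y[i + 1:]
        hits += predict(X_tr, y_tr, X[i]) == y[i]
    return hits / len(X)

def k_fold_cv(X, y, predict, k=5):
    """k-fold CV: each fold is held out once; the larger held-out sets
    may give a less optimistic estimate than L1O when tuning weights."""
    n = len(X)
    hits = 0
    for f in range(k):
        test_idx = set(range(f, n, k))   # simple interleaved folds
        X_tr = [X[i] for i in range(n) if i not in test_idx]
        y_tr = [y[i] for i in range(n) if i not in test_idx]
        for i in test_idx:
            hits += predict(X_tr, y_tr, X[i]) == y[i]
    return hits / n
```

The experiment suggested above amounts to running the weight search with each of these two estimators as the inner criterion and comparing the resulting test-set accuracies.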

