
CS231n Assignment1--Q2

2019-11-06 07:36:44

Q2: Training a Support Vector Machine

svm.ipynb

CIFAR-10 Data Loading and Preprocessing

Training data shape: (50000, 32, 32, 3)
Training labels shape: (50000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)

[figure: a few example images from each of the 10 CIFAR-10 classes]

Train data shape: (49000, 32, 32, 3)
Train labels shape: (49000,)
Validation data shape: (1000, 32, 32, 3)
Validation labels shape: (1000,)
Test data shape: (1000, 32, 32, 3)
Test labels shape: (1000,)
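The 49,000/1,000 split above (and the 500-image development set that shows up below) comes from simple index masking on the loaded arrays. A minimal sketch, assuming X_train, y_train, X_test, y_test are the freshly loaded CIFAR-10 arrays:

import numpy as np

num_training, num_validation, num_test, num_dev = 49000, 1000, 1000, 500

# Carve the validation set off the end of the training data.
X_val = X_train[num_training:num_training + num_validation]
y_val = y_train[num_training:num_training + num_validation]
X_train = X_train[:num_training]
y_train = y_train[:num_training]

# A small development set, sampled from training, speeds up debugging.
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]

# Use only the first num_test test images.
X_test = X_test[:num_test]
y_test = y_test[:num_test]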

Training data shape: (49000, 3072)
Validation data shape: (1000, 3072)
Test data shape: (1000, 3072)
dev data shape: (500, 3072)

First 10 entries of the mean training image:
[ 130.64189796 135.98173469 132.47391837 130.05569388 135.34804082 131.75402041 130.96055102 136.14328571 132.47636735 131.48467347]

[figure: the mean image, reshaped to 32x32x3 and visualized]

(49000, 3073) (1000, 3073) (1000, 3073) (500, 3073)
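The preprocessing amounts to three steps: flatten each 32x32x3 image into a 3072-dimensional row, subtract the mean image computed on the training split, and append a constant 1 feature so the bias is absorbed into the weight matrix (hence the 3073 columns above). A minimal NumPy sketch, assuming the train/val/test/dev splits from the previous cells already exist:

import numpy as np

# Flatten each image into a row vector: (N, 32, 32, 3) -> (N, 3072).
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))

# Zero-center every split with the mean image of the training data.
mean_image = np.mean(X_train, axis=0)
X_train = X_train - mean_image
X_val = X_val - mean_image
X_test = X_test - mean_image
X_dev = X_dev - mean_image

# Bias trick: append a column of ones so W (3073 x 10) carries the bias.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])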

SVM Classifier

loss: 9.321169

numerical: -28.975654 analytic: -28.995244, relative error: 3.379303e-04
numerical: 48.866697 analytic: 48.866697, relative error: 6.274050e-12
numerical: 16.762563 analytic: 16.762563, relative error: 1.353703e-11
numerical: 51.719000 analytic: 51.719000, relative error: 5.419848e-13
numerical: 34.657653 analytic: 34.657653, relative error: 2.774503e-12
numerical: -12.155716 analytic: -12.155716, relative error: 4.110508e-11
numerical: 1.544919 analytic: 1.510942, relative error: 1.111887e-02
numerical: 12.871566 analytic: 12.854885, relative error: 6.484290e-04
numerical: -47.174364 analytic: -47.174364, relative error: 7.444645e-12
numerical: -0.466306 analytic: -0.443806, relative error: 2.472205e-02
numerical: -4.083085 analytic: -4.083085, relative error: 1.109454e-10
numerical: 13.340045 analytic: 13.340045, relative error: 3.587123e-12
numerical: 0.861528 analytic: 0.861528, relative error: 4.376815e-10
numerical: 8.238503 analytic: 8.238503, relative error: 2.420306e-11
numerical: 5.651986 analytic: 5.712372, relative error: 5.313629e-03
numerical: -13.594373 analytic: -13.594373, relative error: 1.605655e-12
numerical: -11.023395 analytic: -11.023395, relative error: 3.846707e-11
numerical: -25.628873 analytic: -25.628873, relative error: 1.589220e-11
numerical: -10.922934 analytic: -10.922934, relative error: 1.864324e-11
numerical: -6.793161 analytic: -6.793161, relative error: 2.469798e-11
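The check above compares the analytic gradient of svm_loss_naive against a numerical estimate. Most relative errors are around 1e-11, but a few entries (1e-2 to 1e-4) disagree; this is expected, because the hinge max(0, ...) is not differentiable at the margin boundary, and the numerical gradient can step across such a kink. For reference, a sketch of a loop-based loss and gradient in the spirit of svm_loss_naive (conventions for the regularization factor vary between assignment versions):

import numpy as np

def svm_loss_naive(W, X, y, reg):
    """Multiclass SVM loss and gradient with explicit loops.
    W: (D, C) weights, X: (N, D) data, y: (N,) labels, reg: L2 strength."""
    dW = np.zeros(W.shape)
    num_classes = W.shape[1]
    num_train = X.shape[0]
    loss = 0.0
    for i in range(num_train):
        scores = X[i].dot(W)
        correct_class_score = scores[y[i]]
        for j in range(num_classes):
            if j == y[i]:
                continue
            margin = scores[j] - correct_class_score + 1  # delta = 1
            if margin > 0:
                loss += margin
                dW[:, j] += X[i]     # wrong class contributes +x_i
                dW[:, y[i]] -= X[i]  # correct class contributes -x_i
    loss = loss / num_train + reg * np.sum(W * W)
    dW = dW / num_train + 2 * reg * W
    return loss, dW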

Naive loss: 9.321169e+00 computed in 0.138057s
Vectorized loss: 9.321169e+00 computed in 0.006166s
difference: -0.000000

Naive loss and gradient: computed in 0.144088s
(500, 10) (500,)
Vectorized loss and gradient: computed in 0.005354s
difference: 0.000000
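The roughly 25x speedup comes from replacing the per-example loops with whole-matrix operations. A sketch of a fully vectorized version, using the same conventions as the naive code above:

import numpy as np

def svm_loss_vectorized(W, X, y, reg):
    """Same loss and gradient as the naive version, with no explicit loops."""
    num_train = X.shape[0]
    scores = X.dot(W)                                         # (N, C)
    correct = scores[np.arange(num_train), y][:, np.newaxis]  # (N, 1)
    margins = np.maximum(0, scores - correct + 1)
    margins[np.arange(num_train), y] = 0  # the correct class adds no loss
    loss = np.sum(margins) / num_train + reg * np.sum(W * W)

    # Each positive margin adds +X[i] to column j and -X[i] to column y[i].
    coeff = (margins > 0).astype(float)                       # (N, C)
    coeff[np.arange(num_train), y] = -np.sum(coeff, axis=1)
    dW = X.T.dot(coeff) / num_train + 2 * reg * W
    return loss, dW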

Stochastic Gradient Descent

iteration 0 / 1500: loss 789.956929
iteration 100 / 1500: loss 286.957111
iteration 200 / 1500: loss 108.054238
iteration 300 / 1500: loss 42.851563
iteration 400 / 1500: loss 18.755515
iteration 500 / 1500: loss 10.476052
iteration 600 / 1500: loss 7.454007
iteration 700 / 1500: loss 5.941640
iteration 800 / 1500: loss 5.516817
iteration 900 / 1500: loss 5.062649
iteration 1000 / 1500: loss 5.504820
iteration 1100 / 1500: loss 4.991620
iteration 1200 / 1500: loss 5.268961
iteration 1300 / 1500: loss 5.576416
iteration 1400 / 1500: loss 5.379530
That took 7.905974s

[figure: training loss vs. iteration number for the SGD run above]

training accuracy: 0.369796
validation accuracy: 0.378000
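This log is produced by LinearSVM.train, which is plain minibatch SGD: sample a batch, evaluate the vectorized loss and gradient on it, and take a step downhill. A sketch of the core loop (the name sgd_train and the exact defaults are illustrative, not the assignment's skeleton verbatim):

import numpy as np

def sgd_train(loss_fn, W, X, y, learning_rate=1e-7, reg=5e4,
              num_iters=1500, batch_size=200, verbose=True):
    """Minibatch SGD. loss_fn(W, X_batch, y_batch, reg) -> (loss, dW)."""
    num_train = X.shape[0]
    loss_history = []
    for it in range(num_iters):
        # Sample a minibatch with replacement (faster than a permutation).
        idx = np.random.choice(num_train, batch_size)
        loss, grad = loss_fn(W, X[idx], y[idx], reg)
        loss_history.append(loss)
        W -= learning_rate * grad  # vanilla parameter update
        if verbose and it % 100 == 0:
            print('iteration %d / %d: loss %f' % (it, num_iters, loss))
    return W, loss_history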

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of about 0.4 on the validation set.
learning_rates = [1e-7, 5e-5]
regularization_strengths = [5e4, 1e5]

# results is dictionary mapping tuples of the form
# (learning_rate, regularization_strength) to tuples of the form
# (training_accuracy, validation_accuracy). The accuracy is simply the fraction
# of data points that are correctly classified.
results = {}
best_val = -1    # The highest validation accuracy that we have seen so far.
best_svm = None  # The LinearSVM object that achieved the highest validation rate.

################################################################################
# TODO:
# Write code that chooses the best hyperparameters by tuning on the validation
# set. For each combination of hyperparameters, train a linear SVM on the
# training set, compute its accuracy on the training and validation sets, and
# store these numbers in the results dictionary. In addition, store the best
# validation accuracy in best_val and the LinearSVM object that achieves this
# accuracy in best_svm.
#
# Hint: You should use a small value for num_iters as you develop your
# validation code so that the SVMs don't take much time to train; once you are
# confident that your validation code works, you should rerun the validation
# code with a larger value for num_iters.
################################################################################
for i in range(len(learning_rates)):
    for j in range(len(regularization_strengths)):
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=learning_rates[i],
                  reg=regularization_strengths[j], num_iters=1500, verbose=True)
        y_train_pred = svm.predict(X_train)
        train_acc = np.mean(y_train == y_train_pred)
        y_val_pred = svm.predict(X_val)
        valid_acc = np.mean(y_val == y_val_pred)
        results[(learning_rates[i], regularization_strengths[j])] = (train_acc, valid_acc)
        if valid_acc > best_val:
            best_val = valid_acc
            best_svm = svm
################################################################################
#                               END OF YOUR CODE                               #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

(lr = 1e-7, reg = 5e4)
iteration 0 / 1500: loss 799.779545
iteration 100 / 1500: loss 292.351218
iteration 200 / 1500: loss 110.462690
iteration 300 / 1500: loss 43.564036
iteration 400 / 1500: loss 18.821538
iteration 500 / 1500: loss 10.338367
iteration 600 / 1500: loss 6.433465
iteration 700 / 1500: loss 6.566043
iteration 800 / 1500: loss 5.455641
iteration 900 / 1500: loss 5.246358
iteration 1000 / 1500: loss 5.279144
iteration 1100 / 1500: loss 5.268271
iteration 1200 / 1500: loss 5.012111
iteration 1300 / 1500: loss 5.404834
iteration 1400 / 1500: loss 5.478970

(lr = 1e-7, reg = 1e5)
iteration 0 / 1500: loss 1574.778009
iteration 100 / 1500: loss 213.106899
iteration 200 / 1500: loss 32.626224
iteration 300 / 1500: loss 9.114379
iteration 400 / 1500: loss 6.495307
iteration 500 / 1500: loss 5.586557
iteration 600 / 1500: loss 5.680854
iteration 700 / 1500: loss 5.040575
iteration 800 / 1500: loss 5.680529
iteration 900 / 1500: loss 5.605016
iteration 1000 / 1500: loss 5.837396
iteration 1100 / 1500: loss 6.054111
iteration 1200 / 1500: loss 5.478108
iteration 1300 / 1500: loss 5.388771
iteration 1400 / 1500: loss 5.892955

(lr = 5e-5, reg = 5e4)
iteration 0 / 1500: loss 786.731296
iteration 100 / 1500: loss 372589076869931435404747201187846029312.000000
iteration 200 / 1500: loss 61585990370051785004894134177410779318588604347968448883358878335248629760.000000
iteration 300 / 1500: loss 10179670970827104896078110321806246284423900004611290427100234052002452305363258503118939368480524451094462464.000000
iteration 400 / 1500: loss 1682618083295312148842471678978817759687110901366742347187448625071896880179190059955444785031475508580320163065249167013728902881310179493675008.000000
iteration 500 / 1500: loss 278123293213115722097370059754842829514103165147861980829003262265728581496311930776975905372498693007340822399276312865583191924009540020841611894514427147986789234274277083054080.000000
iteration 600 / 1500: loss 45971552900595313116772029731775298161615086412875563312473746890148851650899554535727913980317513350595229797846942222378439358912489297030041266226299675735950339247033183711929521823440364030454612383616670367744.000000
iteration 700 / 1500: loss 7598729511924862306843869847719811454111134463928988065576103889726492053689774080742626421156473555706227478414125142862484512009972445834660067689802748525182554256062059845298727089565150119555369117525512481723395132169001343612422471248716496896.000000
iteration 800 / 1500: loss 1256009130695476514042065188895708694386167197150478807130575071941587750791985204332778572941545839167934144338508622733642684600438543354626241487711673204846488733859201229622235768049134329587590947446520585586993428750974636359652013290906599124339943647917297088091449884107866112.000000
iteration 900 / 1500: loss inf
iteration 1000 / 1500: loss inf
iteration 1100 / 1500: loss inf
iteration 1200 / 1500: loss inf
iteration 1300 / 1500: loss inf
iteration 1400 / 1500: loss inf

(lr = 5e-5, reg = 1e5)
iteration 0 / 1500: loss 1544.611375
iteration 100 / 1500: loss 4238678903987820189366779422151797188648360345528845059954184416150988909166920159824040826765763779805042522194269368745984.000000
iteration 200 / 1500: loss 10945328083072100003028181743038293275798547088038776390451287457648334703187769946699615759302360535359122463204117538266218886366287342739804724717868689791260101164228515999586856183176780444739060870167407716397367014815094874657772466601984.000000
iteration 300 / 1500: loss inf
iteration 400 / 1500: loss inf
iteration 500 / 1500: loss inf
iteration 600 / 1500: loss nan
iteration 700 / 1500: loss nan
iteration 800 / 1500: loss nan
iteration 900 / 1500: loss nan
iteration 1000 / 1500: loss nan
iteration 1100 / 1500: loss nan
iteration 1200 / 1500: loss nan
iteration 1300 / 1500: loss nan
iteration 1400 / 1500: loss nan

lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.364878 val accuracy: 0.368000
lr 1.000000e-07 reg 1.000000e+05 train accuracy: 0.353265 val accuracy: 0.364000
lr 5.000000e-05 reg 5.000000e+04 train accuracy: 0.051306 val accuracy: 0.046000
lr 5.000000e-05 reg 1.000000e+05 train accuracy: 0.100265 val accuracy: 0.087000
best validation accuracy achieved during cross-validation: 0.368000

The two runs with lr = 5e-5 diverge: the step size is so large that the loss explodes, overflowing to inf and then nan within a few hundred iterations, which is why those settings end up near chance accuracy. The best setting in this grid is lr = 1e-7 with reg = 5e4.

[figure: training and validation accuracy for each hyperparameter setting]
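Finally, the best model from the sweep is evaluated once on the test set, which produces the number below; a minimal sketch, assuming best_svm from the tuning code above:

import numpy as np

y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)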

linear SVM on raw pixels final test set accuracy: 0.368000

[figure: the learned weights for each of the 10 classes, visualized as images]

