$ python3 demo/classification_transfer_learning.py --extractor mobilenet_v1_1.0_224_quant_embedding_extractor_edgetpu.tflite --data /cavy_hamster --output cavy_model.tflite --test_ratio 0.95
---------------------- Args ----------------------
Embedding extractor : .//mobilenet_v1_1.0_224_quant_embedding_extractor_edgetpu.tflite
Data set : .//cavy_hamster
Output path : .//cavy_model.tflite
Ratio of test images: 95%
--------------- Parsing data set -----------------
Dataset path: .//cavy_hamster
Image list successfully parsed! Category Num = 2
W third_party/darwinn/driver/package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
---------------- Processing training data ----------------
This process may take more than 30 seconds.
Processing category: cavy
Processing category: hamster
---------------- Start training -----------------
W third_party/darwinn/driver/package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
---------------- Training finished! -----------------
Model saved as : .//cavy_model.tflite
Labels file saved as : .//cavy_model.txt
------------------ Start evaluating ------------------
W third_party/darwinn/driver/package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
Evaluating category [ cavy ]
Evaluating category [ hamster ]
---------------- Evaluation result -----------------
Top 1 : 86%
Top 2 : 100%
Top 3 : 100%
Top 4 : 100%
Top 5 : 100%
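The Top-k figures above count a prediction as correct when the true label appears among the k highest-scoring classes. A minimal sketch of that metric in plain Python (the score arrays below are made up for illustration, not taken from the demo):

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k top-scoring classes."""
    hits = 0
    for sample_scores, label in zip(scores, labels):
        # Indices of the k largest scores, highest first.
        top_k = sorted(range(len(sample_scores)),
                       key=lambda i: sample_scores[i], reverse=True)[:k]
        if label in top_k:
            hits += 1
    return hits / len(labels)

# Two-class example: one score per class [cavy, hamster] for each image.
scores = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
labels = [0, 1, 1]  # the third image is a hamster missed at top-1
print(top_k_accuracy(scores, labels, 1))  # 2/3
print(top_k_accuracy(scores, labels, 2))  # 1.0 (with two classes, top-2 always hits)
```

With only two categories, Top-2 and beyond are trivially 100%, which is why the evaluation above saturates after Top-1.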
[1] Qi, Hang, Matthew Brown, and David G. Lowe. "Low-shot learning with imprinted weights." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.
tqdm means "progress" in Arabic (taqadum, تقدّم) and is an abbreviation for "I love you so much" in Spanish (te quiero demasiado). Instantly make your loops show a smart progress meter - just wrap any iterable with tqdm(iterable), and you're done!
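For example, wrapping a plain range is enough to get a live progress bar on stderr while the loop runs:

```python
from tqdm import tqdm

total = 0
for i in tqdm(range(100)):  # renders a progress bar as the loop advances
    total += i
print(total)  # 4950
```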
This can be done with the set_postfix function that tqdm provides.
Example implementation
Wrap the iterable in tqdm, then call set_postfix inside the loop.
The argument is an OrderedDict.
from collections import OrderedDict
from tqdm import tqdm

for epoch in range(n_epochs):
    with tqdm(self.train_loader, ncols=100) as pbar:
        for idx, (inputs, targets) in enumerate(pbar):
            self.optimizer.zero_grad()
            outputs = self.network(inputs)
            loss = self.criterion(outputs, targets)
            loss.backward()
            self.optimizer.step()
            # Show the current epoch and loss at the right edge of the bar.
            pbar.set_postfix(OrderedDict(
                epoch="{:>10}".format(epoch),
                loss="{:.4f}".format(loss.item())))
Figure 1: Test errors of LeNets pruned at varying sparsity levels κ̄, where κ̄ = 0 refers to the reference network trained without pruning. Our approach performs as well as the reference network across varying sparsity levels on both models.
Table 1: Pruning results on LeNets and comparisons to other approaches. Here, “many” refers to an arbitrary number, often on the order of the total learning steps, and “soft” refers to soft pruning in Bayesian-based methods. Our approach is capable of pruning up to 98% for LeNet-300-100 and 99% for LeNet-5-Caffe with marginal increases in error from the reference network. Notably, our approach is considerably simpler than other approaches, with no requirements such as pretraining, additional hyperparameters, an augmented training objective, or architecture-dependent constraints.
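The paper's criterion differs, but the mechanics of pruning a weight tensor to a target sparsity level κ̄ can be illustrated with plain magnitude pruning (a generic sketch, not the approach evaluated in Table 1; layer shapes and the NumPy-based helper are illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)              # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the cut-off threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(300, 100))                # e.g. one LeNet-300-100 layer
pruned = magnitude_prune(w, 0.98)              # keep roughly 2% of the weights
print(1 - np.count_nonzero(pruned) / w.size)   # achieved sparsity, ~0.98
```

In a real pipeline the surviving weights would then be fine-tuned (or, as in the paper's one-shot setting, trained from scratch under the fixed mask).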