- Jul 16, 2016
-
Daniel Povey authored
Some minor changes: replace '*1.0e-03' with '/1000.0f' for more consistent rounding; minor documentation and cosmetic changes
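For context, the reason division rounds more consistently: 1000.0f is exactly representable in binary floating point, so x / 1000.0f returns the correctly rounded quotient, whereas x * 1.0e-03f multiplies by an already-rounded constant and can land one ulp away. A small sketch (illustrative, not Kaldi code):

```cpp
#include <cassert>
#include <cstddef>

// 1.0e-03f is not exactly representable, so x * 1.0e-03f compounds the
// constant's rounding error; x / 1000.0f divides by an exact constant and
// yields the correctly rounded quotient. The two expressions therefore
// differ by one ulp for some inputs.
std::size_t CountDiscrepancies(int limit) {
  std::size_t count = 0;
  for (int i = 1; i <= limit; ++i) {
    float x = static_cast<float>(i);
    if (x * 1.0e-03f != x / 1000.0f)
      ++count;
  }
  return count;
}
```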
-
- Jul 15, 2016
-
Daniel Povey authored
CHiME4 recipe.
-
Dan Povey authored
-
sw005320 authored
-
Yiming Wang authored
Shrink the input value of ClipGradientComponent toward a smaller value when the clipping proportion exceeds a threshold (#803); also fixes a minor profiling bug in cu-vector.cc.
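A rough sketch of the idea (hypothetical helper and constants, not the actual ClipGradientComponent implementation): clip each element, and if too large a fraction was clipped, additionally scale the values down:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Clip each gradient element to [-limit, limit]; if the fraction of clipped
// elements exceeds `threshold`, scale the whole vector by `shrink_scale`,
// nudging values toward a smaller range. Returns the clipping proportion.
double ClipAndMaybeShrink(std::vector<double> &grad, double limit,
                          double threshold, double shrink_scale) {
  std::size_t num_clipped = 0;
  for (double &g : grad) {
    if (g > limit) { g = limit; ++num_clipped; }
    else if (g < -limit) { g = -limit; ++num_clipped; }
  }
  double proportion = static_cast<double>(num_clipped) / grad.size();
  if (proportion > threshold)
    for (double &g : grad) g *= shrink_scale;
  return proportion;
}
```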
-
- Jul 14, 2016
-
Dan Povey authored
-
- Jul 13, 2016
-
Yiming Wang authored
* add an option to set random seed for model initialization and egs shuffling in training
* fix
* fix2
* do random seed checks
* remove the check for random seed of egs generation
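The point of a settable seed is reproducibility: the same seed must give the same initialization and the same shuffling order. A generic sketch (hypothetical helper, not Kaldi's actual option handling):

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <random>
#include <vector>

// With an explicit seed, shuffling is deterministic: re-running with the
// same seed reproduces the same permutation of indices.
std::vector<int> ShuffledIndices(int n, unsigned int seed) {
  std::vector<int> idx(n);
  std::iota(idx.begin(), idx.end(), 0);
  std::mt19937 rng(seed);
  std::shuffle(idx.begin(), idx.end(), rng);
  return idx;
}
```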
-
Daniel Povey authored
Online feature fbank fix
-
- Jul 10, 2016
-
Daniel Povey authored
-
- Jul 08, 2016
-
Daniel Povey authored
Speed up softmax
-
Daniel Povey authored
linux_x86_64_mkl.mk now respects --static-fst=yes
-
Shiyin Kang authored
CuMatrix::Softmax speed, new vs. old (gigaflops; higher is better):

  dim   float new   float old   double new   double old
   16   0.0153621   0.0138999   0.0143354    0.013143
   32   0.0614275   0.0507328   0.0590478    0.0495458
   64   0.235765    0.203548    0.228611     0.193465
  128   0.729239    0.725481    0.668961     0.676449
  256   2.30126     1.71863     2.1013       1.51862
  512   5.0565      3.69659     4.13055      3.1547
 1024   10.2482     6.38335     6.43429      5.02974

Plus minor changes.
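For reference, the per-row computation being benchmarked, in scalar form (a sketch, not the CUDA kernel): numerically stable softmax subtracts the row maximum before exponentiating:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Softmax of one row: subtracting the row maximum before exp avoids
// overflow without changing the result, since the shift cancels in the
// normalization.
std::vector<double> SoftmaxRow(const std::vector<double> &row) {
  double max = *std::max_element(row.begin(), row.end());
  std::vector<double> out(row.size());
  double sum = 0.0;
  for (std::size_t i = 0; i < row.size(); ++i) {
    out[i] = std::exp(row[i] - max);
    sum += out[i];
  }
  for (double &v : out) v /= sum;
  return out;
}
```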
-
scinart authored
-
- Jul 07, 2016
-
Daniel Povey authored
-
Daniel Povey authored
Fixes #883
-
Daniel Galvez authored
Avoid using Python's / operator so that we do not accidentally create a floating-point value when Python 3 is used.
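The pitfall, in brief: in Python 2, `/` on two ints floor-divides, but in Python 3 it is true division and yields a float; `//` floor-divides in both:

```python
# `7 / 2` is 3 in Python 2 but 3.5 (a float) in Python 3, which silently
# breaks code that expects an integer (e.g. an index or a frame count).
# `//` floor-divides and returns an int in both versions.
num_frames = 7
num_chunks = 2
frames_per_chunk = num_frames // num_chunks  # 3, an int everywhere
assert frames_per_chunk == 3
assert isinstance(num_frames / num_chunks, float)  # true division in Python 3
```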
-
- Jul 06, 2016
-
Daniel Povey authored
Make the LogLikelihoodRatio function const; it does not modify the PLDA object.
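The general C++ rule behind this: a member function that only reads state should be declared const so it can be called through const references and pointers (illustrative class, not Kaldi's actual PLDA code):

```cpp
#include <cassert>

// A scoring method that does not change any member state is declared
// `const`, so it is callable on a `const Scorer&`. Without `const`, code
// holding a const reference could not call it at all.
class Scorer {
 public:
  explicit Scorer(double offset) : offset_(offset) {}
  double LogLikelihoodRatio(double score) const {  // reads, never writes
    return score - offset_;
  }
 private:
  double offset_;
};
```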
-
Matthew Maciejewski authored
-
- Jul 04, 2016
-
Daniel Povey authored
-
Daniel Povey authored
Fix an issue with 'explicit' constructors that prevented the class from being stored in an STL vector, plus various cosmetic and documentation changes.
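One way an explicit constructor gets in the way of container idioms (illustrative class, not the actual Kaldi one): a single-argument explicit constructor blocks the implicit conversions that brace-initialization from plain values relies on:

```cpp
#include <cassert>
#include <vector>

// A non-explicit single-argument constructor permits implicit int ->
// Wrapped conversion, which copy-list-initialization below depends on.
struct Wrapped {
  int value;
  Wrapped(int v) : value(v) {}  // if declared `explicit`, MakeVec() below
                                // would fail to compile
};

std::vector<Wrapped> MakeVec() {
  // Each element is copy-list-initialized from an int, requiring the
  // implicit conversion.
  std::vector<Wrapped> v = {1, 2, 3};
  return v;
}
```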
-
Joachim authored
Fix an error that would occur if the number of hidden layers exceeded the number of LSTM layers.
-
Dan Povey authored
-
- Jul 01, 2016
-
Xingyu Na authored
-
- Jun 29, 2016
-
Wonkyum Lee authored
- fix a bug ('unknown feature' error) which occurs when --endpoint=true is used with fbank features
-
Wonkyum Lee authored
-
- Jun 28, 2016
-
Daniel Povey authored
Fix bug introduced in #863
-
Shiyin Kang authored
resize the result matrix before back-propagation
-
Xingyu Na authored
-
Daniel Galvez authored
-
- Jun 26, 2016
-
Daniel Povey authored
Standard way to represent inf
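The commit message does not say which representation was adopted; the conventional portable choice (accepted by MSVC as well as GCC/Clang) is std::numeric_limits, shown here as a sketch:

```cpp
#include <cassert>
#include <limits>

// Expressions like 1.0/0.0 can draw compiler warnings or diagnostics;
// std::numeric_limits<T>::infinity() is the standard, portable way to
// obtain a floating-point infinity.
const double kInfinity = std::numeric_limits<double>::infinity();
```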
-
Shiyin Kang authored
-
Daniel Povey authored
For compatibility with MS Visual Studio.
-
tal1974 authored
-
Daniel Povey authored
Speed up SoftmaxComponent::Backprop()
-
- Jun 25, 2016
-
Daniel Povey authored
yesno recipe data prep: make paths absolute, plus miscellaneous fixes.
-
jfainberg authored
-
Shiyin Kang authored
CuMatrix::DiffSoftmaxPerRow speed, new vs. old (gigaflops; higher is better):

  dim   float new   float old    double new   double old
   16   0.0165568   0.00355242   0.0148584    0.00260567
   32   0.0678791   0.0145515    0.0586865    0.0121077
   64   0.24739     0.0583246    0.22893      0.0527767
  128   0.898427    0.225076     0.763462     0.175736
  256   2.89009     0.834096     2.40457      0.58351
  512   6.72164     1.92722      4.55165      1.42464
 1024   10.4916     2.78281      4.36421      1.94971
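For reference, the per-row derivative being benchmarked, in scalar form (a sketch, not the CUDA kernel): with softmax outputs y and output derivative e, the input derivative is d_i = y_i * (e_i - sum_j e_j * y_j):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Softmax backprop for one row: the Jacobian-vector product of the softmax
// reduces to an elementwise formula involving a single per-row dot product.
std::vector<double> DiffSoftmaxRow(const std::vector<double> &y,
                                   const std::vector<double> &e) {
  double dot = 0.0;
  for (std::size_t j = 0; j < y.size(); ++j) dot += e[j] * y[j];
  std::vector<double> d(y.size());
  for (std::size_t i = 0; i < y.size(); ++i) d[i] = y[i] * (e[i] - dot);
  return d;
}
```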
-
Shiyin Kang authored
-
Shiyin Kang authored
-
- Jun 23, 2016
-
Dan Povey authored
Modify error message and documentation to suggest nvidia-smi -c 3 (process-exclusive mode), as thread-exclusive mode is now deprecated.
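The suggested command, for reference (requires root; the -i flag can target a single GPU):

```shell
# Put the GPU(s) into EXCLUSIVE_PROCESS compute mode (mode 3); the older
# thread-exclusive mode (mode 1) is deprecated.
sudo nvidia-smi -c 3
```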
-