BatchNorm2d.patch

--- /usr/local/lib/python3.5/dist-packages/torch/nn/modules/batchnorm.py
+++ /usr/local/lib/python3.5/dist-packages/torch/nn/modules/batchnorm.py
@@ -1,8 +1,7 @@
 class BatchNorm2d(_BatchNorm):
     r"""Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs
     with additional channel dimension) as described in the paper
-    `Batch Normalization: Accelerating Deep Network Training by Reducing
-    Internal Covariate Shift <https://arxiv.org/abs/1502.03167>`__ .
+    `Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`_ .
 
     .. math::
 
@@ -10,9 +9,8 @@
     The mean and standard-deviation are calculated per-dimension over
     the mini-batches and :math:`\gamma` and :math:`\beta` are learnable parameter vectors
-    of size `C` (where `C` is the input size). By default, the elements of :math:`\gamma` are set
-    to 1 and the elements of :math:`\beta` are set to 0. The standard-deviation is calculated
-    via the biased estimator, equivalent to `torch.var(input, unbiased=False)`.
+    of size `C` (where `C` is the input size). By default, the elements of :math:`\gamma` are sampled
+    from :math:`\mathcal{U}(0, 1)` and the elements of :math:`\beta` are set to 0.
 
     Also by default, during training this layer keeps running estimates of its
     computed mean and variance, which are then used for normalization during
     evaluation. The running estimates are kept with a default :attr:`momentum` of 0.1.
@@ -27,7 +25,7 @@
         This :attr:`momentum` argument is different from one used in optimizer
         classes and the conventional notion of momentum. Mathematically, the
         update rule for running statistics here is
-        :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t`,
+        :math:`\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momemtum} \times x_t`,
         where :math:`\hat{x}` is the estimated statistic and :math:`x_t` is the
         new observed value.
 
@@ -46,10 +44,8 @@
             learnable affine parameters. Default: ``True``
         track_running_stats: a boolean value that when set to ``True``, this
             module tracks the running mean and variance, and when set to ``False``,
-            this module does not track such statistics, and initializes statistics
-            buffers :attr:`running_mean` and :attr:`running_var` as ``None``.
-            When these buffers are ``None``, this module always uses batch statistics
-            in both training and eval modes. Default: ``True``
+            this module does not track such statistics and always uses batch
+            statistics in both training and eval modes. Default: ``True``
 
     Shape:
         - Input: :math:`(N, C, H, W)`
@@ -63,8 +59,12 @@
         >>> m = nn.BatchNorm2d(100, affine=False)
         >>> input = torch.randn(20, 100, 35, 45)
         >>> output = m(input)
+
+    .. _`Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift`:
+        https://arxiv.org/abs/1502.03167
     """
 
+    @weak_script_method
     def _check_input_dim(self, input):
         if input.dim() != 4:
             raise ValueError('expected 4D input (got {}D input)'
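
The hunks above track real behavior differences between PyTorch releases, and each can be checked against an installed build. The second hunk swaps the description of how :math:`\gamma` is initialized; both descriptions were accurate for their respective releases (up to torch 1.1 `reset_parameters` filled the weight by uniform sampling, while later releases initialize it to ones). A minimal sketch to see which behavior a local install has; the expected values in the comments are assumptions about the installed version, not part of the patch:

    import torch.nn as nn

    # gamma is exposed as .weight, beta as .bias.
    m = nn.BatchNorm2d(100)
    # torch >= 1.2 prints 1.0 1.0 (weight initialized to ones);
    # torch <= 1.1 prints two values inside (0, 1) (uniform sampling).
    print(float(m.weight.min()), float(m.weight.max()))
    print(float(m.bias.abs().max()))  # 0.0 either way: beta starts at zero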
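The note in the third hunk gives the running-statistics update rule, x_hat_new = (1 - momentum) * x_hat + momentum * x_t. A minimal sketch verifying it numerically, assuming a recent PyTorch in which `running_mean` starts at zero and `momentum` defaults to 0.1:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    m = nn.BatchNorm2d(3, momentum=0.1)  # running_mean is initialized to zeros
    m.train()

    x = torch.randn(20, 3, 35, 45)
    _ = m(x)

    # x_t is the per-channel batch mean; x_hat was zero before this step.
    batch_mean = x.mean(dim=(0, 2, 3))
    expected = (1 - 0.1) * torch.zeros(3) + 0.1 * batch_mean
    print(torch.allclose(m.running_mean, expected))  # True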
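Finally, the `track_running_stats` hunk: with the flag off, the module normalizes with the statistics of the current batch even in eval mode, using the biased variance estimator that the removed docstring text mentions. A minimal sketch, again assuming a recent PyTorch:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    m = nn.BatchNorm2d(3, affine=False, track_running_stats=False)
    m.eval()  # no running buffers, so batch statistics are still used

    x = torch.randn(20, 3, 35, 45)
    y = m(x)

    # Manual normalization with the biased estimator, torch.var(..., unbiased=False).
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    manual = (x - mean) / torch.sqrt(var + m.eps)
    print(torch.allclose(y, manual, atol=1e-5))  # True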