Tags: ricosjp/phlower
### Added

* Add `attach_handler` method to `PhlowerTrainer` to register an extra handler for the training process.
* Add `lazy_load` parameter to `TrainingSetting` to load data lazily.
* When `time_series_length` is -1, `PhlowerGroupModule` determines the time series length automatically from the input data.
* Add `NaNStoppingHandler` to stop training when the loss becomes NaN.
* Add a sliding window method for time series data in the training process (see the first sketch after this changelog entry).
* Add distributed data parallel (DDP) training.
* Add `evaluate_context_manager` parameter to `TrainerSetting` to choose the context manager used during evaluation.
* Add `LayerNorm` module in `phlower.nn`.
* Add `aggregation_method` parameter to `LossCalculator` to choose how losses are aggregated (sum or mean; see the second sketch after this changelog entry).
* Add `cg` mode to the iteration solver of `PhlowerGroupModule`.
* Add `PhlowerPresetGroupModule` as a preset group.

### Changed

* A time series tensor is split into individual time steps when forwarding with `time_series_length` in `PhlowerGroupModule`.
* Display details of losses during training.
* Change the default inference mode from `torch.no_grad` to `torch.inference_mode`.

### Fixed

* Fix the `restart` method so that it does not recursively load previously restarted checkpoints.
* Fix DDP setup to use a free TCP port.
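The sliding-window addition above refers to cutting a long time series into fixed-length training samples. The following is a minimal, library-independent sketch of that idea in plain PyTorch; the tensor layout, window size, and stride are illustrative assumptions, not phlower's actual settings or implementation.

```python
import torch

# Illustrative time series: 100 time steps, 32 nodes, 3 features per node.
series = torch.randn(100, 32, 3)

window, stride = 10, 5  # hypothetical values, not phlower defaults

# Slide a window of `window` steps along the time axis (dim 0).
# unfold yields shape (n_windows, 32, 3, window); permute moves the
# window axis forward so each sample is again (time, nodes, features).
samples = series.unfold(0, window, stride).permute(0, 3, 1, 2)

print(samples.shape)  # torch.Size([19, 10, 32, 3])
```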
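The `aggregation_method` option controls how individual loss terms are reduced to a single training loss. The snippet below sketches the difference between sum and mean aggregation in plain PyTorch; the per-output loss dictionary is an illustrative assumption, not phlower's `LossCalculator` API.

```python
import torch

# Hypothetical per-output losses, e.g. one term per predicted field.
losses = {
    "velocity": torch.tensor(0.12),
    "pressure": torch.tensor(0.48),
}
stacked = torch.stack(list(losses.values()))

# aggregation_method="sum": the total grows with the number of loss terms.
loss_sum = stacked.sum()    # 0.60

# aggregation_method="mean": the total is independent of the term count.
loss_mean = stacked.mean()  # 0.30
```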
### Changed

* Decompose input members when applying the reverse transform after prediction.
* Change the default value of the bias parameter in the setting class of GCN. (False -> True)
* Change the default value of the bias parameter in the setting class of coefficient_network in IsoGCN. (False -> True)

### Fixed

* Fix `PhlowerTensor` with a physical dimension to handle `torch.stack` (see the sketch after this changelog entry).
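For context on the `torch.stack` fix: a tensor that carries physical-dimension metadata needs explicit handling so the metadata survives functions such as `torch.stack`. The class below is not phlower's `PhlowerTensor`; it is a minimal, hypothetical subclass that only illustrates the general `__torch_function__` mechanism such a fix relies on.

```python
import torch


class DimTensor(torch.Tensor):
    """Hypothetical tensor carrying a physical-dimension tag (not PhlowerTensor)."""

    def __new__(cls, data, dimension=None):
        obj = torch.as_tensor(data).as_subclass(cls)
        obj.dimension = dimension
        return obj

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        out = super().__torch_function__(func, types, args, kwargs or {})
        if func is torch.stack and isinstance(out, cls):
            # Propagate the dimension tag, requiring all inputs to agree.
            dims = {getattr(t, "dimension", None) for t in args[0]}
            if len(dims) != 1:
                raise ValueError("cannot stack tensors with different physical dimensions")
            out.dimension = dims.pop()
        return out


a = DimTensor([1.0, 2.0], dimension="m/s")
b = DimTensor([3.0, 4.0], dimension="m/s")
print(torch.stack([a, b]).dimension)  # m/s
```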