Scripts

run

class run.TASK_NAMES(value)[source]

Bases: enum.Enum

The tasks that can be launched, selected via the config parameter task.task_name.

FINETUNE = 'finetune'
FIT = 'fit'
HDF5 = 'create_hdf5'
PREDICT = 'predict'
TEST = 'test'
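Since the members are string-valued Enum entries, the task name found in a config resolves to a member by value. A minimal sketch (the class is re-declared locally here so the snippet is self-contained; in real code, import it from the run module):

```python
from enum import Enum

# Local re-declaration of run.TASK_NAMES for illustration only;
# the values mirror those documented above.
class TASK_NAMES(Enum):
    FIT = "fit"
    TEST = "test"
    FINETUNE = "finetune"
    PREDICT = "predict"
    HDF5 = "create_hdf5"

# The string from config.task.task_name maps to a member by value:
print(TASK_NAMES("create_hdf5"))  # TASK_NAMES.HDF5
print(TASK_NAMES.FINETUNE.value)  # finetune
```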
run.launch_hdf5(config: omegaconf.dictconfig.DictConfig)[source]

Build an HDF5 file from a directory of LAS files.

run.launch_predict(config: omegaconf.dictconfig.DictConfig)[source]

Infer probabilities and automate semantic segmentation decisions on unseen data.

run.launch_train(config: omegaconf.dictconfig.DictConfig)[source]

Training, evaluation, testing, or finetuning of a neural network.
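Taken together, the three launch functions suggest a dispatch on task.task_name. The sketch below illustrates that routing; the mapping is inferred from the docstrings above, not taken from the actual run.py source, and a plain dict stands in for the Hydra-composed DictConfig:

```python
# Illustrative dispatch on task.task_name; the mapping is an assumption
# inferred from the launch functions' docstrings.
def dispatch(config: dict) -> str:
    task_name = config["task"]["task_name"]
    if task_name in ("fit", "test", "finetune"):
        return "launch_train"    # training, testing and finetuning share one entry point
    if task_name == "predict":
        return "launch_predict"  # inference on unseen data
    if task_name == "create_hdf5":
        return "launch_hdf5"     # dataset preparation
    raise ValueError(f"Unknown task: {task_name}")

config = {"task": {"task_name": "fit"}}  # stand-in for the Hydra DictConfig
print(dispatch(config))  # launch_train
```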

myria3d.train

myria3d.train.train(config: omegaconf.dictconfig.DictConfig) pytorch_lightning.trainer.trainer.Trainer[source]

Training pipeline (+ Test, + Finetuning)

Instantiates all PyTorch Lightning objects from the config, then performs one of the following tasks based on the parameter task.task_name:

fit:

Fits a neural network: train on a prepared training set and validate on a prepared validation set. Optionally, resume a checkpointed training by specifying config.model.ckpt_path.

test:

Tests a trained neural network on the test set of a prepared dataset (i.e. the test subdirectory, which contains LAS files with a classification).

finetune:

Finetunes a checkpointed neural network on a prepared dataset; the checkpoint must be specified via config.model.ckpt_path. In contrast to fit, finetuning resumes training under altered conditions. This starts a new, distinct training run, and the training state is reset (e.g. the epoch counter starts from 0).

Typical use cases are:

  • a different learning rate (config.model.lr) or a different scheduler setting (e.g. a larger config.model.lr_scheduler.patience for the ReduceLROnPlateau scheduler)

  • a different number of classes to predict, e.g. in order to specialize a base model. This is done by specifying a new config.dataset_description as well as the corresponding config.model.num_classes. Additionally, a specific callback must be activated to change the neural network's output layer after its weights are loaded. See configs/experiment/RandLaNetDebugFineTune.yaml for an example.
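A finetuning run thus typically overrides the checkpoint path, learning rate, and class count of a base config. In the sketch below, the keys follow the docstring above (config.model.ckpt_path, lr, num_classes), while the values and the merge helper are purely illustrative; Hydra performs this composition for real configs:

```python
# Hypothetical base config and finetuning overrides; values are made up
# for illustration.
base = {"model": {"ckpt_path": None, "lr": 1e-3, "num_classes": 7}}
overrides = {
    "model": {
        "ckpt_path": "/path/to/pretrained.ckpt",  # checkpoint to finetune from
        "lr": 1e-4,                               # lower learning rate
        "num_classes": 3,                         # specialize to fewer classes
    }
}

def deep_merge(dst: dict, src: dict) -> dict:
    """Recursively merge src into a copy of dst (stand-in for Hydra composition)."""
    out = dict(dst)
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

config = deep_merge(base, overrides)
print(config["model"]["num_classes"])  # 3
```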

Parameters

config (DictConfig) – Configuration composed by Hydra.

Returns

The PyTorch Lightning trainer used for the task.

Return type

Trainer

myria3d.predict