.. _Transformers page:

Transformers
****************

Transformers are components that apply a transformation to the point cloud. They can be divided into class transformers (:class:`.ClassTransformer`) that transform the classification and predictions of the point cloud, feature transformers (:class:`.FeatureTransformer`) that transform the features of the point cloud, and point transformers (:class:`.PointTransformer`) that compute an advanced transformation on the point cloud involving different information (e.g., spatial coordinates to derive receptive fields that can be used to reduce or propagate both features and classes).

Transformers are typically used inside pipelines to apply transformations to the point cloud at the current pipeline's state. Readers are strongly encouraged to read the :ref:`Pipelines documentation` before looking further into transformers.

Class transformers
====================

Class reducer
---------------

The :class:`.ClassReducer` takes an original set of :math:`n_I` input classes and returns :math:`n_O` output classes, where :math:`n_O < n_I`. It can be applied to the reference classification only or also to the predictions. On top of that, it supports a text report on the class distributions with the absolute and relative frequencies and a plot of the class distribution before and after the transformation.

A :class:`.ClassReducer` can be defined inside a pipeline using the JSON below:

..
.. code-block:: json

    {
        "class_transformer": "ClassReducer",
        "on_predictions": false,
        "input_class_names": ["noclass", "ground", "vegetation", "cars", "trucks", "powerlines", "fences", "poles", "buildings"],
        "output_class_names": ["noclass", "ground", "vegetation", "buildings", "objects"],
        "class_groups": [["noclass"], ["ground"], ["vegetation"], ["buildings"], ["cars", "trucks", "powerlines", "fences", "poles"]],
        "report_path": "class_reduction.log",
        "plot_path": "class_reduction.svg"
    }

The JSON above defines a :class:`.ClassReducer` that will map the nine original classes to five reduced classes, where several classes are grouped together as the ``"objects"`` class. Moreover, it will generate a text report in a file called `class_reduction.log` and a figure representing the class distribution in `class_reduction.svg`.

**Arguments**

-- ``on_predictions`` Whether to also reduce the predictions, if any (True), or not (False). Note that setting ``on_predictions`` to True will only work if there are available predictions.

-- ``input_class_names`` A list with the names of the input classes.

-- ``output_class_names`` A list with the desired names for the output classes.

-- ``class_groups`` A list of lists such that the i-th list defines which input classes will be grouped to obtain the reduced class i. In other words, each sublist contains the names of the input classes that must be mapped to the corresponding output class.

-- ``report_path`` Path where the text report on the class distributions must be written. If it is not given, then no report will be generated.

-- ``plot_path`` Path where the plot of the class distributions must be written. If it is not given, then no plot will be generated.

**Output**

The examples in this section come from applying a :class:`.ClassReducer` to the `5080_54435.laz` point cloud of the `DALES dataset `_ . An example of the plot representing how the classes are distributed before and after the :class:`.ClassReducer` is shown below.

..
.. figure:: ../img/class_reducer_plot.png
    :scale: 20
    :alt: Figure representing the distribution of classes before and after the class reduction

    Visualization of the class distributions before and after the class reduction.

An example of how the classes on the point cloud look before and after the :class:`.ClassReducer` is shown below.

.. figure:: ../img/class_reducer_pcloud.png
    :scale: 40
    :alt: Figure representing a class reduction.

    Visualization of the original (left) and reduced classification (right).

Class setter
-----------------

The :class:`.ClassSetter` assigns the classes of a point cloud from any of its attributes. A :class:`.ClassSetter` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "class_transformer": "ClassSetter",
        "fname": "Prediction"
    }

The JSON above defines a :class:`.ClassSetter` that will assign the ``"Prediction"`` attribute as the point-wise classes of the point cloud.

**Arguments**

-- ``fname`` The name of the attribute that must be considered as the new classification of the point cloud.

Distance reclassifier
-----------------------

The :class:`.DistanceReclassifier` takes an original set of :math:`n_I` input classes and returns :math:`n_O` output classes. It can be applied to the reference classification or to the predictions. The transformation is based on relational filters, k-nearest neighbor neighborhoods, and point-wise distances involving the structure and feature spaces. It also supports a text report on the class distributions with the absolute and relative frequencies and a plot of the class distribution before and after the transformation.

A :class:`.DistanceReclassifier` can be defined inside a pipeline using the JSON below:

..
.. code-block:: json

    {
        "class_transformer": "DistanceReclassifier",
        "on_predictions": false,
        "input_class_names": ["ground", "vegetation", "building", "other"],
        "output_class_names": ["ground", "lowveg", "midveg", "highveg", "building", "other"],
        "reclassifications": [
            {
                "source_classes": ["vegetation"],
                "target_class": "highveg",
                "conditions": null,
                "distance_filters": null
            },
            {
                "source_classes": ["vegetation"],
                "target_class": "lowveg",
                "conditions": [
                    {
                        "value_name": "floor_dist",
                        "condition_type": "less_than",
                        "value_target": 0.5,
                        "action": "preserve"
                    }
                ],
                "distance_filters": null
            },
            {
                "source_classes": ["vegetation"],
                "target_class": "lowveg",
                "conditions": null,
                "distance_filters": [
                    {
                        "metric": "euclidean",
                        "components": ["z"],
                        "knn": {
                            "coordinates": ["x", "y"],
                            "max_distance": null,
                            "k": 1,
                            "source_classes": ["ground"]
                        },
                        "filter_type": "less_than",
                        "filter_target": 1.0,
                        "action": "preserve"
                    }
                ]
            },
            {
                "source_classes": ["vegetation"],
                "target_class": "midveg",
                "conditions": null,
                "distance_filters": [
                    {
                        "metric": "euclidean",
                        "components": ["z"],
                        "knn": {
                            "coordinates": ["x", "y"],
                            "max_distance": null,
                            "k": 1,
                            "source_classes": ["ground"]
                        },
                        "filter_type": "inside",
                        "filter_target": [1.0, 5.0],
                        "action": "preserve"
                    }
                ]
            }
        ],
        "report_path": "reclassification.log",
        "plot_path": "reclassification.svg",
        "nthreads": -1
    }

The JSON above defines a :class:`.DistanceReclassifier` that will preserve the ground, building, and other classes while transforming the vegetation class into lowveg (low vegetation), midveg (mid vegetation), and highveg (high vegetation). In the process, it will generate a text report in a file called `reclassification.log` and a figure representing the class distributions in `reclassification.svg`.

**Arguments**

-- ``on_predictions`` Whether to also transform the predictions, if any (True), or not (False). Note that setting ``on_predictions`` to True will only work if there are available predictions.
-- ``input_class_names`` A list with the names of the input classes.

-- ``output_class_names`` A list with the desired names for the output/transformed classes.

-- ``reclassifications`` A list of dictionaries such that each dictionary specifies a class transform operation.

-- ``source_classes`` The names of the classes such that only points of these classes will be modified by the reclassification operation.

-- ``target_class`` The name of the target/output class to which those points that satisfy the conditions and distance-based filters will be assigned.

-- ``conditions`` A list of dictionaries such that each dictionary specifies a relational filter. See :ref:`documentation about advanced input conditions ` .

-- ``value_name`` See :ref:`documentation about advanced input conditions value name ` .

-- ``condition_type`` See :ref:`documentation about advanced input condition type ` .

-- ``value_target`` See :ref:`documentation about advanced input conditions value target ` .

-- ``action`` See :ref:`documentation about advanced input conditions action ` .

-- ``distance_filters`` A list of dictionaries where each dictionary specifies a distance-based filter.

-- ``metric`` The distance metric to be computed for :math:`n` components. It can be either ``"euclidean"``

.. math::
    \operatorname{d}(\pmb{p}, \pmb{q}) = \sqrt{\sum_{j=1}^{n}{(p_j-q_j)^2}}

or ``"manhattan"``

.. math::
    \operatorname{d}(\pmb{p}, \pmb{q}) = \sum_{j=1}^{n}{\left\lvert{p_j-q_j}\right\rvert}

-- ``components`` A list with the names of the components defining the vectors whose distance will be computed. Supported components are ``"x"``, ``"y"``, and ``"z"`` for the corresponding coordinates from the structure space and also any feature name from the point cloud's feature space.

-- ``knn`` The dictionary with the k-nearest neighbor neighborhood specification.

-- ``coordinates`` The coordinates defining the points for the neighborhood computations.
For example, ``["x", "y", "z"]`` implies typical 3D neighborhoods and ``["x", "y"]`` implies typical 2D neighborhoods.

-- ``max_distance`` The max distance that any neighbor must satisfy. Points further away than this distance will be excluded from the neighborhood.

-- ``k`` The number of :math:`k`-nearest neighbors.

-- ``source_classes`` Neighborhoods will only contain points belonging to the given source classes. If None, then all points will be considered as neighbors, no matter their class.

-- ``filter_type`` Like :ref:`the advanced input condition type specification ` but also supports ``"inside"`` (:math:`x \in [a, b] \subset \mathbb{R}`).

-- ``filter_target`` See :ref:`documentation about advanced input conditions value target ` .

-- ``action`` See :ref:`documentation about advanced input conditions action ` .

-- ``report_path`` Path where the text report on the class distributions must be written. If it is not given, then no report will be generated.

-- ``plot_path`` Path where the plot of the class distributions must be written. If it is not given, then no plot will be generated.

-- ``nthreads`` The number of threads for the parallel computations. Note that using ``-1`` means as many threads as available cores.

**Output**

The examples in this section come from applying a :class:`.DistanceReclassifier` to the `PNOA_2015_GAL-W_478-4766_ORT-CLA-COL` point cloud of the `PNOA-II dataset `_ . The figure below is the plot representing how the classes are distributed before and after the :class:`.DistanceReclassifier`.

.. figure:: ../img/pnoa_veg_reclassif_plot.png
    :scale: 30
    :alt: Figure representing the distribution of classes before and after the distance-based reclassification.

    Visualization of the class distributions before and after the distance-based reclassification.
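The height-based rules above can be pictured with a small numpy sketch. This is not the library's implementation, only an illustration of the distance-filter logic: for each vegetation point, the nearest ground point in the :math:`(x, y)` plane is found (``knn`` with ``k = 1``) and the euclidean distance on the ``z`` component decides between lowveg, midveg, and highveg. All names and thresholds are illustrative.

```python
import numpy as np

def reclassify_vegetation(X, y, thresholds=(1.0, 5.0)):
    """Sketch of the height-based reclassification: brute-force k=1
    nearest ground neighbor in (x, y), then threshold on the vertical
    distance. X is an (m, 3) coordinate matrix, y the class labels."""
    ground = X[y == 'ground']
    out = y.copy()
    for i in np.where(y == 'vegetation')[0]:
        # nearest ground point using the (x, y) coordinates only
        j = np.argmin(np.linalg.norm(ground[:, :2] - X[i, :2], axis=1))
        dz = abs(X[i, 2] - ground[j, 2])  # euclidean distance on z
        if dz < thresholds[0]:
            out[i] = 'lowveg'
        elif dz <= thresholds[1]:
            out[i] = 'midveg'
        else:
            out[i] = 'highveg'
    return out
```

The actual component generalizes this idea with arbitrary metrics, components, neighborhood constraints, and parallel execution.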
The figure below represents the vegetation reclassified by heights following the specifications from the JSON above (note that the floor distance was computed using the :ref:`height features miner ++ `).

.. figure:: ../img/pnoa_veg_reclassif.png
    :scale: 60
    :alt: Figure representing the reclassified point cloud.

    Visualization of the reclassified point cloud. Non-vegetation classes are colored white, low vegetation points are blue, mid vegetation ones are green, and high vegetation is red.

Feature transformers
=======================

Minmax normalizer
-------------------

The :class:`.MinmaxNormalizer` maps the specified features so they are inside the :math:`[a, b]` interval. It can be configured to clip values outside the interval or not. If so, values below :math:`a` will be replaced by :math:`a`, while values above :math:`b` will be replaced by :math:`b`.

A :class:`.MinmaxNormalizer` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "feature_transformer": "MinmaxNormalizer",
        "fnames": ["AUTO"],
        "target_range": [0, 1],
        "clip": true,
        "report_path": "minmax_normalization.log"
    }

The JSON above defines a :class:`.MinmaxNormalizer` that will map the features to be inside the :math:`[0, 1]` interval. If this transformer is later applied to different data, it will make sure that there is no value less than zero or greater than one. On top of that, a report about the normalization will be written to the `minmax_normalization.log` text file.

**Arguments**

-- ``fnames`` The names of the features to be normalized. If ``"AUTO"``, the features considered by the last component that operated over the features will be used.

-- ``target_range`` The interval to normalize the features.

-- ``clip`` When a minmax normalizer has been fit to a dataset, it will find the min and max values to compute the normalization. It can be that the normalizer is then applied to another dataset with a different min and max.
Under those circumstances, values below :math:`a` or above :math:`b` might appear. When clip is set to true, these values will be replaced by either :math:`a` or :math:`b`, so the normalizer never yields values outside the :math:`[a, b]` interval.

-- ``minmax`` An optional list of pairs (e.g., a list of lists, where each sublist has exactly two elements). When given, each i-th element is a pair where the first component gives the min for the i-th feature and the second one gives the max.

-- ``frenames`` An optional list of names. When given, the normalized features will use these names instead of the original ones given by ``fnames``.

-- ``report_path`` When given, a text report will be exported to the file pointed by the path.

-- ``update_and_preserve`` When true, the features that were not transformed by minmax normalization will be kept in the point cloud with the normalized features. When false, the values of non-transformed features might be missing.

**Output**

A transformed point cloud is generated such that its features are normalized to the :math:`[0, 1]` interval. The min, the max, and the range are exported through the logging system (see below for an example corresponding to the minmax normalization of some geometric features).

..
.. list-table::
    :widths: 31 23 23 23
    :header-rows: 1

    * - FEATURE
      - MIN
      - MAX
      - RANGE
    * - linearity_r0.05
      - 0.00028
      - 1.00000
      - 0.99972
    * - planarity_r0.05
      - 0.00000
      - 0.97660
      - 0.97660
    * - surface_variation_r0.05
      - 0.00000
      - 0.32316
      - 0.32316
    * - eigenentropy_r0.05
      - 0.00006
      - 0.01507
      - 0.01501
    * - omnivariance_r0.05
      - 0.00000
      - 0.00060
      - 0.00060
    * - verticality_r0.05
      - 0.00000
      - 1.00000
      - 1.00000
    * - anisotropy_r0.05
      - 0.06250
      - 1.00000
      - 0.93750
    * - linearity_r0.1
      - 0.00070
      - 1.00000
      - 0.99930
    * - planarity_r0.1
      - 0.00000
      - 0.95717
      - 0.95717
    * - surface_variation_r0.1
      - 0.00000
      - 0.32569
      - 0.32569
    * - eigenentropy_r0.1
      - 0.00028
      - 0.04501
      - 0.04473
    * - omnivariance_r0.1
      - 0.00000
      - 0.00241
      - 0.00241
    * - verticality_r0.1
      - 0.00000
      - 1.00000
      - 1.00000
    * - anisotropy_r0.1
      - 0.05643
      - 1.00000
      - 0.94357

.. _Standardizer:

Standardizer
--------------

The :class:`.Standardizer` maps the specified features so they are transformed to have zero mean (:math:`\mu = 0`) and unit standard deviation (:math:`\sigma = 1`). Alternatively, it is possible to only center (zero mean) or scale (unit standard deviation) the data.

A :class:`.Standardizer` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "feature_transformer": "Standardizer",
        "fnames": ["AUTO"],
        "center": true,
        "scale": true,
        "report_path": "standardization.log"
    }

The JSON above defines a :class:`.Standardizer` that centers and scales the data. Besides, it will export a text report with the feature-wise means and variances to the `standardization.log` file.

**Arguments**

-- ``fnames`` The names of the features to be standardized. If ``"AUTO"``, the features considered by the last component that operated over the features will be used.

-- ``center`` Whether to subtract the mean (true) or not (false).

-- ``scale`` Whether to divide by the standard deviation (true) or not (false).

-- ``report_path`` When given, a text report will be exported to the file pointed by the path.
-- ``update_and_preserve`` When true, the features that were not transformed through normalization (i.e., standardization) will be kept in the point cloud with the normalized features. When false, the values of non-transformed features might be missing.

**Output**

A transformed point cloud is generated such that its features are standardized. The mean and standard deviation are exported through the logging system (see below for an example corresponding to the standardization of some geometric features).

.. list-table::
    :widths: 40 30 30
    :header-rows: 1

    * - FEATURE
      - MEAN
      - STDEV.
    * - linearity_r0.05
      - 0.47259
      - 0.24131
    * - planarity_r0.05
      - 0.32929
      - 0.22213
    * - surface_variation_r0.05
      - 0.10697
      - 0.06362
    * - eigenentropy_r0.05
      - 0.00781
      - 0.00184
    * - omnivariance_r0.05
      - 0.00025
      - 0.00010
    * - verticality_r0.05
      - 0.55554
      - 0.30274
    * - anisotropy_r0.05
      - 0.80188
      - 0.14316
    * - linearity_r0.1
      - 0.49389
      - 0.24075
    * - planarity_r0.1
      - 0.29196
      - 0.21008
    * - surface_variation_r0.1
      - 0.11583
      - 0.06376
    * - eigenentropy_r0.1
      - 0.02512
      - 0.00533
    * - omnivariance_r0.1
      - 0.00100
      - 0.00035
    * - verticality_r0.1
      - 0.57260
      - 0.30121
    * - anisotropy_r0.1
      - 0.78585
      - 0.14570

Variance selector
--------------------

The variance selection is a simple strategy that consists of discarding all those features whose variance lies below a given threshold. While simple, the :class:`.VarianceSelector` has a notable strength: it can be computed without known classes because it is based only on the variance.

A :class:`.VarianceSelector` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "feature_transformer": "VarianceSelector",
        "fnames": ["AUTO"],
        "variance_threshold": 0.01,
        "report_path": "variance_selection.log"
    }

The JSON above defines a :class:`.VarianceSelector` that removes all features whose variance is below :math:`10^{-2}`. After that, it will export a text report describing the process to the `variance_selection.log` file.
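The selection logic itself is short. The following numpy sketch (illustrative names, not the library's API) keeps only the columns of a feature matrix whose variance reaches the threshold:

```python
import numpy as np

def variance_select(F, fnames, threshold=0.01):
    """Sketch of variance selection: discard features whose variance
    lies below the threshold. F is an (m, n_f) feature matrix and
    fnames the corresponding feature names."""
    variances = F.var(axis=0)      # per-feature (column-wise) variance
    keep = variances >= threshold
    return F[:, keep], [n for n, k in zip(fnames, keep) if k]
```

Since no class labels are involved, this selector can run on unlabeled point clouds, which is its main appeal compared to the label-driven selectors below.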
**Arguments**

-- ``fnames`` The names of the features to be transformed. If ``"AUTO"``, the features considered by the last component that operated over the features will be used.

-- ``variance_threshold`` Features whose variance is below this threshold will be discarded.

-- ``report_path`` When given, a text report will be exported to the file pointed by the path.

**Output**

A transformed point cloud is generated considering only the features that passed the variance threshold. On top of that, the feature-wise variances are exported through the logging system. The selected features are also explicitly listed (see below for an example corresponding to a variance selection on some geometric features).

.. list-table::
    :widths: 60 40
    :header-rows: 1

    * - FEATURE
      - VARIANCE
    * - omnivariance_r0.05
      - 0.000
    * - omnivariance_r0.1
      - 0.000
    * - eigenentropy_r0.05
      - 0.000
    * - eigenentropy_r0.1
      - 0.000
    * - surface_variation_r0.05
      - 0.004
    * - surface_variation_r0.1
      - 0.005
    * - anisotropy_r0.05
      - 0.020
    * - anisotropy_r0.1
      - 0.022
    * - linearity_r0.1
      - 0.051
    * - linearity_r0.05
      - 0.056
    * - planarity_r0.1
      - 0.066
    * - planarity_r0.05
      - 0.075
    * - verticality_r0.05
      - 0.092
    * - verticality_r0.1
      - 0.097

.. list-table::
    :widths: 100
    :header-rows: 1

    * - SELECTED FEATURES
    * - linearity_r0.05
    * - planarity_r0.05
    * - verticality_r0.05
    * - anisotropy_r0.05
    * - linearity_r0.1
    * - planarity_r0.1
    * - verticality_r0.1
    * - anisotropy_r0.1

K-Best selector
------------------

The :class:`.KBestSelector` computes the feature-wise ANOVA F-values and uses them to sort the features. Then, only the :math:`K` best features, i.e., those with the highest F-values, will be preserved.

A :class:`.KBestSelector` can be defined inside a pipeline using the JSON below:

..
.. code-block:: json

    {
        "feature_transformer": "KBestSelector",
        "fnames": ["AUTO"],
        "type": "classification",
        "k": 2,
        "report_path": "kbest_selection.log"
    }

The JSON above defines a :class:`.KBestSelector` that computes the ANOVA F-values assuming a classification task. Then, it discards all features but the two with the highest values. Finally, it writes a text report with the feature-wise F-values and the associated p-value for each test to the file `kbest_selection.log`.

**Arguments**

-- ``fnames`` The names of the features to be transformed. If ``"AUTO"``, the features considered by the last component that operated over the features will be used.

-- ``type`` Specifies the type of task: either ``"regression"`` or ``"classification"``. The F-value computation will be carried out to be adequate for one of those tasks. For regression tasks the target variable is expected to be numerical, while for classification tasks it is expected to be categorical.

-- ``k`` How many top features must be preserved.

-- ``report_path`` When given, a text report will be exported to the file pointed by the path.

**Output**

A transformed point cloud is generated considering only the K best features according to the F-values. Moreover, the feature-wise F-values and their associated p-value are exported through the logging system. The selected features are also explicitly listed (see below for an example corresponding to a K-best selection on some geometric features).

.. csv-table::
    :file: ../csv/kbest_selector_report.csv
    :widths: 40 30 30
    :header-rows: 1

.. list-table::
    :widths: 100
    :header-rows: 1

    * - SELECTED FEATURES
    * - surface_variation_r0.1
    * - anisotropy_r0.1

Percentile selector
----------------------

The :class:`.PercentileSelector` computes the ANOVA F-values and uses them to sort the features. Then, only a given percentage of the features are preserved. More concretely, the given percentage of the features with the highest F-values will be preserved.
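Both the K-best and percentile selectors rely on the same ranking idea. The numpy sketch below (illustrative, not the library's implementation) computes per-feature one-way ANOVA F-values for a classification task and keeps the top percentile:

```python
import numpy as np

def anova_f(F, y):
    """Per-feature one-way ANOVA F-values. F is an (m, n_f) feature
    matrix and y the class labels (one per point)."""
    classes = np.unique(y)
    m = F.shape[0]
    grand = F.mean(axis=0)
    # between-group and within-group sums of squares, per feature
    ss_between = sum(
        (y == c).sum() * (F[y == c].mean(axis=0) - grand) ** 2 for c in classes
    )
    ss_within = sum(
        ((F[y == c] - F[y == c].mean(axis=0)) ** 2).sum(axis=0) for c in classes
    )
    return (ss_between / (len(classes) - 1)) / (ss_within / (m - len(classes)))

def percentile_select(F, y, fnames, percentile=20):
    """Keep the given percentage of features with the highest F-values."""
    f_values = anova_f(F, y)
    k = max(1, int(round(len(fnames) * percentile / 100)))
    top = np.argsort(f_values)[::-1][:k]
    return sorted(fnames[i] for i in top)
```

The K-best variant is identical except that the number of preserved features is given directly as :math:`K` instead of as a percentage.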
A :class:`.PercentileSelector` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "feature_transformer": "PercentileSelector",
        "fnames": ["AUTO"],
        "type": "classification",
        "percentile": 20,
        "report_path": "percentile_selection.log"
    }

The JSON above defines a :class:`.PercentileSelector` that computes the ANOVA F-values assuming a classification task. Then, it preserves the :math:`20\%` of the features with the highest F-values. Finally, it writes a text report with the feature-wise F-values and the associated p-value for each test to the file `percentile_selection.log`.

**Arguments**

-- ``fnames`` The names of the features to be transformed. If ``"AUTO"``, the features considered by the last component that operated over the features will be used.

-- ``type`` Specifies the type of task: either ``"regression"`` or ``"classification"``. The F-value computation will be carried out to be adequate for one of those tasks. For regression tasks the target variable is expected to be numerical, while for classification tasks it is expected to be categorical.

-- ``percentile`` An integer from :math:`0` to :math:`100` that specifies the percentage of top features to preserve.

-- ``report_path`` When given, a text report will be exported to the file pointed by the path.

**Output**

A transformed point cloud is generated considering only the requested percentage of best features according to the F-values. Moreover, the feature-wise F-values and their p-value are exported through the logging system. The selected features are also explicitly listed (see below for an example corresponding to a percentile selection on some geometric features).

.. csv-table::
    :file: ../csv/percentile_selector_report.csv
    :widths: 40 30 30
    :header-rows: 1

.. list-table::
    :widths: 100
    :header-rows: 1

    * - SELECTED FEATURES
    * - surface_variation_r0.1
    * - verticality_r0.1
    * - anisotropy_r0.1

..
.. _Explicit selector:

Explicit selector
---------------------

The :class:`.ExplicitSelector` preserves or discards the requested features, thus effectively updating the point cloud in the :ref:`pipeline's state ` (see :class:`.SimplePipelineState`). This feature transformation can be especially useful to release memory resources by discarding features that are not going to be used by other components later on.

An :class:`.ExplicitSelector` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "feature_transformer": "ExplicitSelector",
        "fnames": [
            "floor_distance_r50_0_sep0_35",
            "scan_angle_rank_mean_r5_0",
            "verticality_r25_0"
        ],
        "preserve": true
    }

The JSON above defines an :class:`.ExplicitSelector` that preserves the floor distance, mean scan angle, and verticality features. In doing so, all the other features are discarded. After calling this selector, only the preserved features will be available through the pipeline's state.

**Arguments**

-- ``fnames`` The names of the features to be either preserved or discarded.

-- ``preserve`` The boolean flag that governs whether the given features must be preserved (``true``) or discarded (``false``).

**Output**

A transformed point cloud is generated considering only the preserved features.

.. _PCA transformer:

PCA transformer
------------------

The :class:`.PCATransformer` can be used to compute a dimensionality reduction of the feature space. Let :math:`\pmb{F} \in \mathbb{R}^{m \times n_f}` be a matrix of features such that each row :math:`\pmb{f}_{i} \in \mathbb{R}^{n_f}` represents the :math:`n_f` features for a given point :math:`i`. After applying the PCA transformer, a new matrix of features will be obtained :math:`\pmb{Y} \in \mathbb{R}^{m \times n_y}` such that :math:`n_y \leq n_f`. This dimensionality reduction can help reduce the number of input features for a machine learning model and, consequently, the execution time.
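The reduction can be sketched in a few lines of numpy. This is only an illustration of the underlying linear algebra under the usual PCA assumptions (centered features, SVD-based basis), not the component's actual implementation:

```python
import numpy as np

def pca_reduce(F, n_y):
    """Sketch of the PCA reduction: center the features, take the SVD,
    and project onto the first n_y right singular vectors (the basis).
    F is an (m, n_f) feature matrix; returns an (m, n_y) matrix."""
    Fc = F - F.mean(axis=0)                          # center each feature
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    B = Vt[:n_y].T                                   # (n_f, n_y) basis matrix
    return Fc @ B                                    # reduced features
```

When the input features are strongly correlated, a small ``n_y`` already captures most of the variance, which is exactly the situation where this transformer pays off.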
To understand this transformation, simply note the singular value decomposition of :math:`\pmb{F} = \pmb{U} \pmb{\Sigma} \pmb{V}^\intercal`. The singular vectors in :math:`\pmb{V}^\intercal` can be ordered in descending order, from highest to lowest singular value, where the singular values are given by the diagonal of :math:`\pmb{\Sigma}`. Alternatively, the basis matrix defined by the singular vectors can be approximated with the eigenvectors of the centered covariance matrix. From now on, no matter how it was computed, we will call this basis matrix :math:`\pmb{B}`. We also assume that we always have enough linearly independent features for the analysis to be feasible. When all the basis vectors are considered, it will be that :math:`\pmb{B} \in \mathbb{R}^{n_f \times n_f}`, i.e., :math:`n_y=n_f`. In this case we are expressing potentially correlated features in a new basis where each feature aims to be orthogonal w.r.t. the others (principal components). When :math:`\pmb{B} \in \mathbb{R}^{n_f \times n_y}` for :math:`n_y < n_f`, the features are projected onto the first :math:`n_y` principal components, effectively reducing the dimensionality of the feature space (see the figure below for an example with :math:`n_y = 1`).

.. figure:: ../img/pca_transformer_comparison.png
    :scale: 50%
    :alt: Figure representing three different features that have been reduced to a single one using PCA.

    The anisotropy, surface variation, and verticality computed for spherical neighborhoods with :math:`10\,\mathrm{cm}` radius reduced to a single feature through PCA.

Point transformers
====================

Some point transformers like :class:`.ReceptiveField` or :class:`.DataAugmentor` and their derived classes (e.g., :class:`.ReceptiveFieldFPS`, :class:`.ReceptiveFieldGS`, :class:`.ReceptiveFieldHierarchicalFPS`, :class:`.SimpleDataAugmentor`) are used in the context of deep learning models. Thus, they are not available as independent components for pipelines. Other point transformers, typically those that extend :class:`.PointTransformer`, can be used as components in pipelines and are detailed here.
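Several of these components use farthest point sampling (FPS) as a support strategy to pick well-spread points. As a rough orientation, the greedy idea behind FPS can be sketched as follows (a naive :math:`O(km)` version with illustrative names, not the library's implementation):

```python
import numpy as np

def farthest_point_sampling(X, num_points):
    """Naive FPS: start from the first point and greedily add the point
    farthest from the already selected set. X is an (m, d) matrix."""
    selected = [0]
    dist = np.linalg.norm(X - X[0], axis=1)  # distance to the selected set
    while len(selected) < num_points:
        nxt = int(np.argmax(dist))           # farthest remaining point
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return X[selected]
```

Production implementations typically add spatial indexing or approximation heuristics to make this tractable on large point clouds.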
Point cloud sampler
----------------------

The :class:`.PointCloudSampler` generates a new point cloud by sampling from the current one (i.e., the point cloud in the pipeline's state, see :ref:`documentation on pipelines `).

A :class:`.PointCloudSampler` can be defined inside a pipeline using the JSON below:

.. code-block:: json

    {
        "point_transformer": "PointCloudSampler",
        "neighborhood_sampling": {
            "support_conditions": [
                {
                    "value_name": "HighCA_rel",
                    "condition_type": "greater_than_or_equal_to",
                    "value_target": 0.667,
                    "action": "preserve"
                }
            ],
            "support_min_distance": 1.25,
            "support_strategy": "fps",
            "support_strategy_num_points": 100000,
            "support_strategy_fast": true,
            "support_chunk_size": 50000,
            "center_on_pcloud": false,
            "neighborhood": {
                "type": "sphere",
                "radius": 2.5,
                "separation_factor": 0
            },
            "neighborhoods_per_iter": 10000,
            "nthreads": -1
        }
    }

The JSON above defines a :class:`.PointCloudSampler` that will generate a point cloud considering spherical neighborhoods with radius :math:`2.5\,\mathrm{m}` centered on those points in the current point cloud with a relative frequency of high class ambiguity neighbors greater than or equal to :math:`0.667`. A point is said to have a high class ambiguity if its class ambiguity is greater than or equal to :math:`0.667`. When sampling, among center points that are closer than :math:`1.25\,\mathrm{m}` to each other, only one will be considered.

**Arguments**

-- ``fnames`` The names of the features that must be included in the sampled point cloud. If ``null``, then all the available features will be included.

-- ``neighborhood_sampling`` When ``null``, no neighborhood sampling will be applied. If given, it must be a key-word specification of the desired neighborhood sampling strategy, as described below:

-- ``support_conditions`` A list with the conditions that must be satisfied by any center point whose neighborhood could be included in the generated point cloud (provided it satisfies the other criteria).
The specification for each condition is similar to the one described in the :ref:`conditions for advanced input documentation `.

-- ``support_min_distance`` When there are several center points closer to each other than this distance, only one will be considered.

-- ``support_strategy`` If the support points are not calculated with a null separation factor, then the support strategy will be used to select the initial candidates. See the :ref:`receptive fields documentation ` for further details because the specification works in the same way.

-- ``support_strategy_num_points`` If the support points are not calculated with a null separation factor, and the ``"fps"`` support strategy is used, then this number of points will govern the number of initially selected candidates. See the :ref:`receptive fields documentation ` for further details because the specification works in the same way.

-- ``support_strategy_fast`` If the support points are not calculated with a null separation factor, fast heuristics can be applied to speed up the computations. See the :ref:`FPS receptive field documentation ` for further details.

-- ``support_chunk_size`` When given and distinct from zero, it will define the chunk size. The chunk size will be used to group certain tasks into chunks with a max size to prevent memory exhaustion.

-- ``center_on_pcloud`` When ``true``, the neighborhoods will be centered on a point from the input point cloud, typically by finding the nearest neighbor of a support point in the input point cloud. In general, it is recommended to set it to ``false`` for most use cases of the :class:`.PointCloudSampler`.

-- ``neighborhoods_per_iter`` When doing multiple iterations to compute the neighborhoods, the overlapping might yield many repeated points. However, there is no need to store repeated elements in memory (which can be prohibitive).
When the number of neighborhoods per iter is set to be greater than zero, only this number of neighborhoods will be computed at once, thus controlling the required memory.

-- ``nthreads`` The number of threads involved in parallel computations, if any.

-- ``neighborhood`` The definition of the neighborhood. See :ref:`the FPS neighborhood specification ` for further details because it follows the same format.

-- ``type`` Supported neighborhood types are: ``"sphere"``, ``"cylinder"``, ``"rectangular3d"``, and ``"rectangular2d"``.

-- ``radius`` A decimal number governing the size of the neighborhood.

-- ``separation_factor`` A decimal number governing the separation between neighborhoods. It is recommended to set it to zero so the custom support extraction strategy of the :class:`.PointCloudSampler` is used.

**Output**

A point cloud is generated by sampling spherical neighborhoods from high class ambiguity regions in the original point cloud. The class ambiguity has been measured for a KPConv-like neural network model. The point cloud is taken from the `Architectural Cultural Heritage (ArCH) dataset `_ .

.. figure:: ../img/arch_highCA_sampling_fig.png
    :scale: 50%
    :alt: Figure representing the spherical neighborhoods sampled from high class ambiguity regions.

    The spherical neighborhoods sampled from high class ambiguity regions (those inside the orange bounding box). The points are colored by class ambiguity.

.. _Simple structure smootherPP:

Simple structure smoother++
-------------------------------

The :class:`.SimpleStructureSmootherPP` generates a new point cloud by smoothing the coordinates of each point considering its local neighborhood.

A :class:`.SimpleStructureSmootherPP` can be defined inside a pipeline using the JSON below:

..
.. code-block:: json

    {
        "point_transformer": "SimpleStructureSmootherPP",
        "neighborhood": {
            "type": "sphere",
            "radius": 20,
            "k": 1024
        },
        "strategy": {
            "type": "idw",
            "parameter": 2,
            "min_distance": 4
        },
        "correction": {
            "K": 0,
            "sigma": 3.14159265358979323846264338327950288419716939937510
        },
        "nthreads": -1
    }

The JSON above defines a :class:`.SimpleStructureSmootherPP` applied on spherical neighborhoods with :math:`20\;\mathrm{mm}` radius using inverse distance weighting with :math:`p=2` and :math:`\epsilon=4`. It does not use Fibonacci orthodromic correction at all.

**Arguments**

-- ``neighborhood`` The definition of the neighborhood.

-- ``type`` The type of neighborhood. It can be ``"knn"`` (3D k-nearest neighbors), ``"knn2d"`` (2D k-nearest neighbors considering the :math:`(x, y)` coordinates only), ``"sphere"`` (spherical neighborhood), or ``"cylinder"`` (cylindrical neighborhood).

-- ``radius`` The radius for the sphere or the disk of the cylinder.

-- ``k`` The number of k-nearest neighbors.

-- ``strategy`` The specification of the smoothing strategy.

-- ``type`` The smoothing strategy. It can be either ``"mean"``, ``"idw"`` (Inverse Distance Weighting), or ``"rbf"`` (Radial Basis Function).

-- ``parameter`` The :math:`p \in \mathbb{R}` parameter for the IDW exponent or the Gaussian RBF bandwidth.

-- ``min_distance`` The :math:`\epsilon \in \mathbb{R}` parameter governing the min distance for IDW smoothing. Distances smaller than this will be replaced.

-- ``correction`` The configuration of the Fibonacci orthodromic correction.

-- ``K`` The number of points in the spherical Fibonacci support. The greater the better, but it will lead to higher execution times (i.e., it increases the computational cost).

-- ``sigma`` The hard cut threshold for the Fibonacci orthodromic correction between a point in a centered neighborhood :math:`\pmb{x}_{j*} \in \mathbb{R}^{3}` and a point from the Fibonacci support :math:`\pmb{q}_{k*} \in \mathbb{R}^{3}`.

..
.. math::

    \omega(\pmb{x}_{j*}, \pmb{q}_{k*}) = \max \left\{ 0, \sigma - \arccos\left(\dfrac{ \langle\pmb{x}_{j*}, \pmb{q}_{k*}\rangle }{ \lVert\pmb{x}_{j*}\rVert } \right) \right\}

-- ``nthreads`` The number of threads to be used for parallel computations (-1 means as many threads as available cores).

**Output**

Smoothed versions of a point cloud obtained using the JSON above with different parameters. The input data comes from the `Head and Neck Organ-at-Risk CT Segmentation Dataset (HaN-Seg) `_ .

.. figure:: ../img/hanseg_structure_smoothing.png
    :scale: 50
    :alt: Figure representing smoothed versions of a medical 3D point cloud representing the head and neck regions.

    The smoothed versions of a medical 3D point cloud representing the head and neck regions. Each color represents a distinct organ.
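The core of the ``"idw"`` strategy (without the Fibonacci orthodromic correction) can be pictured with a short numpy sketch. This is an illustration under simplifying assumptions, not the library's exact algorithm; the ``min_distance`` clamp mirrors the :math:`\epsilon` parameter above:

```python
import numpy as np

def idw_smooth(X, radius=2.0, p=2, min_distance=1e-3):
    """Sketch of IDW structure smoothing: each point's coordinates are
    replaced by an inverse-distance-weighted mean of its spherical
    neighborhood. Distances below min_distance are clamped to it."""
    out = np.empty_like(X)
    for i, x in enumerate(X):
        d = np.linalg.norm(X - x, axis=1)
        mask = d <= radius                     # spherical neighborhood
        w = 1.0 / np.maximum(d[mask], min_distance) ** p
        out[i] = (w[:, None] * X[mask]).sum(axis=0) / w.sum()
    return out
```

A brute-force pass like this is quadratic in the number of points; the actual component relies on proper neighborhood queries and multithreading (``nthreads``) to scale.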