pipeline package
Subpackages
- pipeline.handle package
- Submodules
- pipeline.handle.pcloud_concat_handler module
- PcloudConcatHandler
- PcloudConcatHandler.__init__()
- PcloudConcatHandler.handle_input_concats()
- PcloudConcatHandler.handle_input_concat()
- PcloudConcatHandler.handle_structure_transform()
- PcloudConcatHandler.handle_first_concat()
- PcloudConcatHandler.handle_mindist_decimation()
- PcloudConcatHandler.handle_fps_transformation()
- PcloudConcatHandler.handle_classwise_sampling()
- pipeline.handle.pipeline_decoration_handler module
- Module contents
- pipeline.pps package
- pipeline.state package
Submodules
pipeline.pipeline module
- exception pipeline.pipeline.PipelineException(message='')
Bases:
VL3DException
- Author:
Alberto M. Esmoris Pena
Class for exceptions related to pipeline components. See VL3DException.
- __init__(message='')
- class pipeline.pipeline.Pipeline(**kwargs)
Bases:
object
- Author:
Alberto M. Esmoris Pena
Abstract class providing the interface for any pipeline and a common baseline implementation.
- Variables:
in_pcloud (str or list) – Either a string or a list of strings representing paths to input point clouds.
in_pcloud_concat (list) – An alternative to in_pcloud that concatenates many point clouds and supports conditional filters. It must be a list of dictionaries each specifying an input point cloud and potentially a list of conditions.
out_pcloud (str or list) – Either a string or a list of strings representing paths to output point clouds. Any output string that ends with an asterisk “*” will be used as a prefix for the outputs specified in the components of the pipeline.
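The variables above admit a simple illustration. Below is a hypothetical set of keyword arguments for a Pipeline (the keys mirror the documented attributes; the file paths are invented for the example):

```python
# Hypothetical Pipeline attributes; the paths are illustrative only.
pipeline_spec = {
    # A single input point cloud (a list of paths is also accepted).
    "in_pcloud": "data/input_cloud.laz",
    # Ending the output string with "*" makes it a prefix for the
    # outputs written by the pipeline's components.
    "out_pcloud": "out/run1_*",
}

# Pipeline(**pipeline_spec) would receive these as kwargs.
print(pipeline_spec["out_pcloud"].endswith("*"))  # True
```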
- __init__(**kwargs)
Handles the root-level (most basic) initialization of any pipeline.
- Parameters:
kwargs – The attributes for the Pipeline.
- abstractmethod run()
Run the pipeline.
- Returns:
Nothing.
- to_predictive_pipeline(**kwargs)
Transforms the current pipeline to a predictive pipeline, if possible. See PredictivePipeline.
- Returns:
A predictive pipeline wrapping this pipeline and providing a predictive strategy.
- Return type:
PredictivePipeline
- is_using_deep_learning()
Check whether the pipeline uses deep learning or not.
By default, pipelines do not support deep learning. Any pipeline that supports deep learning models must explicitly overload this method to return True.
- Returns:
True if the pipeline uses deep learning, False otherwise.
- Return type:
bool
- write_deep_learning_model(path)
Write the deep learning model used in the pipeline to disk.
- Parameters:
path (str) – Path where the deep learning model must be written.
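As a sketch of the interface above, here is a minimal toy re-creation (not the actual VL3D classes) showing how run() is abstract and how a subclass must explicitly overload is_using_deep_learning():

```python
from abc import ABC, abstractmethod


class MiniPipeline(ABC):
    """Toy stand-in for the Pipeline interface (illustration only)."""

    def __init__(self, **kwargs):
        # Root-level initialization: store the documented attributes.
        self.in_pcloud = kwargs.get("in_pcloud")
        self.out_pcloud = kwargs.get("out_pcloud")

    @abstractmethod
    def run(self):
        """Run the pipeline."""

    def is_using_deep_learning(self):
        # By default, pipelines do not support deep learning.
        return False


class MiniDeepPipeline(MiniPipeline):
    """A pipeline that declares it is based on a deep learning model."""

    def run(self):
        pass  # the actual processing would happen here

    def is_using_deep_learning(self):
        return True  # must be overloaded explicitly, as documented
```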
pipeline.pipeline_executor module
- exception pipeline.pipeline_executor.PipelineExecutorException(message='')
Bases:
VL3DException
- Author:
Alberto M. Esmoris Pena
Class for exceptions related to the execution of pipelines. See VL3DException.
- __init__(message='')
- class pipeline.pipeline_executor.PipelineExecutor(maker, **kwargs)
Bases:
object
- Author:
Alberto M. Esmoris Pena
Class to handle the execution of components in the context of a pipeline.
- Variables:
maker (Pipeline) – The pipeline that instantiated the executor.
out_prefix (str) – Optional attribute (can be None) that specifies the output prefix for any component that needs to append it to its output paths.
pre_fnames (list) – Cached feature names before preprocessing. Can be used to merge consecutive miners.
- __init__(maker, **kwargs)
Handle the root-level (most basic) initialization of any pipeline executor.
- Parameters:
maker (Pipeline) – The pipeline that instantiated the executor.
kwargs – The attributes for the PipelineExecutor.
- __call__(state, comp, comp_id, comps)
Execute the component of the pipeline associated with the given identifier.
By default, comp_id is expected to be an integer and comps a list such that comps[comp_id] = comp.
See
pipeline_executor.PipelineExecutor.pre_process(), pipeline_executor.PipelineExecutor.process(), and pipeline_executor.PipelineExecutor.post_process().
- Parameters:
state – The pipeline’s state. See PipelineState.
comp – The component to be executed.
comp_id – The identifier of the component. Typically, it should be possible to use it to retrieve the component from comps.
comps – The components composing the pipeline.
- Returns:
Nothing.
- load_input(state, **kwargs)
Load the input point cloud in the pipeline’s state.
- Parameters:
state (PipelineState) – The pipeline’s state.
kwargs – The keyword arguments. They can be used to specify the path to the input point cloud through the “in_pcloud” key.
- Returns:
Nothing but the state is updated.
- pre_process(state, comp, comp_id, comps)
Handles the operations before the execution of the main logic, i.e., before running the logic of the current component.
See
pipeline_executor.PipelineExecutor.__call__().
- process(state, comp, comp_id, comps)
Handles the execution of the main logic, i.e., running the current component and updating the pipeline state consequently.
See
pipeline_executor.PipelineExecutor.__call__().
- post_process(state, comp, comp_id, comps)
Handles the operations after the execution of the main logic, i.e., after running the logic of the current component.
See
pipeline_executor.PipelineExecutor.__call__().
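The pre_process/process/post_process hooks above are invoked in a fixed order by __call__. A minimal toy sketch of that control flow (illustration only; the real executor also handles output prefixes, cached feature names, and input loading):

```python
class MiniExecutor:
    """Toy executor showing the __call__ hook order (illustration only)."""

    def __init__(self):
        self.trace = []  # records which hooks ran, and in what order

    def __call__(self, state, comp, comp_id, comps):
        # Documented invariant: comps[comp_id] is the component itself.
        assert comps[comp_id] is comp
        self.pre_process(state, comp, comp_id, comps)
        self.process(state, comp, comp_id, comps)
        self.post_process(state, comp, comp_id, comps)

    def pre_process(self, state, comp, comp_id, comps):
        # Operations before the main logic of the current component.
        self.trace.append("pre_process")

    def process(self, state, comp, comp_id, comps):
        # Run the component and update the pipeline state consequently.
        self.trace.append("process")
        state["last_executed"] = comp_id

    def post_process(self, state, comp, comp_id, comps):
        # Operations after the main logic of the current component.
        self.trace.append("post_process")


state = {}
comps = ["miner", "imputer"]
executor = MiniExecutor()
executor(state, comps[0], 0, comps)
```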
pipeline.predictive_pipeline module
- class pipeline.predictive_pipeline.PredictivePipeline(pipeline, pps, **kwargs)
Bases:
Pipeline
- Author:
Alberto M. Esmoris Pena
A predictive pipeline is any pipeline that can be used as an estimator.
In other words, the predictive pipeline can be seen as a map \(f\) from a given input \(x\) that yields the corresponding estimations, aiming to approximate as much as possible the actual values \(y\).
More formally:
\[f(x) \approx y\]
However, the predictive pipeline itself is not limited to the predictive model \(\hat{y}\). It also accounts for other components such as data mining, imputation, and feature transformation.
For instance, let \(m_1\) represent a data miner, \(m_2\) another data miner, and \(i\) represent a data imputer. For this case, the composition of these components with the estimator \(\hat{y}\) would lead to a sequential predictive pipeline that can be described as follows:
\[f(x) = (\hat{y} \circ i \circ m_2 \circ m_1)(x)\]
- Variables:
pipeline (.Pipeline) – The wrapped pipeline. It must be possible to use it to compute predictions. For example, a pipeline made of data mining components only will fail.
pps (.PipelinePredictiveStrategy) – The pipeline’s predictive strategy. It must be compatible with the wrapped pipeline. The strategy defines how to use the pipeline to make predictions.
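The composition above can be made concrete with toy stand-in functions (purely illustrative; the names mirror the symbols in the formula, but the real components operate on point clouds, not on plain lists of numbers):

```python
# Toy stand-ins for the components in f(x) = (y_hat ∘ i ∘ m2 ∘ m1)(x).
def m1(x):      # first data miner: mine an extra feature (the sum)
    return x + [sum(x)]

def m2(x):      # second data miner: mine another feature (the max)
    return x + [max(x)]

def i(x):       # imputer: replace negative values by zero
    return [v if v >= 0 else 0 for v in x]

def y_hat(x):   # estimator: a trivial "prediction" (the mean)
    return sum(x) / len(x)

def f(x):
    # The composition is applied right to left, as in the formula.
    return y_hat(i(m2(m1(x))))
```

For example, f([1, 2, 3]) mines [1, 2, 3, 6], then [1, 2, 3, 6, 6], imputes nothing (no negatives), and finally predicts the mean 3.6.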
- __init__(pipeline, pps, **kwargs)
Handles the root-level (most basic) initialization of any pipeline.
- Parameters:
kwargs – The attributes for the PredictivePipeline.
- predict(pcloud, out_prefix=None)
The predict method computes the predictions from the wrapped pipeline.
- Parameters:
pcloud (PointCloud) – The point cloud to be predicted.
out_prefix – Optional argument to update the output path of the predictive pipeline strategy, which is used as the output prefix for its components.
- Returns:
The predictions.
- Return type:
np.ndarray
- get_first_model()
Obtain the first model that appears in the predictive pipeline components.
- Returns:
The first found model in the predictive pipeline. None if no model was found.
- Return type:
ModelOp or None
- is_using_deep_learning()
See
pipeline.Pipeline.is_using_deep_learning().
- write_deep_learning_model(path)
See
pipeline.Pipeline.write_deep_learning_model().
pipeline.sequential_pipeline module
- class pipeline.sequential_pipeline.SequentialPipeline(**kwargs)
Bases:
Pipeline
- Author:
Alberto M. Esmoris Pena
Sequential pipeline (no loops, no recursion). See Pipeline.
- __init__(**kwargs)
Initialize an instance of SequentialPipeline. A sequential pipeline executes its components in the order they are given. See the parent class Pipeline.
- Parameters:
kwargs – The attributes for the SequentialPipeline.
- Variables:
sequence (list) – The sequence of components defining the SequentialPipeline.
- run()
Run the sequential pipeline.
See sequential_pipeline.SequentialPipeline.run_for_in_pcloud() and sequential_pipeline.SequentialPipeline.run_for_in_pcloud_concat().
- Returns:
Nothing.
- run_for_in_pcloud()
Run the sequential pipeline considering in_pcloud as the input specification.
See sequential_pipeline.SequentialPipeline.run().
- Returns:
Nothing.
- run_for_in_pcloud_concat()
Run the sequential pipeline considering in_pcloud_concat as the input specification.
See sequential_pipeline.SequentialPipeline.run().
- Returns:
Nothing.
- run_case(in_pcloud, out_pcloud=None)
Run the sequential pipeline for a particular input point cloud.
- Parameters:
in_pcloud – The input point cloud for this particular case.
out_pcloud – Optionally, the output path or prefix.
- Returns:
Nothing.
- to_predictive_pipeline(**kwargs)
See
Pipeline and pipeline.Pipeline.to_predictive_pipeline().
- is_using_deep_learning()
A sequential pipeline is said to use deep learning if it contains at least one ModelOp that is based on a deep learning model.
See
pipeline.Pipeline.is_using_deep_learning().
- write_deep_learning_model(path)
Write the deep learning models contained in the sequential pipeline.
See
pipeline.Pipeline.write_deep_learning_model().
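The sequential behaviour documented above (each component runs in the given order on a shared state, one case per input point cloud) might be sketched as follows. This is a toy re-creation, not the library's implementation; run_case and sequence are the only names taken from the documentation:

```python
class MiniSequentialPipeline:
    """Toy sequential pipeline (illustration only): executes its
    components strictly in the order they are given."""

    def __init__(self, sequence):
        self.sequence = sequence   # the components, in order

    def run_case(self, in_pcloud, out_pcloud=None):
        # One shared state flows through every component.
        state = {"in_pcloud": in_pcloud, "log": []}
        for comp_id, comp in enumerate(self.sequence):
            comp(state, comp_id)   # each component updates the state
        return state


pipe = MiniSequentialPipeline([
    lambda state, i: state["log"].append(f"component {i}"),
    lambda state, i: state["log"].append(f"component {i}"),
])
result = pipe.run_case("cloud.laz")
```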
Module contents
- author:
Alberto M. Esmoris Pena
The pipeline package contains the logic to handle pipelines made of many processing stages.